Sample records for parity-check ldpc coded

  1. Low complexity Reed-Solomon-based low-density parity-check design for software defined optical transmission system based on adaptive puncturing decoding algorithm

    NASA Astrophysics Data System (ADS)

    Pan, Xiaolong; Liu, Bo; Zheng, Jianglong; Tian, Qinghua

    2016-08-01

    We propose and demonstrate a low-complexity Reed-Solomon-based low-density parity-check (RS-LDPC) code with an adaptive puncturing decoding algorithm for elastic optical transmission systems. Part of the received codes and the corresponding columns of the parity-check matrix can be punctured to reduce the calculation complexity, by adapting the parity-check matrix during the decoding process. The results show that the complexity of the proposed decoding algorithm is reduced by 30% compared with the regular RS-LDPC system. The optimized code rate of the RS-LDPC code can be obtained after five iterations.

  2. Low-Density Parity-Check (LDPC) Codes Constructed from Protographs

    NASA Astrophysics Data System (ADS)

    Thorpe, J.

    2003-08-01

    We introduce a new class of low-density parity-check (LDPC) codes constructed from a template called a protograph. The protograph serves as a blueprint for constructing LDPC codes of arbitrary size whose performance can be predicted by analyzing the protograph. We apply standard density evolution techniques to predict the performance of large protograph codes. Finally, we use a randomized search algorithm to find good protographs.
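
    The "copy-and-permute" (lifting) operation at the heart of the protograph construction is easy to sketch. Below is a minimal Python illustration, assuming a toy two-check, three-variable base matrix and random circulant shifts; the paper's randomized search would instead optimize the protograph and permutations.

```python
import numpy as np

def lift_protograph(base, Z, rng=np.random.default_rng(0)):
    """Expand a protograph base matrix into a binary parity-check matrix.

    base[i][j] counts the parallel edges between check i and variable j;
    each edge becomes a random Z x Z circulant permutation, so the lifted
    code keeps the protograph's degree profile ("copy-and-permute").
    """
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=np.uint8)
    I = np.eye(Z, dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            block = np.zeros((Z, Z), dtype=np.uint8)
            for _ in range(base[i, j]):   # one permutation per parallel edge
                # (a real design would draw distinct, optimized shifts)
                block ^= np.roll(I, int(rng.integers(Z)), axis=1)
            H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = block
    return H

# Hypothetical base matrix, not one of the paper's protographs.
base = np.array([[1, 1, 1],
                 [1, 2, 1]])
H = lift_protograph(base, Z=8)
print(H.shape)   # (16, 24): a larger code with the protograph's structure
```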

  3. Throughput Optimization Via Adaptive MIMO Communications

    DTIC Science & Technology

    2006-05-30

    End-to-end Matlab packet simulation platform. * Low density parity check code (LDPCC). * Field trials with Silvus DSP MIMO testbed. * High mobility... incorporate advanced LDPC (low density parity check) codes. Realizing that the power of LDPC codes comes at the price of decoder complexity, we also... Channel Coding: Binary Convolutional Code or LDPC; Packet Length: 0 to 2^16-1 bytes; Coding Rate: 1/2, 2/3, 3/4, 5/6; MIMO Channel Training Length: 0 - 4 symbols

  4. Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding

    NASA Astrophysics Data System (ADS)

    Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.

    2016-03-01

    In this paper, we investigate the joint design of quasi-cyclic low-density parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles of both type I and type II are eliminated. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation effectively combines the cooperation gain and the channel coding gain, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing the jointly designed QC-LDPC codes is better than that of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
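
    The girth-4 cycles discussed above can be detected directly from the exponent matrix: for circulant-based QC-LDPC codes, a length-4 cycle exists exactly when the alternating sum of shifts around a 2x2 submatrix vanishes modulo the circulant size. A minimal sketch of that test, using a hypothetical exponent matrix rather than the codes designed in the paper:

```python
import numpy as np
from itertools import combinations

def has_girth4(E, Z):
    """True if the QC-LDPC code with exponent matrix E (circulant shifts,
    -1 for an all-zero block) and lifting size Z has a length-4 cycle:
    E[a,c] - E[a,d] + E[b,d] - E[b,c] == 0 (mod Z) for some 2x2 submatrix.
    """
    m, n = E.shape
    for a, b in combinations(range(m), 2):
        for c, d in combinations(range(n), 2):
            if min(E[a, c], E[a, d], E[b, c], E[b, d]) < 0:
                continue   # a zero block breaks the candidate cycle
            if (E[a, c] - E[a, d] + E[b, d] - E[b, c]) % Z == 0:
                return True
    return False

# Hypothetical 2x3 exponent matrix with lifting size 7.
E = np.array([[0, 1, 2],
              [0, 2, 4]])
print(has_girth4(E, Z=7))   # False: no 2x2 alternating sum is 0 mod 7
```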

  5. Construction of type-II QC-LDPC codes with fast encoding based on perfect cyclic difference sets

    NASA Astrophysics Data System (ADS)

    Li, Ling-xiang; Li, Hai-bing; Li, Ji-bi; Jiang, Hua

    2017-09-01

    To address the problems that the encoding complexity of quasi-cyclic low-density parity-check (QC-LDPC) codes is high and that the minimum distance is not large enough, which degrades the error-correction performance, new irregular type-II QC-LDPC codes based on perfect cyclic difference sets (CDSs) are constructed. The parity-check matrices of these type-II QC-LDPC codes consist of zero matrices with weight 0, circulant permutation matrices (CPMs) with weight 1, and circulant matrices with weight 2 (W2CMs). Introducing W2CMs into the parity-check matrices makes it possible to achieve a larger minimum distance, which improves the error-correction performance of the codes. The Tanner graphs of these codes contain no cycles of length 4, so they have excellent decoding convergence characteristics. In addition, because the parity-check matrices have a quasi-dual-diagonal structure, a fast encoding algorithm can reduce the encoding complexity effectively. Simulation results show that the new type-II QC-LDPC codes achieve excellent error-correction performance and exhibit no error floor over the additive white Gaussian noise (AWGN) channel with sum-product algorithm (SPA) iterative decoding.

  6. Low-density parity-check codes for volume holographic memory systems.

    PubMed

    Pishro-Nik, Hossein; Rahnavard, Nazanin; Ha, Jeongseok; Fekri, Faramarz; Adibi, Ali

    2003-02-10

    We investigate the application of low-density parity-check (LDPC) codes in volume holographic memory (VHM) systems. We show that a carefully designed irregular LDPC code has a very good performance in VHM systems. We optimize high-rate LDPC codes for the nonuniform error pattern in holographic memories to reduce the bit error rate extensively. The prior knowledge of noise distribution is used for designing as well as decoding the LDPC codes. We show that these codes have a superior performance to that of Reed-Solomon (RS) codes and regular LDPC counterparts. Our simulation shows that we can increase the maximum storage capacity of holographic memories by more than 50 percent if we use irregular LDPC codes with soft-decision decoding instead of conventionally employed RS codes with hard-decision decoding. The performance of these LDPC codes is close to the information theoretic capacity.

  7. Crosstalk eliminating and low-density parity-check codes for photochromic dual-wavelength storage

    NASA Astrophysics Data System (ADS)

    Wang, Meicong; Xiong, Jianping; Jian, Jiqi; Jia, Huibo

    2005-01-01

    Multi-wavelength storage is an approach to increasing memory density, with the problem of crosstalk to be dealt with. Building on investigations of LDPC codes in optical data storage, we apply Low Density Parity Check (LDPC) codes as error-correcting codes in photochromic dual-wavelength optical storage. A suitable method is applied to reduce the crosstalk, and simulation results show that this operation improves the Bit Error Rate (BER) performance. At the same time, we conclude that LDPC codes outperform RS codes on the crosstalk channel.

  8. A novel construction scheme of QC-LDPC codes based on the RU algorithm for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-03-01

    A novel lower-complexity construction scheme of quasi-cyclic low-density parity-check (QC-LDPC) codes for optical transmission systems is proposed based on the structure of the parity-check matrix for the Richardson-Urbanke (RU) algorithm. Furthermore, a novel irregular QC-LDPC(4288, 4020) code with a high code rate of 0.937 is constructed by this scheme. The simulation analyses show that the net coding gain (NCG) of the novel irregular QC-LDPC(4288, 4020) code is 2.08 dB, 1.25 dB and 0.29 dB higher than those of the classic RS(255, 239) code, the LDPC(32640, 30592) code and the irregular QC-LDPC(3843, 3603) code, respectively, at a bit error rate (BER) of 10^-6. The irregular QC-LDPC(4288, 4020) code also has lower encoding/decoding complexity than the LDPC(32640, 30592) code and the irregular QC-LDPC(3843, 3603) code, making it well suited to the requirements of high-speed optical transmission systems.

  9. Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More

    NASA Technical Reports Server (NTRS)

    Kou, Yu; Lin, Shu; Fossorier, Marc

    1999-01-01

    Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported, and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high-rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite-geometry LDPC codes achieve very good error performance, with very low error floors, under either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit-flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.

  10. Spatially coupled low-density parity-check error correction for holographic data storage

    NASA Astrophysics Data System (ADS)

    Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro

    2017-09-01

    The spatially coupled low-density parity-check (SC-LDPC) code was considered for holographic data storage, and the superiority of SC-LDPC was studied by simulation. The simulations show that the performance of SC-LDPC depends on the lifting number: when the lifting number is over 100, SC-LDPC shows better error correctability than irregular LDPC. SC-LDPC is applied to the 5:9 modulation code, which is one of the differential codes. In simulation, the error-free point is near 2.8 dB, and input error rates above 10^-1 can be corrected. These simulation results indicate that this error correction code can be applied to actual holographic data storage test equipment; there, an error rate of 8 × 10^-2 was corrected, and the code works effectively and shows good error correctability.
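
    The spatial-coupling idea itself is compact: split a base matrix B into B0 + B1 and tile the two parts along adjacent block diagonals, terminating the chain ends. A minimal sketch under that assumption (a hypothetical split; the article's codes additionally depend on the lifting number):

```python
import numpy as np

def couple_base(B0, B1, L):
    """Memory-1 spatial coupling: block column t carries B0 at block row t
    and B1 at block row t+1. Checks at the chain ends see fewer edges
    (termination), the mechanism behind SC-LDPC threshold saturation.
    """
    m, n = B0.shape
    H = np.zeros(((L + 1) * m, L * n), dtype=int)
    for t in range(L):
        H[t*m:(t+1)*m, t*n:(t+1)*n] = B0
        H[(t+1)*m:(t+2)*m, t*n:(t+1)*n] = B1
    return H

# Hypothetical split B = B0 + B1 of a small base matrix.
B0 = np.array([[1, 1, 1, 1, 0, 0],
               [0, 0, 0, 1, 1, 1]])
B1 = np.array([[0, 0, 0, 0, 1, 1],
               [1, 1, 1, 0, 0, 0]])
print(couple_base(B0, B1, L=4).shape)   # (10, 24): coupled chain of length 4
```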

  11. A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications.

    PubMed

    Revathy, M; Saravanan, R

    2015-01-01

    Low-density parity-check (LDPC) codes have been adopted in the latest digital video broadcasting, broadband wireless access (WiMAX), and fourth-generation wireless standards. In this paper, we propose a highly efficient low-density parity-check (LDPC) decoder architecture for low-power applications. This study also covers the design and analysis of the check node and variable node units and the Euclidean orthogonal generator in the LDPC decoder architecture. The Euclidean orthogonal generator is used to reduce the error rate of the proposed LDPC architecture and can be incorporated between the check and variable node architecture. The proposed decoder design is synthesized on the Xilinx 9.2i platform and simulated using ModelSim, targeted to 45 nm devices. The synthesis report shows that the proposed architecture greatly reduces power consumption and hardware utilization compared with conventional architectures.

  12. Structured Low-Density Parity-Check Codes with Bandwidth Efficient Modulation

    NASA Technical Reports Server (NTRS)

    Cheng, Michael K.; Divsalar, Dariush; Duy, Stephanie

    2009-01-01

    In this work, we study the performance of structured Low-Density Parity-Check (LDPC) codes together with bandwidth-efficient modulations. We consider protograph-based LDPC codes that facilitate high-speed hardware implementations and have minimum distances that grow linearly with block sizes. We cover various higher-order modulations such as 8-PSK, 16-APSK, and 16-QAM. During demodulation, a demapper transforms the received in-phase and quadrature samples into reliability information that feeds the binary LDPC decoder. We compare various low-complexity demappers and provide simulation results for assorted coded-modulation combinations on the additive white Gaussian noise and independent Rayleigh fading channels.

  13. A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications

    PubMed Central

    Revathy, M.; Saravanan, R.

    2015-01-01

    Low-density parity-check (LDPC) codes have been adopted in the latest digital video broadcasting, broadband wireless access (WiMAX), and fourth-generation wireless standards. In this paper, we propose a highly efficient low-density parity-check (LDPC) decoder architecture for low-power applications. This study also covers the design and analysis of the check node and variable node units and the Euclidean orthogonal generator in the LDPC decoder architecture. The Euclidean orthogonal generator is used to reduce the error rate of the proposed LDPC architecture and can be incorporated between the check and variable node architecture. The proposed decoder design is synthesized on the Xilinx 9.2i platform and simulated using ModelSim, targeted to 45 nm devices. The synthesis report shows that the proposed architecture greatly reduces power consumption and hardware utilization compared with conventional architectures. PMID:26065017

  14. Entanglement-assisted quantum quasicyclic low-density parity-check codes

    NASA Astrophysics Data System (ADS)

    Hsieh, Min-Hsiu; Brun, Todd A.; Devetak, Igor

    2009-03-01

    We investigate the construction of quantum low-density parity-check (LDPC) codes from classical quasicyclic (QC) LDPC codes with girth greater than or equal to 6. We show that the classical codes in the generalized Calderbank-Shor-Steane construction do not need to satisfy the dual-containing property as long as preshared entanglement is available to both sender and receiver. We can use this to avoid the many four-cycles that typically arise in dual-containing LDPC codes. The advantage of such quantum codes comes from the use of efficient decoding algorithms such as the sum-product algorithm (SPA). It is well known that in the SPA, cycles of length 4 make successive decoding iterations highly correlated and hence limit the decoding performance. We show the principle of constructing quantum QC-LDPC codes which require only small amounts of initial shared entanglement.
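
    The dual-containing condition being relaxed here is easy to test numerically: the CSS construction from a single classical parity-check matrix H requires H H^T = 0 over GF(2), and in the entanglement-assisted construction the GF(2) rank of H H^T gives the number of preshared ebits. A small sketch with a hypothetical H (not a QC-LDPC matrix from the paper):

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    M = M.copy() % 2
    rank, rows, cols = 0, *M.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def is_dual_containing(H):
    """CSS construction without entanglement needs H @ H.T == 0 (mod 2)."""
    return not ((H @ H.T) % 2).any()

def ebits_needed(H):
    """Entanglement-assisted construction: preshared ebits = GF(2) rank of
    H @ H.T (the Hsieh-Brun-Devetak count)."""
    return gf2_rank((H @ H.T) % 2)

# Hypothetical small parity-check matrix.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [0, 0, 1, 1, 0, 1]], dtype=np.uint8)
print(is_dual_containing(H), ebits_needed(H))   # False 1: one ebit suffices
```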

  15. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes

    NASA Astrophysics Data System (ADS)

    Jing, Lin; Brun, Todd; Quantum Research Team

    Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Hagiwara et al. (2007) presented a method to calculate parity-check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights, we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.

  16. Photonic entanglement-assisted quantum low-density parity-check encoders and decoders.

    PubMed

    Djordjevic, Ivan B

    2010-05-01

    I propose encoder and decoder architectures for entanglement-assisted (EA) quantum low-density parity-check (LDPC) codes suitable for all-optical implementation. I show that the two basic gates needed for EA quantum error correction, namely the controlled-NOT (CNOT) and Hadamard gates, can be implemented based on the Mach-Zehnder interferometer. In addition, I show that EA quantum LDPC codes from balanced incomplete block designs of unitary index require only one entanglement qubit to be shared between source and destination.

  17. Parallel Subspace Subcodes of Reed-Solomon Codes for Magnetic Recording Channels

    ERIC Educational Resources Information Center

    Wang, Han

    2010-01-01

    Read channel architectures based on a single low-density parity-check (LDPC) code are being considered for the next generation of hard disk drives. However, LDPC-only solutions suffer from the error floor problem, which may compromise reliability, if not handled properly. Concatenated architectures using an LDPC code plus a Reed-Solomon (RS) code…

  18. Statistical mechanics of broadcast channels using low-density parity-check codes.

    PubMed

    Nakamura, Kazutaka; Kabashima, Yoshiyuki; Morelos-Zaragoza, Robert; Saad, David

    2003-03-01

    We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time sharing methods when algebraic codes are used. The statistical physics based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC based time sharing codes while the best performance, when received transmissions are optimally decoded, is bounded by the time sharing limit.

  19. Construction method of QC-LDPC codes based on multiplicative group of finite field in optical communication

    NASA Astrophysics Data System (ADS)

    Huang, Sheng; Ao, Xiang; Li, Yuan-yuan; Zhang, Rui

    2016-09-01

    In order to meet the needs of the high-speed development of optical communication systems, a construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes based on the multiplicative group of a finite field is proposed. The Tanner graph of the parity-check matrix of a code constructed by this method has no cycles of length 4, which ensures good distance properties. Simulation results show that, at a bit error rate (BER) of 10^-6 and in the same simulation environment, the net coding gain (NCG) of the proposed QC-LDPC(3780, 3540) code with a code rate of 93.7% is improved by 2.18 dB and 1.6 dB compared with those of the RS(255, 239) code in ITU-T G.975 and the LDPC(32640, 30592) code in ITU-T G.975.1, respectively. In addition, the NCG of the proposed QC-LDPC(3780, 3540) code is 0.2 dB and 0.4 dB higher than those of the SG-QC-LDPC(3780, 3540) code based on two different subgroups of a finite field and the AS-QC-LDPC(3780, 3540) code based on two arbitrary sets of a finite field, respectively. Thus, the proposed QC-LDPC(3780, 3540) code can be well applied in optical communication systems.

  20. Constructing LDPC Codes from Loop-Free Encoding Modules

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher; Thorpe, Jeremy; Andrews, Kenneth

    2009-01-01

    A method of constructing certain low-density parity-check (LDPC) codes by use of relatively simple loop-free coding modules has been developed. The subclasses of LDPC codes to which the method applies include accumulate-repeat-accumulate (ARA) codes, accumulate-repeat-check-accumulate codes, and the codes described in Accumulate-Repeat-Accumulate-Accumulate Codes (NPO-41305), NASA Tech Briefs, Vol. 31, No. 9 (September 2007), page 90. All of the affected codes can be characterized as serial/parallel (hybrid) concatenations of such relatively simple modules as accumulators, repetition codes, differentiators, and punctured single-parity-check codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. These codes can also be characterized as hybrid turbolike codes that have projected graph or protograph representations; these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The present method comprises two related submethods for constructing LDPC codes from simple loop-free modules with circulant permutations. The first submethod is an iterative encoding method based on the erasure-decoding algorithm. The computations required by this method are well organized because they involve a parity-check matrix having a block-circulant structure. The second submethod involves the use of block-circulant generator matrices; the encoders of this method are very similar to those of recursive convolutional codes. Some encoders according to this second submethod have been implemented in a small field-programmable gate array that operates at a speed of 100 megasymbols per second. By use of density evolution (a computational-simulation technique for analyzing the performance of LDPC codes), it has been shown through examples that as the block size goes to infinity, low iterative decoding thresholds close to channel-capacity limits can be achieved for codes of the type in question having low maximum variable-node degrees. The decoding thresholds in these examples are lower than those of the best-known unstructured irregular LDPC codes constrained to have the same maximum node degrees. Furthermore, the present method enables the construction of codes of any desired rate with thresholds that stay uniformly close to their respective channel-capacity thresholds.
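
    On the erasure channel, the density evolution mentioned above collapses to a one-dimensional recursion, which makes the threshold idea concrete. A minimal sketch for a regular (dv, dc) ensemble; this is a simplified stand-in for the full density evolution used for the codes described here:

```python
def bec_de_threshold(dv, dc, iters=2000, tol=1e-12):
    """Erasure-channel density-evolution threshold of a regular (dv, dc)
    LDPC ensemble: the largest erasure probability eps for which
    x <- eps * (1 - (1 - x)**(dc-1))**(dv-1) converges to 0.
    """
    lo, hi = 0.0, 1.0
    for _ in range(50):                  # bisection on eps
        eps = (lo + hi) / 2
        x = eps
        for _ in range(iters):
            x = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
            if x < tol:
                break
        lo, hi = (eps, hi) if x < tol else (lo, eps)
    return lo

print(round(bec_de_threshold(3, 6), 4))   # ~0.4294 for the (3,6) ensemble
```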

  1. Efficient Signal, Code, and Receiver Designs for MIMO Communication Systems

    DTIC Science & Technology

    2003-06-01

    Concatenation of a tilted-QAM inner code with an LDPC outer code with a two-component iterative soft-decision decoder. ... for AWGN channels has long been studied. There are well-known soft-decision codes like the turbo codes and LDPC codes that can approach capacity to... low density parity check (LDPC) code. The coded bits are randomly interleaved so that nearby bits go through different sub-channels, and are...

  2. Low Density Parity Check Codes: Bandwidth Efficient Channel Coding

    NASA Technical Reports Server (NTRS)

    Fong, Wai; Lin, Shu; Maki, Gary; Yeh, Pen-Shu

    2003-01-01

    Low Density Parity Check (LDPC) codes provide near-Shannon-capacity performance for NASA missions. These codes have high coding rates of R = 0.82 and 0.875 with moderate code lengths, n = 4096 and 8176. Their decoders have inherently parallel structures, which allows for high-speed implementation. Two codes based on Euclidean geometry (EG) were selected for flight ASIC implementation. These codes are cyclic and quasi-cyclic in nature and therefore have a simple encoder structure, which results in power and size benefits. These codes also have a large minimum distance, as much as d_min = 65, giving them powerful error-correcting capabilities and very low error floors. This paper presents the development of the LDPC flight encoder and decoder, its applications and status.

  3. LDPC coded OFDM over the atmospheric turbulence channel.

    PubMed

    Djordjevic, Ivan B; Vasic, Bane; Neifeld, Mark A

    2007-05-14

    Low-density parity-check (LDPC) coded optical orthogonal frequency division multiplexing (OFDM) is shown to significantly outperform LDPC coded on-off keying (OOK) over the atmospheric turbulence channel in terms of both coding gain and spectral efficiency. In the regime of strong turbulence at a bit-error rate of 10(-5), the coding gain improvement of the LDPC coded single-side band unclipped-OFDM system with 64 sub-carriers is larger than the coding gain of the LDPC coded OOK system by 20.2 dB for quadrature-phase-shift keying (QPSK) and by 23.4 dB for binary-phase-shift keying (BPSK).

  4. Self-Configuration and Localization in Ad Hoc Wireless Sensor Networks

    DTIC Science & Technology

    2010-08-31

    We explored the error mechanisms of iterative decoding of low-density parity-check (LDPC) codes. This work has resulted... important problems in the area of channel coding, as their unpredictable behavior has impeded the deployment of LDPC codes in many real-world applications. We... tree-based decoders of LDPC codes, including the extrinsic tree decoder, and an investigation into their performance and bounding capabilities [5], [6]...

  5. Construction of a new regular LDPC code for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Tong, Qing-zhen; Xu, Liang; Huang, Sheng

    2013-05-01

    A novel construction method of the check matrix for regular low-density parity-check (LDPC) codes is proposed, and a novel regular systematically constructed Gallager (SCG) LDPC(3969,3720) code with a code rate of 93.7% and a redundancy of 6.69% is constructed. The simulation results show that the net coding gain (NCG) and the distance from the Shannon limit of the novel SCG-LDPC(3969,3720) code can be improved by about 1.93 dB and 0.98 dB, respectively, at a bit error rate (BER) of 10^-8, compared with those of the classic RS(255,239) code in the ITU-T G.975 recommendation and the LDPC(32640,30592) code in the ITU-T G.975.1 recommendation with the same code rate of 93.7% and the same redundancy of 6.69%. Therefore, the proposed regular SCG-LDPC(3969,3720) code has excellent performance and is more suitable for high-speed long-haul optical transmission systems.

  6. Cooperative MIMO communication at wireless sensor network: an error correcting code approach.

    PubMed

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in a wireless sensor network (WSN) explores energy-efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple-input multiple-output (MIMO) and multiple-input single-output (MISO) configurations. In this paper, an energy-efficient cooperative MIMO (C-MIMO) technique is proposed in which a low-density parity-check (LDPC) code is used as the error-correcting code. The rate of the LDPC code is varied by varying the length of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of the LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis, and the BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller; also, a lower encoding rate for the LDPC code offers better error characteristics.

  7. Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach

    PubMed Central

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in a wireless sensor network (WSN) explores energy-efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple-input multiple-output (MIMO) and multiple-input single-output (MISO) configurations. In this paper, an energy-efficient cooperative MIMO (C-MIMO) technique is proposed in which a low-density parity-check (LDPC) code is used as the error-correcting code. The rate of the LDPC code is varied by varying the length of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of the LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis, and the BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error pb. It is observed that C-MIMO performs more efficiently when the targeted pb is smaller; also, a lower encoding rate for the LDPC code offers better error characteristics. PMID:22163732

  8. Experimental research and comparison of LDPC and RS channel coding in ultraviolet communication systems.

    PubMed

    Wu, Menglong; Han, Dahai; Zhang, Xiang; Zhang, Feng; Zhang, Min; Yue, Guangxin

    2014-03-10

    We have implemented a modified Low-Density Parity-Check (LDPC) codec algorithm in an ultraviolet (UV) communication system. Simulations are conducted with measured parameters to evaluate the LDPC-based UV system performance. Moreover, LDPC(960, 480) and RS(18, 10) codes are implemented and tested via a non-line-of-sight (NLOS) UV test bed. The experimental results are in agreement with the simulation and suggest that, at the given power and a 10(-3) bit error rate (BER), the average communication distance compared with an uncoded system increases by 32% with the RS code and by 78% with the LDPC code.

  9. Rate-Compatible Protograph LDPC Codes

    NASA Technical Reports Server (NTRS)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods resulting in rate-compatible low-density parity-check (LDPC) codes built from protographs. The described digital coding methods start with a desired code rate and a selection of the numbers of variable nodes and check nodes to be used in the protograph. Constraints are set to satisfy a linear minimum-distance growth property for the protograph. All possible edges in the graph are searched for the minimum iterative decoding threshold, and the protograph with the lowest iterative decoding threshold is selected. Protographs designed in this manner are used in decode-and-forward relay channels.

  10. A code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check codes

    NASA Astrophysics Data System (ADS)

    Bai, Cheng-lin; Cheng, Zhi-hui

    2016-09-01

    In order to further improve the carrier synchronization estimation range and accuracy at low signal-to-noise ratio (SNR), this paper proposes a code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check (NB-LDPC) codes to study the performance of the polarization-division-multiplexing coherent optical orthogonal frequency division multiplexing (PDM-CO-OFDM) system in the quadrature phase shift keying (QPSK) and 16 quadrature amplitude modulation (16-QAM) modes. The simulation results indicate that this algorithm enlarges the frequency and phase offset estimation ranges, greatly enhances the accuracy of the system, and effectively improves the bit error rate (BER) performance compared with that of a system employing the traditional NB-LDPC code-aided carrier synchronization algorithm.

  11. Product code optimization for determinate state LDPC decoding in robust image transmission.

    PubMed

    Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G

    2006-08-01

    We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.

  12. Low-Density Parity-Check Code Design Techniques to Simplify Encoding

    NASA Astrophysics Data System (ADS)

    Perez, J. M.; Andrews, K.

    2007-11-01

    This work describes a method for encoding low-density parity-check (LDPC) codes based on the accumulate-repeat-4-jagged-accumulate (AR4JA) scheme, using the low-density parity-check matrix H instead of the dense generator matrix G. Using the H matrix to encode allows a significant reduction in memory consumption and gives the encoder design great flexibility. Also described are new hardware-efficient codes, based on the same kind of protographs, which require less memory storage and area while allowing a reduction in the encoding delay.
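
    Encoding from a sparse H rather than a dense G amounts to solving H c = 0 (mod 2) for the parity bits. A minimal sketch, assuming the parity part of H is lower-triangular so forward substitution works; the AR4JA encoder described here uses a more elaborate structured H, but the memory argument is the same:

```python
import numpy as np

def encode_with_H(H, msg):
    """Encode with H = [Hm | Hp], Hp square and lower-triangular with a
    unit diagonal: parity bits follow by forward substitution from
    H @ c = 0 (mod 2), so no dense generator matrix is ever stored.
    """
    m, n = H.shape
    k = n - m
    Hm, Hp = H[:, :k], H[:, k:]
    p = np.zeros(m, dtype=np.uint8)
    s = (Hm @ msg) % 2                    # syndrome of the message part
    for i in range(m):
        p[i] = (s[i] + Hp[i, :i] @ p[:i]) % 2
    return np.concatenate([msg, p])

# Hypothetical toy code, not the AR4JA matrix.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 1, 1, 0],
              [1, 0, 1, 0, 1, 1]], dtype=np.uint8)
msg = np.array([1, 0, 1], dtype=np.uint8)
c = encode_with_H(H, msg)
print(c, (H @ c) % 2)                     # codeword and its all-zero syndrome
```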

  13. A novel concatenated code based on the improved SCG-LDPC code for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Xie, Ya; Wang, Lin; Huang, Sheng; Wang, Yong

    2013-01-01

    Based on the optimization and improvement of the construction method for the systematically constructed Gallager (SCG) (4, k) code, a novel SCG low-density parity-check (SCG-LDPC)(3969, 3720) code suitable for optical transmission systems is constructed. The novel SCG-LDPC(6561, 6240) code with a code rate of 95.1% is then constructed by increasing the length of the SCG-LDPC(3969, 3720) code, so that the code rate of the LDPC codes can better meet the high requirements of optical transmission systems. A novel concatenated code is then constructed by concatenating the SCG-LDPC(6561, 6240) code and the BCH(127, 120) code, with an overall code rate of 94.5%. The simulation results and analyses show that the net coding gain (NCG) of the BCH(127,120)+SCG-LDPC(6561,6240) concatenated code is 2.28 dB and 0.48 dB more than those of the classic RS(255,239) code and the SCG-LDPC(6561,6240) code, respectively, at a bit error rate (BER) of 10^-7.

  14. Error floor behavior study of LDPC codes for concatenated codes design

    NASA Astrophysics Data System (ADS)

    Chen, Weigang; Yin, Liuguo; Lu, Jianhua

    2007-11-01

    Error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is statistically studied with experimental results on a hardware evaluation platform. The results present the distribution of the residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small under the quantized sum-product (SP) algorithm. Therefore, an LDPC code may serve as the inner code in a concatenated coding system with a high-code-rate outer code, and thus an ultra-low error floor can be achieved. This conclusion is also verified by the experimental results.

  15. LDPC Codes--Structural Analysis and Decoding Techniques

    ERIC Educational Resources Information Center

    Zhang, Xiaojie

    2012-01-01

    Low-density parity-check (LDPC) codes have been the focus of much research over the past decade thanks to their near Shannon limit performance and to their efficient message-passing (MP) decoding algorithms. However, the error floor phenomenon observed in MP decoding, which manifests itself as an abrupt change in the slope of the error-rate curve,…

  16. Discussion on LDPC Codes and Uplink Coding

    NASA Technical Reports Server (NTRS)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes, and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP) and the recommended codes. A design for the pseudo-randomizer with LDPC decoder and CRC is also reviewed, and a chart that summarizes the three proposed coding systems is presented.

  17. An efficient decoding for low density parity check codes

    NASA Astrophysics Data System (ADS)

    Zhao, Ling; Zhang, Xiaolin; Zhu, Manjie

    2009-12-01

    Low density parity check (LDPC) codes are a class of forward-error-correction codes. They are among the best-known codes capable of achieving low bit error rates (BER) approaching Shannon's capacity limit. Recently, LDPC codes have been adopted by the European digital video broadcasting standard (DVB-S2) and have also been proposed for the emerging IEEE 802.16 fixed and mobile broadband wireless-access standard. The Consultative Committee for Space Data Systems (CCSDS) has also recommended using LDPC codes in deep-space and near-Earth communications. It is clear that LDPC codes will be widely used in wired and wireless communication, magnetic recording, optical networking, DVB, and other fields in the near future. Efficient hardware implementation of LDPC codes is therefore of great interest, since LDPC codes are being considered for a wide range of applications. This paper presents an efficient partially parallel decoder architecture suited for quasi-cyclic (QC) LDPC codes, using the belief-propagation algorithm for decoding. Algorithmic transformation and architectural-level optimization are incorporated to reduce the critical path. First, the check matrix of the LDPC code is analyzed to find the relationship between the row weight and the column weight; the sharing level of the check node updating units (CNU) and the variable node updating units (VNU) is then determined according to this relationship. After that, the CNU and VNU are rearranged and divided into several smaller parts; with the help of some assistant logic circuits, these smaller parts can be grouped into CNUs during check-node update processing and into VNUs during variable-node update processing. These smaller parts are called node update kernel units (NKU), and the assistant logic circuits are called node update auxiliary units (NAU). With the NAUs' help, the two steps of the iteration are completed by the NKUs, which yields a large reduction in hardware resources. Meanwhile, efficient techniques have been developed to reduce the computation delay of the node processing units and to minimize the hardware overhead for parallel processing. This method may be applied not only to regular LDPC codes but also to irregular ones. Based on the proposed architecture, a (7493, 6096) irregular QC-LDPC code decoder is described in the Verilog hardware description language and implemented on an Altera Stratix II EP2S130 field-programmable gate array (FPGA). The implementation results show that over 20% of the logic core size can be saved compared with conventional partially parallel decoder architectures, without any performance degradation. With a 100 MHz decoding clock, the proposed decoder achieves a maximum (source data) decoding throughput of 133 Mb/s at 18 iterations.
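
    The node computations behind such a decoder are compact even in software. Below is a minimal flooding-schedule min-sum sketch (the hardware-friendly approximation of the sum-product algorithm); it illustrates the CNU/VNU arithmetic only, not the partially parallel architecture or the NKU/NAU grouping described above. The matrix and LLRs are hypothetical, and check degrees of at least 2 are assumed:

```python
import numpy as np

def minsum_decode(H, llr, iters=20):
    """Min-sum decoding: variable nodes sum LLRs, check nodes combine the
    signs of the extrinsic inputs and take the minimum magnitude over the
    other edges. Assumes every check has degree >= 2 and nonzero inputs.
    """
    m, n = H.shape
    msgs = np.zeros((m, n))                       # check-to-variable messages
    rows = [np.flatnonzero(H[i]) for i in range(m)]
    hard = llr < 0
    for _ in range(iters):
        total = llr + msgs.sum(axis=0)            # variable-node update
        for i in range(m):                        # check-node update
            v = total[rows[i]] - msgs[i, rows[i]]     # extrinsic inputs
            sgn = np.prod(np.sign(v)) * np.sign(v)    # sign product of others
            mags = np.abs(v)
            m1, m2 = np.partition(mags, 1)[:2]    # two smallest magnitudes
            msgs[i, rows[i]] = sgn * np.where(mags == m1, m2, m1)
        hard = (llr + msgs.sum(axis=0)) < 0       # tentative hard decision
        if not ((H @ hard) % 2).any():            # all checks satisfied
            break
    return hard.astype(np.uint8)

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 1, 1, 0],
              [1, 0, 1, 0, 1, 1]])
llr = np.array([-2.0, 1.5, -1.8, 0.4, 1.2, 0.9])  # bit 3 received wrong
print(minsum_decode(H, llr))                      # [1 0 1 1 0 0]
```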

  18. Soft-Decision-Data Reshuffle to Mitigate Pulsed Radio Frequency Interference Impact on Low-Density-Parity-Check Code Performance

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun David

    2011-01-01

    This presentation briefly discusses a research effort on mitigation techniques for the impact of pulsed radio frequency interference (RFI) on a Low-Density Parity-Check (LDPC) code. This problem is of considerable interest in the context of providing reliable communications to a space vehicle that might suffer severe degradation due to pulsed RFI sources such as large radars. The LDPC code is a modern forward-error-correction (FEC) code whose decoding performance approaches the Shannon limit. The LDPC code studied here is the AR4JA (2048, 1024) code recommended by the Consultative Committee for Space Data Systems (CCSDS), which has been chosen for some spacecraft designs. Even though this code is designed as a powerful FEC code for the additive white Gaussian noise channel, simulation data and test results show that the performance of this LDPC decoder is severely degraded when exposed to the pulsed RFI specified in the spacecraft's transponder specifications. An analysis (through modeling and simulation) has been conducted to evaluate the impact of the pulsed RFI, and a few implementation techniques have been investigated to mitigate the pulsed RFI impact by reshuffling the soft-decision data available at the input of the LDPC decoder. The simulation results show that the LDPC decoding performance in codeword error rate (CWER) under pulsed RFI can be improved by up to four orders of magnitude through a simple soft-decision-data reshuffle scheme. This study reveals that an error floor of LDPC decoding performance appears around CWER=1E-4 when the proposed technique is applied to mitigate the pulsed RFI impact. The mechanism causing this error floor remains unknown; further investigation is necessary.

  19. The application of LDPC code in MIMO-OFDM system

    NASA Astrophysics Data System (ADS)

    Liu, Ruian; Zeng, Beibei; Chen, Tingting; Liu, Nan; Yin, Ninghao

    2018-03-01

    The combination of MIMO and OFDM technology has become one of the key technologies of the fourth generation of mobile communication; it can overcome the frequency-selective fading of the wireless channel, increase the system capacity and improve the frequency utilization. Error-correcting coding introduced into the system can further improve its performance. The LDPC (low density parity check) code is a kind of error-correcting code which can improve system reliability and anti-interference ability, and its decoding is simple and easy to operate. This paper mainly discusses the application of LDPC codes in the MIMO-OFDM system.

  20. Simultaneous chromatic dispersion and PMD compensation by using coded-OFDM and girth-10 LDPC codes.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2008-07-07

    Low-density parity-check (LDPC)-coded orthogonal frequency division multiplexing (OFDM) is studied as an efficient coded modulation scheme suitable for simultaneous chromatic dispersion and polarization mode dispersion (PMD) compensation. We show that, for an aggregate rate of 10 Gb/s, accumulated dispersion over 6500 km of SMF and a differential group delay of 100 ps can be simultaneously compensated with a penalty within 1.5 dB (with respect to the back-to-back configuration) when training-sequence-based channel estimation and girth-10 LDPC codes of rate 0.8 are employed.

  1. Protograph LDPC Codes for the Erasure Channel

    NASA Technical Reports Server (NTRS)

    Pollara, Fabrizio; Dolinar, Samuel J.; Divsalar, Dariush

    2006-01-01

    This viewgraph presentation reviews the use of protograph Low Density Parity Check (LDPC) codes for erasure channels. A protograph is a Tanner graph with a relatively small number of nodes; a "copy-and-permute" operation can be applied to the protograph to obtain larger derived graphs of various sizes. For very high code rates and short block sizes, a low asymptotic threshold criterion is not the best approach to designing LDPC codes; simple protographs with much regularity and low maximum node degrees appear to be the best choices. Quantized-rateless protograph LDPC codes can be built by careful design of the protograph such that multiple puncturing patterns will still permit message-passing decoding to proceed.
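
    On the erasure channel, message-passing decoding specializes to "peeling": repeatedly find a check with exactly one erased bit and solve for it. A minimal sketch with a hypothetical (6,3) code, not one of the protograph codes from the presentation:

```python
import numpy as np

def peel_erasures(H, y):
    """Erasure decoding sketch: y holds 0/1 values with -1 marking an
    erasure. Any parity check touching exactly one erasure determines it;
    iterate until no such check remains.
    """
    y = y.copy()
    progress = True
    while progress and (y == -1).any():
        progress = False
        for row in H:
            on = np.flatnonzero(row)
            erased = on[y[on] == -1]
            if len(erased) == 1:                    # solvable check
                known = on[y[on] != -1]
                y[erased[0]] = y[known].sum() % 2   # parity fills the gap
                progress = True
    return y                                        # -1s remain if stuck

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 1, 1, 0],
              [1, 0, 1, 0, 1, 1]])
y = np.array([1, -1, 1, 1, -1, 0])   # codeword 1 0 1 1 0 0, two erasures
print(peel_erasures(H, y))           # recovers [1 0 1 1 0 0]
```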

  2. FPGA implementation of high-performance QC-LDPC decoder for optical communications

    NASA Astrophysics Data System (ADS)

    Zou, Ding; Djordjevic, Ivan B.

    2015-01-01

    Forward error correction is one of the key technologies enabling next-generation high-speed fiber optical communications. Quasi-cyclic (QC) low-density parity-check (LDPC) codes have been considered one of the promising candidates due to their large coding gain and low implementation complexity. In this paper, we present our designed QC-LDPC code with girth 10 and 25% overhead based on pairwise balanced design. By FPGA-based emulation, we demonstrate that the 5-bit soft-decision LDPC decoder can achieve 11.8 dB net coding gain with no error floor at a BER of 10^-15, without using any outer code or post-processing method. We believe that the proposed single QC-LDPC code is a promising solution for 400 Gb/s optical communication systems and beyond.

  3. Bounded-Angle Iterative Decoding of LDPC Codes

    NASA Technical Reports Server (NTRS)

    Dolinar, Samuel; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2009-01-01

    Bounded-angle iterative decoding is a modified version of conventional iterative decoding, conceived as a means of reducing undetected-error rates for short low-density parity-check (LDPC) codes. For a given code, bounded-angle iterative decoding can be implemented by means of a simple modification of the decoder algorithm, without redesigning the code. Bounded-angle iterative decoding is based on a representation of received words and code words as vectors in an n-dimensional Euclidean space (where n is an integer).

  4. Optical LDPC decoders for beyond 100 Gbits/s optical transmission.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2009-05-01

    We present an optical low-density parity-check (LDPC) decoder suitable for implementation above 100 Gbits/s, which provides large coding gains when based on large-girth LDPC codes. We show that a basic building block, the probabilities multiplier circuit, can be implemented using a Mach-Zehnder interferometer, and we propose a corresponding probabilistic-domain sum-product algorithm (SPA). We perform simulations of a fully parallel implementation employing girth-10 LDPC codes and the proposed SPA. The girth-10 LDPC(24015,19212) code of rate 0.8 outperforms the BCH(128,113)×BCH(256,239) turbo-product code of rate 0.82 by 0.91 dB (for binary phase-shift keying at 100 Gbits/s and a bit error rate of 10(-9)), and provides a net effective coding gain of 10.09 dB.

  5. Unitals and ovals of symmetric block designs in LDPC and space-time coding

    NASA Astrophysics Data System (ADS)

    Andriamanalimanana, Bruno R.

    2004-08-01

    An approach to the design of LDPC (low density parity check) error-correction and space-time modulation codes involves starting with known mathematical and combinatorial structures, and deriving code properties from structure properties. This paper reports on an investigation of unital and oval configurations within generic symmetric combinatorial designs, not just classical projective planes, as the underlying structure for classes of space-time LDPC outer codes. Of particular interest are the encoding and iterative (sum-product) decoding gains that these codes may provide. Various small-length cases have been numerically implemented in Java and Matlab for a number of channel models.

  6. High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin

    2016-01-01

    Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycle. In this paper, by combining the PEG algorithm and the quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding, with an efficiency of 93.7%.
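
    The PEG algorithm named above admits a compact sketch: each new edge joins a variable node to the check node farthest from it in the graph built so far (or to an unreachable, lowest-degree check), which greedily maximizes local girth. A simplified regular version, without the quasi-cyclic extension the authors combine it with; parameters are arbitrary toy values:

```python
import numpy as np

def peg_construct(n, m, dv):
    """Progressive edge growth for a column-weight-dv code: BFS from each
    variable node measures check-node distances; prefer unreachable checks,
    else the farthest ones, breaking ties by lowest current degree.
    """
    checks = [set() for _ in range(m)]        # check -> attached variables
    vars_ = [set() for _ in range(n)]         # variable -> attached checks
    for v in range(n):
        for _ in range(dv):
            dist, frontier, seen = {c: 0 for c in vars_[v]}, set(vars_[v]), {v}
            while frontier:                   # BFS over the bipartite graph
                nxt = set()
                for c in frontier:
                    for u in checks[c]:
                        if u not in seen:
                            seen.add(u)
                            for c2 in vars_[u]:
                                if c2 not in dist:
                                    dist[c2] = dist[c] + 1
                                    nxt.add(c2)
                frontier = nxt
            pool = [c for c in range(m) if c not in dist]   # unreachable
            if not pool:
                far = max(dist.values())
                pool = [c for c in dist if dist[c] == far]  # farthest
            c_star = min(pool, key=lambda c: len(checks[c]))
            checks[c_star].add(v)
            vars_[v].add(c_star)
    H = np.zeros((m, n), dtype=np.uint8)
    for c, vs in enumerate(checks):
        H[c, sorted(vs)] = 1
    return H

print(peg_construct(n=12, m=6, dv=3).sum(axis=0))   # every column has weight 3
```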

  7. PSEUDO-CODEWORD LANDSCAPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CHERTKOV, MICHAEL; STEPANOV, MIKHAIL

    2007-01-10

    The authors discuss the performance of Low-Density Parity-Check (LDPC) codes decoded by Linear Programming (LP) decoding at moderate and large Signal-to-Noise Ratios (SNR). The Frame-Error-Rate (FER) dependence on SNR and the noise-space landscape of the coding/decoding scheme are analyzed by a combination of the previously introduced instanton/pseudo-codeword-search method and a new 'dendro' trick. To reduce the complexity of LP decoding for a code with high-degree checks, >= 5, they introduce its dendro-LDPC counterpart, that is, a code performing identically to the original one under Maximum-A-Posteriori (MAP) decoding but having reduced (down to three) check connectivity degree. Analyzing a number of popular LDPC codes and their dendro versions performing over the Additive-White-Gaussian-Noise (AWGN) channel, they observed two qualitatively different regimes: (i) the error floor sets in early, at relatively low SNR, and (ii) the FER decays faster with SNR increase at moderate SNR than at the largest SNR. They explain these regimes in terms of the pseudo-codeword spectra of the codes.

  8. PMD compensation in fiber-optic communication systems with direct detection using LDPC-coded OFDM.

    PubMed

    Djordjevic, Ivan B

    2007-04-02

    The possibility of polarization-mode dispersion (PMD) compensation in fiber-optic communication systems with direct detection using a simple channel estimation technique and low-density parity-check (LDPC)-coded orthogonal frequency division multiplexing (OFDM) is demonstrated. It is shown that even for differential group delay (DGD) of 4/BW (BW is the OFDM signal bandwidth), the degradation due to the first-order PMD can be completely compensated for. Two classes of LDPC codes designed based on two different combinatorial objects (difference systems and product of combinatorial designs) suitable for use in PMD compensation are introduced.

  9. A modified non-binary LDPC scheme based on watermark symbols in high speed optical transmission systems

    NASA Astrophysics Data System (ADS)

    Wang, Liming; Qiao, Yaojun; Yu, Qian; Zhang, Wenbo

    2016-04-01

    We introduce a watermark nonbinary low-density parity-check (NB-LDPC) code scheme, which can estimate the time-varying noise variance by using prior information from the watermark symbols, to improve the performance of NB-LDPC codes. Compared with the prior-art counterpart, the watermark scheme brings about a 0.25 dB improvement in net coding gain (NCG) at a bit error rate (BER) of 1e-6 and a 36.8-81% reduction in the number of iterations. The proposed scheme thus shows great potential in terms of error correction performance and decoding efficiency.

  10. Performance Evaluation of LDPC Coding and Iterative Decoding System in BPM R/W Channel Affected by Head Field Gradient, Media SFD and Demagnetization Field

    NASA Astrophysics Data System (ADS)

    Nakamura, Yasuaki; Okamoto, Yoshihiro; Osawa, Hisashi; Aoi, Hajime; Muraoka, Hiroaki

    We evaluate the write-margin performance of the low-density parity-check (LDPC) coding and iterative decoding system in the bit-patterned media (BPM) R/W channel affected by the write-head field gradient, the media switching field distribution (SFD), the demagnetization field from adjacent islands and the island position deviation. It is clarified that the LDPC coding and iterative decoding system in the R/W channel using BPM at 3 Tbit/inch^2 has a write-margin of about 20%.

  11. Non-binary LDPC-coded modulation for high-speed optical metro networks with backpropagation

    NASA Astrophysics Data System (ADS)

    Arabaci, Murat; Djordjevic, Ivan B.; Saunders, Ross; Marcoccia, Roberto M.

    2010-01-01

    To simultaneously mitigate the linear and nonlinear channel impairments in high-speed optical communications, we propose the use of non-binary low-density-parity-check-coded modulation in combination with a coarse backpropagation method. By employing backpropagation, we reduce the memory in the channel and in return obtain significant reductions in the complexity of the channel equalizer, which grows exponentially with the channel memory. We then compensate for the remaining channel distortions using forward error correction based on non-binary LDPC codes. We propose the non-binary-LDPC-coded modulation scheme because, compared to a bit-interleaved binary-LDPC-coded modulation scheme employing turbo equalization, it lowers the computational complexity and latency of the overall system while providing impressively larger coding gains.

  12. A novel QC-LDPC code based on the finite field multiplicative group for optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Xu, Liang; Tong, Qing-zhen

    2013-09-01

    A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the finite field multiplicative group, which offers simpler construction, more flexible adjustment of code length and code rate, and lower encoding/decoding complexity. Moreover, a regular QC-LDPC(5334,4962) code is constructed. The simulation results show that the constructed QC-LDPC(5334,4962) code achieves better error correction performance on the additive white Gaussian noise (AWGN) channel with iterative sum-product algorithm (SPA) decoding. At a bit error rate (BER) of 10^-6, the net coding gain (NCG) of the constructed QC-LDPC(5334,4962) code is 1.8 dB, 0.9 dB and 0.2 dB more than those of the classic RS(255,239) code in ITU-T G.975, the LDPC(32640,30592) code in ITU-T G.975.1 and the SCG-LDPC(3969,3720) code constructed by the random method, respectively. It is therefore more suitable for optical communication systems.

  13. High-throughput GPU-based LDPC decoding

    NASA Astrophysics Data System (ADS)

    Chang, Yang-Lang; Chang, Cheng-Chun; Huang, Min-Yu; Huang, Bormin

    2010-08-01

    Low-density parity-check (LDPC) code is a linear block code known to approach the Shannon limit via the iterative sum-product algorithm. LDPC codes have been adopted in most current communication systems such as DVB-S2, WiMAX, Wi-Fi and 10GBASE-T. The need for reliable and flexible communication links across a wide variety of communication standards and configurations has inspired demand for high-performance, flexible computing. Accordingly, finding a fast and reconfigurable development platform for designing high-throughput LDPC decoders has become important, especially for rapidly changing communication standards and configurations. In this paper, a new graphics-processing-unit (GPU) LDPC decoding platform with asynchronous data transfer is proposed to realize this practical implementation. Experimental results showed that the proposed GPU-based decoder achieved a 271x speedup compared to its CPU-based counterpart. It can serve as a high-throughput LDPC decoder.

  14. Quantum Kronecker sum-product low-density parity-check codes with finite rate

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Pryadko, Leonid P.

    2013-07-01

    We introduce an ansatz for quantum codes which gives the hypergraph-product (generalized toric) codes by Tillich and Zémor and generalized bicycle codes by MacKay as limiting cases. The construction allows for both the lower and the upper bounds on the minimum distance; they scale as a square root of the block length. Many thus defined codes have a finite rate and limited-weight stabilizer generators, an analog of classical low-density parity-check (LDPC) codes. Compared to the hypergraph-product codes, hyperbicycle codes generally have a wider range of parameters; in particular, they can have a higher rate while preserving the estimated error threshold.
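
    The hypergraph-product construction that this ansatz generalizes is short enough to write down: two classical parity-check matrices combine via Kronecker products into commuting X- and Z-type stabilizer matrices whose row weights stay bounded, which is what makes the result a quantum LDPC code. A sketch (the toy repetition-code input is an assumption; hyperbicycle codes themselves modify this construction):

```python
import numpy as np

def hypergraph_product(H1, H2):
    """Hypergraph-product (Tillich-Zemor) construction: returns stabilizer
    matrices HX, HZ over GF(2) satisfying HX @ HZ.T == 0 (mod 2).
    """
    m1, n1 = H1.shape
    m2, n2 = H2.shape
    HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(m1, dtype=int), H2.T)]) % 2
    HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(m2, dtype=int))]) % 2
    assert not ((HX @ HZ.T) % 2).any()   # X and Z checks must commute
    return HX, HZ

# Toy input: a length-3 repetition-code check matrix crossed with itself.
H = np.array([[1, 1, 0],
              [0, 1, 1]])
HX, HZ = hypergraph_product(H, H)
print(HX.shape, HZ.shape)   # (6, 13) (6, 13): 13 qubits, 12 stabilizer rows
```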

  15. FPGA implementation of low complexity LDPC iterative decoder

    NASA Astrophysics Data System (ADS)

    Verma, Shivani; Sharma, Sanjay

    2016-07-01

    Low-density parity-check (LDPC) codes, proposed by Gallager, emerged as a class of codes which can yield very good performance on the additive white Gaussian noise channel as well as on the binary symmetric channel. LDPC codes have gained importance due to their capacity-achieving property and excellent performance in noisy channels. The belief propagation (BP) algorithm and its approximations, most notably min-sum, are popular iterative decoding algorithms used for LDPC and turbo codes. The trade-off between hardware complexity and decoding throughput is a critical factor in the implementation of a practical decoder. This article presents an introduction to LDPC codes and their various decoding algorithms, followed by the realisation of an LDPC decoder using a simplified message-passing algorithm and a partially parallel decoder architecture. The simplified message-passing algorithm is proposed as a trade-off between low decoding complexity and decoder performance; it greatly reduces the routing and check node complexity of the decoder. The partially parallel decoder architecture possesses high speed and reduced complexity. The improved design of the decoder achieves a maximum symbol throughput of 92.95 Mbps and a maximum of 18 decoding iterations. The article presents an implementation of a 9216-bit, rate-1/2, (3, 6) LDPC decoder on the Xilinx XC3D3400A device from the Spartan-3A DSP family.

  16. Encoders for block-circulant LDPC codes

    NASA Technical Reports Server (NTRS)

    Andrews, Kenneth; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    In this paper, we present two encoding methods for block-circulant LDPC codes. The first is an iterative encoding method based on the erasure decoding algorithm, and the computations required are well organized due to the block-circulant structure of the parity check matrix. The second method uses block-circulant generator matrices, and the encoders are very similar to those for recursive convolutional codes. Some encoders of the second type have been implemented in a small Field Programmable Gate Array (FPGA) and operate at 100 Msymbols/second.
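
    A toy sketch of the second method, with hypothetical block size and shift values far smaller than the codes in the paper: with a systematic block-circulant generator G = [I | A], encoding reduces to XOR-ing cyclic shifts of the message, which is why the hardware resembles an encoder for recursive convolutional codes.

        import numpy as np

        def circulant(first_row):
            # Binary circulant block: row k is the first row shifted right k places.
            return np.stack([np.roll(first_row, k) for k in range(len(first_row))])

        b = 8                                     # circulant block size (toy value)
        A = circulant(np.array([1, 1, 0, 1, 0, 0, 1, 0]))
        G = np.hstack([np.eye(b, dtype=int), A])  # systematic generator [ I | A ]

        msg = np.random.randint(0, 2, b)
        codeword = (msg @ G) % 2                  # parity part: XOR of cyclic shifts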

  17. Strategic and Tactical Decision-Making Under Uncertainty

    DTIC Science & Technology

    2006-01-03

    message passing algorithms. In recent work we applied this method to the problem of joint decoding of a low-density parity-check (LDPC) code and a partial... "Joint Decoding of LDPC Codes and Partial-Response Channels," IEEE Transactions on Communications, Vol. 54, No. 7, 1149-1153, 2006. P. Pakzad and V...

  18. Pilotless Frame Synchronization Using LDPC Code Constraints

    NASA Technical Reports Server (NTRS)

    Jones, Christopher; Villasenor, John

    2009-01-01

    A method of pilotless frame synchronization has been devised for low-density parity-check (LDPC) codes. In pilotless frame synchronization, there are no pilot symbols; instead, the offset is estimated by exploiting selected aspects of the structure of the code. The advantage of pilotless frame synchronization is that the bandwidth of the signal is reduced by an amount associated with elimination of the pilot symbols. The disadvantage is an increase in the amount of receiver data processing needed for frame synchronization.

  19. Secret information reconciliation based on punctured low-density parity-check codes for continuous-variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Jiang, Xue-Qin; Huang, Peng; Huang, Duan; Lin, Dakai; Zeng, Guihua

    2017-02-01

    Achieving information theoretic security with practical complexity is of great interest to continuous-variable quantum key distribution in the postprocessing procedure. In this paper, we propose a reconciliation scheme based on the punctured low-density parity-check (LDPC) codes. Compared to the well-known multidimensional reconciliation scheme, the present scheme has lower time complexity. Especially when the chosen punctured LDPC code achieves the Shannon capacity, the proposed reconciliation scheme can remove the information that has been leaked to an eavesdropper in the quantum transmission phase. Therefore, there is no information leaked to the eavesdropper after the reconciliation stage. This indicates that the privacy amplification algorithm of the postprocessing procedure is no more needed after the reconciliation process. These features lead to a higher secret key rate, optimal performance, and availability for the involved quantum key distribution scheme.

  20. Experimental demonstration of the transmission performance for LDPC-coded multiband OFDM ultra-wideband over fiber system

    NASA Astrophysics Data System (ADS)

    He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin; Su, Jinshu

    2015-01-01

    To improve the transmission performance of multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband (UWB) over optical fiber, a pre-coding scheme based on low-density parity-check (LDPC) codes is adopted and experimentally demonstrated in an intensity-modulation and direct-detection MB-OFDM UWB over fiber system. Meanwhile, a symbol synchronization and pilot-aided channel estimation scheme is implemented at the receiver of the MB-OFDM UWB over fiber system. The experimental results show that the LDPC pre-coding scheme works effectively in the MB-OFDM UWB over fiber system. After 70 km standard single-mode fiber (SSMF) transmission, at a bit error rate of 1 × 10-3, the receiver sensitivity is improved by about 4 dB when the LDPC code rate is 75%.

  1. Received response based heuristic LDPC code for short-range non-line-of-sight ultraviolet communication.

    PubMed

    Qin, Heng; Zuo, Yong; Zhang, Dong; Li, Yinghui; Wu, Jian

    2017-03-06

    Through a slight modification of typical photomultiplier tube (PMT) receiver output statistics, a generalized received response model considering both scattered propagation and random detection is presented to investigate the impact of inter-symbol interference (ISI) on the link data rate of short-range non-line-of-sight (NLOS) ultraviolet communication. Numerical simulation shows good agreement with the experimental results. Based on the received response characteristics, a heuristic check matrix construction algorithm for low-density parity-check (LDPC) codes is further proposed to approach the data rate bound derived for a delayed-sampling (DS) binary pulse position modulation (PPM) system. Compared to conventional LDPC coding methods, a better bit error ratio (BER), below 1E-05, is achieved for short-range NLOS UVC systems operating at a data rate of 2 Mbps.

  2. Short-Block Protograph-Based LDPC Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher

    2010-01-01

    Short-block low-density parity-check (LDPC) codes of a special type are intended to be especially well suited for potential applications that include transmission of command and control data, cellular telephony, data communications in wireless local area networks, and satellite data communications. [In general, LDPC codes belong to a class of error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels.] The codes of the present special type exhibit low error floors, low bit and frame error rates, and low latency (in comparison with related prior codes). These codes also achieve a low maximum rate of undetected errors over all signal-to-noise ratios, without requiring the use of cyclic redundancy checks, which would significantly increase the overhead for short blocks. These codes have protograph representations; this is advantageous in that, for reasons that exceed the scope of this article, the applicability of protograph representations makes it possible to design high-speed iterative decoders that utilize belief-propagation algorithms.

  3. Using LDPC Code Constraints to Aid Recovery of Symbol Timing

    NASA Technical Reports Server (NTRS)

    Jones, Christopher; Villasenor, John; Lee, Dong-U; Valles, Esteban

    2008-01-01

    A method of utilizing information available in the constraints imposed by a low-density parity-check (LDPC) code has been proposed as a means of aiding the recovery of symbol timing in the reception of a binary-phase-shift-keying (BPSK) signal representing such a code in the presence of noise, timing error, and/or Doppler shift between the transmitter and the receiver. This method and the receiver architecture in which it would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless in that they do not require transmission and reception of pilot signals. Acquisition and tracking of a signal of the type described above have traditionally been performed upstream of, and independently of, decoding and have typically involved utilization of a phase-locked loop (PLL). However, the LDPC decoding process, which is iterative, provides information that can be fed back to the timing-recovery receiver circuits to improve performance significantly over that attainable in the absence of such feedback. Prior methods of coupling LDPC decoding with timing recovery had focused on the use of output code words produced as the iterations progress. In contrast, in the present method, one exploits the information available from the metrics computed for the constraint nodes of an LDPC code during the decoding process. In addition, the method involves the use of a waveform model that captures, better than do the waveform models of the prior methods, distortions introduced by receiver timing errors and transmitter/receiver motions. An LDPC code is commonly represented by use of a bipartite graph containing two sets of nodes. In the graph corresponding to an (n,k) code, the n variable nodes correspond to the code word symbols and the n-k constraint nodes represent the constraints that the code places on the variable nodes in order for them to form a valid code word. The decoding procedure involves iterative computation of values associated with these nodes. A constraint node represents a parity-check equation using a set of variable nodes as inputs. A valid decoded code word is obtained if all parity-check equations are satisfied. After each iteration, the metrics associated with each constraint node can be evaluated to determine the status of the associated parity check. Heretofore, normally, these metrics would be utilized only within the LDPC decoding process to assess whether or not variable nodes had converged to a codeword. In the present method, it is recognized that these metrics can be used to determine accuracy of the timing estimates used in acquiring the sampled data that constitute the input to the LDPC decoder. In fact, the number of constraints that are satisfied exhibits a peak near the optimal timing estimate. Coarse timing estimation (or first-stage estimation as described below) is found via a parametric search for this peak. The present method calls for a two-stage receiver architecture illustrated in the figure. The first stage would correct large time delays and frequency offsets; the second stage would track random walks and correct residual time and frequency offsets. In the first stage, constraint-node feedback from the LDPC decoder would be employed in a search algorithm in which the searches would be performed in successively narrower windows to find the correct time delay and/or frequency offset.
The second stage would include a conventional first-order PLL with a decision-aided timing-error detector that would utilize, as its decision aid, decoded symbols from the LDPC decoder. The method has been tested by means of computational simulations in cases involving various timing and frequency errors. The results of the simulations indicate that performance approaches that obtained in the ideal case of perfect timing in the receiver.
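
    The first-stage peak search can be pictured schematically as follows; slicer is a hypothetical callback producing hard decisions from samples taken at a trial offset, standing in for the receiver front end described above.

        import numpy as np

        def count_satisfied_checks(H, bits):
            # Number of parity checks a hard-decision bit vector satisfies.
            return int(np.sum((H @ bits) % 2 == 0))

        def coarse_timing_search(H, slicer, offsets):
            # slicer(offset) -> hard bits sliced from samples taken at that trial
            # offset (left abstract here; it wraps the matched filter and slicer).
            # The satisfied-check count peaks near the optimal timing estimate.
            scores = [count_satisfied_checks(H, slicer(off)) for off in offsets]
            return offsets[int(np.argmax(scores))], scores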

  4. A Golay complementary TS-based symbol synchronization scheme in variable rate LDPC-coded MB-OFDM UWBoF system

    NASA Astrophysics Data System (ADS)

    He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin

    2015-09-01

    In this paper, a Golay complementary training sequence (TS)-based symbol synchronization scheme is proposed and experimentally demonstrated in a multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system with a variable-rate low-density parity-check (LDPC) code. Meanwhile, the coding gain and spectral efficiency of the variable-rate LDPC-coded MB-OFDM UWBoF system are investigated. By utilizing the non-periodic auto-correlation property of the Golay complementary pair, the start point of the LDPC-coded MB-OFDM UWB signal can be estimated accurately. After 100 km standard single-mode fiber (SSMF) transmission, at a bit error rate of 1×10-3, the experimental results show that the short-block-length 64QAM-LDPC coding provides a coding gain of 4.5 dB, 3.8 dB and 2.9 dB for code rates of 62.5%, 75% and 87.5%, respectively.
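
    The property being exploited is that the aperiodic autocorrelations of a Golay complementary pair sum to a delta function. A small NumPy check using the standard doubling construction (the actual TS length and mapping used in the experiment are not reproduced here):

        import numpy as np

        def golay_pair(n):
            # Doubling recursion (a, b) -> (a|b, a|-b); n must be a power of two.
            a, b = np.array([1]), np.array([1])
            while len(a) < n:
                a, b = np.concatenate([a, b]), np.concatenate([a, -b])
            return a, b

        a, b = golay_pair(8)
        s = np.correlate(a, a, 'full') + np.correlate(b, b, 'full')
        # s equals 2*8 at zero lag and exactly 0 at every other lag,
        # giving a single sharp correlation peak to mark the frame start.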

  5. RETRACTED — PMD mitigation through interleaving LDPC codes with polarization scramblers

    NASA Astrophysics Data System (ADS)

    Han, Dahai; Chen, Haoran; Xi, Lixia

    2012-11-01

    The combination of forward error correction (FEC) and distributed fast polarization scramblers (D-FPSs) is approved as an effective method to mitigate polarization mode dispersion (PMD) in high-speed optical fiber communication systems. The low-density parity-check (LDPC) codes are newly introduced into the PMD mitigation scheme with D-FPSs in this paper as one of the promising FEC codes to achieve better performance. The scrambling speed of the FPS for the LDPC (2040, 1903) coded system is discussed, and a reasonable speed of 10 MHz is obtained from the simulation results. For easy application in practical large-scale integrated (LSI) circuits, the number of iterations in decoding the LDPC codes is also investigated. The PMD tolerance and cut-off optical signal-to-noise ratio (OSNR) of the LDPC codes are compared with Reed-Solomon (RS) codes under different conditions. In the simulation, interleaving the LDPC codes brings an incremental improvement in error correction, and the PMD tolerance is 10 ps at OSNR = 11.4 dB. The results show that LDPC codes can substitute for traditional RS codes with D-FPSs, and all of the executable code files are open to researchers who have a practical LSI platform for PMD mitigation.

  6. PMD mitigation through interleaving LDPC codes with polarization scramblers

    NASA Astrophysics Data System (ADS)

    Han, Dahai; Chen, Haoran; Xi, Lixia

    2013-09-01

    The combination of forward error correction (FEC) and distributed fast polarization scramblers (D-FPSs) is approved as an effective method to mitigate polarization mode dispersion (PMD) in high-speed optical fiber communication systems. The low-density parity-check (LDPC) codes are newly introduced into the PMD mitigation scheme with D-FPSs in this article as one of the promising FEC codes to achieve better performance. The scrambling speed of the FPS for the LDPC (2040, 1903) coded system is discussed, and a reasonable speed of 10 MHz is obtained from the simulation results. For easy application in practical large-scale integrated (LSI) circuits, the number of iterations in decoding the LDPC codes is also investigated. The PMD tolerance and cut-off optical signal-to-noise ratio (OSNR) of the LDPC codes are compared with Reed-Solomon (RS) codes under different conditions. In the simulation, interleaving the LDPC codes brings an incremental improvement in error correction, and the PMD tolerance is 10 ps at OSNR = 11.4 dB. The results show that LDPC codes can substitute for traditional RS codes with D-FPSs, and all of the executable code files are open to researchers who have a practical LSI platform for PMD mitigation.

  7. Scalable video transmission over Rayleigh fading channels using LDPC codes

    NASA Astrophysics Data System (ADS)

    Bansal, Manu; Kondi, Lisimachos P.

    2005-03-01

    In this paper, we investigate an important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining a good decoded video quality and resilience to channel impairments. Our system consists of the video codec based on 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity check (LDPC) codes for channel error protection. The first method uses the serial concatenation of the constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. Cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use the product code structure consisting of a constant rate LDPC/CRC code across the rows of the `blocks' of source data and an erasure-correction systematic Reed-Solomon (RS) code as the column code. In both the schemes introduced here, we use fixed-length source packets protected with unequal forward error correction coding ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. The rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions and both the proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform the more conventional schemes such as those employing RCPC/CRC.

  8. LDPC-coded MIMO optical communication over the atmospheric turbulence channel using Q-ary pulse-position modulation.

    PubMed

    Djordjevic, Ivan B

    2007-08-06

    We describe a coded power-efficient transmission scheme based on the repetition MIMO principle suitable for communication over the atmospheric turbulence channel, and determine its channel capacity. The proposed scheme employs Q-ary pulse-position modulation. We further study how to approach the channel capacity limits using low-density parity-check (LDPC) codes. Component LDPC codes are designed using the concept of pairwise-balanced designs. Contrary to several recent publications, bit-error rates and channel capacities are reported assuming non-ideal photodetection. The atmospheric turbulence channel is modeled using the Gamma-Gamma distribution function due to Al-Habash et al. An excellent bit-error rate performance improvement over the uncoded case is found.

  9. A novel construction method of QC-LDPC codes based on CRT for optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-05-01

    A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the Chinese remainder theorem (CRT). The method can not only increase the code length without reducing the girth, but also greatly enhance the code rate, so it is easy to construct a high-rate code. The simulation results show that at the bit error rate (BER) of 10-7, the net coding gain (NCG) of the regular QC-LDPC(4 851, 4 546) code is 2.06 dB, 1.36 dB, 0.53 dB and 0.31 dB higher, respectively, than those of the classic RS(255, 239) code in ITU-T G.975, the LDPC(32 640, 30 592) code in ITU-T G.975.1, the QC-LDPC(3 664, 3 436) code constructed by the improved combining construction method based on CRT and the irregular QC-LDPC(3 843, 3 603) code constructed by the construction method based on the Galois field (GF(q)) multiplicative group. Furthermore, all these five codes have the same code rate of 0.937. Therefore, the regular QC-LDPC(4 851, 4 546) code constructed by the proposed construction method has excellent error-correction performance, and is more suitable for optical transmission systems.

  10. Iterative decoding of SOVA and LDPC product code for bit-patterned media recording

    NASA Astrophysics Data System (ADS)

    Jeong, Seongkwon; Lee, Jaejin

    2018-05-01

    The demand for high-density storage systems has increased due to the exponential growth of data. Bit-patterned media recording (BPMR) is one of the promising technologies to achieve densities of 1 Tbit/in2 and higher. To increase the areal density in BPMR, the spacing between islands needs to be reduced, yet this aggravates inter-symbol interference and inter-track interference and degrades the bit error rate performance. In this paper, we propose a decision feedback scheme using a low-density parity check (LDPC) product code for BPMR. This scheme can improve the decoding performance using an iterative approach that exchanges extrinsic information and log-likelihood ratio values between an iterative soft-output Viterbi algorithm and the LDPC product code. Simulation results show that the proposed LDPC product code can offer 1.8 dB and 2.3 dB gains over a single LDPC code at densities of 2.5 and 3 Tb/in2, respectively, at a bit error rate of 10-6.

  11. Performance of Low-Density Parity-Check Coded Modulation

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2010-01-01

    This paper reports the simulated performance of each of the nine accumulate-repeat-4-jagged-accumulate (AR4JA) low-density parity-check (LDPC) codes [3] when used in conjunction with binary phase-shift-keying (BPSK), quadrature PSK (QPSK), 8-PSK, 16-ary amplitude PSK (16-APSK), and 32-APSK. We also report the performance under various mappings of bits to modulation symbols, 16-APSK and 32-APSK ring scalings, log-likelihood ratio (LLR) approximations, and decoder variations. One of the simple and well-performing LLR approximations can be expressed in a general equation that applies to all of the modulation types.
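
    The report's general LLR equation is not reproduced here, but the widely used max-log approximation has the same one-equation-for-every-modulation flavor; the following is a sketch under that assumption:

        import numpy as np

        def max_log_llrs(y, points, bit_labels, noise_var):
            # y          : received complex sample
            # points     : array of complex constellation points
            # bit_labels : (len(points), m) array of the bits mapped to each point
            # Exact LLRs sum exponentials over all symbols; max-log keeps only
            # the nearest symbol under each bit hypothesis.
            d2 = np.abs(y - points) ** 2
            m = bit_labels.shape[1]
            llrs = np.empty(m)
            for i in range(m):
                d0 = d2[bit_labels[:, i] == 0].min()  # best symbol with bit i = 0
                d1 = d2[bit_labels[:, i] == 1].min()  # best symbol with bit i = 1
                llrs[i] = (d1 - d0) / noise_var       # log P(b=0|y)/P(b=1|y)
            return llrs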

  12. Statistical physics inspired energy-efficient coded-modulation for optical communications.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2012-04-15

    Because Shannon's entropy can be obtained by Stirling's approximation of the thermodynamic entropy, statistical physics energy minimization methods are directly applicable to signal constellation design. We demonstrate that statistical physics inspired energy-efficient (EE) signal constellation designs, in combination with large-girth low-density parity-check (LDPC) codes, significantly outperform conventional LDPC-coded polarization-division multiplexed quadrature amplitude modulation schemes. We also describe an EE signal constellation design algorithm. Finally, we propose the discrete-time implementation of the D-dimensional transceiver and the corresponding EE polarization-division multiplexed system.

  13. Rate-compatible protograph LDPC code families with linear minimum distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J. (Inventor); Jones, Christopher R. (Inventor)

    2012-01-01

    Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds.

  14. Encoders for block-circulant LDPC codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)

    2009-01-01

    Methods and apparatus to encode message input symbols in accordance with an accumulate-repeat-accumulate code with repetition three or four are disclosed. Block circulant matrices are used. A first method and apparatus make use of the block-circulant structure of the parity check matrix. A second method and apparatus use block-circulant generator matrices.

  15. Optimal Codes for the Burst Erasure Channel

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2010-01-01

    Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure protection. As can be seen, the simple interleaved RS codes have substantially lower inefficiency over a wide range of transmission lengths.
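
    The guaranteed burst length of the interleaved-MDS scheme follows from a one-line counting argument, sketched below with illustrative parameters:

        # A burst of L consecutive erased symbols, read out across interleaving
        # depth d, puts at most ceil(L / d) erasures into any one codeword; an
        # (n, k) MDS code recovers up to n - k erasures per codeword, so every
        # burst with L <= d * (n - k) is guaranteed correctible.
        n, k, d = 255, 239, 32            # e.g. RS(255, 239) at interleaving depth 32
        guaranteed_burst = d * (n - k)    # 512 symbols
        # This equals the scheme's entire parity budget, and per the article no
        # scheme of equal length and rate can guarantee more than one symbol beyond it.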

  16. Coded Modulation in C and MATLAB

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon; Andrews, Kenneth S.

    2011-01-01

    This software, written separately in C and MATLAB as stand-alone packages with equivalent functionality, implements encoders and decoders for a set of nine error-correcting codes and modulators and demodulators for five modulation types. The software can be used as a single program to simulate the performance of such coded modulation. The error-correcting codes implemented are the nine accumulate repeat-4 jagged accumulate (AR4JA) low-density parity-check (LDPC) codes, which have been approved for international standardization by the Consultative Committee for Space Data Systems, and which are scheduled to fly on a series of NASA missions in the Constellation Program. The software implements the encoder and decoder functions, and contains compressed versions of generator and parity-check matrices used in these operations.

  17. High-efficiency reconciliation for continuous variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Bai, Zengliang; Yang, Shenshen; Li, Yongmin

    2017-04-01

    Quantum key distribution (QKD) is the most mature application of quantum information technology. Information reconciliation is a crucial step in QKD and significantly affects the final secret key rates shared between two legitimate parties. We analyze and compare various construction methods of low-density parity-check (LDPC) codes and design high-performance irregular LDPC codes with a block length of 10^6. Starting from these good codes and exploiting the slice reconciliation technique based on multilevel coding and multistage decoding, we realize high-efficiency Gaussian key reconciliation with efficiency higher than 95% for signal-to-noise ratios above 1. Our demonstrated method can be readily applied in continuous variable QKD.
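
    Reconciliation efficiency is conventionally read as beta = R / C(SNR) for a rate-R code on the AWGN channel; a minimal sketch under that assumption (the multilevel bookkeeping of slice reconciliation is ignored here):

        import math

        def reconciliation_efficiency(code_rate, snr):
            # beta = R / C(SNR), with C = 0.5 * log2(1 + SNR) bits per real use.
            return code_rate / (0.5 * math.log2(1.0 + snr))

        # Example: at SNR = 1, C = 0.5 bit, so a rate-0.475 code gives beta = 0.95.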

  18. Rate-Compatible LDPC Codes with Linear Minimum Distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel

    2009-01-01

    A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either fixed input block size or fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first-mentioned submethod is useful for applications in which there are requirements for rate-compatible codes that have fixed input block sizes. These are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-block-size submethod is useful for applications in which framing constraints are imposed on the physical layers of affected communication systems. An example of such a system is one that conforms to one of many new wireless-communication standards that involve the use of orthogonal frequency-division modulation.

  19. Fast QC-LDPC code for free space optical communication

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Zhang, Qi; Udeh, Chinonso Paschal; Wu, Rangzhong

    2017-02-01

    Free Space Optical (FSO) communication systems use the atmosphere as a propagation medium, so atmospheric turbulence leads to multiplicative noise related to the signal intensity. In order to suppress the signal fading induced by multiplicative noise, we propose a fast Quasi-Cyclic (QC) Low-Density Parity-Check (LDPC) code for FSO communication systems. As linear block codes based on sparse matrices, QC-LDPC codes perform extremely close to the Shannon limit. Current studies of LDPC codes in FSO communications focus mainly on the Gaussian channel and the Rayleigh channel; in this study, the LDPC code is designed over the atmospheric turbulence channel, which is neither Gaussian nor Rayleigh and is closer to the practical situation. Based on the characteristics of the atmospheric channel, modeled by the logarithmic-normal distribution and the K-distribution, we design a special QC-LDPC code and deduce its log-likelihood ratio (LLR). An irregular QC-LDPC code for fast coding, with variable rates, is proposed in this paper. The proposed code achieves the excellent performance of LDPC codes, with high efficiency at low rates, stability at high rates, and a small number of iterations. Belief propagation (BP) decoding results show that the bit error rate (BER) clearly decreases as the Signal-to-Noise Ratio (SNR) increases. Therefore, LDPC channel coding can effectively improve the performance of FSO systems. At the same time, the BER after decoding keeps falling as the SNR increases, with no error floor phenomenon.

  20. Capacity Maximizing Constellations

    NASA Technical Reports Server (NTRS)

    Barsoum, Maged; Jones, Christopher

    2010-01-01

    Some non-traditional signal constellations have been proposed for transmission of data over the Additive White Gaussian Noise (AWGN) channel using such channel-capacity-approaching codes as low-density parity-check (LDPC) or turbo codes. Computational simulations have shown performance gains of more than 1 dB over traditional constellations. These gains could be translated to bandwidth-efficient communications, variously, over longer distances, using less power, or using smaller antennas. The proposed constellations have been used in a bit-interleaved coded modulation system employing state-of-the-art LDPC codes. In computational simulations, these constellations were shown to afford performance gains over traditional constellations as predicted by the gap between the parallel decoding capacity of the constellations and the Gaussian capacity.

  1. A Low-Complexity and High-Performance 2D Look-Up Table for LDPC Hardware Implementation

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh; Yang, Po-Hui; Lain, Jenn-Kaie; Chung, Tzu-Wen

    In this paper, we propose a low-complexity, high-efficiency two-dimensional look-up table (2D LUT) for carrying out the sum-product algorithm in the decoding of low-density parity-check (LDPC) codes. Instead of employing adders for the core operation when updating check node messages, in the proposed scheme, the main term and correction factor of the core operation are successfully merged into a compact 2D LUT. Simulation results indicate that the proposed 2D LUT not only attains close-to-optimal bit error rate performance but also enjoys a low complexity advantage that is suitable for hardware implementation.
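
    The core operation in question is the pairwise check-node combination (boxplus), whose main term is the min-sum result and whose correction factor is a pair of logarithms. The sketch below shows how the two collapse into one quantized 2D table; the step size and range are assumptions, not the paper's design:

        import numpy as np

        def boxplus_exact(a, b):
            # Pairwise check-node operation: main (min-sum) term plus correction.
            main = np.sign(a) * np.sign(b) * min(abs(a), abs(b))
            corr = np.log1p(np.exp(-abs(a + b))) - np.log1p(np.exp(-abs(a - b)))
            return main + corr

        # Quantize magnitudes once and tabulate main term + correction together.
        step, lim = 0.25, 8.0
        grid = np.arange(0.0, lim + step, step)
        LUT = np.array([[boxplus_exact(x, y) for y in grid] for x in grid])

        def boxplus_lut(a, b):
            # Look up on magnitudes; the sign is reattached outside the table.
            i = min(int(round(abs(a) / step)), len(grid) - 1)
            j = min(int(round(abs(b) / step)), len(grid) - 1)
            return np.sign(a) * np.sign(b) * LUT[i, j]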

  2. A novel construction method of QC-LDPC codes based on the subgroup of the finite field multiplicative group for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Zhou, Guang-xiang; Gao, Wen-chun; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-01-01

    To meet the requirements of the continuing development of optical transmission systems, a novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes based on a subgroup of the finite field multiplicative group is proposed. This construction method effectively avoids girth-4 phenomena and has advantages such as simpler construction, easier implementation, lower encoding/decoding complexity, better girth properties and more flexible adjustment of the code length and code rate. The simulation results show that the error correction performance of the QC-LDPC(3 780,3 540) code with the code rate of 93.7% constructed by the proposed method is excellent: at the bit error rate (BER) of 10-7, its net coding gain is 0.3 dB, 0.55 dB, 1.4 dB and 1.98 dB higher, respectively, than those of the QC-LDPC(5 334,4 962) code constructed by the method based on the inverse element characteristics of the finite field multiplicative group, the SCG-LDPC(3 969,3 720) code constructed by the systematically constructed Gallager (SCG) random construction method, the LDPC(32 640,30 592) code in ITU-T G.975.1 and the classic RS(255,239) code in ITU-T G.975 which is widely used in optical transmission systems. Therefore, the constructed QC-LDPC(3 780,3 540) code is more suitable for optical transmission systems.
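
    Whatever rule selects the shifts, a QC-LDPC parity-check matrix is assembled the same way: an exponent matrix is expanded into circulant permutation blocks. A generic sketch follows; the exponent values are placeholders, whereas the paper derives them from a subgroup of the multiplicative group so that girth-4 cycles are avoided.

        import numpy as np

        def cpm(shift, b):
            # b x b circulant permutation matrix: identity cyclically shifted.
            return np.roll(np.eye(b, dtype=int), shift, axis=1)

        def expand(exponents, b):
            # Expand an exponent matrix into the binary parity-check matrix;
            # entry -1 stands for the all-zero block.
            return np.block([[np.zeros((b, b), dtype=int) if e < 0 else cpm(e % b, b)
                              for e in row] for row in exponents])

        E = [[0, 1, 3],
             [0, 2, 6]]          # placeholder shifts, not the paper's subgroup values
        H = expand(E, 7)         # a 14 x 21 QC-LDPC parity-check matrix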

  3. A rate-compatible family of protograph-based LDPC codes built by expurgation and lengthening

    NASA Technical Reports Server (NTRS)

    Dolinar, Sam

    2005-01-01

    We construct a protograph-based rate-compatible family of low-density parity-check codes that cover a very wide range of rates from 1/2 to 16/17, perform within about 0.5 dB of their capacity limits for all rates, and can be decoded conveniently and efficiently with a common hardware implementation.

  4. Transmission over UWB channels with OFDM system using LDPC coding

    NASA Astrophysics Data System (ADS)

    Dziwoki, Grzegorz; Kucharczyk, Marcin; Sulek, Wojciech

    2009-06-01

    Hostile wireless environments require the use of sophisticated signal processing methods. The paper concerns Ultra Wideband (UWB) transmission over Personal Area Networks (PAN), including the MB-OFDM specification of the physical layer. In the presented work, the transmission system with OFDM modulation is combined with an LDPC encoder/decoder. Additionally, the frame and bit error rates (FER and BER) of the system are decreased by using results from the LDPC decoder in a kind of turbo equalization algorithm for better channel estimation. A computational block using an evolutionary strategy, from the genetic algorithms family, is also used in the presented system; it is placed after the SPA (Sum-Product Algorithm) decoder and conditionally turned on in the decoding process. The result is increased effectiveness of the whole system, especially a lower FER. The system was tested with two types of LDPC codes, depending on the type of parity-check matrices: randomly generated, and constructed deterministically and optimized for a practical decoder architecture implemented in an FPGA device.

  5. Accumulate Repeat Accumulate Coded Modulation

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative coded modulation scheme called 'Accumulate Repeat Accumulate Coded Modulation' (ARA coded modulation). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes that are combined with high-level modulation. Thus, at the decoder, belief propagation can be used for iterative decoding of ARA coded modulation on a graph, provided a demapper transforms the received in-phase and quadrature samples into bit reliabilities.

  6. An LDPC Decoder Architecture for Wireless Sensor Network Applications

    PubMed Central

    Giancarlo Biroli, Andrea Dario; Martina, Maurizio; Masera, Guido

    2012-01-01

    The pervasive use of wireless sensors in a growing spectrum of human activities reinforces the need for devices with low energy dissipation. In this work, coded communication between a couple of wireless sensor devices is considered as a method to reduce the dissipated energy per transmitted bit with respect to uncoded communication. Different Low Density Parity Check (LDPC) codes are considered to this purpose and post layout results are shown for a low-area low-energy decoder, which offers percentage energy savings with respect to the uncoded solution in the range of 40%–80%, depending on considered environment, distance and bit error rate. PMID:22438724

  7. An LDPC decoder architecture for wireless sensor network applications.

    PubMed

    Biroli, Andrea Dario Giancarlo; Martina, Maurizio; Masera, Guido

    2012-01-01

    The pervasive use of wireless sensors in a growing spectrum of human activities reinforces the need for devices with low energy dissipation. In this work, coded communication between a couple of wireless sensor devices is considered as a method to reduce the dissipated energy per transmitted bit with respect to uncoded communication. Different Low Density Parity Check (LDPC) codes are considered to this purpose and post layout results are shown for a low-area low-energy decoder, which offers percentage energy savings with respect to the uncoded solution in the range of 40%-80%, depending on considered environment, distance and bit error rate.

  8. LDPC product coding scheme with extrinsic information for bit patterned media recording

    NASA Astrophysics Data System (ADS)

    Jeong, Seongkwon; Lee, Jaejin

    2017-05-01

    Since the density limit of the current perpendicular magnetic storage system will soon be reached, bit patterned media recording (BPMR) is a promising candidate for the next generation storage system to achieve an areal density beyond 1 Tb/in2. Each recording bit is stored in a fabricated magnetic island and the space between the magnetic islands is nonmagnetic in BPMR. To approach recording densities of 1 Tb/in2, the spacing of the magnetic islands must be less than 25 nm. Consequently, severe inter-symbol interference (ISI) and inter-track interference (ITI) occur. ITI and ISI degrade the performance of BPMR. In this paper, we propose a low-density parity check (LDPC) product coding scheme that exploits extrinsic information for BPMR. This scheme shows an improved bit error rate performance compared to that in which one LDPC code is used.
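
    The product-code layout itself is easy to picture. In the toy sketch below, single parity checks stand in for the paper's LDPC component codes, and the iterative exchange of extrinsic LLRs between row and column decoders is omitted:

        import numpy as np

        # Data block protected twice: every row and every column gets a parity
        # symbol (single parity checks stand in for the LDPC component codes).
        data = np.random.randint(0, 2, (4, 4))
        row_parity = data.sum(axis=1) % 2
        col_parity = data.sum(axis=0) % 2
        corner = data.sum() % 2
        codeword = np.block([[data,                row_parity[:, None]],
                             [col_parity[None, :], np.array([[corner]])]])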

  9. Fixed-point Design of the Lattice-reduction-aided Iterative Detection and Decoding Receiver for Coded MIMO Systems

    DTIC Science & Technology

    2011-01-01

    reliability, e.g., Turbo Codes [2] and Low Density Parity Check (LDPC) codes [3]. The challenge to apply both MIMO and ECC into wireless systems is on... illustrates the performance of coded LR aided detectors.

  10. Rate-compatible protograph LDPC code families with linear minimum distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J (Inventor); Jones, Christopher R. (Inventor)

    2012-01-01

    Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds, and families of such codes of different rates can be decoded efficiently using a common decoding architecture.

  11. Measurement Techniques for Clock Jitter

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin; Schlesinger, Adam

    2012-01-01

    NASA is in the process of modernizing its communications infrastructure to accompany the development of a Crew Exploration Vehicle (CEV) to replace the shuttle. With this effort comes the opportunity to infuse more advanced coded modulation techniques, including low-density parity-check (LDPC) codes that offer greater coding gains than the current capability. However, in order to take full advantage of these codes, the ground segment receiver synchronization loops must be able to operate at a lower signal-to-noise ratio (SNR) than supported by equipment currently in use.

  12. FPGA implementation of advanced FEC schemes for intelligent aggregation networks

    NASA Astrophysics Data System (ADS)

    Zou, Ding; Djordjevic, Ivan B.

    2016-02-01

    In state-of-the-art fiber-optic communication systems, fixed forward error correction (FEC) and constellation size are employed. While it is important to closely approach the Shannon limit by using turbo product codes (TPC) and low-density parity-check (LDPC) codes with soft-decision decoding (SDD), rate-adaptive techniques, which enable increased information rates over short links and reliable transmission over long links, are likely to become more important with ever-increasing network traffic demands. In this invited paper, we describe a rate-adaptive non-binary LDPC coding technique and demonstrate, by FPGA-based emulation, its flexibility and good performance, exhibiting no error floor at BERs down to 10-15 over the entire code rate range, making it a viable solution for next-generation high-speed intelligent aggregation networks.

  13. An Efficient Downlink Scheduling Strategy Using Normal Graphs for Multiuser MIMO Wireless Systems

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh; Wu, Cheng-Hsuan; Lee, Yao-Nan; Wen, Chao-Kai

    Inspired by the success of the low-density parity-check (LDPC) codes in the field of error-control coding, in this paper we propose transforming the downlink multiuser multiple-input multiple-output scheduling problem into an LDPC-like problem using the normal graph. Based on the normal graph framework, soft information, which indicates the probability that each user will be scheduled to transmit packets at the access point through a specified angle-frequency sub-channel, is exchanged among the local processors to iteratively optimize the multiuser transmission schedule. Computer simulations show that the proposed algorithm can efficiently schedule simultaneous multiuser transmission which then increases the overall channel utilization and reduces the average packet delay.

  14. DNA Barcoding through Quaternary LDPC Codes

    PubMed Central

    Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar

    2015-01-01

    For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10−2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10−9 at the expense of a rate of read losses just in the order of 10−6. PMID:26492348

  15. DNA Barcoding through Quaternary LDPC Codes.

    PubMed

    Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar

    2015-01-01

    For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10(-2) per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10(-9) at the expense of a rate of read losses just in the order of 10(-6).

  16. Accumulate repeat accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate codes' (ARA). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, thus belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder for this class can be viewed as a precoded Repeat Accumulate (RA) code or as a precoded Irregular Repeat Accumulate (IRA) code, where simply an accumulator is chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure when represented as LDPC codes. Based on density evolution for LDPC codes, through some examples of ARA codes we show that for a maximum variable node degree of 5, a minimum bit SNR as low as 0.08 dB from channel capacity for rate 1/2 can be achieved as the block size goes to infinity. Thus, for a fixed low maximum variable node degree, the threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high-rate codes close to code rate 1 can be obtained with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. The ARA codes also have a projected graph or protograph representation that allows for high-speed decoder implementation.

  17. LDPC-PPM Coding Scheme for Optical Communication

    NASA Technical Reports Server (NTRS)

    Barsoum, Maged; Moision, Bruce; Divsalar, Dariush; Fitz, Michael

    2009-01-01

    In a proposed coding-and-modulation/demodulation-and-decoding scheme for a free-space optical communication system, an error-correcting code of the low-density parity-check (LDPC) type would be concatenated with a modulation code that consists of a mapping of bits to pulse-position-modulation (PPM) symbols. Hence, the scheme is denoted LDPC-PPM. This scheme could be considered a competitor of a related prior scheme in which an outer convolutional error-correcting code is concatenated with an interleaving operation, a bit-accumulation operation, and a PPM inner code. Both the prior and present schemes can be characterized as serially concatenated pulse-position modulation (SCPPM) coding schemes. Figure 1 represents a free-space optical communication system based on either the present LDPC-PPM scheme or the prior SCPPM scheme. At the transmitting terminal, the original data (u) are processed by an encoder into blocks of bits (a), and the encoded data are mapped to PPM of an optical signal (c). For the purpose of design and analysis, the optical channel in which the PPM signal propagates is modeled as a Poisson point process. At the receiving terminal, the arriving optical signal (y) is demodulated to obtain an estimate (â) of the coded data, which is then processed by a decoder to obtain an estimate (û) of the original data.
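
    The bits-to-PPM mapping stage can be sketched generically; the labeling below is a plain binary index, not necessarily the mapping used in the proposed scheme:

        import numpy as np

        def bits_to_ppm(bits, M):
            # Map groups of log2(M) bits to M-ary PPM: one pulsed slot per symbol.
            m = int(np.log2(M))
            bits = np.asarray(bits).reshape(-1, m)
            idx = bits @ (1 << np.arange(m - 1, -1, -1))  # binary label -> slot index
            slots = np.zeros((len(idx), M), dtype=int)
            slots[np.arange(len(idx)), idx] = 1
            return slots.ravel()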

  18. A new LDPC decoding scheme for PDM-8QAM BICM coherent optical communication system

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Zhang, Wen-bo; Xi, Li-xia; Tang, Xian-feng; Zhang, Xiao-guang

    2015-11-01

    A new log-likelihood ratio (LLR) message estimation method is proposed for the polarization-division multiplexing eight-ary quadrature amplitude modulation (PDM-8QAM) bit-interleaved coded modulation (BICM) optical communication system. The formulation of the posterior probability is theoretically analyzed, and a way to reduce the pre-decoding bit error rate (BER) of the low-density parity-check (LDPC) decoder for PDM-8QAM constellations is presented. Simulation results show that it outperforms the traditional scheme: the post-decoding BER is reduced to 50% of that of the traditional decoding algorithm.

  19. Performance optimization of PM-16QAM transmission system enabled by real-time self-adaptive coding.

    PubMed

    Qu, Zhen; Li, Yao; Mo, Weiyang; Yang, Mingwei; Zhu, Shengxiang; Kilper, Daniel C; Djordjevic, Ivan B

    2017-10-15

    We experimentally demonstrate self-adaptive coded 5×100 Gb/s WDM polarization-multiplexed 16-ary quadrature amplitude modulation transmission over a 100 km fiber link, enabled by a real-time control plane. The real-time optical signal-to-noise ratio (OSNR) is measured using an optical performance monitoring device. The OSNR measurement is processed and fed back using control plane logic and messaging to the transmitter side for code adaptation, where the binary data are adaptively encoded with three types of large-girth low-density parity-check (LDPC) codes with code rates of 0.8, 0.75, and 0.7. The total code-adaptation latency is measured to be 2273 ms. Compared with transmission without adaptation, average net capacity improvements of 102%, 36%, and 7.5% are obtained, respectively, by adaptive LDPC coding.

  20. Integrated Performance of Next Generation High Data Rate Receiver and AR4JA LDPC Codec for Space Communications

    NASA Technical Reports Server (NTRS)

    Cheng, Michael K.; Lyubarev, Mark; Nakashima, Michael A.; Andrews, Kenneth S.; Lee, Dennis

    2008-01-01

    Low-density parity-check (LDPC) codes are the state of the art in forward error correction (FEC) technology, exhibiting capacity-approaching performance. The Jet Propulsion Laboratory (JPL) has designed a family of LDPC codes that are similar in structure and therefore lead to a single decoder implementation. The Accumulate-Repeat-by-4-Jagged-Accumulate (AR4JA) code design offers a family of codes with rates 1/2, 2/3, 4/5 and lengths 1024, 4096, 16384 information bits. Performance is less than one dB from capacity for all combinations. Integrating a stand-alone LDPC decoder with a commercial-off-the-shelf (COTS) receiver faces additional challenges compared with building a single receiver-decoder unit from scratch. In this work, we outline the issues and show that these additional challenges can be overcome with simple solutions. To demonstrate that an LDPC decoder can be made to work seamlessly with a COTS receiver, we interface an AR4JA LDPC decoder developed on a field-programmable gate array (FPGA) with a modern high data rate receiver and measure the combined receiver-decoder performance. Through optimizations that include an improved frame synchronizer and different soft-symbol scaling algorithms, we show that a combined implementation loss of less than one dB is possible and therefore most of the coding gain evident in theory can also be obtained in practice. Our techniques can benefit any modem that utilizes an advanced FEC code.

  1. Design and Implementation of Secure and Reliable Communication using Optical Wireless Communication

    NASA Astrophysics Data System (ADS)

    Saadi, Muhammad; Bajpai, Ambar; Zhao, Yan; Sangwongngam, Paramin; Wuttisittikulkij, Lunchakorn

    2014-11-01

    Wireless networking increases flexibility in the home and office environment by connecting to the Internet without wires, but at the cost of risks associated with data theft or the threat of loading malicious code with the intention of harming the network. In this paper, we propose a novel method of establishing a secure and reliable communication link using optical wireless communication (OWC). For security, spatial-diversity-based transmission using two optical transmitters is used, and reliability in the link is achieved by a newly proposed method for the construction of a structured parity-check matrix for binary Low Density Parity Check (LDPC) codes. Experimental results show that a successful secure and reliable link between the transmitter and the receiver can be achieved by using the proposed novel technique.

  2. Finite-connectivity spin-glass phase diagrams and low-density parity check codes.

    PubMed

    Migliorini, Gabriele; Saad, David

    2006-02-01

    We obtain phase diagrams of regular and irregular finite-connectivity spin glasses. Contact is first established between properties of the phase diagram and the performance of low-density parity check (LDPC) codes within the replica symmetric (RS) ansatz. We then study the location of the dynamical and critical transition points of these systems within the one step replica symmetry breaking theory (RSB), extending similar calculations that have been performed in the past for the Bethe spin-glass problem. We observe that the location of the dynamical transition line does change within the RSB theory, in comparison with the results obtained in the RS case. For LDPC decoding of messages transmitted over the binary erasure channel we find, at zero temperature and rate , an RS critical transition point at while the critical RSB transition point is located at , to be compared with the corresponding Shannon bound . For the binary symmetric channel we show that the low temperature reentrant behavior of the dynamical transition line, observed within the RS ansatz, changes its location when the RSB ansatz is employed; the dynamical transition point occurs at higher values of the channel noise. Possible practical implications to improve the performance of the state-of-the-art error correcting codes are discussed.

  3. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    PubMed

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Litsyn algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes, derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially-coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.

  4. Adaptive transmission based on multi-relay selection and rate-compatible LDPC codes

    NASA Astrophysics Data System (ADS)

    Su, Hualing; He, Yucheng; Zhou, Lin

    2017-08-01

    In order to adapt to dynamically changing channel conditions and improve the transmission reliability of the system, a cooperative system combining rate-compatible low density parity check (RC-LDPC) codes with a multi-relay selection protocol is proposed. Traditional relay selection protocols consider only the channel state information (CSI) of the source-relay and relay-destination links. The multi-relay selection protocol proposed in this paper additionally takes the CSI between relays into account in order to obtain more opportunities for collaboration. Additionally, the ideas of hybrid automatic repeat request (HARQ) and rate compatibility are introduced. Simulation results show that the transmission reliability of the system can be significantly improved by the proposed protocol.

  5. Maximum likelihood decoding analysis of Accumulate-Repeat-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    Repeat-Accumulate (RA) codes are the simplest turbo-like codes that achieve good performance. However, they cannot compete with Turbo codes or low-density parity-check (LDPC) codes as far as performance is concerned. The Accumulate-Repeat-Accumulate (ARA) codes, as a subclass of LDPC codes, are obtained by adding a precoder in front of RA codes with puncturing, where an accumulator is chosen as the precoder. These codes not only are very simple, but also achieve excellent performance with iterative decoding. In this paper, the performance of these codes with maximum likelihood (ML) decoding is analyzed and compared to random codes by very tight bounds. The weight distribution of some simple ARA codes is obtained, and through the existing tightest bounds we show that the ML SNR threshold of ARA codes approaches the performance of random codes very closely. We also show that the use of the precoder improves the SNR threshold, but the interleaving gain remains unchanged with respect to the RA code with puncturing.

  6. Percolation bounds for decoding thresholds with correlated erasures in quantum LDPC codes

    NASA Astrophysics Data System (ADS)

    Hamilton, Kathleen; Pryadko, Leonid

    Correlations between errors can dramatically affect decoding thresholds, in some cases eliminating the threshold altogether. We analyze the existence of a threshold for quantum low-density parity-check (LDPC) codes in the case of correlated erasures. When erasures are positively correlated, the corresponding multivariate Bernoulli distribution can be modeled in terms of cluster errors, where qubits in clusters of various sizes can be marked all at once. In a code family with distance scaling as a power law of the code length, erasures can always be corrected below percolation on a qubit adjacency graph associated with the code. We bound this correlated percolation transition by weighted (uncorrelated) percolation on a specially constructed cluster connectivity graph, and apply our recent results to construct several bounds for the latter. This research was supported in part by the NSF Grant PHY-1416578 and by the ARO Grant W911NF-14-1-0272.

  7. On the optimum signal constellation design for high-speed optical transport networks.

    PubMed

    Liu, Tao; Djordjevic, Ivan B

    2012-08-27

    In this paper, we first describe an optimum signal constellation design algorithm, optimum in the MMSE sense and called MMSE-OSCD, for a channel-capacity-achieving source distribution. Secondly, we introduce a feedback channel capacity inspired optimum signal constellation design (FCC-OSCD) to further improve the performance of MMSE-OSCD, inspired by the fact that feedback channel capacity is higher than that of systems without feedback. The constellations obtained by FCC-OSCD are, however, OSNR dependent. The optimization is performed jointly with regular quasi-cyclic low-density parity-check (LDPC) code design. The coded-modulation scheme so obtained, in combination with polarization-multiplexing, is suitable as an enabling technology for both 400 Gb/s and multi-Tb/s optical transport. Using a large-girth LDPC code, we demonstrate by Monte Carlo simulations that a 32-ary signal constellation obtained by FCC-OSCD outperforms the previously proposed optimized 32-ary CIPQ signal constellation by 0.8 dB at a BER of 10^-7. On the other hand, the LDPC-coded 16-ary FCC-OSCD outperforms 16-QAM by 1.15 dB at the same BER.

  8. 45 Gb/s low complexity optical front-end for soft-decision LDPC decoders.

    PubMed

    Sakib, Meer Nazmus; Moayedi, Monireh; Gross, Warren J; Liboiron-Ladouceur, Odile

    2012-07-30

    In this paper, a low-complexity and energy-efficient 45 Gb/s soft-decision optical front-end to be used with soft-decision low-density parity-check (LDPC) decoders is demonstrated. The results show that the optical front-end exhibits net coding gains of 7.06 and 9.62 dB at post-forward-error-correction bit error rates of 10^-7 and 10^-12, respectively, for the long-block-length LDPC(32768,26803) code. The gain over a hard-decision front-end is 1.9 dB for this code. It is shown that the soft-decision circuit can also be used as a 2-bit flash-type analog-to-digital converter (ADC) in conjunction with equalization schemes. At a bit rate of 15 Gb/s, using RS(255,239), LDPC(672,336), (672,504), (672,588), and (1440,1344) codes with a 6-tap finite impulse response (FIR) equalizer results in optical power savings of 3, 5, 7, 9.5 and 10.5 dB, respectively. The 2-bit flash ADC consumes only 2.71 W at 32 GSamples/s. At 45 GSamples/s the power consumption is estimated to be 4.95 W.

  9. Performance of Low-Density Parity-Check Coded Modulation

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2011-02-01

    This article presents the simulated performance of a family of nine AR4JA low-density parity-check (LDPC) codes when used with each of five modulations. In each case, the decoder inputs are code-bit log-likelihood ratios computed from the received (noisy) modulation symbols using a general formula which applies to arbitrary modulations. Suboptimal soft-decision and hard-decision demodulators are also explored. Bit-interleaving and various mappings of bits to modulation symbols are considered. A number of subtle decoder algorithm details are shown to affect performance, especially in the error floor region. Among these are quantization dynamic range and step size, clipping degree-one variable nodes, "Jones clipping" of variable nodes, approximations of the min* function, and partial hard-limiting of messages from check nodes. Using these decoder optimizations, all coded modulations simulated here are free of error floors down to codeword error rates below 10^{-6}. The purpose of generating this performance data is to aid system engineers in determining an appropriate code and modulation to use under specific power and bandwidth constraints, and to provide information needed to design a variable/adaptive coded modulation (VCM/ACM) system using the AR4JA codes.
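
    The min* function mentioned above is compact enough to state exactly. As a point of reference (this sketch is illustrative and not taken from the article), one common definition is min*(a, b) = -ln(e^-a + e^-b), which equals min(a, b) minus a positive correction term; a min-sum style approximation simply drops the correction:

        import math

        def min_star(a: float, b: float) -> float:
            """Exact min*: -ln(exp(-a) + exp(-b)).

            Equals min(a, b) minus a positive correction term; used in
            exact check-node updates of some LDPC decoders.
            """
            return min(a, b) - math.log1p(math.exp(-abs(a - b)))

        def min_star_approx(a: float, b: float) -> float:
            """Approximation that drops the correction term entirely."""
            return min(a, b)

    Dropping the correction term trades a small SNR penalty for much simpler check-node arithmetic, which is why such approximations matter in the error-floor discussion above.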

  10. Design and implementation of a channel decoder with LDPC code

    NASA Astrophysics Data System (ADS)

    Hu, Diqing; Wang, Peng; Wang, Jianzong; Li, Tianquan

    2008-12-01

    After Toshiba quit the competition, only one blue-laser disc standard remained: Blu-ray Disc (BD), which satisfies the demands of high-density video programs. However, almost all of the relevant patents are held by large companies such as Sony and Philips, so substantial patent fees must be paid when products use BD. Next-Generation Versatile Disc (NVD), our own high-density optical disk storage system, proposes a new data format and error correction code with independent intellectual property rights and high cost performance; it offers higher coding efficiency than DVD and a 12 GB capacity that meets the demands of playing high-density video programs. In this paper, we develop a Low-Density Parity-Check (LDPC) channel encoding process and application scheme, using a Q-matrix-based LDPC encoder, for NVD's channel decoder. Combined with the portability of the embedded SOPC system, we have implemented all the decoding modules in FPGA, and tests were carried out in the NVD experimental environment. Although there are conflicts between LDPC and the Run-Length-Limited (RLL) modulation codes frequently used in optical storage systems, the system provides a suitable solution. At the same time, it overcomes the instability and inextensibility of the former NVD decoding system, which was implemented in hardware.

  11. Characterization of LDPC-coded orbital angular momentum modes transmission and multiplexing over a 50-km fiber.

    PubMed

    Wang, Andong; Zhu, Long; Chen, Shi; Du, Cheng; Mo, Qi; Wang, Jian

    2016-05-30

    Mode-division multiplexing over fibers has attracted increasing attention over the last few years as a potential solution to further increase fiber transmission capacity. In this paper, we demonstrate the viability of orbital angular momentum (OAM) modes transmission over a 50-km few-mode fiber (FMF). By analyzing mode properties of eigen modes in an FMF, we study the inner mode group differential modal delay (DMD) in FMF, which may influence the transmission capacity in long-distance OAM modes transmission and multiplexing. To mitigate the impact of large inner mode group DMD in long-distance fiber-based OAM modes transmission, we use low-density parity-check (LDPC) codes to increase the system reliability. By evaluating the performance of LDPC-coded single OAM mode transmission over 50-km fiber, significant coding gains of >4 dB, 8 dB and 14 dB are demonstrated for 1-Gbaud, 2-Gbaud and 5-Gbaud quadrature phase-shift keying (QPSK) signals, respectively. Furthermore, in order to verify and compare the influence of DMD in long-distance fiber transmission, single OAM mode transmission over 10-km FMF is also demonstrated in the experiment. Finally, we experimentally demonstrate OAM multiplexing and transmission over a 50-km FMF using LDPC-coded 1-Gbaud QPSK signals to compensate the influence of mode crosstalk and DMD in the 50 km FMF.

  12. Accumulate-Repeat-Accumulate-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Thorpe, Jeremy

    2007-01-01

    Accumulate-repeat-accumulate-accumulate (ARAA) codes have been proposed, inspired by the recently proposed accumulate-repeat-accumulate (ARA) codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. ARAA codes can be regarded as serial turbo-like codes or as a subclass of low-density parity-check (LDPC) codes, and, like ARA codes, they have projected graph or protograph representations; these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The objective in proposing ARAA codes as a subclass of ARA codes was to enhance the error-floor performance of ARA codes while maintaining simple encoding structures and a low maximum variable node degree.

  13. Improving soft FEC performance for higher-order modulations via optimized bit channel mappings.

    PubMed

    Häger, Christian; Amat, Alexandre Graell I; Brännström, Fredrik; Alvarado, Alex; Agrell, Erik

    2014-06-16

    Soft forward error correction with higher-order modulations is often implemented in practice via the pragmatic bit-interleaved coded modulation paradigm, where a single binary code is mapped to a nonbinary modulation. In this paper, we study the optimization of the mapping of the coded bits to the modulation bits for a polarization-multiplexed fiber-optical system without optical inline dispersion compensation. Our focus is on protograph-based low-density parity-check (LDPC) codes which allow for an efficient hardware implementation, suitable for high-speed optical communications. The optimization is applied to the AR4JA protograph family, and further extended to protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full field simulations via the split-step Fourier method are used to verify the analysis. The results show performance gains of up to 0.25 dB, which translate into a possible extension of the transmission reach by roughly up to 8%, without significantly increasing the system complexity.

  14. Accumulate-Repeat-Accumulate-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Thorpe, Jeremy

    2004-01-01

    Inspired by recently proposed Accumulate-Repeat-Accumulate (ARA) codes [15], in this paper we propose a channel coding scheme called Accumulate-Repeat-Accumulate-Accumulate (ARAA) codes. These codes can be seen as serial turbo-like codes or as a subclass of Low Density Parity Check (LDPC) codes, and they have a projected graph or protograph representation; this allows for a high-speed iterative decoder implementation using belief propagation. An ARAA code can be viewed as a precoded Repeat-and-Accumulate (RA) code with puncturing in concatenation with another accumulator, where simply an accumulator is chosen as the precoder; thus ARAA codes have a very fast encoder structure. Using density evolution on their associated protographs, we find examples of rate-1/2 ARAA codes with maximum variable node degree 4 for which a minimum bit-SNR as low as 0.21 dB from the channel capacity limit can be achieved as the block size goes to infinity. Such a low threshold cannot be achieved by RA or Irregular RA (IRA) or unstructured irregular LDPC codes with the same constraint on the maximum variable node degree. Furthermore, by puncturing the accumulators we can construct families of higher-rate ARAA codes with thresholds that stay uniformly close to their respective channel capacity thresholds. Iterative decoding simulation results show comparable performance with the best-known LDPC codes but with very low error floor even at moderate block sizes.

  15. Performance analysis of LDPC codes on OOK terahertz wireless channels

    NASA Astrophysics Data System (ADS)

    Chun, Liu; Chang, Wang; Jun-Cheng, Cao

    2016-02-01

    Atmospheric absorption, scattering, and scintillation are the major causes of degraded transmission quality in terahertz (THz) wireless communications. An error control coding scheme based on low density parity check (LDPC) codes with a soft decision decoding algorithm is proposed to improve the bit-error-rate (BER) performance of an on-off keying (OOK) modulated THz signal transmitted through the atmospheric channel. The THz wave propagation characteristics and the channel model in the atmosphere are set up. Numerical simulations validate the strong performance of LDPC codes against atmospheric fading and demonstrate their huge potential for future ultra-high-speed (beyond Gbps) THz communications. Project supported by the National Key Basic Research Program of China (Grant No. 2014CB339803), the National High Technology Research and Development Program of China (Grant No. 2011AA010205), the National Natural Science Foundation of China (Grant Nos. 61131006, 61321492, and 61204135), the Major National Development Project of Scientific Instrument and Equipment (Grant No. 2011YQ150021), the National Science and Technology Major Project (Grant No. 2011ZX02707), the International Collaboration and Innovation Program on High Mobility Materials Engineering of the Chinese Academy of Sciences, and the Shanghai Municipal Commission of Science and Technology (Grant No. 14530711300).

  16. Comparison of soft-input-soft-output detection methods for dual-polarized quadrature duobinary system

    NASA Astrophysics Data System (ADS)

    Chang, Chun; Huang, Benxiong; Xu, Zhengguang; Li, Bin; Zhao, Nan

    2018-02-01

    Three soft-input-soft-output (SISO) detection methods for dual-polarized quadrature duobinary (DP-QDB), including maximum-logarithmic-maximum-a-posteriori-probability-algorithm (Max-log-MAP)-based detection, soft-output-Viterbi-algorithm (SOVA)-based detection, and a proposed SISO detection, which can all be combined with SISO decoding, are presented. The three detection methods are investigated by simulation at 128 Gb/s in five-channel wavelength-division-multiplexing uncoded and low-density parity-check (LDPC) coded DP-QDB systems. Max-log-MAP-based detection needs the returning-to-initial-states (RTIS) process despite having the best performance. When an LDPC code with a code rate of 0.83 is used, the detecting-and-decoding scheme with the proposed SISO detection does not need RTIS and has better bit error rate (BER) performance than the scheme with SOVA-based detection. The former can reduce the optical signal-to-noise ratio (OSNR) requirement (at BER = 10^-5) by 2.56 dB relative to the latter. Compared with the other two SISO methods, the application of the SISO iterative detection in LDPC-coded DP-QDB systems achieves a good trade-off among transmission efficiency, OSNR requirement, and transmission distance.

  17. High performance reconciliation for continuous-variable quantum key distribution with LDPC code

    NASA Astrophysics Data System (ADS)

    Lin, Dakai; Huang, Duan; Huang, Peng; Peng, Jinye; Zeng, Guihua

    2015-03-01

    Reconciliation is a significant procedure in a continuous-variable quantum key distribution (CV-QKD) system. It is employed to extract a secure secret key from the string produced over the quantum channel between two users. However, the efficiency and speed of previous reconciliation algorithms are low. These problems limit the secure communication distance and the secure key rate of CV-QKD systems. In this paper, we propose a high-speed reconciliation algorithm employing a well-structured decoding scheme based on low density parity-check (LDPC) codes. The complexity of the proposed algorithm is reduced considerably. By using a graphics processing unit (GPU) device, our method can reach a reconciliation speed of 25 Mb/s for a CV-QKD system, which is currently the highest level and paves the way to high-speed CV-QKD.

  18. Design and performance investigation of LDPC-coded upstream transmission systems in IM/DD OFDM-PONs

    NASA Astrophysics Data System (ADS)

    Gong, Xiaoxue; Guo, Lei; Wu, Jingjing; Ning, Zhaolong

    2016-12-01

    In Intensity-Modulation Direct-Detection (IM/DD) Orthogonal Frequency Division Multiplexing Passive Optical Networks (OFDM-PONs), aside from the Subcarrier-to-Subcarrier Intermixing Interference (SSII) induced by square-law detection, the use of the same laser frequency for upstream data from different Optical Network Units (ONUs) results in ONU-to-ONU Beating Interference (OOBI) at the receiver. To mitigate these interferences, we design a Low-Density Parity-Check (LDPC)-coded and spectrum-efficient upstream transmission system. A theoretical channel model is also derived in order to analyze the detrimental factors influencing system performance. Simulation results demonstrate that the receiver sensitivity is improved by 3.4 dB and 2.5 dB under QPSK and 8QAM, respectively, after 100 km Standard Single-Mode Fiber (SSMF) transmission. Furthermore, the spectrum efficiency can be improved by about 50%.

  19. MIMO-OFDM System's Performance Using LDPC Codes for a Mobile Robot

    NASA Astrophysics Data System (ADS)

    Daoud, Omar; Alani, Omar

    This work deals with the performance of a Sniffer Mobile Robot (SNFRbot) based on spatially multiplexed wireless Orthogonal Frequency Division Multiplexing (OFDM) transmission technology. The use of Multi-Input Multi-Output (MIMO)-OFDM technology increases the wireless transmission rate without increasing transmission power or bandwidth. A generic multilayer architecture of the SNFRbot is proposed with low power and low cost. Experimental results demonstrate the efficiency of sniffing deadly gases, sensing high temperatures and sending live video of the monitored situation. Moreover, simulation results show the performance achieved by tackling the Peak-to-Average Power Ratio (PAPR) problem of the used technology with Low Density Parity Check (LDPC) codes, and the effect of combating the PAPR on the bit error rate (BER) and the signal-to-noise ratio (SNR) over a Doppler spread channel.

  20. Construction of Protograph LDPC Codes with Linear Minimum Distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.

  1. Low-complexity video encoding method for wireless image transmission in capsule endoscope.

    PubMed

    Takizawa, Kenichi; Hamaguchi, Kiyoshi

    2010-01-01

    This paper presents a low-complexity video encoding method applicable to wireless image transmission in capsule endoscopes. The encoding method is based on Wyner-Ziv theory, in which information available at the transmitter is exploited as side information at the receiver. Complex processes in video encoding, such as motion vector estimation, can therefore be moved to the receiver side, which has a larger-capacity battery. As a result, the encoding process reduces to decimating the channel-coded original data. We provide a performance evaluation for a low-density parity check (LDPC) coding method in the AWGN channel.

  2. Future capabilities for the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Berner, J. B.; Bryant, S. H.; Andrews, K. S.

    2004-01-01

    This paper will look at three new capabilities that are in different stages of development. First, turbo decoding, which provides improved telemetry performance for data rates up to about 1 Mbps, will be discussed. Next, pseudo-noise ranging will be presented. Pseudo-noise ranging has several advantages over the current sequential ranging, namely easier operations, improved performance, and the capability to be used in a regenerative implementation on a spacecraft. Finally, Low Density Parity Check decoding will be discussed. LDPC codes can provide performance that matches or slightly exceeds that of turbo codes, but are designed for use in the 10 Mbps range.

  3. Quick-low-density parity check and dynamic threshold voltage optimization in 1X nm triple-level cell NAND flash memory with comprehensive analysis of endurance, retention-time, and temperature variation

    NASA Astrophysics Data System (ADS)

    Doi, Masafumi; Tokutomi, Tsukasa; Hachiya, Shogo; Kobayashi, Atsuro; Tanakamaru, Shuhei; Ning, Sheyang; Ogura Iwasaki, Tomoko; Takeuchi, Ken

    2016-08-01

    NAND flash memory's reliability degrades with increasing endurance, retention-time and/or temperature. After a comprehensive evaluation of 1X nm triple-level cell (TLC) NAND flash, two highly reliable techniques are proposed. The first proposal, quick low-density parity check (Quick-LDPC), requires only one cell read in order to accurately estimate a bit-error rate (BER) that includes the effects of temperature, write and erase (W/E) cycles and retention-time. As a result, an 83% read latency reduction is achieved compared to conventional AEP-LDPC. Also, W/E cycling is extended by 100% compared with conventional Bose-Chaudhuri-Hocquenghem (BCH) error-correcting code (ECC). The second proposal, dynamic threshold voltage optimization (DVO), has two parts, adaptive V_Ref shift (AVS) and V_TH space control (VSC). AVS reduces read errors and latency by adaptively optimizing the reference voltage (V_Ref) based on temperature, W/E cycles and retention-time. AVS stores the optimal V_Ref values in a table in order to enable one cell read. VSC further improves AVS by optimizing the voltage margins between V_TH states. DVO reduces BER by 80%.

  4. Information rates of probabilistically shaped coded modulation for a multi-span fiber-optic communication system with 64QAM

    NASA Astrophysics Data System (ADS)

    Fehenberger, Tobias

    2018-02-01

    This paper studies probabilistic shaping in a multi-span wavelength-division multiplexing optical fiber system with 64-ary quadrature amplitude modulation (QAM) input. In split-step fiber simulations and via an enhanced Gaussian noise model, three figures of merit are investigated, which are signal-to-noise ratio (SNR), achievable information rate (AIR) for capacity-achieving forward error correction (FEC) with bit-metric decoding, and the information rate achieved with low-density parity-check (LDPC) FEC. For the considered system parameters and different shaped input distributions, shaping is found to decrease the SNR by 0.3 dB yet simultaneously increases the AIR by up to 0.4 bit per 4D-symbol. The information rates of LDPC-coded modulation with shaped 64QAM input are improved by up to 0.74 bit per 4D-symbol, which is larger than the shaping gain when considering AIRs. This increase is attributed to the reduced coding gap of the higher-rate code that is used for decoding the nonuniform QAM input.
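
    As background for the shaped 64QAM input discussed above, probabilistic shaping studies of this kind typically draw the input distribution from the Maxwell-Boltzmann family, P(x) proportional to exp(-nu * |x|^2). The following Python sketch is illustrative only (the rate parameter nu is a free choice, not a value from the paper) and computes a shaped distribution over the 8-ASK levels that form one quadrature component of square 64QAM:

        import numpy as np

        def maxwell_boltzmann(points: np.ndarray, nu: float) -> np.ndarray:
            """P(x) proportional to exp(-nu * x^2), normalized to sum to 1."""
            p = np.exp(-nu * points**2)
            return p / p.sum()

        # 8-ASK levels: one quadrature component of square 64QAM
        levels = np.array([-7, -5, -3, -1, 1, 3, 5, 7], dtype=float)
        probs = maxwell_boltzmann(levels, nu=0.05)   # nu is an illustrative choice
        entropy = -np.sum(probs * np.log2(probs))    # shaped source entropy, bits/level

    Sweeping nu trades source entropy (and hence net data rate) against average symbol energy, which is the usual way such shaped distributions are parameterized.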

  5. Co-operation of digital nonlinear equalizers and soft-decision LDPC FEC in nonlinear transmission.

    PubMed

    Tanimura, Takahito; Oda, Shoichiro; Hoshida, Takeshi; Aoki, Yasuhiko; Tao, Zhenning; Rasmussen, Jens C

    2013-12-30

    We experimentally and numerically investigated the characteristics of 128 Gb/s dual-polarization quadrature phase shift keying signals received with two types of nonlinear equalizers (NLEs) followed by soft-decision (SD) low-density parity-check (LDPC) forward error correction (FEC). Successful co-operation between SD-FEC and NLEs over various nonlinear transmission scenarios was demonstrated by optimizing the NLE parameters.

  6. High-Performance CCSDS AOS Protocol Implementation in FPGA

    NASA Technical Reports Server (NTRS)

    Clare, Loren P.; Torgerson, Jordan L.; Pang, Jackson

    2010-01-01

    The Consultative Committee for Space Data Systems (CCSDS) Advanced Orbiting Systems (AOS) space data link protocol provides a framing layer between channel coding such as LDPC (low-density parity-check) and higher-layer link multiplexing protocols such as CCSDS Encapsulation Service, which is described in the following article. Recent advancement in RF modem technology has allowed multi-megabit transmission over space links. With this increase in data rate, the CCSDS AOS protocol implementation needs to be optimized to both reduce energy consumption and operate at a high rate.

  7. SCaN Network Ground Station Receiver Performance for Future Service Support

    NASA Technical Reports Server (NTRS)

    Estabrook, Polly; Lee, Dennis; Cheng, Michael; Lau, Chi-Wung

    2012-01-01

    Objectives: examine the impact of providing the newly standardized CCSDS Low Density Parity Check (LDPC) codes to the SCaN return data service on the SCaN SN and DSN ground station receivers: the SN's current receiver, the Integrated Receiver (IR); the DSN's current receiver, the Downlink Telemetry and Tracking (DTT) receiver; and an early Commercial-Off-The-Shelf (COTS) prototype of the SN User Service Subsystem Component Replacement (USS CR) Narrow Band Receiver. A further objective is to motivate discussion of general issues of ground station hardware design that enable simple and cheap modifications in support of future services.

  8. Two-stage cross-talk mitigation in an orbital-angular-momentum-based free-space optical communication system.

    PubMed

    Qu, Zhen; Djordjevic, Ivan B

    2017-08-15

    We propose and experimentally demonstrate a two-stage cross-talk mitigation method in an orbital-angular-momentum (OAM)-based free-space optical communication system, which is enabled by combining spatial offset and low-density parity-check (LDPC) coded nonuniform signaling. Different from traditional OAM multiplexing, where the OAM modes are centrally aligned for copropagation, the adjacent OAM modes (OAM states 2 and -6 and OAM states -2 and 6) in our proposed scheme are spatially offset to mitigate the mode cross talk. Different from traditional rectangular modulation formats, which transmit equidistant signal points with uniform probability, the 5-quadrature amplitude modulation (5-QAM) and 9-QAM are introduced to relieve cross-talk-induced performance degradation. The 5-QAM and 9-QAM formats are based on the Huffman coding technique, which can potentially achieve great cross-talk tolerance by combining them with corresponding nonbinary LDPC codes. We demonstrate that cross talk can be reduced by 1.6 dB and 1 dB via spatial offset for OAM states ±2 and ±6, respectively. Compared to quadrature phase shift keying and 8-QAM formats, the LDPC-coded 5-QAM and 9-QAM are able to bring 1.1 dB and 5.4 dB performance improvements in the presence of atmospheric turbulence, respectively.

  9. 500  Gb/s free-space optical transmission over strong atmospheric turbulence channels.

    PubMed

    Qu, Zhen; Djordjevic, Ivan B

    2016-07-15

    We experimentally demonstrate a high-spectral-efficiency, large-capacity free-space optical (FSO) transmission system using low-density parity-check (LDPC)-coded quadrature phase shift keying (QPSK) combined with orbital angular momentum (OAM) multiplexing. The strong atmospheric turbulence channel is emulated by two spatial light modulators on which four randomly generated azimuthal phase patterns yielding the Andrews spectrum are recorded. The validity of such an approach is verified by reproducing the intensity distribution and irradiance correlation function (ICF) from the full-scale simulator. Excellent agreement of experimental, numerical, and analytical results is found. To reduce the phase distortion induced by the turbulence emulator, inexpensive wavefront-sensorless adaptive optics (AO) is used. To deal with the remaining channel impairments, a large-girth LDPC code is used. To further improve the aggregate data rate, the OAM multiplexing is combined with WDM, and 500 Gb/s optical transmission over the strong atmospheric turbulence channels is demonstrated.

  10. A Novel Strategy Using Factor Graphs and the Sum-Product Algorithm for Satellite Broadcast Scheduling Problems

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh

    This paper presents a low-complexity algorithmic framework for finding a broadcasting schedule in a low-altitude satellite system, i.e., the satellite broadcast scheduling (SBS) problem, based on the recent modeling and computational methodology of factor graphs. Inspired by the huge success of low density parity check (LDPC) codes in the field of error control coding, we transform the SBS problem into an LDPC-like problem through a factor graph, instead of using the conventional neural network approaches to solve the SBS problem. Based on the factor graph framework, soft information, describing the probability that each satellite will broadcast information to a terminal at a specific time slot, is exchanged among the local processing units in the proposed framework via the sum-product algorithm to iteratively optimize the satellite broadcasting schedule. Numerical results show that the proposed approach not only obtains optimal solutions but also enjoys low complexity suitable for integrated-circuit implementation.

  11. A Simulation Testbed for Adaptive Modulation and Coding in Airborne Telemetry

    DTIC Science & Technology

    2014-05-29

    ... its modulation waveforms and LDPC for the FEC codes. It also uses several sets of published telemetry channel sounding data as its channel models. Within this context, low-density parity-check (LDPC) codes with tunable code rates, and both static and dynamic telemetry channel models, are included. In an effort to maximize the ...

  12. Landsat Data Continuity Mission (LDCM) - Optimizing X-Band Usage

    NASA Technical Reports Server (NTRS)

    Garon, H. M.; Gal-Edd, J. S.; Dearth, K. W.; Sank, V. I.

    2010-01-01

    The NASA version of the low-density parity check (LDPC) 7/8-rate code, shortened to the dimensions of (8160, 7136), has been implemented as the forward error correction (FEC) scheme for the Landsat Data Continuity Mission (LDCM). This is the first flight application of this code. In order to place a 440 Msps link within the 375 MHz wide X band, we found it necessary to heavily bandpass filter the satellite transmitter output. Despite the significant amplitude and phase distortions that accompanied the spectral truncation, the mission-required BER is maintained at < 10^-12 with less than 2 dB of implementation loss. We utilized a band-pass filter designed ostensibly to replicate the link distortions to demonstrate link design viability. The same filter was then used to optimize the adaptive equalizer in the receiver employed at the terminus of the downlink. The excellent results we obtained can be attributed directly to the implementation of the LDPC code and the amplitude and phase compensation provided in the receiver. Similar results were obtained with receivers from several vendors.

  13. LDPC Codes with Minimum Distance Proportional to Block Size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy

    2009-01-01

    Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. Any size LDPC code can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides Field Programmable Gate Array (FPGA) hardware performance simulations for this code family. In addition, the illustration provides minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) to achieve zero error rates as the code block size goes to infinity for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.
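
    The "copy the protograph N times, then permute the edges" recipe is simple enough to sketch directly. In the illustration below, the 2x4 base matrix is hypothetical (not one of the codes in the article), and each base edge is lifted with an independent random N x N permutation; a real design would pick the permutations carefully (e.g., circulants chosen to avoid short cycles, and such that lifted parallel edges do not cancel):

        import numpy as np

        def lift_protograph(base: np.ndarray, N: int, seed=None) -> np.ndarray:
            """Copy-and-permute lifting of a protograph into a parity-check matrix.

            `base` counts parallel edges between each check (row) and
            variable (column) node; each edge becomes a random N x N
            permutation block, accumulated over GF(2).
            """
            rng = np.random.default_rng(seed)
            m, n = base.shape
            H = np.zeros((m * N, n * N), dtype=np.uint8)
            for i in range(m):
                for j in range(n):
                    for _ in range(int(base[i, j])):
                        perm = np.eye(N, dtype=np.uint8)[rng.permutation(N)]
                        H[i*N:(i+1)*N, j*N:(j+1)*N] ^= perm  # GF(2) addition
            return H

        B = np.array([[1, 2, 1, 0],      # hypothetical protograph with one
                      [1, 1, 1, 1]])     # double edge (the entry 2)
        H = lift_protograph(B, N=8, seed=0)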

  14. Bilayer Protograph Codes for Half-Duplex Relay Channels

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; VanNguyen, Thuy; Nosratinia, Aria

    2013-01-01

    Direct to Earth return links are limited by the size and power of lander devices. A standard alternative is provided by a two-hop return link: a proximity link (from lander to orbiter relay) and a deep-space link (from orbiter relay to Earth). Using the additional link and a proposed coding for relay channels, one can obtain a more reliable signal. Although significant progress has been made in the relay coding problem, existing codes must be painstakingly optimized to match a single set of channel conditions, many of them do not offer easy encoding, and most of them do not have structured design. A high-performing LDPC (low-density parity-check) code for the relay channel addresses two important issues simultaneously: a code structure that allows low encoding complexity, and a flexible rate-compatible code that allows matching to various channel conditions. Most of the previous high-performance LDPC codes for the relay channel are tightly optimized for a given channel quality, and are not easily adapted without extensive re-optimization for various channel conditions. This code for the relay channel combines structured design and easy encoding with rate compatibility to allow adaptation to the three links involved in the relay channel, and furthermore offers very good performance. The proposed code is constructed by synthesizing a bilayer structure with a protograph. In addition to the contribution to relay encoding, an improved family of protograph codes was produced for the point-to-point AWGN (additive white Gaussian noise) channel whose high-rate members enjoy thresholds that are within 0.07 dB of capacity. These LDPC relay codes address three important issues in an integrative manner: low encoding complexity, modular structure allowing for easy design, and rate compatibility so that the code can be easily matched to a variety of channel conditions without extensive re-optimization. The main problem of half-duplex relay coding can be reduced to the simultaneous design of two codes at two rates and two SNRs (signal-to-noise ratios), such that one is a subset of the other. This problem can be addressed by forceful optimization, but a clever method of addressing it is via the bilayer lengthened (BL) LDPC structure. This method uses a bilayer Tanner graph to make the two codes while using a concept of "parity forwarding" with subsequent successive decoding that removes the need to directly address the issue of uneven SNRs among the symbols of a given codeword. This method is attractive in that it addresses some of the main issues in the design of relay codes, but it does not by itself give rise to highly structured codes with simple encoding, nor does it give rate-compatible codes. The main contribution of this work is to construct a class of codes that simultaneously possess a bilayer parity-forwarding mechanism, while also benefiting from the properties of protograph codes: easy encoding, a modular design, and rate compatibility.

  15. 25 Tb/s transmission over 5,530 km using 16QAM at 5.2 b/s/Hz spectral efficiency.

    PubMed

    Cai, J-X; Batshon, H G; Zhang, H; Davidson, C R; Sun, Y; Mazurczyk, M; Foursa, D G; Sinkin, O; Pilipetskii, A; Mohs, G; Bergano, Neal S

    2013-01-28

    We transmit 250x100G PDM RZ-16QAM channels with 5.2 b/s/Hz spectral efficiency over 5,530 km using single-stage C-band EDFAs equalized to 40 nm. We use single parity check coded modulation and all channels are decoded with no errors after iterative decoding between a MAP decoder and an LDPC based FEC algorithm. We also observe that the optimum power spectral density is nearly independent of SE, signal baud rate or modulation format in a dispersion uncompensated system.

  16. High performance and cost effective CO-OFDM system aided by polar code.

    PubMed

    Liu, Ling; Xiao, Shilin; Fang, Jiafei; Zhang, Lu; Zhang, Yunhao; Bi, Meihua; Hu, Weisheng

    2017-02-06

    A novel polar coded coherent optical orthogonal frequency division multiplexing (CO-OFDM) system is proposed and demonstrated through experiment for the first time. The principle of a polar coded CO-OFDM signal is illustrated theoretically, and a suitable polar decoding method is discussed. Results show that the polar coded CO-OFDM signal achieves a net coding gain (NCG) of more than 10 dB at a bit error rate (BER) of 10^-3 over 25-Gb/s 480-km transmission in comparison with conventional CO-OFDM. Also, compared to the 25-Gb/s low-density parity-check (LDPC) coded CO-OFDM 160-km system, the polar code provides an NCG of 0.88 dB at BER = 10^-3. Moreover, the polar code can greatly relax the laser linewidth requirement, yielding a more cost-effective CO-OFDM system.

  17. A novel decoding algorithm based on the hierarchical reliable strategy for SCG-LDPC codes in optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Tong, Qing-zhen; Huang, Sheng; Wang, Yong

    2013-11-01

    An effective hierarchical reliable belief propagation (HRBP) decoding algorithm is proposed according to the structural characteristics of systematically constructed Gallager low-density parity-check (SCG-LDPC) codes. The novel decoding algorithm combines layered iteration with reliability judgment, and can greatly reduce the number of variable nodes involved in the subsequent iteration process and accelerate the convergence rate. Simulation results for the SCG-LDPC(3969,3720) code show that the novel HRBP decoding algorithm can greatly reduce the computational effort while maintaining performance compared with the traditional belief propagation (BP) algorithm. The bit error rate (BER) of the HRBP algorithm is comparable at the threshold value of 15, and in the subsequent iteration process the number of variable nodes for the HRBP algorithm can be reduced by about 70% at high signal-to-noise ratio (SNR) compared with the BP algorithm. When the threshold value is further increased, the HRBP algorithm gradually degenerates into the layered-BP algorithm, but at a BER of 10^-7 and a maximal iteration number of 30, the net coding gain (NCG) of the HRBP algorithm is 0.2 dB more than that of the BP algorithm, and the average number of iterations can be reduced by about 40% at high SNR. Therefore, the novel HRBP decoding algorithm is well suited to optical communication systems.

  18. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.

    PubMed

    Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang

    2018-01-15

    In order to improve the performance of the hard decision decoding algorithm for non-binary low-density parity-check (LDPC) codes and to reduce decoding complexity, a sum-of-magnitudes hard decision decoding algorithm based on loop update detection is proposed, helping to ensure the reliability, stability and high transmission rates required by 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitudes is excluded from the computation of the reliability of the parity checks. At the same time, the reliability information of the variable nodes is considered and a loop update detection algorithm is introduced. The bits of the erroneous code word are flipped multiple times, searched in order of decreasing error probability, until the correct code word is found. Simulation results show that the performance of one of the improved schemes is better than that of the weighted symbol flipping (WSF) algorithm under different hexadecimal codes by about 2.2 dB and 2.35 dB, respectively, at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel. Furthermore, the average number of decoding iterations is significantly reduced.

  19. Frame Synchronization Without Attached Sync Markers

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2011-01-01

    We describe a method to synchronize codeword frames without making use of attached synchronization markers (ASMs). Instead, the synchronizer identifies the code structure present in the received symbols by operating the decoder for a handful of iterations at each possible symbol offset and forming an appropriate metric. This method is computationally more complex and doesn't perform as well as frame synchronizers that utilize an ASM; nevertheless, the new synchronizer acquires frame synchronization in about two seconds when using a 600 kbps software decoder, and would take about 15 milliseconds on prototype hardware. It also eliminates the need for the ASMs, which is an attractive feature for short uplink codes whose coding gain would be diminished by the overhead of ASM bits. The lack of ASMs also would simplify clock distribution for the AR4JA low-density parity-check (LDPC) codes and adds a small amount to the coding gain as well (up to 0.2 dB).

  20. Adaptive channel estimation for soft decision decoding over non-Gaussian optical channel

    NASA Astrophysics Data System (ADS)

    Xiang, Jing-song; Miao, Tao-tao; Huang, Sheng; Liu, Huan-lin

    2016-10-01

    An adaptive a priori log-likelihood ratio (LLR) estimation method is proposed for non-Gaussian channels in intensity modulation/direct detection (IM/DD) optical communication systems. Using a nonparametric histogram and weighted least-squares linear fitting in the tail regions, the LLR is estimated and used for soft-decision decoding of low-density parity-check (LDPC) codes. This method adapts well to the three main kinds of IM/DD optical channels, i.e., the chi-square channel, the Webb-Gaussian channel and the additive white Gaussian noise (AWGN) channel. The performance penalty of the channel estimation is negligible.
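
    A minimal sketch of the nonparametric core of such an estimator is shown below, assuming labeled training intensities for each transmitted bit. The paper's weighted least-squares linear fit in the tail regions is not reproduced here; the small floor `eps` merely keeps sparsely populated tail bins from producing infinite LLRs:

        import numpy as np

        def estimate_llr_table(y0: np.ndarray, y1: np.ndarray, bins: int = 64):
            """Histogram-based LLR table: LLR(y) = ln p(y|0) - ln p(y|1).

            y0, y1: training samples received given transmitted 0 / 1.
            Returns the bin edges and one LLR value per bin.
            """
            lo = min(y0.min(), y1.min())
            hi = max(y0.max(), y1.max())
            edges = np.linspace(lo, hi, bins + 1)
            h0, _ = np.histogram(y0, edges, density=True)
            h1, _ = np.histogram(y1, edges, density=True)
            eps = 1e-9  # crude stand-in for the paper's tail fitting
            return edges, np.log(h0 + eps) - np.log(h1 + eps)

    At decode time each received sample would be mapped to its bin and the tabulated LLR handed to the LDPC decoder, so no closed-form channel model is required.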

  1. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection

    PubMed Central

    Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang

    2018-01-01

    In order to improve the performance of the hard decision decoding algorithm for non-binary low-density parity-check (LDPC) codes and to reduce decoding complexity, a sum-of-magnitudes hard decision decoding algorithm based on loop update detection is proposed, helping to ensure the reliability, stability and high transmission rates required by 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitudes is excluded from the computation of the reliability of the parity checks. At the same time, the reliability information of the variable nodes is considered and a loop update detection algorithm is introduced. The bits of the erroneous code word are flipped multiple times, searched in order of decreasing error probability, until the correct code word is found. Simulation results show that the performance of one of the improved schemes is better than that of the weighted symbol flipping (WSF) algorithm under different hexadecimal codes by about 2.2 dB and 2.35 dB, respectively, at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel. Furthermore, the average number of decoding iterations is significantly reduced. PMID:29342963

  2. Constellation labeling optimization for bit-interleaved coded APSK

    NASA Astrophysics Data System (ADS)

    Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

    This paper investigates the constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in the Digital Video Broadcasting Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its merits of power and spectral efficiency together with robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A binary switching algorithm and its modified version are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulation results validate the proposed constellation labeling optimization scheme, which yields better performance than the conventional 32-APSK constellation defined in the DVB-S2 standard.

  3. Design space exploration of high throughput finite field multipliers for channel coding on Xilinx FPGAs

    NASA Astrophysics Data System (ADS)

    de Schryver, C.; Weithoffer, S.; Wasenmüller, U.; Wehn, N.

    2012-09-01

    Channel coding is a standard technique in all wireless communication systems. In addition to the typically employed methods like convolutional coding, turbo coding or low density parity check (LDPC) coding, algebraic codes are used in many cases. For example, outer BCH coding is applied in the DVB-S2 standard for satellite TV broadcasting. A key operation for BCH and the related Reed-Solomon codes are multiplications in finite fields (Galois Fields), where extension fields of prime fields are used. A lot of architectures for multiplications in finite fields have been published over the last decades. This paper examines four different multiplier architectures in detail that offer the potential for very high throughputs. We investigate the implementation performance of these multipliers on FPGA technology in the context of channel coding. We study the efficiency of the multipliers with respect to area, frequency and throughput, as well as configurability and scalability. The implementation data of the fully verified circuits are provided for a Xilinx Virtex-4 device after place and route.
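
    For readers who want a software reference point for these hardware multipliers, a bit-serial multiplication in an extension field GF(2^m) is a shift-and-add loop with modular reduction by the field polynomial. The sketch below uses GF(2^8) with the polynomial 0x11D (x^8 + x^4 + x^3 + x^2 + 1), a common Reed-Solomon choice made here purely for illustration:

        def gf_mul(a: int, b: int, poly: int = 0x11D, m: int = 8) -> int:
            """Multiply two GF(2^m) elements (bit-serial shift-and-add).

            Each set bit of b adds a shifted copy of a (XOR); whenever the
            running multiplicand overflows m bits it is reduced by the field
            polynomial, mirroring a serial hardware multiplier.
            """
            r = 0
            while b:
                if b & 1:
                    r ^= a
                b >>= 1
                a <<= 1
                if a & (1 << m):
                    a ^= poly
            return r

        assert gf_mul(0x02, 0x87) == 0x13  # doubling 0x87 overflows and reduces by 0x11D

    The digit-serial and fully parallel architectures compared in the paper unroll this loop to different degrees, trading area for throughput.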

  4. Progressive transmission of images over fading channels using rate-compatible LDPC codes.

    PubMed

    Pan, Xiang; Banihashemi, Amir H; Cuhadar, Aysegul

    2006-12-01

    In this paper, we propose a combined source/channel coding scheme for transmission of images over fading channels. The proposed scheme employs rate-compatible low-density parity-check codes along with embedded image coders such as JPEG2000 and set partitioning in hierarchical trees (SPIHT). The assignment of channel coding rates to source packets is performed by a fast trellis-based algorithm. We examine the performance of the proposed scheme over correlated and uncorrelated Rayleigh flat-fading channels with and without side information. Simulation results for the expected peak signal-to-noise ratio of reconstructed images, which are within 1 dB of the capacity upper bound over a wide range of channel signal-to-noise ratios, show considerable improvement compared to existing results under similar conditions. We also study the sensitivity of the proposed scheme in the presence of channel estimation error at the transmitter and demonstrate that under most conditions our scheme is more robust compared to existing schemes.

  5. Protecting quantum memories using coherent parity check codes

    NASA Astrophysics Data System (ADS)

    Roffe, Joschka; Headley, David; Chancellor, Nicholas; Horsman, Dominic; Kendon, Viv

    2018-07-01

    Coherent parity check (CPC) codes are a new framework for the construction of quantum error correction codes that encode multiple qubits per logical block. CPC codes have a canonical structure involving successive rounds of bit and phase parity checks, supplemented by cross-checks to fix the code distance. In this paper, we provide a detailed introduction to CPC codes using conventional quantum circuit notation. We demonstrate the implementation of a CPC code on real hardware, by designing a [[4, 2, 2]] code.

  6. Method of Error Floor Mitigation in Low-Density Parity-Check Codes

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon (Inventor)

    2014-01-01

    A digital communication decoding method for low-density parity-check coded messages. The decoding method decodes the low-density parity-check coded messages within a bipartite graph having check nodes and variable nodes. Messages from check nodes are partially hard limited, so that every message which would otherwise have a magnitude at or above a certain level is re-assigned to a maximum magnitude.
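
    The partial hard limiting described above amounts to a per-message clamp on the check-node outputs. The sketch below is one plausible realization (the threshold and maximum magnitude are illustrative parameters, not values from the patent):

        import numpy as np

        def partially_hard_limit(msgs: np.ndarray, threshold: float,
                                 max_mag: float) -> np.ndarray:
            """Re-assign every check-node message whose magnitude is at or
            above `threshold` to `max_mag`, preserving its sign; smaller
            messages pass through unchanged."""
            out = msgs.copy()
            big = np.abs(out) >= threshold
            out[big] = np.sign(out[big]) * max_mag
            return out

    Applied inside each decoding iteration, this keeps a handful of overconfident messages from dominating the marginals, one mechanism behind error floors in iterative decoders.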

  7. Analysis of soft-decision FEC on non-AWGN channels.

    PubMed

    Cho, Junho; Xie, Chongjin; Winzer, Peter J

    2012-03-26

    Soft-decision forward error correction (SD-FEC) schemes are typically designed for additive white Gaussian noise (AWGN) channels. In a fiber-optic communication system, noise may be neither circularly symmetric nor Gaussian, thus violating an important assumption underlying SD-FEC design. This paper quantifies the impact of non-AWGN noise on SD-FEC performance for such optical channels. We use a conditionally bivariate Gaussian noise (CBGN) model to analyze the impact of correlations among the signal's two quadrature components, and assess the effect of CBGN on SD-FEC performance using the density evolution of low-density parity-check (LDPC) codes. On a CBGN channel generating severely elliptic noise clouds, it is shown that more than 3 dB of coding gain is attainable by utilizing correlation information. Our analyses also give insights into potential improvements of the detection performance for fiber-optic transmission systems assisted by SD-FEC.

  8. Protograph LDPC Codes with Node Degrees at Least 3

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher

    2006-01-01

    In this paper we present protograph codes with a small number of degree-3 nodes and one high-degree node. The iterative decoding thresholds for the proposed rate-1/2 codes are lower, by about 0.2 dB, than those of the best known irregular LDPC codes with degree at least 3. The main motivation is to gain linear minimum distance in order to achieve a low error floor, and to construct rate-compatible protograph-based LDPC codes for fixed block length that simultaneously achieve a low iterative decoding threshold and linear minimum distance. We start with a rate-1/2 protograph LDPC code with degree-3 nodes and one high-degree node. Higher-rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes. This is equivalent to constraint combining in the protograph. The case where all constraints are combined corresponds to the highest-rate code. The combined constraint must be connected to nodes of degree at least three for the graph to have linear minimum distance. Thus, having node degree at least 3 at rate 1/2 guarantees that the linear minimum distance property is preserved for higher rates. Through examples we show that iterative decoding thresholds as low as 0.544 dB can be achieved for small protographs with node degrees of at least three. A family of low- to high-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.

  9. Joint Carrier-Phase Synchronization and LDPC Decoding

    NASA Technical Reports Server (NTRS)

    Simon, Marvin; Valles, Esteban

    2009-01-01

    A method has been proposed to increase the degree of synchronization of a radio receiver with the phase of a suppressed carrier signal modulated with a binary-phase-shift-keying (BPSK) or quaternary-phase-shift-keying (QPSK) signal representing a low-density parity-check (LDPC) code. This method is an extended version of the method described in Using LDPC Code Constraints to Aid Recovery of Symbol Timing (NPO-43112), NASA Tech Briefs, Vol. 32, No. 10 (October 2008), page 54. Both methods and the receiver architectures in which they would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless in that they do not require transmission and reception of pilot signals. The proposed method calls for the use of what is known in the art as soft decision feedback to remove the modulation from a replica of the incoming signal prior to feeding this replica to a phase-locked loop (PLL) or other carrier-tracking stage in the receiver. Soft decision feedback refers to suitably processed versions of intermediate results of the iterative computations involved in the LDPC decoding process. Unlike a related prior method in which hard decision feedback (the final sequence of decoded symbols) is used to remove the modulation, the proposed method does not require estimation of the decoder error probability. In a basic digital implementation of the proposed method, the incoming signal (having carrier phase theta(sub c)) plus noise would first be converted to in-phase (I) and quadrature (Q) baseband signals by mixing it with I and Q signals at the carrier frequency [omega(sub c)/(2 pi)] generated by a local oscillator. The resulting demodulated signals would be processed through one-symbol-period integrate-and-dump filters, the outputs of which would be sampled and held, then multiplied by a soft-decision version of the baseband modulated signal. The resulting I and Q products consist of terms proportional to the cosine and sine of the carrier phase theta(sub c) as well as correlated noise components. These products would be fed as inputs to a digital PLL that would include a number-controlled oscillator (NCO), which provides an estimate of the carrier phase, theta(sub c).
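
    The phase detector described above lends itself to a compact illustration. The following toy Python loop is an assumed stand-in, not the Tech Brief's receiver: a tanh-based soft decision replaces the actual intermediate LDPC decoder outputs, and the loop gain and noise level are arbitrary. It shows how multiplying the I and Q integrate-and-dump outputs by soft decisions wipes the BPSK modulation and drives a first-order PLL toward theta(sub c):

        import numpy as np

        rng = np.random.default_rng(0)
        theta_c, sigma = 0.3, 0.4                 # true carrier phase (rad) and noise level
        s = 1.0 - 2.0 * rng.integers(0, 2, 5000)  # BPSK symbols
        I = s * np.cos(theta_c) + sigma * rng.standard_normal(s.size)
        Q = s * np.sin(theta_c) + sigma * rng.standard_normal(s.size)

        soft = np.tanh(I / sigma**2)              # soft-decision feedback (decoder stand-in)

        theta_hat, gain = 0.0, 0.01               # first-order digital PLL; theta_hat is the NCO state
        for vi, vq, d in zip(I, Q, soft):
            # modulation-wiped error term, on average proportional to sin(theta_c - theta_hat)
            err = d * (vq * np.cos(theta_hat) - vi * np.sin(theta_hat))
            theta_hat += gain * err
        print(f"theta_hat = {theta_hat:.3f} rad (true {theta_c} rad)")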

  10. Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding

    NASA Technical Reports Server (NTRS)

    Mahmoud, Saad; Hi, Jianjun

    2012-01-01

    The Low Density Parity Check (LDPC) Code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is the ratio between the signal amplitude and the noise variance. Accurately estimating this ratio has shown as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a Simulation-Based Look-Up table. The Pilot-Guided estimation method has shown that the maximum-likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and that the signal variance is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be accumulated. The Blind estimation method's maximum-likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which is good for faster-changing channels compared to the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulated results to determine the signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft-decision value. The magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of the deviation. This method is more complicated than the Pilot-Guided method due to the gain-control circuitry, but does not have the real-time computational complexity of the Blind estimation method. Each of these methods can be used to provide an accurate estimation of the combining ratio, and the final selection of the estimation method depends on other design constraints.
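
    The pilot-guided estimator reduces to two one-line moments. A minimal numpy sketch (the amplitude, noise level, and 256-sample ASM length are assumed for illustration; in practice several frames' ASMs are accumulated):

        import numpy as np

        rng = np.random.default_rng(1)
        A, sigma = 1.0, 0.8                                  # true amplitude and noise std
        asm = 1.0 - 2.0 * rng.integers(0, 2, 256)            # known ASM pattern in +/-1 form
        r = A * asm + sigma * rng.standard_normal(asm.size)  # received ASM samples

        A_hat = np.mean(r * asm)               # ML amplitude: mean inner product with the ASM
        var_hat = np.mean(r**2) - A_hat**2     # variance: E[r^2] minus the squared amplitude
        print(A_hat, var_hat, A_hat / var_hat) # combining ratio = amplitude / variance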

  11. Concurrent error detecting codes for arithmetic processors

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1979-01-01

    A method of concurrent error detection for arithmetic processors is described. Low-cost residue codes with check length l and check base m = 2^l - 1 are described for checking the arithmetic operations of addition, subtraction, multiplication, division, complement, shift, and rotate. Of the three number representations, the signed-magnitude representation is preferred for residue checking. Two methods of residue generation are described: the standard method of using modulo-m adders and the method of using a self-testing residue tree. A simple single-bit parity-check code is described for checking the logical operations of XOR, OR, and AND, and also the arithmetic operations of complement, shift, and rotate. For checking complement, shift, and rotate, the single-bit parity-check code is simpler to implement than the residue codes.
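
    For a concrete feel for residue checking with check base m = 2^l - 1, here is a small sketch (l = 3, so m = 7; the checked-adder wrapper is an illustrative stand-in for the hardware checker, not the paper's circuit):

        def residue(x, l=3):
            """Residue of x modulo the check base m = 2**l - 1."""
            return x % (2**l - 1)

        def checked_add(a, b, l=3):
            """Add a and b, concurrently checking the result against the
            mod-(2**l - 1) sum of the operand residues."""
            m = 2**l - 1
            s = a + b
            assert residue(s, l) == (residue(a, l) + residue(b, l)) % m, "residue check failed"
            return s

        print(checked_add(1234, 5678))   # 6912, and the residue check passes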

  12. Direct-detection Free-space Laser Transceiver Test-bed

    NASA Technical Reports Server (NTRS)

    Krainak, Michael A.; Chen, Jeffrey R.; Dabney, Philip W.; Ferrara, Jeffrey F.; Fong, Wai H.; Martino, Anthony J.; McGarry, Jan F.; Merkowitz, Stephen M.; Principe, Caleb M.; Sun, Siaoli

    2008-01-01

    NASA Goddard Space Flight Center is developing a direct-detection free-space laser communications transceiver test bed. The laser transmitter is a master-oscillator power amplifier (MOPA) configuration using a 1060 nm wavelength laser-diode with a two-stage multi-watt Ytterbium fiber amplifier. Dual Mach-Zehnder electro-optic modulators provide an extinction ratio greater than 40 dB. The MOPA design delivered 10-W average power with low-duty-cycle PPM waveforms and achieved 1.7 kW peak power. We use a pulse-position modulation format with a pseudo-noise code header to assist clock recovery and frame boundary identification. We are examining the use of low-density parity-check (LDPC) codes for forward error correction. Our receiver uses an InGaAsP 1 mm diameter photocathode hybrid photomultiplier tube (HPMT) cooled with a thermo-electric cooler. The HPMT has 25% single-photon detection efficiency at 1064 nm wavelength with a dark count rate of 60,000/s at -22 degrees Celsius and a single-photon impulse response of 0.9 ns. We report on progress toward demonstrating a combined laser communications and ranging field experiment.

  13. Weight-4 Parity Checks on a Surface Code Sublattice with Superconducting Qubits

    NASA Astrophysics Data System (ADS)

    Takita, Maika; Corcoles, Antonio; Magesan, Easwar; Bronn, Nicholas; Hertzberg, Jared; Gambetta, Jay; Steffen, Matthias; Chow, Jerry

    We present a superconducting qubit quantum processor design amenable to the surface code architecture. In such an architecture, parity checks on the data qubits, performed by measuring their X- and Z-syndrome qubits, constitute a critical aspect. Here we show fidelities and outcomes of X- and Z-parity measurements done on a syndrome qubit in a full plaquette consisting of one syndrome qubit coupled via bus resonators to four code qubits. Parities are measured after the four code qubits are prepared in sixteen initial states in each basis. Results show a strong dependence on the ZZ coupling between qubits sharing a bus resonator. This work is supported by IARPA under Contract W911NF-10-1-0324.

  14. Capacity achieving nonbinary LDPC coded non-uniform shaping modulation for adaptive optical communications.

    PubMed

    Lin, Changyu; Zou, Ding; Liu, Tao; Djordjevic, Ivan B

    2016-08-08

    A mutual-information-inspired nonbinary coded modulation design with non-uniform shaping is proposed. Instead of traditional power-of-two signal constellation sizes, we design 5-QAM, 7-QAM and 9-QAM constellations, which can be used in adaptive optical networks. The non-uniform shaping and the LDPC code rate are jointly considered in the design, which results in a scheme with better performance at the same SNR values. A matched nonbinary (NB) LDPC code is used for this scheme, which further improves the coding gain and the overall performance. We analyze both coding performance and system SNR performance. We show that the proposed NB LDPC-coded 9-QAM has more than 2 dB gain in symbol SNR compared to traditional LDPC-coded star-8-QAM. On the other hand, the proposed NB LDPC-coded 5-QAM and 7-QAM have even better performance than LDPC-coded QPSK.

  15. Evaluation of four-dimensional nonbinary LDPC-coded modulation for next-generation long-haul optical transport networks.

    PubMed

    Zhang, Yequn; Arabaci, Murat; Djordjevic, Ivan B

    2012-04-09

    Leveraging the advanced coherent optical communication technologies, this paper explores the feasibility of using four-dimensional (4D) nonbinary LDPC-coded modulation (4D-NB-LDPC-CM) schemes for long-haul transmission in future optical transport networks. In contrast to our previous works on 4D-NB-LDPC-CM which considered amplified spontaneous emission (ASE) noise as the dominant impairment, this paper undertakes transmission in a more realistic optical fiber transmission environment, taking into account impairments due to dispersion effects, nonlinear phase noise, Kerr nonlinearities, and stimulated Raman scattering in addition to ASE noise. We first reveal the advantages of using 4D modulation formats in LDPC-coded modulation instead of conventional two-dimensional (2D) modulation formats used with polarization-division multiplexing (PDM). Then we demonstrate that 4D LDPC-coded modulation schemes with nonbinary LDPC component codes significantly outperform not only their conventional PDM-2D counterparts but also the corresponding 4D bit-interleaved LDPC-coded modulation (4D-BI-LDPC-CM) schemes, which employ binary LDPC codes as component codes. We also show that the transmission reach improvement offered by the 4D-NB-LDPC-CM over 4D-BI-LDPC-CM increases as the underlying constellation size and hence the spectral efficiency of transmission increases. Our results suggest that 4D-NB-LDPC-CM can be an excellent candidate for long-haul transmission in next-generation optical networks.

  16. A good performance watermarking LDPC code used in high-speed optical fiber communication system

    NASA Astrophysics Data System (ADS)

    Zhang, Wenbo; Li, Chao; Zhang, Xiaoguang; Xi, Lixia; Tang, Xianfeng; He, Wenxue

    2015-07-01

    A watermarking LDPC code, a strategy designed to improve the performance of the traditional LDPC code, is introduced. By inserting pre-defined watermarking bits into the original LDPC code, we can obtain a more accurate estimate of the noise level in the fiber channel. These estimates are then used to modify the probability distribution function (PDF) used in the initialization of the belief propagation (BP) decoding algorithm. The algorithm was tested in a 128 Gb/s PDM-DQPSK optical communication system, and the results showed that the watermarking LDPC code has better tolerance to polarization mode dispersion (PMD) and nonlinearity than the traditional LDPC code. Also, at the cost of about 2.4% redundancy for the watermarking bits, the decoding efficiency of the watermarking LDPC code is about twice that of the traditional one.
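
    The decoder-initialization step described above amounts to estimating the noise power from the known watermark positions and recomputing the channel LLRs. A minimal sketch under assumed parameters (BPSK over AWGN with sigma = 0.7, and a watermark layout chosen to give roughly the 2.4% redundancy quoted above; the paper's actual insertion pattern is not reproduced):

        import numpy as np

        rng = np.random.default_rng(2)
        n = 1000
        wm_idx = np.arange(0, n, 42)               # ~2.4% of positions carry watermark bits
        bits = rng.integers(0, 2, n)
        bits[wm_idx] = 0                           # pre-defined (known) watermark bits
        r = (1.0 - 2.0 * bits) + 0.7 * rng.standard_normal(n)   # BPSK over AWGN

        sigma2_hat = np.mean((r[wm_idx] - 1.0) ** 2)   # noise power from watermark samples only
        llr = 2.0 * r / sigma2_hat                     # refined LLRs for BP initialization
        print(sigma2_hat)                              # near 0.49, up to sampling noise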

  17. An FPGA design of generalized low-density parity-check codes for rate-adaptive optical transport networks

    NASA Astrophysics Data System (ADS)

    Zou, Ding; Djordjevic, Ivan B.

    2016-02-01

    Forward error correction (FEC) is one of the key technologies enabling next-generation high-speed fiber-optic communications. In this paper, we propose a rate-adaptive scheme using a class of generalized low-density parity-check (GLDPC) codes with a Hamming code as the local code. We show that with the proposed unified GLDPC decoder architecture, variable net coding gains (NCGs) can be achieved with no error floor at BERs down to 10^-15, making it a viable solution for next-generation high-speed fiber-optic communications.

  18. High-Throughput Bit-Serial LDPC Decoder LSI Based on Multiple-Valued Asynchronous Interleaving

    NASA Astrophysics Data System (ADS)

    Onizawa, Naoya; Hanyu, Takahiro; Gaudet, Vincent C.

    This paper presents a high-throughput bit-serial low-density parity-check (LDPC) decoder that uses an asynchronous interleaver. Since consecutive log-likelihood message values on the interleaver are similar, node computations are continuously performed by using the most recently arrived messages without significantly affecting bit-error rate (BER) performance. In the asynchronous interleaver, each message's arrival rate is based on the delay due to the wire length, so that the decoding throughput is not restricted by the worst-case latency, which results in a higher average rate of computation. Moreover, the use of a multiple-valued data representation makes it possible to multiplex control signals and data from mutual nodes, thus minimizing the number of handshaking steps in the asynchronous interleaver and eliminating the clock signal entirely. As a result, the decoding throughput becomes 1.3 times faster than that of a bit-serial synchronous decoder in a 90 nm CMOS technology, at a comparable BER.

  19. Ensemble Weight Enumerators for Protograph LDPC Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush

    2006-01-01

    Recently, LDPC codes with projected-graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear-minimum-distance property is sensitive to the proportion of degree-2 variable nodes. The derived ensemble weight enumerators show that the linear-minimum-distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.

  20. Design of ACM system based on non-greedy punctured LDPC codes

    NASA Astrophysics Data System (ADS)

    Lu, Zijun; Jiang, Zihong; Zhou, Lin; He, Yucheng

    2017-08-01

    In this paper, an adaptive coded modulation (ACM) scheme based on rate-compatible LDPC (RC-LDPC) codes is designed. The RC-LDPC codes are constructed by a non-greedy puncturing method which shows good performance in the high-code-rate region. Moreover, an incremental redundancy scheme for an LDPC-based ACM system over the AWGN channel is proposed. Under this scheme, code rates vary from 2/3 to 5/6 and the complexity of the ACM system is reduced. Simulations show that the proposed ACM system obtains an increasingly pronounced coding gain as the throughput grows.
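
    The arithmetic behind the quoted rate range is simple: puncturing removes transmitted bits without changing the number of information bits. A small sketch with assumed toy code dimensions (the non-greedy choice of which bits to puncture, the paper's actual contribution, is not reproduced here):

        def punctured_rate(n, k, n_punct):
            """Rate of an (n, k) mother code after puncturing n_punct coded bits."""
            return k / (n - n_punct)

        n, k = 1200, 800                       # toy rate-2/3 mother code
        for n_punct in (0, 120, 240):          # puncturing sweeps the rate up toward 5/6
            print(n_punct, punctured_rate(n, k, n_punct))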

  1. FPGA-based rate-adaptive LDPC-coded modulation for the next generation of optical communication systems.

    PubMed

    Zou, Ding; Djordjevic, Ivan B

    2016-09-05

    In this paper, we propose a rate-adaptive FEC scheme based on LDPC codes together with its software-reconfigurable unified FPGA architecture. By FPGA emulation, we demonstrate that the proposed class of rate-adaptive LDPC codes based on shortening, with overheads from 25% to 42.9%, provides coding gains ranging from 13.08 dB to 14.28 dB at a post-FEC BER of 10^-15 for BPSK transmission. In addition, the proposed rate-adaptive LDPC coding has been demonstrated in combination with higher-order modulations, including QPSK, 8-QAM, 16-QAM, 32-QAM, and 64-QAM, covering a wide range of signal-to-noise ratios. Furthermore, we apply unequal error protection by employing different LDPC codes on different bits in 16-QAM and 64-QAM, which yields an additional 0.5 dB gain compared to conventional LDPC-coded modulation with the same code rate.

  2. Polarization-multiplexed rate-adaptive non-binary-quasi-cyclic-LDPC-coded multilevel modulation with coherent detection for optical transport networks.

    PubMed

    Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M

    2010-02-01

    In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose using rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency due to symbol-level instead of bit-level processing, but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper shows, compared to its prior-art binary counterpart, the proposed NB-LDPC-CM scheme better addresses the needs of future OTNs: achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN.

  3. LDPC-coded orbital angular momentum (OAM) modulation for free-space optical communication.

    PubMed

    Djordjevic, Ivan B; Arabaci, Murat

    2010-11-22

    An orbital angular momentum (OAM) based LDPC-coded modulation scheme suitable for use in FSO communication is proposed. We demonstrate that the proposed scheme can operate in the strong atmospheric turbulence regime and enable 100 Gb/s optical transmission while employing 10 Gb/s components. Both binary and nonbinary LDPC-coded OAM modulations are studied. In addition to providing better BER performance, the nonbinary LDPC-coded modulation reduces overall decoder complexity and latency. The nonbinary LDPC-coded OAM modulation provides a net coding gain of 9.3 dB at a BER of 10^-8. The maximum-ratio combining scheme outperforms the corresponding equal-gain combining scheme by almost 2.5 dB.

  4. Hyperbolic and semi-hyperbolic surface codes for quantum storage

    NASA Astrophysics Data System (ADS)

    Breuckmann, Nikolas P.; Vuillot, Christophe; Campbell, Earl; Krishna, Anirudh; Terhal, Barbara M.

    2017-09-01

    We show how a hyperbolic surface code could be used for overhead-efficient quantum storage. We give numerical evidence for a noise threshold of 1.3% for the {4,5}-hyperbolic surface code in a phenomenological noise model (as compared with 2.9% for the toric code). In this code family, parity checks are of weight 4 and 5, while each qubit participates in four different parity checks. We introduce a family of semi-hyperbolic codes that interpolate between the toric code and the {4,5}-hyperbolic surface code in terms of encoding rate and threshold. We show how these hyperbolic codes outperform the toric code in terms of qubit overhead for a target logical error probability. We show how Dehn twists and lattice code surgery can be used to read and write individual qubits to this quantum storage medium.

  5. FPGA implementation of concatenated non-binary QC-LDPC codes for high-speed optical transport.

    PubMed

    Zou, Ding; Djordjevic, Ivan B

    2015-06-01

    In this paper, we propose a soft-decision FEC scheme that is the concatenation of a non-binary LDPC code and a hard-decision FEC code. The proposed NB-LDPC + RS scheme with an overhead of 27.06% provides a superior NCG of 11.9 dB at a post-FEC BER of 10^-15. As a result, the proposed NB-LDPC codes represent a strong soft-decision FEC candidate for beyond-100 Gb/s optical transmission systems.

  6. The small stellated dodecahedron code and friends.

    PubMed

    Conrad, J; Chamberland, C; Breuckmann, N P; Terhal, B M

    2018-07-13

    We explore a distance-3 homological CSS quantum code, namely the small stellated dodecahedron code, for dense storage of quantum information, and we compare its performance with the distance-3 surface code. The data and ancilla qubits of the small stellated dodecahedron code can be located on the edges and vertices, respectively, of a small stellated dodecahedron, making this code suitable for three-dimensional connectivity. This code encodes eight logical qubits into 30 physical qubits (plus 22 ancilla qubits for parity-check measurements), in contrast with one logical qubit into nine physical qubits (plus eight ancilla qubits) for the surface code. We develop fault-tolerant parity-check circuits and a decoder for this code, allowing us to numerically assess the circuit-based pseudo-threshold.

  7. BCH codes for large IC random-access memory systems

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.

    1983-01-01

    In this report some shortened BCH codes for possible applications to large IC random-access memory systems are presented. These codes are given by their parity-check matrices. Encoding and decoding of these codes are discussed.

  8. Experimental study of non-binary LDPC coding for long-haul coherent optical QPSK transmissions.

    PubMed

    Zhang, Shaoliang; Arabaci, Murat; Yaman, Fatih; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Inada, Yoshihisa; Ogata, Takaaki; Aoki, Yasuhiro

    2011-09-26

    The performance of a rate-0.8 4-ary LDPC code has been studied in a 50 GHz-spaced 40 Gb/s DWDM system with PDM-QPSK modulation. A net effective coding gain of 10 dB is obtained at a BER of 10^-6. With the aid of time-interleaving polarization multiplexing and MAP detection, 10,560 km transmission over legacy dispersion-managed fiber is achieved without any countable errors. The proposed nonbinary quasi-cyclic LDPC code achieves an uncoded BER threshold at 4×10^-2. Potential issues such as phase ambiguity and coding length are also discussed for implementing LDPC in current coherent optical systems.

  9. Evaluation of large girth LDPC codes for PMD compensation by turbo equalization.

    PubMed

    Minkov, Lyubomir L; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Kueppers, Franko

    2008-08-18

    Large-girth quasi-cyclic LDPC codes have been experimentally evaluated for use in PMD compensation by turbo equalization for a 10 Gb/s NRZ optical transmission system, observing one sample per bit. The net effective coding gain improvement of the girth-10, rate-0.906 code of length 11936 over the maximum a posteriori probability (MAP) detector, for a differential group delay of 125 ps, is 6.25 dB at a BER of 10^-6. The girth-10 LDPC code of rate 0.8 outperforms the girth-10 code of rate 0.906 by 2.75 dB, providing a net effective coding gain improvement of 9 dB at the same BER. It is experimentally determined that girth-10 LDPC codes of length around 15000 approach the channel capacity limit within 1.25 dB.

  10. Advanced GF(32) nonbinary LDPC coded modulation with non-uniform 9-QAM outperforming star 8-QAM.

    PubMed

    Liu, Tao; Lin, Changyu; Djordjevic, Ivan B

    2016-06-27

    In this paper, we first describe a 9-symbol non-uniform signaling scheme based on a Huffman code, in which different symbols are transmitted with different probabilities. By using the Huffman procedure, a prefix code is designed to approach the optimal performance. Then, we introduce an algorithm to determine the optimal signal constellation sets for our proposed non-uniform scheme with the criterion of maximizing the constellation figure of merit (CFM). The proposed non-uniform polarization-multiplexed 9-QAM signaling scheme has the same spectral efficiency as conventional 8-QAM. Additionally, we propose a specially designed GF(32) nonbinary quasi-cyclic LDPC code for the coded modulation system based on the 9-QAM non-uniform scheme. Further, we study the efficiency of our proposed non-uniform 9-QAM combined with nonbinary LDPC coding, and demonstrate by Monte Carlo simulation that the proposed GF(32) nonbinary LDPC-coded 9-QAM scheme outperforms nonbinary LDPC-coded uniform 8-QAM by at least 0.8 dB.

  11. Coded Cooperation for Multiway Relaying in Wireless Sensor Networks †

    PubMed Central

    Si, Zhongwei; Ma, Junyang; Thobaben, Ragnar

    2015-01-01

    Wireless sensor networks have been considered as an enabling technology for constructing smart cities. One important feature of wireless sensor networks is that the sensor nodes collaborate in some manner for communications. In this manuscript, we focus on the model of multiway relaying with full data exchange where each user wants to transmit and receive data to and from all other users in the network. We derive the capacity region for this specific model and propose a coding strategy through coset encoding. To obtain good performance with practical codes, we choose spatially-coupled LDPC (SC-LDPC) codes for the coded cooperation. In particular, for the message broadcasting from the relay, we construct multi-edge-type (MET) SC-LDPC codes by repeatedly applying coset encoding. Due to the capacity-achieving property of the SC-LDPC codes, we prove that the capacity region can theoretically be achieved by the proposed MET SC-LDPC codes. Numerical results with finite node degrees are provided, which show that the achievable rates approach the boundary of the capacity region in both binary erasure channels and additive white Gaussian channels. PMID:26131675

  13. Ultra high speed optical transmission using subcarrier-multiplexed four-dimensional LDPC-coded modulation.

    PubMed

    Batshon, Hussam G; Djordjevic, Ivan; Schmidt, Ted

    2010-09-13

    We propose a subcarrier-multiplexed four-dimensional LDPC bit-interleaved coded modulation scheme that is capable of achieving beyond 480 Gb/s single-channel transmission over optical channels. The subcarrier-multiplexed four-dimensional LDPC-coded modulation scheme outperforms the corresponding dual-polarization schemes by up to 4.6 dB in OSNR at a BER of 10^-8.

  14. Spinstand demonstration of areal density enhancement using two-dimensional magnetic recording (invited)

    NASA Astrophysics Data System (ADS)

    Lippman, Thomas; Brockie, Richard; Coker, Jon; Contreras, John; Galbraith, Rick; Garzon, Samir; Hanson, Weldon; Leong, Tom; Marley, Arley; Wood, Roger; Zakai, Rehan; Zolla, Howard; Duquette, Paul; Petrizzi, Joe

    2015-05-01

    Exponential growth of the areal density has driven the magnetic recording industry for almost sixty years. But now areal density growth is slowing down, suggesting that current technologies are reaching their fundamental limit. The next generation of recording technologies, namely, energy-assisted writing and bit-patterned media, remains just over the horizon. Two-Dimensional Magnetic Recording (TDMR) is a promising new approach, enabling continued areal density growth with only modest changes to the heads and recording electronics. We demonstrate a first generation implementation of TDMR by using a dual-element read sensor to improve the recovery of data encoded by a conventional low-density parity-check (LDPC) channel. The signals are combined with a 2D equalizer into a single modified waveform that is decoded by a standard LDPC channel. Our detection hardware can perform simultaneous measurement of the pre- and post-combined error rate information, allowing one set of measurements to assess the absolute areal density capability of the TDMR system as well as the gain over a conventional shingled magnetic recording system with identical components. We discuss areal density measurements using this hardware and demonstrate gains exceeding five percent based on experimental dual reader components.

  15. On the reduced-complexity of LDPC decoders for ultra-high-speed optical transmission.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2010-10-25

    We propose two reduced-complexity (RC) LDPC decoders, which can be used in combination with large-girth LDPC codes to enable ultra-high-speed serial optical transmission. We show that the optimally attenuated RC min-sum algorithm performs only 0.46 dB (at a BER of 10^-9) worse than the conventional sum-product algorithm, while having lower storage memory requirements and much lower latency. We further study the use of RC LDPC decoding algorithms in multilevel coded modulation with coherent detection and show that with RC decoding algorithms we can achieve net coding gains larger than 11 dB at BERs below 10^-9.

  16. Protograph LDPC Codes Over Burst Erasure Channels

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    In this paper we design high-rate protograph-based LDPC codes suitable for binary erasure channels. To simplify the encoder and decoder implementation for high-data-rate transmission, the structure of the codes is based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes, with an iterative decoding threshold that approaches the capacity of binary erasure channels. The other class is designed for short block sizes by maximizing the minimum stopping-set size. For high code rates and short blocks, the second class outperforms the first.

  17. DVB-S2 Experiment over NASA's Space Network

    NASA Technical Reports Server (NTRS)

    Downey, Joseph A.; Evans, Michael A.; Tollis, Nicholas S.

    2017-01-01

    The commercial DVB-S2 standard was successfully demonstrated over NASA's Space Network (SN) and the Tracking and Data Relay Satellite System (TDRSS) during testing conducted September 20-22, 2016. This test was a joint effort between NASA Glenn Research Center (GRC) and Goddard Space Flight Center (GSFC) to evaluate the performance of DVB-S2 as an alternative to traditional NASA SN waveforms. Two distinct sets of tests were conducted: one was sourced from the Space Communication and Navigation (SCaN) Testbed, an external payload on the International Space Station, and the other was sourced from GRC's S-band ground station to emulate a Space Network user through TDRSS. In both cases, a commercial off-the-shelf (COTS) receiver made by Newtec was used to receive the signal at the White Sands Complex. Using SCaN Testbed, peak data rates of 5.7 Mbps were demonstrated. Peak data rates of 33 Mbps were demonstrated over the GRC S-band ground station through a 10 MHz channel over TDRSS, using 32-amplitude phase shift keying (APSK) and a rate-8/9 low-density parity-check (LDPC) code. Advanced features of the DVB-S2 standard were evaluated, including variable and adaptive coding and modulation (VCM/ACM), as well as an adaptive digital pre-distortion (DPD) algorithm. These features provided additional data throughput and increased link performance reliability. This testing has shown that commercial standards are a viable, low-cost alternative for future Space Network users.

  18. A Scalable Architecture of a Structured LDPC Decoder

    NASA Technical Reports Server (NTRS)

    Lee, Jason Kwok-San; Lee, Benjamin; Thorpe, Jeremy; Andrews, Kenneth; Dolinar, Sam; Hamkins, Jon

    2004-01-01

    We present a scalable decoding architecture for a certain class of structured LDPC codes. The codes are designed using a small (n,r) protograph that is replicated Z times to produce a decoding graph for a (Z x n, Z x r) code. Using this architecture, we have implemented a decoder for a (4096,2048) LDPC code on a Xilinx Virtex-II 2000 FPGA, and achieved decoding speeds of 31 Mbps with 10 fixed iterations. The implemented message-passing algorithm uses an optimized 3-bit non-uniform quantizer that operates with 0.2 dB implementation loss relative to a floating-point decoder.
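
    The replicate-and-permute construction underlying such codes can be sketched in a few lines. The following Python fragment lifts a toy protograph by circulant permutations (the base matrix, shift values, and Z = 8 are invented for illustration and are unrelated to the paper's (4096,2048) design):

        import numpy as np

        def circ_perm(Z, shift):
            """Z x Z circulant permutation matrix (identity cyclically shifted)."""
            return np.roll(np.eye(Z, dtype=int), shift, axis=1)

        def lift(base, shifts, Z):
            """Replace each nonzero protograph entry by a shifted identity,
            and each zero entry by the Z x Z all-zeros block."""
            rows = []
            for i, row in enumerate(base):
                blocks = [circ_perm(Z, shifts[i][j]) if v else np.zeros((Z, Z), dtype=int)
                          for j, v in enumerate(row)]
                rows.append(np.hstack(blocks))
            return np.vstack(rows)

        base = [[1, 1, 1, 0],            # toy (n=4, r=2) protograph
                [0, 1, 1, 1]]
        shifts = [[1, 3, 5, 0],
                  [0, 2, 4, 6]]
        H = lift(base, shifts, Z=8)      # parity-check matrix of a (Z*4, Z*2) code
        print(H.shape)                   # (16, 32)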

  19. Protograph based LDPC codes with minimum distance linearly growing with block size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected-graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance increasing linearly with block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.

  20. Convolutional encoding of self-dual codes

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1994-01-01

    There exist almost-complete convolutional encodings of self-dual codes, i.e., block codes of rate 1/2 with weights w ≡ 0 (mod 4). The codes are of length 8m, with the convolutional portion of length 8m-2 and the nonsystematic information of length 4m-1. The last two bits are parity checks on the two length-(4m-1) parity sequences. The final information bit complements one of the extended parity sequences of length 4m. Solomon and van Tilborg have developed algorithms to generate these for the Quadratic Residue (QR) codes of lengths 48 and beyond. For these codes and reasonable constraint lengths, there are sequential decodings for both hard and soft decisions. There are also possible Viterbi-type decodings that may be simple, as in a convolutional encoding/decoding of the extended Golay code. In addition, the previously found constraint length K = 9 for the QR (48, 24; 12) code is lowered here to K = 8.

  1. Performance Analysis of a JTIDS/Link-16-type Waveform Transmitted over Slow, Flat Nakagami Fading Channels in the Presence of Narrowband Interference

    DTIC Science & Technology

    2008-12-01

    The effective two-way tactical data rate is 3,060 bits per second. Note that there is no parity check or forward error correction (FEC) coding used in...of 1800 bits per second. With the use of FEC coding, the channel data rate is 2250 bits per second; however, the information data rate is still the...Link-11. If the parity bits are included, the channel data rate is 28,800 bps. If FEC coding is considered, the channel data rate is 59,520 bps...

  2. ARA type protograph codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)

    2008-01-01

    An apparatus and method for encoding low-density parity-check codes. Together with a repeater, an interleaver and an accumulator, the apparatus comprises a precoder, thus forming accumulate-repeat-accumulate (ARA) codes. Protographs representing various types of ARA codes, including AR3A, AR4A and ARJA codes, are described. High performance is obtained when compared to current repeat-accumulate (RA) or irregular repeat-accumulate (IRA) codes.

  3. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    NASA Astrophysics Data System (ADS)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction-based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N^2) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.

  4. Replacing the CCSDS Telecommand Protocol with the Next Generation Uplink (NGU)

    NASA Technical Reports Server (NTRS)

    Kazz, Greg J.; Greenberg, Ed; Burleigh, Scott C.

    2012-01-01

    The current CCSDS Telecommand (TC) Recommendations [1-3] have essentially been in use since the early 1960s. The purpose of this paper is to propose a successor protocol to TC. The current CCSDS recommendations can only accommodate telecommand rates up to approximately 1 Mbit/s. However, today's spacecraft are storehouses for software, including software for Field Programmable Gate Arrays (FPGAs), which are rapidly replacing unique hardware systems. Changes to flight software occasionally require uplinks to deliver very large volumes of data. In the opposite direction, high-rate downlink missions that use the acknowledged CCSDS File Delivery Protocol (CFDP) [4] will increase the uplink data rate requirements. It is calculated that a 5 Mbit/s downlink could saturate a 4 kbit/s uplink with CFDP downlink responses: negative acknowledgements (NAKs), FINISHs, End-of-File (EOF), and Acknowledgements (ACKs). Moreover, it is anticipated that uplink rates of 10 to 20 Mbit/s will be required to support manned missions. The current TC recommendations cannot meet these new demands. Specifically, they are very tightly coupled to the Bose-Chaudhuri-Hocquenghem (BCH) code in [2]. This protocol requires that an uncorrectable BCH codeword delimit the TC frame and terminate the randomization process. This method greatly limits telecom performance since only the BCH code can support the protocol. More modern techniques such as the CCSDS Low Density Parity Check (LDPC) [5] codes can provide a minimum performance gain of up to 6 times higher command data rates as long as sufficient power is available in the data. This paper will describe the proposed protocol format, trade-offs, and advantages offered, along with a discussion of how reliable communications takes place at higher nominal rates.

  5. Cooperative optimization and their application in LDPC codes

    NASA Astrophysics Data System (ADS)

    Chen, Ke; Rong, Jian; Zhong, Xiaochun

    2008-10-01

    Cooperative optimization is a new way of finding the global optima of complicated functions of many variables. The proposed algorithm is a class of message-passing algorithms and has solid theoretical foundations. It can achieve good coding gains over the sum-product algorithm for LDPC codes. For (6561, 4096) LDPC codes, the proposed algorithm achieves 2.0 dB gain over the sum-product algorithm at a BER of 4×10^-7. The decoding complexity of the proposed algorithm is lower than that of the sum-product algorithm; furthermore, it can achieve a much lower error floor once Eb/N0 is above 1.8 dB.

  6. High Order Modulation Protograph Codes

    NASA Technical Reports Server (NTRS)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods for designing protograph-based bit-interleaved coded modulation that is general and applies to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.

  7. FPGA-based LDPC-coded APSK for optical communication systems.

    PubMed

    Zou, Ding; Lin, Changyu; Djordjevic, Ivan B

    2017-02-20

    In this paper, with the aid of mutual information and generalized mutual information (GMI) capacity analyses, it is shown that geometrically shaped APSK that mimics an optimal Gaussian distribution with equiprobable signaling, together with the corresponding Gray-mapping rules, can approach the Shannon limit more closely than conventional quadrature amplitude modulation (QAM) over a certain range of FEC overheads, for both 16-APSK and 64-APSK. The field-programmable gate array (FPGA) based LDPC-coded APSK emulation is conducted on block-interleaver-based and bit-interleaver-based systems; the results verify a significant improvement in hardware-efficient bit-interleaver-based systems. In bit-interleaver-based emulation, LDPC-coded 64-APSK outperforms 64-QAM, in terms of symbol signal-to-noise ratio (SNR), by 0.1 dB, 0.2 dB, and 0.3 dB at spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz, respectively. It is found by emulation that LDPC-coded 64-APSK at spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz is 1.6 dB, 1.7 dB, and 2.2 dB away from the GMI capacity.

  8. Channel coding for underwater acoustic single-carrier CDMA communication system

    NASA Astrophysics Data System (ADS)

    Liu, Lanjun; Zhang, Yonglei; Zhang, Pengcheng; Zhou, Lin; Niu, Jiong

    2017-01-01

    CDMA is an effective multiple-access protocol for underwater acoustic networks, and channel coding can effectively reduce the bit error rate (BER) of an underwater acoustic communication system. To meet the requirements of underwater acoustic mobile networks based on CDMA, an underwater acoustic single-carrier CDMA communication system (UWA/SCCDMA) based on direct-sequence spread spectrum is proposed, and its channel coding scheme is studied based on convolutional, RA, Turbo and LDPC coding, respectively. The implementation steps of the Viterbi algorithm for convolutional coding, the BP and minimum-sum algorithms for RA coding, the Log-MAP and SOVA algorithms for Turbo coding, and the sum-product algorithm for LDPC coding are given. A Matlab-based UWA/SCCDMA simulation system is designed. Simulation results show that UWA/SCCDMA based on RA, Turbo and LDPC coding performs well, with a communication BER below 10^-6 in the underwater acoustic channel at low signal-to-noise ratios (SNR) from -12 dB to -10 dB, which is about two orders of magnitude lower than that of convolutional coding. The system based on Turbo coding with the Log-MAP algorithm has the best performance.

  9. Study regarding the density evolution of messages and the characteristic functions associated of a LDPC code

    NASA Astrophysics Data System (ADS)

    Drăghici, S.; Proştean, O.; Răduca, E.; Haţiegan, C.; Hălălae, I.; Pădureanu, I.; Nedeloni, M.; (Barboni Haţiegan, L.

    2017-01-01

    In this paper, a method that associates a set of characteristic functions with an LDPC code is presented, together with functions that describe the density evolution of the messages passed along the edges of a Tanner graph. Graphic representations of the density evolution are given, and the study and simulation of the likelihood threshold, which yields the asymptotic boundaries between which decodable codes exist, were carried out using MathCad V14 software.

  10. Fast and Flexible Successive-Cancellation List Decoders for Polar Codes

    NASA Astrophysics Data System (ADS)

    Hashemi, Seyyed Ali; Condo, Carlo; Gross, Warren J.

    2017-11-01

    Polar codes have gained a significant amount of attention during the past few years and have been selected as a coding scheme for the next generation of the mobile broadband standard. Among decoding schemes, successive-cancellation list (SCL) decoding provides a reasonable trade-off between error-correction performance and hardware implementation complexity when used to decode polar codes, at the cost of limited throughput. The simplified SCL (SSCL) and its extension SSCL-SPC increase the speed of decoding by removing redundant calculations when encountering particular information and frozen bit patterns (rate-one and single-parity-check codes), while keeping the error-correction performance unaltered. In this paper, we improve SSCL and SSCL-SPC by proving that the list size imposes a specific number of bit estimations required to decode rate-one and single-parity-check codes. Thus, the number of estimations can be limited while guaranteeing exactly the same error-correction performance as if all bits of the code were estimated. We call the new decoding algorithms Fast-SSCL and Fast-SSCL-SPC. Moreover, we show that the number of bit estimations in a practical application can be tuned to achieve a desirable speed, while keeping the error-correction performance almost unchanged. Hardware architectures implementing both algorithms are then described and implemented: it is shown that our design can achieve 1.86 Gb/s throughput, higher than the best state-of-the-art decoders.

  11. Development and characterisation of FPGA modems using forward error correction for FSOC

    NASA Astrophysics Data System (ADS)

    Mudge, Kerry A.; Grant, Kenneth J.; Clare, Bradley A.; Biggs, Colin L.; Cowley, William G.; Manning, Sean; Lechner, Gottfried

    2016-05-01

    In this paper we report on the performance of a free-space optical communications (FSOC) modem implemented in an FPGA, with data rate variable up to 60 Mbps. To combat the effects of atmospheric scintillation, a rate-7/8 low-density parity-check (LDPC) forward error correction code is implemented, along with custom bit and frame synchronisation and a variable-length interleaver. We report on the systematic performance evaluation of an optical communications link employing the FPGA modems, using a laboratory test-bed to simulate the effects of atmospheric turbulence. Log-normal fading is imposed onto the transmitted free-space beam using a custom LabVIEW program and an acousto-optic modulator. The scintillation index, transmitted optical power and scintillation bandwidth can all be independently varied, allowing testing over a wide range of optical channel conditions. In particular, bit-error-ratio (BER) performance for different interleaver lengths is investigated as a function of the scintillation bandwidth. The laboratory results are compared to field measurements over 1.5 km.

  12. On the reduced-complexity of LDPC decoders for beyond 400 Gb/s serial optical transmission

    NASA Astrophysics Data System (ADS)

    Djordjevic, Ivan B.; Xu, Lei; Wang, Ting

    2010-12-01

    Two reduced-complexity (RC) LDPC decoders are proposed, which can be used in combination with large-girth LDPC codes to enable beyond-400 Gb/s serial optical transmission. We show that the optimally attenuated RC min-sum algorithm performs only 0.45 dB worse than the conventional sum-product algorithm, while having lower storage memory requirements and much lower latency. We further evaluate the proposed algorithms for use in beyond-400 Gb/s serial optical transmission in combination with a PolMUX 32-IPQ-based signal constellation and show that low BERs can be achieved at medium optical SNRs, while achieving a net coding gain above 11.4 dB.

  13. Variable Coded Modulation software simulation

    NASA Astrophysics Data System (ADS)

    Sielicki, Thomas A.; Hamkins, Jon; Thorsen, Denise

    This paper reports on the design and performance of a new Variable Coded Modulation (VCM) system. The VCM system comprises eight of NASA's recommended codes from the Consultative Committee for Space Data Systems (CCSDS) standards, including four turbo and four AR4JA/C2 low-density parity-check codes, together with six modulation types (BPSK, QPSK, 8-PSK, 16-APSK, 32-APSK, 64-APSK). The signaling protocol for the transmission mode is based on a CCSDS recommendation. The coded modulation may be dynamically chosen, block to block, to optimize throughput.

  14. LDPC-based iterative joint source-channel decoding for JPEG2000.

    PubMed

    Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane

    2007-02-01

    A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.

  15. 428-Gb/s single-channel coherent optical OFDM transmission over 960-km SSMF with constellation expansion and LDPC coding.

    PubMed

    Yang, Qi; Al Amin, Abdullah; Chen, Xi; Ma, Yiran; Chen, Simin; Shieh, William

    2010-08-02

    High-order modulation formats and advanced error-correcting codes (ECC) are two promising techniques for improving the performance of ultrahigh-speed optical transport networks. In this paper, we present record receiver sensitivity for 107 Gb/s CO-OFDM transmission via constellation expansion to 16-QAM and rate-1/2 LDPC coding. We also show single-channel transmission of a 428-Gb/s CO-OFDM signal over 960 km of standard single-mode fiber (SSMF) without Raman amplification.

  16. Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique

    NASA Astrophysics Data System (ADS)

    Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi

    Reducing power dissipation in LDPC code decoders is a major challenge in applying them to practical digital communication systems. In this paper, we propose a low-power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) the intermediate message-compression technique enables the decoder to reduce the required memory capacity and the write power dissipation; (ii) a clock-gated shift-register-based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of these two techniques enables the decoder to reduce power dissipation while maintaining decoding throughput. Simulation results show that the proposed architecture improves power efficiency by up to 52% and 18% compared to decoders based on the overlapped schedule and the rapid-convergence schedule, respectively, without the proposed techniques.

  17. Memory-efficient decoding of LDPC codes

    NASA Technical Reports Server (NTRS)

    Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon

    2005-01-01

    We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, our simulations show that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer operates with less than 0.1 dB quantization loss.
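
    The objective being maximized can be written down directly: the mutual information between the BPSK source and the quantizer output. A short self-contained sketch (the thresholds, noise level, and the 1.3x variant below are arbitrary illustrations, not the optimized quantizers of the paper):

        import math

        def gauss_cdf(x, mu, s):
            return 0.5 * (1.0 + math.erf((x - mu) / (s * math.sqrt(2.0))))

        def quantizer_mi(thresholds, sigma):
            """I(X;Q) for equiprobable BPSK X = +/-1 observed as Y = X + N(0, sigma^2)
            and quantized into the cells defined by the sorted thresholds."""
            edges = [-math.inf] + sorted(thresholds) + [math.inf]
            mi = 0.0
            for lo, hi in zip(edges[:-1], edges[1:]):
                p_pos = gauss_cdf(hi, +1.0, sigma) - gauss_cdf(lo, +1.0, sigma)
                p_neg = gauss_cdf(hi, -1.0, sigma) - gauss_cdf(lo, -1.0, sigma)
                p_q = 0.5 * (p_pos + p_neg)          # cell probability
                for p_cond in (p_pos, p_neg):
                    if p_cond > 0.0:
                        mi += 0.5 * p_cond * math.log2(p_cond / p_q)
            return mi

        # A 3-bit quantizer has 8 cells, hence 7 thresholds: compare two candidate designs
        uniform = [-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5]
        print(quantizer_mi(uniform, sigma=0.8))
        print(quantizer_mi([1.3 * t for t in uniform], sigma=0.8))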

  18. PMD compensation in multilevel coded-modulation schemes with coherent detection using BLAST algorithm and iterative polarization cancellation.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2008-09-15

    We present two PMD compensation schemes suitable for use in multilevel (M ≥ 2) block-coded modulation schemes with coherent detection. The first scheme is based on a BLAST-type polarization-interference cancellation scheme, and the second is based on iterative polarization cancellation. Both schemes use LDPC codes as channel codes. The proposed PMD compensation schemes are evaluated by employing coded-OFDM and coherent detection. When used in combination with girth-10 LDPC codes, these schemes outperform polarization-time-coding-based OFDM by 1 dB at a BER of 10^-9, and provide two times higher spectral efficiency. The proposed schemes perform comparably and are able to compensate even 1200 ps of differential group delay with negligible penalty.

  19. Quantum error correcting codes and 4-dimensional arithmetic hyperbolic manifolds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guth, Larry, E-mail: lguth@math.mit.edu; Lubotzky, Alexander, E-mail: alex.lubotzky@mail.huji.ac.il

    2014-08-15

    Using 4-dimensional arithmetic hyperbolic manifolds, we construct some new homological quantum error correcting codes. They are low-density parity-check codes with linear rate and distance n^ε. Their rate is evaluated via Euler characteristic arguments and their distance using Z_2-systolic geometry. This construction answers a question of Zémor [“On Cayley graphs, surface codes, and the limits of homological coding for quantum error correction,” in Proceedings of Second International Workshop on Coding and Cryptology (IWCC), Lecture Notes in Computer Science Vol. 5557 (2009), pp. 259–273], who asked whether homological codes with such parameters could exist at all.

  20. Artificial Intelligence in Space Platforms.

    DTIC Science & Technology

    1984-12-01

    ...technician would be responsible for filling the data base with DSCS-particular information concerning thrusters, 90 b...fault conditions and performing predefined self-preserving (entering a safe-hold state) switching actions. Is capable of storing contingency or...on-board for syntactical errors (parity, sign, logic, time). Uses coding or other self-checking techniques to minimize the effects of internally...

  1. Error-correcting codes on scale-free networks

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Hoon; Ko, Young-Jo

    2004-06-01

    We investigate the potential of scale-free networks as error-correcting codes. We find that irregular low-density parity-check codes with the highest performance known to date have degree distributions well fitted by a power-law function p(k) ~ k^-γ with γ close to 2, which suggests that codes built on scale-free networks with appropriate power exponents can be good error-correcting codes, with a performance possibly approaching the Shannon limit. We demonstrate for an erasure channel that codes with a power-law degree distribution of the form p(k) = C(k+α)^-γ, with k ⩾ 2 and suitable selection of the parameters α and γ, indeed have very good error-correction capabilities.
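
    As a small illustration of the distribution quoted above, the following Python sketch draws variable-node degrees from the truncated power law p(k) = C(k+α)^-γ with k ≥ 2. The cutoff k_max and the parameter values are illustrative assumptions; the sampled degrees would then seed a Tanner-graph construction, e.g., via a configuration-model edge-socket permutation:

        import numpy as np

        def sample_degrees(n, alpha, gamma, k_min=2, k_max=100, seed=0):
            """Draw n variable-node degrees from p(k) ~ (k + alpha)^(-gamma)."""
            rng = np.random.default_rng(seed)
            ks = np.arange(k_min, k_max + 1)
            p = (ks + alpha) ** (-gamma)
            p /= p.sum()                 # the normalization constant C
            return rng.choice(ks, size=n, p=p)

        degrees = sample_degrees(n=10_000, alpha=0.5, gamma=2.0)
        print("mean degree:", degrees.mean())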

  2. Numerical and analytical bounds on threshold error rates for hypergraph-product codes

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Prabhakar, Sanjay; Dumer, Ilya; Pryadko, Leonid P.

    2018-06-01

    We study analytically and numerically the decoding properties of finite-rate hypergraph-product quantum low-density parity-check codes obtained from random (3,4)-regular Gallager codes, with a simple model of independent X and Z errors. Several nontrivial lower and upper bounds for the decodable region are constructed analytically by analyzing the properties of the homological difference, equal to minus the logarithm of the maximum-likelihood decoding probability for a given syndrome. Numerical results include an upper bound for the decodable region from specific heat calculations in associated Ising models and a minimum-weight decoding threshold of approximately 7%.

  3. Multi-Source Cooperative Data Collection with a Mobile Sink for the Wireless Sensor Network.

    PubMed

    Han, Changcai; Yang, Jinsheng

    2017-10-30

    The multi-source cooperation integrating distributed low-density parity-check codes is investigated to jointly collect data from multiple sensor nodes to the mobile sink in a wireless sensor network. One-round and two-round cooperative data collection schemes are proposed according to the moving trajectories of the sink node. Specifically, two sparse cooperation models are first formed based on the geographical locations of the sensor source nodes, the impairment of inter-node wireless channels, and the moving trajectories of the mobile sink. Then, distributed low-density parity-check codes are devised to match the directed graphs and cooperation matrices associated with the cooperation models. In the proposed schemes, each source node has quite low complexity, attributable to the sparse cooperation and the distributed processing. Simulation results reveal that the proposed cooperative data collection schemes achieve significant bit-error-rate performance and that the two-round cooperation outperforms the one-round scheme. The performance can be improved further when more source nodes participate in the sparse cooperation. For the two-round data collection schemes, the performance is evaluated for wireless sensor networks with different moving trajectories and varying data sizes.

  4. Multi-Source Cooperative Data Collection with a Mobile Sink for the Wireless Sensor Network

    PubMed Central

    Han, Changcai; Yang, Jinsheng

    2017-01-01

    The multi-source cooperation integrating distributed low-density parity-check codes is investigated to jointly collect data from multiple sensor nodes to the mobile sink in a wireless sensor network. One-round and two-round cooperative data collection schemes are proposed according to the moving trajectories of the sink node. Specifically, two sparse cooperation models are first formed based on the geographical locations of the sensor source nodes, the impairment of inter-node wireless channels, and the moving trajectories of the mobile sink. Then, distributed low-density parity-check codes are devised to match the directed graphs and cooperation matrices associated with the cooperation models. In the proposed schemes, each source node has quite low complexity, attributable to the sparse cooperation and the distributed processing. Simulation results reveal that the proposed cooperative data collection schemes achieve significant bit-error-rate performance and that the two-round cooperation outperforms the one-round scheme. The performance can be improved further when more source nodes participate in the sparse cooperation. For the two-round data collection schemes, the performance is evaluated for wireless sensor networks with different moving trajectories and varying data sizes. PMID:29084155

  5. Information-reduced Carrier Synchronization of Iterative Decoded BPSK and QPSK using Soft Decision (Extrinsic) Feedback

    NASA Technical Reports Server (NTRS)

    Simon, Marvin; Valles, Esteban; Jones, Christopher

    2008-01-01

    This paper addresses the carrier-phase estimation problem under low SNR conditions as are typical of turbo- and LDPC-coded applications. In previous publications by the first author, closed-loop carrier synchronization schemes for error-correction coded BPSK and QPSK modulation were proposed that were based on feeding back hard data decisions at the input of the loop, the purpose being to remove the modulation prior to attempting to track the carrier phase as opposed to the more conventional decision-feedback schemes that incorporate such feedback inside the loop. In this paper, we consider an alternative approach wherein the extrinsic soft information from the iterative decoder of turbo or LDPC codes is instead used as the feedback.

  6. Efficacy analysis of LDPC coded APSK modulated differential space-time-frequency coded for wireless body area network using MB-pulsed OFDM UWB technology.

    PubMed

    Manimegalai, C T; Gauni, Sabitha; Kalimuthu, K

    2017-12-04

    Wireless body area network (WBAN) is a breakthrough technology in healthcare areas such as hospitals and telemedicine. The human body is a complex mixture of different tissues, and the nature of electromagnetic signal propagation is expected to be distinct in each of these tissues. This forms the basis for WBAN, which differs from other environments. In this paper, knowledge of the ultra-wideband (UWB) channel is exploited in the WBAN (IEEE 802.15.6) system. Parameter measurements are taken in the frequency range 3.1-10.6 GHz. The proposed system transmits data at up to 480 Mbps by using LDPC-coded, APSK-modulated, differential space-time-frequency coded MB-OFDM to increase throughput and power efficiency.

  7. Error-correction coding for digital communications

    NASA Astrophysics Data System (ADS)

    Clark, G. C., Jr.; Cain, J. B.

    This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.

  8. A Direct Mapping of Max k-SAT and High Order Parity Checks to a Chimera Graph

    PubMed Central

    Chancellor, N.; Zohren, S.; Warburton, P. A.; Benjamin, S. C.; Roberts, S.

    2016-01-01

    We demonstrate a direct mapping of max k-SAT problems (and weighted max k-SAT) to a Chimera graph, which is the non-planar hardware graph of the devices built by D-Wave Systems Inc. We further show that this mapping can be used to map a similar class of maximum satisfiability problems in which the clauses are replaced by parity checks over potentially large numbers of bits. The latter is of specific interest for applications in decoding for communication. We discuss an example in which the decoding of a turbo code, which has been demonstrated to perform near the Shannon limit, can be mapped to a Chimera graph. The weighted max k-SAT problem is the most general class of satisfiability problems, so our result effectively demonstrates how any satisfiability problem may be directly mapped to a Chimera graph. Our methods faithfully reproduce the low-energy spectrum of the target problems, and so may also be used for maximum entropy inference. PMID:27857179

  9. Adaptive software-defined coded modulation for ultra-high-speed optical transport

    NASA Astrophysics Data System (ADS)

    Djordjevic, Ivan B.; Zhang, Yequn

    2013-10-01

    In optically-routed networks, different wavelength channels carrying traffic to different destinations can have quite different optical signal-to-noise ratios (OSNRs), and the signal is differently impacted by various channel impairments. Regardless of the data destination, an optical transport system (OTS) must provide the target bit-error rate (BER) performance. To achieve this, we adjust the forward error correction (FEC) strength: depending on the information obtained from the monitoring channels, we select the code rate matched to the OSNR range into which the current channel OSNR falls. To avoid frame synchronization issues, we keep the codeword length fixed, independent of the FEC code being employed. The common denominator is the employment of quasi-cyclic (QC-) LDPC codes in FEC. For high-speed implementation, low-complexity LDPC decoding algorithms are needed, and some of them are described in this invited paper. Instead of conventional QAM-based modulation schemes, we employ signal constellations obtained by the optimum signal constellation design (OSCD) algorithm. To improve the spectral efficiency, we perform simultaneous rate adaptation and signal constellation size selection so that the product (number of bits per symbol) × (code rate) is closest to the channel capacity. Further, we describe the advantages of using 4D signaling instead of polarization-division multiplexed (PDM) QAM, by using 4D MAP detection combined with LDPC coding in a turbo equalization fashion. Finally, to address the limited bandwidth of the information infrastructure, high energy consumption, and the heterogeneity of optical networks, we describe an adaptive energy-efficient hybrid coded-modulation scheme which, in addition to amplitude, phase, and polarization state, employs the spatial modes as additional basis functions for multidimensional coded modulation.
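
    In its simplest form, the rate-adaptation step described above amounts to choosing, from a menu of constellation sizes and QC-LDPC code rates, the pair whose spectral efficiency log2(M) × R comes closest to the capacity estimated from the monitored OSNR. A toy Python sketch (the rate/size menu, the capacity proxy, and the margin are illustrative assumptions, not the paper's values):

        import math

        RATES = [0.6, 0.75, 0.8, 0.9]      # assumed menu of QC-LDPC code rates
        SIZES = [4, 16, 64, 256]           # constellation sizes; log2(M) bits/symbol

        def capacity_bits_per_symbol(osnr_db):
            """Crude AWGN-style capacity proxy from the monitored OSNR."""
            snr = 10 ** (osnr_db / 10)
            return math.log2(1 + snr)

        def select_mode(osnr_db, margin=0.5):
            """Pick the (M, R) pair maximizing log2(M)*R within capacity - margin."""
            cap = capacity_bits_per_symbol(osnr_db) - margin
            feasible = [(math.log2(m) * r, m, r)
                        for m in SIZES for r in RATES
                        if math.log2(m) * r <= cap]
            if not feasible:
                return SIZES[0], RATES[0]  # fall back to the most robust mode
            _, m, r = max(feasible)
            return m, r

        for osnr in (8, 14, 20):
            print(osnr, "dB ->", select_mode(osnr))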

  10. Uplink Coding

    NASA Technical Reports Server (NTRS)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the objectives, meeting goals and overall NASA goals for the NASA Data Standards Working Group. The presentation includes information on the technical progress surrounding the objective, short LDPC codes, and the general results on the Pu-Pw tradeoff.

  11. The analysis of convolutional codes via the extended Smith algorithm

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Onyszchuk, I.

    1993-01-01

    Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.

  12. Optimal signal constellation design for ultra-high-speed optical transport in the presence of nonlinear phase noise.

    PubMed

    Liu, Tao; Djordjevic, Ivan B

    2014-12-29

    In this paper, we first describe an optimal signal constellation design algorithm suitable for coherent optical channels dominated by linear phase noise. Then, we modify this algorithm to suit channels dominated by nonlinear phase noise. In the optimization procedure, the proposed algorithm uses the cumulative log-likelihood function instead of the Euclidean distance. Further, an LDPC coded modulation scheme is proposed for use in combination with the signal constellations obtained by the proposed algorithm. Monte Carlo simulations indicate that LDPC-coded modulation schemes employing the new constellation sets, obtained by our new signal constellation design algorithm, significantly outperform corresponding QAM constellations in terms of transmission distance and have better nonlinearity tolerance.

  13. Throughput and latency programmable optical transceiver by using DSP and FEC control.

    PubMed

    Tanimura, Takahito; Hoshida, Takeshi; Kato, Tomoyuki; Watanabe, Shigeki; Suzuki, Makoto; Morikawa, Hiroyuki

    2017-05-15

    We propose and experimentally demonstrate a proof-of-concept programmable optical transceiver that enables simultaneous optimization of multiple programmable parameters (modulation format, symbol rate, power allocation, and FEC) to satisfy throughput, signal quality, and latency requirements. The proposed optical transceiver also accommodates multiple sub-channels that can transport different optical signals with different requirements. The many degrees of freedom among the parameters often make it difficult to find the optimum combination, owing to an explosion in the number of combinations. The proposed optical transceiver reduces the number of combinations and finds feasible sets of programmable parameters by using constraints on the parameters combined with a precise analytical model. For precise BER prediction with a specified set of parameters, we model the sub-channel BER as a function of OSNR, modulation format, symbol rate, and the power difference between sub-channels. Next, we formulate simple constraints on the parameters and combine them with the analytical model to seek feasible sets of programmable parameters. Finally, we experimentally demonstrate end-to-end operation of the proposed optical transceiver in an offline manner, including low-density parity-check (LDPC) FEC encoding and decoding, under a specific use case with a latency-sensitive application and 40-km transmission.

  14. Nonlinear Demodulation and Channel Coding in EBPSK Scheme

    PubMed Central

    Chen, Xianqing; Wu, Lenan

    2012-01-01

    The extended binary phase shift keying (EBPSK) is an efficient modulation technique, and a special impacting filter (SIF) is used in its demodulator to improve the bit error rate (BER) performance. However, the conventional threshold decision cannot achieve the optimum performance, and the SIF makes it more difficult to obtain the posterior probability needed for LDPC decoding. In this paper, we concentrate not only on reducing the BER of demodulation, but also on providing accurate posterior probability estimates (PPEs). A new approach to nonlinear demodulation based on the support vector machine (SVM) classifier is introduced. The SVM method, which selects only a few sampling points from the filter output, was used for obtaining PPEs. The simulation results show that accurate posterior probabilities can be obtained with this method and that the BER performance can be improved significantly by applying LDPC codes. Moreover, we analyzed the effect of obtaining the posterior probability with different methods and different sampling rates. We show that the SVM method has greater advantages under poor channel conditions and is less sensitive to the sampling rate than other methods. Thus, SVM is an effective method for EBPSK demodulation and for obtaining the posterior probability for LDPC decoding. PMID:23213281

  15. Nonlinear demodulation and channel coding in EBPSK scheme.

    PubMed

    Chen, Xianqing; Wu, Lenan

    2012-01-01

    The extended binary phase shift keying (EBPSK) is an efficient modulation technique, and a special impacting filter (SIF) is used in its demodulator to improve the bit error rate (BER) performance. However, the conventional threshold decision cannot achieve the optimum performance, and the SIF makes it more difficult to obtain the posterior probability needed for LDPC decoding. In this paper, we concentrate not only on reducing the BER of demodulation, but also on providing accurate posterior probability estimates (PPEs). A new approach to nonlinear demodulation based on the support vector machine (SVM) classifier is introduced. The SVM method, which selects only a few sampling points from the filter output, was used for obtaining PPEs. The simulation results show that accurate posterior probabilities can be obtained with this method and that the BER performance can be improved significantly by applying LDPC codes. Moreover, we analyzed the effect of obtaining the posterior probability with different methods and different sampling rates. We show that the SVM method has greater advantages under poor channel conditions and is less sensitive to the sampling rate than other methods. Thus, SVM is an effective method for EBPSK demodulation and for obtaining the posterior probability for LDPC decoding.
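
    As an illustration of the SVM-based posterior estimation idea above, the sketch below uses scikit-learn's probability-calibrated SVC on synthetic two-class samples standing in for a few sampling points of the SIF output, and then converts the posteriors into LLRs for a soft-decision LDPC decoder. The features, sizes, and parameters are assumptions, not the authors' setup:

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        # synthetic stand-in for a few sampling points of the filter output per bit
        n, d = 2000, 4
        bits = rng.integers(0, 2, n)
        X = rng.normal(0.0, 1.0, (n, d)) + 1.5 * bits[:, None]

        clf = SVC(kernel="rbf", probability=True)   # Platt-scaled posteriors
        clf.fit(X[:1500], bits[:1500])

        p1 = clf.predict_proba(X[1500:])[:, list(clf.classes_).index(1)]
        # LLR = log P(bit=0|y) / P(bit=1|y), ready for a soft-decision decoder
        eps = 1e-12
        llr = np.log((1 - p1 + eps) / (p1 + eps))
        print("mean |LLR| on held-out samples:", float(np.abs(llr).mean()))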

  16. A source-channel coding approach to digital image protection and self-recovery.

    PubMed

    Sarreshtedari, Saeed; Akhaee, Mohammad Ali

    2015-07-01

    Watermarking algorithms have recently been widely applied to the field of image forensics. One such forensic application is the protection of images against tampering. For this purpose, we need to design a watermarking algorithm fulfilling two purposes in case of image tampering: 1) detecting the tampered area of the received image and 2) recovering the lost information in the tampered zones. State-of-the-art techniques accomplish these tasks using watermarks consisting of check bits and reference bits. Check bits are used for tampering detection, whereas reference bits carry information about the whole image. The problem of recovering the lost reference bits still stands. This paper shows that, with the tampering location known, image tampering can be modeled and dealt with as an erasure error. Therefore, an appropriate design of the channel code can protect the reference bits against tampering. In the proposed method, the total watermark bit-budget is dedicated to three groups: 1) source encoder output bits; 2) channel code parity bits; and 3) check bits. In the watermark-embedding phase, the original image is source coded and the output bit stream is protected using an appropriate channel encoder. For image recovery, erasure locations detected by the check bits help the channel erasure decoder retrieve the original source-encoded image. Experimental results show that our proposed scheme significantly outperforms recent techniques in terms of image quality for both watermarked and recovered images. The watermarked image quality gain is achieved by spending less bit-budget on the watermark, while the image recovery quality is considerably improved as a consequence of the consistent performance of the designed source and channel codes.

  17. Design of a Microprogram Control Unit with Concurrent Error Detection.

    DTIC Science & Technology

    1984-08-01

    ...However, the CED concept is mainly applied to various codes, data transmission, and simple functional units, such as arithmetic units. Little work has been done in the control unit area. Previous work is primarily in the use of classical self-checking circuits, using bit slicing, parity, and m-out-of-n...

  18. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1991-01-01

    Shannon's capacity bound shows that coding can achieve large reductions in the required signal-to-noise ratio per information bit, E_b/N_0 (where E_b is the energy per bit and N_0/2 is the double-sided noise density), in comparison to uncoded schemes. For bandwidth efficiencies of 2 bits/symbol or greater, these improvements were obtained through the use of Trellis Coded Modulation and Block Coded Modulation. A method of obtaining these high efficiencies using multidimensional Multiple Phase Shift Keying (MPSK) and Quadrature Amplitude Modulation (QAM) signal sets with trellis coding is described. These schemes have advantages in decoding speed, phase transparency, and coding gain in comparison to other trellis coding schemes. Finally, a general parity check equation for rotationally invariant trellis codes is introduced, from which non-linear codes for two-dimensional MPSK and QAM signal sets are found. These codes are fully transparent to all rotations of the signal set.

  19. Two-terminal video coding.

    PubMed

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  20. Joint Schemes for Physical Layer Security and Error Correction

    ERIC Educational Resources Information Center

    Adamo, Oluwayomi

    2011-01-01

    The major challenges facing resource-constrained wireless devices are error resilience, security, and speed. Three joint schemes are presented in this research, which can be broadly divided into error-correction based and cipher based. The error-correction-based ciphers take advantage of the properties of LDPC codes and the Nordstrom-Robinson code. A…

  1. A Simulation Testbed for Adaptive Modulation and Coding in Airborne Telemetry (Brief)

    DTIC Science & Technology

    2014-10-01

    [Slide excerpt] MATLAB GUI-based simulation testbed for adaptive modulation and coding in airborne telemetry, with SOQPSK and OFDM (802.11a-like) waveforms; configurable modulations (BPSK, QPSK, 16-QAM, 64-QAM), cyclic prefix lengths, and number of subcarriers; LDPC coding at rates 1/2, 2/3, 3/4, and 4/5. October 2014. DISTRIBUTION STATEMENT A: approved for public release, distribution unlimited.

  2. Optimum Boundaries of Signal-to-Noise Ratio for Adaptive Code Modulations

    DTIC Science & Technology

    2017-11-14

    ...1510–1521, Feb. 2015. [2] Pursley, M. B. and Royster, T. C., “Adaptive-rate nonbinary LDPC coding for frequency-hop communications,” IEEE... and this can cause a very narrowband noise near the center frequency during USRP signal acquisition and generation. This can cause a high BER... Final report, Air Force Research Laboratory, Space Vehicles Directorate; approved for public release, distribution unlimited.

  3. A (72, 36; 15) box code

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1993-01-01

    A (72,36;15) box code is constructed as a 9 x 8 matrix whose columns add to form an extended BCH-Hamming (8,4;4) code and whose rows sum to odd or even parity. The newly constructed code, due to its matrix form, is easily decodable for all seven-error and many eight-error patterns. The code comes from a slight modification in the parity (eighth) dimension of the Reed-Solomon (8,4;5) code over GF(512). Error correction uses the row sum parity information to detect errors, which then become erasures in a Reed-Solomon correction algorithm.

  4. Implementing a strand of a scalable fault-tolerant quantum computing fabric.

    PubMed

    Chow, Jerry M; Gambetta, Jay M; Magesan, Easwar; Abraham, David W; Cross, Andrew W; Johnson, B R; Masluk, Nicholas A; Ryan, Colm A; Smolin, John A; Srinivasan, Srikanth J; Steffen, M

    2014-06-24

    With favourable error thresholds and requiring only nearest-neighbour interactions on a lattice, the surface code is an error-correcting code that has garnered considerable attention. At the heart of this code is the ability to perform a low-weight parity measurement of local code qubits. Here we demonstrate high-fidelity parity detection of two code qubits via measurement of a third syndrome qubit. With high-fidelity gates, we generate entanglement distributed across three superconducting qubits in a lattice where each code qubit is coupled to two bus resonators. Via high-fidelity measurement of the syndrome qubit, we deterministically entangle the code qubits in either an even or odd parity Bell state, conditioned on the syndrome qubit state. Finally, to fully characterize this parity readout, we develop a measurement tomography protocol. The lattice presented naturally extends to larger networks of qubits, outlining a path towards fault-tolerant quantum computing.

  5. A framework for employing femtosatellites in planetary science missions, including a proposed mission concept for Titan

    NASA Astrophysics Data System (ADS)

    Perez, Tracie Renea Conn

    Over the past 15 years, there has been a growing interest in femtosatellites, a class of tiny satellites having mass less than 100 grams. Research groups from Peru, Spain, England, Canada, and the United States have proposed femtosat designs and novel mission concepts for them. In fact, Peru made history in 2013 by releasing the first - and still only - femtosat tracked from LEO. However, femtosatellite applications in interplanetary missions have yet to be explored in detail. An interesting operations concept would be for a space probe to release numerous femtosatellites into orbit around a planetary object of interest, thereby augmenting the overall data collection capability of the mission. A planetary probe releasing hundreds of femtosats could complete an in-situ, simultaneous 3D mapping of a physical property of interest, achieving scientific investigations not possible for one probe operating alone. To study the technical challenges associated with such a mission, a conceptual mission design is proposed where femtosats are deployed from a host satellite orbiting Titan. The conceptual mission objective is presented: to study Titan's dynamic atmosphere. Then, the design challenges are addressed in turn. First, any science payload measurements that the femtosats provide are only useful if their corresponding locations can be determined. Specifically, what is required is a method of position determination for femtosatellites operating beyond Medium Earth Orbit and therefore beyond the help of GPS. A technique is presented which applies Kalman filtering to Doppler-shift measurements, allowing for orbit determination of the femtosats. Several case studies are presented demonstrating the usefulness of this approach. Second, due to the inherent power and computational limitations of a femtosatellite design, establishing a radio link between each chipsat and the mothersat will be difficult. To provide coding gain, a form of forward error correction (FEC) known as low-density parity-check (LDPC) coding is recommended. A specific low-complexity encoder, and accompanying decoder, have been implemented in the open-source software radio library, GNU Radio. Simulation results demonstrating bit error rate (BER) improvement are presented. Hardware for implementing the LDPC methods in a benchtop test is described, and future work on this topic is suggested. Third, the power and spatial constraints of femtosatellite designs likely restrict the payload to one or two sensors. Therefore, it is desirable to extract as much useful scientific data as possible from secondary sources, such as radiometric data. Estimating the atmospheric density model from different measurement sources is simulated and results are presented. The overall goal of this effort is to advance the field of miniature spacecraft-based technology and to highlight the advantages of using femtosatellites in future planetary exploration missions. By addressing several subsystem design challenges in this context, such a femtosat mission concept is one step closer to being feasible.
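
    The position-determination idea above rests on filtering noisy Doppler-shift observations. As a deliberately simplified illustration, the Python sketch below tracks a drifting Doppler shift with a two-state (shift, shift-rate) Kalman filter; the dynamics, noise levels, and measurement model are toy assumptions, far simpler than the orbit-determination filter developed in the dissertation:

        import numpy as np

        dt = 1.0
        F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [doppler shift, shift rate]
        H = np.array([[1.0, 0.0]])              # we observe the shift only
        Q = 1e-4 * np.eye(2)                    # process noise (assumed)
        R = np.array([[25.0]])                  # measurement noise, Hz^2 (assumed)

        x = np.zeros((2, 1))                    # state estimate
        P = 1e3 * np.eye(2)                     # state covariance

        rng = np.random.default_rng(2)
        true_f = 1000.0
        for _ in range(50):
            true_f += -2.0 * dt                 # slowly drifting true Doppler
            z = true_f + rng.normal(0.0, 5.0)   # noisy Doppler measurement
            # predict
            x = F @ x
            P = F @ P @ F.T + Q
            # update
            y = z - (H @ x)[0, 0]               # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
            x = x + K * y
            P = (np.eye(2) - K @ H) @ P

        print("estimated shift and rate:", x.ravel())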

  6. Research on Formation of Microsatellite Communication with Genetic Algorithm

    PubMed Central

    Wu, Guoqiang; Bai, Yuguang; Sun, Zhaowei

    2013-01-01

    For a formation of three microsatellites that fly in the same orbit and perform three-dimensional solid mapping of terrain, this paper proposes an optimizing design method for the space circular formation order based on an improved genetic algorithm and provides an intersatellite direct spread spectrum communication system. The calculation of the LEO formation-flying intersatellite links is guided by the special requirements of formation-flying microsatellite intersatellite links, and the transmitter power is confirmed through simulation. The method of space circular formation order design based on the improved genetic algorithm is given, and it can keep the formation order steady for a long time under various perturbing forces. The intersatellite direct spread spectrum communication system is also provided. It is found that, when the distance is 1 km and the data rate is 1 Mbps, the input waveform matches the output waveform well, and LDPC coding can improve the communication performance: the error-correcting capability of the (512, 256) LDPC code is distinctly better than that of the (2, 1, 7) convolutional code. The designed system can satisfy the communication requirements of microsatellites, so the presented method provides a significant theoretical foundation for formation flying and intersatellite communication. PMID:24078796

  7. Research on formation of microsatellite communication with genetic algorithm.

    PubMed

    Wu, Guoqiang; Bai, Yuguang; Sun, Zhaowei

    2013-01-01

    For a formation of three microsatellites that fly in the same orbit and perform three-dimensional solid mapping of terrain, this paper proposes an optimizing design method for the space circular formation order based on an improved genetic algorithm and provides an intersatellite direct spread spectrum communication system. The calculation of the LEO formation-flying intersatellite links is guided by the special requirements of formation-flying microsatellite intersatellite links, and the transmitter power is confirmed through simulation. The method of space circular formation order design based on the improved genetic algorithm is given, and it can keep the formation order steady for a long time under various perturbing forces. The intersatellite direct spread spectrum communication system is also provided. It is found that, when the distance is 1 km and the data rate is 1 Mbps, the input waveform matches the output waveform well, and LDPC coding can improve the communication performance: the error-correcting capability of the (512, 256) LDPC code is distinctly better than that of the (2, 1, 7) convolutional code. The designed system can satisfy the communication requirements of microsatellites, so the presented method provides a significant theoretical foundation for formation flying and intersatellite communication.

  8. LDPC decoder with a limited-precision FPGA-based floating-point multiplication coprocessor

    NASA Astrophysics Data System (ADS)

    Moberly, Raymond; O'Sullivan, Michael; Waheed, Khurram

    2007-09-01

    Implementing the sum-product algorithm in an FPGA with an embedded processor invites us to consider a tradeoff between computational precision and computational speed. The algorithm, known outside of the signal processing community as Pearl's belief propagation, is used for iterative soft-decision decoding of LDPC codes. We determined the feasibility of a coprocessor that will perform product computations. Our FPGA-based coprocessor design performs computer algebra with significantly less precision than the standard (e.g., integer, floating-point) operations of general-purpose processors. Using synthesis, targeting a 3,168-LUT Xilinx FPGA, we show that key components of a decoder are feasible and that the full single-precision decoder could be constructed using a larger part. Soft-decision decoding by the iterative belief propagation algorithm is impacted both positively and negatively by a reduction in the precision of the computation: reducing precision reduces the coding gain, but the limited-precision computation can operate faster. The proposed solution offers custom logic to perform computations with less precision, yet uses the floating-point format to interface with the software. Simulation results show the achievable coding gain, and synthesis results suggest the full capacity and performance of an FPGA-based coprocessor.
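
    The precision/speed tradeoff discussed above shows up directly in the sum-product check-node update. The Python sketch below compares the floating-point "tanh rule" with the same computation applied to messages rounded onto a fixed-point grid; the word lengths and clipping level are illustrative assumptions:

        import numpy as np

        def check_node(llrs):
            """Sum-product check-node update: outgoing LLR for each edge."""
            t = np.tanh(np.asarray(llrs, dtype=float) / 2.0)
            out = []
            for i in range(len(llrs)):
                prod = np.prod(np.delete(t, i))
                prod = np.clip(prod, -0.999999, 0.999999)  # keep atanh finite
                out.append(2.0 * np.arctanh(prod))
            return np.array(out)

        def quantize(llrs, frac_bits=3, clip=8.0):
            """Round messages onto a grid with 'frac_bits' fractional bits."""
            step = 2.0 ** (-frac_bits)
            return np.clip(np.round(np.asarray(llrs) / step) * step, -clip, clip)

        msgs = np.array([1.3, -0.7, 2.9, 0.4])
        full = check_node(msgs)
        low = check_node(quantize(msgs, frac_bits=3))
        print("float :", np.round(full, 3))
        print("fixed :", np.round(low, 3))
        print("max abs error:", np.abs(full - low).max())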

  9. Fault-tolerant corrector/detector chip for high-speed data processing

    DOEpatents

    Andaleon, David D.; Napolitano, Jr., Leonard M.; Redinbo, G. Robert; Shreeve, William O.

    1994-01-01

    An internally fault-tolerant data error detection and correction integrated circuit device (10) and a method of operating same. The device functions as a bidirectional data buffer between a 32-bit data processor and the remainder of a data processing system and provides a 32-bit datum with a relatively short eight bits of data-protecting parity. The 32 bits of data and eight bits of parity are partitioned into eight 4-bit nibbles and two 4-bit nibbles, respectively. For data flowing towards the processor, the data and parity nibbles are checked in parallel and in a single operation employing a dual-orthogonal-basis technique; the dual orthogonal basis increases the efficiency of the implementation. Any one of the ten (eight data, two parity) nibbles is correctable if erroneous, or two different erroneous nibbles are detectable. For data flowing away from the processor, the appropriate parity nibble values are calculated and transmitted to the system along with the data. The device regenerates parity values for data flowing in either direction and compares regenerated to generated parity with a totally self-checking equality checker. As such, the device is self-validating and able both to detect and to indicate an occurrence of an internal failure. A generalization of the device that protects 64-bit data with 16-bit parity against byte-wide errors is also presented.

  10. Fault-tolerant corrector/detector chip for high-speed data processing

    DOEpatents

    Andaleon, D.D.; Napolitano, L.M. Jr.; Redinbo, G.R.; Shreeve, W.O.

    1994-03-01

    An internally fault-tolerant data error detection and correction integrated circuit device and a method of operating same are described. The device functions as a bidirectional data buffer between a 32-bit data processor and the remainder of a data processing system and provides a 32-bit datum with a relatively short eight bits of data-protecting parity. The 32 bits of data and eight bits of parity are partitioned into eight 4-bit nibbles and two 4-bit nibbles, respectively. For data flowing towards the processor, the data and parity nibbles are checked in parallel and in a single operation employing a dual-orthogonal-basis technique; the dual orthogonal basis increases the efficiency of the implementation. Any one of the ten (eight data, two parity) nibbles is correctable if erroneous, or two different erroneous nibbles are detectable. For data flowing away from the processor, the appropriate parity nibble values are calculated and transmitted to the system along with the data. The device regenerates parity values for data flowing in either direction and compares regenerated to generated parity with a totally self-checking equality checker. As such, the device is self-validating and able both to detect and to indicate an occurrence of an internal failure. A generalization of the device that protects 64-bit data with 16-bit parity against byte-wide errors is also presented. 8 figures.

  11. A Note on a Sampling Theorem for Functions over GF(q)^n Domain

    NASA Astrophysics Data System (ADS)

    Ukita, Yoshifumi; Saito, Tomohiko; Matsushima, Toshiyasu; Hirasawa, Shigeichi

    In digital signal processing, the sampling theorem states that any real-valued function ƒ can be reconstructed from a sequence of values of ƒ that are discretely sampled with a frequency at least twice as high as the maximum frequency of the spectrum of ƒ. This theorem can also be applied to functions over a finite domain. Then, the range of frequencies of ƒ can be expressed in more detail by using a bounded set instead of the maximum frequency. A function whose range of frequencies is confined to a bounded set is referred to as a bandlimited function, and a sampling theorem for bandlimited functions over the Boolean domain has been obtained. It is important to obtain a sampling theorem for bandlimited functions not only over the Boolean domain (GF(2)^n) but also over the GF(q)^n domain, where q is a prime power and GF(q) is the Galois field of order q. For example, in experimental designs, although the model can be expressed as a linear combination of the Fourier basis functions and the levels of each factor can be represented by GF(q)^n, the number of levels often takes a value greater than two. However, the sampling theorem for bandlimited functions over the GF(q)^n domain has not been obtained. On the other hand, the sampling points are closely related to the codewords of a linear code. However, the relation between the parity check matrix of a linear code and any distinct error vectors has not been obtained, although it is necessary for understanding the meaning of the sampling theorem for bandlimited functions. In this paper, we generalize the sampling theorem for bandlimited functions over the Boolean domain to a sampling theorem for bandlimited functions over the GF(q)^n domain. We also present a theorem for the relation between the parity check matrix of a linear code and any distinct error vectors. Lastly, we clarify the relation between the sampling theorem for functions over the GF(q)^n domain and linear codes.
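
    The parity-check/error-vector relation mentioned above builds on a standard fact: a decoder can distinguish a set of error vectors exactly when their syndromes He^T are pairwise distinct. A toy Python check over GF(3) (the parity-check matrix is an arbitrary illustrative example, not one from the paper):

        import itertools
        import numpy as np

        q = 3
        H = np.array([[1, 0, 1, 1],
                      [0, 1, 2, 1]])   # illustrative parity-check matrix over GF(3)
        n = H.shape[1]

        def syndrome(e):
            return tuple((H @ e) % q)

        # all error vectors of weight <= 1 in GF(3)^4
        errors = [np.zeros(n, dtype=int)]
        for pos, val in itertools.product(range(n), range(1, q)):
            e = np.zeros(n, dtype=int)
            e[pos] = val
            errors.append(e)

        synds = [syndrome(e) for e in errors]
        print("all weight<=1 syndromes distinct:",
              len(set(synds)) == len(synds))   # True: single errors correctable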

  12. Inter-track interference mitigation with two-dimensional variable equalizer for bit patterned media recording

    NASA Astrophysics Data System (ADS)

    Wang, Yao; Vijaya Kumar, B. V. K.

    2017-05-01

    The increased track density in bit patterned media recording (BPMR) causes increased inter-track interference (ITI), which degrades the bit error rate (BER) performance. To mitigate the effect of the ITI, signals from multiple tracks can be equalized by a 2D equalizer with a 1D target. Usually, fixed 2D equalizer coefficients are obtained by training with a pseudo-random bit sequence (PRBS). In this study, a 2D variable equalizer is proposed, in which various sets of 2D equalizer coefficients are predetermined and stored for different ITI patterns besides the usual PRBS training. For data detection, since the ITI patterns are unknown in the first global iteration, the main and adjacent tracks are equalized with the conventional fixed 2D equalizer, detected with the Bahl-Cocke-Jelinek-Raviv (BCJR) detector, and decoded with the low-density parity-check (LDPC) decoder. Then, using the estimated bit information from the main and adjacent tracks, the ITI pattern for each island of the main track can be estimated, and the corresponding 2D variable equalizers are used to better equalize the bits on the main track. This process is executed iteratively by feeding back the main-track information. Simulation results indicate that, for both single-track and two-track detection, the proposed 2D variable equalizer achieves better BER and frame error rate (FER) than the fixed 2D equalizer.
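
    As a condensed illustration of the baseline step above, the sketch below fits fixed 2D equalizer taps by least squares, mapping each 3×3 neighborhood of readback samples onto the known bit at its center (the 2D blur kernel, array sizes, and the trivial 1D target are toy assumptions):

        import numpy as np
        from scipy.signal import convolve2d

        rng = np.random.default_rng(3)
        bits = rng.choice([-1.0, 1.0], size=(64, 256))   # tracks x bit islands
        kernel = np.array([[0.1, 0.2, 0.1],              # toy 2D ISI/ITI response
                           [0.2, 1.0, 0.2],
                           [0.1, 0.2, 0.1]])
        readback = convolve2d(bits, kernel, mode="same") \
            + rng.normal(0.0, 0.3, bits.shape)

        # one design-matrix row of 9 neighborhood samples per interior bit
        rows, cols = bits.shape
        X, d = [], []
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                X.append(readback[i - 1:i + 2, j - 1:j + 2].ravel())
                d.append(bits[i, j])                     # target: the bit itself
        X, d = np.array(X), np.array(d)

        taps, *_ = np.linalg.lstsq(X, d, rcond=None)
        print("2D equalizer taps:")
        print(taps.reshape(3, 3).round(3))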

  13. Neural network decoder for quantum error correcting codes

    NASA Astrophysics Data System (ADS)

    Krastanov, Stefan; Jiang, Liang

    Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.

  14. More box codes

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1992-01-01

    A new investigation shows that, starting from the BCH (21,15;3) code represented as a 7 x 3 matrix and adding a row and column to add even parity, one obtains an 8 x 4 matrix (32,15;8) code. An additional dimension is obtained by specifying odd parity on the rows and even parity on the columns, i.e., adjoining to the 8 x 4 matrix a matrix that is zero except for the fourth column (of all ones). Furthermore, any seven rows and three columns will form the BCH (21,15;3) code. This box code has the same weight structure as the quadratic residue and BCH codes of the same dimensions. Whether there exists an algebraic isomorphism to either code is as yet unknown.

  15. A software reconfigurable optical multiband UWB system utilizing a bit-loading combined with adaptive LDPC code rate scheme

    NASA Astrophysics Data System (ADS)

    He, Jing; Dai, Min; Chen, Qinghui; Deng, Rui; Xiang, Changqing; Chen, Lin

    2017-07-01

    In this paper, an effective bit-loading algorithm combined with an adaptive LDPC code rate (ALCR) algorithm is proposed and investigated in a software-reconfigurable multiband UWB-over-fiber system. To compensate the power fading and chromatic dispersion affecting the high-frequency multiband OFDM UWB signal transmitted over standard single-mode fiber (SSMF), a Mach-Zehnder modulator (MZM) with a negative chirp parameter is utilized. A negative power penalty of -1 dB for the 128-QAM multiband OFDM UWB signal is measured at the hard-decision forward error correction (HD-FEC) limit of 3.8 × 10^-3 after 50 km of SSMF transmission. The experimental results show that, compared to a fixed coding scheme with a code rate of 75%, the signal-to-noise ratio (SNR) is improved by 2.79 dB for the 128-QAM multiband OFDM UWB system after 100 km of SSMF transmission using the ALCR algorithm. Moreover, by employing bit-loading combined with the ALCR algorithm, the bit error rate (BER) performance of the system can be improved further. The simulation results show that, at the HD-FEC limit, the Q factor is improved by 3.93 dB at an SNR of 19.5 dB over 100 km of SSMF transmission, compared to fixed modulation with an uncoded scheme at the same spectral efficiency (SE).

  16. Advanced error-prediction LDPC with temperature compensation for highly reliable SSDs

    NASA Astrophysics Data System (ADS)

    Tokutomi, Tsukasa; Tanakamaru, Shuhei; Iwasaki, Tomoko Ogura; Takeuchi, Ken

    2015-09-01

    To improve the reliability of NAND Flash memory based solid-state drives (SSDs), error-prediction LDPC (EP-LDPC) has been proposed for multi-level-cell (MLC) NAND Flash memory (Tanakamaru et al., 2012, 2013), which is effective for long retention times. However, EP-LDPC is not as effective for triple-level cell (TLC) NAND Flash memory, because TLC NAND Flash has higher error rates and is more sensitive to program-disturb error. Therefore, advanced error-prediction LDPC (AEP-LDPC) has been proposed for TLC NAND Flash memory (Tokutomi et al., 2014). AEP-LDPC can correct errors more accurately by precisely describing the error phenomena. In this paper, the effects of AEP-LDPC are investigated in a 2×nm TLC NAND Flash memory with temperature characterization. Compared with LDPC-with-BER-only, the SSD's data-retention time is increased by 3.4× and 9.5× at room-temperature (RT) and 85 °C, respectively. Similarly, the acceptable BER is increased by 1.8× and 2.3×, respectively. Moreover, AEP-LDPC can correct errors with pre-determined tables made at higher temperatures to shorten the measurement time before shipping. Furthermore, it is found that one table can cover behavior over a range of temperatures in AEP-LDPC. As a result, the total table size can be reduced to 777 kBytes, which makes this approach more practical.

  17. Sparsening Filter Design for Iterative Soft-Input Soft-Output Detectors

    DTIC Science & Technology

    2012-02-29

    ...filter/detector structure. Since the BP detector itself is unaltered from [1], it can accommodate a system employing channel codes such as the LDPC encoding...considered in [1], or can readily be extended to the MIMO case with, for example, space-time coding as in [2,8]. Since our focus is on the design of...simplex method of [15], since it was already available in MATLAB via the “fminsearch” function. ...To visualize the cost surfaces, consider

  18. Quantum computing with Majorana fermion codes

    NASA Astrophysics Data System (ADS)

    Litinski, Daniel; von Oppen, Felix

    2018-05-01

    We establish a unified framework for Majorana-based fault-tolerant quantum computation with Majorana surface codes and Majorana color codes. All logical Clifford gates are implemented with zero-time overhead. This is done by introducing a protocol for Pauli product measurements with tetrons and hexons which only requires local 4-Majorana parity measurements. An analogous protocol is used in the fault-tolerant setting, where tetrons and hexons are replaced by Majorana surface code patches, and parity measurements are replaced by lattice surgery, still only requiring local few-Majorana parity measurements. To this end, we discuss twist defects in Majorana fermion surface codes and adapt the technique of twist-based lattice surgery to fermionic codes. Moreover, we propose a family of codes that we refer to as Majorana color codes, which are obtained by concatenating Majorana surface codes with small Majorana fermion codes. Majorana surface and color codes can be used to decrease the space overhead and stabilizer weight compared to their bosonic counterparts.

  19. System on a Chip Real-Time Emulation (SOCRE)

    DTIC Science & Technology

    2006-09-01

    ...The emulation platform included LDPC decoders, A/V and radio applications; the BEE flow was ported to emulation platforms (SOC Technologies). One of the key tasks of the...Once the design has been described within Simulink, the designer runs the BEE design flow within MATLAB using the bee_xps interface. At this point

  20. Symmetric Blind Information Reconciliation for Quantum Key Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kiktenko, Evgeniy O.; Trushechkin, Anton S.; Lim, Charles Ci Wen

    Quantum key distribution (QKD) is a quantum-proof key-exchange scheme which is fast approaching the communication industry. An essential component in QKD is the information reconciliation step, which is used for correcting the quantum-channel noise errors. The recently suggested blind-reconciliation technique, based on low-density parity-check codes, offers remarkable prospects for efficient information reconciliation without an a priori quantum bit error rate estimation. We suggest an improvement of the blind-information-reconciliation protocol promoting a significant increase in the efficiency of the procedure and reducing its interactivity. The proposed technique is based on introducing symmetry in the operations of the parties and on the consideration of the results of unsuccessful belief-propagation decodings.

  1. Symmetric Blind Information Reconciliation for Quantum Key Distribution

    DOE PAGES

    Kiktenko, Evgeniy O.; Trushechkin, Anton S.; Lim, Charles Ci Wen; ...

    2017-10-27

    Quantum key distribution (QKD) is a quantum-proof key-exchange scheme which is fast approaching the communication industry. An essential component in QKD is the information reconciliation step, which is used for correcting the quantum-channel noise errors. The recently suggested blind-reconciliation technique, based on low-density parity-check codes, offers remarkable prospects for efficient information reconciliation without an a priori quantum bit error rate estimation. We suggest an improvement of the blind-information-reconciliation protocol promoting a significant increase in the efficiency of the procedure and reducing its interactivity. The proposed technique is based on introducing symmetry in the operations of the parties and on the consideration of the results of unsuccessful belief-propagation decodings.

  2. Symmetric Blind Information Reconciliation for Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Kiktenko, E. O.; Trushechkin, A. S.; Lim, C. C. W.; Kurochkin, Y. V.; Fedorov, A. K.

    2017-10-01

    Quantum key distribution (QKD) is a quantum-proof key-exchange scheme which is fast approaching the communication industry. An essential component in QKD is the information reconciliation step, which is used for correcting the quantum-channel noise errors. The recently suggested blind-reconciliation technique, based on low-density parity-check codes, offers remarkable prospects for efficient information reconciliation without an a priori quantum bit error rate estimation. We suggest an improvement of the blind-information-reconciliation protocol promoting a significant increase in the efficiency of the procedure and reducing its interactivity. The proposed technique is based on introducing symmetry in the operations of the parties and on the consideration of the results of unsuccessful belief-propagation decodings.

  3. Irreducible normalizer operators and thresholds for degenerate quantum codes with sublinear distances

    NASA Astrophysics Data System (ADS)

    Pryadko, Leonid P.; Dumer, Ilya; Kovalev, Alexey A.

    2015-03-01

    We construct a lower (existence) bound for the threshold of scalable quantum computation which is applicable to all stabilizer codes, including degenerate quantum codes with sublinear distance scaling. The threshold is based on enumerating irreducible operators in the normalizer of the code, i.e., those that cannot be decomposed into a product of two such operators with non-overlapping support. For quantum LDPC codes with logarithmic or power-law distances, we get threshold values which are parametrically better than the existing analytical bound based on percolation. The new bound also gives a finite threshold when applied to other families of degenerate quantum codes, e.g., the concatenated codes. This research was supported in part by the NSF Grant PHY-1416578 and by the ARO Grant W911NF-11-1-0027.

  4. Simulations of linear and Hamming codes using SageMath

    NASA Astrophysics Data System (ADS)

    Timur, Tahta D.; Adzkiya, Dieky; Soleha

    2018-03-01

    Digital data transmission over a noisy channel can distort the message being transmitted. The goal of coding theory is to ensure data integrity, that is, to find out if and where noise has distorted the message and what the original message was. Data transmission consists of three stages: encoding, transmission, and decoding. Linear and Hamming codes are the codes we discuss in this work, where the encoding algorithms use the parity check and generator matrices, and the decoding algorithms are nearest-neighbor and syndrome decoding. We aim to show that these processes can be simulated using the SageMath software, which has built-in classes for coding theory in general and linear codes in particular. First we consider the message as a binary vector of size k. This message is then encoded to a vector of size n using the given algorithms. A noisy channel with a particular error probability is created, over which the transmission takes place. The last task is decoding, which corrects and reverts the received message back to the original message whenever possible, that is, whenever the number of errors that occurred is smaller than or equal to the correcting radius of the code. In this paper we use two types of data for the simulations, namely vector and text data.
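
    The encode/transmit/decode pipeline described above can also be reproduced without Sage. A self-contained Python/numpy sketch of the binary (7,4) Hamming code with syndrome decoding, mirroring (not calling) the SageMath classes:

        import numpy as np

        # systematic generator and parity-check matrices of the (7,4) Hamming code
        G = np.array([[1, 0, 0, 0, 1, 1, 0],
                      [0, 1, 0, 0, 1, 0, 1],
                      [0, 0, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])
        H = np.array([[1, 1, 0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0, 1, 0],
                      [0, 1, 1, 1, 0, 0, 1]])

        def encode(msg):                 # msg: length-4 binary vector
            return (msg @ G) % 2

        def decode(word):                # corrects any single bit error
            s = (H @ word) % 2
            if s.any():                  # nonzero syndrome: find matching column
                err = next(j for j in range(7) if np.array_equal(H[:, j], s))
                word = word.copy()
                word[err] ^= 1
            return word[:4]              # first four positions carry the message

        rng = np.random.default_rng(4)
        msg = rng.integers(0, 2, 4)
        cw = encode(msg)
        noisy = cw.copy()
        noisy[rng.integers(0, 7)] ^= 1   # the noisy channel flips one bit
        assert np.array_equal(decode(noisy), msg)
        print(msg, "->", cw, "-> recovered OK")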

  5. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1989-01-01

    Two aspects of the work for NASA are examined: the construction of multi-dimensional phase modulation trellis codes and a performance analysis of these codes. A complete list of all the best trellis codes for use with phase modulation is given. LxMPSK signal constellations are included for M = 4, 8, and 16 and L = 1, 2, 3, and 4. Spectral efficiencies range from 1 bit/channel symbol (equivalent to rate-1/2 coded QPSK) to 3.75 bits/channel symbol (equivalent to rate-15/16 coded 16-PSK). The parity check polynomials, rotational invariance properties, free distance, path multiplicities, and coding gains are given for all codes. These codes are considered to be the best candidates for implementation of a high-speed decoder for satellite transmission. The design of a hardware decoder for one of these codes, viz., the 16-state 3x8-PSK code with free distance 4.0 and coding gain 3.75 dB, is discussed. An exhaustive simulation study of the multi-dimensional phase modulation trellis codes is also included. This study was motivated by the fact that coding gains quoted for almost all codes found in the literature are in fact only asymptotic coding gains, i.e., the coding gain at very high signal-to-noise ratios (SNRs) or very low BER. These asymptotic coding gains can be obtained directly from a knowledge of the free distance of the code. On the other hand, real coding gains at BERs in the range of 10^-2 to 10^-6, where these codes are most likely to operate in a concatenated system, must be obtained by simulation.

  6. Demonstration of Weight-Four Parity Measurements in the Surface Code Architecture.

    PubMed

    Takita, Maika; Córcoles, A D; Magesan, Easwar; Abdo, Baleegh; Brink, Markus; Cross, Andrew; Chow, Jerry M; Gambetta, Jay M

    2016-11-18

    We present parity measurements on a five-qubit lattice with connectivity amenable to the surface code quantum error correction architecture. Using all-microwave controls of superconducting qubits coupled via resonators, we encode the parities of four data qubit states in either the X or the Z basis. Given the connectivity of the lattice, we perform a full characterization of the static Z interactions within the set of five qubits, as well as dynamical Z interactions brought along by single- and two-qubit microwave drives. The parity measurements are significantly improved by modifying the microwave two-qubit gates to dynamically remove nonideal Z errors.
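
    As a toy classical-simulation aside (our illustration, not the experiment's microwave implementation): the Z-basis parity of four data qubits is the eigenvalue of the weight-four stabilizer Z⊗Z⊗Z⊗Z, which on a computational basis state is just the parity of the bit string:

      import numpy as np
      from functools import reduce

      Z = np.diag([1, -1])
      ZZZZ = reduce(np.kron, [Z, Z, Z, Z])   # weight-four Z stabilizer

      def basis_state(bits):
          v = np.zeros(16)
          v[int("".join(map(str, bits)), 2)] = 1.0
          return v

      for bits in ([0, 0, 0, 0], [0, 0, 0, 1], [1, 0, 1, 1]):
          v = basis_state(bits)
          eig = v @ ZZZZ @ v                 # expectation = (-1)^parity
          assert eig == (-1) ** (sum(bits) % 2)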

  7. Information Theory, Inference and Learning Algorithms

    NASA Astrophysics Data System (ADS)

    Mackay, David J. C.

    2003-10-01

    Information theory and inference, often taught separately, are here united in one entertaining textbook. These topics lie at the heart of many exciting areas of contemporary science and engineering - communication, signal processing, data mining, machine learning, pattern recognition, computational neuroscience, bioinformatics, and cryptography. This textbook introduces theory in tandem with applications. Information theory is taught alongside practical communication systems, such as arithmetic coding for data compression and sparse-graph codes for error-correction. A toolbox of inference techniques, including message-passing algorithms, Monte Carlo methods, and variational approximations, is developed alongside applications of these tools to clustering, convolutional codes, independent component analysis, and neural networks. The final part of the book describes the state of the art in error-correcting codes, including low-density parity-check codes, turbo codes, and digital fountain codes -- the twenty-first century standards for satellite communications, disk drives, and data broadcast. Richly illustrated, filled with worked examples and over 400 exercises, some with detailed solutions, David MacKay's groundbreaking book is ideal for self-learning and for undergraduate or graduate courses. Interludes on crosswords, evolution, and sex provide entertainment along the way. In sum, this is a textbook on information, communication, and coding for a new generation of students, and an unparalleled entry point into these subjects for professionals in areas as diverse as computational biology, financial engineering, and machine learning.

  8. The development of product parity sensitivity in children with mathematics learning disability and in typical achievers.

    PubMed

    Rotem, Avital; Henik, Avishai

    2013-02-01

    Parity helps us determine whether an arithmetic equation is true or false. The current research examines the development of sensitivity to parity cues in multiplication in typically achieving (TA) children (grades 2, 3, 4 and 6) and in children with mathematics learning disabilities (MLD, grades 6 and 8), via a verification task. In TA children the onset of parity sensitivity was observed at the beginning of 3rd grade, whereas in children with MLD it was documented only in 8th grade. These results suggest that children with MLD develop parity aspects of number sense, though later than TA children. To check the plausibility of equations, children used mainly the multiplication parity rule rather than familiarity with even products. Similar to observations in adults, parity sensitivity was largest for problems with two even operands, moderate for problems with one even and one odd operand, and smallest for problems with two odd operands. Copyright © 2012 Elsevier Ltd. All rights reserved.
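
    The multiplication parity rule the children relied on is the fact that a product is odd only when both operands are odd; a one-line sanity check of the rule (illustrative, not from the study):

      # product parity: odd x odd = odd; any even operand makes the product even
      for a in range(1, 10):
          for b in range(1, 10):
              assert (a * b) % 2 == (a % 2) & (b % 2)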

  9. 16QAM transmission with 5.2 bits/s/Hz spectral efficiency over transoceanic distance.

    PubMed

    Zhang, H; Cai, J-X; Batshon, H G; Davidson, C R; Sun, Y; Mazurczyk, M; Foursa, D G; Pilipetskii, A; Mohs, G; Bergano, Neal S

    2012-05-21

    We transmit 160 x 100 G PDM RZ 16 QAM channels with 5.2 bits/s/Hz spectral efficiency over 6,860 km. More than 3 billion 16 QAM symbols, i.e., 12 billion bits, are processed in total. Using coded modulation and iterative decoding between a MAP decoder and an LDPC-based FEC, all channels are decoded with no remaining errors.

  10. Modified hybrid subcarrier/amplitude/phase/polarization LDPC-coded modulation for 400 Gb/s optical transmission and beyond.

    PubMed

    Batshon, Hussam G; Djordjevic, Ivan; Xu, Lei; Wang, Ting

    2010-06-21

    In this paper, we present a modified coded hybrid subcarrier/amplitude/phase/polarization (H-SAPP) modulation scheme as a technique capable of achieving beyond 400 Gb/s single-channel transmission over optical channels. The modified H-SAPP scheme profits from the available resources in addition to geometry to increase the bandwidth efficiency of the transmission system, and so increases the aggregate rate of the system. In this report we present the modified H-SAPP scheme and focus on an example carrying 11 bits/symbol, which achieves 440 Gb/s transmission using components running at 50 gigasymbols per second (GS/s).
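
    As a consistency check on the quoted figures (our arithmetic, not a statement from the paper): 11 bits/symbol at 50 GS/s corresponds to a 550 Gb/s raw line rate, so a net 440 Gb/s implies an overall code rate of about 440/550 = 0.8, i.e., roughly 20% FEC and framing overhead.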

  11. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst-error correction capability is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
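
    A short sketch of the innermost layer: the CCSDS (n, n-16) CRC is commonly specified as CRC-16/CCITT (polynomial x^16 + x^12 + x^5 + 1, all-ones preset); the bit-serial implementation below is a generic illustration of that checksum, not code from the simulator:

      def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
          # polynomial x^16 + x^12 + x^5 + 1 -> 0x1021, all-ones preset
          for byte in data:
              crc ^= byte << 8
              for _ in range(8):
                  crc = ((crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1) & 0xFFFF
          return crc

      frame = b"CCSDS transfer frame payload"
      parity = crc16_ccitt(frame)           # the 16 parity bits of (n, n-16)
      # appending the CRC makes the whole frame check to zero
      assert crc16_ccitt(frame + parity.to_bytes(2, "big")) == 0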

  12. The timing of antenatal care initiation and the content of care in Sindh, Pakistan.

    PubMed

    Agha, Sohail; Tappis, Hannah

    2016-07-27

    Policymakers and program planners consider antenatal care (ANC) coverage to be a primary measure of improvements in maternal health. Yet, evidence from multiple countries indicates that ANC coverage has no necessary relationship with the content of services provided. This study examines the relationship between the timing of the first ANC check-up, a potential predictor of the content of services, and the provision of WHO recommended services to women during their pregnancy. The study uses data from a representative household survey of Sindh with a sample comprising 4,000 women aged 15-49 who had had a live birth in the two years before the survey. The survey obtained information about the elements of care provided during pregnancy, the timing of the first ANC check-up, the number of ANC visits made during the last pregnancy and women's socio-economic and demographic characteristics. Bivariate analysis was conducted to examine the relationship between the proportion of women who receive six WHO recommended services and the timing of their first ANC check-up. Multivariate analysis was conducted to identify predictors of the number of elements of care provided. While most women in Sindh (87%) receive an ANC check-up, its timing varies by parity, education and household wealth. The median time to the first ANC check-up was 3 months for women in the richest and 7 months for women in the poorest wealth quintiles. In multivariate analysis, wealth, education, parity and age at marriage were significant predictors of the number of elements of care provided. Women who received an early ANC check-up were much more likely to receive WHO recommended services than other women, independent of a range of socio-economic and demographic variables and independent of the number of ANC visits made during pregnancy. In Sindh, the timing of the first ANC check-up has an independent effect on the content of care provided to pregnant women. While it is extremely important that providers are adequately trained and motivated to provide the WHO recommended standards of care, these findings suggest that motivating women to make an early first ANC check-up may be another mechanism through which the quality of care provided may be improved. Such a focus is most likely to benefit the poorest, least educated and highest parity women. Based on these findings, we recommend that routine data collected at health facilities in Pakistan should include the month of pregnancy at the time of the first ANC check-up.

  13. Measurement of the Parity-Violating directional Gamma-ray Asymmetry in Polarized Neutron Capture on ^35Cl

    NASA Astrophysics Data System (ADS)

    Fomin, Nadia

    2012-03-01

    The NPDGamma experiment aims to measure the parity-odd correlation between the neutron spin and the direction of the emitted photon in neutron-proton capture. A parity violating asymmetry (to be measured to 10^-8) from this process can be directly related to the strength of the hadronic weak interaction between nucleons. As part of the commissioning runs on the Fundamental Neutron Physics beamline at the Spallation Neutron Source at ORNL, the gamma-ray asymmetry from the parity-violating capture of cold neutrons on ^35Cl was measured, primarily to check for systematic effects and false asymmetries. The current precision from existing world measurements of this asymmetry is at the level of 10^-6 and we believe we can improve it. The analysis methodology as well as preliminary results will be presented.

  14. Two high-density recording methods with run-length limited turbo code for holographic data storage system

    NASA Astrophysics Data System (ADS)

    Nakamura, Yusuke; Hoshizawa, Taku

    2016-09-01

    Two methods for increasing the data capacity of a holographic data storage system (HDSS) were developed. The first method is called “run-length-limited (RLL) high-density recording”. An RLL modulation has the same effect as enlarging the pixel pitch; namely, it optically reduces the hologram size. Accordingly, the method doubles the raw-data recording density. The second method is called “RLL turbo signal processing”. The RLL turbo code consists of RLL(1,∞) trellis modulation and an optimized convolutional code. The remarkable point of the developed turbo code is that it employs the RLL modulator and demodulator as parts of the error-correction process. The turbo code improves the capability of error correction more than a conventional LDPC code, even though interpixel interference is generated. These two methods will increase the data density 1.78-fold. Moreover, by simulation and experiment, a data density of 2.4 Tbit/in² is confirmed.
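
    A quick sketch of the constraint itself: an RLL(1,∞) sequence requires at least one 0 between consecutive 1s (equivalently, no two adjacent 1s), with no upper bound on run length; illustrative code, not the paper's modulator:

      def is_rll_1_inf(bits):
          # RLL(d=1, k=inf): consecutive 1s must be separated by >= 1 zero
          return all(not (a == 1 and b == 1) for a, b in zip(bits, bits[1:]))

      assert is_rll_1_inf([1, 0, 1, 0, 0, 1])
      assert not is_rll_1_inf([1, 1, 0])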

  15. Structure of the Odd-Odd Nucleus {sup 188}Re

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balodis, M.; Berzins, J.; Simonova, L.

    2009-01-28

    Thermal neutron capture gamma-ray spectra for the {sup 187}Re(n,{gamma}){sup 188}Re reaction were measured. Singles and coincidence spectra were detected in order to develop the level scheme. The evaluation is in progress; the first results are obtained from the analysis of coincidence spectra, allowing the level scheme below 500 keV excitation energy to be checked. Seven low-energy negative parity bands are developed in order to find better energies for rotational levels. With good confidence, a few positive parity bands are developed as well. Rotor plus two quasiparticle model calculations, employing the effective matrix element method, are performed for the system of six negative parity rotational bands.

  16. Potts glass reflection of the decoding threshold for qudit quantum error correcting codes

    NASA Astrophysics Data System (ADS)

    Jiang, Yi; Kovalev, Alexey A.; Pryadko, Leonid P.

    We map the maximum likelihood decoding threshold for qudit quantum error correcting codes to the multicritical point in generalized Potts gauge glass models, extending the map constructed previously for qubit codes. An n-qudit quantum LDPC code, where a qudit can be involved in up to m stabilizer generators, corresponds to a ℤ_d Potts model with n interaction terms which can couple up to m spins each. We analyze general properties of the phase diagram of the constructed model, give several bounds on the location of the transitions, bounds on the energy density of extended defects (non-local analogs of domain walls), and discuss the correlation functions which can be used to distinguish different phases in the original and the dual models. This research was supported in part by the Grants: NSF PHY-1415600 (AAK), NSF PHY-1416578 (LPP), and ARO W911NF-14-1-0272 (LPP).

  17. Layered Wyner-Ziv video coding.

    PubMed

    Xu, Qian; Xiong, Zixiang

    2006-12-01

    Following recent theoretical works on successive Wyner-Ziv coding (WZC), we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered WZC for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic WZC when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that WZC gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding are more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks.
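
    A minimal sketch of the binning idea behind nested scalar quantization in Wyner-Ziv coding (a generic illustration with made-up step size and modulus, not the coder's actual parameters): the encoder sends only the quantizer index modulo M, and the decoder resolves the ambiguity with its side information:

      def wz_encode(x, step=1.0, M=4):
          # send only the coset (index mod M), not the full quantizer index
          return round(x / step) % M

      def wz_decode(coset, side_info, step=1.0, M=4):
          # pick the reconstruction in the coset closest to the side information
          base = round(side_info / step)
          r = base + ((coset - base) % M)
          candidates = [r, r - M, r + M]
          return min((abs(q * step - side_info), q * step) for q in candidates)[1]

      x, y = 7.3, 7.8            # source and correlated side information
      c = wz_encode(x)           # 2 bits instead of the full index
      assert abs(wz_decode(c, y) - round(x)) < 1e-9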

  18. Combined group ECC protection and subgroup parity protection

    DOEpatents

    Gara, Alan G.; Chen, Dong; Heidelberger, Philip; Ohmacht, Martin

    2013-06-18

    A method and system are disclosed for providing combined error code protection and subgroup parity protection for a given group of n bits. The method comprises the steps of identifying a number, m, of redundant bits for said error protection; and constructing a matrix P, wherein multiplying said given group of n bits with P produces m redundant error correction code (ECC) protection bits, and two columns of P provide parity protection for subgroups of said given group of n bits. In the preferred embodiment of the invention, the matrix P is constructed by generating permutations of m bit wide vectors with three or more, but an odd number of, elements with value one and the other elements with value zero; and assigning said vectors to rows of the matrix P.
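
    A sketch of the row-construction step described above, i.e., enumerating m-bit vectors with an odd number (three or more) of ones (the helper name is ours; GF(2) arithmetic as in the preferred embodiment is assumed):

      from itertools import combinations

      def candidate_rows(m):
          # m-bit vectors with an odd number (>= 3) of elements equal to one
          rows = []
          for w in range(3, m + 1, 2):          # odd weights 3, 5, ...
              for ones in combinations(range(m), w):
                  v = [0] * m
                  for i in ones:
                      v[i] = 1
                  rows.append(v)
          return rows

      rows = candidate_rows(5)                  # e.g. m = 5 redundant bits
      assert all(sum(r) % 2 == 1 and sum(r) >= 3 for r in rows)
      print(len(rows))                          # C(5,3) + C(5,5) = 11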

  19. Landsat Data Continuity Mission (LDCM) space to ground mission data architecture

    USGS Publications Warehouse

    Nelson, Jack L.; Ames, J.A.; Williams, J.; Patschke, R.; Mott, C.; Joseph, J.; Garon, H.; Mah, G.

    2012-01-01

    The Landsat Data Continuity Mission (LDCM) is a scientific endeavor to extend the longest continuous multi-spectral imaging record of Earth's land surface. The observatory consists of a spacecraft bus integrated with two imaging instruments; the Operational Land Imager (OLI), built by Ball Aerospace & Technologies Corporation in Boulder, Colorado, and the Thermal Infrared Sensor (TIRS), an in-house instrument built at the Goddard Space Flight Center (GSFC). Both instruments are integrated aboard a fine-pointing, fully redundant, spacecraft bus built by Orbital Sciences Corporation, Gilbert, Arizona. The mission is scheduled for launch in January 2013. This paper will describe the innovative end-to-end approach for efficiently managing high volumes of simultaneous real-time and playback image and ancillary data from the instruments to the reception at the United States Geological Survey's (USGS) Landsat Ground Network (LGN) and International Cooperator (IC) ground stations. The core enabling capability lies within the spacecraft Command and Data Handling (C&DH) system and Radio Frequency (RF) communications system implementation. Each of these systems uniquely contributes to the efficient processing of high speed image data (up to 265 Mbps) from each instrument, and provides virtually error free data delivery to the ground. Onboard methods include a combination of lossless data compression, Consultative Committee for Space Data Systems (CCSDS) data formatting, a file-based/managed Solid State Recorder (SSR), and Low Density Parity Check (LDPC) forward error correction. The 440 Mbps wideband X-Band downlink uses Class 1 CCSDS File Delivery Protocol (CFDP), and an earth coverage antenna to deliver an average of 400 scenes per day to a combination of LGN and IC ground stations. This paper will also describe the integrated capabilities and processes at the LGN ground stations for data reception using adaptive filtering, and the mission operations approach from the LDCM Mission Operations Center (MOC) to perform the CFDP accounting, file retransmissions, and management of the autonomous features of the SSR.

  20. Encoded physics knowledge in checking codes for nuclear cross section libraries at Los Alamos

    NASA Astrophysics Data System (ADS)

    Parsons, D. Kent

    2017-09-01

    Checking procedures for processed nuclear data at Los Alamos are described. Both continuous energy and multi-group nuclear data are verified by locally developed checking codes which use basic physics knowledge and common-sense rules. A list of nuclear data problems which have been identified with help of these checking codes is also given.

  1. Mechanisms of lectin and antibody-dependent polymorphonuclear leukocyte-mediated cytolysis.

    PubMed

    Tsunawaki, S; Ikenami, M; Mizuno, D; Yamazaki, M

    1983-04-01

    The mechanisms of tumor lysis by polymorphonuclear leukocytes (PMNs) were investigated. In antibody-dependent PMN-mediated cytolysis (ADPC), sensitized tumor cells were specifically lysed via Fc receptors on PMNs. On the other hand, lectin-dependent PMN-mediated cytolysis (LDPC) caused nonspecific lysis of several murine tumors after recognition of carbohydrate moieties on the cell membrane of both PMNs and tumor cells. Both ADPC and LDPC depended on glycolysis, and cytotoxicity was mediated by reactive oxygen species; LDPC was dependent on superoxide and ADPC on the myeloperoxidase system. The participation of reactive oxygen species in PMN cytotoxicity was also demonstrated by pharmacological triggering with phorbol myristate acetate. These results indicate that reactive oxygen species have an important role in tumor killing by PMNs and that ADPC and LDPC have partly different cytolytic processes as well as different recognition steps.

  2. An improved control mode for the ping-pong protocol operation in imperfect quantum channels

    NASA Astrophysics Data System (ADS)

    Zawadzki, Piotr

    2015-07-01

    Quantum direct communication (QDC) can bring confidentiality of sensitive information without any encryption. A ping-pong protocol, a well-known example of entanglement-based QDC, offers asymptotic security in a perfect quantum channel. However, it has been shown (Wójcik in Phys Rev Lett 90(15):157901, 2003. doi:10.1103/PhysRevLett.90.157901) that it is not secure in the presence of losses. Moreover, legitimate parties cannot rely on dense information coding due to possible undetectable eavesdropping even in the perfect setting (Pavičić in Phys Rev A 87(4):042326, 2013. doi:10.1103/PhysRevA.87.042326). We have identified the source of the above-mentioned weaknesses in the incomplete check of the EPR pair coherence. We propose an improved version of the control mode, and we discuss its relation to the already-known attacks that undermine QDC security. It follows that the new control mode detects these attacks with high probability, independently of the quantum channel type. As a result, asymptotic security of the QDC communication can be maintained for imperfect quantum channels, also in the regime of dense information coding.

  3. Combined group ECC protection and subgroup parity protection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gara, Alan; Cheng, Dong; Heidelberger, Philip

    A method and system are disclosed for providing combined error code protection and subgroup parity protection for a given group of n bits. The method comprises the steps of identifying a number, m, of redundant bits for said error protection; and constructing a matrix P, wherein multiplying said given group of n bits with P produces m redundant error correction code (ECC) protection bits, and two columns of P provide parity protection for subgroups of said given group of n bits. In the preferred embodiment of the invention, the matrix P is constructed by generating permutations of m bit wide vectors with three or more, but an odd number of, elements with value one and the other elements with value zero; and assigning said vectors to rows of the matrix P.

  4. NASA Tech Briefs, October 2009

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Topics covered include: Light-Driven Polymeric Bimorph Actuators; Guaranteeing Failsafe Operation of Extended-Scene Shack-Hartmann Wavefront Sensor Algorithm; Cloud Water Content Sensor for Sounding Balloons and Small UAVs; Pixelized Device Control Actuators for Large Adaptive Optics; T-Slide Linear Actuators; G4FET Implementations of Some Logic Circuits; Electrically Variable or Programmable Nonvolatile Capacitors; System for Automated Calibration of Vector Modulators; Complementary Paired G4FETs as Voltage-Controlled NDR Device; Three MMIC Amplifiers for the 120-to-200 GHz Frequency Band; Low-Noise MMIC Amplifiers for 120 to 180 GHz; Using Ozone To Clean and Passivate Oxygen-Handling Hardware; Metal Standards for Waveguide Characterization of Materials; Two-Piece Screens for Decontaminating Granular Material; Mercuric Iodide Anticoincidence Shield for Gamma-Ray Spectrometer; Improved Method of Design for Folding Inflatable Shells; Ultra-Large Solar Sail; Cooperative Three-Robot System for Traversing Steep Slopes; Assemblies of Conformal Tanks; Microfluidic Pumps Containing Teflon[Trademark] AF Diaphragms; Transparent Conveyor of Dielectric Liquids or Particles; Multi-Cone Model for Estimating GPS Ionospheric Delays; High-Sensitivity GaN Microchemical Sensors; On the Divergence of the Velocity Vector in Real-Gas Flow; Progress Toward a Compact, Highly Stable Ion Clock; Instruments for Imaging from Far to Near; Reflectors Made from Membranes Stretched Between Beams; Integrated Risk and Knowledge Management Program -- IRKM-P; LDPC Codes with Minimum Distance Proportional to Block Size; Constructing LDPC Codes from Loop-Free Encoding Modules; MMICs with Radial Probe Transitions to Waveguides; Tests of Low-Noise MMIC Amplifier Module at 290 to 340 GHz; and Extending Newtonian Dynamics to Include Stochastic Processes.

  5. 42 CFR 457.480 - Preexisting condition exclusions and relation to other laws.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... the Internal Revenue Code of 1986. (3) Mental Health Parity Act (MHPA). Health benefits coverage under... 1996 regarding parity in the application of annual and lifetime dollar limits to mental health benefits... 42 Public Health 4 2012-10-01 2012-10-01 false Preexisting condition exclusions and relation to...

  6. 50 CFR Table 15 to Part 679 - Gear Codes, Descriptions, and Use

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... following: Alpha gear code NMFS logbooks Electronic check-in/ check-out Use numeric code to complete the following: Numeric gear code IERS eLandings ADF&G COAR NMFS AND ADF&G GEAR CODES Hook-and-line HAL X X 61 X...

  7. 50 CFR Table 15 to Part 679 - Gear Codes, Descriptions, and Use

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... following: Alpha gear code NMFS logbooks Electronic check-in/ check-out Use numeric code to complete the following: Numeric gear code IERS eLandings ADF&G COAR NMFS AND ADF&G GEAR CODES Hook-and-line HAL X X 61 X...

  8. 50 CFR Table 15 to Part 679 - Gear Codes, Descriptions, and Use

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... following: Alpha gear code NMFS logbooks Electronic check-in/ check-out Use numeric code to complete the following: Numeric gear code IERS eLandings ADF&G COAR NMFS AND ADF&G GEAR CODES Hook-and-line HAL X X 61 X...

  9. Optical bias selector based on a multilayer a-SiC:H optical filter

    NASA Astrophysics Data System (ADS)

    Vieira, M.; Vieira, M. A.; Louro, P.

    2017-08-01

    In this paper we present a MUX/DEMUX device based on a multilayer a-SiC:H optical filter that requires near-ultraviolet steady state optical switches to select desired wavelengths in the visible range. Spectral response and transmittance measurements are presented and show the feasibility of tailoring the wavelength and bandwidth of a polychromatic mixture of different wavelengths. The selector filter is realized by using a two terminal double pi'n/pin a-SiC:H photodetector. Five visible communication channels are transmitted together, each one with a specific bit sequence. The combined optical signal is analyzed by reading out the photocurrent, under near-UV front steady state background. Data shows that 25 distinct current levels are detected, corresponding to the thirty-two possible on/off states. The proximity of the magnitudes of consecutive levels causes occasional errors in the decoded information. To minimize the errors, four parity bits are generated and stored along with the data word. The parity of the word is checked after reading the word to detect and correct the transmitted data. Results show that the background works as a selector in the visible range, shifting the sensor sensitivity, and together with the parity check bits allows the identification and decoding of the different input channels. A transmission capability of 60 kbps using the generated codeword was achieved. An optoelectronic model gives insight into the system physics.

  10. Description of the L1C signal

    USGS Publications Warehouse

    Betz, J.W.; Blanco, M.A.; Cahn, C.R.; Dafesh, P.A.; Hegarty, C.J.; Hudnut, K.W.; Kasemsri, V.; Keegan, R.; Kovach, K.; Lenahan, L.S.; Ma, H.H.; Rushanan, J.J.; Sklar, D.; Stansell, T.A.; Wang, C.C.; Yi, S.K.

    2006-01-01

    Detailed design of the modernized L1 civil signal (L1C) has been completed, and the resulting draft Interface Specification IS-GPS-800 was released in Spring 2006. The novel characteristics of the optimized L1C signal design provide advanced capabilities while offering receiver designers considerable flexibility in how to use these capabilities. L1C provides a number of advanced features, including: 75% of power in a pilot component for enhanced signal tracking, advanced Weil-based spreading codes, an overlay code on the pilot that provides data message synchronization, support for improved reading of clock and ephemeris by combining message symbols across messages, advanced forward error control coding, and data symbol interleaving to combat fading. The resulting design offers receiver designers the opportunity to obtain unmatched performance in many ways. This paper describes the design of L1C. A summary of L1C's background and history is provided. The signal description then proceeds with the overall signal structure consisting of a pilot component and a data component. The new L1C spreading code family is described, along with the logic used for generating these spreading codes. Overlay codes on the pilot channel are also described, as is the logic used for generating the overlay codes. Spreading modulation characteristics are summarized. The data message structure is also presented, showing the format for providing time, ephemeris, and system data to users, along with features that enable receivers to perform code combining. Encoding of rapidly changing time bits is described, as are the Low Density Parity Check codes used for forward error control of slowly changing time bits, clock, ephemeris, and system data. The structure of the interleaver is also presented. A summary of L1C's unique features and their benefits is provided, along with a discussion of the plan for L1C implementation.

  11. Fault tolerance in space-based digital signal processing and switching systems: Protecting up-link processing resources, demultiplexer, demodulator, and decoder

    NASA Technical Reports Server (NTRS)

    Redinbo, Robert

    1994-01-01

    Fault tolerance features in the first three major subsystems appearing in the next generation of communications satellites are described. These satellites will contain extensive but efficient high-speed processing and switching capabilities to support the low signal strengths associated with very small aperture terminals. The terminals' numerous data channels are combined through frequency division multiplexing (FDM) on the up-links and are protected individually by forward error-correcting (FEC) binary convolutional codes. The front-end processing resources, demultiplexer, demodulators, and FEC decoders extract all data channels which are then switched individually, multiplexed, and remodulated before retransmission to earth terminals through narrow beam spot antennas. Algorithm based fault tolerance (ABFT) techniques, which relate real number parity values with data flows and operations, are used to protect the data processing operations. The additional checking features utilize resources that can be substituted for normal processing elements when resource reconfiguration is required to replace a failed unit.
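
    Algorithm-based fault tolerance of the kind described relates checksum (parity) values to the data flowing through an operation; the classic textbook example, shown below as an illustration of the general ABFT idea rather than the satellite design itself, augments a matrix product with checksum rows and columns so a corrupted element is exposed and located by parity mismatches:

      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.standard_normal((4, 4))
      B = rng.standard_normal((4, 4))

      # append a checksum row to A and a checksum column to B
      Ac = np.vstack([A, A.sum(axis=0)])
      Bc = np.hstack([B, B.sum(axis=1, keepdims=True)])

      C = Ac @ Bc                 # (5 x 5) checksummed product
      C[1, 2] += 0.5              # inject a fault into one element

      row_ok = np.isclose(C[:4, :4].sum(axis=0), C[4, :4])
      col_ok = np.isclose(C[:4, :4].sum(axis=1), C[:4, 4])
      print(np.argwhere(~row_ok), np.argwhere(~col_ok))  # locates col 2, row 1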

  12. Bar code-based pre-transfusion check in pre-operative autologous blood donation.

    PubMed

    Ohsaka, Akimichi; Furuta, Yoshiaki; Ohsawa, Toshiya; Kobayashi, Mitsue; Abe, Katsumi; Inada, Eiichi

    2010-10-01

    The objective of this study was to demonstrate the feasibility of a bar code-based identification system for the pre-transfusion check at the bedside in the setting of pre-operative autologous blood donation (PABD). Between July 2003 and December 2008 we determined the compliance rate and causes of failure of electronic bedside checking for PABD transfusion. A total of 5627 (9% of all transfusions) PABD units were administered without a single mistransfusion. The overall rate of compliance with electronic checking was 99%. The bar code-based identification system was applicable to the pre-transfusion check for PABD transfusion. Copyright © 2010 Elsevier Ltd. All rights reserved.

  13. Evaluation of H.264/AVC over IEEE 802.11p vehicular networks

    NASA Astrophysics Data System (ADS)

    Rozas-Ramallal, Ismael; Fernández-Caramés, Tiago M.; Dapena, Adriana; García-Naya, José Antonio

    2013-12-01

    The capacity of vehicular networks to offer non-safety services, like infotainment applications or the exchange of multimedia information between vehicles, has attracted a great deal of attention to the field of Intelligent Transport Systems (ITS). In particular, in this article we focus our attention on IEEE 802.11p, which defines enhancements to IEEE 802.11 required to support ITS applications. We present an FPGA-based testbed developed to evaluate H.264/AVC (Advanced Video Coding) video transmission over vehicular networks. The testbed covers some of the most common situations in vehicle-to-vehicle and roadside-to-vehicle communications and it is highly flexible, allowing the performance evaluation of different vehicular standard configurations. We also show several experimental results to illustrate the quality obtained when H.264/AVC encoded video is transmitted over IEEE 802.11p networks. The quality is measured considering two important parameters: the percentage of recovered groups of pictures and the frame quality. In order to improve performance, we propose to replace the convolutional channel encoder used in IEEE 802.11p with a low-density parity-check (LDPC) encoder. In addition, we suggest a simple strategy to decide the optimum number of iterations needed to decode each packet received.
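
    A minimal sketch of the usual early-stopping rule such a strategy builds on (a generic illustration, not the authors' criterion): iterative LDPC decoding can halt as soon as the hard decision satisfies all parity checks, i.e., H·ĉ = 0 over GF(2). The toy `update` below is a stand-in for one belief-propagation iteration:

      import numpy as np

      def decode_with_early_stop(H, llr, update, max_iter=50):
          # run message-passing steps, stopping when H @ c = 0 (mod 2)
          for it in range(1, max_iter + 1):
              llr = update(llr)                 # one decoder iteration
              c = (llr < 0).astype(int)         # hard decision
              if not (H @ c % 2).any():         # all parity checks satisfied
                  return c, it
          return c, max_iter

      # toy check matrix and a fake update that nudges LLRs toward a codeword
      H = np.array([[1, 1, 0, 1], [0, 1, 1, 1]])
      cw = np.array([1, 0, 1, 1])               # H @ cw mod 2 == 0
      update = lambda l: l + (1 - 2 * cw)       # pushes LLR signs toward cw
      c, iters = decode_with_early_stop(H, np.array([2.0, -0.5, 0.1, 0.3]), update)
      assert (c == cw).all()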

  14. Complementary code and digital filtering for detection of weak VHF radar signals from the mesoscale. [SOUSY-VHF radar, Harz Mountains, Germany

    NASA Technical Reports Server (NTRS)

    Schmidt, G.; Ruster, R.; Czechowsky, P.

    1983-01-01

    The SOUSY-VHF-Radar operates at a frequency of 53.5 MHz in a valley in the Harz mountains, Germany, 90 km from Hanover. The radar controller, which is programmed by a 16-bit computer, holds 1024 program steps in core and controls, via 8 channels, the whole radar system: in particular the master oscillator, the transmitter, the transmit-receive switch, the receiver, the analog-to-digital converter, and the hardware adder. The high-sensitivity receiver has a dynamic range of 70 dB and a video bandwidth of 1 MHz. Phase coding schemes are applied, in particular for investigations at mesospheric heights, in order to carry out measurements with the maximum duty cycle and the maximum height resolution. The computer takes the data from the adder and stores it on magnetic tape or disc. The radar controller is programmed by the computer using simple FORTRAN IV statements. After the program has been loaded and the computer has started the radar controller, it runs automatically, stopping at the program end. In case of errors or failures occurring during radar operation, the radar controller is shut off, triggered either by a safety circuit, a power failure circuit, or a parity check system.
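
    The complementary codes of the title have the defining property that the autocorrelation sidelobes of the two sequences cancel exactly while the main peaks add; a quick check on the standard length-4 Golay pair (illustrative, not the SOUSY code set):

      import numpy as np

      def acorr(s, k):
          # aperiodic autocorrelation of s at nonnegative lag k
          return int(np.dot(s[:len(s) - k], s[k:]))

      A = np.array([1, 1, 1, -1])   # a Golay complementary pair
      B = np.array([1, 1, -1, 1])

      for k in range(1, 4):
          assert acorr(A, k) + acorr(B, k) == 0      # sidelobes cancel
      assert acorr(A, 0) + acorr(B, 0) == 2 * len(A) # main peak adds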

  15. External cephalic version for singleton breech presentation: proposal of a practical check-list for obstetricians.

    PubMed

    Indraccolo, U; Graziani, C; Di Iorio, R; Corona, G; Bonito, M; Indraccolo, S R

    2015-07-01

    External cephalic version (ECV) for breech presentation is not routinely performed by obstetricians in many clinical settings. The aim of this work is to assess to what extent the factors involved in performing ECV are relevant for the success and safety of ECV, in order to propose a practical check-list for assessing the feasibility of ECV. Review of 214 references. Factors involved in the success and risks of ECV (feasibility of ECV) were extracted and were scored in a semi-quantitative way according to textual information, type of publication, year of publication, number of cases. Simple conjoint analysis was used to describe the relevance found for each factor. Parity has the pivotal role in ECV feasibility (relevance 16.6%), followed by tocolysis (10.8%), gestational age (10.6%), amniotic fluid volume (4.7%), breech variety (1.9%), and placenta location (1.7%). Other factors with estimated relevance around 0 (regional anesthesia, station, estimated fetal weight, fetal position, obesity/BMI, fetal birth weight, duration of manoeuvre/number of attempts) have some role in the feasibility of ECV. Yet other factors, with negative values of estimated relevance, have even less importance. From a logical interpretation of the relevance of each factor assessed, ECV should be proposed with utmost prudence if a stringent check-list is followed. Such a check-list should take into account: parity, tocolytic therapy, gestational age, amniotic fluid volume, breech variety, placenta location, regional anesthesia, breech engagement, fetal well-being, uterine relaxation, fetal size, fetal position, fetal head grasping capability and fetal turning capability.

  16. Safe, Multiphase Bounds Check Elimination in Java

    DTIC Science & Technology

    2010-01-28

    production of mobile code from source code, JIT compilation in the virtual machine, and application code execution. The code producer uses...invariants, and inequality constraint analysis) to identify and prove redundancy of bounds checks. During class-loading and JIT compilation, the virtual...unoptimized code if the speculated invariants do not hold. The combined effect of the multiple phases is to shift the effort associated with bounds

  17. Evaluation of the efficiency and fault density of software generated by code generators

    NASA Technical Reports Server (NTRS)

    Schreur, Barbara

    1993-01-01

    Flight computers and flight software are used for GN&C (guidance, navigation, and control), engine controllers, and avionics during missions. The software development requires the generation of a considerable amount of code. The engineers who generate the code make mistakes, and the generation of a large body of code with high reliability requires considerable time. Computer-aided software engineering (CASE) tools are available which generate code automatically from inputs through graphical interfaces. These tools are referred to as code generators. In theory, code generators could write highly reliable code quickly and inexpensively. The various code generators offer different levels of reliability checking. Some check only the finished product while some allow checking of individual modules and combined sets of modules as well. Considering NASA's requirement for reliability, a comparison with in-house, manually generated code is needed. Furthermore, automatically generated code is reputed to be as efficient as the best manually generated code when executed. In-house verification is therefore warranted.

  18. One-way quantum repeaters with quantum Reed-Solomon codes

    NASA Astrophysics Data System (ADS)

    Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang

    2018-05-01

    We show that quantum Reed-Solomon codes constructed from classical Reed-Solomon codes can approach the capacity on the quantum erasure channel of d -level systems for large dimension d . We study the performance of one-way quantum repeaters with these codes and obtain a significant improvement in key generation rate compared to previously investigated encoding schemes with quantum parity codes and quantum polynomial codes. We also compare the three generations of quantum repeaters using quantum Reed-Solomon codes and identify parameter regimes where each generation performs the best.

  19. Pedestal-to-Wall 3D Fluid Transport Simulations on DIII-D

    DOE PAGES

    Lore, Jeremy D.; Wolfmeister, Alexis Briesemeister; Ferraro, Nathaniel M.; ...

    2017-03-30

    The 3D fluid-plasma edge transport code EMC3-EIRENE is used to test several magnetic field models with and without plasma response against DIII-D experimental data for even- and odd-parity n=3 magnetic field perturbations. The field models include ideal and extended MHD equilibria, and the vacuum approximation. Plasma response is required to reduce the stochasticity in the pedestal region for even-parity fields; however, too much screening suppresses the measured splitting of the downstream T_e profile. Odd-parity perturbations result in weak tearing and only small additional peaks in the downstream measurements. In this case plasma response is required to increase the size of the lobe structure. Finally, no single model is able to simultaneously reproduce the upstream and downstream characteristics for both odd- and even-parity perturbations.

  20. Improved classical and quantum random access codes

    NASA Astrophysics Data System (ADS)

    Liabøtrø, O.

    2017-05-01

    A (quantum) random access code ((Q)RAC) is a scheme that encodes n bits into m (qu)bits such that any of the n bits can be recovered with a worst case probability p > 1/2. We generalize (Q)RACs to a scheme encoding n d-levels into m (quantum) d-levels such that any d-level can be recovered with the probability for every wrong outcome value being less than 1/d. We construct explicit solutions for all n ≤ (d^(2m) − 1)/(d − 1). For d = 2, the constructions coincide with those previously known. We show that the (Q)RACs are d-parity oblivious, generalizing ordinary parity obliviousness. We further investigate optimization of the success probabilities. For d = 2, we use the measure operators of the previously best-known solutions, but improve the encoding states to give a higher success probability. We conjecture that for maximal (n = 4^m − 1, m, p) QRACs, p = (1/2){1 + [(√3 + 1)^m − 1]^(−1)} is possible, and show that it is an upper bound for the measure operators that we use. We then compare (n, m, p_q) QRACs with classical (n, 2m, p_c) RACs. We can always find p_q ≥ p_c, but the classical code gives information about every input bit simultaneously, while the QRAC only gives information about a subset. For several different (n, 2, p) QRACs, we see the same trade-off, as the best p values are obtained when the number of bits that can be obtained simultaneously is as small as possible. The trade-off is connected to parity obliviousness, since high certainty information about several bits can be used to calculate probabilities for parities of subsets.

  1. Two-dimensional solitons in conservative and parity-time-symmetric triple-core waveguides with cubic-quintic nonlinearity

    NASA Astrophysics Data System (ADS)

    Feijoo, David; Zezyulin, Dmitry A.; Konotop, Vladimir V.

    2015-12-01

    We analyze a system of three two-dimensional nonlinear Schrödinger equations coupled by linear terms and with the cubic-quintic (focusing-defocusing) nonlinearity. We consider two versions of the model: conservative and parity-time (PT ) symmetric. These models describe triple-core nonlinear optical waveguides, with balanced gain and losses in the PT -symmetric case. We obtain families of soliton solutions and discuss their stability. The latter study is performed using a linear stability analysis and checked with direct numerical simulations of the evolutional system of equations. Stable solitons are found in the conservative and PT -symmetric cases. Interactions and collisions between the conservative and PT -symmetric solitons are briefly investigated, as well.

  2. Nonlocal hyperconcentration on entangled photons using photonic module system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Cong; Wang, Tie-Jun; Mi, Si-Chen

    Entanglement distribution will inevitably be affected by the channel and environment noise. Thus distillation of maximal entanglement nonlocally becomes a crucial goal in quantum information. Here we illustrate that maximal hyperentanglement on nonlocal photons could be distilled using the photonic module and cavity quantum electrodynamics, where the photons are simultaneously entangled in polarization and spatial-mode degrees of freedom. The construction of the photonic module in a photonic band-gap structure is presented, and the operation of the module is utilized to implement the photonic nondestructive parity checks on the two degrees of freedom. We first propose a hyperconcentration protocol using two identical partially hyperentangled initial states with unknown coefficients to distill a maximally hyperentangled state probabilistically, and further propose a protocol by the assistance of an ancillary single photon prepared according to the known coefficients of the initial state. In the two protocols, the total success probability can be improved greatly by introducing the iteration mechanism, and only one of the remote parties is required to perform the parity checks in each round of iteration. Estimates on the system requirements and recent experimental results indicate that our proposal is realizable with existing or near-future technologies.

  3. Hybrid RAID With Dual Control Architecture for SSD Reliability

    NASA Astrophysics Data System (ADS)

    Chatterjee, Santanu

    2010-10-01

    Solid state devices (SSDs), which are increasingly being adopted in today's data storage systems, offer higher capacity and performance but lower reliability, which leads to more frequent rebuilds and to a higher risk. Although SSDs are very energy efficient compared to hard disk drives, the bit error rate (BER) of an SSD requires expensive erase operations between successive writes. Parity-based RAID (for example RAID 4, 5, 6) provides data integrity using parity information and tolerates the loss of any one (RAID 4, 5) or two drives (RAID 6), but the parity blocks are updated more often than the data blocks due to random access patterns, so SSD devices holding more parity receive more writes and consequently age faster. To address this problem, in this paper we propose a model-based system of hybrid disk array architecture in which we plan to use the RAID 4 (striping with dedicated parity) technique with SSD drives as data drives, while any fast hard disk drive of the same capacity can be used as the dedicated parity drive. This proposed architecture can open the door to using commodity SSDs past their erasure limit and can also reduce the need for expensive hardware error correction code (ECC) in the devices.
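
    A minimal sketch of the RAID 4 mechanics involved (illustrative, not the paper's controller design): the dedicated parity drive stores the bytewise XOR of the data drives, and any single lost data drive is rebuilt by XORing the survivors with the parity:

      from functools import reduce

      def xor_blocks(blocks):
          # bytewise XOR across equally sized blocks
          return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

      data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]   # three data drives
      parity = xor_blocks(data)                        # dedicated parity drive

      lost = 1                                         # drive 1 fails
      survivors = [d for i, d in enumerate(data) if i != lost]
      rebuilt = xor_blocks(survivors + [parity])
      assert rebuilt == data[lost]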

  4. Long distance quantum communication with quantum Reed-Solomon codes

    NASA Astrophysics Data System (ADS)

    Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang; Jianggroup Team

    We study the construction of quantum Reed-Solomon codes from classical Reed-Solomon codes and show that they achieve the capacity of the quantum erasure channel for multi-level quantum systems. We extend the application of quantum Reed-Solomon codes to long distance quantum communication, investigate the local resource overhead needed for the functioning of one-way quantum repeaters with these codes, and numerically identify the parameter regime where these codes perform better than the known quantum polynomial codes and quantum parity codes. Finally, we discuss the implementation of these codes into time-bin photonic states of qubits and qudits respectively, and optimize the performance for one-way quantum repeaters.

  5. Implementation of mental health parity: lessons from california.

    PubMed

    Rosenbach, Margo L; Lake, Timothy K; Williams, Susan R; Buck, Jeffrey A

    2009-12-01

    This article reports the experiences of health plans, providers, and consumers with California's mental health parity law and discusses implications for implementation of the 2008 federal parity law. This study used a multimodal data collection approach to assess the first five years of California's parity implementation (from 2000 to 2005). Telephone interviews were conducted with 68 state-level stakeholders, and in-person interviews were conducted with 77 community-based stakeholders. Six focus groups included 52 providers, and six included 32 consumers. A semistructured interview protocol was used. Interview notes and transcripts were coded to facilitate analysis. Health plans eliminated differential benefit limits and cost-sharing requirements for certain mental disorders to comply with the law, and they used managed care to control costs. In response to concerns about access to and quality of care, the state expanded oversight of health plans, issuing access-to-care regulations and conducting focused studies. California's parity law applied to a limited list of psychiatric diagnoses. Health plan executives said they spent considerable resources clarifying which diagnoses were covered at parity levels and concluded that the limited diagnosis list was unnecessary with managed care. Providers indicated that the diagnosis list had unintended consequences, including incentives to assign a more severe diagnosis that would be covered at parity levels, rather than a less severe diagnosis that would not be covered at such levels. The lack of consumer knowledge about parity was widely acknowledged, and consumers in the focus groups requested additional information about parity. Experiences in California suggest that implementation of the 2008 federal parity law should include monitoring health plan performance related to access and quality, in addition to monitoring coverage and costs; examining the breadth of diagnoses covered by health plans; and mounting a campaign to educate consumers about their insurance benefits.

  6. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H. Lee; Ganti, Anand; Resnick, David R

    2013-10-22

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
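
    A toy sketch of the populate-and-filter loop in the claim, specialized to the simplest meaningful case (GF(2), distance d = 4 as in SECDED, where linear independence of any d − 1 = 3 columns means no column equals the XOR of two chosen columns); the helper names are ours, not the patent's:

      from itertools import combinations, product

      def xor(u, v):
          return tuple(a ^ b for a, b in zip(u, v))

      def populate_columns(m, n_cols):
          # candidates: all nonzero m-bit vectors
          candidates = [v for v in product([0, 1], repeat=m) if any(v)]
          columns = []
          while len(columns) < n_cols and candidates:
              columns.append(candidates.pop(0))       # select one vector
              # filter: anything equal to the XOR of two chosen columns
              # would complete a linearly dependent triple
              banned = {xor(a, b) for a, b in combinations(columns, 2)}
              candidates = [v for v in candidates if v not in banned]
          return columns

      print(populate_columns(m=4, n_cols=6))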

  7. Design, decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-06-17

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  8. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-11-18

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  9. New Class of Quantum Error-Correcting Codes for a Bosonic Mode

    NASA Astrophysics Data System (ADS)

    Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.

    2016-07-01

    We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
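
    A concrete instance of the construction (the smallest binomial code protecting against a single boson loss, written out here as our numerical illustration from the general definition): the code words (|0⟩ + |4⟩)/√2 and |2⟩ have equal mean boson number, and a boson loss maps them onto mutually orthogonal states flagged by odd photon-number parity:

      import numpy as np

      dim = 6                                     # truncated Fock space
      a = np.diag(np.sqrt(np.arange(1, dim)), 1)  # annihilation operator

      def fock(n):
          v = np.zeros(dim); v[n] = 1.0; return v

      zero_L = (fock(0) + fock(4)) / np.sqrt(2)   # binomial code words
      one_L = fock(2)

      n_op = a.T @ a
      assert np.isclose(zero_L @ n_op @ zero_L, one_L @ n_op @ one_L)  # <n> = 2

      e0, e1 = a @ zero_L, a @ one_L              # states after one boson loss
      e0, e1 = e0 / np.linalg.norm(e0), e1 / np.linalg.norm(e1)
      assert np.isclose(e0 @ e1, 0)               # loss errors distinguishable
      # loss flips photon-number parity, so a parity measurement detects it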

  10. Bad-good constraints on a polarity correspondence account for the spatial-numerical association of response codes (SNARC) and markedness association of response codes (MARC) effects.

    PubMed

    Leth-Steensen, Craig; Citta, Richie

    2016-01-01

    Performance in numerical classification tasks involving either parity or magnitude judgements is quicker when small numbers are mapped onto a left-sided response and large numbers onto a right-sided response than for the opposite mapping (i.e., the spatial-numerical association of response codes or SNARC effect). Recent research by Gevers et al. [Gevers, W., Santens, S., Dhooge, E., Chen, Q., Van den Bossche, L., Fias, W., & Verguts, T. (2010). Verbal-spatial and visuospatial coding of number-space interactions. Journal of Experimental Psychology: General, 139, 180-190] suggests that this effect also arises for vocal "left" and "right" responding, indicating that verbal-spatial coding has a role to play in determining it. Another presumably verbal-based, spatial-numerical mapping phenomenon is the linguistic markedness association of response codes (MARC) effect whereby responding in parity tasks is quicker when odd numbers are mapped onto left-sided responses and even numbers onto right-sided responses. A recent account of both the SNARC and MARC effects is based on the polarity correspondence principle [Proctor, R. W., & Cho, Y. S. (2006). Polarity correspondence: A general principle for performance of speeded binary classification tasks. Psychological Bulletin, 132, 416-442]. This account assumes that stimulus and response alternatives are coded along any number of dimensions in terms of - and + polarities with quicker responding when the polarity codes for the stimulus and the response correspond. In the present study, even-odd parity judgements were made using either "left" and "right" or "bad" and "good" vocal responses. Results indicated that a SNARC effect was indeed present for the former type of vocal responding, providing further evidence for the sufficiency of the verbal-spatial coding account for this effect. However, the decided lack of an analogous SNARC-like effect in the results for the latter type of vocal responding provides an important constraint on the presumed generality of the polarity correspondence account. On the other hand, the presence of robust MARC effects for "bad" and "good" but not "left" and "right" vocal responses is consistent with the view that such effects are due to conceptual associations between semantic codes for odd-even and bad-good (but not necessarily left-right).

  11. A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting

    PubMed Central

    Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao

    2014-01-01

    We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813
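
    To make the frozen-node idea concrete, here is a minimal sketch (our illustration with an assumed frozen set, not the authors' construction): over GF(2) the polar transform F^(⊗n) is its own inverse, so re-applying it to the decoder's hard decisions must return zeros in every frozen coordinate; nonzero values are exactly error-checking equations firing.

        import numpy as np

        def polar_matrix(n):
            F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
            G = np.array([[1]], dtype=np.uint8)
            for _ in range(n):
                G = np.kron(G, F)
            return G

        G = polar_matrix(3)                 # N = 8
        frozen = [0, 1, 2, 4]               # illustrative frozen set (assumption)
        u = np.zeros(8, dtype=np.uint8)
        u[[3, 5, 6, 7]] = [1, 0, 1, 1]      # information bits; frozen bits stay 0
        x = u @ G % 2                       # encoded word

        def frozen_syndrome(y):
            # F^(x)n is self-inverse mod 2, so (y G) restricted to the
            # frozen set must be all-zero for a consistent word.
            return (y @ G % 2)[frozen]

        print(frozen_syndrome(x))           # [0 0 0 0] -> passes all check equations
        y = x.copy(); y[5] ^= 1             # one erroneous input node
        print(frozen_syndrome(y))           # nonzero -> error checked by the frozen nodes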

  12. Ptolemy Coding Style

    DTIC Science & Technology

    2014-09-05

    shell script that checks Java code and prints out an alphabetical list of unrecognized spellings. It properly handles namesWithEmbeddedCapitalization...local/bin/ispell. To run this script, type $PTII/util/testsuite/ptspell *.java • testsuite/chkjava is a shell script for checking various other...best if the svn:native property is set. Below is how to check the values for a file named README.txt: bash-3.2$ svn proplist README.txt Properties on

  13. Analysis on applicable error-correcting code strength of storage class memory and NAND flash in hybrid storage

    NASA Astrophysics Data System (ADS)

    Matsui, Chihiro; Kinoshita, Reika; Takeuchi, Ken

    2018-04-01

    A hybrid of storage class memory (SCM) and NAND flash is a promising technology for high performance storage. Error correction is inevitable on SCM and NAND flash because their bit error rate (BER) increases with write/erase (W/E) cycles, data retention, and program/read disturb. In addition, scaling and multi-level cell technologies increase BER. However, error-correcting code (ECC) degrades storage performance because of extra memory reading and encoding/decoding time. Therefore, the applicable ECC strength of SCM and NAND flash is evaluated independently by fixing the ECC strength of one memory in the hybrid storage. As a result, a weak BCH ECC with a small number of correctable bits is recommended for the SCM in hybrid storage with large SCM capacity, because the SCM is accessed frequently. In contrast, a strong, long-latency LDPC ECC can be applied to the NAND flash in hybrid storage with large SCM capacity, because the large-capacity SCM improves the storage performance.
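
    The trade-off being evaluated can be made concrete with a small calculation (our sketch with illustrative numbers, not the paper's parameters): for a t-error-correcting block code and independent bit errors at a given raw BER, the residual codeword failure probability is the binomial tail beyond t, so the required ECC strength follows directly from the target failure rate.

        from math import comb

        def codeword_fail_prob(raw_ber, n_bits, t):
            """P(more than t of n_bits in error), i.e., a t-error-correcting
            BCH/LDPC-style code fails; assumes independent bit errors."""
            p_ok = sum(comb(n_bits, k) * raw_ber**k * (1 - raw_ber)**(n_bits - k)
                       for k in range(t + 1))
            return 1.0 - p_ok

        n = 1024 * 8                        # illustrative 1-KB codeword (assumption)
        for t in (1, 4, 8, 16, 40):
            print(t, codeword_fail_prob(1e-4, n, t))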

  14. Code development for ships -- A demonstration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ayyub, B.; Mansour, A.E.; White, G.

    1996-12-31

    A demonstration summary of a reliability-based structural design code for ships is presented for two ship types, a cruiser and a tanker. For both ship types, code requirements cover four failure modes: hull girder buckling, unstiffened plate yielding and buckling, stiffened plate buckling, and fatigue of critical details. Both serviceability and ultimate limit states are considered. Because of length limitations, only hull girder modes are presented in this paper; code requirements for the other modes will be presented in a future publication. A specific provision of the code is a safety check expression. The design variables are to be taken at their nominal values, typically values on the safe side of the respective distributions. Other safety check expressions for hull girder failure that include load combination factors, as well as consequence-of-failure factors, are considered. This paper provides a summary of safety check expressions for the hull girder modes.

  15. Advanced Dynamic Anthropomorphic Manikin (ADAM) Final Design Report

    DTIC Science & Technology

    1990-03-01

    an ejection sequence, the human body is subjected to numerous dynamic loadings from the catapult and sustaining rocket, as well as from wind blast. In...ML12S33 CMPI.B #3,(A6) Selection = 3? BNE.S MU2S34 No - check for 4 LEA.L PARMSG,A2 Else display Parity prompt MOVE.W PARCT,D2 BSR DSPMSG CLR.W

  16. A-7 Aloft Demonstration Flight Test Plan

    DTIC Science & Technology

    1975-09-01

    6095979 72A2130 Power Supply 12 VDC 6095681 72A29 NOC 72A30 ALOFT ASCU Adapter Set 72A3100 ALOFT ASCU Adapter L20-249-1 72A3110 Page assy L Bay and ASCU ...checks will also be performed for each of the following: 3.1.2.1.1 ASCU Codes. Verification will be made that all legal ASCU codes are recognized and...invalid codes inhibit attack mode. A check will also be made to verify that the ASCU codes for pilot-option weapons A-25 enable the retarded weapons

  17. Prototype data terminal: Multiplexer/demultiplexer

    NASA Technical Reports Server (NTRS)

    Leck, D. E.; Goodwin, J. E.

    1972-01-01

    The design and operation of a quad redundant data terminal and a multiplexer/demultiplexer (MDU) design are described. The most unique feature is the design of the quad redundant data terminal. This is one of the few designs where the unit is fail/op, fail/op, fail/safe. Laboratory tests confirm that the unit will operate satisfactorily with the failure of three out of four channels. Although the design utilizes state-of-the-art technology, the waveform error checks, the voting techniques, and the parity bit checks are believed to be used in unique configurations. Correct word selection routines are also novel, if not unique. The MDU design, while not redundant, utilizes the latest state-of-the-art advantages of light couplers and integrated circuit amplifiers.
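
    The combination of parity bit checks with voting can be sketched as follows (our illustration of the general fail-op/fail-op/fail-safe idea, with an assumed even-parity convention, not the terminal's actual logic): words failing parity are discarded and the survivors are majority-voted, so the unit still yields a word with three of four channels failed and falls back to a safe null result otherwise.

        from collections import Counter

        def select_word(channel_words):
            """Parity-screen each channel's word, then majority-vote the survivors."""
            def parity_ok(word):
                return bin(word).count("1") % 2 == 0   # even parity assumed
            survivors = [w for w in channel_words if parity_ok(w)]
            if not survivors:
                return None                            # fail-safe: no valid word
            return Counter(survivors).most_common(1)[0][0]

        # Three of four channels corrupted; the surviving good word is selected:
        print(select_word([0b10110100, 0b10110101, 0b01111111, 0b10110110]))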

  18. English-Afrikaans Intrasentential Code Switching: Testing a Feature Checking Account

    ERIC Educational Resources Information Center

    van Dulm, Ondene

    2009-01-01

    The work presented here aims to account for the structure of intrasentential code switching between English and Afrikaans within the framework of feature checking theory, a theory associated with minimalist syntax. Six constructions in which verb position differs between English and Afrikaans were analysed in terms of differences in the strength…

  19. Baryon currents in QCD with compact dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lucini, B.; Patella, A.; Istituto Nazionale Fisica Nucleare Sezione di Pisa, Largo Pontecorvo 3, 56126 Pisa

    2007-06-15

    On a compact space with nontrivial cycles, for sufficiently small values of the radii of the compact dimensions, SU(N) gauge theories coupled with fermions in the fundamental representation spontaneously break charge conjugation, time reversal, and parity. We show at one loop in perturbation theory that a physical signature for this phenomenon is a nonzero baryonic current wrapping around the compact directions. The persistence of this current beyond the perturbative regime is checked by lattice simulations.

  20. A high-performance Fortran code to calculate spin- and parity-dependent nuclear level densities

    NASA Astrophysics Data System (ADS)

    Sen'kov, R. A.; Horoi, M.; Zelevinsky, V. G.

    2013-01-01

    A high-performance Fortran code is developed to calculate the spin- and parity-dependent shell model nuclear level densities. The algorithm is based on the extension of methods of statistical spectroscopy and implies exact calculation of the first and second Hamiltonian moments for different configurations at fixed spin and parity. The proton-neutron formalism is used. We have applied the method to calculating the level densities for a set of nuclei in the sd-, pf-, and pf+g-model spaces. Examples of the calculations for 28Si (in the sd-model space) and 64Ge (in the pf+g-model space) are presented. To illustrate the power of the method we estimate the ground state energy of 64Ge in the larger model space pf+g, which is not accessible to direct shell model diagonalization due to the prohibitively large dimension, by comparing with the nuclear level densities at low excitation energy calculated in the smaller model space pf.
    Program summary:
    Program title: MM
    Catalogue identifier: AENM_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENM_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 193181
    No. of bytes in distributed program, including test data, etc.: 1298585
    Distribution format: tar.gz
    Programming language: Fortran 90, MPI.
    Computer: Any architecture with a Fortran 90 compiler and MPI.
    Operating system: Linux.
    RAM: Proportional to the system size; in our examples, up to 75 MB.
    Classification: 17.15.
    External routines: MPICH2 (http://www.mcs.anl.gov/research/projects/mpich2/)
    Nature of problem: Calculation of the spin- and parity-dependent nuclear level density.
    Solution method: The algorithm implies exact calculation of the first and second Hamiltonian moments for different configurations at fixed spin and parity. The code is parallelized using the Message Passing Interface and a master-slaves dynamical load-balancing approach.
    Restrictions: The program uses a two-body interaction in a restricted single-level basis, for example GXPF1A in the pf-valence space.
    Running time: Depends on the system size and the number of processors used (from 1 min to several hours).
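
    The moments method admits a compact illustration (our simplified sketch, not the distributed MM code): if each configuration α at fixed spin and parity contributes its dimension D_α, first moment E_α and width σ_α, a Gaussian partial density per configuration already reproduces the shape of the total level density; the production code refines the plain Gaussian used here.

        import numpy as np

        def level_density(E, configs):
            """rho(E) = sum_a D_a * N(E; E1_a, sigma_a) over configurations
            at fixed spin/parity; plain-Gaussian sketch of the moments method."""
            rho = np.zeros_like(E)
            for D_a, E1_a, sig_a in configs:
                rho += (D_a / (sig_a * np.sqrt(2 * np.pi))
                        * np.exp(-(E - E1_a) ** 2 / (2 * sig_a ** 2)))
            return rho

        # Hypothetical configuration moments (dimension, centroid, width in MeV):
        configs = [(120, -85.0, 6.5), (340, -78.0, 7.2), (95, -71.5, 5.8)]
        print(level_density(np.linspace(-100.0, -50.0, 6), configs))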

  1. Characterization of vibrissa germinative cells: transition of cell types.

    PubMed

    Osada, A; Kobayashi, K

    2001-12-01

    Germinative cells, small cell masses attached to the stalks of dermal papillae that are able to differentiate into the hair shaft and inner root sheath, form follicular bulb-like structures when co-cultured with dermal papilla cells. We studied the growth characteristics of germinative cells to determine the cell types in the vibrissa germinative tissue. Germinative tissues, attaching to dermal papillae, were cultured on 3T3 feeder layers. The cultured keratinocytes were harvested and transferred, equally and for two passages, onto lined dermal papilla cells (LDPC) and/or 3T3 feeder layers. The resulting germinative cells were classified into three types in the present experimental condition. Type 1 cells grow very well on either feeder layer, whereas Type 3 cells scarcely grow on either feeder layer. Type 2 cells are very conspicuous and are reversible. They grow well on 3T3 but growth is suppressed on LDPC feeder layers. The Type 2 cells that grow well on 3T3 feeder layers, however, are suppressed when transferred onto LDPC and the Type 2 cells that are suppressed on LDPC begin to grow again on 3T3. The transition of one cell type to another in vitro and the cell types that these germinative cell types correspond to in vivo is discussed. It was concluded that stem cells or their close progenitors reside in the germinative tissues of the vibrissa bulb except at late anagen-early catagen.

  2. An Information Theory-Inspired Strategy for Design of Re-programmable Encrypted Graphene-based Coding Metasurfaces at Terahertz Frequencies.

    PubMed

    Momeni, Ali; Rouhi, Kasra; Rajabalipanah, Hamid; Abdolali, Ali

    2018-04-18

    Inspired by the information theory, a new concept of re-programmable encrypted graphene-based coding metasurfaces was investigated at terahertz frequencies. A channel-coding function was proposed to convolutionally record an arbitrary information message onto unrecognizable but recoverable parity beams generated by a phase-encrypted coding metasurface. A single graphene-based reflective cell with dual-mode biasing voltages was designed to act as "0" and "1" meta-atoms, providing broadband opposite reflection phases. By exploiting graphene tunability, the proposed scheme enabled an unprecedented degree of freedom in the real-time mapping of information messages onto multiple parity beams which could not be damaged, altered, and reverse-engineered. Various encryption types such as mirroring, anomalous reflection, multi-beam generation, and scattering diffusion can be dynamically attained via our multifunctional metasurface. Besides, contrary to conventional time-consuming and optimization-based methods, this paper convincingly offers a fast, straightforward, and efficient design of diffusion metasurfaces of arbitrarily large size. Rigorous full-wave simulations corroborated the results where the phase-encrypted metasurfaces exhibited a polarization-insensitive reflectivity less than -10 dB over a broadband frequency range from 1 THz to 1.7 THz. This work reveals new opportunities for the extension of re-programmable THz-coding metasurfaces and may be of interest for reflection-type security systems, computational imaging, and camouflage technology.

  3. Utilizing photon number parity measurements to demonstrate quantum computation with cat-states in a cavity

    NASA Astrophysics Data System (ADS)

    Petrenko, A.; Ofek, N.; Vlastakis, B.; Sun, L.; Leghtas, Z.; Heeres, R.; Sliwa, K. M.; Mirrahimi, M.; Jiang, L.; Devoret, M. H.; Schoelkopf, R. J.

    2015-03-01

    Realizing a working quantum computer requires overcoming the many challenges that come with coupling large numbers of qubits to perform logical operations. These include improving coherence times, achieving high gate fidelities, and correcting for the inevitable errors that will occur throughout the duration of an algorithm. While impressive progress has been made in all of these areas, the difficulty of combining these ingredients to demonstrate an error-protected logical qubit, comprised of many physical qubits, still remains formidable. With its large Hilbert space, superior coherence properties, and single dominant error channel (single photon loss), a superconducting 3D resonator acting as a resource for a quantum memory offers a hardware-efficient alternative to multi-qubit codes [Leghtas et al., PRL 2013]. Here we build upon recent work on cat-state encoding [Vlastakis et al., Science 2013] and photon-parity jumps [Sun et al., 2014] by exploring the effects of sequential measurements on a cavity state. Employing a transmon qubit dispersively coupled to two superconducting resonators in a cQED architecture, we explore further the application of parity measurements to characterizing such a hybrid qubit/cat-state architecture. In so doing, we demonstrate the promise of integrating cat states as central constituents of future quantum codes.

  4. Proposed scheme for parallel 10Gb/s VSR system and its verilog HDL realization

    NASA Astrophysics Data System (ADS)

    Zhou, Yi; Chen, Hongda; Zuo, Chao; Jia, Jiuchun; Shen, Rongxuan; Chen, Xiongbin

    2005-02-01

    This paper proposes a novel scheme for a 10 Gb/s parallel Very Short Reach (VSR) optical communication system. The optimized scheme properly manages the SDH/SONET redundant bytes and adjusts the positions of the error-detecting and error-correcting bytes. Compared with the OIF-VSR4-01.0 proposal, the scheme adds a coding process module. The SDH/SONET frames in the transmit direction are processed as follows: (1) The Framer-Serdes Interface (FSI) receives the 16×622.08 Mb/s STM-64 frame. (2) The STM-64 frame is byte-wise striped across 12 channels; all channels are data channels. During this process, the parity bytes and CRC bytes are generated in a similar way to OIF-VSR4-01.0 and stored in the coding process module. (3) The coding process module regularly conveys the accumulated parity bytes and CRC bytes to all 12 data channels. (4) After 8B/10B coding, the 12 channels are transmitted to the parallel VCSEL array. The receive process is approximately the reverse of the transmit process. By applying this scheme to a 10 Gb/s VSR system, the frame size is reduced from 15552×12 bytes to 14040×12 bytes, which reduces the system redundancy considerably.
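
    The byte-wise striping step can be sketched as follows (our illustration; the stripe width matches the 12 data channels, but the parity and CRC definitions here are generic stand-ins for the OIF-VSR4-01.0 ones): each 12-byte stripe lands one byte per channel, while one XOR parity byte per stripe and a per-channel CRC accumulate in the coding process module for later insertion.

        import binascii
        from functools import reduce

        N_CH = 12                                     # all 12 channels carry data

        def stripe_frame(frame: bytes):
            """Byte-wise stripe a frame across 12 channels; accumulate one XOR
            parity byte per stripe and a per-channel CRC (illustrative CRC-32)."""
            channels = [bytearray() for _ in range(N_CH)]
            parity = bytearray()
            usable = len(frame) - len(frame) % N_CH
            for i in range(0, usable, N_CH):
                chunk = frame[i:i + N_CH]
                for ch, b in enumerate(chunk):
                    channels[ch].append(b)
                parity.append(reduce(lambda x, y: x ^ y, chunk))
            crcs = [binascii.crc32(ch) for ch in channels]
            return channels, parity, crcs

        channels, parity, crcs = stripe_frame(bytes(range(256)) * 9)
        print(len(channels[0]), len(parity), hex(crcs[0]))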

  5. On the photonic implementation of universal quantum gates, bell states preparation circuit and quantum LDPC encoders and decoders based on directional couplers and HNLF.

    PubMed

    Djordjevic, Ivan B

    2010-04-12

    The Bell states preparation circuit is a basic circuit required in quantum teleportation. We describe how to implement it in all-fiber technology. The basic building blocks for its implementation are directional couplers and highly nonlinear optical fiber (HNLF). Because the quantum information processing is based on delicate superposition states, it is sensitive to quantum errors. In order to enable fault-tolerant quantum computing the use of quantum error correction is unavoidable. We show how to implement in all-fiber technology encoders and decoders for sparse-graph quantum codes, and provide an illustrative example to demonstrate this implementation. We also show that arbitrary set of universal quantum gates can be implemented based on directional couplers and HNLFs.

  6. A burst-mode photon counting receiver with automatic channel estimation and bit rate detection

    NASA Astrophysics Data System (ADS)

    Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.

    2016-04-01

    We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.

  7. Miscoding and other user errors: importance of ongoing education for proper blood glucose monitoring procedures.

    PubMed

    Schrock, Linda E

    2008-07-01

    This article reviews the literature to date and reports on a new study that documented the frequency of manual code-requiring blood glucose (BG) meters that were miscoded at the time of the patient's initial appointment in a hospital-based outpatient diabetes education program. Between January 1 and May 31, 2007, the type of BG meter and the accuracy of the patient's meter code (if required) and procedure for checking BG were checked during the initial appointment with the outpatient diabetes educator. If indicated, reeducation regarding the procedure for the BG meter code entry and/or BG test was provided. Of the 65 patients who brought their meter requiring manual entry of a code number or code chip to the initial appointment, 16 (25%) were miscoded at the time of the appointment. Two additional problems, one of dead batteries and one of improperly stored test strips, were identified and corrected at the first appointment. These findings underscore the importance of checking the patient's BG meter code (if required) and procedure for testing BG at each encounter with a health care professional or providing the patient with a meter that does not require manual entry of a code number or chip to match the container of test strips (i.e., an autocode meter).

  8. An Empirical Analysis of the Cascade Secret Key Reconciliation Protocol for Quantum Key Distribution

    DTIC Science & Technology

    2011-09-01

    performance with the parity checks within each pass increasing and as a result, the processing time is expected to increase as well. A conclusion is drawn... timely manner has driven efforts to develop new key distribution methods. The most promising method is Quantum Key Distribution (QKD) and is...thank the QKD Project Team for all of the insight and support they provided in such a short time period. Thanks are especially in order for my
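
    The pass-by-pass parity checking referred to above is the heart of Cascade; here is a minimal sketch of one pass (our illustration with both keys held locally, whereas the real protocol exchanges only parities over the public channel): blocks whose parities disagree are binary-searched, one public parity per halving, until the single offending bit is found and flipped.

        def parity(bits, lo, hi):
            return sum(bits[lo:hi]) % 2

        def binary_locate(alice, bob, lo, hi):
            """BINARY step: halve a parity-mismatched block until one error is found."""
            while hi - lo > 1:
                mid = (lo + hi) // 2
                if parity(alice, lo, mid) != parity(bob, lo, mid):
                    hi = mid
                else:
                    lo = mid
            return lo

        def cascade_pass(alice, bob, block):
            """One Cascade pass: fix one error in every parity-mismatched block."""
            for lo in range(0, len(alice), block):
                hi = min(lo + block, len(alice))
                if parity(alice, lo, hi) != parity(bob, lo, hi):
                    bob[binary_locate(alice, bob, lo, hi)] ^= 1

        alice = [1, 0, 1, 1, 0, 0, 1, 0] * 4
        bob = alice.copy(); bob[5] ^= 1; bob[17] ^= 1   # two channel errors
        cascade_pass(alice, bob, block=8)
        print(bob == alice)                             # True

    A block holding an even number of errors passes its parity check unnoticed, which is why later passes reshuffle the key and use different block sizes; the parity traffic per pass grows accordingly, consistent with the processing-time trend noted in the snippet.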

  9. Implicit and Explicit Number-Space Associations Differentially Relate to Interference Control in Young Adults With ADHD

    PubMed Central

    Georges, Carrie; Hoffmann, Danielle; Schiltz, Christine

    2018-01-01

    Behavioral evidence for the link between numerical and spatial representations comes from the spatial-numerical association of response codes (SNARC) effect, consisting in faster reaction times to small/large numbers with the left/right hand respectively. The SNARC effect is, however, characterized by considerable intra- and inter-individual variability. It depends not only on the explicit or implicit nature of the numerical task, but also relates to interference control. To determine whether the prevalence of the latter relation in the elderly could be ascribed to younger individuals’ ceiling performances on executive control tasks, we determined whether the SNARC effect related to Stroop and/or Flanker effects in 26 young adults with ADHD. We observed a divergent pattern of correlation depending on the type of numerical task used to assess the SNARC effect and the type of interference control measure involved in number-space associations. Namely, stronger number-space associations during parity judgments involving implicit magnitude processing related to weaker interference control in the Stroop but not Flanker task. Conversely, stronger number-space associations during explicit magnitude classifications tended to be associated with better interference control in the Flanker but not Stroop paradigm. The association of stronger parity and magnitude SNARC effects with weaker and better interference control respectively indicates that different mechanisms underlie these relations. Activation of the magnitude-associated spatial code is irrelevant and potentially interferes with parity judgments, but in contrast assists explicit magnitude classifications. Altogether, the present study confirms the contribution of interference control to number-space associations also in young adults. It suggests that magnitude-associated spatial codes in implicit and explicit tasks are monitored by different interference control mechanisms, thereby explaining task-related intra-individual differences in number-space associations. PMID:29881363

  10. The serial message-passing schedule for LDPC decoding algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Mingshan; Liu, Shanshan; Zhou, Yuan; Jiang, Xue

    2015-12-01

    The conventional message-passing schedule for LDPC decoding algorithms is the so-called flooding schedule. Its disadvantage is that updated messages cannot be used until the next iteration, which reduces the convergence speed. To address this, the layered belief-propagation (LBP) algorithm, based on a serial message-passing schedule, has been proposed. In this paper the decoding principle of the LBP algorithm is briefly introduced, and two improved algorithms are then proposed: the grouped serial decoding algorithm (Grouped LBP) and the semi-serial decoding algorithm. They improve the decoding speed of the LBP algorithm while maintaining good decoding performance.
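
    A minimal sketch of the serial (layered) schedule, assuming a toy parity-check matrix and min-sum check updates (our illustration, not the paper's exact algorithms): each check row is one layer, and the posterior LLRs are updated in place as soon as the layer finishes, so later layers within the same iteration already see the refreshed messages, which is the source of the faster convergence.

        import numpy as np

        H = np.array([[1, 1, 0, 1, 0, 0],      # toy parity-check matrix (assumption)
                      [0, 1, 1, 0, 1, 0],
                      [1, 0, 0, 0, 1, 1],
                      [0, 0, 1, 1, 0, 1]])

        def layered_min_sum(llr_ch, H, n_iter=10):
            m, n = H.shape
            L = llr_ch.astype(float).copy()     # posterior LLRs, updated in place
            R = np.zeros((m, n))                # stored check-to-variable messages
            for _ in range(n_iter):
                for c in range(m):              # one layer per check row
                    vs = np.flatnonzero(H[c])
                    q = L[vs] - R[c, vs]        # variable-to-check messages
                    q = np.where(q == 0.0, 1e-9, q)
                    sgn = np.prod(np.sign(q)) * np.sign(q)   # extrinsic signs
                    mags = np.sort(np.abs(q))
                    ext = np.where(np.abs(q) == mags[0], mags[1], mags[0])
                    R[c, vs] = sgn * ext
                    L[vs] = q + R[c, vs]        # immediately visible to the next layer
            return (L < 0).astype(int)

        llr = np.array([2.1, -0.4, 1.3, 0.9, -1.7, 1.2])  # illustrative channel LLRs
        print(layered_min_sum(llr, H))

    Processing several rows per layer interpolates between the two extremes: a single group containing all rows recovers the flooding schedule, while one row per group is fully serial.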

  11. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure.

    PubMed

    Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc

    2014-05-01

    Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. Because the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance in case of low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low-dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV) and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Micro-CT data of eight mice, each administered with an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels, which were simulated for the real mouse data sets, the HDTV algorithm shows the best performance. At 50 mGy, the deviations from the reference obtained at 500 mGy were less than 4%. The LDPC algorithm also provides reasonable results, with deviations of less than 10% at 50 mGy, while the PCF and MKB reconstructions show larger deviations even at higher dose levels. LDPC and HDTV increase CNR and allow for quantitative evaluations even at dose levels as low as 50 mGy. The left ventricular volumes illustrate that cardiac parameters can be accurately estimated at the lowest dose levels if sophisticated algorithms are used. This allows the dose to be reduced by a factor of 10 compared to today's gold standard and opens new options for longitudinal studies of the heart.

  12. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maier, Joscha, E-mail: joscha.maier@dkfz.de; Sawall, Stefan; Kachelrieß, Marc

    2014-05-15

    Purpose: Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. Because the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance in case of low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low-dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV) and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Methods: Micro-CT data of eight mice, each administered with an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Results: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels, which were simulated for the real mouse data sets, the HDTV algorithm shows the best performance. At 50 mGy, the deviations from the reference obtained at 500 mGy were less than 4%. The LDPC algorithm also provides reasonable results, with deviations of less than 10% at 50 mGy, while the PCF and MKB reconstructions show larger deviations even at higher dose levels. Conclusions: LDPC and HDTV increase CNR and allow for quantitative evaluations even at dose levels as low as 50 mGy. The left ventricular volumes illustrate that cardiac parameters can be accurately estimated at the lowest dose levels if sophisticated algorithms are used. This allows the dose to be reduced by a factor of 10 compared to today's gold standard and opens new options for longitudinal studies of the heart.

  13. Implementation of continuous-variable quantum key distribution with discrete modulation

    NASA Astrophysics Data System (ADS)

    Hirano, Takuya; Ichikawa, Tsubasa; Matsubara, Takuto; Ono, Motoharu; Oguri, Yusuke; Namiki, Ryo; Kasai, Kenta; Matsumoto, Ryutaroh; Tsurumaru, Toyohiro

    2017-06-01

    We have developed a continuous-variable quantum key distribution (CV-QKD) system that employs discrete quadrature-amplitude modulation and homodyne detection of coherent states of light. We experimentally demonstrated automated secure key generation with a rate of 50 kbps when a quantum channel is a 10 km optical fibre. The CV-QKD system utilises a four-state and post-selection protocol and generates a secure key against the entangling cloner attack. We used a pulsed light source of 1550 nm wavelength with a repetition rate of 10 MHz. A commercially available balanced receiver is used to realise shot-noise-limited pulsed homodyne detection. We used a non-binary LDPC code for error correction (reverse reconciliation) and the Toeplitz matrix multiplication for privacy amplification. A graphical processing unit card is used to accelerate the software-based post-processing.
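
    The privacy-amplification step mentioned above has a particularly compact form; here is a minimal sketch (our illustration with arbitrary sizes): a binary Toeplitz matrix is fully determined by n+m-1 public seed bits, and multiplying it with the n-bit reconciled key over GF(2) yields the m-bit secret key, which is why the multiplication maps well onto GPU hardware.

        import numpy as np

        rng = np.random.default_rng(7)
        n, m = 64, 24                           # reconciled/secret key lengths (illustrative)
        seed = rng.integers(0, 2, n + m - 1)    # public seed defining the Toeplitz matrix

        i, j = np.indices((m, n))
        T = seed[i - j + n - 1]                 # constant along diagonals: Toeplitz

        key = rng.integers(0, 2, n)             # reconciled key shared by Alice and Bob
        secret = T @ key % 2                    # privacy-amplified m-bit key
        print(secret)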

  14. NASA Tech Briefs, April 2009

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Topics covered include: Direct-Solve Image-Based Wavefront Sensing; Use of UV Sources for Detection and Identification of Explosives; Using Fluorescent Viruses for Detecting Bacteria in Water; Gradiometer Using Middle Loops as Sensing Elements in a Low-Field SQUID MRI System; Volcano Monitor: Autonomous Triggering of In-Situ Sensors; Wireless Fluid-Level Sensors for Harsh Environments; Interference-Detection Module in a Digital Radar Receiver; Modal Vibration Analysis of Large Castings; Structural/Radiation-Shielding Epoxies; Integrated Multilayer Insulation; Apparatus for Screening Multiple Oxygen-Reduction Catalysts; Determining Aliasing in Isolated Signal Conditioning Modules; Composite Bipolar Plate for Unitized Fuel Cell/Electrolyzer Systems; Spectrum Analyzers Incorporating Tunable WGM Resonators; Quantum-Well Thermophotovoltaic Cells; Bounded-Angle Iterative Decoding of LDPC Codes; Conversion from Tree to Graph Representation of Requirements; Parallel Hybrid Vehicle Optimal Storage System; and Anaerobic Digestion in a Flooded Densified Leachbed.

  15. A Semiautomated Journal Check-In and Binding System; or Variations on a Common Theme

    PubMed Central

    Livingston, Frances G.

    1967-01-01

    The journal check-in project described here, though based on a computerized system, uses only unit-record equipment and is designed for the medium-sized library. The frequency codes used are based on the date printed on the journal rather than on the expected date of receipt, which allows for more stability in the coding scheme. The journal's volume number and issue number, which in other systems are usually predetermined by a computer, are inserted at the time of check-in. Routine claiming of overdue issues and a systematic binding schedule have also been developed as by-products. PMID:6041836

  16. Evaluating Independently Licensed Counselors' Articulation of Professional Identity Using Structural Coding

    ERIC Educational Resources Information Center

    Burns, Stephanie; Cruikshanks, Daniel R.

    2017-01-01

    Inconsistent counselor professional identity contributes to issues with licensure portability, parity in hiring practices, marketplace recognition in U.S. society and third-party payments for independently licensed counselors. Counselors could benefit from enhancing the counseling profession's identity as well as individual professional identities…

  17. Efficient Type Representation in TAL

    NASA Technical Reports Server (NTRS)

    Chen, Juan

    2009-01-01

    Certifying compilers generate proofs for low-level code that guarantee safety properties of the code. Type information is an essential part of safety proofs. But the size of type information remains a concern for certifying compilers in practice. This paper demonstrates type representation techniques in a large-scale compiler that achieves both concise type information and efficient type checking. In our 200,000-line certifying compiler, the size of type information is about 36% of the size of pure code and data for our benchmarks, which is, to the best of our knowledge, the best reported result. The type checking time is about 2% of the compilation time.

  18. Design and implementation of the next generation Landsat satellite communications system

    USGS Publications Warehouse

    Mah, Grant R.; O'Brien, Michael; Garon, Howard; Mott, Claire; Ames, Alan; Dearth, Ken

    2012-01-01

    The next generation Landsat satellite, Landsat 8 (L8), also known as the Landsat Data Continuity Mission (LDCM), uses a highly spectrally efficient modulation and data formatting approach to provide large amounts of downlink (D/L) bandwidth in a limited X-Band spectrum allocation. In addition to purely data throughput and bandwidth considerations, there were a number of additional constraints based on operational considerations for prevention of interference with the NASA Deep-Space Network (DSN) band just above the L8 D/L band, minimization of jitter contributions to prevent impacts to instrument performance, and the need to provide an interface to the Landsat International Cooperator (IC) community. A series of trade studies were conducted to consider either X- or Ka-Band, modulation type, and antenna coverage type, prior to the release of the request for proposal (RFP) for the spacecraft. Through use of the spectrally efficient rate-7/8 Low-Density Parity-Check error-correction coding and novel filtering, an X-Band frequency plan was developed that balances all the constraints and considerations, while providing world-class link performance, fitting 384 Mbits/sec of data into the 375 MHz X-Band allocation with bit-error rates better than 10⁻¹² using an earth-coverage antenna.

  19. The Essential Component in DNA-Based Information Storage System: Robust Error-Tolerating Module

    PubMed Central

    Yim, Aldrin Kay-Yuen; Yu, Allen Chi-Shing; Li, Jing-Woei; Wong, Ada In-Chun; Loo, Jacky F. C.; Chan, King Ming; Kong, S. K.; Yip, Kevin Y.; Chan, Ting-Fung

    2014-01-01

    The size of digital data is ever increasing and is expected to grow to 40,000 EB by 2020, yet the estimated global information storage capacity in 2011 is <300 EB, indicating that most of the data are transient. DNA, as a very stable nano-molecule, is an ideal massive storage device for long-term data archival. The two most notable illustrations are from Church et al. and Goldman et al., whose approaches are well-optimized for most sequencing platforms – short synthesized DNA fragments without homopolymers. Here, we suggested improvements on error handling methodology that could enable the integration of DNA-based computational processes, e.g., algorithms based on self-assembly of DNA. As a proof of concept, a picture of size 438 bytes was encoded to DNA with a low-density parity-check error-correction code. We salvaged a significant portion of sequencing reads with mutations generated during DNA synthesis and sequencing and successfully reconstructed the entire picture. A module-based programming framework, DNAcodec, with an eXtensible Markup Language-based data format was also introduced. Our experiments demonstrated the practicability of long DNA message recovery with high error tolerance, which opens the field to biocomputing and synthetic biology. PMID:25414846
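
    The homopolymer constraint mentioned above is usually met with a rotating base code; here is a minimal sketch of that mapping step (our illustration of the Goldman-style scheme, applied after whatever channel code, e.g., LDPC, produced the payload): each ternary digit selects one of the three bases that differ from the previously written base, so runs of identical nucleotides cannot occur and the mapping inverts exactly.

        BASES = "ACGT"

        def trits_to_dna(trits, start="A"):
            """Rotating code: digit t picks one of the 3 bases != previous base."""
            seq, prev = [], start
            for t in trits:
                prev = [b for b in BASES if b != prev][t]
                seq.append(prev)
            return "".join(seq)

        def dna_to_trits(seq, start="A"):
            trits, prev = [], start
            for b in seq:
                trits.append([c for c in BASES if c != prev].index(b))
                prev = b
            return trits

        payload = [0, 2, 1, 1, 0, 2, 2, 0]          # e.g., channel-coded data as trits
        dna = trits_to_dna(payload)
        print(dna, dna_to_trits(dna) == payload)    # homopolymer-free, round-trips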

  20. 3D unstructured-mesh radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morel, J.

    1997-12-31

    Three unstructured-mesh radiation transport codes are currently being developed at Los Alamos National Laboratory. The first code is ATTILA, which uses an unstructured tetrahedral mesh in conjunction with standard Sn (discrete-ordinates) angular discretization, standard multigroup energy discretization, and linear-discontinuous spatial differencing. ATTILA solves the standard first-order form of the transport equation using source iteration in conjunction with diffusion-synthetic acceleration of the within-group source iterations. ATTILA is designed to run primarily on workstations. The second code is DANTE, which uses a hybrid finite-element mesh consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. DANTE solves several second-order self-adjoint forms of the transport equation including the even-parity equation, the odd-parity equation, and a new equation called the self-adjoint angular flux equation. DANTE also offers three angular discretization options: $S_n$ (discrete-ordinates), $P_n$ (spherical harmonics), and $SP_n$ (simplified spherical harmonics). DANTE is designed to run primarily on massively parallel message-passing machines, such as the ASCI-Blue machines at LANL and LLNL. The third code is PERICLES, which uses the same hybrid finite-element mesh as DANTE, but solves the standard first-order form of the transport equation rather than a second-order self-adjoint form. PERICLES uses a standard $S_n$ discretization in angle in conjunction with trilinear-discontinuous spatial differencing, and diffusion-synthetic acceleration of the within-group source iterations. PERICLES was initially designed to run on workstations, but a version for massively parallel message-passing machines will be built. The three codes will be described in detail and computational results will be presented.

  1. Arbitrary numbers counter fair decisions: trails of markedness in card distribution.

    PubMed

    Schroeder, Philipp A; Pfister, Roland

    2015-01-01

    Converging evidence from controlled experiments suggests that the mere processing of a number and its attributes such as value or parity might affect free choice decisions between different actions. For example the spatial numerical associations of response codes (SNARC) effect indicates the magnitude of a digit to be associated with a spatial representation and might therefore affect spatial response choices (i.e., decisions between a "left" and a "right" option). At the same time, other (linguistic) features of a number such as parity are embedded into space and might likewise prime left or right responses through feature words [odd or even, respectively; markedness association of response codes (MARC) effect]. In this experiment we aimed at documenting such influences in a natural setting. We therefore assessed number-space and parity-space association effects by exposing participants to a fair distribution task in a card playing scenario. Participants drew cards, read out loud their number values, and announced their response choice, i.e., dealing it to a left vs. right player, indicated by Playmobil characters. Not only did participants prefer to deal more cards to the right player, the card's digits also affected response choices and led to a slightly but systematically unfair distribution, supported by a regular SNARC effect and counteracted by a reversed MARC effect. The experiment demonstrates the impact of SNARC- and MARC-like biases in free choice behavior through verbal and visual numerical information processing even in a setting with high external validity.

  2. Proceedings of the First NASA Formal Methods Symposium

    NASA Technical Reports Server (NTRS)

    Denney, Ewen (Editor); Giannakopoulou, Dimitra (Editor); Pasareanu, Corina S. (Editor)

    2009-01-01

    Topics covered include: Model Checking - My 27-Year Quest to Overcome the State Explosion Problem; Applying Formal Methods to NASA Projects: Transition from Research to Practice; TLA+: Whence, Wherefore, and Whither; Formal Methods Applications in Air Transportation; Theorem Proving in Intel Hardware Design; Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering; Model Checking for Autonomic Systems Specified with ASSL; A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process; Software Model Checking Without Source Code; Generalized Abstract Symbolic Summaries; A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing; Component-Oriented Behavior Extraction for Autonomic System Design; Automated Verification of Design Patterns with LePUS3; A Module Language for Typing by Contracts; From Goal-Oriented Requirements to Event-B Specifications; Introduction of Virtualization Technology to Multi-Process Model Checking; Comparing Techniques for Certified Static Analysis; Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder; jFuzz: A Concolic Whitebox Fuzzer for Java; Machine-Checkable Timed CSP; Stochastic Formal Correctness of Numerical Algorithms; Deductive Verification of Cryptographic Software; Coloured Petri Net Refinement Specification and Correctness Proof with Coq; Modeling Guidelines for Code Generation in the Railway Signaling Context; Tactical Synthesis Of Efficient Global Search Algorithms; Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems; and Formal Methods for Automated Diagnosis of Autosub 6000.

  3. Fault-tolerance in Two-dimensional Topological Systems

    NASA Astrophysics Data System (ADS)

    Anderson, Jonas T.

    This thesis is a collection of ideas with the general goal of building, at least in the abstract, a local fault-tolerant quantum computer. The connection between quantum information and topology has proven to be an active area of research in several fields. The introduction of the toric code by Alexei Kitaev demonstrated the usefulness of topology for quantum memory and quantum computation. Many quantum codes used for quantum memory are modeled by spin systems on a lattice, with operators that extract syndrome information placed on vertices or faces of the lattice. It is natural to wonder whether the useful codes in such systems can be classified. This thesis presents work that leverages ideas from topology and graph theory to explore the space of such codes. Homological stabilizer codes are introduced and it is shown that, under a set of reasonable assumptions, any qubit homological stabilizer code is equivalent to either a toric code or a color code. Additionally, the toric code and the color code correspond to distinct classes of graphs. Many systems have been proposed as candidate quantum computers. It is very desirable to design quantum computing architectures with two-dimensional layouts and low complexity in parity-checking circuitry. Kitaev's surface codes provided the first example of codes satisfying this property. They provided a new route to fault tolerance with more modest overheads and thresholds approaching 1%. The recently discovered color codes share many properties with the surface codes, such as the ability to perform syndrome extraction locally in two dimensions. Some families of color codes admit a transversal implementation of the entire Clifford group. This work investigates color codes on the 4.8.8 lattice known as triangular codes. I develop a fault-tolerant error-correction strategy for these codes in which repeated syndrome measurements on this lattice generate a three-dimensional space-time combinatorial structure. I then develop an integer program that analyzes this structure and determines the most likely set of errors consistent with the observed syndrome values. I implement this integer program to find the threshold for depolarizing noise on small versions of these triangular codes. Because the threshold for magic-state distillation is likely to be higher than this value and because logical CNOT gates can be performed by code deformation in a single block instead of between pairs of blocks, the threshold for fault-tolerant quantum memory for these codes is also the threshold for fault-tolerant quantum computation with them. Since the advent of a threshold theorem for quantum computers much has been improved upon. Thresholds have increased, architectures have become more local, and gate sets have been simplified. The overhead for magic-state distillation has been studied, but not nearly to the extent of the aforementioned topics. A method for greatly reducing this overhead, known as reusable magic states, is studied here. While examples of reusable magic states exist for Clifford gates, I give strong reasons to believe they do not exist for non-Clifford gates.
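
    The repeated syndrome measurements underlying that space-time structure are easiest to see on Kitaev's toric code rather than the 4.8.8 color code studied in the thesis; here is a minimal single-round sketch (our illustration, with an assumed edge and vertex labelling): qubits live on the edges of an L×L torus and each vertex check reports the parity of error edges touching it, so a chain of Z errors lights up exactly its two endpoints.

        import numpy as np

        L = 4                                   # L x L torus, one qubit per edge
        z_err = np.zeros((L, L, 2), dtype=int)  # (x, y, d): d=0 horizontal, d=1 vertical
        z_err[1, 2, 0] = 1                      # a two-edge Z-error chain
        z_err[1, 3, 0] = 1

        def star_syndrome(err):
            """Vertex (star) checks: parity of error edges incident on each vertex."""
            s = np.zeros((L, L), dtype=int)
            for x in range(L):
                for y in range(L):
                    s[x, y] = (err[x, y, 0] + err[x, (y - 1) % L, 0] +
                               err[x, y, 1] + err[(x - 1) % L, y, 1]) % 2
            return s

        print(star_syndrome(z_err))             # two 1s: the endpoints of the chain

    A decoder, such as the integer program described in the abstract, then matches these syndrome endpoints to the most likely set of errors.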

  4. PPLN-waveguide-based polarization entangled QKD simulator

    NASA Astrophysics Data System (ADS)

    Gariano, John; Djordjevic, Ivan B.

    2017-08-01

    We have developed a comprehensive simulator to study the polarization entangled quantum key distribution (QKD) system, which takes various imperfections into account. We assume that a type-II SPDC source using a PPLN-based nonlinear optical waveguide is used to generate entangled photon pairs, and that the system implements the BB84 protocol using two mutually unbiased bases with two orthogonal polarizations in each basis. The entangled photon pairs are then simulated to be transmitted to both parties, Alice and Bob, through the optical channel and imperfect optical elements, and onto the imperfect detectors. It is assumed that Eve has no control over the detectors, and can only gain information from the public channel and the intercept-resend attack. The secure key rate (SKR) is calculated using an upper bound and by using actual code rates of LDPC codes implementable in FPGA hardware. After verifying the simulation results available in the literature for the ideal scenario, such as the pair generation rate and the number of errors due to multiple pairs, we then introduce various imperfections. The results are then compared to previously reported experimental results where a BBO nonlinear crystal is used, and the improvements in SKRs are determined for when a PPLN waveguide is used instead.
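
    For context on how an SKR figure is typically obtained from the simulated error statistics, here is a standard asymptotic BB84 estimate (a textbook bound with an assumed reconciliation inefficiency, not the paper's exact analysis): the secret fraction per sifted bit is r = 1 − f·h2(Q) − h2(Q), where Q is the QBER and f ≥ 1 captures how far the LDPC code rate falls short of the Shannon limit.

        from math import log2

        def h2(p):
            return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

        def bb84_secret_fraction(qber, f_rec=1.1):
            """r = 1 - f_rec*h2(Q) - h2(Q); f_rec models LDPC reconciliation loss."""
            return max(0.0, 1.0 - f_rec * h2(qber) - h2(qber))

        for q in (0.01, 0.03, 0.05, 0.11):
            print(q, round(bb84_secret_fraction(q), 4))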

  5. Design and reliability analysis of high-speed and continuous data recording system based on disk array

    NASA Astrophysics Data System (ADS)

    Jiang, Changlong; Ma, Cheng; He, Ning; Zhang, Xugang; Wang, Chongyang; Jia, Huibo

    2002-12-01

    Many real-time fields require sustained high-speed data recording. This paper proposes a high-speed, sustained data recording system based on the complex RAID 3+0. The system consists of an Array Controller Module (ACM), String Controller Modules (SCMs) and a Main Controller Module (MCM). The ACM, implemented with an FPGA chip, is used to split the high-speed incoming data stream into several lower-speed streams and synchronously generate one parity code stream. It can also recover the original data stream during reading. The SCMs record the lower-speed streams from the ACM onto the SCSI disk drives. In the SCM, dual-page buffering is adopted to implement the speed-matching function and satisfy the need for sustained recording. The MCM monitors the whole system and controls the ACM and SCMs to realize the data striping, reconstruction, and recovery functions. A method for determining the system scale is presented. Finally, two new schemes, Floating Parity Group (FPG) and full 2D-Parity Group (full 2D-PG), are proposed to improve the system reliability and are compared with the Traditional Parity Group (TPG). This recording system, with its high reliability, can be used conveniently in many areas of data recording, storage, playback and remote backup.
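
    The ACM's split-and-parity function reduces to RAID-3-style byte interleaving; here is a minimal sketch (our illustration with four data channels, whereas the paper's array may differ): the incoming stream is dealt byte-by-byte across the channels, an XOR parity stream is generated alongside, and any single failed channel is rebuilt from the survivors plus parity on the read path.

        from functools import reduce

        N = 4                                   # data channels (illustrative)

        def split_with_parity(stream: bytes):
            """Write path: N lower-speed streams plus one XOR parity stream."""
            stream += bytes((-len(stream)) % N)             # pad to a full stripe
            chans = [stream[i::N] for i in range(N)]
            parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chans))
            return chans, parity

        def recover(chans, parity, lost):
            """Read path: rebuild one failed channel from the others + parity."""
            cols = zip(parity, *(c for k, c in enumerate(chans) if k != lost))
            return bytes(reduce(lambda a, b: a ^ b, col) for col in cols)

        chans, parity = split_with_parity(b"sustained high-speed recording stream")
        print(recover(chans, parity, lost=2) == chans[2])   # True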

  6. Software Model Checking of ARINC-653 Flight Code with MCP

    NASA Technical Reports Server (NTRS)

    Thompson, Sarah J.; Brat, Guillaume; Venet, Arnaud

    2010-01-01

    The ARINC-653 standard defines a common interface for Integrated Modular Avionics (IMA) code. In particular, ARINC-653 Part 1 specifies a process- and partition-management API that is analogous to POSIX threads, but with certain extensions and restrictions intended to support the implementation of high-reliability flight code. MCP is a software model checker, developed at NASA Ames, that provides capabilities for model checking C and C++ source code. In this paper, we present recent work aimed at implementing extensions to MCP that support ARINC-653, and we discuss the challenges and opportunities that consequently arise. Providing support for ARINC-653's time and space partitioning is nontrivial, though there are implicit benefits for partial order reduction possible as a consequence of the API's strict interprocess communication policy.

  7. A remote monitoring system for patients with implantable ventricular assist devices with a personal handy phone system.

    PubMed

    Okamoto, E; Shimanaka, M; Suzuki, S; Baba, K; Mitamura, Y

    1999-01-01

    The usefulness of a remote monitoring system that uses a personal handy phone for patients with an implanted artificial heart was investigated. The type of handy phone used in this study was a personal handy phone system (PHS), a system developed in Japan that uses the NTT (Nippon Telephone and Telegraph, Inc.) telephone network service. The PHS has several advantages: high-speed data transmission, low power output, little electromagnetic interference with medical devices, and easy locating of patients. In our system, patients have a mobile computer (Toshiba, Libretto 50, Kawasaki, Japan) for data transmission control between an implanted controller and a host computer (NEC, PC-9821V16) in the hospital. Information on the motor rotational angle (8 bits) and motor current (8 bits) of the implanted motor-driven heart is fed into the mobile computer from the implanted controller (Hitachi, H8/532, Yokohama, Japan) according to 32-bit command codes from the host computer. The motor current and motor rotational angle data from inside the body are framed together with a control code (frame number and parity) for data error checking and correcting at the receiving site, and the data are sent through the PHS connection to the host computer. The host computer calculates pump outflow and arterial pressure from the motor rotational angle and motor current values and displays the data as real-time waveforms. The results of this study showed that accurate data on motor rotational angle and current could be transmitted from the subjects to the host computer at a data transmission rate of 9600 bps while they were walking or driving a car. This system is useful for remote monitoring of patients with an implanted artificial heart.
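
    The control-code framing can be sketched in a few lines (our illustrative layout with a longitudinal parity byte; the paper specifies only that the control code carries a frame number and parity): the receiver XORs the whole frame and accepts it only if the result is zero, returning nothing otherwise so the host can re-request the frame.

        from functools import reduce

        def build_frame(angle: int, current: int, frame_no: int) -> bytes:
            """Frame number + 8-bit angle + 8-bit current + XOR parity byte."""
            body = bytes([frame_no & 0xFF, angle & 0xFF, current & 0xFF])
            return body + bytes([reduce(lambda a, b: a ^ b, body)])

        def check_frame(frame: bytes):
            """XOR over body and parity is zero for an intact frame."""
            if reduce(lambda a, b: a ^ b, frame) != 0:
                return None                            # corrupted: host re-requests
            return frame[0], frame[1], frame[2]

        f = build_frame(angle=0x5A, current=0x33, frame_no=7)
        print(check_frame(f))                          # (7, 90, 51)
        print(check_frame(bytes([f[0] ^ 1]) + f[1:]))  # None: corrupted in transit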

  8. ANSYS duplicate finite-element checker routine

    NASA Technical Reports Server (NTRS)

    Ortega, R.

    1995-01-01

    An ANSYS finite-element code routine to check for duplicated elements within the volume of a three-dimensional (3D) finite-element mesh was developed. The routine developed is used for checking floating elements within a mesh, identically duplicated elements, and intersecting elements with a common face. A space shuttle main engine alternate turbopump development high pressure oxidizer turbopump finite-element model check using the developed subroutine is discussed. Finally, recommendations are provided for duplicate element checking of 3D finite-element models.

  9. The spectrum of singly ionized tungsten

    NASA Astrophysics Data System (ADS)

    Husain, Abid; Jabeen, S.; Wajid, Abdul

    2018-05-01

    The ab initio calculations were performed using Cowan's computer code for the ground configuration 5d⁴6s, incorporating the other interacting even-parity configurations 5d³6s² and 5d⁵, as well as the three lowest excited configurations 5d⁴6p, 5d³6s6p and 5d³6s5f of the odd-parity matrix. The initial energy-parameter scaling was applied with Eav and ζ at 100% of the HFR values, Fk at 85%, and Gk and Rk at 75% of the HFR values. The reported level values were taken from the NIST ASD levels list. These levels were used to run a least-squares fit (LSF), which allowed the energy parameters to be adjusted to the real values and hence a better prediction to be achieved.

  10. Security in Active Networks

    DTIC Science & Technology

    1999-01-01

    Some means currently under investigation include domain-specific languages which are easy to check (e.g., PLAN), proof-carrying code [NL96, Nec97...domain-specific language coupled to an extension system with heavyweight checks. In this way, the frequent (per-packet) dynamic checks are inexpensive...to CISC architectures remains problematic. Typed assembly language [MWCG98] propagates type safety information to the assembly language level, so

  11. Computer codes for checking, plotting and processing of neutron cross-section covariance data and their application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sartori, E.; Roussin, R.W.

    This paper presents a brief review of computer codes concerned with checking, plotting, processing and using covariances of neutron cross-section data. It concentrates on those available from the computer code information centers of the United States and the OECD/Nuclear Energy Agency. Emphasis is also placed on codes using covariances for specific applications such as uncertainty analysis, data adjustment and data consistency analysis. Recent evaluations contain neutron cross section covariance information for all isotopes of major importance for technological applications of nuclear energy. It is therefore important that the available software tools needed for taking advantage of this information are widely known, as they permit the determination of better safety margins and allow the optimization of more economical designs of nuclear energy systems.

  12. Performance of Compiler-Assisted Memory Safety Checking

    DTIC Science & Technology

    2014-08-01

    software developer has in mind a particular object to which the pointer should point, the intended referent. A memory access error occurs when an ac...Performance of Compiler-Assisted Memory Safety Checking David Keaton Robert C. Seacord August 2014 TECHNICAL NOTE CMU/SEI-2014-TN...based memory safety checking tool and the performance that can be achieved with two such tools whose source code is freely available. The note then

  13. Intelligent Data Visualization for Cross-Checking Spacecraft System Diagnosis

    NASA Technical Reports Server (NTRS)

    Ong, James C.; Remolina, Emilio; Breeden, David; Stroozas, Brett A.; Mohammed, John L.

    2012-01-01

    Any reasoning system is fallible, so crew members and flight controllers must be able to cross-check automated diagnoses of spacecraft or habitat problems by considering alternate diagnoses and analyzing related evidence. Cross-checking improves diagnostic accuracy because people can apply information processing heuristics, pattern recognition techniques, and reasoning methods that the automated diagnostic system may not possess. Over time, cross-checking also enables crew members to become comfortable with how the diagnostic reasoning system performs, so the system can earn the crew's trust. We developed intelligent data visualization software that helps users cross-check automated diagnoses of system faults more effectively. The user interface displays scrollable arrays of timelines and time-series graphs, which are tightly integrated with an interactive, color-coded system schematic to show important spatial-temporal data patterns. Signal processing and rule-based diagnostic reasoning automatically identify alternate hypotheses and data patterns that support or rebut the original and alternate diagnoses. A color-coded matrix display summarizes the supporting or rebutting evidence for each diagnosis, and a drill-down capability enables crew members to quickly view graphs and timelines of the underlying data. This system demonstrates that modest amounts of diagnostic reasoning, combined with interactive, information-dense data visualizations, can accelerate system diagnosis and cross-checking.

  14. Atomic entanglement purification and concentration using coherent state input-output process in low-Q cavity QED regime.

    PubMed

    Cao, Cong; Wang, Chuan; He, Ling-Yan; Zhang, Ru

    2013-02-25

    We investigate an atomic entanglement purification protocol based on the coherent state input-output process by working in a low-Q cavity in the atom-cavity intermediate coupling region. The information of the entangled states is encoded in three-level configured single atoms confined in separated one-side optical micro-cavities. Using the coherent state input-output process, we design a two-qubit parity check module (PCM), which allows the quantum nondemolition measurement for the atomic qubits, and show its use for remote parties to distill a high-fidelity atomic entangled ensemble from an initial mixed state ensemble nonlocally. The proposed scheme can further be used for entanglement concentration of unknown atomic states. Also by exploiting the PCM, we describe a modified scheme for atomic entanglement concentration by introducing ancillary single atoms. As the coherent state input-output process is robust and scalable in realistic applications, and the detection in the PCM is based on the intensity of the outgoing coherent state, the present protocols may be widely used in large-scale, solid-state quantum repeaters and quantum information processing.

  15. Incidence and factors associated with early pregnancy losses in Simmental dairy cows.

    PubMed

    Zobel, R; Tkalčić, S; Pipal, I; Buić, V

    2011-09-01

    It has been suggested that management system, milk yield, parity, body condition score and ambient temperature can significantly influence the rate of early pregnancy loss in dairy cattle. The objectives of this study were to establish the extent and patterns of early pregnancy loss from days 32 to 86 of gestation, and to examine the relationships of management system, milk yield, ambient temperature (quartile), body condition score, bull and parity with the early pregnancy loss rate for Simmental dairy cattle in Croatia. Animals were housed in two dairy farms with two different management systems (pasture based, group A, n=435, and intensive, group B, n=425) with a total of 151 heifers and 709 cows. Overall pregnancy losses were recorded in 67 (7.79%) animals, with late embryonic losses in 30 (44.77%) and early fetal losses in 37 (55.23%) animals (P>0.05). Early pregnancy losses were twofold higher in group B when compared to group A (P<0.05). More than half of the pregnancy losses were recorded during the third quartile (P<0.05). There was no significant relationship between the paternal bull and pregnancy loss rate. Low body condition score (BCS 2-3) was associated with the highest, while BCS 3.25-4 showed the lowest pregnancy loss rate (P<0.05). The pregnancy loss rate increased in parallel with increases in parity and milk yield. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. Topological Qubits from Valence Bond Solids

    NASA Astrophysics Data System (ADS)

    Wang, Dong-Sheng; Affleck, Ian; Raussendorf, Robert

    2018-05-01

    Topological qubits based on SU(N)-symmetric valence-bond solid models are constructed. A logical topological qubit is the ground subspace with twofold degeneracy, which is due to the spontaneous breaking of a global parity symmetry. A logical Z rotation by an angle 2π/N, for any integer N > 2, is provided by a global twist operation, which is of a topological nature and protected by the energy gap. A general concatenation scheme with standard quantum error-correction codes is also proposed, which can lead to better codes. Generic error-correction properties of symmetry-protected topological order are also demonstrated.

  17. Combining Static Model Checking with Dynamic Enforcement Using the Statecall Policy Language

    NASA Astrophysics Data System (ADS)

    Madhavapeddy, Anil

    Internet protocols encapsulate a significant amount of state, making implementing the host software complex. In this paper, we define the Statecall Policy Language (SPL) which provides a usable middle ground between ad-hoc coding and formal reasoning. It enables programmers to embed automata in their code which can be statically model-checked using SPIN and dynamically enforced. The performance overheads are minimal, and the automata also provide higher-level debugging capabilities. We also describe some practical uses of SPL by describing the automata used in an SSH server written entirely in OCaml/SPL.

  18. Telepharmacy and bar-code technology in an i.v. chemotherapy admixture area.

    PubMed

    O'Neal, Brian C; Worden, John C; Couldry, Rick J

    2009-07-01

    A program using telepharmacy and bar-code technology to increase the presence of the pharmacist at a critical risk point during chemotherapy preparation is described. Telepharmacy hardware and software were acquired, and an inspection camera was placed in a biological safety cabinet to allow the pharmacy technician to take digital photographs at various stages of the chemotherapy preparation process. Once the pharmacist checks the medication vials' agreement with the work label, the technician takes the product into the biological safety cabinet, where the appropriate patient is selected from the pending work list, a queue of patient orders sent from the pharmacy information system. The technician then scans the bar code on the vial. Assuming the bar code matches, the technician photographs the work label, vials, diluents and fluids to be used, and the syringe (before injecting the contents into the bag) along with the vial. The pharmacist views all images as a part of the final product-checking process. This process allows the pharmacist to verify that the correct quantity of medication was transferred from the primary source to a secondary container without being physically present at the time of transfer. Telepharmacy and bar coding provide a means to improve the accuracy of chemotherapy preparation by decreasing the likelihood of using the incorrect product or quantity of drug. The system facilitates the reading of small product labels and removes the need for a pharmacist to handle contaminated syringes and vials when checking the final product.
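
    As an illustrative sketch only: the bar-code step described above acts as a gate in which preparation proceeds only when the scanned vial matches a product on the selected patient's pending order. The work-list structure and the NDC-style identifiers below are hypothetical, not taken from the hospital's actual system. A minimal version in Python:

        # Hypothetical pending work list mapping patients to ordered product codes.
        PENDING_WORK_LIST = {"patient-017": {"00069-0047-30"}}

        def barcode_gate(patient_id: str, scanned_code: str) -> bool:
            """Allow preparation only if the scanned vial matches this patient's order."""
            return scanned_code in PENDING_WORK_LIST.get(patient_id, set())

        print(barcode_gate("patient-017", "00069-0047-30"))  # True: continue to photographs
        print(barcode_gate("patient-017", "00069-9999-99"))  # False: wrong product, stop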

  19. Towards a Certified Lightweight Array Bound Checker for Java Bytecode

    NASA Technical Reports Server (NTRS)

    Pichardie, David

    2009-01-01

    Dynamic array bound checks are crucial elements for the security of a Java Virtual Machine. These dynamic checks are, however, expensive, and several static analysis techniques have been proposed to eliminate explicit bounds checks. Such analyses require advanced numerical and symbolic manipulations that 1) penalize bytecode loading or dynamic compilation, and 2) complicate the trusted computing base. Following the Foundational Proof Carrying Code methodology, our goal is to provide a lightweight bytecode verifier for eliminating array bound checks that is both efficient and trustable. In this work, we define a generic relational program analysis for an imperative, stack-oriented bytecode language with procedures, arrays and global variables and instantiate it with a relational abstract domain of polyhedra. The analysis has automatic inference of loop invariants and method pre-/post-conditions, and efficient checking of analysis results by a simple checker. Invariants, which can be large, can be specialized for proving a safety policy using an automatic pruning technique which reduces their size. The result of the analysis can be checked efficiently by annotating the program with parts of the invariant together with certificates of polyhedral inclusions. The resulting checker is sufficiently simple to be entirely certified within the Coq proof assistant for a simple fragment of the Java bytecode language. During the talk, we will also report on our ongoing effort to scale this approach for the full sequential JVM.

  20. Heave and Roll Response of Free Floating Bodies of Cylindrical Shape

    DTIC Science & Technology

    1977-02-01


  1. MINIMUM CHECK LIST FOR MECHANICAL PLANS AND SPECIFICATIONS.

    ERIC Educational Resources Information Center

    PIERCE, J.L.

    THIS BULLETIN HAS BEEN PREPARED FOR USE AS A MINIMUM CHECK LIST IN THE DEVELOPMENT AND REVIEW OF MECHANICAL AND ELECTRICAL PLANS AND SPECIFICATIONS BY ENGINEERS, ARCHITECTS, AND SUPERINTENDENTS IN PLANNING PUBLIC SCHOOL FACILITIES. THREE LEVELS OF GUIDELINES ARE MENTIONED--(1) MANDATORY BECAUSE OF LAW, CODE, OR REGULATION, (2) RECOMMENDED AS MOST…

  2. A Categorization of Dynamic Analyzers

    NASA Technical Reports Server (NTRS)

    Lujan, Michelle R.

    1997-01-01

    Program analysis techniques and tools are essential to the development process because of the support they provide in detecting errors and deficiencies at different phases of development. The types of information rendered through analysis include the following: statistical measurements of code, type checks, dataflow analysis, consistency checks, test data, verification of code, and debugging information. Analyzers can be broken into two major categories: dynamic and static. Static analyzers examine programs with respect to syntax errors and structural properties. This includes gathering statistical information on program content, such as the number of lines of executable code, source lines, and cyclomatic complexity. In addition, static analyzers provide the ability to check for the consistency of programs with respect to variables. Dynamic analyzers, in contrast, are dependent on input and the execution of a program, providing the ability to find errors that cannot be detected through the use of static analysis alone. Dynamic analysis provides information on the behavior of a program rather than on the syntax. Both types of analysis detect errors in a program, but dynamic analyzers accomplish this through run-time behavior. This paper focuses on the following broad classification of dynamic analyzers: 1) Metrics; 2) Models; and 3) Monitors. Metrics are those analyzers that provide measurement. The next category, models, captures those analyzers that present the state of the program to the user at specified points in time. The last category, monitors, checks specified code based on some criteria. The paper discusses each classification and the techniques that are included under them. In addition, the role of each technique in the software life cycle is discussed. Familiarization with the tools that measure, model and monitor programs provides a framework for understanding the program's dynamic behavior from different perspectives through analysis of the input/output data.

  3. Static Verification for Code Contracts

    NASA Astrophysics Data System (ADS)

    Fähndrich, Manuel

    The Code Contracts project [3] at Microsoft Research enables programmers on the .NET platform to author specifications in existing languages such as C# and VisualBasic. To take advantage of these specifications, we provide tools for documentation generation, runtime contract checking, and static contract verification.

  4. Agricultural Baseline (BL0) scenario

    DOE Data Explorer

    Davis, Maggie R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000181319328); Hellwinckel, Chad M [University of Tennessee] (ORCID:0000000173085058); Eaton, Laurence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000312709626); Turhollow, Anthony [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000228159350); Brandt, Craig [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000214707379); Langholtz, Matthew H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000281537154)

    2016-07-13

    Scientific reason for data generation: to serve as the reference case for the BT16 volume 1 agricultural scenarios. The agricultural baseline runs from 2015 through 2040; a starting year of 2014 is used. Date the data set was last modified: 02/12/2016. How each parameter was produced (methods), format, and relationship to other data in the data set: the simulation was developed without offering a farmgate price to energy crops or residues (i.e., building on both the USDA 2015 baseline and the agricultural census data (USDA NASS 2014)). Data generated are .txt output files by year, simulation identifier, and county code (1-3109). Instruments used: POLYSYS (version POLYS2015_V10_alt_JAN22B) supplied by the University of Tennessee APAC. The quality assurance and quality control that have been applied: • Check for negative planted area, harvested area, production, yield and cost values. • Check if harvested area exceeds planted area for annuals. • Check FIPS codes.
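
    The three QA/QC checks listed above translate directly into table validations. The sketch below, in Python with pandas, assumes illustrative column names (planted_area, harvested_area, production, yield_val, cost, crop_type, fips) rather than the actual POLYSYS output schema:

        import pandas as pd

        def qa_check(df: pd.DataFrame) -> list:
            """Return the QA problems found in one POLYSYS-style output table."""
            problems = []
            # 1) No negative planted area, harvested area, production, yield or cost.
            for col in ("planted_area", "harvested_area", "production", "yield_val", "cost"):
                if (df[col] < 0).any():
                    problems.append("negative values in " + col)
            # 2) Harvested area must not exceed planted area for annual crops.
            annuals = df[df["crop_type"] == "annual"]
            if (annuals["harvested_area"] > annuals["planted_area"]).any():
                problems.append("harvested area exceeds planted area for annuals")
            # 3) Every county code must be a recognized FIPS code.
            known_fips = {1001, 1003}  # placeholder; load the full county FIPS list here
            if not set(df["fips"]).issubset(known_fips):
                problems.append("unrecognized FIPS codes")
            return problems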

  5. ’Offsets’ for NATO Procurement of the Airborne Warning and Control System: Opportunities and Implications

    DTIC Science & Technology

    1976-02-01

    the Anglo-French Gazelle, the German B9715, and the Italian Ag129--all appear responsive to a U.S. need for an Advanced Scout Helicopter (ASH)...currencies by OECD using IMF parity rates, or average of annual range of flexible rates. code (individual chemicals, such as sulphuric acid, etc.), it is

  6. The external kink mode in diverted tokamaks

    NASA Astrophysics Data System (ADS)

    Turnbull, A. D.; Hanson, J. M.; Turco, F.; Ferraro, N. M.; Lanctot, M. J.; Lao, L. L.; Strait, E. J.; Piovesan, P.; Martin, P.

    2016-06-01

    The resistive kink behaves much like the ideal kink with predominantly kink or interchange parity and no real sign of a tearing component. However, the growth rates scale with a fractional power of the resistivity near the surface. The results have a direct bearing on the conventional edge cutoff procedures used in most ideal MHD codes, as well as implications for ITER and for future reactor options.

  7. To amend the Internal Revenue Code of 1986 to extend the rule providing parity for exclusion from income for employer-provided mass transit and parking benefits.

    THOMAS, 113th Congress

    Rep. Norton, Eleanor Holmes [D-DC-At Large

    2013-12-12

    House - 12/12/2013 Referred to the House Committee on Ways and Means. (All Actions) Notes: For further action, see H.R.5771, which became Public Law 113-295 on 12/19/2014. Tracker: This bill has the status Introduced.

  8. Physical implementation of protected qubits

    NASA Astrophysics Data System (ADS)

    Douçot, B.; Ioffe, L. B.

    2012-07-01

    We review the general notion of topological protection of quantum states in spin models and its relation with the ideas of quantum error correction. We show that topological protection can be viewed as a Hamiltonian realization of error correction: for a quantum code for which the minimal number of errors that remain undetected is N, the corresponding Hamiltonian model of the effects of the environment noise appears only in the Nth order of the perturbation theory. We discuss the simplest model Hamiltonians that realize topological protection and their implementation in superconducting arrays. We focus on two dual realizations: in one the protected state is stored in the parity of the Cooper pair number, in the other, in the parity of the flux number. In both cases the superconducting arrays allow a number of fault-tolerant operations that should make the universal quantum computation possible.

  9. Robust Deterministic Controlled Phase-Flip Gate and Controlled-Not Gate Based on Atomic Ensembles Embedded in Double-Sided Optical Cavities

    NASA Astrophysics Data System (ADS)

    Liu, A.-Peng; Cheng, Liu-Yong; Guo, Qi; Zhang, Shou

    2018-02-01

    We first propose a scheme for controlled phase-flip gate between a flying photon qubit and the collective spin wave (magnon) of an atomic ensemble assisted by double-sided cavity quantum systems. Then we propose a deterministic controlled-not gate on magnon qubits with parity-check building blocks. Both the gates can be accomplished with 100% success probability in principle. Atomic ensemble is employed so that light-matter coupling is remarkably improved by collective enhancement. We assess the performance of the gates and the results show that they can be faithfully constituted with current experimental techniques.

  10. Development of a CFD Code for Analysis of Fluid Dynamic Forces in Seals

    NASA Technical Reports Server (NTRS)

    Athavale, Mahesh M.; Przekwas, Andrzej J.; Singhal, Ashok K.

    1991-01-01

    The aim is to develop a 3-D computational fluid dynamics (CFD) code for the analysis of fluid flow in cylindrical seals and evaluation of the dynamic forces on the seals. This code is expected to serve as a scientific tool for detailed flow analysis as well as a check for the accuracy of the 2D industrial codes. The features necessary in the CFD code are outlined. The initial focus was to develop or modify and implement new techniques and physical models. These include collocated grid formulation, rotating coordinate frames and moving grid formulation. Other advanced numerical techniques include higher order spatial and temporal differencing and an efficient linear equation solver. These techniques were implemented in a 2D flow solver for initial testing. Several benchmark test cases were computed using the 2D code, and the results of these were compared to analytical solutions or experimental data to check the accuracy. Tests presented here include planar wedge flow, flow due to an enclosed rotor, and flow in a 2D seal with a whirling rotor. Comparisons between numerical and experimental results for an annular seal and a 7-cavity labyrinth seal are also included.

  11. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits

    PubMed Central

    Córcoles, A.D.; Magesan, Easwar; Srinivasan, Srikanth J.; Cross, Andrew W.; Steffen, M.; Gambetta, Jay M.; Chow, Jerry M.

    2015-01-01

    The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code. PMID:25923200

  12. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits.

    PubMed

    Córcoles, A D; Magesan, Easwar; Srinivasan, Srikanth J; Cross, Andrew W; Steffen, M; Gambetta, Jay M; Chow, Jerry M

    2015-04-29

    The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code.
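
    As a toy classical analogue of the protocol's parity measurements (illustrating only the syndrome idea, not the superconducting-qubit circuit), a ZZ-type check flags a bit flip on an encoded pair without revealing the encoded value:

        def zz_parity(q0: int, q1: int) -> int:
            """0 = even parity (no bit-flip syndrome), 1 = error detected."""
            return q0 ^ q1

        codeword = (1, 1)             # encoded pair prepared with even parity
        corrupted = (1, 0)            # a single bit-flip error
        print(zz_parity(*codeword))   # 0: syndrome clear
        print(zz_parity(*corrupted))  # 1: error detected; in the quantum protocol this
                                      # outcome comes from a non-demolition measurement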

  13. Software Model Checking Without Source Code

    NASA Technical Reports Server (NTRS)

    Chaki, Sagar; Ivers, James

    2009-01-01

    We present a framework, called AIR, for verifying safety properties of assembly language programs via software model checking. AIR extends the applicability of predicate abstraction and counterexample guided abstraction refinement to the automated verification of low-level software. By working at the assembly level, AIR allows verification of programs for which source code is unavailable, such as legacy and COTS software, and programs that use features, such as pointers, structures, and object-orientation, that are problematic for source-level software verification tools. In addition, AIR makes no assumptions about the underlying compiler technology. We have implemented a prototype of AIR and present encouraging results on several non-trivial examples.

  14. Extremely accurate sequential verification of RELAP5-3D

    DOE PAGES

    Mesina, George L.; Aumiller, David L.; Buschman, Francis X.

    2015-11-19

    Large computer programs like RELAP5-3D solve complex systems of governing, closure and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user-interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also tests that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, multiple cases in a single code execution, and modes of coupled/uncoupled operation. In conclusion, mathematical analyses of the adequacy of the checks used in the comparisons are provided.
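
    The sequential-verification step reduces to comparing the same calculations from consecutive code versions. A minimal sketch, with an assumed result format and tolerance (RELAP5-3D's actual comparison machinery is far more elaborate):

        def sequential_verify(baseline: dict, candidate: dict, tol: float = 0.0) -> list:
            """Return the names of calculations that changed between versions."""
            changed = []
            for name, old_value in baseline.items():
                new_value = candidate.get(name)
                if new_value is None or abs(new_value - old_value) > tol:
                    changed.append(name)
            return changed

        # Identical results across versions: no unintended changes introduced.
        print(sequential_verify({"peak_temp": 612.4}, {"peak_temp": 612.4}))  # []
        print(sequential_verify({"peak_temp": 612.4}, {"peak_temp": 612.5}))  # ['peak_temp']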

  15. Framework for Evaluating Loop Invariant Detection Games in Relation to Automated Dynamic Invariant Detectors

    DTIC Science & Technology

    2015-09-01

    ...checks each of these assertions for detectability by Daikon. The checker is an Excel Visual Basic for Applications (VBA) script that checks the

  16. 19 CFR 24.24 - Harbor maintenance fee.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...-662, as amended] Port code, port name and state Port descriptions and notations Alabama 1901—Mobile... http://www.pay.gov or, alternatively, mailed with a single check or money order payable to U.S. Customs... other duty, tax, or fee is imposed on the shipment, and the fee exceeds $3, a check or money order for...

  17. 19 CFR 24.24 - Harbor maintenance fee.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...-662, as amended] Port code, port name and state Port descriptions and notations Alabama 1901—Mobile... http://www.pay.gov or, alternatively, mailed with a single check or money order payable to U.S. Customs... other duty, tax, or fee is imposed on the shipment, and the fee exceeds $3, a check or money order for...

  18. 19 CFR 24.24 - Harbor maintenance fee.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...-662, as amended] Port code, port name and state Port descriptions and notations Alabama 1901—Mobile... http://www.pay.gov or, alternatively, mailed with a single check or money order payable to U.S. Customs... other duty, tax, or fee is imposed on the shipment, and the fee exceeds $3, a check or money order for...

  19. 19 CFR 24.24 - Harbor maintenance fee.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...-662, as amended] Port code, port name and state Port descriptions and notations Alabama 1901—Mobile... http://www.pay.gov or, alternatively, mailed with a single check or money order payable to U.S. Customs... other duty, tax, or fee is imposed on the shipment, and the fee exceeds $3, a check or money order for...

  20. 26 CFR 301.6867-1 - Presumptions where owner of large amount of cash is not identified.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Revenue Code, relating to abatements, credits, and refunds, and may not institute a suit for refund in... 6532(c), relating to the 9-month statute of limitations for suits under section 7426. In addition, the...) Postage stamps; (F) Traveler's checks in any form; (G) Negotiable instruments (including personal checks...

  1. 26 CFR 301.6867-1 - Presumptions where owner of large amount of cash is not identified.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Revenue Code, relating to abatements, credits, and refunds, and may not institute a suit for refund in... 6532(c), relating to the 9-month statute of limitations for suits under section 7426. In addition, the...) Postage stamps; (F) Traveler's checks in any form; (G) Negotiable instruments (including personal checks...

  2. Abstraction Techniques for Parameterized Verification

    DTIC Science & Technology

    2006-11-01

    approach for applying model checking to unbounded systems is to extract finite state models from them using conservative abstraction techniques....model checking to complex pieces of code like device drivers depends on the use of abstraction methods. An abstraction method extracts a small finite

  3. Automatic Testcase Generation for Flight Software

    NASA Technical Reports Server (NTRS)

    Bushnell, David Henry; Pasareanu, Corina; Mackey, Ryan M.

    2008-01-01

    The TacSat3 project is applying Integrated Systems Health Management (ISHM) technologies to an Air Force spacecraft for operational evaluation in space. The experiment will demonstrate the effectiveness and cost of ISHM and vehicle systems management (VSM) technologies through onboard operation for extended periods. We present two approaches to automatic testcase generation for ISHM: 1) A blackbox approach that views the system as a blackbox, and uses a grammar-based specification of the system's inputs to automatically generate *all* inputs that satisfy the specifications (up to prespecified limits); these inputs are then used to exercise the system. 2) A whitebox approach that performs analysis and testcase generation directly on a representation of the internal behaviour of the system under test. The enabling technologies for both these approaches are model checking and symbolic execution, as implemented in the Ames' Java PathFinder (JPF) tool suite. Model checking is an automated technique for software verification. Unlike simulation and testing which check only some of the system executions and therefore may miss errors, model checking exhaustively explores all possible executions. Symbolic execution evaluates programs with symbolic rather than concrete values and represents variable values as symbolic expressions. We are applying the blackbox approach to generating input scripts for the Spacecraft Command Language (SCL) from Interface and Control Systems. SCL is an embedded interpreter for controlling spacecraft systems. TacSat3 will be using SCL as the controller for its ISHM systems. We translated the SCL grammar into a program that outputs scripts conforming to the grammars. Running JPF on this program generates all legal input scripts up to a prespecified size. Script generation can also be targeted to specific parts of the grammar of interest to the developers. These scripts are then fed to the SCL Executive. ICS's in-house coverage tools will be run to measure code coverage. Because the scripts exercise all parts of the grammar, we expect them to provide high code coverage. This blackbox approach is suitable for systems for which we do not have access to the source code. We are applying whitebox test generation to the Spacecraft Health INference Engine (SHINE) that is part of the ISHM system. In TacSat3, SHINE will execute an on-board knowledge base for fault detection and diagnosis. SHINE converts its knowledge base into optimized C code which runs onboard TacSat3. SHINE can translate its rules into an intermediate representation (Java) suitable for analysis with JPF. JPF will analyze SHINE's Java output using symbolic execution, producing testcases that can provide either complete or directed coverage of the code. Automatically generated test suites can provide full code coverage and be quickly regenerated when code changes. Because our tools analyze executable code, they fully cover the delivered code, not just models of the code. This approach also provides a way to generate tests that exercise specific sections of code under specific preconditions. This capability gives us more focused testing of specific sections of code.
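
    To illustrate the blackbox idea of enumerating *all* inputs a grammar admits up to a size bound (the SCL grammar itself is not reproduced here; the toy grammar below is an assumption):

        import itertools

        TOY_GRAMMAR = {  # hypothetical two-rule command grammar
            "verb": ["SET", "GET"],
            "target": ["HEATER", "VALVE"],
        }

        def all_scripts(max_commands: int):
            """Yield every script of up to max_commands commands the grammar admits."""
            commands = [f"{v} {t}" for v in TOY_GRAMMAR["verb"] for t in TOY_GRAMMAR["target"]]
            for n in range(1, max_commands + 1):
                for combo in itertools.product(commands, repeat=n):
                    yield "; ".join(combo)

        print(sum(1 for _ in all_scripts(2)))  # 20: four 1-command plus sixteen 2-command scripts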

  4. Iterative channel decoding of FEC-based multiple-description codes.

    PubMed

    Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B

    2012-03-01

    Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.
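
    The erasure property the iterative decoder builds on can be stated in one line: an RS(n, k) erasure code protecting a description recovers it whenever at most n - k symbols are lost. A sketch with illustrative parameters (not those of the paper):

        def description_recoverable(n: int, k: int, erased: int) -> bool:
            """An (n, k) Reed-Solomon erasure code tolerates up to n - k erasures."""
            return erased <= n - k

        print(description_recoverable(30, 20, 10))  # True: decoded, feeds the next iteration
        print(description_recoverable(30, 20, 11))  # False: this description is lost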

  5. NOAA Weather Radio

    Science.gov Websites

    SAME does not make an audible beep. However, it will display the words "CHECK RECEPTION" until it...

  6. Suite of Benchmark Tests to Conduct Mesh-Convergence Analysis of Nonlinear and Non-constant Coefficient Transport Codes

    NASA Astrophysics Data System (ADS)

    Zamani, K.; Bombardelli, F. A.

    2014-12-01

    Verification of geophysics codes is imperative to avoid serious academic as well as practical consequences. In case access to a given source code is not possible, the Method of Manufactured Solutions (MMS) cannot be employed in code verification. In contrast, employing the Method of Exact Solutions (MES) has several practical advantages. In this research, we first provide four new one-dimensional analytical solutions designed for code verification; these solutions are able to uncover particular imperfections of advection-diffusion-reaction (ADR) solvers, such as mishandling of nonlinear advection, diffusion or source terms, as well as of non-constant-coefficient equations. After that, we provide a solution of Burgers' equation in a novel setup. The proposed solutions satisfy the continuity of mass for the ambient flow, which is a crucial factor for coupled hydrodynamics-transport solvers. Then, we use the derived analytical solutions for code verification. To clarify gray-literature issues in the verification of transport codes, we designed a comprehensive test suite to uncover any imperfection in transport solvers via a hierarchical increase in the level of the tests' complexity. The test suite includes hundreds of unit tests and system tests that check the portions of the code piece by piece. Example checks start with a simple case of unidirectional advection, then move to bidirectional advection and tidal flow, and build up to nonlinear cases. We design tests to check nonlinearity in velocity, dispersivity and reactions. The concealing effect of scales (Peclet and Damkohler numbers) on the mesh-convergence study and appropriate remedies are also discussed. For the cases in which appropriate benchmarks for a mesh-convergence study are not available, we utilize symmetry. Auxiliary subroutines for automation of the test suite and report generation were designed. All in all, the test package is not only a robust tool for code verification but also provides comprehensive insight into the capabilities of ADR solvers. Such information is essential for any rigorous computational modeling of the ADR equation for surface/subsurface pollution transport. We also convey our experiences in finding several errors which were not detectable with routine verification techniques.
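
    A mesh-convergence check against an exact (MES) benchmark typically reduces to estimating the observed order of accuracy from the errors on two grids; the formula below is standard, while the numbers are illustrative:

        import math

        def observed_order(err_coarse: float, err_fine: float, ratio: float = 2.0) -> float:
            """Observed convergence order p when the grid spacing is refined by `ratio`."""
            return math.log(err_coarse / err_fine) / math.log(ratio)

        # Halving h cuts the error from 4.0e-3 to 1.0e-3: p = 2, i.e. second-order convergence.
        print(observed_order(4.0e-3, 1.0e-3))  # 2.0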

  7. Defining the Dormant Tumor Microenvironment for Breast Cancer Prevention and Treatment

    DTIC Science & Technology

    2011-09-01

    Each group will include 12 rats. b) Breed rats in the parous group. After parturition, normalize the number of pups to eight. Ten days after...solution ultrasonication assisted tryptic digestion (UATD) method. Parity arm completed. b) Analyzed the digested mammary ECM samples from

  8. 76 FR 71315 - Endangered and Threatened Species; Take of Anadromous Fish

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-17

    ... the fish and would then sample them for biological information (fin tissue and scale samples). They..., measured, weighed, tissue-sampled, and checked for external marks and coded-wire tags depending on the.... Then the researchers would remove and preserve fish body tissues, otoliths, and coded wire tags (from...

  9. High spin structure and intruder configurations in 31P

    NASA Astrophysics Data System (ADS)

    Ionescu-Bujor, M.; Iordachescu, A.; Napoli, D. R.; Lenzi, S. M.; Mărginean, N.; Otsuka, T.; Utsuno, Y.; Ribas, R. V.; Axiotis, M.; Bazzacco, D.; Bizzeti-Sona, A. M.; Bizzeti, P. G.; Brandolini, F.; Bucurescu, D.; Cardona, M. A.; De Angelis, G.; De Poli, M.; Della Vedova, F.; Farnea, E.; Gadea, A.; Hojman, D.; Kalfas, C. A.; Kröll, Th.; Lunardi, S.; Martínez, T.; Mason, P.; Pavan, P.; Quintana, B.; Alvarez, C. Rossi; Ur, C. A.; Vlastou, R.; Zilio, S.

    2006-02-01

    The nucleus 31P has been studied in the 24Mg(16O,2αp) reaction with a 70-MeV 16O beam. A complex level scheme extended up to spins 17/2+ and 15/2-, on positive and negative parity, respectively, has been established. Lifetimes for the new states have been investigated by the Doppler shift attenuation method. Two shell-model calculations have been performed to describe the experimental data, one by using the code ANTOINE in a valence space restricted to the sd shell, and the other by applying the Monte Carlo shell model in a valence space including the sd-fp shells. The latter calculation indicates that intruder excitations, involving the promotion of a T=0 proton-neutron pair to the fp shell, play a dominant role in the structure of the positive-parity high-spin states of 31P.

  10. Telemetry Standards, RCC Standard 106-17. Chapter 8. Digital Data Bus Acquisition Formatting Standard

    DTIC Science & Technology

    2017-07-01

    Contents fragment: 8.4.1 Characteristics of a Singular Composite Output Signal; 8.5 Single Bus Track Spread Recording Format; 8.5.1 Single Bus Recording Technique Characteristics. Glossary: FCS, frame check sequence; HDDR, high-density digital recording; MIL-STD, Military Standard; msb, most significant bit; PCM, pulse code modulation.

  11. Sierra/SolidMechanics 4.48 Verification Tests Manual.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plews, Julia A.; Crane, Nathan K; de Frias, Gabriel Jose

    2018-03-01

    Presented in this document is a small portion of the tests that exist in the Sierra/SolidMechanics (Sierra/SM) verification test suite. Most of these tests are run nightly with the Sierra/SM code suite, and the results of the test are checked versus the correct analytical result. For each of the tests presented in this document, the test setup, a description of the analytic solution, and comparison of the Sierra/SM code results to the analytic solution is provided. Mesh convergence is also checked on a nightly basis for several of these tests. This document can be used to confirm that a given code capability is verified or referenced as a compilation of example problems. Additional example problems are provided in the Sierra/SM Example Problems Manual. Note, many other verification tests exist in the Sierra/SM test suite, but have not yet been included in this manual.

  12. Sierra/SolidMechanics 4.48 Verification Tests Manual.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plews, Julia A.; Crane, Nathan K.; de Frias, Gabriel Jose

    Presented in this document is a small portion of the tests that exist in the Sierra/SolidMechanics (Sierra/SM) verification test suite. Most of these tests are run nightly with the Sierra/SM code suite, and the results of the test are checked versus the correct analytical result. For each of the tests presented in this document, the test setup, a description of the analytic solution, and comparison of the Sierra/SM code results to the analytic solution is provided. Mesh convergence is also checked on a nightly basis for several of these tests. This document can be used to confirm that a given code capability is verified or referenced as a compilation of example problems. Additional example problems are provided in the Sierra/SM Example Problems Manual. Note, many other verification tests exist in the Sierra/SM test suite, but have not yet been included in this manual.

  13. ITS version 5.0 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user-friendliness of the software has been enhanced through increased internal error checking and improved code portability.

  14. Continuous integration and quality control for scientific software

    NASA Astrophysics Data System (ADS)

    Neidhardt, A.; Ettl, M.; Brisken, W.; Dassing, R.

    2013-08-01

    Modern software has to be stable, portable, fast and reliable. This is going to be more and more important for scientific software as well. But this requires a sophisticated way to inspect, check and evaluate the quality of source code with a suitable, automated infrastructure. A centralized server with a software repository and a version control system is one essential part, to manage the code basis and to control the different development versions. While each project can be compiled separately, the whole code basis can also be compiled with one central “Makefile”. This is used to create automated, nightly builds. Additionally, all sources are inspected automatically with static code analysis and inspection tools, which check for well-known error situations, memory and resource leaks, performance issues, and style issues. In combination with an automatic documentation generator it is possible to create the developer documentation directly from the code and the inline comments. All reports and generated information are presented as HTML pages on a Web server. Because this environment increased the stability and quality of the software of the Geodetic Observatory Wettzell tremendously, it is now also available for scientific communities. One regular customer is already the developer group of the DiFX software correlator project.
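
    The nightly pipeline described above can be sketched as a driver that builds the whole code basis, runs static analysis, and regenerates documentation; cppcheck and doxygen below stand in for the observatory's unnamed tools, so the toolchain is an assumption:

        import subprocess

        NIGHTLY_STEPS = [
            ["make", "all"],                      # one central Makefile builds everything
            ["cppcheck", "--enable=all", "src"],  # static analysis: leaks, performance, style
            ["doxygen", "Doxyfile"],              # developer docs from code and inline comments
        ]

        def nightly_build() -> bool:
            """Run every step; the outcomes would feed the HTML report page."""
            ok = True
            for cmd in NIGHTLY_STEPS:
                ok = subprocess.run(cmd).returncode == 0 and ok
            return ok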

  15. Majorana fermion surface code for universal quantum computation

    DOE PAGES

    Vijay, Sagar; Hsieh, Timothy H.; Fu, Liang

    2015-12-10

    In this study, we introduce an exactly solvable model of interacting Majorana fermions realizing Z2 topological order with a Z2 fermion parity grading and lattice symmetries permuting the three fundamental anyon types. We propose a concrete physical realization by utilizing quantum phase slips in an array of Josephson-coupled mesoscopic topological superconductors, which can be implemented in a wide range of solid-state systems, including topological insulators, nanowires, or two-dimensional electron gases, proximitized by s-wave superconductors. Our model finds a natural application as a Majorana fermion surface code for universal quantum computation, with a single-step stabilizer measurement requiring no physical ancilla qubits, increased error tolerance, and simpler logical gates than a surface code with bosonic physical qubits. We thoroughly discuss protocols for stabilizer measurements, encoding and manipulating logical qubits, and gate implementations.

  16. Deep-space and near-Earth optical communications by coded orbital angular momentum (OAM) modulation.

    PubMed

    Djordjevic, Ivan B

    2011-07-18

    In order to achieve multi-gigabit transmission (projected for 2020) for use in interplanetary communications, a large number of time slots is needed in pulse-position modulation (PPM), typically used in deep-space applications, which imposes stringent requirements on system design and implementation. As an alternative satisfying the high-bandwidth demands of future interplanetary communications, while keeping the system cost and power consumption reasonably low, in this paper we describe the use of orbital angular momentum (OAM) as an additional degree of freedom. The OAM is associated with the azimuthal phase of the complex electric field. Because OAM eigenstates are orthogonal, they can be used as basis functions for N-dimensional signaling. The OAM modulation and multiplexing can, therefore, be used, in combination with other degrees of freedom, to meet the high-bandwidth requirements of future deep-space and near-Earth optical communications. The main challenge for OAM deep-space communication is the link between a spacecraft probe and the Earth station, because in the presence of atmospheric turbulence the orthogonality between OAM states is no longer preserved. We will show that, in combination with LDPC codes, the OAM-based modulation schemes can operate even in the strong atmospheric turbulence regime. In addition, the spectral efficiency of the proposed scheme is N²/log₂N times better than that of PPM.
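
    A quick evaluation of the quoted spectral-efficiency advantage, N²/log₂N relative to N-slot PPM, for a few alphabet sizes:

        import math

        for N in (4, 16, 64, 256):
            print(f"N = {N:3d}: gain = {N**2 / math.log2(N):9.1f}x over PPM")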

  17. The use of self checks and voting in software error detection - An empirical study

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy G.; Cha, Stephen S.; Knight, John C.; Shimeall, Timothy J.

    1990-01-01

    The results of an empirical study of software error detection using self checks and N-version voting are presented. Working independently, each of 24 programmers first prepared a set of self checks using just the requirements specification of an aerospace application, and then each added self checks to an existing implementation of that specification. The modified programs were executed to measure the error-detection performance of the checks and to compare this with error detection using simple voting among multiple versions. The analysis of the checks revealed that there are great differences in the ability of individual programmers to design effective checks. It was found that some checks that might have been effective failed to detect an error because they were badly placed, and there were numerous instances of checks signaling nonexistent errors. In general, specification-based checks alone were not as effective as specification-based checks combined with code-based checks. Self checks made it possible to identify faults that had not been detected previously by voting 28 versions of the program over a million randomly generated inputs. This appeared to result from the fact that the self checks could examine the internal state of the executing program, whereas voting examines only final results of computations. If internal states had to be identical in N-version voting systems, then there would be no reason to write multiple versions.
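
    The two mechanisms compared can be contrasted in a few lines (this is an illustration, not the study's harness; the self-check invariant is a made-up example): N-version voting sees only final outputs, whereas a self check can examine internal state mid-execution.

        from collections import Counter

        def n_version_vote(outputs):
            """Majority vote over the versions' final results; None if no majority."""
            value, count = Counter(outputs).most_common(1)[0]
            return value if count > len(outputs) // 2 else None

        def self_check(internal_state) -> bool:
            """Specification-based check on internal state (hypothetical invariant)."""
            return internal_state["partial_sum"] >= 0

        print(n_version_vote([42, 42, 41]))     # 42: the deviant version is outvoted
        print(self_check({"partial_sum": -3}))  # False: fault caught mid-computation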

  18. Agricultural Baseline (BL0) scenario of the 2016 Billion-Ton Report

    DOE Data Explorer

    Davis, Maggie R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000181319328); Hellwinkel, Chad [University of Tennessee, APAC] (ORCID:0000000173085058); Eaton, Laurence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000312709626); Langholtz, Matthew H [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000281537154); Turhollow, Anthony [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000228159350); Brandt, Craig [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000214707379); Myers, Aaron [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000320373827)

    2016-07-13

    Scientific reason for data generation: to serve as the reference case for the BT16 volume 1 agricultural scenarios. The agricultural baseline runs from 2015 through 2040; a starting year of 2014 is used. Date the data set was last modified: 02/12/2016. How each parameter was produced (methods), format, and relationship to other data in the data set: the simulation was developed without offering a farmgate price to energy crops or residues (i.e., building on both the USDA 2015 baseline and the agricultural census data (USDA NASS 2014)). Data generated are .txt output files by year, simulation identifier, and county code (1-3109). Instruments used: POLYSYS (version POLYS2015_V10_alt_JAN22B) supplied by the University of Tennessee APAC. The quality assurance and quality control that have been applied: • Check for negative planted area, harvested area, production, yield and cost values. • Check if harvested area exceeds planted area for annuals. • Check FIPS codes.

  19. Did we describe what you meant? Findings and methodological discussion of an empirical validation study for a systematic review of reasons.

    PubMed

    Mertz, Marcel; Sofaer, Neema; Strech, Daniel

    2014-09-27

    The systematic review of reasons is a new way to obtain comprehensive information about specific ethical topics. One such review was carried out for the question of why post-trial access to trial drugs should or need not be provided. The objective of this study was to empirically validate this review using an author check method. The article also reports on methodological challenges faced by our study. We emailed a questionnaire to the 64 corresponding authors of those papers that were assessed in the review of reasons on post-trial access. The questionnaire consisted of all quotations ("reason mentions") that were identified by the review to represent a reason in a given author's publication, together with a set of codings for the quotations. The authors were asked to rate the correctness of the codings. We received 19 responses, from which only 13 were completed questionnaires. In total, 98 quotations and their related codes in the 13 questionnaires were checked by the addressees. For 77 quotations (79%), all codings were deemed correct, for 21 quotations (21%), some codings were deemed to need correction. Most corrections were minor and did not imply a complete misunderstanding of the citation. This first attempt to validate a review of reasons leads to four crucial methodological questions relevant to the future conduct of such validation studies: 1) How can a description of a reason be deemed incorrect? 2) Do the limited findings of this author check study enable us to determine whether the core results of the analysed SRR are valid? 3) Why did the majority of surveyed authors refrain from commenting on our understanding of their reasoning? 4) How can the method for validating reviews of reasons be improved?

  20. Prototype data terminal-multiplexer/demultiplexer

    NASA Technical Reports Server (NTRS)

    Leck, D. E.; Goodwin, J. E.

    1972-01-01

    The design and operation of a quad redundant data terminal and a multiplexer/demultiplexer (MDU) are described. The most unique feature is the design of the quad redundant data terminal. This is one of the few designs where the unit is fail/op, fail/op, fail/safe. Laboratory tests confirm that the unit will operate satisfactorily with the failure of three out of four channels. Although the design utilizes state-of-the-art technology, the waveform error checks, the voting techniques, and the parity bit checks are believed to be used in unique configurations. Correct word selection routines are also novel. The MDU design, while not redundant, utilizes the latest state-of-the-art light couplers and integrated amplifiers. Much of the technology employed was an evolution of prior NASA contracts related to the Addressable Time Division Data System. A good example of the earlier technology development was the development of a low level analog multiplexer, a high level analog multiplexer, and a digital multiplexer. A list of all drawings is included for reference and all schematic, block and timing diagrams are incorporated.

  1. A qualitative study of DRG coding practice in hospitals under the Thai Universal Coverage scheme.

    PubMed

    Pongpirul, Krit; Walker, Damian G; Winch, Peter J; Robinson, Courtland

    2011-04-08

    In the Thai Universal Coverage health insurance scheme, hospital providers are paid for their inpatient care using Diagnosis Related Group-based retrospective payment, for which quality of the diagnosis and procedure codes is crucial. However, there has been limited understanding of which health care professions are involved and how the diagnosis and procedure coding is actually done within hospital settings. The objective of this study is to detail hospital coding structure and process, and to describe the roles of key hospital staff, and other related internal dynamics in Thai hospitals that affect quality of data submitted for inpatient care reimbursement. The research involved qualitative semi-structured interviews with 43 participants at 10 hospitals chosen to represent a range of hospital sizes (small/medium/large), location (urban/rural), and type (public/private). Hospital coding practice has structural and process components. While the structural component includes human resources, hospital committee, and information technology infrastructure, the process component comprises all activities from patient discharge to submission of the diagnosis and procedure codes. At least eight health care professional disciplines are involved in the coding process which comprises seven major steps, each of which involves different hospital staff: 1) Discharge Summarization, 2) Completeness Checking, 3) Diagnosis and Procedure Coding, 4) Code Checking, 5) Relative Weight Challenging, 6) Coding Report, and 7) Internal Audit. The hospital coding practice can be affected by at least five main factors: 1) Internal Dynamics, 2) Management Context, 3) Financial Dependency, 4) Resource and Capacity, and 5) External Factors. Hospital coding practice comprises both structural and process components, involves many health care professional disciplines, and is greatly varied across hospitals as a result of five main factors.

  2. A qualitative study of DRG coding practice in hospitals under the Thai Universal Coverage Scheme

    PubMed Central

    2011-01-01

    Background In the Thai Universal Coverage health insurance scheme, hospital providers are paid for their inpatient care using Diagnosis Related Group-based retrospective payment, for which quality of the diagnosis and procedure codes is crucial. However, there has been limited understanding of which health care professions are involved and how the diagnosis and procedure coding is actually done within hospital settings. The objective of this study is to detail hospital coding structure and process, and to describe the roles of key hospital staff, and other related internal dynamics in Thai hospitals that affect quality of data submitted for inpatient care reimbursement. Methods The research involved qualitative semi-structured interviews with 43 participants at 10 hospitals chosen to represent a range of hospital sizes (small/medium/large), location (urban/rural), and type (public/private). Results Hospital coding practice has structural and process components. While the structural component includes human resources, hospital committee, and information technology infrastructure, the process component comprises all activities from patient discharge to submission of the diagnosis and procedure codes. At least eight health care professional disciplines are involved in the coding process which comprises seven major steps, each of which involves different hospital staff: 1) Discharge Summarization, 2) Completeness Checking, 3) Diagnosis and Procedure Coding, 4) Code Checking, 5) Relative Weight Challenging, 6) Coding Report, and 7) Internal Audit. The hospital coding practice can be affected by at least five main factors: 1) Internal Dynamics, 2) Management Context, 3) Financial Dependency, 4) Resource and Capacity, and 5) External Factors. Conclusions Hospital coding practice comprises both structural and process components, involves many health care professional disciplines, and is greatly varied across hospitals as a result of five main factors. PMID:21477310

  3. Concatenated Coding Using Trellis-Coded Modulation

    NASA Technical Reports Server (NTRS)

    Thompson, Michael W.

    1997-01-01

    In the late seventies and early eighties a technique known as Trellis Coded Modulation (TCM) was developed for providing spectrally efficient error correction coding. Instead of adding redundant information in the form of parity bits, redundancy is added at the modulation stage, thereby increasing bandwidth efficiency. A digital communications system can be designed to use bandwidth-efficient multilevel/phase modulation such as Amplitude Shift Keying (ASK), Phase Shift Keying (PSK), Differential Phase Shift Keying (DPSK) or Quadrature Amplitude Modulation (QAM). Performance gain can be achieved by increasing the number of signals over the corresponding uncoded system to compensate for the redundancy introduced by the code. A considerable amount of research and development has been devoted to developing good TCM codes for severely bandlimited applications. More recently, the use of TCM for satellite and deep space communications applications has received increased attention. This report describes the general approach of using a concatenated coding scheme that features TCM and RS coding. Results have indicated that substantial (6-10 dB) performance gains can be achieved with this approach with comparatively little bandwidth expansion. Since all of the bandwidth expansion is due to the RS code, TCM-based concatenated coding results in roughly 10-50% bandwidth expansion, compared to 70-150% expansion for similar concatenated schemes that use convolutional codes. We stress that combined coding and modulation optimization is important for achieving performance gains while maintaining spectral efficiency.
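
    As a rough illustration of how these expansion figures arise from the code rates alone, consider the following sketch; the specific rates (an RS(255,223) outer code and a rate-1/2 inner convolutional code) are plausible examples chosen for illustration, not parameters taken from the report. Bandwidth expansion relative to an uncoded system is 1/R - 1 for overall code rate R:

      # Bandwidth expansion of a coded system relative to uncoded
      # transmission is 1/R - 1, where R is the overall code rate.
      def expansion(rate):
          return 1.0 / rate - 1.0

      # TCM absorbs its redundancy in the signal constellation, so only
      # the outer RS code expands bandwidth; RS(255,223) gives about 14%.
      rs_rate = 223.0 / 255.0
      print(f"RS outer code only (TCM inner): {expansion(rs_rate):.0%}")

      # A conventional concatenation with an inner rate-1/2 convolutional
      # code expands bandwidth much further, to roughly 129% here.
      conv_rate = 0.5
      print(f"RS + rate-1/2 convolutional: {expansion(rs_rate * conv_rate):.0%}")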

  4. DataPlus™ - a revolutionary applications generator for DOS hand-held computers

    Treesearch

    David Dean; Linda Dean

    2000-01-01

    DataPlus allows the user to easily design data collection templates for DOS-based hand-held computers that mimic clipboard data sheets. The user designs and tests the application on the desktop PC and then transfers it to a DOS field computer. Other features include: error checking, missing data checks, and sensor input from RS-232 devices such as bar code wands,...

  5. 40 CFR 86.1806-04 - On-board diagnostics.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... codes shall be consistent with SAE J2012 “Diagnostic Trouble Code Definitions—Equivalent to ISO/DIS... sent to the scan tool over a J1850 data link shall use the Cyclic Redundancy Check and the three byte..., definitions and abbreviations shall be formatted according to SAE J1930 “Electrical/Electronic Systems...

  6. 40 CFR 86.1806-04 - On-board diagnostics.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... codes shall be consistent with SAE J2012 “Diagnostic Trouble Code Definitions—Equivalent to ISO/DIS... sent to the scan tool over a J1850 data link shall use the Cyclic Redundancy Check and the three byte..., definitions and abbreviations shall be formatted according to SAE J1930 “Electrical/Electronic Systems...

  7. 40 CFR 86.1806-04 - On-board diagnostics.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... codes shall be consistent with SAE J2012 “Diagnostic Trouble Code Definitions—Equivalent to ISO/DIS... sent to the scan tool over a J1850 data link shall use the Cyclic Redundancy Check and the three byte..., definitions and abbreviations shall be formatted according to SAE J1930 “Electrical/Electronic Systems...

  8. Effects of parity and periconceptional metabolic state of Holstein-Friesian dams on the glucose metabolism and conformation in their newborn calves.

    PubMed

    Bossaert, P; Fransen, E; Langbeen, A; Stalpaert, M; Vandenbroeck, I; Bols, P E; Leroy, J L

    2014-06-01

    The metabolic state of pregnant mammals influences the offspring's development and risk of metabolic disease in postnatal life. The metabolic state in a lactating dairy cow differs immensely from that in a non-lactating heifer around the time of conception, but consequences for their calves are poorly understood. The hypothesis of this study was that differences in metabolic state between non-lactating heifers and lactating cows during early pregnancy would affect insulin-dependent glucose metabolism and development in their neonatal calves. Using a mixed linear model, concentrations of glucose, IGF-I and non-esterified fatty acids (NEFAs) were compared between 13 non-lactating heifers and 16 high-yielding dairy cows in repeated blood samples obtained during the 1st month after successful insemination. Calves born from these dams were weighed and measured at birth, and subjected to intravenous glucose and insulin challenges between 7 and 14 days of age. Eight estimators of insulin-dependent glucose metabolism were determined: glucose and insulin peak concentration, area under the curve and elimination rate after glucose challenge, glucose reduction rate after insulin challenge, and quantitative insulin sensitivity check index. Effects of dam parity and calf sex on the metabolic and developmental traits were analysed in a two-way ANOVA. Compared with heifers, cows displayed lower glucose and IGF-I and higher NEFA concentrations during the 1st month after conception. However, these differences did not affect developmental traits and glucose homeostasis in their calves: birth weight, withers height, heart girth, and responses to glucose and insulin challenges in the calves were unaffected by their dam's parity. In conclusion, differences in the metabolic state of heifers and cows during early gestation under field conditions could not be related to their offspring's development and glucose homeostasis.

  9. Three questions you need to ask about your brand.

    PubMed

    Keller, Kevin Lane; Sternthal, Brian; Tybout, Alice

    2002-09-01

    Traditionally, the people responsible for positioning brands have concentrated on the differences that set each brand apart from the competition. But emphasizing differences isn't enough to sustain a brand against competitors. Managers should also consider the frame of reference within which the brand works and the features the brand shares with other products. Asking three questions about your brand can help: HAVE WE ESTABLISHED A FRAME?: A frame of reference--for Coke, it might be as narrow as other colas or as broad as all thirst-quenching drinks--signals to consumers the goal they can expect to achieve by using a brand. Brand managers need to pay close attention to this issue, in some cases expanding their focus in order to preempt the competition. ARE WE LEVERAGING OUR POINTS OF PARITY?: Certain points of parity must be met if consumers are to perceive your product as a legitimate player within its frame of reference. For instance, consumers might not consider a bank truly a bank unless it offers checking and savings plans. ARE THE POINTS OF DIFFERENCE COMPELLING?: A distinguishing characteristic that consumers find both relevant and believable can become a strong, favorable, unique brand association, capable of distinguishing the brand from others in the same frame of reference. Frames of reference, points of parity, and points of difference are moving targets. Maytag isn't the only dependable brand of appliance, Tide isn't the only detergent with whitening power, and BMWs aren't the only cars on the road with superior handling. The key questions you need to ask about your brand may not change, but their context certainly will. The savviest brand positioners are also the most vigilant.

  10. Greater physician involvement improves coding outcomes in endobronchial ultrasound-guided transbronchial needle aspiration procedures.

    PubMed

    Pillai, Anilkumar; Medford, Andrew R L

    2013-01-01

    Correct coding is essential for accurate reimbursement for clinical activity. Published data confirm that significant aberrations in coding occur, leading to considerable financial inaccuracies especially in interventional procedures such as endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA). Previous data reported a 15% coding error for EBUS-TBNA in a U.K. service. We hypothesised that greater physician involvement with coders would reduce EBUS-TBNA coding errors and financial disparity. The study was done as a prospective cohort study in the tertiary EBUS-TBNA service in Bristol. 165 consecutive patients between October 2009 and March 2012 underwent EBUS-TBNA for evaluation of unexplained mediastinal adenopathy on computed tomography. The chief coder was prospectively electronically informed of all procedures and cross-checked on a prospective database and by Trust Informatics. Cost and coding analysis was performed using the 2010-2011 tariffs. All 165 procedures (100%) were coded correctly as verified by Trust Informatics. This compares favourably with the 14.4% coding inaccuracy rate for EBUS-TBNA in a previous U.K. prospective cohort study [odds ratio 201.1 (1.1-357.5), p = 0.006]. Projected income loss was GBP 40,000 per year in the previous study, compared to a GBP 492,195 income here with no coding-attributable loss in revenue. Greater physician engagement with coders prevents coding errors and financial losses which can be significant especially in interventional specialties. The intervention can be as cheap, quick and simple as a prospective email to the coding team with cross-checks by Trust Informatics and against a procedural database. We suggest that all specialties should engage more with their coders using such a simple intervention to prevent revenue losses. Copyright © 2013 S. Karger AG, Basel.

  11. Box codes of lengths 48 and 72

    NASA Technical Reports Server (NTRS)

    Solomon, G.; Jin, Y.

    1993-01-01

    A self-dual code of length 48, dimension 24, with Hamming distance essentially equal to 12 is constructed here. There are only six code words of weight eight. All the other code words have weights that are multiples of four and have a minimum weight equal to 12. This code may be encoded systematically and arises from a strict binary representation of the (8,4;5) Reed-Solomon (RS) code over GF(64). The code may be considered as six interrelated (8,7;2) codes. The Mattson-Solomon representation of the cyclic decomposition of these codes and their parity sums are used to detect an odd number of errors in any of the six codes. These may then be used in a correction algorithm for hard or soft decision decoding. A (72,36;15) box code was constructed from a (63,35;8) cyclic code. The theoretical justification is presented herein. A second (72,36;15) code is constructed from an inner (63,27;16) Bose-Chaudhuri-Hocquenghem (BCH) code and expanded to length 72 using box code algorithms for extension. This code was simulated and verified to have a minimum distance of 15 with even weight words congruent to zero modulo four. The decoding for hard and soft decision is still more complex than for the first code constructed above. Finally, an (8,4;5) RS code over GF(512) in the binary representation of the (72,36;15) box code gives rise to a (72,36;16*) code with nine words of weight eight, while all the rest have weights greater than or equal to 16.

  12. Certifying Domain-Specific Policies

    NASA Technical Reports Server (NTRS)

    Lowry, Michael; Pressburger, Thomas; Rosu, Grigore; Koga, Dennis (Technical Monitor)

    2001-01-01

    Proof-checking code for compliance to safety policies potentially enables a product-oriented approach to certain aspects of software certification. To date, previous research has focused on generic, low-level programming-language properties such as memory type safety. In this paper we consider proof-checking higher-level domain-specific properties for compliance to safety policies. The paper first describes a framework, related to abstract interpretation, in which compliance to a class of certification policies can be efficiently calculated. Membership equational logic is shown to provide a rich logic for carrying out such calculations, including partiality, for certification. The architecture for a domain-specific certifier is described, followed by an implemented case study. The case study considers consistency of abstract variable attributes in code that performs geometric calculations in aerospace systems.

  13. Multispectral Image Compression Based on DSC Combined with CCSDS-IDC

    PubMed Central

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually works on a satellite where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged into the Slepian-Wolf (SW) DSC strategy based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches. PMID:25110741
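
    The first stage of the pipeline, a per-band wavelet decomposition, can be sketched with a one-level 2D Haar transform in Python; this is a simplified stand-in for the DWT specified by CCSDS-IDC, and the cube dimensions below are illustrative:

      import numpy as np

      def haar_dwt2(band):
          """One level of a 2D Haar-like transform: returns the LL, LH,
          HL, HH subbands (a simplified stand-in for the CCSDS DWT)."""
          a = (band[0::2, :] + band[1::2, :]) / 2.0   # vertical average
          d = (band[0::2, :] - band[1::2, :]) / 2.0   # vertical detail
          ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
          lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
          hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
          hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
          return ll, lh, hl, hh

      # Apply band by band to a hypothetical multispectral cube (bands, H, W);
      # the coefficients would then go to the bit plane encoder.
      cube = np.random.rand(4, 64, 64)
      coeffs = [haar_dwt2(band) for band in cube]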

  14. Continuous operation of four-state continuous-variable quantum key distribution system

    NASA Astrophysics Data System (ADS)

    Matsubara, Takuto; Ono, Motoharu; Oguri, Yusuke; Ichikawa, Tsubasa; Hirano, Takuya; Kasai, Kenta; Matsumoto, Ryutaroh; Tsurumaru, Toyohiro

    2016-10-01

    We report on the development of a continuous-variable quantum key distribution (CV-QKD) system that is based on discrete quadrature amplitude modulation (QAM) and homodyne detection of coherent states of light. We use a pulsed light source whose wavelength is 1550 nm and repetition rate is 10 MHz. The CV-QKD system can continuously generate secret keys which are secure against an entangling cloner attack. The key generation rate is 50 kbps when the quantum channel is a 10 km optical fiber. The CV-QKD system we have developed utilizes the four-state and post-selection protocol [T. Hirano, et al., Phys. Rev. A 68, 042331 (2003)]; Alice randomly sends one of the four states {|±α⟩, |±iα⟩}, and Bob randomly performs an x- or p-quadrature measurement by homodyne detection. A commercially available balanced receiver is used to realize shot-noise-limited pulsed homodyne detection. GPU cards are used to accelerate the software-based post-processing. We use a non-binary LDPC code for error correction (reverse reconciliation) and Toeplitz matrix multiplication for privacy amplification.
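
    The four-state modulation and post-selection steps lend themselves to a toy numerical sketch; the amplitude, noise variance, and threshold below are illustrative, and the shot-noise conventions, reconciliation, and privacy amplification of the real system are omitted:

      import numpy as np

      rng = np.random.default_rng(1)
      alpha, n_pulses, threshold = 1.0, 100_000, 1.2

      # Alice: one of the four coherent states {±α, ±iα}, amplitude α·i^k.
      k = rng.integers(0, 4, n_pulses)
      amp = alpha * 1j ** k

      # Bob: random x- or p-quadrature homodyne measurement, with the
      # vacuum noise modeled as a Gaussian (units are arbitrary here).
      basis = rng.integers(0, 2, n_pulses)          # 0 -> x, 1 -> p
      quad = np.where(basis == 0, amp.real, amp.imag)
      outcome = quad + rng.normal(0.0, 0.5, n_pulses)

      # Post-selection: keep only large-magnitude outcomes, whose sign
      # identifies Alice's bit with low error probability.
      kept = np.abs(outcome) > threshold
      print(f"fraction of pulses kept: {kept.mean():.2%}")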

  15. Multispectral image compression based on DSC combined with CCSDS-IDC.

    PubMed

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    Remote sensing multispectral image compression encoder requires low complexity, high robust, and high performance because it usually works on the satellite where the resources, such as power, memory, and processing capacity, are limited. For multispectral images, the compression algorithms based on 3D transform (like 3D DWT, 3D DCT) are too complex to be implemented in space mission. In this paper, we proposed a compression algorithm based on distributed source coding (DSC) combined with image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robust, and high performance. First, each band is sparsely represented by DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by bit plane encoder (BPE). Finally, the BPE is merged to the DSC strategy of Slepian-Wolf (SW) based on QC-LDPC by deep coupling way to remove the residual redundancy between the adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with the CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than the traditional compression approaches.

  16. [Development of operation patient security detection system].

    PubMed

    Geng, Shu-Qin; Tao, Ren-Hai; Zhao, Chao; Wei, Qun

    2008-11-01

    This paper describes a patient security detection system developed with two-dimensional bar codes, wireless communication, and removable storage techniques. Based on the system, nurses and related personnel check the codes of patients awaiting operations in order to prevent errors. Tests show the system is effective, and its objectivity and reliability are an improvement over the traditional methods currently used in domestic hospitals.

  17. Nurse-Facilitated Health Checks for Persons With Severe Mental Illness: A Cluster-Randomized Controlled Trial.

    PubMed

    White, Jacquie; Lucas, Joanne; Swift, Louise; Barton, Garry R; Johnson, Harriet; Irvine, Lisa; Abotsie, Gabriel; Jones, Martin; Gray, Richard J

    2018-05-01

    This study tested the effectiveness of a nurse-delivered health check with the Health Improvement Profile (HIP), which takes approximately 1.5 hours to complete and code, for persons with severe mental illness. A single-blind, cluster-randomized controlled trial was conducted in England to test whether health checks improved the general medical well-being of persons with severe mental illness at 12-month follow-up. Sixty nurses were randomly assigned to the HIP group or the treatment-as-usual group. From their case lists, 173 patients agreed to participate. HIP group nurses completed health checks for 38 of their 90 patients (42%) at baseline and 22 (24%) at follow-up. No significant between-group differences were noted in patients' general medical well-being at follow-up. Nurses who had volunteered for a clinical trial administered health checks only to a minority of participating patients, suggesting that it may not be feasible to undertake such lengthy structured health checks in routine practice.

  18. Risk of labor dystocia increases with maternal age irrespective of parity: a population-based register study.

    PubMed

    Waldenström, Ulla; Ekéus, Cecilia

    2017-09-01

    Advanced maternal age is associated with labor dystocia (LD) in nulliparous women. This study investigates the age-related risk of LD in first, second and third births. All live singleton cephalic births at term (≥ 37 gestational weeks) recorded in the Swedish Medical Birth Register from 1999 to 2011, except elective cesarean sections and fourth births and more, in total 998 675 pregnancies, were included in the study. LD was defined by International Classification of Diseases, version 10 codes (O620, O621, O622, O629, O630, O631 and O639). In each parity group, risks of LD at ages 25-29 years, 30-34 years, 35-39 years and ≥ 40 years compared with age < 25 years were investigated by logistic regression analyses. Analyses were adjusted for year of delivery, education, country/region of birth, smoking in early pregnancy, maternal height, body mass index, week of gestation, fetal presentation and infant birthweight. Rates of LD were 22.5%, 6.1% and 4% in first, second and third births, respectively. Adjusted odds ratios (OR) for LD increased progressively from the youngest to the oldest age group, irrespective of parity. At age 35-39 years the adjusted OR (95% CI) was approximately doubled compared with age 25 and younger: 2.13 (2.06-2.20) in first births; 2.05 (1.91-2.19) in second births; and 1.81 (1.49-2.21) in third births. Maternal age is an independent risk factor for LD in first, second and third births. Although age-related risks by parity are relatively similar, more nulliparous than parous women will be exposed to LD due to the higher rate. © 2017 Nordic Federation of Societies of Obstetrics and Gynecology.

  19. The four-loop six-gluon NMHV ratio function

    DOE PAGES

    Dixon, Lance J.; von Hippel, Matt; McLeod, Andrew J.

    2016-01-11

    We use the hexagon function bootstrap to compute the ratio function which characterizes the next-to-maximally-helicity-violating (NMHV) six-point amplitude in planar N=4 super-Yang-Mills theory at four loops. A powerful constraint comes from dual superconformal invariance, in the form of a Q̄ differential equation, which heavily constrains the first derivatives of the transcendental functions entering the ratio function. At four loops, it leaves only a 34-parameter space of functions. Constraints from the collinear limits, and from the multi-Regge limit at the leading-logarithmic (LL) and next-to-leading-logarithmic (NLL) order, suffice to fix these parameters and obtain a unique result. We test the result against multi-Regge predictions at NNLL and N^3LL, and against predictions from the operator product expansion involving one and two flux-tube excitations; all cross-checks are satisfied. We study the analytical and numerical behavior of the parity-even and parity-odd parts on various lines and surfaces traversing the three-dimensional space of cross ratios. As part of this program, we characterize all irreducible hexagon functions through weight eight in terms of their coproduct. As a result, we also provide representations of the ratio function in particular kinematic regions in terms of multiple polylogarithms.

  20. The four-loop six-gluon NMHV ratio function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dixon, Lance J.; von Hippel, Matt; McLeod, Andrew J.

    2016-01-11

    We use the hexagon function bootstrap to compute the ratio function which characterizes the next-to-maximally-helicity-violating (NMHV) six-point amplitude in planar N = 4 super-Yang-Mills theory at four loops. A powerful constraint comes from dual superconformal invariance, in the form of a Q̄ differential equation, which heavily constrains the first derivatives of the transcendental functions entering the ratio function. At four loops, it leaves only a 34-parameter space of functions. Constraints from the collinear limits, and from the multi-Regge limit at the leading-logarithmic (LL) and next-to-leading-logarithmic (NLL) order, suffice to fix these parameters and obtain a unique result. We test the result against multi-Regge predictions at NNLL and N^3LL, and against predictions from the operator product expansion involving one and two flux-tube excitations; all cross-checks are satisfied. We also study the analytical and numerical behavior of the parity-even and parity-odd parts on various lines and surfaces traversing the three-dimensional space of cross ratios. As part of this program, we characterize all irreducible hexagon functions through weight eight in terms of their coproduct. Furthermore, we provide representations of the ratio function in particular kinematic regions in terms of multiple polylogarithms.

  1. Free-space quantum key distribution at night

    NASA Astrophysics Data System (ADS)

    Buttler, William T.; Hughes, Richard J.; Kwiat, Paul G.; Lamoreaux, Steve K.; Luther, Gabriel G.; Morgan, George L.; Nordholt, Jane E.; Peterson, C. Glen; Simmons, Charles M.

    1998-07-01

    An experimental free-space quantum key distribution (QKD) system has been tested over an outdoor optical path of approximately 1 km under nighttime conditions at Los Alamos National Laboratory. This system employs the Bennett 92 protocol; here we give a brief overview of this protocol, and describe our experimental implementation of it. An analysis of the system efficiency is presented as well as a description of our error detection protocol, which employs a 2D parity check scheme. Finally, the susceptibility of this system to eavesdropping by various techniques is determined, and the effectiveness of privacy amplification procedures is discussed. Our conclusions are that free-space QKD is both effective and secure; possible applications include the rekeying of satellites in low earth orbit.
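
    The 2D parity-check idea behind such an error detection protocol can be made concrete with a small sketch (a minimal illustration, not the authors' exact scheme): arrange the sifted key bits in a matrix and exchange row and column parities; a single flipped bit then shows up as exactly one failing row and one failing column, which together pinpoint its position.

      import numpy as np

      def locate_single_error(block, row_parity, col_parity):
          """Recompute row/column parities of a received bit block and
          return the coordinates of a single bit flip, if there is one."""
          bad_rows = np.flatnonzero(block.sum(axis=1) % 2 != row_parity)
          bad_cols = np.flatnonzero(block.sum(axis=0) % 2 != col_parity)
          if len(bad_rows) == 1 and len(bad_cols) == 1:
              return bad_rows[0], bad_cols[0]
          return None          # no error, or more than one bit flipped

      bits = np.random.default_rng(0).integers(0, 2, (8, 8))
      rp, cp = bits.sum(axis=1) % 2, bits.sum(axis=0) % 2
      bits[3, 5] ^= 1                            # inject a single bit flip
      print(locate_single_error(bits, rp, cp))   # -> (3, 5)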

  2. A novel comparator featured with input data characteristic

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaobo; Ye, Desheng; Xu, Xiangmin; Zheng, Shuai

    2016-03-01

    Two types of low-power asynchronous comparators featured with input data statistical characteristics are proposed in this article. The asynchronous ripple comparator stops comparing at the first unequal bit but delivers the result to the least significant bit. The pre-stop asynchronous comparator can completely stop comparing and obtain results immediately. The proposed and contrastive comparators were implemented in the SMIC 0.18 μm process with different bit widths. Simulation shows that the proposed pre-stop asynchronous comparator features the lowest power consumption, shortest average propagation delay and highest area efficiency among the comparators. Data paths of a low-density parity-check decoder using the proposed pre-stop asynchronous comparators are the most power-efficient compared with data paths using synthesised, clock-gating and bitwise competition logic comparators.
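
    A software analogue of the pre-stop comparator illustrates the statistical property the design exploits: scanning from the most significant bit and stopping at the first unequal position means that, for random inputs, only a few bits are typically examined (a behavioural sketch only; the paper's contribution is the asynchronous circuit, not this algorithm):

      def prestop_compare(a_bits, b_bits):
          """Compare two equal-width bit lists MSB-first, stopping at the
          first unequal bit; returns (sign, bits_examined)."""
          for i, (a, b) in enumerate(zip(a_bits, b_bits)):
              if a != b:
                  return (1 if a > b else -1), i + 1
          return 0, len(a_bits)          # equal inputs force a full scan

      # Random inputs differ within the first two MSBs ~75% of the time,
      # which is why stopping early saves switching activity on average.
      print(prestop_compare([1, 0, 1, 1], [1, 1, 0, 0]))   # -> (-1, 2)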

  3. Long-range multi-carrier acoustic communications in shallow water based on iterative sparse channel estimation.

    PubMed

    Kang, Taehyuk; Song, H C; Hodgkiss, W S; Soo Kim, Jea

    2010-12-01

    Long-range orthogonal frequency division multiplexing (OFDM) acoustic communications is demonstrated using data from the Kauai Acomms MURI 2008 (KAM08) experiment carried out in about 106 m deep shallow water west of Kauai, HI, in June 2008. The source bandwidth was 8 kHz (12-20 kHz), and the data were received by a 16-element vertical array at a distance of 8 km. Iterative sparse channel estimation is applied in conjunction with low-density parity-check decoding. In addition, the impact of diversity combining in a highly inhomogeneous underwater environment is investigated. Error-free transmission using 16-quadrature amplitude modulation (16-QAM) is achieved at a data rate of 10 kb/s.
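
    The greedy core of an iterative sparse channel estimator can be sketched as a matching-pursuit loop; this is a simplified stand-in for the estimator used in the paper, and the dictionary A of delayed pilot waveforms is assumed to be given:

      import numpy as np

      def matching_pursuit(y, A, n_taps):
          """Greedy sparse estimate: repeatedly pick the dictionary column
          most correlated with the residual and subtract its contribution."""
          resid = y.astype(complex).copy()
          h = np.zeros(A.shape[1], dtype=complex)
          for _ in range(n_taps):
              k = np.argmax(np.abs(A.conj().T @ resid))
              g = (A[:, k].conj() @ resid) / (np.linalg.norm(A[:, k]) ** 2)
              h[k] += g
              resid -= g * A[:, k]
          return h                       # sparse channel estimate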

  4. Nonlinear, nonbinary cyclic group codes

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1992-01-01

    New cyclic group codes of length 2^m - 1 over (m - j)-bit symbols are introduced. These codes can be systematically encoded and decoded algebraically. The code rates are very close to those of Reed-Solomon (RS) codes and are much better than those of Bose-Chaudhuri-Hocquenghem (BCH) codes (a former alternative). The binary (m - j)-tuples are identified with a subgroup of the binary m-tuples which represents the field GF(2^m). Encoding is systematic and involves a two-stage procedure consisting of the usual linear feedback register (using the division or check polynomial) and a small table lookup. For low rates, a second shift-register encoding operation may be invoked. Decoding uses the RS error-correcting procedures for the m-tuple codes for m = 4, 5, and 6.
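
    The shift-register stage of such a systematic encoder is easiest to see in the simpler binary case (a simplification: the codes above are nonbinary and add a table-lookup stage). The parity symbols are the remainder of msg(x)·x^r divided by the generator polynomial g(x):

      def cyclic_systematic_encode(msg_bits, gen, r):
          """Systematic encoding of a binary cyclic code: append the
          remainder of msg(x)·x^r divided by g(x), computed bit-serially
          exactly as a linear feedback shift register would."""
          reg = 0
          for bit in msg_bits + [0] * r:     # msg(x)·x^r fed MSB-first
              reg = (reg << 1) | bit
              if reg >> r:                   # degree reached r: reduce mod g
                  reg ^= gen
          return msg_bits + [(reg >> i) & 1 for i in range(r - 1, -1, -1)]

      # The (7,4) Hamming code is cyclic with g(x) = x^3 + x + 1 (0b1011).
      print(cyclic_systematic_encode([1, 0, 0, 0], 0b1011, 3))
      # -> [1, 0, 0, 0, 1, 0, 1], i.e. x^6 + x^2 + 1, divisible by g(x)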

  5. Assume-Guarantee Verification of Source Code with Design-Level Assumptions

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Pasareanu, Corina S.; Cobleigh, Jamieson M.

    2004-01-01

    Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the 'state explosion' problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have developed techniques that automatically decompose the verification task by generating component assumptions for the properties to hold. The design-level artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning at the source-code level. In particular, we propose to use design-level assumptions to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design-level models were used to identify and correct a safety property violation, and design-level assumptions allowed us to check successfully that the property was preserved by the implementation.

  6. RELAP5-3D Resolution of Known Restart/Backup Issues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mesina, George L.; Anderson, Nolan A.

    2014-12-01

    The state-of-the-art nuclear reactor system safety analysis computer program developed at the Idaho National Laboratory (INL), RELAP5-3D, continues to adapt to changes in computer hardware and software and to develop to meet the ever-expanding needs of the nuclear industry. To continue at the forefront, code testing must evolve with both code and industry developments, and it must work correctly. To best ensure this, the processes of Software Verification and Validation (V&V) are applied. Verification compares coding against its documented algorithms and equations and compares its calculations against analytical solutions and the method of manufactured solutions. A form of this, sequential verification, checks code specifications against coding only when originally written, then applies regression testing, which compares code calculations between consecutive updates or versions on a set of test cases to check that the performance does not change. A sequential verification testing system was specially constructed for RELAP5-3D to both detect errors with extreme accuracy and cover all nuclear-plant-relevant code features. Detection is provided through a "verification file" that records double precision sums of key variables. Coverage is provided by a test suite of input decks that exercise code features and capabilities necessary to model a nuclear power plant. A matrix of test features and short-running cases that exercise them is presented. This testing system is used to test base cases (called null testing) as well as restart and backup cases. It can test RELAP5-3D performance in both standalone and coupled (through PVM to other codes) runs. Application of verification testing revealed numerous restart and backup issues in both standalone and coupled modes. This document reports the resolution of these issues.

  7. Software Security Knowledge: CWE. Knowing What Could Make Software Vulnerable to Attack

    DTIC Science & Technology

    2011-05-01

    ... Buffer • CWE-642: External Control of Critical State Data • CWE-73: External Control of File Name or Path • CWE-426: Untrusted Search Path • CWE-94: Failure to Control Generation of Code (aka 'Code Injection') • CWE-494: Download of Code Without Integrity Check • CWE-404: Improper Resource

  8. Influence of primary fragment excitation energy and spin distributions on fission observables

    NASA Astrophysics Data System (ADS)

    Litaize, Olivier; Thulliez, Loïc; Serot, Olivier; Chebboubi, Abdelaziz; Tamagno, Pierre

    2018-03-01

    Fission observables in the case of 252Cf(sf) are investigated by exploring several models involved in the excitation energy sharing and spin-parity assignment between primary fission fragments. In a first step, the parameters used in the FIFRELIN Monte Carlo code "reference route" are presented: two parameters for the mass-dependent temperature ratio law and two constant spin cut-off parameters for the light and heavy fragment groups, respectively. These parameters determine the initial fragment entry zone in excitation energy and spin-parity (E*, Jπ). They are chosen to reproduce the light and heavy average prompt neutron multiplicities. When these target observables are achieved, all other fission observables can be predicted. We show here the influence of the input parameters on the saw-tooth curve, and we discuss the influence of a mass- and energy-dependent spin cut-off model on gamma-ray-related fission observables. The part of the model involving level densities, neutron transmission coefficients, or photon strength functions remains unchanged.

  9. Efficient eigenvalue determination for arbitrary Pauli products based on generalized spin-spin interactions

    NASA Astrophysics Data System (ADS)

    Leibfried, D.; Wineland, D. J.

    2018-03-01

    Effective spin-spin interactions between N qubits enable the determination of the eigenvalue of an arbitrary Pauli product of dimension N with a constant, small number of multi-qubit gates that is independent of N, encoding the eigenvalue in the measurement basis states of an extra ancilla qubit. Such interactions are available whenever qubits can be coupled to a shared harmonic oscillator, a situation that can be realized in many physical qubit implementations. For example, suitable interactions have already been realized for up to 14 qubits in ion traps. It should be possible to implement stabilizer codes for quantum error correction with a constant number of multi-qubit gates, in contrast to typical constructions with a number of two-qubit gates that increases as a function of N. The special case of finding the parity of N qubits requires only a small number of operations that is independent of N. This compares favorably to algorithms for computing the parity on conventional machines, which implies a genuine quantum advantage.
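
    For Z-type Pauli products the eigenvalue on a computational basis state is simply a bit parity, which a brute-force numerical check makes concrete (matrix construction for a toy three-qubit case; this is not the constant-depth gate construction proposed in the paper):

      import numpy as np

      I2 = np.eye(2)
      Z = np.diag([1.0, -1.0])

      def pauli_product(ops):
          """Tensor product of single-qubit operators, e.g. Z⊗Z⊗I."""
          out = np.array([[1.0]])
          for op in ops:
              out = np.kron(out, op)
          return out

      # A computational basis state is an eigenstate of any Z-type product;
      # the eigenvalue is the parity of the bits under the Z factors.
      P = pauli_product([Z, Z, I2])
      state = np.zeros(8)
      state[0b110] = 1.0                  # |110>: both Z's act on a '1'
      print(state @ P @ state)            # -> 1.0, i.e. even parity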

  10. 77 FR 33635 - Amendment to the Bank Secrecy Act Regulations-Requirement That Clerks of Court Report Certain...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-07

    ... BSA and associated regulations.\\3\\ FinCEN is authorized to impose anti-money laundering (``AML... Code); (iii) Money laundering (as defined in section 1956 or 1957 of title 18 of the United States Code... money in the country in which issued; and (ii) A cashier's check (by whatever name called, including...

  11. Applying Jlint to Space Exploration Software

    NASA Technical Reports Server (NTRS)

    Artho, Cyrille; Havelund, Klaus

    2004-01-01

    Java is a very successful programming language which is also becoming widespread in embedded systems, where software correctness is critical. Jlint is a simple but highly efficient static analyzer that checks a Java program for several common errors, such as null pointer exceptions and overflow errors. It also includes checks for multi-threading problems, such as deadlocks and data races. The case study described here shows the effectiveness of Jlint in finding such errors; analyzing the false positives among the multi-threading warnings gives an insight into design patterns commonly used in multi-threaded code. The results show that a few analysis techniques are sufficient to avoid almost all false positives. These techniques include investigating all possible callers and a few code idioms. Verifying the correct application of these patterns is still crucial, because their correct usage is not trivial.

  12. Combined methods of tolerance increasing for embedded SRAM

    NASA Astrophysics Data System (ADS)

    Shchigorev, L. A.; Shagurin, I. I.

    2016-10-01

    The possibilities of combining different methods of increasing fault tolerance for embedded SRAM, such as error detection and correction codes, parity bits, and redundant elements, are considered. Area penalties due to using combinations of these methods are investigated. Estimates are made for different configurations of a 4K x 128 RAM memory block for a 28 nm manufacturing process. An evaluation of the effectiveness of the proposed combinations is also reported. The results of these investigations can be useful for designing fault-tolerant systems-on-chip.
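
    As a minimal illustration of the error detection and correction component, consider a Hamming(7,4) single-error-correcting code: the syndrome computed on read-out directly names the failed bit position (an algorithmic sketch only; the paper is concerned with the hardware area trade-offs of such schemes):

      # Hamming(7,4): parity bits sit at positions 1, 2 and 4, and each
      # covers the positions whose binary index contains that bit.
      def hamming74_encode(d):                       # d = 4 data bits
          c = [0, 0, d[0], 0, d[1], d[2], d[3]]      # positions 1..7
          for p in (1, 2, 4):
              c[p - 1] = sum(c[i - 1] for i in range(1, 8) if i & p) % 2
          return c

      def hamming74_correct(c):
          s = sum(p for p in (1, 2, 4)
                  if sum(c[i - 1] for i in range(1, 8) if i & p) % 2)
          if s:
              c[s - 1] ^= 1           # flip the bit the syndrome names
          return c, s

      word = hamming74_encode([1, 0, 1, 1])
      word[5] ^= 1                    # simulate a single-bit upset
      print(hamming74_correct(word))  # corrected word, syndrome = 6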

  13. Low-dose cardio-respiratory phase-correlated cone-beam micro-CT of small animals.

    PubMed

    Sawall, Stefan; Bergner, Frank; Lapp, Robert; Mronz, Markus; Karolczak, Marek; Hess, Andreas; Kachelriess, Marc

    2011-03-01

    Micro-CT imaging of animal hearts typically requires a double gating procedure because scans during a breath-hold are not possible due to the long scan times and the high respiratory rates. Simultaneous respiratory and cardiac gating can be done either prospectively or retrospectively. True five-dimensional information can be retrieved either with retrospective gating or with prospective gating if several prospective gates are acquired. In any case, the amount of information available to reconstruct one volume for a given respiratory and cardiac phase is orders of magnitude lower than the total amount of information acquired. For example, the reconstruction of a volume from a 10% wide respiratory and a 20% wide cardiac window uses only 2% of the data acquired. Achieving a similar image quality as a nongated scan would therefore require increasing the amount of data, and thereby the dose to the animal, by up to a factor of 50. To achieve the goal of low-dose phase-correlated (LDPC) imaging, the authors propose to use a highly efficient combination of slightly modified existing algorithms. In particular, the authors developed a variant of the McKinnon-Bates image reconstruction algorithm and combined it with bilateral filtering in up to five dimensions to significantly reduce image noise without impairing spatial or temporal resolution. The preliminary results indicate that the proposed LDPC reconstruction method typically reduces image noise by a factor of up to 6 (e.g., from 170 to 30 HU), while the dose values lie in a range from 60 to 500 mGy. Compared to other publications that apply 250-1800 mGy for the same task [C. T. Badea et al., "4D micro-CT of the mouse heart," Mol. Imaging 4(2), 110-116 (2005); M. Drangova et al., "Fast retrospectively gated quantitative four-dimensional (4D) cardiac micro computed tomography imaging of free-breathing mice," Invest. Radiol. 42(2), 85-94 (2007); S. H. Bartling et al., "Retrospective motion gating in small animal CT of mice and rats," Invest. Radiol. 42(10), 704-714 (2007)], the authors' LDPC approach therefore achieves a more than tenfold improvement in dose usage. The LDPC reconstruction method improves phase-correlated imaging from highly undersampled data. Artifacts caused by sparse angular sampling are removed and the image noise is decreased, while spatial and temporal resolution are preserved. Thus, the administered dose per animal can be decreased, allowing for long-term studies with reduced metabolic interference.

  14. On the performance of joint iterative detection and decoding in coherent optical channels with laser frequency fluctuations

    NASA Astrophysics Data System (ADS)

    Castrillón, Mario A.; Morero, Damián A.; Agazzi, Oscar E.; Hueda, Mario R.

    2015-08-01

    The joint iterative detection and decoding (JIDD) technique has been proposed by Barbieri et al. (2007) with the objective of compensating the time-varying phase noise and constant frequency offset experienced in satellite communication systems. The application of JIDD to optical coherent receivers in the presence of laser frequency fluctuations has not been reported in prior literature. Laser frequency fluctuations are caused by mechanical vibrations, power supply noise, and other mechanisms. They significantly degrade the performance of the carrier phase estimator in high-speed intradyne coherent optical receivers. This work investigates the performance of the JIDD algorithm in multi-gigabit optical coherent receivers. We present simulation results of bit error rate (BER) for non-differential polarization division multiplexing (PDM)-16QAM modulation in a 200 Gb/s coherent optical system that includes an LDPC code with 20% overhead and net coding gain of 11.3 dB at BER = 10-15. Our study shows that JIDD with a pilot rate ⩽ 5 % compensates for both laser phase noise and laser frequency fluctuation. Furthermore, since JIDD is used with non-differential modulation formats, we find that gains in excess of 1 dB can be achieved over existing solutions based on an explicit carrier phase estimator with differential modulation. The impact of the fiber nonlinearities in dense wavelength division multiplexing (DWDM) systems is also investigated. Our results demonstrate that JIDD is an excellent candidate for application in next generation high-speed optical coherent receivers.

  15. TU-AB-BRC-12: Optimized Parallel Monte Carlo Dose Calculations for Secondary MU Checks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    French, S; Nazareth, D; Bellor, M

    Purpose: Secondary MU checks are an important tool used during a physics review of a treatment plan. Commercial software packages offer varying degrees of theoretical dose calculation accuracy, depending on the modality involved. Dose calculations of VMAT plans are especially prone to error due to the large approximations involved. Monte Carlo (MC) methods are not commonly used due to their long run times. We investigated two methods to increase the computational efficiency of MC dose simulations with the BEAMnrc code. Distributed computing resources, along with optimized code compilation, will allow for accurate and efficient VMAT dose calculations. Methods: The BEAMnrc package was installed on a high performance computing cluster accessible to our clinic. MATLAB and PYTHON scripts were developed to convert a clinical VMAT DICOM plan into BEAMnrc input files. The BEAMnrc installation was optimized by running the VMAT simulations through profiling tools which indicated the behavior of the constituent routines in the code, e.g. the bremsstrahlung splitting routine and the specified random number generator. This information aided in determining the most efficient parallel compiling configuration for the specific CPUs available on our cluster, resulting in the fastest VMAT simulation times. Our method was evaluated with calculations involving 10^8-10^9 particle histories, which are sufficient to verify patient dose using VMAT. Results: Parallelization allowed the calculation of patient dose on the order of 10-15 hours with 100 parallel jobs. Due to the compiler optimization process, further speed increases of 23% were achieved when compared with the open-source compiler BEAMnrc packages. Conclusion: Analysis of the BEAMnrc code allowed us to optimize the compiler configuration for VMAT dose calculations. In future work, the optimized MC code, in conjunction with the parallel processing capabilities of BEAMnrc, will be applied to provide accurate and efficient secondary MU checks.
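
    The parallelization pattern itself is generic: split the requested particle histories across independent jobs with distinct random seeds, then sum the per-job tallies. A toy sketch of that pattern follows; the "dose deposition" model here is a placeholder, not BEAMnrc physics:

      import numpy as np
      from multiprocessing import Pool

      def run_batch(args):
          """One independent job: simulate a batch of histories with its
          own seed and return a 1D dose tally (toy physics)."""
          seed, n_histories = args
          rng = np.random.default_rng(seed)
          depths = rng.exponential(scale=5.0, size=n_histories)
          tally, _ = np.histogram(depths, bins=50, range=(0.0, 25.0))
          return tally.astype(float)

      if __name__ == "__main__":
          n_jobs, total = 10, 1_000_000
          jobs = [(seed, total // n_jobs) for seed in range(n_jobs)]
          with Pool(n_jobs) as pool:
              dose = sum(pool.map(run_batch, jobs))   # combine tallies
          print(dose[:5] / total)        # normalized per-history dose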

  16. Did we describe what you meant? Findings and methodological discussion of an empirical validation study for a systematic review of reasons

    PubMed Central

    2014-01-01

    Background The systematic review of reasons is a new way to obtain comprehensive information about specific ethical topics. One such review was carried out for the question of why post-trial access to trial drugs should or need not be provided. The objective of this study was to empirically validate this review using an author check method. The article also reports on methodological challenges faced by our study. Methods We emailed a questionnaire to the 64 corresponding authors of those papers that were assessed in the review of reasons on post-trial access. The questionnaire consisted of all quotations (“reason mentions”) that were identified by the review to represent a reason in a given author’s publication, together with a set of codings for the quotations. The authors were asked to rate the correctness of the codings. Results We received 19 responses, from which only 13 were completed questionnaires. In total, 98 quotations and their related codes in the 13 questionnaires were checked by the addressees. For 77 quotations (79%), all codings were deemed correct, for 21 quotations (21%), some codings were deemed to need correction. Most corrections were minor and did not imply a complete misunderstanding of the citation. Conclusions This first attempt to validate a review of reasons leads to four crucial methodological questions relevant to the future conduct of such validation studies: 1) How can a description of a reason be deemed incorrect? 2) Do the limited findings of this author check study enable us to determine whether the core results of the analysed SRR are valid? 3) Why did the majority of surveyed authors refrain from commenting on our understanding of their reasoning? 4) How can the method for validating reviews of reasons be improved? PMID:25262532

  17. Data Quality Control and Maintenance for the Qweak Experiment

    NASA Astrophysics Data System (ADS)

    Heiner, Nicholas; Spayde, Damon

    2014-03-01

    The Qweak collaboration seeks to quantify the weak charge of the proton through the analysis of the parity-violating electron asymmetry in elastic electron-proton scattering. The asymmetry is calculated by measuring how many electrons deflect from a hydrogen target at the chosen scattering angle for aligned and anti-aligned electron spins, then evaluating the difference between the numbers of deflections that occurred for the two polarization states. The weak charge can then be extracted from this data. Knowing the weak charge will allow us to calculate the electroweak mixing angle for the particular Q^2 value of the chosen electrons, for which the Standard Model makes a firm prediction. Any significant deviation from this prediction would be a prime indicator of the existence of physics beyond what the Standard Model describes. After the experiment was conducted at Jefferson Lab, the collected data were stored within a MySQL database for further analysis. I will present an overview of the database and its functions as well as a demonstration of the quality checks and maintenance performed on the data itself. These checks include an analysis of errors occurring throughout the experiment, specifically data acquisition errors within the main detector array, and an analysis of data cuts.

  18. Abstraction and Assume-Guarantee Reasoning for Automated Software Verification

    NASA Technical Reports Server (NTRS)

    Chaki, S.; Clarke, E.; Giannakopoulou, D.; Pasareanu, C. S.

    2004-01-01

    Compositional verification and abstraction are the key techniques to address the state explosion problem associated with model checking of concurrent software. A promising compositional approach is to prove properties of a system by checking properties of its components in an assume-guarantee style. This article proposes a framework for performing abstraction and assume-guarantee reasoning of concurrent C code in an incremental and fully automated fashion. The framework uses predicate abstraction to extract and refine finite state models of software, and it uses an automata learning algorithm to incrementally construct assumptions for the compositional verification of the abstract models. The framework can be instantiated with different assume-guarantee rules. We have implemented our approach in the COMFORT reasoning framework and we show how COMFORT outperforms several previous software model checking approaches when checking safety properties of non-trivial concurrent programs.

  19. What Information is Stored in DNA: Does it Contain Digital Error Correcting Codes?

    NASA Astrophysics Data System (ADS)

    Liebovitch, Larry

    1998-03-01

    The longest term correlations in living systems are the information stored in DNA which reflects the evolutionary history of an organism. The 4 bases (A,T,G,C) encode sequences of amino acids as well as locations of binding sites for proteins that regulate DNA. The fidelity of this important information is maintained by ANALOG error check mechanisms. When a single strand of DNA is replicated the complementary base is inserted in the new strand. Sometimes the wrong base is inserted that sticks out disrupting the phosphate backbone. The new base is not yet methylated, so repair enzymes, that slide along the DNA, can tear out the wrong base and replace it with the right one. The bases in DNA form a sequence of 4 different symbols and so the information is encoded in a DIGITAL form. All the digital codes in our society (ISBN book numbers, UPC product codes, bank account numbers, airline ticket numbers) use error checking codes, where some digits are functions of other digits to maintain the fidelity of transmitted information. Does DNA also utilize a DIGITAL error checking code to maintain the fidelity of its information and increase the accuracy of replication? That is, are some bases in DNA functions of other bases upstream or downstream? This raises the interesting mathematical problem: How does one determine whether some symbols in a sequence of symbols are a function of other symbols. It also bears on the issue of determining algorithmic complexity: What is the function that generates the shortest algorithm for reproducing the symbol sequence. The error checking codes most used in our technology are linear block codes. We developed an efficient method to test for the presence of such codes in DNA. We coded the 4 bases as (0,1,2,3) and used Gaussian elimination, modified for modulus 4, to test if some bases are linear combinations of other bases. We used this method to analyze the base sequence in the genes from the lac operon and cytochrome C. We did not find evidence for such error correcting codes in these genes. However, we analyzed only a small amount of DNA and if digital error correcting schemes are present in DNA, they may be more subtle than such simple linear block codes. The basic issue we raise here is how information is stored in DNA, and an appreciation that digital symbol sequences, such as DNA, admit of interesting schemes to store and protect the fidelity of their information content. Liebovitch, Tao, Todorov, Levine. 1996. Biophys. J. 71:1539-1544. Supported by NIH grant EY6234.
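
    The kind of test described can be sketched under strong simplifying assumptions: code the bases as 0..3 and check whether one fixed linear parity relation holds across sliding windows, which is the signature a simple linear block code would leave. The weights below are hypothetical, and the authors' actual procedure searched for such relations with Gaussian elimination rather than checking a fixed one:

      import numpy as np

      BASE = {"A": 0, "T": 1, "G": 2, "C": 3}

      def parity_violations(seq, weights):
          """Count windows where sum(w[i] * s[n+i]) mod 4 != 0."""
          s = np.array([BASE[b] for b in seq])
          w = np.array(weights)
          windows = np.lib.stride_tricks.sliding_window_view(s, len(w))
          return int(np.count_nonzero((windows @ w) % 4))

      # A random sequence violates almost every window; a sequence that
      # actually obeyed the code would give zero violations.
      rng = np.random.default_rng(0)
      seq = "".join(rng.choice(list("ATGC"), 300))
      print(parity_violations(seq, [1, 2, 3, 1]))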

  20. Toroidal modeling of the n = 1 intrinsic error field correction experiments in EAST

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Liu, Yueqiang; Sun, Youwen; Wang, Huihui; Gu, Shuai; Jia, Manni; Li, Li; Liu, Yue; Wang, Zhirui; Zhou, Lina

    2018-05-01

    The m/n = 2/1 resonant vacuum error field (EF) in the EAST tokamak experiments, inferred from the compass coil current amplitude and phase scan for mode locking, was found to depend on the parity between the upper and lower rows of the EF correction (EFC) coils (Wang et al 2016 Nucl. Fusion 56 066011). Here m and n are the poloidal and toroidal harmonic numbers in a torus, respectively. This experimental observation implies that the compass scan results cannot be simply interpreted as reflecting the true intrinsic EF. This work aims at understanding this puzzle, based on toroidal modeling of the EFC plasma discharge in EAST using the MARS-F code (Liu et al 2000 Phys. Plasmas 7 3681). By varying the amplitude and phase of the assumed n = 1 intrinsic vacuum EF with different poloidal spectra, and by computing the plasma response to the assumed EF, the compass scan predicted 2/1 EF, based on minimizing the computed resonant electromagnetic torque, can be made to match well with that of the EFC experiments using both even and odd parity coils. Moreover, the compass scan predicted vacuum EFs are found to be significantly differing from the true intrinsic EF used as input to the MARS-F code. While the puzzling result remains to be fully resolved, the results from this study offer an improved understanding of the EFC experiments and the compass scan technique for determining the intrinsic resonant EF.

  1. Redundant disk arrays: Reliable, parallel secondary storage. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gibson, Garth Alan

    1990-01-01

    During the past decade, advances in processor and memory technology have given rise to increases in computational performance that far outstrip increases in the performance of secondary storage technology. Coupled with emerging small-disk technology, disk arrays can provide the cost, volume, and capacity of current disk subsystems and, by leveraging parallelism, many times their performance. Unfortunately, arrays of small disks may have much higher failure rates than the single large disks they replace. Redundant arrays of inexpensive disks (RAID) use simple redundancy schemes to provide high data reliability. The data encoding, performance, and reliability of redundant disk arrays are investigated. Organizing redundant data into a disk array is treated as a coding problem. Among the alternatives examined, codes as simple as parity are shown to effectively correct single, self-identifying disk failures.
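
    The parity scheme singled out in the last sentence is easy to make concrete: the redundant block is the XOR of the data blocks, so the contents of any single failed (self-identifying) disk are exactly the XOR of the survivors. A minimal sketch with toy block sizes:

      import numpy as np

      # Four data disks, one byte-wise parity block (as in RAID level 4/5).
      disks = [np.random.default_rng(d).integers(0, 256, 8, dtype=np.uint8)
               for d in range(4)]
      parity = disks[0] ^ disks[1] ^ disks[2] ^ disks[3]

      lost = 2                             # disk 2 fails (self-identifying)
      survivors = [d for i, d in enumerate(disks) if i != lost] + [parity]
      rebuilt = survivors[0]
      for block in survivors[1:]:
          rebuilt = rebuilt ^ block        # XOR of survivors restores it
      assert np.array_equal(rebuilt, disks[lost])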

  2. FSCATT: Angular Dependence and Filter Options.

    DTIC Science & Technology

    The input routines to the code have been completely rewritten to allow for a free-form input format. The input routines now provide self-consistency checks and diagnostics for the user's edification.

  3. Read-Write-Codes: An Erasure Resilient Encoding System for Flexible Reading and Writing in Storage Networks

    NASA Astrophysics Data System (ADS)

    Mense, Mario; Schindelhauer, Christian

    We introduce the Read-Write-Coding-System (RWC) - a very flexible class of linear block codes that generate efficient and flexible erasure codes for storage networks. In particular, given a message x of k symbols and a codeword y of n symbols, an RW code defines additional parameters k ≤ r, w ≤ n that offer enhanced possibilities to adjust the fault-tolerance capability of the code. More precisely, an RWC provides linear (n, k, d)-codes that have (a) minimum distance d = n - r + 1 for any two codewords, and (b) for each codeword there exists a codeword for each other message with distance of at most w. Furthermore, depending on the values r, w and the code alphabet, different block codes such as parity codes (e.g. RAID 4/5) or Reed-Solomon (RS) codes (if r = k and thus w = n) can be generated. In storage networks in which I/O accesses are very costly and redundancy is crucial, this flexibility has considerable advantages, as r and w can be optimally adapted to read- or write-intensive applications; only w symbols must be updated if the message x changes completely, unlike other codes, which always need to rewrite y completely as x changes. In this paper, we first state a tight lower bound and basic conditions for all RW codes. Furthermore, we introduce special RW codes in which all mentioned parameters are adjustable even online, that is, those RW codes are adaptive to changing demands. At last, we point out some useful properties regarding safety and security of the stored data.

  4. Lake water quality mapping from Landsat

    NASA Technical Reports Server (NTRS)

    Scherz, J. P.

    1977-01-01

    In the project described, remote sensing was used to check the quality of lake waters. The lakes of three Landsat scenes were mapped with the Bendix MDAS multispectral analysis system. From the MDAS color-coded maps, the lake with the worst algae problem was easily located. The lake was closely checked, and the presence of 100 cows in the springs which fed the lake could be identified as the pollution source. The laboratory and field work involved in the lake classification project is described.

  5. Specification Improvement Through Analysis of Proof Structure (SITAPS): High Assurance Software Development

    DTIC Science & Technology

    2016-02-01

    from the tools being used. For example, while Coq proves properties it does not dump an explanation of the proofs in any currently supported form. ... Hotel room locks and card keys use a simple protocol to manage the transition of rooms from one guest to the next. The lock...retains that guest key's code. A new guest checks in and gets a card with a new current code, and the previous code set to the previous guest's current
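
    The lock protocol excerpted above can be reconstructed as a small sketch (class and field names are illustrative, and details elided in the excerpt are not filled in): each card carries a current and a previous code, and a card whose previous code matches the lock's stored code rotates the lock forward, invalidating the prior guest's card without any network connection.

      class HotelLock:
          """Recodeable lock: stores a single current code."""
          def __init__(self, code):
              self.code = code

          def try_open(self, card_current, card_previous):
              if card_current == self.code:      # returning guest
                  return True
              if card_previous == self.code:     # first use of a new card
                  self.code = card_current       # rotate: old card dies
                  return True
              return False

      lock = HotelLock(code=41)
      print(lock.try_open(42, 41))   # new guest's first entry -> True
      print(lock.try_open(41, 40))   # previous guest's card -> False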

  6. System description: IVY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCune, W.; Shumsky, O.

    2000-02-04

    IVY is a verified theorem prover for first-order logic with equality. It is coded in ACL2, and it makes calls to the theorem prover Otter to search for proofs and to the program MACE to search for countermodels. Verifications of Otter and MACE are not practical because they are coded in C. Instead, Otter and MACE give detailed proofs and models that are checked by verified ACL2 programs. In addition, the initial conversion to clause form is done by verified ACL2 code. The verification is done with respect to finite interpretations.

  7. Analysis of JSI TRIGA MARK II reactor physical parameters calculated with TRIPOLI and MCNP.

    PubMed

    Henry, R; Tiselj, I; Snoj, L

    2015-03-01

    A new computational model of the JSI TRIGA Mark II research reactor was built for the TRIPOLI computer code and compared with the existing MCNP code model. The same modelling assumptions were used in order to check the differences between the mathematical models of the two Monte Carlo codes. Differences between the TRIPOLI and MCNP predictions of keff were up to 100 pcm. Further validation was performed with analyses of the normalized reaction rates and computations of kinetic parameters for various core configurations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Testability, Test Automation and Test Driven Development for the Trick Simulation Toolkit

    NASA Technical Reports Server (NTRS)

    Penn, John

    2014-01-01

    This paper describes the adoption of a Test Driven Development approach and a Continuous Integration System in the development of the Trick Simulation Toolkit, a generic simulation development environment for creating high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. It describes the approach, and the significant benefits seen, such as fast, thorough and clear test feedback every time code is checked into the code repository. It also describes an approach that encourages development of code that is testable and adaptable.

  9. An analytical benchmark and a Mathematica program for MD codes: Testing LAMMPS on the 2nd generation Brenner potential

    NASA Astrophysics Data System (ADS)

    Favata, Antonino; Micheletti, Andrea; Ryu, Seunghwa; Pugno, Nicola M.

    2016-10-01

    An analytical benchmark and a simple, consistent Mathematica program are proposed for graphene and carbon nanotubes, which may serve to test any molecular dynamics code implementing REBO potentials. By exploiting the benchmark, we checked the results produced by LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) when adopting the second-generation Brenner potential, made evident that this code in its current implementation produces results that are offset from those of the benchmark by a significant amount, and provide evidence of the reason.

  10. Gender bias in scholarly peer review.

    PubMed

    Helmer, Markus; Schottdorf, Manuel; Neef, Andreas; Battaglia, Demian

    2017-03-21

    Peer review is the cornerstone of scholarly publishing and it is essential that peer reviewers are appointed on the basis of their expertise alone. However, it is difficult to check for any bias in the peer-review process because the identity of peer reviewers generally remains confidential. Here, using public information about the identities of 9000 editors and 43000 reviewers from the Frontiers series of journals, we show that women are underrepresented in the peer-review process, that editors of both genders operate with substantial same-gender preference (homophily), and that the mechanisms of this homophily are gender-dependent. We also show that homophily will persist even if numerical parity between genders is reached, highlighting the need for increased efforts to combat subtler forms of gender bias in scholarly publishing.

  11. C -P -T anomaly matching in bosonic quantum field theory and spin chains

    NASA Astrophysics Data System (ADS)

    Sulejmanpasic, Tin; Tanizaki, Yuya

    2018-04-01

    We consider the O(3) nonlinear sigma model with the θ term and its linear counterpart in 1+1D. The model has discrete time-reflection and space-reflection symmetries at any θ, and enjoys periodicity in θ → θ + 2π. At θ = 0, π it also has a charge-conjugation C symmetry. Gauging the discrete space-time reflection symmetries is interpreted as putting the theory on the nonorientable RP² manifold, after which the 2π periodicity of θ and the C symmetry at θ = π are lost. We interpret this observation as a mixed 't Hooft anomaly among charge-conjugation C, parity P, and time-reversal T symmetries when θ = π. Anomaly matching implies that in this case the ground state cannot be trivially gapped, as long as C, P, and T are all good symmetries of the theory. We make several consistency checks with various semiclassical regimes, and with the exactly solvable XYZ model. We interpret this anomaly as an anomaly of the corresponding spin-half chains with translational symmetry, parity, and time reversal [but not involving the SO(3)-spin symmetry], requiring that the ground state is never trivially gapped, even if SO(3) spin symmetry is explicitly and completely broken. We also consider generalizations to CP^(N-1) models and show that the C-P-T anomaly exists for even N.

  12. The Problem of Modeling the Elastomechanics in Engineering

    DTIC Science & Technology

    1990-02-01

    …element method by the codes PROBE (McNeil Schwendler-Noetic) and STRIPE (Aeronautical Institute of Sweden). These codes have various error checks so that… Mindlin solutions converge to the Kirchhoff solution as d → 0, see e.g. [12], [19]. For a detailed study of the asymptotic behavior of Reissner… of study and research for foreign students in numerical mathematics who are supported by foreign governments or exchange agencies (Fulbright, etc.)…

  13. Automated Diversity in Computer Systems

    DTIC Science & Technology

    2005-09-01

    …traces that started with trace heads, namely backwards-taken branches. These branches are indicative of loops within the program, and Dynamo assumes that… would be the ones the program would normally take. Therefore, when a trace head became hot (was visited enough times), only a single code trace would… all encountered trace heads. When an interesting instruction is being emulated, the tracing code checks to see if it has been encountered before…

  14. The MCUCN simulation code for ultracold neutron physics

    NASA Astrophysics Data System (ADS)

    Zsigmond, G.

    2018-02-01

    Ultracold neutrons (UCN) have very low kinetic energies of 0-300 neV and can thereby be stored in specific material or magnetic confinements for many hundreds of seconds. This makes them a very useful tool for probing fundamental symmetries of nature (for instance, charge-parity violation, via neutron electric dipole moment experiments) and for contributing important parameters to Big Bang nucleosynthesis (neutron lifetime measurements). Improved precision experiments are under construction at new and planned UCN sources around the world. MC simulations play an important role in the optimization of such systems with a large number of parameters, but also in the estimation of systematic effects, in the benchmarking of analysis codes, and as part of the analysis. The MCUCN code written at PSI has been extensively used for the optimization of the UCN source optics and in the optimization and analysis of (test) experiments within the nEDM project based at PSI. In this paper we present the main features of MCUCN and interesting benchmark and application examples.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, J; Pompos, A; Jiang, S

    Purpose: To put forth an innovative clinical paradigm for weekly chart checking so that treatment status is periodically checked accurately and efficiently. This study also aims to help optimize the chart-checking clinical workflow in a busy radiation therapy clinic. Methods: The Texas Administrative Code mandates that radiation therapy patient charts be checked once a week or every five fractions; however, when and how this is done varies drastically among institutions. Some check every day, but much effort is wasted on opening ineligible charts; some check on a fixed day, but the distribution of intervals between subsequent checks is not optimal. To establish an optimal chart-checking procedure, a new paradigm was developed to achieve the following: 1) charts are checked more accurately and more efficiently; 2) charts are checked on optimal days without any miss; 3) workload is evened out throughout the week when multiple physicists are involved. All active charts are accessed by querying the R&V system. A priority is assigned to each chart based on the number of days before the next due date, followed by sorting and workload-distribution steps. New charts are also taken into account when distributing the workload so that it is reasonably even throughout the week. Results: Our clinical workflow became more streamlined and smooth. In addition, charts get checked in a more timely fashion, so that errors would be caught earlier should they occur. Conclusion: We developed a new weekly chart-checking paradigm. It helps physicists check charts in a timely manner, saves their time in busy clinics, and consequently reduces possible errors.

  16. Reduced circuit implementation of encoder and syndrome generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trager, Barry M; Winograd, Shmuel

    An error correction method and system includes an encoder and syndrome generator that operate in parallel to reduce the amount of circuitry used to compute check symbols and syndromes for error-correcting codes. The system and method compute the contributions to the syndromes and check symbols 1 bit at a time instead of 1 symbol at a time. As a result, the even syndromes can be computed as powers of the odd syndromes. Further, the system assigns symbol addresses so that there are, for an example GF(2^8) code which has 72 symbols, three (3) blocks of addresses which differ by a cube root of unity, allowing the data symbols to be combined and thereby reducing the size and complexity of the odd-syndrome circuits. Further, the implementation circuit for generating check symbols is derived from the syndrome circuit using the inverse of the part of the syndrome matrix for the check locations.
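
    The bit-serial trick mentioned above - computing even syndromes as powers of odd ones - rests on the Frobenius identity S_2j = (S_j)^2 for binary (one-bit-at-a-time) contributions in GF(2^m). The sketch below illustrates the identity; the field representation (primitive polynomial 0x11D, alpha = 0x02) and the 15-bit word are illustrative assumptions, not taken from the patent.

        #include <stdint.h>
        #include <stdio.h>

        /* GF(2^8) multiply, primitive polynomial x^8+x^4+x^3+x^2+1 (0x11D).
         * Chosen for illustration; the patent does not fix the representation. */
        static uint8_t gf_mul(uint8_t a, uint8_t b) {
            uint8_t p = 0;
            while (b) {
                if (b & 1) p ^= a;
                b >>= 1;
                a = (uint8_t)((a << 1) ^ ((a & 0x80) ? 0x1D : 0));
            }
            return p;
        }

        int main(void) {
            /* Binary received word r[0..14]; alpha = 0x02 is primitive. */
            uint8_t r[15] = {1,0,1,1,0,0,1,0,1,1,1,0,0,1,0};
            uint8_t s1 = 0, s2 = 0, x1 = 1, x2 = 1;

            for (int i = 0; i < 15; i++) {
                if (r[i]) { s1 ^= x1; s2 ^= x2; }      /* accumulate bits   */
                x1 = gf_mul(x1, 0x02);                  /* alpha^i           */
                x2 = gf_mul(x2, gf_mul(0x02, 0x02));    /* alpha^(2i)        */
            }
            /* For binary data S2 = S1^2, so the even syndrome needs no
             * separate accumulator in hardware. */
            printf("S2 == S1^2: %s\n", s2 == gf_mul(s1, s1) ? "yes" : "no");
            return 0;
        }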

  17. Utilization of maternal healthcare among adolescent mothers in urban India: evidence from DLHS-3.

    PubMed

    Singh, Aditya; Kumar, Abhishek; Pranjali, Pragya

    2014-01-01

    Background. Low use of maternal healthcare services is one of the reasons why maternal mortality is still considerably high among adolescent mothers in India. To increase the utilization of these services, it is necessary to identify factors that affect service utilization. To our knowledge, no national-level study in India has yet examined the issue in the context of urban adolescent mothers. The present study is an attempt to fill this gap. Data and Methods. Using information from the third wave of the District Level Household Survey (2007-08), we have examined factors associated with the utilization of maternal healthcare services among urban Indian married adolescent women (aged 13-19 years) who had live/still births during the three years preceding the survey. The three outcome variables included in the analyses are 'full antenatal care (ANC)', 'safe delivery' and 'postnatal care within 42 days of delivery'. We used the Chi-square test to determine differences in proportions and binary logistic regression to understand the net effect of predictor variables on the utilization of maternity care. Results. About 22.9% of mothers received full ANC, and 65.1% of mothers had at least one postnatal check-up within 42 days of delivery. The proportion of mothers having a safe delivery, i.e., one assisted by skilled personnel, is about 70.5%. Findings indicate considerable variation in the use of maternity care by educational attainment, household wealth, religion, parity and region of residence. Receiving full antenatal care is significantly associated with mother's education, religion, caste, household wealth, parity, exposure to healthcare messages and region of residence. Mother's education, full antenatal care, parity, household wealth, religion and region of residence are also statistically significant in the case of safe delivery. The use of postnatal care is associated with household wealth, woman's education, full antenatal care, safe delivery care and region of residence. Conclusion. Several socioeconomic and demographic factors affect the utilization of maternal healthcare services among urban adolescent women in India. Promoting the use of family planning, female education and higher age at marriage; targeting vulnerable groups such as poor, illiterate, high-parity women; involving the media and grassroots-level workers; and fostering collaboration between community leaders and the healthcare system could be important policy-level interventions to address the unmet need for maternity services among urban adolescents.

  18. Design of parity generator and checker circuit using electro-optic effect of Mach-Zehnder interferometers

    NASA Astrophysics Data System (ADS)

    Kumar, Santosh; Chanderkanta; Amphawan, Angela

    2016-04-01

    Parity is an extra bit added to digital information to detect errors at the receiver end. It can be even or odd parity: in the case of even parity, the number of ones, including the parity bit, is even, and the reverse holds in the case of odd parity. The circuit used to generate the parity at the transmitter side is called the parity generator, and the circuit used to check the parity at the receiver side is called the parity checker. In this paper, even and odd parity generator and checker circuits are designed using the electro-optic effect inside lithium niobate based Mach-Zehnder interferometers (MZIs). The MZI structures collectively show a powerful capability in switching an input optical signal to a desired output port from a collection of output ports. The paper presents a mathematical description of the proposed device and thereafter a simulation using MATLAB. The study is verified using the beam propagation method (BPM).
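
    For readers more familiar with the electrical than the optical realization, here is a minimal C model of the logic the MZI circuits implement: an XOR chain generates the (even) parity bit at the transmitter, and the receiver's checker XORs data and parity and flags a nonzero result. The 4-bit word and names are illustrative.

        #include <stdint.h>
        #include <stdio.h>

        /* XOR chain over the n low bits: returns 1 if the count of ones is odd. */
        static unsigned even_parity_bit(uint8_t data, int n) {
            unsigned p = 0;
            for (int i = 0; i < n; i++) p ^= (data >> i) & 1u;
            return p;
        }

        int main(void) {
            uint8_t word = 0x0B;                 /* 1011: three ones -> parity 1 */
            unsigned p = even_parity_bit(word, 4);

            /* Checker: XOR over data plus parity must be 0 for even parity. */
            unsigned check = even_parity_bit(word, 4) ^ p;
            printf("parity bit=%u, check=%s\n", p, check == 0 ? "ok" : "error");

            /* Any single flipped bit is detected. */
            word ^= 1u << 2;
            check = even_parity_bit(word, 4) ^ p;
            printf("after flip, check=%s\n", check == 0 ? "ok" : "error");
            return 0;
        }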

  19. Unbound motion on a Schwarzschild background: Practical approaches to frequency domain computations

    NASA Astrophysics Data System (ADS)

    Hopper, Seth

    2018-03-01

    Gravitational perturbations due to a point particle moving on a static black hole background are naturally described in Regge-Wheeler gauge. The first-order field equations reduce to a single master wave equation for each radiative mode. The master function satisfying this wave equation is a linear combination of the metric perturbation amplitudes with a source term arising from the stress-energy tensor of the point particle. The original master functions were found by Regge and Wheeler (odd parity) and Zerilli (even parity). Subsequent work by Moncrief and then Cunningham, Price and Moncrief introduced new master variables which allow time domain reconstruction of the metric perturbation amplitudes. Here, I explore the relationship between these different functions and develop a general procedure for deriving new higher-order master functions from ones already known. The benefit of higher-order functions is that their source terms always converge faster at large distance than their lower-order counterparts. This makes for a dramatic improvement in both the speed and accuracy of frequency domain codes when analyzing unbound motion.

  20. The study on dynamic cadastral coding rules based on kinship relationship

    NASA Astrophysics Data System (ADS)

    Xu, Huan; Liu, Nan; Liu, Renyi; Lu, Jingfeng

    2007-06-01

    Cadastral coding rules are an important supplement to the existing national and local standard specifications for building a cadastral database. After analyzing the course of cadastral change, especially parcel change, with the method of object-oriented analysis, a set of dynamic cadastral coding rules based on kinship relationships corresponding to the cadastral change is put forward, and a coding format composed of street code, block code, father parcel code, child parcel code and grandchild parcel code is worked out within the county administrative area. The coding rules have been applied to the development of an urban cadastral information system called "ReGIS", which is not only able to generate the cadastral code automatically according to both the type of parcel change and the coding rules, but is also capable of checking whether the code is spatiotemporally unique before the parcel is stored in the database. The system has been used in several cities of Zhejiang Province and has received a favorable response, which verifies the feasibility and effectiveness of the coding rules to some extent.
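
    A hypothetical sketch of the described code format and the pre-storage uniqueness check, in C. The field widths, separators, and linear-scan lookup are assumptions for illustration; the actual system derives the code from the parcel-change type and checks spatiotemporal uniqueness in its database.

        #include <stdio.h>
        #include <string.h>

        /* Street, block, father parcel, child parcel, grandchild parcel. */
        struct parcel { int street, block, father, child, grandchild; };

        static void cadastral_code(const struct parcel *p, char *out, size_t n) {
            snprintf(out, n, "%03d-%03d-%04d-%02d-%02d",
                     p->street, p->block, p->father, p->child, p->grandchild);
        }

        /* Stand-in for the spatiotemporal uniqueness test before storage. */
        static int code_is_unique(char codes[][20], int count, const char *c) {
            for (int i = 0; i < count; i++)
                if (strcmp(codes[i], c) == 0) return 0;
            return 1;
        }

        int main(void) {
            char db[8][20]; int count = 0;
            struct parcel p = {12, 7, 1034, 2, 0};   /* a child of parcel 1034 */
            char code[20];
            cadastral_code(&p, code, sizeof code);
            if (code_is_unique(db, count, code))
                strcpy(db[count++], code);
            printf("stored %s (%d record)\n", code, count);
            return 0;
        }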

  1. Within-farm variability in age structure of breeding-female pigs and reproductive performance on commercial swine breeding farms.

    PubMed

    Koketsu, Yuzo

    2005-03-15

    This study investigated relationships between herd age structure and herd productivity in breeding herds; it also investigated patterns in the parity proportions of females over 2 years and their relationship with herd productivity in commercial swine herds. This study was based on data from 148 commercial farms in North America stored in the swine database program at the University of Minnesota. The primary selection criterion was fluctuations in breeding-female pig (female) inventories over a 2-year interval. Productivity measurements and parity proportions of females were extracted from the database. A 24-month time-plot of the proportions of Parity 0 and Parities 3-5 females (mid-parity) was charted for each farm. Using these charts, the change in proportions of Parity 0 and mid-parity for each farm was categorized into patterns: FLUCTUATE (Parity 0 and mid-parity proportion lines crossed) or STABLE (the two proportion lines never crossed). Higher proportions of mid-parity sows were correlated with greater pigs weaned per female per year (PWFY; P < 0.01). Farms with a FLUCTUATE pattern (73% of the 148 farms) had lower PWFY than those with a STABLE pattern (P < 0.01). The STABLE farms had higher proportions of mid-parity sows, higher parity at culling, higher frequency of gilt deliveries per year, and lower replacement rates than the FLUCTUATE farms (P < 0.01). In conclusion, maintaining stable subpopulations of mid-parity and Parity 0 females is recommended to optimize herd productivity.

  2. My baby, my move: examination of perceived barriers and motivating factors related to antenatal physical activity.

    PubMed

    Leiferman, Jenn; Swibas, Tracy; Koiness, Kacey; Marshall, Julie A; Dunn, Andrea L

    2011-01-01

    Based on a socioecological model, the present study examined multilevel barriers and facilitators related to physical activity engagement during pregnancy in women of low socioeconomic status. Individual and paired interviews were conducted with 25 pregnant women (aged 18-46 years, 17-40 weeks' gestation) to ask about motivational factors and to compare differences in activity level and parity. Atlas/Ti software was used to code verbatim interview transcripts by organizing codes into categories that reflect symbolic domains of meaning, relational patterns, and overarching themes. Perceived barriers and motivating factors differed between exercisers and nonexercisers at intrapersonal, interpersonal, and environmental levels. Future interventions should take into account key motivating multilevel factors and barriers to tailor more meaningful advice for pregnant women. © 2011 by the American College of Nurse-Midwives.

  3. A new relativistic viscous hydrodynamics code and its application to the Kelvin-Helmholtz instability in high-energy heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Okamoto, Kazuhisa; Nonaka, Chiho

    2017-06-01

    We construct a new relativistic viscous hydrodynamics code optimized in the Milne coordinates. We split the conservation equations into an ideal part and a viscous part, using the Strang splitting method. In the code, a Riemann solver based on the two-shock approximation is utilized for the ideal part and the Piecewise Exact Solution (PES) method is applied for the viscous part. We check the validity of our numerical calculations by comparing with analytical solutions - the viscous Bjorken flow and the Israel-Stewart theory in the Gubser flow regime. Using the code, we discuss the possible development of the Kelvin-Helmholtz instability in high-energy heavy-ion collisions.
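
    Strang splitting itself is easy to demonstrate on a toy problem. The sketch below is not the hydrodynamics code: it splits du/dt = -a*u + c into part A (du/dt = -a*u) and part B (du/dt = c), advances each exactly, and composes them as half step of A, full step of B, half step of A - mirroring how the ideal and viscous operators are composed in such schemes.

        #include <math.h>
        #include <stdio.h>

        static double step_A(double u, double a, double dt) { return u * exp(-a * dt); }
        static double step_B(double u, double c, double dt) { return u + c * dt; }

        int main(void) {
            const double a = 2.0, c = 1.0, T = 1.0;
            const int n = 100;
            const double dt = T / n;
            double u = 0.0;

            for (int i = 0; i < n; i++) {
                u = step_A(u, a, dt / 2);   /* half step of part A */
                u = step_B(u, c, dt);       /* full step of part B */
                u = step_A(u, a, dt / 2);   /* half step of part A */
            }
            /* Exact solution of du/dt = -a*u + c with u(0) = 0; the splitting
             * error is second order in dt. */
            double exact = (c / a) * (1.0 - exp(-a * T));
            printf("strang=%.8f exact=%.8f err=%.2e\n", u, exact, fabs(u - exact));
            return 0;
        }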

  4. Sierra/Aria 4.48 Verification Manual.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierra Thermal Fluid Development Team

    Presented in this document is a portion of the tests that exist in the Sierra Thermal/Fluids verification test suite. Each of these tests is run nightly with the Sierra/TF code suite, and the results of the test are checked under mesh refinement against the correct analytic result. For each of the tests presented in this document, the test setup, a derivation of the analytic solution, and a comparison of the code results to the analytic solution are provided. This document can be used to confirm that a given code capability is verified, or referenced as a compilation of example problems.

  5. Sensor Authentication: Embedded Processor Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Svoboda, John

    2012-09-25

    Described is the C code running on the embedded Microchip 32-bit PIC32MX575F256H located on the INL-developed noise analysis circuit board. The code performs the following functions: controls the noise analysis circuit board preamplifier voltage gains of 1, 10, 100, 000; initializes the analog-to-digital conversion hardware, input channel selection, Fast Fourier Transform (FFT) function, USB communications interface, and internal memory allocations; initiates high-resolution 4096-point 200 kHz data acquisition; computes the complex 2048-point FFT and FFT magnitude; services the host command set; transfers raw data to the host; transfers the FFT result to the host; and performs communication error checking.

  6. CodeSlinger: a case study in domain-driven interactive tool design for biomedical coding scheme exploration and use.

    PubMed

    Flowers, Natalie L

    2010-01-01

    CodeSlinger is a desktop application that was developed to aid medical professionals in the intertranslation, exploration, and use of biomedical coding schemes. The application was designed to provide a highly intuitive, easy-to-use interface that simplifies a complex business problem: a set of time-consuming, laborious tasks that were regularly performed by a group of medical professionals involving manually searching coding books, searching the Internet, and checking documentation references. A workplace observation session with a target user revealed the details of the current process and a clear understanding of the business goals of the target user group. These goals drove the design of the application's interface, which centers on searches for medical conditions and displays the codes found in the application's database that represent those conditions. The interface also allows the exploration of complex conceptual relationships across multiple coding schemes.

  7. A Discussion of Issues in Integrity Constraint Monitoring

    NASA Technical Reports Server (NTRS)

    Fernandez, Francisco G.; Gates, Ann Q.; Cooke, Daniel E.

    1998-01-01

    In the development of large-scale software systems, analysts, designers, and programmers identify properties of data objects in the system. The ability to check those assertions during runtime is desirable as a means of verifying the integrity of the program. Typically, programmers ensure the satisfaction of such properties through the use of some form of manually embedded assertion check. The disadvantage of this approach is that these assertions become entangled within the program code. The goal of the research is to develop an integrity constraint monitoring mechanism whereby a repository of software system properties (called integrity constraints) is automatically inserted into the program by the mechanism to check for incorrect program behaviors. Such a mechanism would overcome many of the deficiencies of manually embedded assertion checks. This paper gives an overview of the preliminary work performed toward this goal. The manual instrumentation of constraint checking on a series of test programs is discussed. This review is then used as the basis for a discussion of issues to be considered in developing an automated integrity constraint monitor.
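
    As a concrete illustration of the "manually embedded assertion check" the paper wants to replace, consider this invented C fragment: the integrity constraint (a balance that must stay non-negative) is hand-coded at the update site, entangled with the program logic, rather than drawn from a constraint repository and inserted automatically.

        #include <assert.h>
        #include <stdio.h>

        struct account { double balance; };

        static void withdraw(struct account *a, double amount) {
            a->balance -= amount;
            /* Hand-written constraint check, duplicated at every update site;
             * an automated monitor would instrument this from a repository. */
            assert(a->balance >= 0.0 && "integrity constraint violated");
        }

        int main(void) {
            struct account acct = { 100.0 };
            withdraw(&acct, 40.0);
            printf("balance = %.2f\n", acct.balance);
            return 0;
        }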

  8. Reducing software security risk through an integrated approach

    NASA Technical Reports Server (NTRS)

    Gilliam, D.; Powell, J.; Kelly, J.; Bishop, M.

    2001-01-01

    The fourth-quarter delivery (FY'01) for this RTOP is a Property-Based Testing (PBT) tool, the 'Tester's Assistant' (TA). The TA tool is to be used to check compiled and pre-compiled code for potential security weaknesses that could be exploited by hackers. The TA Instrumenter, implemented mostly in C++ (with a small part in Java), parses two types of files: Java and TASPEC. Security properties to be checked are written in TASPEC. The Instrumenter is used in conjunction with the Tester's Assistant Specification (TASPEC) execution monitor to verify the security properties of a given program.

  9. An Adaptive Source-Channel Coding with Feedback for Progressive Transmission of Medical Images

    PubMed Central

    Lo, Jen-Lung; Sanei, Saeid; Nazarpour, Kianoush

    2009-01-01

    A novel adaptive source-channel coding with feedback for progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length in the channel code varies with respect to both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated in the receiver. The overall transmitted data can be controlled by the user (clinician). In the case of medical data transmission, it is vital to keep the distortion level under control as in most of the cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces the transmission time and error. Moreover, the system is very user friendly since the selection of the RoI, its size, overall code rate, and a number of test features such as noise level can be set by the users in both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both binary symmetric channel (BSC) and Rayleigh channel. The experimental results verify the effectiveness of the design. PMID:19190770
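
    A minimal sketch of the allocation idea described above, assuming a simple linear weighting (the paper's actual rule is not reproduced here): subblocks closer to the RoI, and noisier channel estimates, receive longer parity.

        #include <stdio.h>

        /* More channel-code parity near the RoI and on noisy channels.
         * The weighting and parameter names are illustrative assumptions. */
        static int parity_bits(int dist_to_roi, double est_noise,
                               int p_min, int p_max) {
            double w = est_noise / (1.0 + dist_to_roi); /* closer + noisier => more */
            int p = p_min + (int)(w * (p_max - p_min));
            return p > p_max ? p_max : p;
        }

        int main(void) {
            for (int d = 0; d <= 4; d++)
                printf("subblock at distance %d: %d parity bits\n",
                       d, parity_bits(d, 0.8, 4, 32));
            return 0;
        }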

  10. High-Speed Large-Alphabet Quantum Key Distribution Using Photonic Integrated Circuits

    DTIC Science & Technology

    2014-01-28

    …polarizing beam splitter, TDC: time-to-digital converter. [Table fragments: columns extra loss, photon/bin or photon/frame, frame size, QSER, secure bpp, ECC, secure key rate; one recoverable row reports 1.3 photons/frame, frame size 16, QSER 9.5%, 2.9 secure bpp, layered LDPC error correction, and a 7.3 Mbps secure key rate. Figure 24: Operating…]

  11. Optimal bit allocation for hybrid scalable/multiple-description video transmission over wireless channels

    NASA Astrophysics Data System (ADS)

    Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.

    2006-01-01

    In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding, and product code Reed-Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed for discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Also, comparisons with classical scalable coding show the effectiveness of using hybrid scalable/multiple-description coding for wireless transmission.
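
    Because both the source rates and the channel code rates form small discrete sets, the optimization can be illustrated with an exhaustive search. The sketch below is a stand-in with invented quality and loss numbers, not the paper's operational rate-distortion model.

        #include <stdio.h>

        #define NS 3
        #define NC 3

        int main(void) {
            const double src_rate[NS]  = {100e3, 200e3, 400e3};   /* bits/s   */
            const double src_psnr[NS]  = {30.0, 33.0, 36.0};      /* quality  */
            const double code_rate[NC] = {0.5, 2.0/3.0, 0.75};    /* RCPC     */
            const double loss[NC]      = {0.01, 0.05, 0.12};      /* assumed  */
            const double budget = 450e3;                          /* bits/s   */

            double best = -1.0; int bs = -1, bc = -1;
            for (int s = 0; s < NS; s++)
                for (int c = 0; c < NC; c++) {
                    double total = src_rate[s] / code_rate[c];    /* after FEC */
                    if (total > budget) continue;                 /* infeasible */
                    double q = src_psnr[s] * (1.0 - loss[c]);     /* toy score  */
                    if (q > best) { best = q; bs = s; bc = c; }
                }
            printf("best: source %.0f kbps, channel rate %.2f, score %.2f\n",
                   src_rate[bs] / 1e3, code_rate[bc], best);
            return 0;
        }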

  12. Concurrent hypercube system with improved message passing

    NASA Technical Reports Server (NTRS)

    Peterson, John C. (Inventor); Tuazon, Jesus O. (Inventor); Lieberman, Don (Inventor); Pniel, Moshe (Inventor)

    1989-01-01

    A network of microprocessors, or nodes, is interconnected in an n-dimensional cube having bidirectional communication links along the edges of the n-dimensional cube. Each node's processor network includes an I/O subprocessor dedicated to controlling communication of message packets along a bidirectional communication link, with each end thereof terminating at an I/O-controlled transceiver. Transmit data lines are directly connected from a local FIFO through each node's communication link transceiver. Status and control signals from the neighboring nodes are delivered over supervisory lines to inform the local node that the neighbor node's FIFO is empty and the bidirectional link between the two nodes is idle for data communication. A clocking line between neighbors clocks a message into an empty FIFO at a neighbor's node, and vice versa. Either neighbor may acquire control over the bidirectional communication link at any time, and thus each node has circuitry for checking whether or not the communication link is busy or idle, and whether or not the receive FIFO is empty. Likewise, each node can empty its own FIFO and in turn deliver a status signal to a neighboring node indicating that the local FIFO is empty. The system includes features of automatic message rerouting, block message transfer, and automatic parity checking and generation.

  13. Parity & Untreated Dental Caries in US Women

    PubMed Central

    Russell, S.L.; Ickovics, J.R.; Yaffee, R.A.

    2010-01-01

    While parity (number of children) reportedly is related to tooth loss, the relationship between parity and dental caries has not been extensively investigated. We used path analysis to test a theoretical model that specified that parity influences dental caries levels through dental care, psychosocial factors, and dental-health-damaging behaviors in 2635 women selected from the NHANES III dataset. We found that while increased parity was not associated with a greater level of total caries (DFS), parity was related to untreated dental caries (DS). The mechanisms by which parity is related to caries, however, remain undefined. Further investigation is warranted to determine if disparities in dental caries among women are due to differences in parity and the likely changes that parallel these reproductive choices. PMID:20631092

  14. Parity & untreated dental caries in US women.

    PubMed

    Russell, S L; Ickovics, J R; Yaffee, R A

    2010-10-01

    While parity (number of children) reportedly is related to tooth loss, the relationship between parity and dental caries has not been extensively investigated. We used path analysis to test a theoretical model that specified that parity influences dental caries levels through dental care, psychosocial factors, and dental-health-damaging behaviors in 2635 women selected from the NHANES III dataset. We found that while increased parity was not associated with a greater level of total caries (DFS), parity was related to untreated dental caries (DS). The mechanisms by which parity is related to caries, however, remain undefined. Further investigation is warranted to determine if disparities in dental caries among women are due to differences in parity and the likely changes that parallel these reproductive choices.

  15. Separation of the 1+ /1- parity doublet in 20Ne

    NASA Astrophysics Data System (ADS)

    Beller, J.; Stumpf, C.; Scheck, M.; Pietralla, N.; Deleanu, D.; Filipescu, D. M.; Glodariu, T.; Haxton, W.; Idini, A.; Kelley, J. H.; Kwan, E.; Martinez-Pinedo, G.; Raut, R.; Romig, C.; Roth, R.; Rusev, G.; Savran, D.; Tonchev, A. P.; Tornow, W.; Wagner, J.; Weller, H. R.; Zamfir, N.-V.; Zweidinger, M.

    2015-02-01

    The (J^π, T) = (1^±, 1) parity doublet in 20Ne at 11.26 MeV is a good candidate for studying parity violation in nuclei. However, its energy splitting is known with insufficient accuracy for quantitative estimates of parity-violating effects. To improve on this unsatisfactory situation, nuclear resonance fluorescence experiments using linearly and circularly polarized γ-ray beams were used to determine the energy difference of the parity doublet, ΔE = E(1-) - E(1+) = -3.2(±0.7)_stat(+0.6/-1.2)_sys keV, and the ratio of their integrated cross sections, I_s,0(1+)/I_s,0(1-) = 29(±3)_stat(+14/-7)_sys. Shell-model calculations predict a parity-violating matrix element having a value in the range 0.46-0.83 eV for the parity doublet. The small energy difference of the parity doublet makes 20Ne an excellent candidate for studying parity violation in nuclear excitations.

  16. Gender bias in scholarly peer review

    PubMed Central

    Helmer, Markus; Schottdorf, Manuel; Neef, Andreas; Battaglia, Demian

    2017-01-01

    Peer review is the cornerstone of scholarly publishing and it is essential that peer reviewers are appointed on the basis of their expertise alone. However, it is difficult to check for any bias in the peer-review process because the identity of peer reviewers generally remains confidential. Here, using public information about the identities of 9000 editors and 43000 reviewers from the Frontiers series of journals, we show that women are underrepresented in the peer-review process, that editors of both genders operate with substantial same-gender preference (homophily), and that the mechanisms of this homophily are gender-dependent. We also show that homophily will persist even if numerical parity between genders is reached, highlighting the need for increased efforts to combat subtler forms of gender bias in scholarly publishing. DOI: http://dx.doi.org/10.7554/eLife.21718.001 PMID:28322725

  17. Optimum Cyclic Redundancy Codes for Noisy Channels

    NASA Technical Reports Server (NTRS)

    Posner, E. C.; Merkey, P.

    1986-01-01

    The capabilities and limitations of cyclic redundancy codes (CRCs) for detecting transmission errors in data sent over relatively noisy channels (e.g., voice-grade telephone lines or very-high-density storage media) are discussed in a 16-page report. Due to the prevalent use of bytes (multiples of 8 bits) in data transmission, the report is primarily concerned with cases in which both the block length and the number of redundant bits (check bits for use in error detection) included in each block are multiples of 8 bits.
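
    For reference, a bitwise CRC-16 in C. The CCITT polynomial 0x1021 and initial value 0xFFFF are a common illustrative choice; the report's point is precisely that the optimum polynomial depends on the block length and the channel.

        #include <stdint.h>
        #include <stdio.h>

        /* Bitwise CRC-16, polynomial x^16+x^12+x^5+1 (0x1021), init 0xFFFF. */
        static uint16_t crc16(const uint8_t *data, int len) {
            uint16_t crc = 0xFFFF;
            for (int i = 0; i < len; i++) {
                crc ^= (uint16_t)data[i] << 8;
                for (int b = 0; b < 8; b++)        /* one bit at a time */
                    crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                         : (uint16_t)(crc << 1);
            }
            return crc;
        }

        int main(void) {
            uint8_t block[8] = {'1','2','3','4','5','6','7','8'};
            uint16_t check = crc16(block, 8);
            printf("check bits: 0x%04X\n", check);

            block[3] ^= 0x40;                      /* corrupt one bit */
            printf("detected: %s\n", crc16(block, 8) != check ? "yes" : "no");
            return 0;
        }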

  18. The PlusCal Algorithm Language

    NASA Astrophysics Data System (ADS)

    Lamport, Leslie

    Algorithms are different from programs and should not be described with programming languages. The only simple alternative to programming languages has been pseudo-code. PlusCal is an algorithm language that can be used right now to replace pseudo-code, for both sequential and concurrent algorithms. It is based on the TLA+ specification language, and a PlusCal algorithm is automatically translated to a TLA+ specification that can be checked with the TLC model checker and reasoned about formally.

  19. Proceedings of the Third International Workshop on Proof-Carrying Code and Software Certification

    NASA Technical Reports Server (NTRS)

    Denney, Ewen W. (Editor); Jensen, Thomas (Editor)

    2009-01-01

    This NASA conference publication contains the proceedings of the Third International Workshop on Proof-Carrying Code and Software Certification, held as part of LICS in Los Angeles, CA, USA, on August 15, 2009. Software certification demonstrates the reliability, safety, or security of software systems in such a way that it can be checked by an independent authority with minimal trust in the techniques and tools used in the certification process itself. It can build on existing validation and verification (V&V) techniques but introduces the notion of explicit software certificates, which contain all the information necessary for an independent assessment of the demonstrated properties. One such example is proof-carrying code (PCC), which is an important and distinctive approach to enhancing trust in programs. It provides a practical framework for independent assurance of program behavior, especially where source code is not available, or the code author and user are unknown to each other. The workshop will address theoretical foundations of logic-based software certification as well as practical examples and work on alternative application domains. Here "certificate" is construed broadly, to include not just mathematical derivations and proofs but also safety and assurance cases, or any formal evidence that supports the semantic analysis of programs: that is, evidence about an intrinsic property of code and its behaviour that can be independently checked by any user, intermediary, or third party. These guarantees mean that software certificates raise trust in the code itself, distinct from and complementary to any existing trust in the creator of the code, the process used to produce it, or its distributor. In addition to the contributed talks, the workshop featured two invited talks, by Kelly Hayhurst and Andrew Appel. The PCC 2009 website can be found at http://ti.arc.nasa.gov/event/pcc09.

  20. Analysis of error-correction constraints in an optical disk.

    PubMed

    Roberts, J D; Ryley, A; Jones, D M; Burke, D

    1996-07-10

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
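
    The role of the interleaving strategies mentioned above is to turn a channel burst into isolated symbol errors that the Reed Solomon layers can then correct. A minimal R x C block interleaver in C shows the effect; the dimensions and the 4-symbol burst are illustrative, not the CD-ROM's actual cross-interleave.

        #include <stdio.h>
        #include <string.h>

        #define R 4
        #define C 6

        int main(void) {
            char msg[R * C + 1] = "ABCDEFGHIJKLMNOPQRSTUVWX";
            char tx[R * C + 1] = {0};

            /* Interleave: write row-wise, read column-wise. */
            for (int c = 0; c < C; c++)
                for (int r = 0; r < R; r++)
                    tx[c * R + r] = msg[r * C + c];

            memset(tx + 8, '*', 4);               /* burst of 4 channel errors */

            /* De-interleave: the burst lands in 4 different rows (codewords),
             * one symbol each, which a per-row code can correct. */
            char rx[R * C + 1] = {0};
            for (int c = 0; c < C; c++)
                for (int r = 0; r < R; r++)
                    rx[r * C + c] = tx[c * R + r];

            printf("received: %s\n", rx);
            return 0;
        }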

  1. Analysis of error-correction constraints in an optical disk

    NASA Astrophysics Data System (ADS)

    Roberts, Jonathan D.; Ryley, Alan; Jones, David M.; Burke, David

    1996-07-01

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.

  2. Data Race Benchmark Collection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Chunhua; Lin, Pei-Hung; Asplund, Joshua

    2017-03-21

    This project is a benchmark suite of OpenMP parallel codes that have been checked for data races. The programs are marked to show which do and do not have races. This allows them to be leveraged while testing and developing race detection tools.
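
    A minimal example of the kind of defect such benchmarks are marked for (this pair is illustrative, not taken from the suite): an unsynchronized accumulation races, while a reduction clause repairs it.

        #include <omp.h>
        #include <stdio.h>

        int main(void) {
            long sum = 0;

            #pragma omp parallel for          /* RACY: unsynchronized += on sum */
            for (int i = 0; i < 100000; i++)
                sum += i;

            long ok = 0;
            #pragma omp parallel for reduction(+:ok)   /* race-free version */
            for (int i = 0; i < 100000; i++)
                ok += i;

            printf("racy=%ld correct=%ld\n", sum, ok);
            return 0;
        }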

  3. State Parity Laws and Access to Treatment for Substance Use Disorder in the United States: Implications for Federal Parity Legislation

    PubMed Central

    Wen, Hefei; Cummings, Janet R.; Hockenberry, Jason M.; Gaydos, Laura M.; Druss, Benjamin G.

    2014-01-01

    Context The passage of the 2008 Mental Health Parity and Addiction Equity Act (MHPAEA) and the 2010 Affordable Care Act (ACA) incorporated parity for substance use disorder (SUD) into federal legislation. Yet prior research provides us with scant evidence as to whether federal parity legislation will hold the potential for improving access to SUD treatment. Objective This study examined the effect of state-level SUD parity laws on state-aggregate SUD treatment rates from 2000 to 2008, to shed light on the impact of the recent federal-level SUD parity legislation. Design A quasi-experimental study design using a two-way (state and year) fixed-effect method Setting and Participants All known specialty SUD treatment facilities in the United States Interventions State-level SUD parity laws between 2000 and 2008 Main Outcome Measures State-aggregate SUD treatment rates in: (1) all specialty SUD treatment facilities, and (2) specialty SUD treatment facilities accepting private insurance Results The implementation of any SUD parity law increased the treatment rate by 9 percent (p<0.01) in all specialty SUD treatment facilities and by 15 percent (p<0.05) in facilities accepting private insurance. Full parity and parity-if-offered (i.e., parity only if SUD coverage is offered) increased SUD treatment rate by 13 percent (p<0.05) and 8 percent (p<0.05) in all facilities, and by 21 percent (p<0.05) and 10 percent (p<0.05) in those accepting private insurance. Conclusions We found a positive effect of the implementation of state SUD parity legislation on access to specialty SUD treatment. Furthermore, the positive association was more pronounced in states with more comprehensive parity laws. Our findings suggest that federal parity legislation holds the potential to improve access to SUD treatment. PMID:24154931

  4. Seeking more Opportunities of Check Dams' harmony with nearby Circumstances via Design Thinking Process

    NASA Astrophysics Data System (ADS)

    Lin, Huan-Chun; Chen, Su-Chin; Tsai, Chen-Chen

    2014-05-01

    The content of engineering design should indeed span both the science and art fields. However, the art aspect is discussed too little, which can cause an inharmonious impact on the natural surroundings, and check dams are no exception. This study seeks more opportunities for check dams to harmonize with nearby circumstances. Based on a review of the literature in philosophy and cognitive science, we suggest a three-phase thinking process for check dam design work, for reference. The first phase, conceptualization, is to list critical problems, such as the characteristics of erosion or deposition, and translate them into goal situations. The second phase, transformation, is to use cognition methods such as analogy, association and metaphor to shape an image and prototypes. The third phase, formation, is to decide the details of the construction, such as stability and safety analysis of shapes or materials. As described above, Taiwan's technical codes and papers on check dam design mostly emphasize the first and third phases, while quite a few still lack the second phase. We emphasize that designers shouldn't ignore any phase of the framework, especially the second one, or they may miss chances to find more suitable solutions. Moreover, this conceptual framework is simple to apply, and we suppose it is a useful tool for designing a check dam more harmonious with the nearby natural landscape. Key Words: check dams, design thinking process, conceptualization, transformation, formation.

  5. Propel: Tools and Methods for Practical Source Code Model Checking

    NASA Technical Reports Server (NTRS)

    Mansouri-Samani, Massoud; Mehlitz, Peter; Markosian, Lawrence; OMalley, Owen; Martin, Dale; Moore, Lantz; Penix, John; Visser, Willem

    2003-01-01

    The work reported here is an overview and snapshot of a project to develop practical model checking tools for in-the-loop verification of NASA's mission-critical, multithreaded programs in Java and C++. Our strategy is to develop and evaluate both a design concept that enables the application of model checking technology to C++ and Java, and a model checking toolset for C++ and Java. The design concept and the associated model checking toolset are called Propel. It builds upon the Java PathFinder (JPF) tool, an explicit-state model checker for Java applications developed by the Automated Software Engineering group at NASA Ames Research Center. The design concept that we are developing is Design for Verification (D4V). This is an adaptation of existing best design practices that has the desired side effect of enhancing verifiability by improving modularity and decreasing accidental complexity. D4V, we believe, enhances the applicability of a variety of V&V approaches; we are developing the concept in the context of model checking. The model checking toolset, Propel, is based on extending JPF to handle C++. Our principal tasks in developing the toolset are to build a translator from C++ to Java, productize JPF, and evaluate the toolset in the context of D4V. Throughout all these tasks we are testing Propel capabilities on customer applications.

  6. X-Antenna: A graphical interface for antenna analysis codes

    NASA Technical Reports Server (NTRS)

    Goldstein, B. L.; Newman, E. H.; Shamansky, H. T.

    1995-01-01

    This report serves as the user's manual for the X-Antenna code. X-Antenna is intended to simplify the analysis of antennas by giving the user graphical interfaces in which to enter all relevant antenna and analysis code data. Essentially, X-Antenna creates a Motif interface to the user's antenna analysis codes. A command-file allows new antennas and codes to be added to the application. The menu system and graphical interface screens are created dynamically to conform to the data in the command-file. Antenna data can be saved and retrieved from disk. X-Antenna checks all antenna and code values to ensure they are of the correct type, writes an output file, and runs the appropriate antenna analysis code. Volumetric pattern data may be viewed in 3D space with an external viewer run directly from the application. Currently, X-Antenna includes analysis codes for thin wire antennas (dipoles, loops, and helices), rectangular microstrip antennas, and thin slot antennas.

  7. Cubic interactions of massless bosonic fields in three dimensions. II. Parity-odd and Chern-Simons vertices

    NASA Astrophysics Data System (ADS)

    Kessel, Pan; Mkrtchyan, Karapet

    2018-05-01

    This work completes the classification of the cubic vertices for arbitrary-spin massless bosons in three dimensions, started in a previous companion paper, by constructing parity-odd vertices. Similarly to the parity-even case, there is a unique parity-odd vertex for any given triple s1 ≥ s2 ≥ s3 ≥ 2 of massless bosons if the triangle inequalities are satisfied (s1 < s2 + s3).

  8. A new relativistic viscous hydrodynamics code and its application to the Kelvin–Helmholtz instability in high-energy heavy-ion collisions

    DOE PAGES

    Okamoto, Kazuhisa; Nonaka, Chiho

    2017-06-09

    Here, we construct a new relativistic viscous hydrodynamics code optimized in the Milne coordinates. We split the conservation equations into an ideal part and a viscous part, using the Strang splitting method. In the code, a Riemann solver based on the two-shock approximation is utilized for the ideal part and the Piecewise Exact Solution (PES) method is applied for the viscous part. Furthermore, we check the validity of our numerical calculations by comparing with analytical solutions - the viscous Bjorken flow and the Israel-Stewart theory in the Gubser flow regime. Using the code, we discuss the possible development of the Kelvin-Helmholtz instability in high-energy heavy-ion collisions.

  9. Numerical study of supersonic combustors by multi-block grids with mismatched interfaces

    NASA Technical Reports Server (NTRS)

    Moon, Young J.

    1990-01-01

    A three dimensional, finite rate chemistry, Navier-Stokes code was extended to a multi-block code with mismatched interface for practical calculations of supersonic combustors. To ensure global conservation, a conservative algorithm was used for the treatment of mismatched interfaces. The extended code was checked against one test case, i.e., a generic supersonic combustor with transverse fuel injection, examining solution accuracy, convergence, and local mass flux error. After testing, the code was used to simulate the chemically reacting flow fields in a scramjet combustor with parallel fuel injectors (unswept and swept ramps). Computational results were compared with experimental shadowgraph and pressure measurements. Fuel-air mixing characteristics of the unswept and swept ramps were compared and investigated.

  10. User input verification and test driven development in the NJOY21 nuclear data processing code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trainer, Amelia Jo; Conlin, Jeremy Lloyd; McCartney, Austin Paul

    Before physically-meaningful data can be used in nuclear simulation codes, the data must be interpreted and manipulated by a nuclear data processing code so as to extract the relevant quantities (e.g. cross sections and angular distributions). Perhaps the most popular and widely-trusted of these processing codes is NJOY, which has been developed and improved over the course of 10 major releases since its creation at Los Alamos National Laboratory in the mid-1970's. The current phase of NJOY development is the creation of NJOY21, which will be a vast improvement over its predecessor, NJOY2016. Designed to be fast, intuitive, accessible, and capable of handling both established and modern formats of nuclear data, NJOY21 will address many issues that NJOY users face, while remaining functional for those who prefer the existing format. Although early in its development, NJOY21 already provides input validation to check user input. By providing rapid and helpful responses to users while they write input files, NJOY21 will prove to be more intuitive and easier to use than any of its predecessors. Furthermore, during its development, NJOY21 is subject to regular testing, such that its test coverage must strictly increase with the addition of any production code. This thorough testing will allow developers and NJOY users to establish confidence in NJOY21 as it gains functionality. This document serves as a discussion of the current state of input checking and testing practices in NJOY21.

  11. Evaluation of a bar-code system to detect unaccompanied baggage

    DOT National Transportation Integrated Search

    1988-02-01

    The objective of the Unaccompanied Baggage Detection System (UBDS) Project has been to gain field experience with a system designed to identify passengers who check baggage for a flight and subsequently fail to board that flight. In the first p…

  12. Simultaneous message framing and error detection

    NASA Technical Reports Server (NTRS)

    Frey, A. H., Jr.

    1968-01-01

    Circuitry simultaneously inserts message framing information and detects noise errors in binary code data transmissions. Separate message groups are framed without requiring both framing bits and error-checking bits, and predetermined message sequences are separated from other message sequences without being hampered by intervening noise.

  13. Telemetry Standards, RCC Standard 106-17, Chapter 4, Pulse Code Modulation Standards

    DTIC Science & Technology

    2017-07-01

    [Table-of-contents fragments: Frame Structure; 4.3.3 Cyclic Redundancy Check (Class…); Spectral and BEP Comparisons for NRZ and Bi-phase; A.4 PCM Frame Structure Examples; Figure 4-3, PCM Frame Structure.]

  14. 40 CFR 86.005-17 - On-board diagnostics.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... other available operating parameters), and functionality checks for computer output components (proper... considered acceptable. (e) Storing of computer codes. The OBD system shall record and store in computer... monitors that can be considered continuously operating monitors (e.g., misfire monitor, fuel system monitor...

  15. 40 CFR 86.005-17 - On-board diagnostics.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... other available operating parameters), and functionality checks for computer output components (proper... considered acceptable. (e) Storing of computer codes. The OBD system shall record and store in computer... monitors that can be considered continuously operating monitors (e.g., misfire monitor, fuel system monitor...

  16. Single event upset protection circuit and method

    DOEpatents

    Wallner, John; Gorder, Michael

    2016-03-22

    An SEU protection circuit comprises first and second storage means for receiving primary and redundant versions, respectively, of an n-bit wide data value that is to be corrected in case of an SEU occurrence; the correction circuit requires that the data value be a 1-hot encoded value. A parity engine performs a parity operation on the n bits of the primary data value. A multiplexer receives the primary and redundant data values and the parity engine output at respective inputs, and is arranged to pass the primary data value to an output when the parity engine output indicates `odd` parity, and to pass the redundant data value to the output when the parity engine output indicates `even` parity. The primary and redundant data values are suitably state variables, and the parity engine is preferably an n-bit wide XOR or XNOR gate.
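
    In software terms, the selection logic reduces to this: a valid 1-hot value has exactly one bit set, hence odd parity, so an upset primary copy (even parity) steers the multiplexer to the redundant copy. A C model of that behavior follows; the 8-bit width is an illustrative choice.

        #include <stdint.h>
        #include <stdio.h>

        /* XOR-reduce the 8 bits: returns 1 for odd parity. */
        static unsigned parity8(uint8_t v) {
            v ^= v >> 4; v ^= v >> 2; v ^= v >> 1;
            return v & 1u;
        }

        /* Model of the parity-steered multiplexer in the patent. */
        static uint8_t seu_protect(uint8_t primary, uint8_t redundant) {
            return parity8(primary) ? primary      /* odd parity: trust it   */
                                    : redundant;   /* even: a bit was flipped */
        }

        int main(void) {
            uint8_t state  = 0x08;                 /* 1-hot state variable   */
            uint8_t shadow = 0x08;                 /* redundant copy         */

            printf("clean: 0x%02X\n", seu_protect(state, shadow));
            state ^= 0x20;                         /* simulate an SEU        */
            printf("upset: 0x%02X\n", seu_protect(state, shadow));
            return 0;
        }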

  17. Nonlinear detection for a high rate extended binary phase shift keying system.

    PubMed

    Chen, Xian-Qing; Wu, Le-Nan

    2013-03-28

    The algorithm and the results of a nonlinear detector using a machine learning technique called the support vector machine (SVM) on an efficient modulation system with high data rate and low energy consumption are presented in this paper. Simulation results showed that the performance achieved by the SVM detector is comparable to that of a conventional threshold decision (TD) detector. The two detectors detect the received signals together with the special impacting filter (SIF), which can improve the energy utilization efficiency. However, unlike the TD detector, the SVM detector concentrates not only on reducing the BER of the detector, but also on providing accurate posterior probability estimates (PPEs), which can be used as soft inputs of the LDPC decoder. The complexity of this detector is addressed in this paper by using four features and simplifying the decision function. In addition, a bandwidth-efficient transmission is analyzed with both the SVM and TD detectors. The SVM detector is more robust to the sampling rate than the TD detector. We find that the SVM is suitable for extended binary phase shift keying (EBPSK) signal detection and can provide accurate posterior probabilities for LDPC decoding.

  18. Nonlinear Detection for a High Rate Extended Binary Phase Shift Keying System

    PubMed Central

    Chen, Xian-Qing; Wu, Le-Nan

    2013-01-01

    The algorithm and the results of a nonlinear detector using a machine learning technique called the support vector machine (SVM) on an efficient modulation system with high data rate and low energy consumption are presented in this paper. Simulation results showed that the performance achieved by the SVM detector is comparable to that of a conventional threshold decision (TD) detector. The two detectors detect the received signals together with the special impacting filter (SIF), which can improve the energy utilization efficiency. However, unlike the TD detector, the SVM detector concentrates not only on reducing the BER of the detector, but also on providing accurate posterior probability estimates (PPEs), which can be used as soft inputs of the LDPC decoder. The complexity of this detector is addressed in this paper by using four features and simplifying the decision function. In addition, a bandwidth-efficient transmission is analyzed with both the SVM and TD detectors. The SVM detector is more robust to the sampling rate than the TD detector. We find that the SVM is suitable for extended binary phase shift keying (EBPSK) signal detection and can provide accurate posterior probabilities for LDPC decoding. PMID:23539034

  19. RAINIER: A simulation tool for distributions of excited nuclear states and cascade fluctuations

    NASA Astrophysics Data System (ADS)

    Kirsch, L. E.; Bernstein, L. A.

    2018-06-01

    A new code has been developed named RAINIER that simulates the γ-ray decay of discrete and quasi-continuum nuclear levels for a user-specified range of energy, angular momentum, and parity including a realistic treatment of level spacing and transition width fluctuations. A similar program, DICEBOX, uses the Monte Carlo method to simulate level and width fluctuations but is restricted in its initial level population algorithm. On the other hand, modern reaction codes such as TALYS and EMPIRE populate a wide range of states in the residual nucleus prior to γ-ray decay, but do not go beyond the use of deterministic functions and therefore neglect cascade fluctuations. This combination of capabilities allows RAINIER to be used to determine quasi-continuum properties through comparison with experimental data. Several examples are given that demonstrate how cascade fluctuations influence experimental high-resolution γ-ray spectra from reactions that populate a wide range of initial states.
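
    The abstract does not spell out the sampling distributions; a conventional choice for this kind of Monte Carlo treatment (assumed here, not quoted from the paper) is Wigner-surmise level spacings and Porter-Thomas (chi-squared, one degree of freedom) transition-width fluctuations. A toy realization:

    ```python
    # Hedged sketch of stochastic level and width fluctuations, assuming
    # Wigner-surmise spacings and Porter-Thomas widths with toy parameters.
    import numpy as np

    rng = np.random.default_rng(42)
    D = 1.0                                                  # mean level spacing (arbitrary units)
    u = rng.uniform(size=1000)
    spacings = D * np.sqrt(-4.0 / np.pi * np.log(1.0 - u))   # inverse-CDF Wigner sampling
    levels = np.cumsum(spacings)                             # one realization of a level scheme
    widths = rng.chisquare(df=1, size=1000)                  # Porter-Thomas width fluctuations
    ```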

  20. Three-dimensional structural analysis using interactive graphics

    NASA Technical Reports Server (NTRS)

    Biffle, J.; Sumlin, H. A.

    1975-01-01

    The application of computer interactive graphics to three-dimensional structural analysis was described, with emphasis on the following aspects: (1) structural analysis, and (2) generation and checking of input data and examination of the large volume of output data (stresses, displacements, velocities, accelerations). Handling of three-dimensional input processing with a special MESH3D computer program was explained. Similarly, a special code PLTZ may be used to perform all the needed tasks for output processing from a finite element code. Examples were illustrated.

  1. Dynamic Detection of Malicious Code in COTS Software

    DTIC Science & Technology

    2000-04-01

    [Garbled fragment of a tool-evaluation table. The recoverable content: the tools were run against documented hostile applets and ActiveX controls; some of the tools work only on mobile code (Java, ActiveX); eSafe Protect Desktop and Surfinshield Online each blocked 9/9 hostile applets and 13/17 items in a second test set; Exploder is an ActiveX control that performs a clean shutdown of the computer.]

  2. On the RAC. CIOs must work to make their organization's time with CMS Recovery Audit Contractors as painless as possible.

    PubMed

    Lawrence, Daphne

    2009-01-01

    CIOs should act as a team with HIM and Finance to prepare for RAC audits. CIOs can take the lead in looking at improved coding systems, and can be involved in creating policies and procedures for the hospital's RAC team. RAC is an opportunity to improve documentation, coding and data analysis. RAC appeals will become more common as states share lessons learned. Follow the money and check on claims that are frequently returned.

  3. C code generation from Petri-net-based logic controller specification

    NASA Astrophysics Data System (ADS)

    Grobelny, Michał; Grobelna, Iwona; Karatkevich, Andrei

    2017-08-01

    The article focuses on the programming of logic controllers. It is important that the program code of a logic controller execute flawlessly according to the primary specification. In the presented approach, we generate C code for an AVR microcontroller from a rule-based logical model of a control process derived from a control interpreted Petri net. The same logical model is also used for formal verification of the specification by means of the model checking technique. The proposed rule-based logical model and formal transformation rules ensure that the obtained implementation is consistent with the already verified specification. The approach is validated by practical experiments.
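
    To make the idea concrete, here is a hedged sketch (hypothetical rule format and place names, not the authors' tool) of how a transition rule of a safe control interpreted Petri net can be turned into a C if-block: firing clears the input places and marks the output places.

    ```python
    # Illustrative generator: one C if-block per Petri-net transition rule.
    rules = [
        {"name": "t1", "pre": ["p1", "p2"], "post": ["p3"]},  # fire t1 when p1 and p2 are marked
    ]

    def emit_c(rule):
        guard = " && ".join(rule["pre"])
        body = "".join(f" {p} = 0;" for p in rule["pre"]) \
             + "".join(f" {p} = 1;" for p in rule["post"])
        return f"if ({guard}) {{{body} }} /* fire {rule['name']} */"

    for rule in rules:
        print(emit_c(rule))
    # -> if (p1 && p2) { p1 = 0; p2 = 0; p3 = 1; } /* fire t1 */
    ```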

  4. Operative team communication during simulated emergencies: Too busy to respond?

    PubMed

    Davis, W Austin; Jones, Seth; Crowell-Kuhnberg, Adrianna M; O'Keeffe, Dara; Boyle, Kelly M; Klainer, Suzanne B; Smink, Douglas S; Yule, Steven

    2017-05-01

    Ineffective communication among members of a multidisciplinary team is associated with operative error and failure to rescue. We sought to measure operative team communication in a simulated emergency using an established communication framework called "closed loop communication." We hypothesized that communication directed at a specific recipient would be more likely to elicit a check back or closed loop response and that this relationship would vary with changes in patients' clinical status. We used the closed loop communication framework to retrospectively code the communication behavior of 7 operative teams (each comprising 2 surgeons, anesthesiologists, and nurses) during response to a simulated, postanesthesia care unit "code blue." We identified call outs, check backs, and closed loop episodes and applied descriptive statistics and a mixed-effects negative binomial regression to describe characteristics of communication in individuals and in different specialties. We coded a total of 662 call outs. The frequency and type of initiation and receipt of communication events varied between clinical specialties (P < .001). Surgeons and nurses initiated fewer and received more communication events than anesthesiologists. For the average participant, directed communication increased the likelihood of check back by at least 50% (P = .021) in periods preceding acute changes in the clinical setting, and exerted no significant effect in periods after acute changes in the clinical situation. Communication patterns vary by specialty during a simulated operative emergency, and the effect of directed communication in eliciting a response depends on the clinical status of the patient. Operative training programs should emphasize the importance of quality communication in the period immediately after an acute change in the clinical setting of a patient and recognize that communication patterns and needs vary between members of multidisciplinary operative teams. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. State parity laws and access to treatment for substance use disorder in the United States: implications for federal parity legislation.

    PubMed

    Wen, Hefei; Cummings, Janet R; Hockenberry, Jason M; Gaydos, Laura M; Druss, Benjamin G

    2013-12-01

    The passage of the 2008 Mental Health Parity and Addiction Equity Act and the 2010 Affordable Care Act incorporated parity for substance use disorder (SUD) treatment into federal legislation. However, prior research provides scant evidence as to whether federal parity legislation can improve access to SUD treatment. To examine the effect of state-level SUD parity laws on state-aggregate SUD treatment rates and to shed light on the impact of the recent federal SUD parity legislation. We conducted a quasi-experimental study using a 2-way (state and year) fixed-effect method. We included all known specialty SUD treatment facilities in the United States and examined treatment rates from October 1, 2000, through March 31, 2008. Our main source of data was the National Survey of Substance Abuse Treatment Services, which provides facility-level information on specialty SUD treatment. State-level SUD parity laws during the study period. State-aggregate SUD treatment rates in (1) all specialty SUD treatment facilities and (2) specialty SUD treatment facilities accepting private insurance. The implementation of any SUD parity law increased the treatment rate by 9% (P < .001) in all specialty SUD treatment facilities and by 15% (P = .02) in facilities accepting private insurance. Full parity and parity only if SUD coverage is offered increased the SUD treatment rate by 13% (P = .02) and 8% (P = .04), respectively, in all facilities and by 21% (P = .03) and 10% (P = .04), respectively, in facilities accepting private insurance. We found a positive effect of the implementation of state SUD parity legislation on access to specialty SUD treatment. Furthermore, the positive association is more pronounced in states with more comprehensive parity laws. Our findings suggest that federal parity legislation holds the potential to improve access to SUD treatment.

  6. Higher gravidity and parity are associated with increased prevalence of metabolic syndrome among rural Bangladeshi women.

    PubMed

    Akter, Shamima; Jesmin, Subrina; Rahman, Md Mizanur; Islam, Md Majedul; Khatun, Most Tanzila; Yamaguchi, Naoto; Akashi, Hidechika; Mizutani, Taro

    2013-01-01

    Parity increases the risk for coronary heart disease; however, its association with metabolic syndrome among women in low-income countries is still unknown. This study investigates the association between parity or gravidity and metabolic syndrome in rural Bangladeshi women. A cross-sectional study was conducted in 1,219 women aged 15-75 years from rural Bangladesh. Metabolic syndrome was defined according to the standard NCEP-ATP III criteria. Logistic regression was used to estimate the association between parity and gravidity and metabolic syndrome, with adjustment for potential confounding variables. Subjects with the highest gravidity (≥4) had 1.66 times higher odds of having metabolic syndrome compared to those in the lowest gravidity (0-1) (P trend = 0.02). A similar association was found between parity and metabolic syndrome (P trend = 0.04), i.e., subjects in the highest parity (≥4) had 1.65 times higher odds of having metabolic syndrome compared to those in the lowest parity (0-1). This positive association of parity and gravidity with metabolic syndrome was confined to pre-menopausal women (P trend < 0.01). Among the components of metabolic syndrome, only high blood pressure showed a positive association with parity and gravidity (P trend = 0.01 and < 0.001, respectively). Neither parity nor gravidity was appreciably associated with the other components of metabolic syndrome. Multiparity or multigravidity may be a risk factor for metabolic syndrome.

  7. Lake water quality mapping from LANDSAT

    NASA Technical Reports Server (NTRS)

    Scherz, J. P.

    1977-01-01

    The lakes in three LANDSAT scenes were mapped by the Bendix MDAS multispectral analysis system. Field checking the maps by three separate individuals revealed approximately 90-95% correct classification for the lake categories selected. Variation between observers was about 5%. From the MDAS color-coded maps, the lake with the worst algae problem was easily located. This lake was closely checked, and a pollution source of 100 cows was found in the springs which fed this lake. The theory, lab work and field work which made it possible for this demonstration project to be a practical lake classification procedure are presented.

  8. Permutation parity machines for neural cryptography.

    PubMed

    Reyes, Oscar Mauricio; Zimmermann, Karl-Heinz

    2010-06-01

    Recently, synchronization was proved for permutation parity machines, multilayer feed-forward neural networks proposed as a binary variant of the tree parity machines. This ability was already used in the case of tree parity machines to introduce a key-exchange protocol. In this paper, a protocol based on permutation parity machines is proposed and its performance against common attacks (simple, geometric, majority and genetic) is studied.
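
    For orientation (a toy sketch, not the authors' protocol): permutation parity machines are a binary variant of the tree parity machine, whose output bit is the product of the signs of its hidden units' weighted sums; two parties synchronize by exchanging this bit on common random inputs.

    ```python
    # Toy tree-parity-machine output rule; the sizes K, N and bound L are illustrative.
    import numpy as np

    def tpm_output(w, x):
        sigma = np.sign(np.sum(w * x, axis=1))  # one +/-1 per hidden unit
        sigma[sigma == 0] = -1                  # break ties deterministically
        return int(np.prod(sigma))              # tau, the exchanged bit

    rng = np.random.default_rng(1)
    w = rng.integers(-3, 4, size=(3, 4))        # K=3 hidden units, N=4 inputs, |w| <= L=3
    x = rng.choice([-1, 1], size=(3, 4))        # public random input
    print(tpm_output(w, x))                     # the +/-1 bit the two parties compare
    ```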

  9. Permutation parity machines for neural cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reyes, Oscar Mauricio; Escuela de Ingenieria Electrica, Electronica y Telecomunicaciones, Universidad Industrial de Santander, Bucaramanga; Zimmermann, Karl-Heinz

    2010-06-15

    Recently, synchronization was proved for permutation parity machines, multilayer feed-forward neural networks proposed as a binary variant of the tree parity machines. This ability was already used in the case of tree parity machines to introduce a key-exchange protocol. In this paper, a protocol based on permutation parity machines is proposed and its performance against common attacks (simple, geometric, majority and genetic) is studied.

  10. Linking high parity and maternal and child mortality: what is the impact of lower health services coverage among higher order births?

    PubMed

    Sonneveldt, Emily; DeCormier Plosky, Willyanne; Stover, John

    2013-01-01

    A number of data sets show that high parity births are associated with higher child mortality than low parity births. The reasons for this relationship are not clear. In this paper we investigate whether high parity is associated with lower coverage of key health interventions that might lead to increased mortality. We used DHS data from 10 high fertility countries to examine the relationship between parity and coverage for 8 child health interventions and 9 maternal health interventions. We also used the LiST model to estimate the effect on maternal and child mortality of the lower coverage associated with high parity births. Our results show a significant relationship between coverage of maternal and child health services and birth order, even when controlling for poverty. The association between coverage and parity for maternal health interventions was more consistently significant across all countries, while for child health interventions there were fewer significant relationships overall and more variation both between and within countries. The differences in coverage between children of parity 3 and those of parity 6 are large enough to account for a 12% difference in the under-five mortality rate and a 22% difference in the maternal mortality ratio in the countries studied. This study shows that coverage of key health interventions is lower for high parity children and the pattern is consistent across countries. This could be a partial explanation for the higher mortality rates associated with high parity. Actions to address this gap could help reduce the higher mortality experienced by high parity births.

  11. Morse Code, Scrabble, and the Alphabet

    ERIC Educational Resources Information Center

    Richardson, Mary; Gabrosek, John; Reischman, Diann; Curtiss, Phyliss

    2004-01-01

    In this paper we describe an interactive activity that illustrates simple linear regression. Students collect data and analyze it using simple linear regression techniques taught in an introductory applied statistics course. The activity is extended to illustrate checks for regression assumptions and regression diagnostics taught in an…

  12. NDEC: A NEA platform for nuclear data testing, verification and benchmarking

    NASA Astrophysics Data System (ADS)

    Díez, C. J.; Michel-Sendis, F.; Cabellos, O.; Bossant, M.; Soppera, N.

    2017-09-01

    The selection, testing, verification and benchmarking of evaluated nuclear data consists, in practice, of putting an evaluated file through a number of checking steps in which different computational codes verify that the file and the data it contains comply with different requirements. These requirements range from format compliance to good performance in application cases, while at the same time physical constraints and agreement with experimental data are verified. At NEA, the NDEC (Nuclear Data Evaluation Cycle) platform aims at providing, in a user-friendly interface, a thorough diagnosis of the quality of a submitted evaluated nuclear data file. This diagnosis is based on the results of the different computational codes and routines which carry out the mentioned verifications, tests and checks. NDEC also seeks synergies with other existing NEA tools and databases, such as JANIS, DICE or NDaST, incorporating them into its working scheme. Hence, this paper presents NDEC, its current development status and its usage in the JEFF nuclear data project.

  13. Formal Validation of Fault Management Design Solutions

    NASA Technical Reports Server (NTRS)

    Gibson, Corrina; Karban, Robert; Andolfato, Luigi; Day, John

    2013-01-01

    The work presented in this paper describes an approach used to develop SysML modeling patterns to express the behavior of fault protection, test the model's logic by performing fault injection simulations, and verify the fault protection system's logical design via model checking. A representative example, using a subset of the fault protection design for the Soil Moisture Active-Passive (SMAP) system, was modeled with SysML State Machines and JavaScript as Action Language. The SysML model captures interactions between relevant system components and system behavior abstractions (mode managers, error monitors, fault protection engine, and devices/switches). Development of a method to implement verifiable and lightweight executable fault protection models enables future missions to have access to larger fault test domains and verifiable design patterns. A tool-chain to transform the SysML model to jpf-Statechart compliant Java code and then verify the generated code via model checking was established. Conclusions and lessons learned from this work are also described, as well as potential avenues for further research and development.
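
    A minimal sketch of the kind of executable mode-manager abstraction described (state and event names are hypothetical, not the SMAP design): fault protection behavior reduced to an explicit transition table that a model checker or fault-injection harness can drive.

    ```python
    # Toy fault-protection mode manager as an explicit state machine.
    TRANSITIONS = {
        ("NOMINAL", "error_detected"): "SAFE",   # error monitor trips
        ("SAFE", "fault_cleared"): "NOMINAL",    # recovery completes
    }

    def step(state, event):
        # Unlisted (state, event) pairs leave the mode unchanged.
        return TRANSITIONS.get((state, event), state)

    assert step("NOMINAL", "error_detected") == "SAFE"
    assert step("SAFE", "heartbeat") == "SAFE"   # irrelevant events are ignored
    ```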

  14. Monitoring Java Programs with Java PathExplorer

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Rosu, Grigore; Clancy, Daniel (Technical Monitor)

    2001-01-01

    We present recent work on the development of Java PathExplorer (JPAX), a tool for monitoring the execution of Java programs. JPAX can be used during program testing to gain increased information about program executions, and can potentially furthermore be applied during operation to survey safety-critical systems. The tool facilitates automated instrumentation of a program's byte code, which will then emit events to an observer during its execution. The observer checks the events against user-provided high-level requirement specifications, for example temporal logic formulae, and against lower-level error detection procedures, for example concurrency-related algorithms such as deadlock and data race detection. High-level requirement specifications together with their underlying logics are defined in the Maude rewriting logic, and can then either be directly checked using the Maude rewriting engine, or be first translated to efficient data structures and then checked in Java.
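
    In the same spirit (a toy monitor with a made-up property, not JPAX's Maude-based machinery): a runtime observer consumes the event stream emitted by the instrumented program and checks it against a requirement, here "every 'open' is eventually followed by a matching 'close'" over a finite trace.

    ```python
    # Minimal runtime-verification monitor over an emitted event trace.
    def monitor(trace):
        pending = 0
        for event in trace:
            if event == "open":
                pending += 1
            elif event == "close" and pending > 0:
                pending -= 1
        return pending == 0  # True iff the property held on this finite trace

    assert monitor(["open", "write", "close"])
    assert not monitor(["open", "write"])
    ```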

  15. [The DRG responsible physician in trauma and orthopedic surgery. Surgeon, encoder, and link to medical controlling].

    PubMed

    Ruffing, T; Huchzermeier, P; Muhm, M; Winkler, H

    2014-05-01

    Precise coding is an essential requirement in order to generate a valid DRG. The aim of our study was to evaluate the quality of the initial coding of surgical procedures, as well as to introduce our "hybrid model" of a surgical specialist supervising medical coding and a nonphysician for case auditing. The department's DRG responsible physician, as a surgical specialist, has profound knowledge of both surgery and DRG coding. At a Level 1 hospital, 1000 coded cases of surgical procedures were checked. In our department, this model of a DRG responsible physician who is both surgeon and encoder has proven itself for many years. The initial surgical DRG coding had to be corrected by the DRG responsible physician in 42.2% of cases. On average, one hour per working day was necessary. The implementation of a DRG responsible physician is a simple, effective way to connect medical and business expertise without interface problems. Permanent feedback promotes both medical and economic sensitivity, improving coding quality.

  16. An international survey of building energy codes and their implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Meredydd; Roshchanka, Volha; Graham, Peter

    Buildings are key to low-carbon development everywhere, and many countries have introduced building energy codes to improve energy efficiency in buildings. Yet, building energy codes can only deliver results when the codes are implemented. For this reason, studies of building energy codes need to consider implementation of building energy codes in a consistent and comprehensive way. This research identifies elements and practices in implementing building energy codes, covering codes in 22 countries that account for 70% of global energy demand from buildings. Access to benefits of building energy codes depends on comprehensive coverage of buildings by type, age, size, and geographic location; an implementation framework that involves a certified agency to inspect construction at critical stages; and independently tested, rated, and labeled building energy materials. Training and supporting tools are another element of successful code implementation, and their role is growing in importance, given the increasing flexibility and complexity of building energy codes. Some countries have also introduced compliance evaluation and compliance checking protocols to improve implementation. This article provides examples of practices that countries have adopted to assist with implementation of building energy codes.

  17. The influence of state-level policy environments on the activation of the Medicaid SBIRT reimbursement codes.

    PubMed

    Hinde, Jesse; Bray, Jeremy; Kaiser, David; Mallonee, Erin

    2017-02-01

    To examine how institutional constraints, comprising federal actions and states' substance abuse policy environments, influence states' decisions to activate Medicaid reimbursement codes for screening and brief intervention for risky substance use in the United States. A discrete-time duration model was used to estimate the effect of institutional constraints on the likelihood of activating the Medicaid reimbursement codes. Primary constraints included federal Screening, Brief Intervention and Referral to Treatment (SBIRT) grant funding, substance abuse priority, economic climate, political climate and interstate diffusion. Study data came from publicly available secondary data sources. Federal SBIRT grant funding did not significantly affect the likelihood of activation (P = 0.628). A $1 increase in per-capita block grant funding was associated with a 10-percentage-point reduction in the likelihood of activation (P = 0.003), and a $1 increase in per-capita state substance use disorder expenditures was associated with a 2-percentage-point increase in the likelihood of activation (P = 0.004). States with enacted parity laws (P = 0.016) and a Democratic-controlled state government were also more likely to activate the codes. In the United States, the determinants of state activation of Medicaid Screening, Brief Intervention and Referral to Treatment (SBIRT) reimbursement codes are complex, and include more than financial considerations. Federal block grant funding is a strong disincentive to activating the SBIRT reimbursement codes, while more direct federal SBIRT grant funding has no detectable effects. © 2017 Society for the Study of Addiction.

  18. Higher Gravidity and Parity Are Associated with Increased Prevalence of Metabolic Syndrome among Rural Bangladeshi Women

    PubMed Central

    Akter, Shamima; Jesmin, Subrina; Rahman, Md. Mizanur; Islam, Md. Majedul; Khatun, Most. Tanzila; Yamaguchi, Naoto; Akashi, Hidechika; Mizutani, Taro

    2013-01-01

    Background Parity increases the risk for coronary heart disease; however, its association with metabolic syndrome among women in low-income countries is still unknown. Objective This study investigates the association between parity or gravidity and metabolic syndrome in rural Bangladeshi women. Methods A cross-sectional study was conducted in 1,219 women aged 15–75 years from rural Bangladesh. Metabolic syndrome was defined according to the standard NCEP-ATP III criteria. Logistic regression was used to estimate the association between parity and gravidity and metabolic syndrome, with adjustment for potential confounding variables. Results Subjects with the highest gravidity (≥4) had 1.66 times higher odds of having metabolic syndrome compared to those in the lowest gravidity (0-1) (P trend = 0.02). A similar association was found between parity and metabolic syndrome (P trend = 0.04), i.e., subjects in the highest parity (≥4) had 1.65 times higher odds of having metabolic syndrome compared to those in the lowest parity (0-1). This positive association of parity and gravidity with metabolic syndrome was confined to pre-menopausal women (P trend < 0.01). Among the components of metabolic syndrome, only high blood pressure showed a positive association with parity and gravidity (P trend = 0.01 and < 0.001, respectively). Neither parity nor gravidity was appreciably associated with the other components of metabolic syndrome. Conclusions Multiparity or multigravidity may be a risk factor for metabolic syndrome. PMID:23936302

  19. The relationship between parity and overweight varies with household wealth and national development.

    PubMed

    Kim, Sonia A; Yount, Kathryn M; Ramakrishnan, Usha; Martorell, Reynaldo

    2007-02-01

    Recent studies support a positive relationship between parity and overweight among women of developing countries; however, it is unclear whether these effects vary by household wealth and national development. Our objective was to determine whether the association between parity and overweight [body mass index (BMI) ≥ 25 kg/m²] in women living in developing countries varies with levels of national human development and/or household wealth. We used data from 28 nationally representative, cross-sectional surveys conducted between 1996 and 2003 (n = 275 704 women, 15-49 years). The relationship between parity and overweight was modelled using logistic regression, controlling for several biological and sociodemographic factors and national development, as reflected by the United Nations' Human Development Index. We also modelled the interaction between parity and national development, and the three-way interaction between parity, household wealth and national development. Parity had a weak, positive association with overweight, which varied by household wealth and national development. Among the poorest women and women in the second tertile of household wealth, parity was positively related to overweight only in the most developed countries. Among the wealthiest women, parity was positively related to overweight regardless of the level of national development. As development increases, the burden of parity-related overweight shifts to include poor as well as wealthy women. In the least-developed countries, programmes to prevent parity-related overweight should target wealthy women, whereas such programmes should be provided to all women in more developed countries.

  20. Investigating Sociodemographic Disparities in Cancer Risk Using Web-Based Informatics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Hong-Jun; Tourassi, Georgia

    Cancer health disparities due to demographic and socioeconomic factors are an area of great interest in the epidemiological community. Adjusting for such factors is important when developing cancer risk models. However, for digital epidemiology studies relying on online sources such information is not readily available. This paper presents a novel method for extracting demographic and socioeconomic information from openly available online obituaries. The method relies on tailored language processing rules and a probabilistic scheme to map subjects’ occupation history to the occupation classification codes and related earnings provided by the U.S. Census Bureau. Using this information, a case-control study is executed fully in silico to investigate how age, gender, parity, and income level impact breast and lung cancer risk. Based on 48,368 online obituaries (4,643 for breast cancer, 6,274 for lung cancer, and 37,451 cancer-free) collected automatically and a generalized cancer risk model, our study shows strong association between age, parity, and socioeconomic status and cancer risk. Although for breast cancer the observed trends are very consistent with traditional epidemiological studies, some inconsistency is observed for lung cancer with respect to socioeconomic status.
