Sample records for regular LDPC codes

  1. Construction of a new regular LDPC code for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Tong, Qing-zhen; Xu, Liang; Huang, Sheng

    2013-05-01

    A novel construction method for the check matrix of regular low-density parity-check (LDPC) codes is proposed, and a regular systematically constructed Gallager (SCG) LDPC(3969,3720) code with a code rate of 93.7% and a redundancy of 6.69% is constructed. The simulation results show that, at a bit error rate (BER) of 10^-8, the novel SCG-LDPC(3969,3720) code improves the net coding gain (NCG) by about 1.93 dB and the distance from the Shannon limit by about 0.98 dB compared with the classic RS(255,239) code of ITU-T G.975 and the LDPC(32640,30592) code of ITU-T G.975.1, which have the same code rate of 93.7% and the same redundancy of 6.69%. The proposed regular SCG-LDPC(3969,3720) code therefore has excellent performance and is well suited to high-speed long-haul optical transmission systems.
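
    As a quick arithmetic check of the parameters quoted above (an illustrative sketch, not from the paper), the rate and redundancy of an (n, k) block code follow directly from its dimensions:

      # Verify the quoted parameters of the SCG-LDPC(3969, 3720) code.
      n, k = 3969, 3720

      rate = k / n                 # code rate
      redundancy = (n - k) / k     # overhead relative to the information bits

      print(f"rate       = {rate:.1%}")        # 93.7%
      print(f"redundancy = {redundancy:.2%}")  # 6.69%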

  2. A novel construction method of QC-LDPC codes based on CRT for optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-05-01

    A novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes based on the Chinese remainder theorem (CRT) is proposed. The method increases the code length without reducing the girth and greatly raises the code rate, so high-rate codes are easy to construct. The simulation results show that, at a bit error rate (BER) of 10^-7, the net coding gain (NCG) of the regular QC-LDPC(4851,4546) code is 2.06 dB, 1.36 dB, 0.53 dB and 0.31 dB higher, respectively, than those of the classic RS(255,239) code of ITU-T G.975, the LDPC(32640,30592) code of ITU-T G.975.1, the QC-LDPC(3664,3436) code constructed by the improved CRT-based combining method, and the irregular QC-LDPC(3843,3603) code constructed from the Galois field (GF(q)) multiplicative group; all five codes have the same code rate of 0.937. The regular QC-LDPC(4851,4546) code constructed by the proposed method therefore has excellent error-correction performance and is well suited to optical transmission systems.
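
    The CRT step underlying such combining constructions can be sketched as follows (a minimal illustration with made-up exponent values; the paper's actual construction is not reproduced here). Two circulant shift exponents taken modulo coprime circulant sizes m1 and m2 merge into a single exponent modulo m1*m2:

      from math import gcd

      def crt_combine(e1, m1, e2, m2):
          # Chinese remainder step: the unique e in [0, m1*m2) with
          # e % m1 == e1 and e % m2 == e2 (m1 and m2 must be coprime).
          assert gcd(m1, m2) == 1
          t = (e2 - e1) * pow(m1, -1, m2) % m2
          return e1 + m1 * t

      # Shift exponents 3 (mod 5) and 4 (mod 7) merge into one exponent
      # for a circulant of size 35:
      e = crt_combine(3, 5, 4, 7)
      print(e, e % 5, e % 7)   # -> 18 3 4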

  3. Low-density parity-check codes for volume holographic memory systems.

    PubMed

    Pishro-Nik, Hossein; Rahnavard, Nazanin; Ha, Jeongseok; Fekri, Faramarz; Adibi, Ali

    2003-02-10

    We investigate the application of low-density parity-check (LDPC) codes in volume holographic memory (VHM) systems. We show that a carefully designed irregular LDPC code performs very well in VHM systems. We optimize high-rate LDPC codes for the nonuniform error pattern in holographic memories to substantially reduce the bit error rate. Prior knowledge of the noise distribution is used both for designing and for decoding the LDPC codes. We show that these codes outperform Reed-Solomon (RS) codes and their regular LDPC counterparts. Our simulations show that the maximum storage capacity of holographic memories can be increased by more than 50 percent by using irregular LDPC codes with soft-decision decoding instead of the conventionally employed RS codes with hard-decision decoding. The performance of these LDPC codes is close to the information-theoretic capacity.
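
    The use of prior noise knowledge in decoding can be illustrated by position-dependent LLR initialization (a hypothetical sketch; the paper's channel model is richer). For a binary channel whose crossover probability varies across the holographic page:

      import numpy as np

      def init_llrs(received_bits, p_err):
          # Initial LLR log[P(bit=0|y)/P(bit=1|y)] for a binary channel whose
          # crossover probability varies per position (known in advance).
          reliability = np.log((1.0 - p_err) / p_err)
          return np.where(received_bits == 0, reliability, -reliability)

      # Hypothetical profile: pixels near the page border are noisier.
      y = np.array([0, 1, 0, 1])
      p = np.array([0.01, 0.01, 0.10, 0.10])
      print(init_llrs(y, p))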

  4. Protograph based LDPC codes with minimum distance linearly growing with block size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    We propose several LDPC code constructions that simultaneously achieve good threshold and error-floor performance. By considering ensemble-average weight enumerators, minimum distance is shown to grow linearly with block size (as it does for regular codes with variable-node degree at least 3). Our constructions are based on projected-graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes: a code with too few such nodes tends to have an iterative decoding threshold far from the capacity threshold, while a code with too many tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, whose minimum distance grows linearly with block size, are better than those of regular LDPC codes. Furthermore, a family of low- to high-rate codes with thresholds that adhere closely to their respective channel capacity thresholds is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.

  5. A novel QC-LDPC code based on the finite field multiplicative group for optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Xu, Liang; Tong, Qing-zhen

    2013-09-01

    A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) code is proposed based on the finite field multiplicative group, which has easier construction, more flexible code-length code-rate adjustment and lower encoding/decoding complexity. Moreover, a regular QC-LDPC(5334,4962) code is constructed. The simulation results show that the constructed QC-LDPC(5334,4962) code can gain better error correction performance under the condition of the additive white Gaussian noise (AWGN) channel with iterative decoding sum-product algorithm (SPA). At the bit error rate (BER) of 10-6, the net coding gain (NCG) of the constructed QC-LDPC(5334,4962) code is 1.8 dB, 0.9 dB and 0.2 dB more than that of the classic RS(255,239) code in ITU-T G.975, the LDPC(32640,30592) code in ITU-T G.975.1 and the SCG-LDPC(3969,3720) code constructed by the random method, respectively. So it is more suitable for optical communication systems.

  6. Protograph LDPC Codes for the Erasure Channel

    NASA Technical Reports Server (NTRS)

    Pollara, Fabrizio; Dolinar, Samuel J.; Divsalar, Dariush

    2006-01-01

    This viewgraph presentation reviews the use of protograph low-density parity-check (LDPC) codes for erasure channels. A protograph is a Tanner graph with a relatively small number of nodes; a "copy-and-permute" operation can be applied to the protograph to obtain larger derived graphs of various sizes. For very high code rates and short block sizes, a low asymptotic threshold criterion is not the best approach to designing LDPC codes; simple protographs with much regularity and low maximum node degrees appear to be the best choices. Quantized-rateless protograph LDPC codes can be built by careful design of the protograph such that multiple puncturing patterns still permit message-passing decoding to proceed.
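
    For intuition on the asymptotic-threshold analysis mentioned above, the sketch below runs edge-perspective density evolution for the regular (3,6) ensemble on the binary erasure channel and bisects for its decoding threshold; this is a generic textbook computation, not the presentation's own code:

      def residual_erasure(eps, dv=3, dc=6, iters=5000):
          # Edge-perspective density evolution for a regular (dv, dc)
          # LDPC ensemble on the binary erasure channel.
          x = eps
          for _ in range(iters):
              x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
          return x

      # Bisect for the largest channel erasure rate that still decodes:
      lo, hi = 0.0, 1.0
      for _ in range(40):
          mid = (lo + hi) / 2.0
          lo, hi = (mid, hi) if residual_erasure(mid) < 1e-10 else (lo, mid)
      print(f"(3,6)-regular BEC threshold ~ {lo:.4f}")   # ~0.4294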

  7. LDPC Codes with Minimum Distance Proportional to Block Size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy

    2009-01-01

    Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors; however, their minimum Hamming distances do not grow linearly with code-block sizes. Codes that do have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable-node degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code-block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes: a code having too few such nodes tends to have an iterative decoding threshold far from the capacity threshold, while a code having too many tends not to exhibit a minimum distance proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel-capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. An LDPC code of any size can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides field-programmable gate array (FPGA) hardware performance simulations for this code family, along with the minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) needed to achieve zero error rates as the code-block size goes to infinity for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.

  8. Low complexity Reed-Solomon-based low-density parity-check design for software defined optical transmission system based on adaptive puncturing decoding algorithm

    NASA Astrophysics Data System (ADS)

    Pan, Xiaolong; Liu, Bo; Zheng, Jianglong; Tian, Qinghua

    2016-08-01

    We propose and demonstrate a low-complexity Reed-Solomon-based low-density parity-check (RS-LDPC) code with an adaptive puncturing decoding algorithm for elastic optical transmission systems. Part of the received code bits and the corresponding columns of the parity-check matrix can be punctured during decoding, adapting the parity-check matrix to reduce computational complexity. The results show that the complexity of the proposed decoding algorithm is reduced by 30% compared with a regular RS-LDPC system, and the optimized code rate of the RS-LDPC code is obtained after five iterations.

  9. Memory-efficient decoding of LDPC codes

    NASA Technical Reports Server (NTRS)

    Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon

    2005-01-01

    We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief-propagation algorithm, our simulations show that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer with less than 0.1 dB quantization loss.
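
    A baseline version of the mutual-information criterion can be sketched as follows: a Monte-Carlo estimate of the mutual information between a BPSK bit and its quantized channel LLR, here with a plain uniform 3-bit quantizer (the paper optimizes a non-uniform one; all names and parameters below are illustrative):

      import numpy as np

      rng = np.random.default_rng(1)

      def quantized_mutual_info(snr_db, n_bits=3, clip=8.0, n=200_000):
          # Monte-Carlo estimate of I(X; Q(L)): X is a BPSK bit over AWGN
          # with SNR = 1/sigma^2, L its channel LLR, Q a uniform quantizer.
          sigma = 10.0 ** (-snr_db / 20.0)
          x = rng.integers(0, 2, n)                       # source bits
          y = (1.0 - 2.0 * x) + rng.normal(0.0, sigma, n)
          llr = 2.0 * y / sigma**2                        # BPSK-AWGN LLR
          levels = 2 ** n_bits
          q = np.clip(((llr + clip) / (2 * clip) * levels).astype(int),
                      0, levels - 1)
          def entropy(p):
              p = p[p > 0]
              return -np.sum(p * np.log2(p))
          h_q = entropy(np.bincount(q, minlength=levels) / n)
          h_q_given_x = sum(
              np.mean(x == b) *
              entropy(np.bincount(q[x == b], minlength=levels) / np.sum(x == b))
              for b in (0, 1))
          return h_q - h_q_given_x

      print(f"I(X;Q) at 1 dB, 3-bit uniform quantizer: "
            f"{quantized_mutual_info(1.0):.3f} bits")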

  10. An efficient decoding for low density parity check codes

    NASA Astrophysics Data System (ADS)

    Zhao, Ling; Zhang, Xiaolin; Zhu, Manjie

    2009-12-01

    Low-density parity-check (LDPC) codes are a class of forward-error-correction codes. They are among the best-known codes capable of achieving low bit error rates (BER) approaching Shannon's capacity limit. LDPC codes have been adopted by the European digital video broadcasting standard DVB-S2 and proposed for the IEEE 802.16 fixed and mobile broadband wireless-access standard, and the Consultative Committee for Space Data Systems (CCSDS) has recommended LDPC codes for deep-space and near-Earth communications. LDPC codes will thus be widely used in wired and wireless communication, magnetic recording, optical networking, DVB, and other fields in the near future, so efficient hardware implementation is of great interest. This paper presents an efficient partially parallel decoder architecture suited for quasi-cyclic (QC) LDPC codes, using the belief propagation algorithm for decoding. Algorithmic transformation and architectural-level optimization are incorporated to reduce the critical path. First, the parity-check matrix of the LDPC code is analyzed to find the relationship between the row weight and the column weight. The sharing level of the check-node updating units (CNUs) and the variable-node updating units (VNUs) is then determined according to this relationship. The CNUs and VNUs are then rearranged and divided into several smaller parts; with the help of some auxiliary logic circuits, these smaller parts can be grouped into CNUs during check-node update processing and into VNUs during variable-node update processing. The smaller parts are called node update kernel units (NKUs) and the auxiliary logic circuits are called node update auxiliary units (NAUs). With the NAUs' help, the two steps of the iteration are completed by the NKUs, which brings a large reduction in hardware resources. Meanwhile, efficient techniques have been developed to reduce the computation delay of the node-processing units and to minimize the hardware overhead of parallel processing. The method applies not only to regular LDPC codes but also to irregular ones. Based on the proposed architecture, a (7493,6096) irregular QC-LDPC decoder is described in the Verilog hardware description language and implemented on an Altera Stratix II EP2S130 field-programmable gate array (FPGA). The implementation results show that over 20% of the logic core size can be saved compared with conventional partially parallel decoder architectures, without any performance degradation. At a decoding clock of 100 MHz, the proposed decoder achieves a maximum (source data) decoding throughput of 133 Mb/s at 18 iterations.
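
    The CNU and VNU operations discussed above implement the standard sum-product update rules, which can be sketched in a few lines (a generic software illustration, unrelated to the paper's hardware architecture):

      import numpy as np

      def check_node_update(incoming):
          # Sum-product check-node rule: the outgoing LLR on each edge is the
          # "tanh rule" applied to all *other* incoming edges; edges with an
          # exactly-zero input get zero output here (rare corner case).
          t = np.tanh(np.clip(incoming, -30.0, 30.0) / 2.0)
          with np.errstate(divide="ignore", invalid="ignore"):
              loo = np.where(t != 0.0, np.prod(t) / t, 0.0)  # leave-one-out
          return 2.0 * np.arctanh(np.clip(loo, -0.999999, 0.999999))

      def variable_node_update(channel_llr, incoming):
          # Sum-product variable-node rule: channel LLR plus all other
          # incoming check messages (computed as leave-one-out sums).
          return channel_llr + np.sum(incoming) - incoming

      print(check_node_update(np.array([2.0, -1.5, 0.8])))
      print(variable_node_update(1.0, np.array([0.5, -0.2])))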

  11. Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding

    NASA Astrophysics Data System (ADS)

    Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.

    2016-03-01

    In this paper, we investigate the joint design of quasi-cyclic low-density parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and we describe two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles of both type I and type II are eliminated. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation effectively combines the cooperation gain and the channel coding gain, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of coded cooperation employing the jointly designed QC-LDPC codes is better than that of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.

  12. A novel concatenated code based on the improved SCG-LDPC code for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Xie, Ya; Wang, Lin; Huang, Sheng; Wang, Yong

    2013-01-01

    Based on optimization and improvement of the construction method for the systematically constructed Gallager SCG(4, k) code, a novel SCG low-density parity-check (SCG-LDPC)(3969,3720) code suitable for optical transmission systems is constructed. A novel SCG-LDPC(6561,6240) code with a code rate of 95.1% is then constructed by increasing the length of the SCG-LDPC(3969,3720) code, so that the code rate better meets the high requirements of optical transmission systems. A novel concatenated code with a code rate of 94.5% is then constructed by concatenating the SCG-LDPC(6561,6240) code with a BCH(127,120) code. The simulation results and analyses show that, at a bit error rate (BER) of 10^-7, the net coding gain (NCG) of the BCH(127,120)+SCG-LDPC(6561,6240) concatenated code is 2.28 dB and 0.48 dB higher than those of the classic RS(255,239) code and the SCG-LDPC(6561,6240) code, respectively.

  13. A novel construction scheme of QC-LDPC codes based on the RU algorithm for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-03-01

    A novel lower-complexity construction scheme for quasi-cyclic low-density parity-check (QC-LDPC) codes for optical transmission systems is proposed, based on the structure of the parity-check matrix used by the Richardson-Urbanke (RU) algorithm. A novel irregular QC-LDPC(4288,4020) code with a high code rate of 0.937 is constructed by this scheme. The simulation analyses show that, at a bit error rate (BER) of 10^-6, the net coding gain (NCG) of the novel irregular QC-LDPC(4288,4020) code is 2.08 dB, 1.25 dB and 0.29 dB higher, respectively, than those of the classic RS(255,239) code, the LDPC(32640,30592) code and the irregular QC-LDPC(3843,3603) code. The irregular QC-LDPC(4288,4020) code also has lower encoding/decoding complexity than the LDPC(32640,30592) and irregular QC-LDPC(3843,3603) codes, making it well suited to the growing requirements of high-speed optical transmission systems.

  14. On the optimum signal constellation design for high-speed optical transport networks.

    PubMed

    Liu, Tao; Djordjevic, Ivan B

    2012-08-27

    In this paper, we first describe a signal constellation design algorithm that is optimum in the MMSE sense, called MMSE-OSCD, for a channel-capacity-achieving source distribution. Secondly, we introduce a feedback-channel-capacity-inspired optimum signal constellation design (FCC-OSCD) that further improves on MMSE-OSCD, motivated by the fact that feedback channel capacity is higher than that of systems without feedback. The constellations obtained by FCC-OSCD are, however, OSNR dependent. The optimization is performed jointly with regular quasi-cyclic low-density parity-check (LDPC) code design. The resulting coded-modulation scheme, in combination with polarization multiplexing, is suitable as an enabling technology for both 400 Gb/s and multi-Tb/s optical transport. Using a large-girth LDPC code, we demonstrate by Monte Carlo simulations that a 32-ary signal constellation obtained by FCC-OSCD outperforms the previously proposed optimized 32-ary CIPQ signal constellation by 0.8 dB at a BER of 10^-7. The LDPC-coded 16-ary FCC-OSCD, in turn, outperforms 16-QAM by 1.15 dB at the same BER.

  15. Capacity achieving nonbinary LDPC coded non-uniform shaping modulation for adaptive optical communications.

    PubMed

    Lin, Changyu; Zou, Ding; Liu, Tao; Djordjevic, Ivan B

    2016-08-08

    A mutual-information-inspired nonbinary coded modulation design with non-uniform shaping is proposed. Instead of the traditional power-of-two signal constellation sizes, we design 5-QAM, 7-QAM and 9-QAM constellations, which can be used in adaptive optical networks. The non-uniform shaping and the LDPC code rate are considered jointly in the design, which yields a better-performing scheme at the same SNR. A matched nonbinary (NB) LDPC code is used for this scheme, further improving the coding gain and the overall performance. We analyze both the coding performance and the system SNR performance. We show that the proposed NB-LDPC-coded 9-QAM has more than 2 dB gain in symbol SNR compared with traditional LDPC-coded star-8-QAM, while the proposed NB-LDPC-coded 5-QAM and 7-QAM perform even better than LDPC-coded QPSK.

  16. Evaluation of four-dimensional nonbinary LDPC-coded modulation for next-generation long-haul optical transport networks.

    PubMed

    Zhang, Yequn; Arabaci, Murat; Djordjevic, Ivan B

    2012-04-09

    Leveraging the advanced coherent optical communication technologies, this paper explores the feasibility of using four-dimensional (4D) nonbinary LDPC-coded modulation (4D-NB-LDPC-CM) schemes for long-haul transmission in future optical transport networks. In contrast to our previous works on 4D-NB-LDPC-CM which considered amplified spontaneous emission (ASE) noise as the dominant impairment, this paper undertakes transmission in a more realistic optical fiber transmission environment, taking into account impairments due to dispersion effects, nonlinear phase noise, Kerr nonlinearities, and stimulated Raman scattering in addition to ASE noise. We first reveal the advantages of using 4D modulation formats in LDPC-coded modulation instead of conventional two-dimensional (2D) modulation formats used with polarization-division multiplexing (PDM). Then we demonstrate that 4D LDPC-coded modulation schemes with nonbinary LDPC component codes significantly outperform not only their conventional PDM-2D counterparts but also the corresponding 4D bit-interleaved LDPC-coded modulation (4D-BI-LDPC-CM) schemes, which employ binary LDPC codes as component codes. We also show that the transmission reach improvement offered by the 4D-NB-LDPC-CM over 4D-BI-LDPC-CM increases as the underlying constellation size and hence the spectral efficiency of transmission increases. Our results suggest that 4D-NB-LDPC-CM can be an excellent candidate for long-haul transmission in next-generation optical networks.

  17. A good performance watermarking LDPC code used in high-speed optical fiber communication system

    NASA Astrophysics Data System (ADS)

    Zhang, Wenbo; Li, Chao; Zhang, Xiaoguang; Xi, Lixia; Tang, Xianfeng; He, Wenxue

    2015-07-01

    A watermarking LDPC code, a strategy designed to improve the performance of the traditional LDPC code, is introduced. By inserting pre-defined watermarking bits into the original LDPC code, a more accurate estimate of the noise level in the fiber channel can be obtained and used to modify the probability distribution function (PDF) employed in the initialization of the belief propagation (BP) decoding algorithm. The algorithm was tested in a 128 Gb/s PDM-DQPSK optical communication system, and the results showed that the watermarking LDPC code has better tolerance to polarization mode dispersion (PMD) and nonlinearity than the traditional LDPC code. Moreover, at a cost of about 2.4% of the redundancy for watermarking bits, the decoding efficiency of the watermarking LDPC code is about twice that of the traditional one.

  18. Ensemble Weight Enumerators for Protograph LDPC Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush

    2006-01-01

    Recently, LDPC codes with projected-graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained, and asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble-average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear-minimum-distance property is sensitive to the proportion of degree-2 variable nodes. The derived ensemble weight enumerators show that the linear-minimum-distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.

  19. Design of ACM system based on non-greedy punctured LDPC codes

    NASA Astrophysics Data System (ADS)

    Lu, Zijun; Jiang, Zihong; Zhou, Lin; He, Yucheng

    2017-08-01

    In this paper, an adaptive coded modulation (ACM) scheme based on rate-compatible LDPC (RC-LDPC) codes is designed. The RC-LDPC codes are constructed by a non-greedy puncturing method that shows good performance in the high-code-rate region. Moreover, an incremental-redundancy scheme for the LDPC-based ACM system over the AWGN channel is proposed, in which the code rate varies from 2/3 to 5/6 and the complexity of the ACM system is lowered. Simulations show that the proposed ACM system obtains increasingly significant coding gains together with higher throughput.
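
    Rate adaptation by puncturing can be sketched as follows (hypothetical mother-code parameters; the paper's non-greedy pattern selection is not reproduced). Puncturing withholds some parity bits from transmission, raising the rate, and the decoder initializes the punctured positions with zero LLRs:

      import numpy as np

      def punctured_rate(n, k, n_punct):
          # Rate of an (n, k) mother code after puncturing n_punct parity bits.
          return k / (n - n_punct)

      # Hypothetical rate-2/3 mother code, punctured up to rate 5/6:
      n, k = 1800, 1200
      for target in (3/4, 5/6):
          n_p = round(n - k / target)
          print(f"puncture {n_p:3d} bits -> rate {punctured_rate(n, k, n_p):.3f}")

      def decoder_llrs(received_llrs, punct_idx, n):
          # Punctured positions carry no channel observation, so the decoder
          # starts them at LLR 0; received_llrs covers the kept positions.
          llrs = np.zeros(n)
          kept = np.setdiff1d(np.arange(n), punct_idx)
          llrs[kept] = received_llrs
          return llrs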

  20. FPGA-based rate-adaptive LDPC-coded modulation for the next generation of optical communication systems.

    PubMed

    Zou, Ding; Djordjevic, Ivan B

    2016-09-05

    In this paper, we propose a rate-adaptive FEC scheme based on LDPC codes, together with its software-reconfigurable unified FPGA architecture. By FPGA emulation, we demonstrate that the proposed class of rate-adaptive LDPC codes based on shortening, with overheads from 25% to 42.9%, provides coding gains ranging from 13.08 dB to 14.28 dB at a post-FEC BER of 10^-15 for BPSK transmission. In addition, the proposed rate-adaptive LDPC coding has been demonstrated in combination with higher-order modulations, including QPSK, 8-QAM, 16-QAM, 32-QAM and 64-QAM, covering a wide range of signal-to-noise ratios. Furthermore, we apply unequal error protection by employing different LDPC codes on different bits in 16-QAM and 64-QAM, which yields an additional 0.5 dB gain compared with conventional LDPC-coded modulation at the same code rate.
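
    The arithmetic of shortening-based rate adaptation can be illustrated as follows (the mother-code dimensions are made up; only the quoted overhead range is matched). Shortening fixes s information bits to zero at the encoder and omits them from transmission, lowering the rate and raising the overhead:

      def shortened_params(n, k, s):
          # Shortening fixes s info bits to zero at the encoder and omits
          # them from transmission; the decoder knows them perfectly.
          rate = (k - s) / (n - s)
          overhead = (n - k) / (k - s)
          return rate, overhead

      # Hypothetical mother code with 25% overhead (rate 0.8):
      n, k = 30000, 24000
      for s in (0, 6000, 10000):
          rate, oh = shortened_params(n, k, s)
          print(f"s={s:5d}: rate={rate:.3f}, overhead={oh:.1%}")
      # prints overheads 25.0%, 33.3%, 42.9% -- the range quoted above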

  21. LDPC coded OFDM over the atmospheric turbulence channel.

    PubMed

    Djordjevic, Ivan B; Vasic, Bane; Neifeld, Mark A

    2007-05-14

    Low-density parity-check (LDPC) coded optical orthogonal frequency division multiplexing (OFDM) is shown to significantly outperform LDPC-coded on-off keying (OOK) over the atmospheric turbulence channel in terms of both coding gain and spectral efficiency. In the strong-turbulence regime, at a bit error rate of 10^-5, the coding gain of the LDPC-coded single-sideband unclipped-OFDM system with 64 subcarriers exceeds that of the LDPC-coded OOK system by 20.2 dB for quadrature phase-shift keying (QPSK) and by 23.4 dB for binary phase-shift keying (BPSK).

  22. Polarization-multiplexed rate-adaptive non-binary-quasi-cyclic-LDPC-coded multilevel modulation with coherent detection for optical transport networks.

    PubMed

    Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M

    2010-02-01

    In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose a rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared with the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency through symbol-level rather than bit-level processing, but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper shows, the proposed NB-LDPC-CM scheme is better suited than its prior-art binary counterpart to the needs of future OTNs: achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN.

  23. RETRACTED: PMD mitigation through interleaving LDPC codes with polarization scramblers

    NASA Astrophysics Data System (ADS)

    Han, Dahai; Chen, Haoran; Xi, Lixia

    2012-11-01

    The combination of forward error correction (FEC) and distributed fast polarization scramblers (D-FPSs) has been shown to be an effective method for mitigating polarization mode dispersion (PMD) in high-speed optical fiber communication systems. In this paper, low-density parity-check (LDPC) codes, among the most promising FEC codes, are introduced into the PMD mitigation scheme with D-FPSs to achieve better performance. The scrambling speed of the FPS for the LDPC(2040,1903)-coded system is discussed, and a reasonable speed of 10 MHz is obtained from the simulation results. For easy application in practical large-scale integrated (LSI) circuits, the number of iterations used in decoding the LDPC codes is also investigated. The PMD tolerance and cut-off optical signal-to-noise ratio (OSNR) of the LDPC codes are compared with those of Reed-Solomon (RS) codes under different conditions. In the simulations, interleaving the LDPC codes brings an incremental improvement in error correction, and the PMD tolerance is 10 ps at OSNR = 11.4 dB. The results show that LDPC codes are a viable substitute for the traditional RS codes used with D-FPSs, and all of the executable code files are open to researchers who have a practical LSI platform for PMD mitigation.

  24. PMD mitigation through interleaving LDPC codes with polarization scramblers

    NASA Astrophysics Data System (ADS)

    Han, Dahai; Chen, Haoran; Xi, Lixia

    2013-09-01

    The combination of forward error correction (FEC) and distributed fast polarization scramblers (D-FPSs) has been shown to be an effective method for mitigating polarization mode dispersion (PMD) in high-speed optical fiber communication systems. In this article, low-density parity-check (LDPC) codes, among the most promising FEC codes, are introduced into the PMD mitigation scheme with D-FPSs to achieve better performance. The scrambling speed of the FPS for the LDPC(2040,1903)-coded system is discussed, and a reasonable speed of 10 MHz is obtained from the simulation results. For easy application in practical large-scale integrated (LSI) circuits, the number of iterations used in decoding the LDPC codes is also investigated. The PMD tolerance and cut-off optical signal-to-noise ratio (OSNR) of the LDPC codes are compared with those of Reed-Solomon (RS) codes under different conditions. In the simulations, interleaving the LDPC codes brings an incremental improvement in error correction, and the PMD tolerance is 10 ps at OSNR = 11.4 dB. The results show that LDPC codes are a viable substitute for the traditional RS codes used with D-FPSs, and all of the executable code files are open to researchers who have a practical LSI platform for PMD mitigation.

  25. Construction method of QC-LDPC codes based on multiplicative group of finite field in optical communication

    NASA Astrophysics Data System (ADS)

    Huang, Sheng; Ao, Xiang; Li, Yuan-yuan; Zhang, Rui

    2016-09-01

    In order to meet the needs of high-speed optical communication systems, a construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes based on the multiplicative group of a finite field is proposed. The Tanner graph of the parity-check matrix of a code constructed by this method has no cycles of length 4, ensuring that the resulting code has good distance properties. Simulation results show that, at a bit error rate (BER) of 10^-6 in the same simulation environment, the net coding gain (NCG) of the proposed QC-LDPC(3780,3540) code with a code rate of 93.7% is improved by 2.18 dB and 1.6 dB, respectively, compared with those of the RS(255,239) code of ITU-T G.975 and the LDPC(32640,30592) code of ITU-T G.975.1. In addition, the NCG of the proposed QC-LDPC(3780,3540) code is 0.2 dB and 0.4 dB higher, respectively, than those of the SG-QC-LDPC(3780,3540) code based on two different subgroups of the finite field and the AS-QC-LDPC(3780,3540) code based on two arbitrary sets of a finite field. The proposed QC-LDPC(3780,3540) code can thus be well applied in optical communication systems.
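
    The girth-4 avoidance mentioned above reduces to a standard check on the exponent matrix of a QC-LDPC code (a generic sketch with a made-up matrix, not the paper's construction):

      import numpy as np
      from itertools import combinations

      def has_girth_4(E, m):
          # A QC-LDPC code with exponent matrix E (every entry an actual
          # circulant) and circulant size m contains a length-4 cycle iff
          # two columns show the same shift difference for some row pair.
          for i, j in combinations(range(E.shape[0]), 2):
              d = (E[i] - E[j]) % m
              if len(np.unique(d)) < E.shape[1]:
                  return True
          return False

      # Made-up 3x3 exponent matrix with entries i*j, circulant size 13:
      E = np.array([[0, 0, 0],
                    [0, 1, 2],
                    [0, 2, 4]])
      print("girth-4 cycle present:", has_girth_4(E, 13))   # False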

  26. Fast QC-LDPC code for free space optical communication

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Zhang, Qi; Udeh, Chinonso Paschal; Wu, Rangzhong

    2017-02-01

    Free-space optical (FSO) communication systems use the atmosphere as the propagation medium, so atmospheric turbulence leads to multiplicative noise on the signal intensity. In order to suppress the signal fading induced by this multiplicative noise, we propose a fast quasi-cyclic (QC) low-density parity-check (LDPC) code for FSO communication systems. As linear block codes based on sparse matrices, QC-LDPC codes perform extremely close to the Shannon limit. Studies of LDPC codes in FSO communications have so far mainly focused on Gaussian and Rayleigh channels; the LDPC code design in this study targets the atmospheric turbulence channel, which is neither Gaussian nor Rayleigh and is closer to the practical situation. Based on the characteristics of the atmospheric channel, modeled by log-normal and K distributions, we design a special QC-LDPC code and derive its log-likelihood ratio (LLR). An irregular QC-LDPC code for fast coding with variable rates is proposed, which retains the excellent performance of LDPC codes while offering high efficiency at low rates, stability at high rates, and a small number of iterations. Belief propagation (BP) decoding results show that the bit error rate (BER) falls markedly as the signal-to-noise ratio (SNR) increases, and keeps decreasing with SNR without exhibiting an error floor. LDPC channel coding can therefore effectively improve the performance of FSO systems.

  27. LDPC-coded orbital angular momentum (OAM) modulation for free-space optical communication.

    PubMed

    Djordjevic, Ivan B; Arabaci, Murat

    2010-11-22

    An orbital angular momentum (OAM) based LDPC-coded modulation scheme suitable for use in FSO communication is proposed. We demonstrate that the proposed scheme can operate under the strong atmospheric turbulence regime and enable 100 Gb/s optical transmission while employing 10 Gb/s components. Both binary and nonbinary LDPC-coded OAM modulations are studied. In addition to providing better BER performance, the nonbinary LDPC-coded modulation reduces overall decoder complexity and latency. The nonbinary LDPC-coded OAM modulation provides a net coding gain of 9.3 dB at a BER of 10^-8. The maximum-ratio combining scheme outperforms the corresponding equal-gain combining scheme by almost 2.5 dB.

  28. Discussion on LDPC Codes and Uplink Coding

    NASA Technical Reports Server (NTRS)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress of the workgroup on low-density parity-check (LDPC) codes for space-link coding. The workgroup is tasked with developing and recommending new error-correcting codes for near-Earth, lunar, and deep-space applications. The presentation includes a summary of the technical progress of the workgroup. Charts showing the LDPC decoder's sensitivity to symbol scaling errors are reviewed, as well as a chart comparing the performance of several frame synchronizer algorithms against some good codes, and LDPC decoder tests at ESTL. Also reviewed are a study on coding, modulation, and link protocol (CMLP) with the recommended codes, a design for the pseudo-randomizer with LDPC decoder and CRC, and a chart summarizing the three proposed coding systems.

  29. Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More

    NASA Technical Reports Server (NTRS)

    Kou, Yu; Lin, Shu; Fossorier, Marc

    1999-01-01

    Low-density parity-check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes had been reported, and they are largely generated by computer search; as a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high-rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite-geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit-flipping algorithm. The codes can be punctured or extended to obtain other good LDPC codes, and a generalization of these codes is also presented.
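
    Gallager's hard-decision bit-flipping decoder referred to above is simple enough to sketch directly (shown here on a toy Hamming parity-check matrix purely for illustration):

      import numpy as np

      def bit_flip_decode(H, y, max_iters=50):
          # Gallager's hard-decision bit flipping: repeatedly flip the bits
          # that participate in the largest number of failed parity checks.
          x = y.copy()
          for _ in range(max_iters):
              syndrome = H @ x % 2
              if not syndrome.any():
                  return x, True            # all checks satisfied
              fails = H.T @ syndrome        # failed-check count per bit
              x[fails == fails.max()] ^= 1
          return x, False

      # Toy (7,4) Hamming parity-check matrix, used only to demonstrate the
      # algorithm (finite-geometry LDPC matrices are far larger and sparser):
      H = np.array([[1, 1, 0, 1, 1, 0, 0],
                    [1, 0, 1, 1, 0, 1, 0],
                    [0, 1, 1, 1, 0, 0, 1]])
      received = np.zeros(7, dtype=int)
      received[2] ^= 1                      # corrupt the all-zero codeword
      print(bit_flip_decode(H, received))   # recovers the all-zero codeword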

  30. FPGA implementation of concatenated non-binary QC-LDPC codes for high-speed optical transport.

    PubMed

    Zou, Ding; Djordjevic, Ivan B

    2015-06-01

    In this paper, we propose a soft-decision FEC scheme that is the concatenation of a non-binary LDPC code and a hard-decision FEC code. The proposed NB-LDPC + RS scheme with an overhead of 27.06% provides a superior NCG of 11.9 dB at a post-FEC BER of 10^-15. The proposed NB-LDPC codes are therefore a strong soft-decision FEC candidate for beyond-100 Gb/s optical transmission systems.

  31. FPGA implementation of high-performance QC-LDPC decoder for optical communications

    NASA Astrophysics Data System (ADS)

    Zou, Ding; Djordjevic, Ivan B.

    2015-01-01

    Forward error correction is one of the key technologies enabling next-generation high-speed fiber-optic communications. Quasi-cyclic (QC) low-density parity-check (LDPC) codes have been considered among the promising candidates due to their large coding gain and low implementation complexity. In this paper, we present a QC-LDPC code with girth 10 and 25% overhead, designed on the basis of a pairwise balanced design. By FPGA-based emulation, we demonstrate that the 5-bit soft-decision LDPC decoder can achieve an 11.8 dB net coding gain with no error floor at a BER of 10^-15, without using any outer code or post-processing method. We believe the proposed single QC-LDPC code is a promising solution for 400 Gb/s optical communication systems and beyond.

  32. A novel construction method of QC-LDPC codes based on the subgroup of the finite field multiplicative group for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Zhou, Guang-xiang; Gao, Wen-chun; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-01-01

    To address the requirements of rapidly developing optical transmission systems, a novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes based on a subgroup of the finite-field multiplicative group is proposed. The construction effectively avoids girth-4 cycles and offers advantages such as simpler construction, easier implementation, lower encoding/decoding complexity, better girth properties and more flexible adjustment of code length and code rate. The simulation results show that the error-correction performance of the QC-LDPC(3780,3540) code with a code rate of 93.7% constructed by the proposed method is excellent: at a bit error rate (BER) of 10^-7, its net coding gain is 0.3 dB, 0.55 dB, 1.4 dB and 1.98 dB higher, respectively, than those of the QC-LDPC(5334,4962) code constructed from the inverse-element characteristics of the finite-field multiplicative group, the SCG-LDPC(3969,3720) code built by the systematically constructed Gallager (SCG) random construction method, the LDPC(32640,30592) code of ITU-T G.975.1 and the classic RS(255,239) code of ITU-T G.975 that is widely used in optical transmission systems. The constructed QC-LDPC(3780,3540) code is therefore well suited to optical transmission systems.

  33. Efficient Signal, Code, and Receiver Designs for MIMO Communication Systems

    DTIC Science & Technology

    2003-06-01

    ... Concatenation of a tilted-QAM inner code with an LDPC outer code with a two-component iterative soft-decision decoder ... for AWGN channels has long been studied. There are well-known soft-decision codes like the turbo codes and LDPC codes that can approach capacity to ... low-density parity-check (LDPC) code ... The coded bits are randomly interleaved so that nearby bits go through different sub-channels, and are ...

  34. Optical LDPC decoders for beyond 100 Gbits/s optical transmission.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2009-05-01

    We present an optical low-density parity-check (LDPC) decoder suitable for implementation above 100 Gbits/s, which provides large coding gains when based on large-girth LDPC codes. We show that the basic building block, the probabilities-multiplier circuit, can be implemented using a Mach-Zehnder interferometer, and we propose a corresponding probabilistic-domain sum-product algorithm (SPA). We perform simulations of a fully parallel implementation employing girth-10 LDPC codes and the proposed SPA. The girth-10 LDPC(24015,19212) code of rate 0.8 outperforms the BCH(128,113)×BCH(256,239) turbo-product code of rate 0.82 by 0.91 dB (for binary phase-shift keying at 100 Gbits/s and a bit error rate of 10^-9), and provides a net effective coding gain of 10.09 dB.

  35. Experimental research and comparison of LDPC and RS channel coding in ultraviolet communication systems.

    PubMed

    Wu, Menglong; Han, Dahai; Zhang, Xiang; Zhang, Feng; Zhang, Min; Yue, Guangxin

    2014-03-10

    We have implemented a modified low-density parity-check (LDPC) codec algorithm in an ultraviolet (UV) communication system. Simulations are conducted with measured parameters to evaluate the performance of the LDPC-based UV system. Moreover, LDPC(960,480) and RS(18,10) codes are implemented and tested on a non-line-of-sight (NLOS) UV test bed. The experimental results are in agreement with the simulations and suggest that, at a given power and a bit error rate (BER) of 10^-3, the average communication distance increases by 32% with the RS code and by 78% with the LDPC code, compared with an uncoded system.

  36. Throughput Optimization Via Adaptive MIMO Communications

    DTIC Science & Technology

    2006-05-30

    End-to-end MATLAB packet simulation platform. * Low-density parity-check code (LDPCC). * Field trials with Silvus DSP MIMO testbed. * High mobility ... incorporate advanced LDPC (low-density parity-check) codes. Realizing that the power of LDPC codes comes at the price of decoder complexity, we also ... Channel coding: binary convolutional code or LDPC. Packet length: 0 to 2^16-1 bytes. Coding rate: 1/2, 2/3, 3/4, 5/6. MIMO channel training length: 0 to 4 symbols.

  37. A Golay complementary TS-based symbol synchronization scheme in variable rate LDPC-coded MB-OFDM UWBoF system

    NASA Astrophysics Data System (ADS)

    He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin

    2015-09-01

    In this paper, a Golay complementary training sequence (TS)-based symbol synchronization scheme is proposed and experimentally demonstrated in a multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband-over-fiber (UWBoF) system with a variable-rate low-density parity-check (LDPC) code. The coding gain and spectral efficiency of the variable-rate LDPC-coded MB-OFDM UWBoF system are also investigated. By exploiting the non-periodic autocorrelation property of the Golay complementary pair, the start point of the LDPC-coded MB-OFDM UWB signal can be estimated accurately. After transmission over 100 km of standard single-mode fiber (SSMF), at a bit error rate of 1×10^-3, the experimental results show that short-block-length 64QAM-LDPC coding provides a coding gain of 4.5 dB, 3.8 dB and 2.9 dB for code rates of 62.5%, 75% and 87.5%, respectively.

  38. Cooperative MIMO communication at wireless sensor network: an error correcting code approach.

    PubMed

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor networks (WSNs) explores energy-efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple-input multiple-output (MIMO) and multiple-input single-output (MISO) configurations. In this paper, an energy-efficient cooperative MIMO (C-MIMO) technique is proposed in which a low-density parity-check (LDPC) code is used as the error-correcting code. The rate of the LDPC code is varied by varying the lengths of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of LDPC coding. LDPC codes with different code rates are compared using bit error rate (BER) analysis, and the BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different target bit error probabilities p_b. It is observed that C-MIMO performs more efficiently when the target p_b is smaller, and that lower LDPC encoding rates offer better error characteristics.

  39. Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach

    PubMed Central

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor networks (WSNs) explores energy-efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple-input multiple-output (MIMO) and multiple-input single-output (MISO) configurations. In this paper, an energy-efficient cooperative MIMO (C-MIMO) technique is proposed in which a low-density parity-check (LDPC) code is used as the error-correcting code. The rate of the LDPC code is varied by varying the lengths of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of LDPC coding. LDPC codes with different code rates are compared using bit error rate (BER) analysis, and the BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different target bit error probabilities p_b. It is observed that C-MIMO performs more efficiently when the target p_b is smaller, and that lower LDPC encoding rates offer better error characteristics. PMID:22163732

  40. High-throughput GPU-based LDPC decoding

    NASA Astrophysics Data System (ADS)

    Chang, Yang-Lang; Chang, Cheng-Chun; Huang, Min-Yu; Huang, Bormin

    2010-08-01

    Low-density parity-check (LDPC) codes are linear block codes known to approach the Shannon limit via the iterative sum-product algorithm, and they have been adopted in most current communication systems such as DVB-S2, WiMAX, Wi-Fi and 10GBASE-T. The need for reliable and flexible communication links across a wide variety of communication standards and configurations has inspired demand for high-performance, flexible computing; accordingly, finding a fast and reconfigurable development platform for designing high-throughput LDPC decoders has become important, especially for rapidly changing communication standards and configurations. In this paper, a new graphics-processing-unit (GPU) LDPC decoding platform with asynchronous data transfer is proposed to realize this practical implementation. Experimental results show that the proposed GPU-based decoder achieves a 271× speedup compared with its CPU-based counterpart, and it can serve as a high-throughput LDPC decoder.

  41. Low-Density Parity-Check (LDPC) Codes Constructed from Protographs

    NASA Astrophysics Data System (ADS)

    Thorpe, J.

    2003-08-01

    We introduce a new class of low-density parity-check (LDPC) codes constructed from a template called a protograph. The protograph serves as a blueprint for constructing LDPC codes of arbitrary size whose performance can be predicted by analyzing the protograph. We apply standard density evolution techniques to predict the performance of large protograph codes. Finally, we use a randomized search algorithm to find good protographs.
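
    The copy-and-permute construction can be sketched as follows (a minimal illustration; the base matrix and lifting size are made up). Each protograph edge class is replaced by a random N x N permutation, with parallel edges kept disjoint:

      import numpy as np

      rng = np.random.default_rng(7)

      def lift_protograph(B, N):
          # "Copy-and-permute": entry B[i, j] of the base matrix becomes a
          # sum of B[i, j] disjoint random N x N permutation matrices
          # (assumes B[i, j] <= N), giving an (m*N) x (n*N) parity-check matrix.
          m, n = B.shape
          H = np.zeros((m * N, n * N), dtype=np.uint8)
          for i in range(m):
              for j in range(n):
                  block = np.zeros((N, N), dtype=np.uint8)
                  for _ in range(B[i, j]):
                      while True:           # redraw until disjoint, so that
                          P = np.eye(N, dtype=np.uint8)[rng.permutation(N)]
                          if not (block & P).any():   # parallel edges stay distinct
                              block |= P
                              break
                  H[i*N:(i+1)*N, j*N:(j+1)*N] = block
          return H

      # Made-up protograph with one parallel edge, lifted by N = 8:
      B = np.array([[1, 2, 1],
                    [1, 1, 2]])
      H = lift_protograph(B, N=8)
      print(H.shape, H.sum(axis=0)[:8])     # column weights follow the protograph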

  42. A Simulation Testbed for Adaptive Modulation and Coding in Airborne Telemetry

    DTIC Science & Technology

    2014-05-29

    ... its modulation waveforms and LDPC for the FEC codes. It also uses several sets of published telemetry channel sounding data as its channel models. Within the context ... low-density parity-check (LDPC) codes with tunable code rates, and both static and dynamic telemetry channel models are included. In an effort to maximize the ...

  43. Spatially coupled low-density parity-check error correction for holographic data storage

    NASA Astrophysics Data System (ADS)

    Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro

    2017-09-01

    Spatially coupled low-density parity-check (SC-LDPC) codes were considered for holographic data storage, and their superiority was studied by simulation. The simulations show that the performance of SC-LDPC depends on the lifting number: when the lifting number is over 100, SC-LDPC shows better error-correction capability than irregular LDPC. The SC-LDPC code is applied to the 5:9 modulation code, one of the differential codes. In simulation, the error-free point is near 2.8 dB, and error rates above 10^-1 can be corrected. Based on these simulation results, the error-correction code was applied to actual holographic data storage test equipment, where an error rate of 8×10^-2 could be corrected; the code works effectively and shows good error-correction capability.
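
    The spatial-coupling idea can be illustrated by building a coupled base matrix from component matrices placed on a diagonal band (a generic sketch; the component split and coupling width are hypothetical):

      import numpy as np

      def couple_base_matrix(components, L):
          # Spatial coupling: component matrices B_0..B_w are tiled on a
          # descending diagonal over L positions, producing the band-diagonal
          # SC-LDPC structure (terminated at both ends).
          w = len(components) - 1
          m, n = components[0].shape
          B = np.zeros(((L + w) * m, L * n), dtype=int)
          for pos in range(L):
              for t, Bt in enumerate(components):
                  B[(pos + t) * m:(pos + t + 1) * m, pos * n:(pos + 1) * n] = Bt
          return B

      # Split a (3,6)-regular base matrix [[3, 3]] into three parts that
      # sum back to it (a hypothetical choice for illustration):
      parts = [np.array([[1, 1]])] * 3
      print(couple_base_matrix(parts, L=6))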

  44. Parallel Subspace Subcodes of Reed-Solomon Codes for Magnetic Recording Channels

    ERIC Educational Resources Information Center

    Wang, Han

    2010-01-01

    Read channel architectures based on a single low-density parity-check (LDPC) code are being considered for the next generation of hard disk drives. However, LDPC-only solutions suffer from the error floor problem, which may compromise reliability, if not handled properly. Concatenated architectures using an LDPC code plus a Reed-Solomon (RS) code…

  45. Experimental study of non-binary LDPC coding for long-haul coherent optical QPSK transmissions.

    PubMed

    Zhang, Shaoliang; Arabaci, Murat; Yaman, Fatih; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Inada, Yoshihisa; Ogata, Takaaki; Aoki, Yasuhiro

    2011-09-26

    The performance of a rate-0.8 4-ary LDPC code has been studied in a 50 GHz-spaced 40 Gb/s DWDM system with PDM-QPSK modulation. A net effective coding gain of 10 dB is obtained at a BER of 10^-6. With the aid of time-interleaving polarization multiplexing and MAP detection, 10,560 km transmission over legacy dispersion-managed fiber is achieved without any countable errors. The proposed nonbinary quasi-cyclic LDPC code achieves an uncoded BER threshold at 4×10^-2. Potential issues such as phase ambiguity and coding length when implementing LDPC in current coherent optical systems are also discussed.

  46. Evaluation of large girth LDPC codes for PMD compensation by turbo equalization.

    PubMed

    Minkov, Lyubomir L; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Kueppers, Franko

    2008-08-18

    Large-girth quasi-cyclic LDPC codes have been experimentally evaluated for PMD compensation by turbo equalization in a 10 Gb/s NRZ optical transmission system, observing one sample per bit. The net effective coding gain improvement of the girth-10, rate-0.906 code of length 11936 over a maximum a posteriori probability (MAP) detector is 6.25 dB at a BER of 10^-6 for a differential group delay of 125 ps. The girth-10 LDPC code of rate 0.8 outperforms the girth-10 code of rate 0.906 by 2.75 dB and provides a net effective coding gain improvement of 9 dB at the same BER. It is experimentally determined that girth-10 LDPC codes of length around 15000 approach the channel capacity limit to within 1.25 dB.

  47. Advanced GF(32) nonbinary LDPC coded modulation with non-uniform 9-QAM outperforming star 8-QAM.

    PubMed

    Liu, Tao; Lin, Changyu; Djordjevic, Ivan B

    2016-06-27

    In this paper, we first describe a 9-symbol non-uniform signaling scheme based on a Huffman code, in which different symbols are transmitted with different probabilities. By using the Huffman procedure, a prefix code is designed to approach the optimal performance. We then introduce an algorithm to determine the optimal signal constellation sets for the proposed non-uniform scheme under the criterion of maximizing the constellation figure of merit (CFM). The proposed non-uniform polarization-multiplexed 9-QAM signaling scheme has the same spectral efficiency as conventional 8-QAM. Additionally, we propose a specially designed GF(32) nonbinary quasi-cyclic LDPC code for the coded-modulation system based on the 9-QAM non-uniform scheme. Further, we study the efficiency of the proposed non-uniform 9-QAM combined with nonbinary LDPC coding, and demonstrate by Monte Carlo simulation that the proposed GF(32) nonbinary LDPC-coded 9-QAM scheme outperforms nonbinary LDPC-coded uniform 8-QAM by at least 0.8 dB.
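
    The Huffman-based non-uniform signaling idea can be checked with a small dyadic-probability example (the codeword lengths below are hypothetical, chosen only so that the average equals 3 bits per symbol, matching 8-QAM's spectral efficiency as the abstract states):

      from math import log2

      # Hypothetical dyadic probabilities for 9 symbols: a prefix (Huffman)
      # code with these codeword lengths maps uniform i.i.d. bits to symbol
      # probabilities 2^-length.
      lengths = [2, 3, 3, 3, 3, 4, 4, 4, 4]
      probs = [2.0 ** -l for l in lengths]

      assert abs(sum(probs) - 1.0) < 1e-12              # Kraft equality holds
      bits_per_symbol = sum(p * l for p, l in zip(probs, lengths))
      entropy = -sum(p * log2(p) for p in probs)

      print(bits_per_symbol, entropy)   # both 3.0: same efficiency as 8-QAM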

  48. Coded Cooperation for Multiway Relaying in Wireless Sensor Networks

    PubMed Central

    Si, Zhongwei; Ma, Junyang; Thobaben, Ragnar

    2015-01-01

    Wireless sensor networks have been considered as an enabling technology for constructing smart cities. One important feature of wireless sensor networks is that the sensor nodes collaborate in some manner for communications. In this manuscript, we focus on the model of multiway relaying with full data exchange where each user wants to transmit and receive data to and from all other users in the network. We derive the capacity region for this specific model and propose a coding strategy through coset encoding. To obtain good performance with practical codes, we choose spatially-coupled LDPC (SC-LDPC) codes for the coded cooperation. In particular, for the message broadcasting from the relay, we construct multi-edge-type (MET) SC-LDPC codes by repeatedly applying coset encoding. Due to the capacity-achieving property of the SC-LDPC codes, we prove that the capacity region can theoretically be achieved by the proposed MET SC-LDPC codes. Numerical results with finite node degrees are provided, which show that the achievable rates approach the boundary of the capacity region in both binary erasure channels and additive white Gaussian channels. PMID:26131675

  49. Coded Cooperation for Multiway Relaying in Wireless Sensor Networks.

    PubMed

    Si, Zhongwei; Ma, Junyang; Thobaben, Ragnar

    2015-06-29

    Wireless sensor networks have been considered as an enabling technology for constructing smart cities. One important feature of wireless sensor networks is that the sensor nodes collaborate in some manner for communications. In this manuscript, we focus on the model of multiway relaying with full data exchange where each user wants to transmit and receive data to and from all other users in the network. We derive the capacity region for this specific model and propose a coding strategy through coset encoding. To obtain good performance with practical codes, we choose spatially-coupled LDPC (SC-LDPC) codes for the coded cooperation. In particular, for the message broadcasting from the relay, we construct multi-edge-type (MET) SC-LDPC codes by repeatedly applying coset encoding. Due to the capacity-achieving property of the SC-LDPC codes, we prove that the capacity region can theoretically be achieved by the proposed MET SC-LDPC codes. Numerical results with finite node degrees are provided, which show that the achievable rates approach the boundary of the capacity region in both binary erasure channels and additive white Gaussian channels.

  50. Ultra high speed optical transmission using subcarrier-multiplexed four-dimensional LDPC-coded modulation.

    PubMed

    Batshon, Hussam G; Djordjevic, Ivan; Schmidt, Ted

    2010-09-13

    We propose a subcarrier-multiplexed four-dimensional LDPC bit-interleaved coded modulation scheme capable of achieving beyond 480 Gb/s single-channel transmission over optical channels. The subcarrier-multiplexed four-dimensional LDPC-coded modulation scheme outperforms the corresponding dual-polarization schemes by up to 4.6 dB in OSNR at a BER of 10^-8.

  11. Self-Configuration and Localization in Ad Hoc Wireless Sensor Networks

    DTIC Science & Technology

    2010-08-31

    Goddard I. SUMMARY OF CONTRIBUTIONS We explored the error mechanisms of iterative decoding of low-density parity-check (LDPC) codes. This work has resulted... important problems in the area of channel coding, as their unpredictable behavior has impeded the deployment of LDPC codes in many real-world applications. We... tree-based decoders of LDPC codes, including the extrinsic tree decoder, and an investigation into their performance and bounding capabilities [5], [6

  12. FPGA implementation of low complexity LDPC iterative decoder

    NASA Astrophysics Data System (ADS)

    Verma, Shivani; Sharma, Sanjay

    2016-07-01

    Low-density parity-check (LDPC) codes, proposed by Gallager, emerged as a class of codes which can yield very good performance on the additive white Gaussian noise channel as well as on the binary symmetric channel. LDPC codes have gained considerable importance due to their capacity-achieving property and excellent performance in noisy channels. The belief propagation (BP) algorithm and its approximations, most notably min-sum, are popular iterative decoding algorithms used for LDPC and turbo codes. The trade-off between hardware complexity and decoding throughput is a critical factor in the implementation of a practical decoder. This article presents an introduction to LDPC codes and their various decoding algorithms, followed by the realisation of an LDPC decoder using a simplified message-passing algorithm and a partially parallel decoder architecture. The simplified message-passing algorithm is proposed as a trade-off between low decoding complexity and decoder performance; it greatly reduces the routing and check-node complexity of the decoder. The partially parallel decoder architecture offers high speed and reduced complexity. The improved design of the decoder achieves a maximum symbol throughput of 92.95 Mbps with a maximum of 18 decoding iterations. The article presents the implementation of a 9216-bit, rate-1/2, (3, 6) LDPC decoder on the Xilinx XC3SD3400A device from the Spartan-3A DSP family.
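
    As a hedged sketch of the kind of simplified message passing mentioned above (not the article's exact algorithm), the following Python function implements the standard normalized min-sum check-node update; the scaling factor 0.75 is an illustrative choice.

        import numpy as np

        def min_sum_check_update(llrs_in, scale=0.75):
            """One check node's outgoing messages from its incoming LLRs (illustrative)."""
            llrs_in = np.asarray(llrs_in, dtype=float)
            signs = np.where(llrs_in < 0, -1.0, 1.0)
            sign_prod = signs.prod()
            mags = np.abs(llrs_in)
            order = np.argsort(mags)
            min1, min2 = mags[order[0]], mags[order[1]]
            out = np.empty_like(llrs_in)
            for j in range(len(llrs_in)):
                # Exclude edge j: product of the other signs, minimum of the other magnitudes.
                out[j] = scale * sign_prod * signs[j] * (min2 if j == order[0] else min1)
            return out

        print(min_sum_check_update([+2.0, -0.5, +1.2, -3.1]))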

  13. Scalable video transmission over Rayleigh fading channels using LDPC codes

    NASA Astrophysics Data System (ADS)

    Bansal, Manu; Kondi, Lisimachos P.

    2005-03-01

    In this paper, we investigate the important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining good decoded video quality and resilience to channel impairments. Our system consists of a video codec based on the 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity-check (LDPC) codes for channel error protection. The first method uses the serial concatenation of a constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. A cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use a product code structure consisting of a constant-rate LDPC/CRC code across the rows of the 'blocks' of source data and an erasure-correcting systematic Reed-Solomon (RS) code as the column code. In both schemes introduced here, we use fixed-length source packets protected with unequal forward error correction coding, ensuring strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. A rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions; both proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform more conventional schemes such as those employing RCPC/CRC.

  14. Error floor behavior study of LDPC codes for concatenated codes design

    NASA Astrophysics Data System (ADS)

    Chen, Weigang; Yin, Liuguo; Lu, Jianhua

    2007-11-01

    The error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is statistically studied with experimental results on a hardware evaluation platform. The results present the distribution of the residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small under the quantized sum-product (SP) algorithm. Therefore, an LDPC code may serve as the inner code in a concatenated coding system with a high-code-rate outer code, and thus an ultra-low error floor can be achieved. This conclusion is also verified by the experimental results.

  15. Non-binary LDPC-coded modulation for high-speed optical metro networks with backpropagation

    NASA Astrophysics Data System (ADS)

    Arabaci, Murat; Djordjevic, Ivan B.; Saunders, Ross; Marcoccia, Roberto M.

    2010-01-01

    To simultaneously mitigate the linear and nonlinear channel impairments in high-speed optical communications, we propose the use of non-binary low-density-parity-check-coded modulation in combination with a coarse backpropagation method. By employing backpropagation, we reduce the memory in the channel and in return obtain significant reductions in the complexity of the channel equalizer, which grows exponentially with the channel memory. We then compensate for the remaining channel distortions using forward error correction based on non-binary LDPC codes. We propose the non-binary-LDPC-coded modulation scheme because, compared to a bit-interleaved binary-LDPC-coded modulation scheme employing turbo equalization, it lowers the computational complexity and latency of the overall system while providing considerably larger coding gains.

  16. Crosstalk eliminating and low-density parity-check codes for photochromic dual-wavelength storage

    NASA Astrophysics Data System (ADS)

    Wang, Meicong; Xiong, Jianping; Jian, Jiqi; Jia, Huibo

    2005-01-01

    Multi-wavelength storage is an approach to increase the memory density, with the problem of crosstalk to be dealt with. We apply low-density parity-check (LDPC) codes as error-correcting codes in photochromic dual-wavelength optical storage, based on the investigation of LDPC codes in optical data storage. A proper method is applied to reduce the crosstalk, and simulation results show that this operation is useful for improving bit error rate (BER) performance. At the same time, we can conclude that LDPC codes outperform RS codes in the crosstalk channel.

  17. Entanglement-assisted quantum quasicyclic low-density parity-check codes

    NASA Astrophysics Data System (ADS)

    Hsieh, Min-Hsiu; Brun, Todd A.; Devetak, Igor

    2009-03-01

    We investigate the construction of quantum low-density parity-check (LDPC) codes from classical quasicyclic (QC) LDPC codes with girth greater than or equal to 6. We have shown that the classical codes in the generalized Calderbank-Shor-Steane construction do not need to satisfy the dual-containing property as long as preshared entanglement is available to both sender and receiver. We can use this to avoid the many four-cycles which typically arise in dual-containing LDPC codes. The advantage of such quantum codes comes from the use of efficient decoding algorithms such as the sum-product algorithm (SPA). It is well known that in the SPA, cycles of length 4 make successive decoding iterations highly correlated and hence limit the decoding performance. We show the principle of constructing quantum QC-LDPC codes which require only small amounts of initial shared entanglement.

  18. Iterative decoding of SOVA and LDPC product code for bit-patterned media recording

    NASA Astrophysics Data System (ADS)

    Jeong, Seongkwon; Lee, Jaejin

    2018-05-01

    The demand for high-density storage systems has increased due to the exponential growth of data. Bit-patterned media recording (BPMR) is one of the promising technologies to achieve a density of 1 Tbit/in2 and higher. To increase the areal density in BPMR, the spacing between islands needs to be reduced, yet this aggravates inter-symbol interference and inter-track interference and degrades the bit error rate performance. In this paper, we propose a decision feedback scheme using a low-density parity-check (LDPC) product code for BPMR. This scheme can improve the decoding performance using an iterative approach with extrinsic information and log-likelihood ratio values exchanged between the iterative soft output Viterbi algorithm and the LDPC product code. Simulation results show that the proposed LDPC product code can offer 1.8 dB and 2.3 dB gains over a single LDPC code at densities of 2.5 and 3 Tb/in2, respectively, at a bit error rate of 10-6.

  19. Product code optimization for determinate state LDPC decoding in robust image transmission.

    PubMed

    Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G

    2006-08-01

    We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.

  20. Soft-Decision-Data Reshuffle to Mitigate Pulsed Radio Frequency Interference Impact on Low-Density-Parity-Check Code Performance

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun David

    2011-01-01

    This presentation briefly discusses a research effort on techniques for mitigating the impact of pulsed radio frequency interference (RFI) on a Low-Density-Parity-Check (LDPC) code. This problem is of considerable interest in the context of providing reliable communications to space vehicles, which might suffer severe degradation due to pulsed RFI sources such as large radars. LDPC codes are modern forward-error-correction (FEC) codes whose decoding performance approaches the Shannon limit. The LDPC code studied here is the AR4JA (2048, 1024) code recommended by the Consultative Committee for Space Data Systems (CCSDS), which has been chosen for some spacecraft designs. Even though this code is designed as a powerful FEC code for the additive white Gaussian noise channel, simulation data and test results show that the performance of the LDPC decoder is severely degraded when exposed to the pulsed RFI specified in the spacecraft's transponder specifications. An analysis (through modeling and simulation) has been conducted to evaluate the impact of the pulsed RFI, and a few implementation techniques have been investigated to mitigate the pulsed RFI impact by reshuffling the soft-decision data available at the input of the LDPC decoder. The simulation results show that the LDPC decoding performance, in terms of codeword error rate (CWER), under pulsed RFI can be improved by up to four orders of magnitude through a simple soft-decision-data reshuffle scheme. This study reveals that an error floor in the LDPC decoding performance appears around CWER=1E-4 when the proposed technique is applied to mitigate the pulsed RFI impact. The mechanism causing this error floor remains unknown; further investigation is necessary.
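
    The presentation's exact reshuffle scheme is not spelled out here; one common soft-decision fix for pulsed interference, sketched below in Python under that assumption, is to erase (zero) the LLRs received while a pulse is active so the decoder treats them as unreliable rather than trusting corrupted soft values.

        import numpy as np

        def erase_llrs_during_pulses(llrs, pulse_mask):
            """Zero the LLRs received while the RFI pulse was on (LLR 0 == erasure)."""
            cleaned = np.array(llrs, dtype=float)
            cleaned[np.asarray(pulse_mask, dtype=bool)] = 0.0
            return cleaned

        llrs = np.array([3.1, -2.4, 0.8, -5.0, 1.1, -0.2])
        mask = np.array([0, 0, 1, 1, 0, 0])                 # hypothetical pulse timing
        print(erase_llrs_during_pulses(llrs, mask))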

  1. Construction of Protograph LDPC Codes with Linear Minimum Distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
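
    For readers unfamiliar with protograph expansion, the following Python sketch lifts a small base matrix by circulant permutations, the standard construction behind protograph LDPC codes; the base matrix, shifts, and lift size are arbitrary examples, and the sketch assumes a 0/1 protomatrix with no parallel edges.

        import numpy as np

        def lift_protograph(base, shifts, Z):
            """Replace each base-matrix 1 by a Z x Z shifted identity (circulant permutation)."""
            rows, cols = base.shape
            H = np.zeros((rows * Z, cols * Z), dtype=int)
            I = np.eye(Z, dtype=int)
            for r in range(rows):
                for c in range(cols):
                    if base[r, c]:
                        H[r*Z:(r+1)*Z, c*Z:(c+1)*Z] = np.roll(I, shifts[r][c], axis=1)
            return H

        base = np.array([[1, 1, 1, 0],           # arbitrary example protomatrix
                         [0, 1, 1, 1]])
        shifts = [[1, 0, 2, 0],
                  [0, 3, 1, 2]]
        H = lift_protograph(base, shifts, Z=4)
        print(H.shape)                            # (8, 16): the lifted parity-check matrix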

  2. On the reduced-complexity of LDPC decoders for ultra-high-speed optical transmission.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2010-10-25

    We propose two reduced-complexity (RC) LDPC decoders, which can be used in combination with large-girth LDPC codes to enable ultra-high-speed serial optical transmission. We show that the optimally attenuated RC min-sum algorithm performs only 0.46 dB (at a BER of 10(-9)) worse than the conventional sum-product algorithm, while having lower storage memory requirements and much lower latency. We further study the use of RC LDPC decoding algorithms in multilevel coded modulation with coherent detection and show that with RC decoding algorithms we can achieve a net coding gain larger than 11 dB at BERs below 10(-9).

  3. The application of LDPC code in MIMO-OFDM system

    NASA Astrophysics Data System (ADS)

    Liu, Ruian; Zeng, Beibei; Chen, Tingting; Liu, Nan; Yin, Ninghao

    2018-03-01

    The combination of MIMO and OFDM technology has become one of the key technologies of fourth-generation mobile communication; it can overcome the frequency-selective fading of the wireless channel, increase the system capacity and improve the frequency utilization. Error-correcting coding introduced into the system can further improve its performance. The LDPC (low-density parity-check) code is a kind of error-correcting code which can improve system reliability and anti-interference ability, and its decoding is simple and easy to operate. This paper mainly discusses the application of LDPC codes in the MIMO-OFDM system.

  4. Simultaneous chromatic dispersion and PMD compensation by using coded-OFDM and girth-10 LDPC codes.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2008-07-07

    Low-density parity-check (LDPC)-coded orthogonal frequency division multiplexing (OFDM) is studied as an efficient coded modulation scheme suitable for simultaneous chromatic dispersion and polarization mode dispersion (PMD) compensation. We show that, for an aggregate rate of 10 Gb/s, accumulated dispersion over 6500 km of SMF and a differential group delay of 100 ps can be simultaneously compensated with a penalty within 1.5 dB (with respect to the back-to-back configuration) when training-sequence-based channel estimation and girth-10 LDPC codes of rate 0.8 are employed.

  5. Protograph LDPC Codes Over Burst Erasure Channels

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    In this paper we design high-rate protograph-based LDPC codes suitable for binary erasure channels. To simplify the encoder and decoder implementation for high-data-rate transmission, the structure of the codes is based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes with an iterative decoding threshold that approaches the capacity of binary erasure channels. The other class is designed for short block sizes based on maximizing the minimum stopping set size. For high code rates and short blocks the second class outperforms the first class.

  6. Experimental demonstration of the transmission performance for LDPC-coded multiband OFDM ultra-wideband over fiber system

    NASA Astrophysics Data System (ADS)

    He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin; Su, Jinshu

    2015-01-01

    To improve the transmission performance of multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband (UWB) over optical fiber, a pre-coding scheme based on low-density parity-check (LDPC) codes is adopted and experimentally demonstrated in an intensity-modulation and direct-detection MB-OFDM UWB over fiber system. Meanwhile, a symbol synchronization and pilot-aided channel estimation scheme is implemented at the receiver of the MB-OFDM UWB over fiber system. The experimental results show that the LDPC pre-coding scheme can work effectively in the MB-OFDM UWB over fiber system. After 70 km standard single-mode fiber (SSMF) transmission, at a bit error rate of 1 × 10-3, the receiver sensitivity is improved by about 4 dB when the LDPC code rate is 75%.

  7. Construction of type-II QC-LDPC codes with fast encoding based on perfect cyclic difference sets

    NASA Astrophysics Data System (ADS)

    Li, Ling-xiang; Li, Hai-bing; Li, Ji-bi; Jiang, Hua

    2017-09-01

    In view of the problems that the encoding complexity of quasi-cyclic low-density parity-check (QC-LDPC) codes is high and that the minimum distance is not large enough, which leads to degraded error-correction performance, new irregular type-II QC-LDPC codes based on perfect cyclic difference sets (CDSs) are constructed. The parity-check matrices of these type-II QC-LDPC codes consist of zero matrices with weight 0, circulant permutation matrices (CPMs) with weight 1, and circulant matrices with weight 2 (W2CMs). The introduction of W2CMs in the parity-check matrices makes it possible to achieve a larger minimum distance, which can improve the error-correction performance of the codes. The Tanner graphs of these codes have no 4-cycles, so they have excellent decoding convergence characteristics. In addition, because the parity-check matrices have a quasi-dual-diagonal structure, the fast encoding algorithm can reduce the encoding complexity effectively. Simulation results show that the new type-II QC-LDPC codes achieve excellent error-correction performance and exhibit no error floor phenomenon over the additive white Gaussian noise (AWGN) channel with sum-product algorithm (SPA) iterative decoding.

  8. Unitals and ovals of symmetric block designs in LDPC and space-time coding

    NASA Astrophysics Data System (ADS)

    Andriamanalimanana, Bruno R.

    2004-08-01

    An approach to the design of LDPC (low density parity check) error-correction and space-time modulation codes involves starting with known mathematical and combinatorial structures, and deriving code properties from structure properties. This paper reports on an investigation of unital and oval configurations within generic symmetric combinatorial designs, not just classical projective planes, as the underlying structure for classes of space-time LDPC outer codes. Of particular interest are the encoding and iterative (sum-product) decoding gains that these codes may provide. Various small-length cases have been numerically implemented in Java and Matlab for a number of channel models.

  9. High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin

    2016-01-01

    Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycle. In this paper, by combining the PEG algorithm and the quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding, with an efficiency of 93.7%.
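
    A minimal Python sketch of the PEG idea referenced above: each new edge from a variable node is placed on a check node that is currently unreachable from it (falling back to the lowest-degree non-neighbor), which is a simplified stand-in for the full depth-expanding PEG rule; sizes and degrees are arbitrary.

        from collections import deque

        def peg(n_vars, n_checks, var_degrees):
            var_adj = [set() for _ in range(n_vars)]
            chk_adj = [set() for _ in range(n_checks)]
            for v in range(n_vars):
                for _ in range(var_degrees[v]):
                    # Breadth-first search from v to find currently reachable checks.
                    seen_v, seen_c, frontier = {v}, set(), deque([("v", v)])
                    while frontier:
                        kind, node = frontier.popleft()
                        neighbors = var_adj[node] if kind == "v" else chk_adj[node]
                        for nb in neighbors:
                            if kind == "v" and nb not in seen_c:
                                seen_c.add(nb); frontier.append(("c", nb))
                            elif kind == "c" and nb not in seen_v:
                                seen_v.add(nb); frontier.append(("v", nb))
                    candidates = [c for c in range(n_checks) if c not in seen_c]
                    if not candidates:                    # everything reachable: fall back
                        candidates = [c for c in range(n_checks) if c not in var_adj[v]]
                    best = min(candidates, key=lambda c: len(chk_adj[c]))
                    var_adj[v].add(best); chk_adj[best].add(v)
            return var_adj

        print(peg(n_vars=8, n_checks=4, var_degrees=[2] * 8))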

  10. A Scalable Architecture of a Structured LDPC Decoder

    NASA Technical Reports Server (NTRS)

    Lee, Jason Kwok-San; Lee, Benjamin; Thorpe, Jeremy; Andrews, Kenneth; Dolinar, Sam; Hamkins, Jon

    2004-01-01

    We present a scalable decoding architecture for a certain class of structured LDPC codes. The codes are designed using a small (n,r) protograph that is replicated Z times to produce a decoding graph for a (Z x n, Z x r) code. Using this architecture, we have implemented a decoder for a (4096,2048) LDPC code on a Xilinx Virtex-II 2000 FPGA, and achieved decoding speeds of 31 Mbps with 10 fixed iterations. The implemented message-passing algorithm uses an optimized 3-bit non-uniform quantizer that operates with 0.2 dB implementation loss relative to a floating point decoder.

  11. Integrated Performance of Next Generation High Data Rate Receiver and AR4JA LDPC Codec for Space Communications

    NASA Technical Reports Server (NTRS)

    Cheng, Michael K.; Lyubarev, Mark; Nakashima, Michael A.; Andrews, Kenneth S.; Lee, Dennis

    2008-01-01

    Low-density parity-check (LDPC) codes are the state of the art in forward error correction (FEC) technology and exhibit capacity-approaching performance. The Jet Propulsion Laboratory (JPL) has designed a family of LDPC codes that are similar in structure and therefore lead to a single decoder implementation. The Accumulate-Repeat-by-4-Jagged-Accumulate (AR4JA) design offers a family of codes with rates 1/2, 2/3, 4/5 and lengths 1024, 4096, 16384 information bits. Performance is less than one dB from capacity for all combinations. Integrating a stand-alone LDPC decoder with a commercial-off-the-shelf (COTS) receiver poses additional challenges compared with building a single receiver-decoder unit from scratch. In this work, we outline the issues and show that these additional challenges can be overcome by simple solutions. To demonstrate that an LDPC decoder can be made to work seamlessly with a COTS receiver, we interface an AR4JA LDPC decoder developed on a field-programmable gate array (FPGA) with a modern high-data-rate receiver and measure the combined receiver-decoder performance. Through optimizations that include an improved frame synchronizer and different soft-symbol scaling algorithms, we show that a combined implementation loss of less than one dB is possible and therefore most of the coding gain evident in theory can also be obtained in practice. Our techniques can benefit any modem that utilizes an advanced FEC code.

  12. Constructing LDPC Codes from Loop-Free Encoding Modules

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher; Thorpe, Jeremy; Andrews, Kenneth

    2009-01-01

    A method of constructing certain low-density parity-check (LDPC) codes by use of relatively simple loop-free coding modules has been developed. The subclasses of LDPC codes to which the method applies include accumulate-repeat-accumulate (ARA) codes, accumulate-repeat-check-accumulate codes, and the codes described in Accumulate-Repeat-Accumulate-Accumulate Codes (NPO-41305), NASA Tech Briefs, Vol. 31, No. 9 (September 2007), page 90. All of the affected codes can be characterized as serial/parallel (hybrid) concatenations of such relatively simple modules as accumulators, repetition codes, differentiators, and punctured single-parity-check codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. These codes can also be characterized as hybrid turbo-like codes that have projected graph or protograph representations; these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The present method comprises two related submethods for constructing LDPC codes from simple loop-free modules with circulant permutations. The first submethod is an iterative encoding method based on the erasure-decoding algorithm. The computations required by this method are well organized because they involve a parity-check matrix having a block-circulant structure. The second submethod involves the use of block-circulant generator matrices. The encoders of this method are very similar to those of recursive convolutional codes. Some encoders according to this second submethod have been implemented in a small field-programmable gate array that operates at a speed of 100 megasymbols per second. By use of density evolution (a computational-simulation technique for analyzing performances of LDPC codes), it has been shown through some examples that as the block size goes to infinity, low iterative decoding thresholds close to channel capacity limits can be achieved for codes of the type in question having low maximum variable node degrees. The decoding thresholds in these examples are lower than those of the best-known unstructured irregular LDPC codes constrained to have the same maximum node degrees. Furthermore, the present method enables the construction of codes of any desired rate with thresholds that stay uniformly close to their respective channel-capacity thresholds.

  13. Structured Low-Density Parity-Check Codes with Bandwidth Efficient Modulation

    NASA Technical Reports Server (NTRS)

    Cheng, Michael K.; Divsalar, Dariush; Duy, Stephanie

    2009-01-01

    In this work, we study the performance of structured low-density parity-check (LDPC) codes together with bandwidth-efficient modulations. We consider protograph-based LDPC codes that facilitate high-speed hardware implementations and have minimum distances that grow linearly with block sizes. We cover various higher-order modulations such as 8-PSK, 16-APSK, and 16-QAM. During demodulation, a demapper transforms the received in-phase and quadrature samples into reliability information that feeds the binary LDPC decoder. We will compare various low-complexity demappers and provide simulation results for assorted coded-modulation combinations on the additive white Gaussian noise and independent Rayleigh fading channels.
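
    As a sketch of the low-complexity demapping step described above, the following Python function computes max-log bit LLRs from received I/Q samples; a QPSK constellation stands in for the higher-order 8-PSK/16-APSK/16-QAM cases, and all parameters are illustrative assumptions.

        import numpy as np

        points = np.array([1+1j, -1+1j, -1-1j, 1-1j]) / np.sqrt(2)   # QPSK stand-in
        labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])          # Gray labels

        def max_log_llrs(y, noise_var):
            """Max-log LLR per bit for one received sample y (positive favors bit 0)."""
            d2 = np.abs(y - points) ** 2
            llrs = []
            for b in range(labels.shape[1]):
                d0 = d2[labels[:, b] == 0].min()   # nearest point carrying bit 0
                d1 = d2[labels[:, b] == 1].min()   # nearest point carrying bit 1
                llrs.append((d1 - d0) / noise_var)
            return llrs

        print(max_log_llrs(0.9 + 0.2j, noise_var=0.5))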

  14. Accumulate repeat accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate' (ARA) codes. This class of codes can be viewed as serial turbo-like codes, or as a subclass of low-density parity-check (LDPC) codes, so belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder structure for this class can be viewed as a precoded Repeat Accumulate (RA) code or as a precoded Irregular Repeat Accumulate (IRA) code, where simply an accumulator is chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure when represented as LDPC codes. Based on density evolution for LDPC codes, through some examples of ARA codes we show that for a maximum variable node degree of 5, a minimum bit SNR as low as 0.08 dB from channel capacity for rate 1/2 can be achieved as the block size goes to infinity. Thus, for a fixed low maximum variable node degree, its threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high-rate codes close to code rate 1 can be obtained with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. The ARA codes also have a projected graph or protograph representation that allows for high-speed decoder implementation.
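
    A toy Python illustration of the ARA encoding chain (accumulate, repeat, interleave, accumulate) over GF(2); the repeat factor and interleaver are arbitrary, and real ARA designs puncture the precoder, so this is only a structural sketch.

        import numpy as np

        def accumulate(bits):
            """The 1/(1+D) accumulator over GF(2): a running XOR (cumulative sum mod 2)."""
            return np.cumsum(bits) % 2

        rng = np.random.default_rng(0)
        u = rng.integers(0, 2, 8)              # information bits (toy length)

        p = accumulate(u)                      # precoding accumulator
        r = np.repeat(p, 3)                    # repeat-by-3 (arbitrary factor)
        x = accumulate(r[rng.permutation(len(r))])   # interleave, then inner accumulator
        print(x)                               # transmitted bits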

  15. PMD compensation in fiber-optic communication systems with direct detection using LDPC-coded OFDM.

    PubMed

    Djordjevic, Ivan B

    2007-04-02

    The possibility of polarization-mode dispersion (PMD) compensation in fiber-optic communication systems with direct detection using a simple channel estimation technique and low-density parity-check (LDPC)-coded orthogonal frequency division multiplexing (OFDM) is demonstrated. It is shown that even for differential group delay (DGD) of 4/BW (BW is the OFDM signal bandwidth), the degradation due to the first-order PMD can be completely compensated for. Two classes of LDPC codes designed based on two different combinatorial objects (difference systems and product of combinatorial designs) suitable for use in PMD compensation are introduced.

  16. A modified non-binary LDPC scheme based on watermark symbols in high speed optical transmission systems

    NASA Astrophysics Data System (ADS)

    Wang, Liming; Qiao, Yaojun; Yu, Qian; Zhang, Wenbo

    2016-04-01

    We introduce a watermark non-binary low-density parity-check (NB-LDPC) code scheme, which can estimate the time-varying noise variance by using prior information from watermark symbols, to improve the performance of NB-LDPC codes. Compared with the prior-art counterpart, the watermark scheme brings about a 0.25 dB improvement in net coding gain (NCG) at a bit error rate (BER) of 1e-6 and a 36.8-81% reduction in the number of iterations. The proposed scheme thus shows great potential in terms of error-correction performance and decoding efficiency.

  17. Performance Evaluation of LDPC Coding and Iterative Decoding System in BPM R/W Channel Affected by Head Field Gradient, Media SFD and Demagnetization Field

    NASA Astrophysics Data System (ADS)

    Nakamura, Yasuaki; Okamoto, Yoshihiro; Osawa, Hisashi; Aoi, Hajime; Muraoka, Hiroaki

    We evaluate the performance of the write-margin for the low-density parity-check (LDPC) coding and iterative decoding system in the bit-patterned media (BPM) R/W channel affected by the write-head field gradient, the media switching field distribution (SFD), the demagnetization field from adjacent islands and the island position deviation. It is clarified that the LDPC coding and iterative decoding system in R/W channel using BPM at 3 Tbit/inch2 has a write-margin of about 20%.

  18. Cooperative optimization and their application in LDPC codes

    NASA Astrophysics Data System (ADS)

    Chen, Ke; Rong, Jian; Zhong, Xiaochun

    2008-10-01

    Cooperative optimization is a new way of finding the global optima of complicated functions of many variables. The proposed algorithm belongs to a class of message-passing algorithms and has solid theoretical foundations. It can achieve good coding gains over the sum-product algorithm for LDPC codes. For the (6561, 4096) LDPC code, the proposed algorithm achieves a 2.0 dB gain over the sum-product algorithm at a BER of 4×10-7. The decoding complexity of the proposed algorithm is lower than that of the sum-product algorithm; furthermore, it achieves a much lower error floor than the sum-product algorithm once Eb/N0 is higher than 1.8 dB.

  19. A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications.

    PubMed

    Revathy, M; Saravanan, R

    2015-01-01

    Low-density parity-check (LDPC) codes have been adopted in the latest digital video broadcasting, broadband wireless access (WiMAX), and fourth-generation wireless standards. In this paper, we propose a highly efficient low-density parity-check (LDPC) decoder architecture for low-power applications. This study also considers the design and analysis of the check node and variable node units and the Euclidean orthogonal generator in the LDPC decoder architecture. The Euclidean orthogonal generator is used to reduce the error rate of the proposed LDPC architecture and can be incorporated between the check and variable node architectures. The proposed decoder design is synthesized on the Xilinx 9.2i platform and simulated using ModelSim, targeted to 45 nm devices. The synthesis report shows that the proposed architecture greatly reduces power consumption and hardware utilization compared with different conventional architectures.

  20. DNA Barcoding through Quaternary LDPC Codes

    PubMed Central

    Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar

    2015-01-01

    For many parallel applications of Next-Generation Sequencing (NGS) technologies, short barcodes able to accurately multiplex a large number of samples are demanded. To address these competing requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems in sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide fine-scale control over barcode size (BCH) or have intrinsically poor error-correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10−2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate on the order of 10−9 at the expense of a rate of read losses just on the order of 10−6. PMID:26492348

  2. LDPC-PPM Coding Scheme for Optical Communication

    NASA Technical Reports Server (NTRS)

    Barsoum, Maged; Moision, Bruce; Divsalar, Dariush; Fitz, Michael

    2009-01-01

    In a proposed coding-and-modulation/demodulation-and-decoding scheme for a free-space optical communication system, an error-correcting code of the low-density parity-check (LDPC) type would be concatenated with a modulation code that consists of a mapping of bits to pulse-position-modulation (PPM) symbols. Hence, the scheme is denoted LDPC-PPM. This scheme could be considered a competitor of a related prior scheme in which an outer convolutional error-correcting code is concatenated with an interleaving operation, a bit-accumulation operation, and a PPM inner code. Both the prior and present schemes can be characterized as serially concatenated pulse-position modulation (SCPPM) coding schemes. Figure 1 represents a free-space optical communication system based on either the present LDPC-PPM scheme or the prior SCPPM scheme. At the transmitting terminal, the original data (u) are processed by an encoder into blocks of bits (a), and the encoded data are mapped to PPM of an optical signal (c). For the purpose of design and analysis, the optical channel in which the PPM signal propagates is modeled as a Poisson point process. At the receiving terminal, the arriving optical signal (y) is demodulated to obtain an estimate (â) of the coded data, which is then processed by a decoder to obtain an estimate (û) of the original data.
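
    The bit-to-PPM mapping step can be sketched in a few lines of Python: each group of log2(M) coded bits selects which of M slots carries the pulse. The PPM order M = 16 is an illustrative assumption, not the scheme's specified value.

        import numpy as np

        def bits_to_ppm(bits, M=16):
            """Map each group of log2(M) coded bits to a one-pulse-in-M-slots PPM symbol."""
            k = int(np.log2(M))
            groups = np.asarray(bits).reshape(-1, k)
            slots = groups.dot(1 << np.arange(k - 1, -1, -1))   # bit group -> slot index
            symbols = np.zeros((len(slots), M), dtype=int)
            symbols[np.arange(len(slots)), slots] = 1
            return symbols

        print(bits_to_ppm([1, 0, 1, 1, 0, 0, 1, 0], M=16))      # pulses in slots 11 and 2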

  3. LDPC-coded MIMO optical communication over the atmospheric turbulence channel using Q-ary pulse-position modulation.

    PubMed

    Djordjevic, Ivan B

    2007-08-06

    We describe a coded power-efficient transmission scheme based on the repetition MIMO principle suitable for communication over the atmospheric turbulence channel, and determine its channel capacity. The proposed scheme employs Q-ary pulse-position modulation. We further study how to approach the channel capacity limits using low-density parity-check (LDPC) codes. Component LDPC codes are designed using the concept of pairwise-balanced designs. Contrary to several recent publications, bit-error rates and channel capacities are reported assuming non-ideal photodetection. The atmospheric turbulence channel is modeled using the Gamma-Gamma distribution function due to Al-Habash et al. An excellent bit-error rate performance improvement over the uncoded case is found.

  5. Optimal Codes for the Burst Erasure Channel

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2010-01-01

    Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure protection. As can be seen, the simple interleaved RS codes have substantially lower inefficiency over a wide range of transmission lengths.
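
    The near-optimality argument above rests on interleaving dispersing a burst across codewords; the short Python sketch below shows a depth-4 block interleaver spreading an 8-erasure burst so that each length-7 codeword loses only ceil(8/4) = 2 symbols (all parameters are arbitrary examples, not the Tech Brief's design values).

        import numpy as np

        n, depth = 7, 4                          # codeword length, interleaving depth
        grid = np.arange(n * depth).reshape(depth, n)   # row i holds codeword i's symbols
        tx_order = grid.T.flatten()              # transmit column by column

        burst = set(tx_order[9:9 + 8])           # 8 consecutive channel symbols erased
        per_codeword = [sum(s in burst for s in row) for row in grid]
        print(per_codeword)                      # [2, 2, 2, 2]: each codeword stays correctable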

  6. Bounded-Angle Iterative Decoding of LDPC Codes

    NASA Technical Reports Server (NTRS)

    Dolinar, Samuel; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2009-01-01

    Bounded-angle iterative decoding is a modified version of conventional iterative decoding, conceived as a means of reducing undetected-error rates for short low-density parity-check (LDPC) codes. For a given code, bounded-angle iterative decoding can be implemented by means of a simple modification of the decoder algorithm, without redesigning the code. Bounded-angle iterative decoding is based on a representation of received words and code words as vectors in an n-dimensional Euclidean space (where n is an integer).

  7. Short-Block Protograph-Based LDPC Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher

    2010-01-01

    Short-block low-density parity-check (LDPC) codes of a special type are intended to be especially well suited for potential applications that include transmission of command and control data, cellular telephony, data communications in wireless local area networks, and satellite data communications. [In general, LDPC codes belong to a class of error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels.] The codes of the present special type exhibit low error floors, low bit and frame error rates, and low latency (in comparison with related prior codes). These codes also achieve a low maximum rate of undetected errors over all signal-to-noise ratios, without requiring the use of cyclic redundancy checks, which would significantly increase the overhead for short blocks. These codes have protograph representations; this is advantageous in that, for reasons that exceed the scope of this article, the applicability of protograph representations makes it possible to design high-speed iterative decoders that utilize belief-propagation algorithms.

  8. Protograph LDPC Codes with Node Degrees at Least 3

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher

    2006-01-01

    In this paper we present protograph codes with a small number of degree-3 nodes and one high-degree node. The iterative decoding thresholds for the proposed rate-1/2 codes are lower, by about 0.2 dB, than those of the best known irregular LDPC codes with degree at least 3. The main motivation is to gain linear minimum distance and thereby achieve a low error floor, and also to construct rate-compatible protograph-based LDPC codes for fixed block length that simultaneously achieve low iterative decoding threshold and linear minimum distance. We start with a rate-1/2 protograph LDPC code with degree-3 nodes and one high-degree node. Higher-rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes. This is equivalent to constraint combining in the protograph. The case where all constraints are combined corresponds to the highest-rate code. This constraint must be connected to nodes of degree at least three for the graph to have linear minimum distance. Thus, having node degree at least 3 for rate 1/2 guarantees that the linear minimum distance property is preserved for higher rates. Through examples we show that an iterative decoding threshold as low as 0.544 dB can be achieved for small protographs with node degrees at least three. A family of low- to high-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.

  9. Rate-compatible protograph LDPC code families with linear minimum distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J. (Inventor); Jones, Christopher R. (Inventor)

    2012-01-01

    Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds.

  10. Finite-connectivity spin-glass phase diagrams and low-density parity check codes.

    PubMed

    Migliorini, Gabriele; Saad, David

    2006-02-01

    We obtain phase diagrams of regular and irregular finite-connectivity spin glasses. Contact is first established between properties of the phase diagram and the performance of low-density parity check (LDPC) codes within the replica symmetric (RS) ansatz. We then study the location of the dynamical and critical transition points of these systems within the one-step replica symmetry breaking (RSB) theory, extending similar calculations that have been performed in the past for the Bethe spin-glass problem. We observe that the location of the dynamical transition line does change within the RSB theory, in comparison with the results obtained in the RS case. For LDPC decoding of messages transmitted over the binary erasure channel at zero temperature and fixed rate, we locate an RS critical transition point and a critical RSB transition point, both to be compared with the corresponding Shannon bound. For the binary symmetric channel we show that the low-temperature reentrant behavior of the dynamical transition line, observed within the RS ansatz, changes its location when the RSB ansatz is employed; the dynamical transition point occurs at higher values of the channel noise. Possible practical implications for improving the performance of state-of-the-art error-correcting codes are discussed.

  11. LDPC Codes--Structural Analysis and Decoding Techniques

    ERIC Educational Resources Information Center

    Zhang, Xiaojie

    2012-01-01

    Low-density parity-check (LDPC) codes have been the focus of much research over the past decade thanks to their near Shannon limit performance and to their efficient message-passing (MP) decoding algorithms. However, the error floor phenomenon observed in MP decoding, which manifests itself as an abrupt change in the slope of the error-rate curve,…

  12. Rate-Compatible Protograph LDPC Codes

    NASA Technical Reports Server (NTRS)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods resulting in rate-compatible low density parity-check (LDPC) codes built from protographs. Described digital coding methods start with a desired code rate and a selection of the numbers of variable nodes and check nodes to be used in the protograph. Constraints are set to satisfy a linear minimum distance growth property for the protograph. All possible edges in the graph are searched for the minimum iterative decoding threshold and the protograph with the lowest iterative decoding threshold is selected. Protographs designed in this manner are used in decode and forward relay channels.

  13. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check(LDPC) Codes

    NASA Astrophysics Data System (ADS)

    Jing, Lin; Brun, Todd; Quantum Research Team

    Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Hagiwara et al. (2007) presented a method to calculate parity-check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.

  14. Using LDPC Code Constraints to Aid Recovery of Symbol Timing

    NASA Technical Reports Server (NTRS)

    Jones, Christopher; Villasenor, John; Lee, Dong-U; Vales, Esteban

    2008-01-01

    A method of utilizing information available in the constraints imposed by a low-density parity-check (LDPC) code has been proposed as a means of aiding the recovery of symbol timing in the reception of a binary-phase-shift-keying (BPSK) signal representing such a code in the presence of noise, timing error, and/or Doppler shift between the transmitter and the receiver. This method and the receiver architecture in which it would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless in that they do not require transmission and reception of pilot signals. Acquisition and tracking of a signal of the type described above have traditionally been performed upstream of, and independently of, decoding and have typically involved utilization of a phase-locked loop (PLL). However, the LDPC decoding process, which is iterative, provides information that can be fed back to the timing-recovery receiver circuits to improve performance significantly over that attainable in the absence of such feedback. Prior methods of coupling LDPC decoding with timing recovery had focused on the use of output code words produced as the iterations progress. In contrast, in the present method, one exploits the information available from the metrics computed for the constraint nodes of an LDPC code during the decoding process. In addition, the method involves the use of a waveform model that captures, better than do the waveform models of the prior methods, distortions introduced by receiver timing errors and transmitter/ receiver motions. An LDPC code is commonly represented by use of a bipartite graph containing two sets of nodes. In the graph corresponding to an (n,k) code, the n variable nodes correspond to the code word symbols and the n-k constraint nodes represent the constraints that the code places on the variable nodes in order for them to form a valid code word. The decoding procedure involves iterative computation of values associated with these nodes. A constraint node represents a parity-check equation using a set of variable nodes as inputs. A valid decoded code word is obtained if all parity-check equations are satisfied. After each iteration, the metrics associated with each constraint node can be evaluated to determine the status of the associated parity check. Heretofore, normally, these metrics would be utilized only within the LDPC decoding process to assess whether or not variable nodes had converged to a codeword. In the present method, it is recognized that these metrics can be used to determine accuracy of the timing estimates used in acquiring the sampled data that constitute the input to the LDPC decoder. In fact, the number of constraints that are satisfied exhibits a peak near the optimal timing estimate. Coarse timing estimation (or first-stage estimation as described below) is found via a parametric search for this peak. The present method calls for a two-stage receiver architecture illustrated in the figure. The first stage would correct large time delays and frequency offsets; the second stage would track random walks and correct residual time and frequency offsets. In the first stage, constraint-node feedback from the LDPC decoder would be employed in a search algorithm in which the searches would be performed in successively narrower windows to find the correct time delay and/or frequency offset. 
The second stage would include a conventional first-order PLL with a decision-aided timing-error detector that would utilize, as its decision aid, decoded symbols from the LDPC decoder. The method has been tested by means of computational simulations in cases involving various timing and frequency errors. The results of the simulations show performance approaching that obtained in the ideal case of perfect timing in the receiver.
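
    A toy Python sketch of the constraint-node timing metric described above: sliding a candidate codeword start over a hard-decision stream and counting satisfied parity checks peaks at the true alignment. The tiny code and frame layout are hypothetical stand-ins for the method's actual waveform model.

        import numpy as np

        H = np.array([[1, 1, 0, 1, 0, 0],        # toy parity-check matrix (hypothetical)
                      [0, 1, 1, 0, 1, 0],
                      [1, 0, 1, 0, 0, 1]])
        c = np.array([1, 1, 0, 0, 1, 1])         # a valid codeword: (H @ c) % 2 == 0

        rng = np.random.default_rng(2)
        true_start = 5
        stream = np.concatenate([rng.integers(0, 2, true_start), c, rng.integers(0, 2, 6)])

        def satisfied(start):
            """Number of parity checks satisfied if the codeword began at this offset."""
            bits = stream[start:start + 6]
            return int(np.sum((H @ bits) % 2 == 0))

        print({s: satisfied(s) for s in range(len(stream) - 5)})   # peak of 3 at s = 5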

  15. PSEUDO-CODEWORD LANDSCAPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CHERTKOV, MICHAEL; STEPANOV, MIKHAIL

    2007-01-10

    The authors discuss the performance of Low-Density-Parity-Check (LDPC) codes decoded by Linear Programming (LP) decoding at moderate and large Signal-to-Noise-Ratios (SNR). The Frame-Error-Rate (FER) dependence on SNR and the noise-space landscape of the coding/decoding scheme are analyzed by a combination of the previously introduced instanton/pseudo-codeword-search method and a new 'dendro' trick. To reduce the complexity of LP decoding for a code with high-degree checks, ≥ 5, they introduce its dendro-LDPC counterpart, that is, a code performing identically to the original one under Maximum-A-Posteriori (MAP) decoding but having reduced (down to three) check connectivity degree. Analyzing a number of popular LDPC codes and their dendro versions performing over the Additive-White-Gaussian-Noise (AWGN) channel, they observed two qualitatively different regimes: (i) the error floor sets in early, at relatively low SNR, and (ii) the FER decays faster with increasing SNR at moderate SNR than at the largest SNR. They explain these regimes in terms of the pseudo-codeword spectra of the codes.

  16. FPGA-based LDPC-coded APSK for optical communication systems.

    PubMed

    Zou, Ding; Lin, Changyu; Djordjevic, Ivan B

    2017-02-20

    In this paper, with the aid of mutual information and generalized mutual information (GMI) capacity analyses, it is shown that geometrically shaped APSK that mimics an optimal Gaussian distribution with equiprobable signaling, together with the corresponding Gray-mapping rules, can approach the Shannon limit more closely than conventional quadrature amplitude modulation (QAM) over a certain range of FEC overhead, for both 16-APSK and 64-APSK. Field-programmable gate array (FPGA) based LDPC-coded APSK emulation is conducted on block-interleaver-based and bit-interleaver-based systems; the results verify a significant improvement in hardware-efficient bit-interleaver-based systems. In bit-interleaver-based emulation, LDPC-coded 64-APSK outperforms 64-QAM, in terms of symbol signal-to-noise ratio (SNR), by 0.1 dB, 0.2 dB, and 0.3 dB at spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz, respectively. It is found by emulation that LDPC-coded 64-APSK at spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz is 1.6 dB, 1.7 dB, and 2.2 dB away from the GMI capacity.

  17. Channel coding for underwater acoustic single-carrier CDMA communication system

    NASA Astrophysics Data System (ADS)

    Liu, Lanjun; Zhang, Yonglei; Zhang, Pengcheng; Zhou, Lin; Niu, Jiong

    2017-01-01

    CDMA is an effective multiple access protocol for underwater acoustic networks, and channel coding can effectively reduce the bit error rate (BER) of an underwater acoustic communication system. To meet the requirements of underwater acoustic mobile networks based on CDMA, an underwater acoustic single-carrier CDMA communication system (UWA/SCCDMA) based on direct-sequence spread spectrum is proposed, and its channel coding scheme is studied based on convolutional, RA, turbo, and LDPC coding, respectively. The implementation steps of the Viterbi algorithm for convolutional coding, the BP and minimum-sum algorithms for RA coding, the Log-MAP and SOVA algorithms for turbo coding, and the sum-product algorithm for LDPC coding are given. A UWA/SCCDMA simulation system based on Matlab is designed. Simulation results show that the UWA/SCCDMA systems based on RA, turbo, and LDPC coding perform well, with a communication BER of less than 10-6 in an underwater acoustic channel at low signal-to-noise ratios (SNR) from -12 dB to -10 dB, which is about two orders of magnitude lower than that of convolutional coding. The system based on turbo coding with the Log-MAP algorithm has the best performance.

  18. Rate-compatible protograph LDPC code families with linear minimum distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J (Inventor); Jones, Christopher R. (Inventor)

    2012-01-01

    Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds, and families of such codes of different rates can be decoded efficiently using a common decoding architecture.
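
    A minimal sketch of the generic copy-and-permute (lifting) step that turns a protograph base matrix into a full binary parity-check matrix is given below; the base matrix, lifting size, and random circulant shifts are illustrative stand-ins, not the patented constructions themselves.

```python
import numpy as np

rng = np.random.default_rng(1)

def lift_protograph(B, Z):
    """Lift a protograph base matrix into a binary parity-check matrix.

    Each entry b of B becomes a Z x Z block equal to the XOR of b distinct
    randomly shifted circulant permutation matrices (a zero block if b = 0).
    """
    m, n = B.shape
    H = np.zeros((m * Z, n * Z), dtype=np.uint8)
    I = np.eye(Z, dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            for shift in rng.choice(Z, size=B[i, j], replace=False):
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] ^= np.roll(I, shift, axis=1)
    return H

# Hypothetical base matrix; entries are edge multiplicities, and the last
# column gives a variable node of degree less than 3, echoing the first
# method described above.
B = np.array([[1, 2, 1, 0],
              [1, 1, 1, 1]])
print(lift_protograph(B, Z=4).shape)   # (8, 16)
```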

  19. Transmission over UWB channels with OFDM system using LDPC coding

    NASA Astrophysics Data System (ADS)

    Dziwoki, Grzegorz; Kucharczyk, Marcin; Sulek, Wojciech

    2009-06-01

    A hostile wireless environment requires the use of sophisticated signal processing methods. The paper concerns Ultra Wideband (UWB) transmission over Personal Area Networks (PAN), including the MB-OFDM specification of the physical layer. In the presented work, the OFDM transmission system was combined with an LDPC encoder/decoder. Additionally, the frame and bit error rates (FER and BER) of the system were decreased by using results from the LDPC decoder in a kind of turbo equalization algorithm for better channel estimation. A computational block using an evolutionary strategy, from the genetic algorithms family, was also used in the presented system. It is placed after the SPA (sum-product algorithm) decoder and is conditionally turned on in the decoding process. The result is increased effectiveness of the whole system, especially a lower FER. The system was tested with two types of LDPC codes, distinguished by the type of parity-check matrix: randomly generated, and deterministically constructed matrices optimized for a practical decoder architecture implemented in an FPGA device.

  20. Encoders for block-circulant LDPC codes

    NASA Technical Reports Server (NTRS)

    Andrews, Kenneth; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    In this paper, we present two encoding methods for block-circulant LDPC codes. The first is an iterative encoding method based on the erasure decoding algorithm, and the computations required are well organized due to the block-circulant structure of the parity check matrix. The second method uses block-circulant generator matrices, and the encoders are very similar to those for recursive convolutional codes. Some encoders of the second type have been implemented in a small Field Programmable Gate Array (FPGA) and operate at 100 Msymbols/second.
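
    The storage advantage behind both methods comes from the fact that a circulant block is fully described by its first row. A toy systematic encoder built on that observation might look as follows; the block sizes and circulant first rows are made up for illustration, not taken from the paper.

```python
import numpy as np

def circulant_mul(first_row, x):
    """Multiply a binary circulant block by x over GF(2).

    Row i of the block is `first_row` cyclically shifted by i, so the whole
    Z x Z block is stored as one length-Z vector -- the memory saving that
    block-circulant encoders exploit.
    """
    Z = len(first_row)
    return np.array([np.dot(np.roll(first_row, i), x) % 2 for i in range(Z)],
                    dtype=np.uint8)

# Toy systematic encoder: codeword = [message | parity], with the parity
# part formed by accumulating the output of one circulant block per
# message chunk.
Z = 4
first_rows = [np.array([1, 0, 1, 0], dtype=np.uint8),
              np.array([0, 1, 1, 0], dtype=np.uint8)]
msg = np.array([1, 0, 1, 1, 0, 1, 0, 0], dtype=np.uint8)

parity = np.zeros(Z, dtype=np.uint8)
for chunk, row in zip(msg.reshape(-1, Z), first_rows):
    parity ^= circulant_mul(row, chunk)
print(np.concatenate([msg, parity]))
```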

  1. Strategic and Tactical Decision-Making Under Uncertainty

    DTIC Science & Technology

    2006-01-03

    message passing algorithms. In recent work we applied this method to the problem of joint decoding of a low-density parity-check (LDPC) code and a partial-response channel: "Joint Decoding of LDPC Codes and Partial-Response Channels," IEEE Transactions on Communications, Vol. 54, No. 7, 1149-1153, 2006.

  2. Study regarding the density evolution of messages and the characteristic functions associated with an LDPC code

    NASA Astrophysics Data System (ADS)

    Drăghici, S.; Proştean, O.; Răduca, E.; Haţiegan, C.; Hălălae, I.; Pădureanu, I.; Nedeloni, M.; Barboni Haţiegan, L.

    2017-01-01

    In this paper, a method is shown by which a set of characteristic functions is associated with an LDPC code, together with functions that represent the density evolution of the messages passed along the edges of a Tanner graph. Graphic representations of the density evolution are shown, and the study and simulation of the likelihood threshold, which yields the asymptotic boundaries between which decodable codes exist, were carried out using MathCad V14 software.
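
    On the binary erasure channel, the message densities described above collapse to a single scalar recursion, which makes the asymptotic boundary (decoding threshold) easy to reproduce in a few lines. The sketch below performs that computation for the regular (3,6) ensemble in Python rather than MathCad.

```python
def bec_threshold(dv=3, dc=6, tol=1e-4):
    """Decoding threshold of the regular (dv, dc) LDPC ensemble on the BEC.

    Bisect on the channel erasure probability eps; for each eps, iterate the
    density-evolution recursion x <- eps * (1 - (1 - x)**(dc-1))**(dv-1)
    and test whether the erasure fraction x dies out.
    """
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        eps = (lo + hi) / 2
        x = eps
        for _ in range(10000):
            x = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
        if x > 1e-9:
            hi = eps          # erasures persist: eps is above threshold
        else:
            lo = eps          # messages converge to zero: decodable
    return lo

print(round(bec_threshold(), 3))   # ~0.429 for the (3,6)-regular ensemble
```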

  3. Pilotless Frame Synchronization Using LDPC Code Constraints

    NASA Technical Reports Server (NTRS)

    Jones, Christopher; Villasenor, John

    2009-01-01

    A method of pilotless frame synchronization has been devised for low-density parity-check (LDPC) codes. In pilotless frame synchronization, there are no pilot symbols; instead, the offset is estimated by exploiting selected aspects of the structure of the code. The advantage of pilotless frame synchronization is that the bandwidth of the signal is reduced by an amount associated with elimination of the pilot symbols. The disadvantage is an increase in the amount of receiver data processing needed for frame synchronization.

  4. Received response based heuristic LDPC code for short-range non-line-of-sight ultraviolet communication.

    PubMed

    Qin, Heng; Zuo, Yong; Zhang, Dong; Li, Yinghui; Wu, Jian

    2017-03-06

    Through a slight modification of typical photomultiplier tube (PMT) receiver output statistics, a generalized received response model considering both scattered propagation and random detection is presented to investigate the impact of inter-symbol interference (ISI) on the link data rate of short-range non-line-of-sight (NLOS) ultraviolet communication. Numerical simulations show good agreement with the experimental results. Based on the received response characteristics, a heuristic check matrix construction algorithm for low-density parity-check (LDPC) codes is further proposed to approach the data rate bound derived for a delayed-sampling (DS) binary pulse position modulation (PPM) system. Compared to conventional LDPC coding methods, a better bit error ratio (BER), below 1E-05, is achieved for short-range NLOS UV communication systems operating at a data rate of 2 Mbps.

  5. Design and implementation of a channel decoder with LDPC code

    NASA Astrophysics Data System (ADS)

    Hu, Diqing; Wang, Peng; Wang, Jianzong; Li, Tianquan

    2008-12-01

    Because Toshiba quit the competition, there is only one blue-laser disc standard: Blu-ray Disc (BD), which satisfies the demands of high-density video programs. However, almost all the relevant patents are held by large companies such as Sony and Philips, so substantial licensing fees must be paid when products use BD. Next-Generation Versatile Disc (NVD), our own high-density optical disk storage system, proposes a new data format and error correction code with independent intellectual property rights and high cost performance; it offers higher coding efficiency than DVD and a 12 GB capacity, which meets the demands of playing high-density video programs. In this paper, we develop low-density parity-check (LDPC) codes for this system: a new channel encoding process and application scheme using a Q-matrix based on LDPC encoding is applied in NVD's channel decoder. Combined with the embedded-system portability of the SOPC approach, we have implemented all the decoding modules in an FPGA. Tests were performed in the NVD experimental environment. Although there are conflicts between LDPC and the run-length-limited (RLL) modulation codes frequently used in optical storage systems, the system provides a suitable solution. At the same time, it overcomes the instability and inextensibility of NVD's former decoding system, which was implemented in hardware.

  6. Characterization of LDPC-coded orbital angular momentum modes transmission and multiplexing over a 50-km fiber.

    PubMed

    Wang, Andong; Zhu, Long; Chen, Shi; Du, Cheng; Mo, Qi; Wang, Jian

    2016-05-30

    Mode-division multiplexing over fibers has attracted increasing attention over the last few years as a potential solution to further increase fiber transmission capacity. In this paper, we demonstrate the viability of orbital angular momentum (OAM) modes transmission over a 50-km few-mode fiber (FMF). By analyzing mode properties of eigen modes in an FMF, we study the inner mode group differential modal delay (DMD) in FMF, which may influence the transmission capacity in long-distance OAM modes transmission and multiplexing. To mitigate the impact of large inner mode group DMD in long-distance fiber-based OAM modes transmission, we use low-density parity-check (LDPC) codes to increase the system reliability. By evaluating the performance of LDPC-coded single OAM mode transmission over 50-km fiber, significant coding gains of >4 dB, 8 dB and 14 dB are demonstrated for 1-Gbaud, 2-Gbaud and 5-Gbaud quadrature phase-shift keying (QPSK) signals, respectively. Furthermore, in order to verify and compare the influence of DMD in long-distance fiber transmission, single OAM mode transmission over 10-km FMF is also demonstrated in the experiment. Finally, we experimentally demonstrate OAM multiplexing and transmission over a 50-km FMF using LDPC-coded 1-Gbaud QPSK signals to compensate the influence of mode crosstalk and DMD in the 50 km FMF.

  7. On the reduced-complexity of LDPC decoders for beyond 400 Gb/s serial optical transmission

    NASA Astrophysics Data System (ADS)

    Djordjevic, Ivan B.; Xu, Lei; Wang, Ting

    2010-12-01

    Two reduced-complexity (RC) LDPC decoders are proposed, which can be used in combination with large-girth LDPC codes to enable beyond-400 Gb/s serial optical transmission. We show that the optimally attenuated RC min-sum algorithm performs only 0.45 dB worse than the conventional sum-product algorithm, while having lower storage memory requirements and much lower latency. We further evaluate the proposed algorithms for use in beyond-400 Gb/s serial optical transmission in combination with a PolMUX 32-IPQ-based signal constellation and show that low BERs can be achieved at medium optical SNRs, while achieving a net coding gain above 11.4 dB.
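
    The two check-node update rules being compared can be stated in a few lines. The sketch below contrasts the sum-product (tanh-rule) update with an attenuated min-sum update; the attenuation factor 0.8 is an illustrative placeholder, not the optimized value from the paper.

```python
import numpy as np

def check_update_spa(llrs):
    """Sum-product (tanh-rule) check-node update: extrinsic LLR per edge."""
    t = np.tanh(np.asarray(llrs, dtype=float) / 2)
    out = []
    for i in range(len(llrs)):
        prod = np.prod(np.delete(t, i))
        out.append(2 * np.arctanh(np.clip(prod, -0.999999, 0.999999)))
    return np.array(out)

def check_update_attenuated_min_sum(llrs, alpha=0.8):
    """Reduced-complexity update: product of signs times attenuated minimum.

    The factor alpha compensates the min-sum magnitude overestimate, which
    is the role of the optimal attenuation mentioned in the abstract.
    """
    llrs = np.asarray(llrs, dtype=float)
    mags, signs = np.abs(llrs), np.sign(llrs)
    out = []
    for i in range(len(llrs)):
        m = np.delete(mags, i).min()
        s = np.prod(np.delete(signs, i))
        out.append(alpha * s * m)
    return np.array(out)

msgs = [1.2, -0.4, 2.5, 0.9]
print(check_update_spa(msgs))
print(check_update_attenuated_min_sum(msgs))
```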

  8. Low Density Parity Check Codes: Bandwidth Efficient Channel Coding

    NASA Technical Reports Server (NTRS)

    Fong, Wai; Lin, Shu; Maki, Gary; Yeh, Pen-Shu

    2003-01-01

    Low Density Parity Check (LDPC) codes provide near-Shannon-capacity performance for NASA missions. These codes have high coding rates, R = 0.82 and 0.875, with moderate code lengths, n = 4096 and 8176. Their decoders have inherently parallel structures, which allows for high-speed implementation. Two codes based on Euclidean Geometry (EG) were selected for flight ASIC implementation. These codes are cyclic and quasi-cyclic in nature and therefore have a simple encoder structure. This results in power and size benefits. These codes also have a large minimum distance, as much as d_min = 65, giving them powerful error-correcting capabilities and very low error floors. This paper will present development of the LDPC flight encoder and decoder, its applications and status.

  9. Statistical mechanics of broadcast channels using low-density parity-check codes.

    PubMed

    Nakamura, Kazutaka; Kabashima, Yoshiyuki; Morelos-Zaragoza, Robert; Saad, David

    2003-03-01

    We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time sharing methods when algebraic codes are used. The statistical physics based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC based time sharing codes while the best performance, when received transmissions are optimally decoded, is bounded by the time sharing limit.

  10. Accumulate-Repeat-Accumulate-Accumulate-Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Thorpe, Jeremy

    2004-01-01

    Inspired by recently proposed Accumulate-Repeat-Accumulate (ARA) codes [15], in this paper we propose a channel coding scheme called Accumulate-Repeat-Accumulate-Accumulate (ARAA) codes. These codes can be seen as serial turbo-like codes or as a subclass of Low Density Parity Check (LDPC) codes, and they have a projected graph or protograph representation; this allows for a high-speed iterative decoder implementation using belief propagation. An ARAA code can be viewed as a precoded Repeat-and-Accumulate (RA) code with puncturing in concatenation with another accumulator, where simply an accumulator is chosen as the precoder; thus ARAA codes have a very fast encoder structure. Using density evolution on their associated protographs, we find examples of rate-1/2 ARAA codes with maximum variable node degree 4 for which a minimum bit-SNR as low as 0.21 dB from the channel capacity limit can be achieved as the block size goes to infinity. Such a low threshold cannot be achieved by RA or Irregular RA (IRA) or unstructured irregular LDPC codes with the same constraint on the maximum variable node degree. Furthermore, by puncturing the accumulators we can construct families of higher-rate ARAA codes with thresholds that stay uniformly close to their respective channel capacity thresholds. Iterative decoding simulation results show comparable performance with the best-known LDPC codes but with very low error floor even at moderate block sizes.
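
    The encoder chain described above is simple enough to sketch directly; the following toy version wires an accumulator precoder, repetition, an interleaver, and two accumulators in series. Puncturing patterns and the protograph details that produce the quoted thresholds are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

def accumulate(bits):
    """Running XOR (a 1/(1+D) convolutional encoder), the 'accumulate' stage."""
    return np.bitwise_xor.accumulate(bits)

def araa_encode(msg, reps=3):
    """Toy Accumulate-Repeat-Accumulate-Accumulate chain.

    Structure only: precoder-accumulate, repeat, permute, then two
    accumulators; real ARAA designs add puncturing chosen via protograph
    density evolution, which is not modeled here.
    """
    pre = accumulate(msg)                                 # accumulator precoder
    repeated = np.repeat(pre, reps)                       # repetition
    permuted = repeated[rng.permutation(repeated.size)]   # interleaver
    return accumulate(accumulate(permuted))               # two accumulators

msg = rng.integers(0, 2, size=8).astype(np.uint8)
print(araa_encode(msg))
```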

  11. LDPC-based iterative joint source-channel decoding for JPEG2000.

    PubMed

    Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane

    2007-02-01

    A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.

  12. 428-Gb/s single-channel coherent optical OFDM transmission over 960-km SSMF with constellation expansion and LDPC coding.

    PubMed

    Yang, Qi; Al Amin, Abdullah; Chen, Xi; Ma, Yiran; Chen, Simin; Shieh, William

    2010-08-02

    High-order modulation formats and advanced error correcting codes (ECC) are two promising techniques for improving the performance of ultrahigh-speed optical transport networks. In this paper, we present record receiver sensitivity for 107 Gb/s CO-OFDM transmission via constellation expansion to 16-QAM and rate-1/2 LDPC coding. We also show the single-channel transmission of a 428-Gb/s CO-OFDM signal over 960-km standard-single-mode-fiber (SSMF) without Raman amplification.

  13. LDPC product coding scheme with extrinsic information for bit patterned media recording

    NASA Astrophysics Data System (ADS)

    Jeong, Seongkwon; Lee, Jaejin

    2017-05-01

    Since the density limit of the current perpendicular magnetic storage system will soon be reached, bit patterned media recording (BPMR) is a promising candidate for the next generation storage system to achieve an areal density beyond 1 Tb/in2. Each recording bit is stored in a fabricated magnetic island and the space between the magnetic islands is nonmagnetic in BPMR. To approach recording densities of 1 Tb/in2, the spacing of the magnetic islands must be less than 25 nm. Consequently, severe inter-symbol interference (ISI) and inter-track interference (ITI) occur. ITI and ISI degrade the performance of BPMR. In this paper, we propose a low-density parity check (LDPC) product coding scheme that exploits extrinsic information for BPMR. This scheme shows an improved bit error rate performance compared to that in which one LDPC code is used.

  14. Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique

    NASA Astrophysics Data System (ADS)

    Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi

    Reducing the power dissipation of LDPC code decoders is a major challenge in applying them to practical digital communication systems. In this paper, we propose a low-power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) the intermediate message-compression technique enables the decoder to reduce the required memory capacity and write power dissipation; (ii) a clock-gated shift-register-based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of these two techniques enables the decoder to reduce the power dissipation while maintaining the decoding throughput. Simulation results show that the proposed architecture improves the power efficiency by up to 52% and 18% compared with decoders based on the overlapped schedule and the rapid convergence schedule, respectively, without the proposed techniques.

  15. 45 Gb/s low complexity optical front-end for soft-decision LDPC decoders.

    PubMed

    Sakib, Meer Nazmus; Moayedi, Monireh; Gross, Warren J; Liboiron-Ladouceur, Odile

    2012-07-30

    In this paper, a low-complexity and energy-efficient 45 Gb/s soft-decision optical front-end to be used with soft-decision low-density parity-check (LDPC) decoders is demonstrated. The results show that the optical front-end exhibits net coding gains of 7.06 dB and 9.62 dB at post-forward-error-correction bit error rates of 10(-7) and 10(-12) for the long-block-length LDPC(32768,26803) code. The gain over a hard-decision front-end is 1.9 dB for this code. It is shown that the soft-decision circuit can also be used as a 2-bit flash-type analog-to-digital converter (ADC), in conjunction with equalization schemes. At a bit rate of 15 Gb/s, using RS(255,239), LDPC(672,336), (672,504), (672,588), and (1440,1344) codes with a 6-tap finite impulse response (FIR) equalizer results in optical power savings of 3, 5, 7, 9.5 and 10.5 dB, respectively. The 2-bit flash ADC consumes only 2.71 W at 32 GSamples/s. At 45 GSamples/s the power consumption is estimated to be 4.95 W.
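
    A 2-bit flash-type front end effectively maps each received sample into one of four soft regions. A minimal sketch of such a quantizer, with placeholder thresholds and LLR magnitudes, is given below.

```python
import numpy as np

def two_bit_soft(samples, t=0.5):
    """Map received samples to 2-bit soft values for a soft-decision decoder.

    Thresholds at -t, 0, +t split the input into four regions (strong 0,
    weak 0, weak 1, strong 1), mimicking a 2-bit flash ADC front end; the
    LLR magnitudes below are illustrative placeholders.
    """
    levels = np.digitize(samples, [-t, 0.0, t])      # region index 0..3
    llr_table = np.array([-4.0, -1.0, 1.0, 4.0])     # assumed LLR per region
    return llr_table[levels]

rx = np.array([0.9, -0.2, 0.1, -1.3])
print(two_bit_soft(rx))    # [ 4. -1.  1. -4.]
```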

  16. High-efficiency reconciliation for continuous variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Bai, Zengliang; Yang, Shenshen; Li, Yongmin

    2017-04-01

    Quantum key distribution (QKD) is the most mature application of quantum information technology. Information reconciliation is a crucial step in QKD and significantly affects the final secret key rates shared between two legitimate parties. We analyze and compare various construction methods of low-density parity-check (LDPC) codes and design high-performance irregular LDPC codes with a block length of 10^6. Starting from these good codes and exploiting the slice reconciliation technique based on multilevel coding and multistage decoding, we realize high-efficiency Gaussian key reconciliation with efficiency higher than 95% for signal-to-noise ratios above 1. Our demonstrated method can be readily applied in continuous variable QKD.

  17. PMD compensation in multilevel coded-modulation schemes with coherent detection using BLAST algorithm and iterative polarization cancellation.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2008-09-15

    We present two PMD compensation schemes suitable for use in multilevel (M ≥ 2) block-coded modulation schemes with coherent detection. The first scheme is based on a BLAST-type polarization-interference cancellation scheme, and the second is based on iterative polarization cancellation. Both schemes use LDPC codes as channel codes. The proposed PMD compensation schemes are evaluated by employing coded-OFDM and coherent detection. When used in combination with girth-10 LDPC codes, those schemes outperform polarization-time-coding-based OFDM by 1 dB at a BER of 10(-9), and provide two times higher spectral efficiency. The proposed schemes perform comparably and are able to compensate even 1200 ps of differential group delay with negligible penalty.

  18. Statistical physics inspired energy-efficient coded-modulation for optical communications.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2012-04-15

    Because Shannon's entropy can be obtained by Stirling's approximation of the thermodynamic entropy, statistical physics energy minimization methods are directly applicable to signal constellation design. We demonstrate that statistical physics inspired energy-efficient (EE) signal constellation designs, in combination with large-girth low-density parity-check (LDPC) codes, significantly outperform conventional LDPC-coded polarization-division multiplexed quadrature amplitude modulation schemes. We also describe an EE signal constellation design algorithm. Finally, we propose the discrete-time implementation of a D-dimensional transceiver and the corresponding EE polarization-division multiplexed system.

  19. Photonic entanglement-assisted quantum low-density parity-check encoders and decoders.

    PubMed

    Djordjevic, Ivan B

    2010-05-01

    I propose encoder and decoder architectures for entanglement-assisted (EA) quantum low-density parity-check (LDPC) codes suitable for all-optical implementation. I show that two basic gates needed for EA quantum error correction, namely, controlled-NOT (CNOT) and Hadamard gates can be implemented based on Mach-Zehnder interferometer. In addition, I show that EA quantum LDPC codes from balanced incomplete block designs of unitary index require only one entanglement qubit to be shared between source and destination.

  20. Capacity Maximizing Constellations

    NASA Technical Reports Server (NTRS)

    Barsoum, Maged; Jones, Christopher

    2010-01-01

    Some non-traditional signal constellations have been proposed for transmission of data over the Additive White Gaussian Noise (AWGN) channel using such channel-capacity-approaching codes as low-density parity-check (LDPC) or turbo codes. Computational simulations have shown performance gains of more than 1 dB over traditional constellations. These gains could be translated to bandwidth-efficient communications, variously, over longer distances, using less power, or using smaller antennas. The proposed constellations have been used in a bit-interleaved coded modulation system employing state-of-the-art LDPC codes. In computational simulations, these constellations were shown to afford performance gains over traditional constellations, as predicted by the gap between the parallel decoding capacity of the constellations and the Gaussian capacity.

  1. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    PubMed

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.
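
    The defining feature of a GLDPC code is that each generalized constraint node imposes a local code constraint rather than a single parity check. The sketch below tests word validity against Hamming(7,4) local codes on a toy two-group layout; the grouping is hypothetical, since real designs choose it to keep the global graph sparse.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code used as the local code.
H_local = np.array([[1, 0, 1, 0, 1, 0, 1],
                    [0, 1, 1, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def gldpc_valid(word, groups):
    """A generalized check passes iff its attached bits form a local codeword.

    `groups` lists, for each generalized constraint node, the indices of the
    variable nodes it touches.
    """
    return all(not np.any(H_local @ word[g] % 2) for g in groups)

word = np.zeros(14, dtype=np.uint8)            # the all-zero word is valid
groups = [np.arange(0, 7), np.arange(7, 14)]
print(gldpc_valid(word, groups))               # True
```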

  2. Maximum likelihood decoding analysis of Accumulate-Repeat-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    Repeat-Accumulate (RA) codes are the simplest turbo-like codes that achieve good performance. However, they cannot compete with turbo codes or low-density parity-check (LDPC) codes as far as performance is concerned. The Accumulate-Repeat-Accumulate (ARA) codes, a subclass of LDPC codes, are obtained by adding a precoder in front of RA codes with puncturing, where an accumulator is chosen as the precoder. These codes not only are very simple, but also achieve excellent performance with iterative decoding. In this paper, the performance of these codes with maximum likelihood (ML) decoding is analyzed and compared to random codes by very tight bounds. The weight distribution of some simple ARA codes is obtained, and through the existing tightest bounds we show that the ML SNR threshold of ARA codes approaches the performance of random codes very closely. We also show that the use of a precoder improves the SNR threshold, but the interleaving gain remains unchanged with respect to RA codes with puncturing.

  3. A code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check codes

    NASA Astrophysics Data System (ADS)

    Bai, Cheng-lin; Cheng, Zhi-hui

    2016-09-01

    In order to further improve the carrier synchronization estimation range and accuracy at low signal-to-noise ratio (SNR), this paper proposes a code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check (NB-LDPC) codes to study the polarization-division-multiplexing coherent optical orthogonal frequency division multiplexing (PDM-CO-OFDM) system performance with quadrature phase shift keying (QPSK) and 16 quadrature amplitude modulation (16-QAM) formats. The simulation results indicate that this algorithm can enlarge the frequency and phase offset estimation ranges and greatly enhance the accuracy of the system, and that the bit error rate (BER) performance of the system is improved effectively compared with that of a system employing the traditional NB-LDPC code-aided carrier synchronization algorithm.

  4. Performance optimization of PM-16QAM transmission system enabled by real-time self-adaptive coding.

    PubMed

    Qu, Zhen; Li, Yao; Mo, Weiyang; Yang, Mingwei; Zhu, Shengxiang; Kilper, Daniel C; Djordjevic, Ivan B

    2017-10-15

    We experimentally demonstrate self-adaptive coded 5×100 Gb/s WDM polarization-multiplexed 16 quadrature amplitude modulation transmission over a 100 km fiber link, enabled by a real-time control plane. The real-time optical signal-to-noise ratio (OSNR) is measured using an optical performance monitoring device. The OSNR measurement is processed and fed back using control plane logic and messaging to the transmitter side for code adaptation, where the binary data are adaptively encoded with three types of large-girth low-density parity-check (LDPC) codes with code rates of 0.8, 0.75, and 0.7. The total code-adaptation latency is measured to be 2273 ms. Compared with transmission without adaptation, average net capacity improvements of 102%, 36%, and 7.5% are obtained, respectively, by adaptive LDPC coding.

  5. Performance analysis of LDPC codes on OOK terahertz wireless channels

    NASA Astrophysics Data System (ADS)

    Chun, Liu; Chang, Wang; Jun-Cheng, Cao

    2016-02-01

    Atmospheric absorption, scattering, and scintillation are the major causes of degraded transmission quality in terahertz (THz) wireless communications. An error control coding scheme based on low-density parity-check (LDPC) codes with a soft-decision decoding algorithm is proposed to improve the bit-error-rate (BER) performance of an on-off keying (OOK) modulated THz signal propagating through the atmospheric channel. The THz wave propagation characteristics and the channel model in the atmosphere are set up. Numerical simulations validate the strong performance of LDPC codes against atmospheric fading and demonstrate their huge potential for future ultra-high-speed (beyond Gbps) THz communications.

  6. Comparison of soft-input-soft-output detection methods for dual-polarized quadrature duobinary system

    NASA Astrophysics Data System (ADS)

    Chang, Chun; Huang, Benxiong; Xu, Zhengguang; Li, Bin; Zhao, Nan

    2018-02-01

    Three soft-input-soft-output (SISO) detection methods for dual-polarized quadrature duobinary (DP-QDB), including maximum-logarithmic-maximum-a-posteriori-probability-algorithm (Max-log-MAP)-based detection, soft-output-Viterbi-algorithm (SOVA)-based detection, and a proposed SISO detection, all of which can be combined with SISO decoding, are presented. The three detection methods are investigated by simulation at 128 Gb/s in five-channel wavelength-division-multiplexed uncoded and low-density parity-check (LDPC) coded DP-QDB systems. Max-log-MAP-based detection needs the returning-to-initial-states (RTIS) process despite having the best performance. When an LDPC code with a code rate of 0.83 is used, the detecting-and-decoding scheme with the proposed SISO detection does not need RTIS and has better bit error rate (BER) performance than the scheme with SOVA-based detection. The former can reduce the optical signal-to-noise ratio (OSNR) requirement (at BER=10-5) by 2.56 dB relative to the latter. The application of the SISO iterative detection in LDPC-coded DP-QDB systems achieves a good trade-off among transmission efficiency, OSNR requirement, and transmission distance, compared with the other two SISO methods.

  7. FPGA implementation of advanced FEC schemes for intelligent aggregation networks

    NASA Astrophysics Data System (ADS)

    Zou, Ding; Djordjevic, Ivan B.

    2016-02-01

    In state-of-the-art fiber-optic communication systems, fixed forward error correction (FEC) and constellation size are employed. While it is important to closely approach the Shannon limit by using turbo product codes (TPC) and low-density parity-check (LDPC) codes with soft-decision decoding (SDD), rate-adaptive techniques, which enable increased information rates over short links and reliable transmission over long links, are likely to become more important with ever-increasing network traffic demands. In this invited paper, we describe a rate-adaptive non-binary LDPC coding technique and demonstrate its flexibility and good performance, exhibiting no error floor at BERs down to 10-15 over the entire code rate range, by FPGA-based emulation, making it a viable solution for next-generation high-speed intelligent aggregation networks.

  8. Information-reduced Carrier Synchronization of Iterative Decoded BPSK and QPSK using Soft Decision (Extrinsic) Feedback

    NASA Technical Reports Server (NTRS)

    Simon, Marvin; Valles, Esteban; Jones, Christopher

    2008-01-01

    This paper addresses the carrier-phase estimation problem under low SNR conditions as are typical of turbo- and LDPC-coded applications. In previous publications by the first author, closed-loop carrier synchronization schemes for error-correction coded BPSK and QPSK modulation were proposed that were based on feeding back hard data decisions at the input of the loop, the purpose being to remove the modulation prior to attempting to track the carrier phase as opposed to the more conventional decision-feedback schemes that incorporate such feedback inside the loop. In this paper, we consider an alternative approach wherein the extrinsic soft information from the iterative decoder of turbo or LDPC codes is instead used as the feedback.

  9. Percolation bounds for decoding thresholds with correlated erasures in quantum LDPC codes

    NASA Astrophysics Data System (ADS)

    Hamilton, Kathleen; Pryadko, Leonid

    Correlations between errors can dramatically affect decoding thresholds, in some cases eliminating the threshold altogether. We analyze the existence of a threshold for quantum low-density parity-check (LDPC) codes in the case of correlated erasures. When erasures are positively correlated, the corresponding multivariate Bernoulli distribution can be modeled in terms of cluster errors, where qubits in clusters of various sizes can be marked all at once. In a code family with distance scaling as a power law of the code length, erasures can always be corrected below percolation on a qubit adjacency graph associated with the code. We bound this correlated percolation transition by weighted (uncorrelated) percolation on a specially constructed cluster connectivity graph, and apply our recent results to construct several bounds for the latter.

  10. Efficacy analysis of LDPC coded APSK modulated differential space-time-frequency coded for wireless body area network using MB-pulsed OFDM UWB technology.

    PubMed

    Manimegalai, C T; Gauni, Sabitha; Kalimuthu, K

    2017-12-04

    Wireless body area network (WBAN) is a breakthrough technology in healthcare areas such as hospitals and telemedicine. The human body is a complex mixture of different tissues, and the nature of electromagnetic signal propagation is expected to be distinct in each of these tissues. This forms the basis for the WBAN, which differs from other environments. In this paper, knowledge of the Ultra Wide Band (UWB) channel is exploited in the WBAN (IEEE 802.15.6) system. Measurements of parameters in the frequency range of 3.1-10.6 GHz are taken. The proposed system transmits data at up to 480 Mbps by using LDPC-coded APSK-modulated differential space-time-frequency coded MB-OFDM to increase throughput and power efficiency.

  11. An LDPC Decoder Architecture for Wireless Sensor Network Applications

    PubMed Central

    Biroli, Andrea Dario Giancarlo; Martina, Maurizio; Masera, Guido

    2012-01-01

    The pervasive use of wireless sensors in a growing spectrum of human activities reinforces the need for devices with low energy dissipation. In this work, coded communication between a couple of wireless sensor devices is considered as a method to reduce the dissipated energy per transmitted bit with respect to uncoded communication. Different Low Density Parity Check (LDPC) codes are considered to this purpose and post layout results are shown for a low-area low-energy decoder, which offers percentage energy savings with respect to the uncoded solution in the range of 40%–80%, depending on considered environment, distance and bit error rate. PMID:22438724

  13. Rate-Compatible LDPC Codes with Linear Minimum Distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel

    2009-01-01

    A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either fixed input block size or fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first-mentioned submethod is useful for applications in which there are requirements for rate-compatible codes that have fixed input block sizes. These are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-block-size submethod is useful for applications in which framing constraints are imposed on the physical layers of affected communication systems. An example of such a system is one that conforms to one of many new wireless-communication standards that involve the use of orthogonal frequency-division modulation.
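
    The fixed-input-block-size idea can be illustrated by simple parity puncturing: keep the k information bits fixed and vary only how many parity bits are transmitted. The indexing below is illustrative, not the patented protograph rule.

```python
import numpy as np

def puncture(codeword, k, n_keep):
    """Raise the code rate by transmitting only the first n_keep parity bits.

    With a fixed input block of k information bits, varying only the number
    of transmitted parity bits yields a rate-compatible family, mirroring
    the fixed-input-block-size submethod described above.
    """
    info, parity = codeword[:k], codeword[k:]
    return np.concatenate([info, parity[:n_keep]])

cw = np.arange(12)          # stand-in codeword: 8 info + 4 parity positions
for n_keep in (4, 2, 1):
    tx = puncture(cw, k=8, n_keep=n_keep)
    print(f"rate {8/len(tx):.2f}:", tx)
```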

  14. Adaptive software-defined coded modulation for ultra-high-speed optical transport

    NASA Astrophysics Data System (ADS)

    Djordjevic, Ivan B.; Zhang, Yequn

    2013-10-01

    In optically-routed networks, different wavelength channels carrying the traffic to different destinations can have quite different optical signal-to-noise ratios (OSNRs), and the signal is differently impacted by various channel impairments. Regardless of the data destination, an optical transport system (OTS) must provide the target bit-error rate (BER) performance. To provide the target BER regardless of the data destination, we adjust the forward error correction (FEC) strength. Depending on the information obtained from the monitoring channels, we select the appropriate code rate matching the OSNR range into which the current channel OSNR falls. To avoid frame synchronization issues, we keep the codeword length fixed independent of the FEC code being employed. The common denominator is the employment of quasi-cyclic (QC-) LDPC codes in FEC. For high-speed implementation, low-complexity LDPC decoding algorithms are needed, and some of them will be described in this invited paper. Instead of conventional QAM-based modulation schemes, we employ signal constellations obtained by the optimum signal constellation design (OSCD) algorithm. To improve the spectral efficiency, we perform simultaneous rate adaptation and signal constellation size selection so that the product of the number of bits per symbol and the code rate is closest to the channel capacity. Further, we describe the advantages of using 4D signaling instead of polarization-division multiplexed (PDM) QAM, by using 4D MAP detection combined with LDPC coding in a turbo equalization fashion. Finally, to solve the problems related to the limited bandwidth of the information infrastructure, high energy consumption, and the heterogeneity of optical networks, we describe an adaptive energy-efficient hybrid coded-modulation scheme, which in addition to amplitude, phase, and polarization state employs spatial modes as additional basis functions for multidimensional coded modulation.
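
    The rate-selection step described above amounts to a table lookup: among the (bits per symbol, code rate) pairs whose required OSNR the monitored channel supports, pick the one with the highest spectral efficiency. A sketch with made-up OSNR requirements follows.

```python
def select_mode(osnr_db, modes, margin_db=1.0):
    """Pick the (bits/symbol, code rate) pair with the highest spectral
    efficiency whose required OSNR (plus margin) the channel supports.

    `modes` maps (bits_per_symbol, code_rate) -> required OSNR in dB; the
    numbers below are hypothetical placeholders for illustration.
    """
    feasible = [(b * r, b, r) for (b, r), req in modes.items()
                if osnr_db >= req + margin_db]
    if not feasible:
        return None
    return max(feasible)    # maximize bits/symbol x code rate

modes = {(2, 0.80): 8.0, (4, 0.75): 14.0, (6, 0.70): 20.0}   # hypothetical
print(select_mode(16.5, modes))   # -> (3.0, 4, 0.75)
```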

  15. Uplink Coding

    NASA Technical Reports Server (NTRS)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the objectives, meeting goals and overall NASA goals for the NASA Data Standards Working Group. The presentation includes information on the technical progress surrounding the objective, short LDPC codes, and the general results on the Pu-Pw tradeoff.

  16. Two-stage cross-talk mitigation in an orbital-angular-momentum-based free-space optical communication system.

    PubMed

    Qu, Zhen; Djordjevic, Ivan B

    2017-08-15

    We propose and experimentally demonstrate a two-stage cross-talk mitigation method in an orbital-angular-momentum (OAM)-based free-space optical communication system, which is enabled by combining spatial offset and low-density parity-check (LDPC) coded nonuniform signaling. Different from traditional OAM multiplexing, where the OAM modes are centrally aligned for copropagation, the adjacent OAM modes (OAM states 2 and -6 and OAM states -2 and 6) in our proposed scheme are spatially offset to mitigate the mode cross talk. Different from traditional rectangular modulation formats, which transmit equidistant signal points with uniform probability, the 5-quadrature amplitude modulation (5-QAM) and 9-QAM are introduced to relieve cross-talk-induced performance degradation. The 5-QAM and 9-QAM formats are based on the Huffman coding technique, which can potentially achieve great cross-talk tolerance by combining them with corresponding nonbinary LDPC codes. We demonstrate that cross talk can be reduced by 1.6 dB and 1 dB via spatial offset for OAM states ±2 and ±6, respectively. Compared to quadrature phase shift keying and 8-QAM formats, the LDPC-coded 5-QAM and 9-QAM are able to bring 1.1 dB and 5.4 dB performance improvements in the presence of atmospheric turbulence, respectively.

  17. An Efficient Downlink Scheduling Strategy Using Normal Graphs for Multiuser MIMO Wireless Systems

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh; Wu, Cheng-Hsuan; Lee, Yao-Nan; Wen, Chao-Kai

    Inspired by the success of the low-density parity-check (LDPC) codes in the field of error-control coding, in this paper we propose transforming the downlink multiuser multiple-input multiple-output scheduling problem into an LDPC-like problem using the normal graph. Based on the normal graph framework, soft information, which indicates the probability that each user will be scheduled to transmit packets at the access point through a specified angle-frequency sub-channel, is exchanged among the local processors to iteratively optimize the multiuser transmission schedule. Computer simulations show that the proposed algorithm can efficiently schedule simultaneous multiuser transmission which then increases the overall channel utilization and reduces the average packet delay.

  18. Improving soft FEC performance for higher-order modulations via optimized bit channel mappings.

    PubMed

    Häger, Christian; Graell i Amat, Alexandre; Brännström, Fredrik; Alvarado, Alex; Agrell, Erik

    2014-06-16

    Soft forward error correction with higher-order modulations is often implemented in practice via the pragmatic bit-interleaved coded modulation paradigm, where a single binary code is mapped to a nonbinary modulation. In this paper, we study the optimization of the mapping of the coded bits to the modulation bits for a polarization-multiplexed fiber-optical system without optical inline dispersion compensation. Our focus is on protograph-based low-density parity-check (LDPC) codes which allow for an efficient hardware implementation, suitable for high-speed optical communications. The optimization is applied to the AR4JA protograph family, and further extended to protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full field simulations via the split-step Fourier method are used to verify the analysis. The results show performance gains of up to 0.25 dB, which translate into a possible extension of the transmission reach by roughly up to 8%, without significantly increasing the system complexity.

  19. A novel decoding algorithm based on the hierarchical reliable strategy for SCG-LDPC codes in optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Tong, Qing-zhen; Huang, Sheng; Wang, Yong

    2013-11-01

    An effective hierarchical reliable belief propagation (HRBP) decoding algorithm is proposed according to the structural characteristics of systematically constructed Gallager low-density parity-check (SCG-LDPC) codes. The novel decoding algorithm combines layered iteration with a reliability judgment, and can greatly reduce the number of variable nodes involved in the subsequent iteration process and accelerate the convergence rate. Simulation results for the SCG-LDPC(3969,3720) code show that the novel HRBP decoding algorithm can greatly reduce the amount of computation while maintaining performance, compared with the traditional belief propagation (BP) algorithm. The bit error rate (BER) of the HRBP algorithm is comparable at the threshold value of 15, and in the subsequent iteration process the number of variable nodes for the HRBP algorithm can be reduced by about 70% at high signal-to-noise ratio (SNR) compared with the BP algorithm. When the threshold value is further increased, the HRBP algorithm gradually degenerates into the layered-BP algorithm, but at a BER of 10-7 and a maximal iteration number of 30, the net coding gain (NCG) of the HRBP algorithm is 0.2 dB more than that of the BP algorithm, and the average number of iterations can be reduced by about 40% at high SNR. Therefore, the novel HRBP decoding algorithm is more suitable for optical communication systems.
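
    The core of the algorithm, layered updates plus a reliability judgment that freezes converged variable nodes, can be sketched as follows. The min-sum layer update and the freezing rule are simplifications of the paper's scheme, with the default threshold echoing the value of 15 quoted above.

```python
import numpy as np

def hrbp_like_decode(H, llr_ch, threshold=15.0, max_iter=30):
    """Layered min-sum with a reliability judgment, in the spirit of HRBP.

    After each full pass, variable nodes whose total |LLR| exceeds
    `threshold` are treated as reliable and skipped in later updates,
    shrinking the active set as iterations proceed (a sketch; the paper's
    exact schedule and update rules may differ).
    """
    m, n = H.shape
    total = llr_ch.astype(float)
    msgs = np.zeros((m, n))                    # check-to-variable messages
    active = np.ones(n, dtype=bool)
    for _ in range(max_iter):
        for row in range(m):                   # one layer per check row
            idx = np.flatnonzero(H[row])
            upd = idx[active[idx]]
            if upd.size == 0:
                continue
            v2c = total[idx] - msgs[row, idx]  # extrinsic variable-to-check
            for j in upd:
                others = v2c[idx != j]
                new = np.prod(np.sign(others)) * np.abs(others).min()
                total[j] += new - msgs[row, j]
                msgs[row, j] = new
        active = np.abs(total) < threshold     # freeze reliable nodes
        hard = (total < 0).astype(np.uint8)
        if not np.any(H @ hard % 2):           # all parity checks satisfied
            break
    return (total < 0).astype(np.uint8)

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]], dtype=np.uint8)
llr = np.array([2.1, -0.3, 1.7, 2.5, 0.9, 1.4])
print(hrbp_like_decode(H, llr))   # recovers the all-zero codeword
```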

  20. Adaptive transmission based on multi-relay selection and rate-compatible LDPC codes

    NASA Astrophysics Data System (ADS)

    Su, Hualing; He, Yucheng; Zhou, Lin

    2017-08-01

    In order to adapt to dynamically changing channel conditions and improve the transmission reliability of the system, a cooperative system of rate-compatible low-density parity-check (RC-LDPC) codes combined with a multi-relay selection protocol is proposed. Traditional relay selection protocols consider only the channel state information (CSI) of the source-relay and relay-destination links. The multi-relay selection protocol proposed in this paper additionally takes the CSI between relays into account in order to obtain more opportunities for collaboration. Additionally, the ideas of hybrid automatic repeat request (HARQ) and rate compatibility are introduced. Simulation results show that the transmission reliability of the system can be significantly improved by the proposed protocol.

  1. Optimal signal constellation design for ultra-high-speed optical transport in the presence of nonlinear phase noise.

    PubMed

    Liu, Tao; Djordjevic, Ivan B

    2014-12-29

    In this paper, we first describe an optimal signal constellation design algorithm suitable for coherent optical channels dominated by linear phase noise. We then modify this algorithm to be suitable for channels dominated by nonlinear phase noise. In the optimization procedure, the proposed algorithm uses the cumulative log-likelihood function instead of the Euclidean distance. Further, an LDPC-coded modulation scheme is proposed to be used in combination with signal constellations obtained by the proposed algorithm. Monte Carlo simulations indicate that the LDPC-coded modulation schemes employing the new constellation sets, obtained by our new signal constellation design algorithm, significantly outperform corresponding QAM constellations in terms of transmission distance and have better nonlinearity tolerance.

  2. Secret information reconciliation based on punctured low-density parity-check codes for continuous-variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Jiang, Xue-Qin; Huang, Peng; Huang, Duan; Lin, Dakai; Zeng, Guihua

    2017-02-01

    Achieving information-theoretic security with practical complexity is of great interest for continuous-variable quantum key distribution in the postprocessing procedure. In this paper, we propose a reconciliation scheme based on punctured low-density parity-check (LDPC) codes. Compared to the well-known multidimensional reconciliation scheme, the present scheme has lower time complexity. Especially when the chosen punctured LDPC code achieves the Shannon capacity, the proposed reconciliation scheme can remove the information that has been leaked to an eavesdropper in the quantum transmission phase. Therefore, no information is leaked to the eavesdropper after the reconciliation stage. This indicates that the privacy amplification algorithm of the postprocessing procedure is no longer needed after the reconciliation process. These features lead to a higher secret key rate, optimal performance, and availability for the involved quantum key distribution scheme.

  3. Bilayer Protograph Codes for Half-Duplex Relay Channels

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; VanNguyen, Thuy; Nosratinia, Aria

    2013-01-01

    Direct to Earth return links are limited by the size and power of lander devices. A standard alternative is provided by a two-hop return link: a proximity link (from lander to orbiter relay) and a deep-space link (from orbiter relay to Earth). Although direct to Earth return links are limited by the size and power of lander devices, using an additional link and a proposed coding for relay channels, one can obtain a more reliable signal. Although significant progress has been made in the relay coding problem, existing codes must be painstakingly optimized to match a single set of channel conditions, many of them do not offer easy encoding, and most of them do not have a structured design. A high-performing LDPC (low-density parity-check) code for the relay channel addresses simultaneously two important issues: a code structure that allows low encoding complexity, and a flexible rate-compatible code that allows matching to various channel conditions. Most of the previous high-performance LDPC codes for the relay channel are tightly optimized for a given channel quality, and are not easily adapted without extensive re-optimization for various channel conditions. This code for the relay channel combines structured design and easy encoding with rate compatibility to allow adaptation to the three links involved in the relay channel, and furthermore offers very good performance. The proposed code is constructed by synthesizing a bilayer structure with a protograph. In addition to the contribution to relay encoding, an improved family of protograph codes was produced for the point-to-point AWGN (additive white Gaussian noise) channel whose high-rate members enjoy thresholds that are within 0.07 dB of capacity. These LDPC relay codes address three important issues in an integrative manner: low encoding complexity, modular structure allowing for easy design, and rate compatibility so that the code can be easily matched to a variety of channel conditions without extensive re-optimization. The main problem of half-duplex relay coding can be reduced to the simultaneous design of two codes at two rates and two SNRs (signal-to-noise ratios), such that one is a subset of the other. This problem can be addressed by forceful optimization, but a clever method of addressing this problem is via the bilayer lengthened (BL) LDPC structure. This method uses a bilayer Tanner graph to make the two codes while using a concept of "parity forwarding" with subsequent successive decoding that removes the need to directly address the issue of uneven SNRs among the symbols of a given codeword. This method is attractive in that it addresses some of the main issues in the design of relay codes, but it does not by itself give rise to highly structured codes with simple encoding, nor does it give rate-compatible codes. The main contribution of this work is to construct a class of codes that simultaneously possess a bilayer parity-forwarding mechanism, while also benefiting from the properties of protograph codes: easy encoding, a modular design, and rate compatibility.

  4. Nonlinear Demodulation and Channel Coding in EBPSK Scheme

    PubMed Central

    Chen, Xianqing; Wu, Lenan

    2012-01-01

    The extended binary phase shift keying (EBPSK) is an efficient modulation technique, and a special impacting filter (SIF) is used in its demodulator to improve the bit error rate (BER) performance. However, the conventional threshold decision cannot achieve the optimum performance, and the SIF makes it more difficult to obtain the posterior probability for LDPC decoding. In this paper, we concentrate not only on reducing the BER of demodulation, but also on providing accurate posterior probability estimates (PPEs). A new approach to nonlinear demodulation based on the support vector machine (SVM) classifier is introduced. The SVM method, which selects only a few sampling points from the filter output, was used for getting PPEs. The simulation results show that an accurate posterior probability can be obtained with this method and that the BER performance can be improved significantly by applying LDPC codes. Moreover, we analyzed the effect of obtaining the posterior probability with different methods and different sampling rates. We show that the SVM method has more advantages under bad conditions and is less sensitive to the sampling rate than other methods. Thus, SVM is an effective method for EBPSK demodulation and for getting the posterior probability for LDPC decoding. PMID:23213281
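
    The posterior-probability step can be sketched with scikit-learn's SVC, whose probability=True option provides Platt-scaled class probabilities that convert directly into LLRs for an LDPC decoder. The toy features below are synthetic stand-ins for the impacting-filter samples, not real EBPSK data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Synthetic stand-in for the SIF output: a few selected samples per symbol,
# with the two bit classes separated in mean.
X0 = rng.normal(0.0, 0.3, size=(200, 3))      # features for bit 0
X1 = rng.normal(1.0, 0.3, size=(200, 3))      # features for bit 1
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

# SVM classifier with probability outputs (Platt scaling).
clf = SVC(kernel="rbf", probability=True).fit(X, y)

def llr_from_svm(samples):
    """Turn posterior-probability estimates into LLRs for LDPC decoding."""
    p = clf.predict_proba(samples)            # columns follow clf.classes_
    p = np.clip(p, 1e-12, 1.0)
    return np.log(p[:, 0] / p[:, 1])          # log P(bit=0) / P(bit=1)

test = rng.normal(0.0, 0.3, size=(5, 3))
print(llr_from_svm(test))                     # large positive values => bit 0
```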

  6. Accumulate Repeat Accumulate Coded Modulation

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative coded modulation scheme called 'Accumulate Repeat Accumulate Coded Modulation' (ARA coded modulation). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, combined with high-level modulation. Thus, at the decoder, belief propagation can be used for iterative decoding of ARA coded modulation on a graph, provided that a demapper first transforms the received in-phase and quadrature samples into bit reliabilities, as sketched below.
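
    A textbook demapper of this kind computes exact bit LLRs by a log-sum-exp over the labeled constellation points; the sketch below is a generic illustration with an assumed Gray-labeled QPSK example, not code from the paper.

      import numpy as np

      def bit_llrs(y, points, labels, noise_var):
          """LLR of each label bit: log-sum-exp of -|y - s|^2 / noise_var over the
          points whose bit is 0, minus the same over the points whose bit is 1."""
          metric = -np.abs(y[:, None] - points[None, :]) ** 2 / noise_var
          llrs = np.empty((len(y), labels.shape[1]))
          for k in range(labels.shape[1]):
              llrs[:, k] = (np.logaddexp.reduce(metric[:, labels[:, k] == 0], axis=1)
                            - np.logaddexp.reduce(metric[:, labels[:, k] == 1], axis=1))
          return llrs

      # Gray-labeled QPSK, 2 bits per symbol
      points = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
      labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])
      rng = np.random.default_rng(3)
      y = points[[0, 2]] + 0.1 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
      print(bit_llrs(y, points, labels, noise_var=0.02))   # signs recover the sent bits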

  6. Two-terminal video coding.

    PubMed

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both the motion vectors and the motion-compensated residual frames of the right sequence is generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences, using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients, give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  7. Joint Schemes for Physical Layer Security and Error Correction

    ERIC Educational Resources Information Center

    Adamo, Oluwayomi

    2011-01-01

    The major challenges facing resource-constrained wireless devices are error resilience, security and speed. Three joint schemes are presented in this research, which can be broadly divided into error-correction based and cipher based. The error-correction based ciphers take advantage of the properties of LDPC codes and the Nordstrom-Robinson code. A…

  8. Encoders for block-circulant LDPC codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)

    2009-01-01

    Methods and apparatus to encode message input symbols in accordance with an accumulate-repeat-accumulate code with repetition three or four are disclosed. Block circulant matrices are used. A first method and apparatus make use of the block-circulant structure of the parity check matrix. A second method and apparatus use block-circulant generator matrices.
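
    A generic sketch of that structure (standard quasi-cyclic expansion; the toy shift matrix below is illustrative, not one of the patented designs): the parity-check matrix is assembled from circulant permutation blocks, each defined by a single shift value, so only the small matrix of shifts needs to be stored.

      import numpy as np

      def circulant(size, shift):
          """size x size identity cyclically right-shifted by 'shift' (-1 = all-zero block)."""
          if shift < 0:
              return np.zeros((size, size), dtype=int)
          return np.roll(np.eye(size, dtype=int), shift, axis=1)

      def expand(shifts, size):
          """Expand a matrix of shift values into the full block-circulant matrix."""
          return np.block([[circulant(size, s) for s in row] for row in shifts])

      shifts = [[0, 1, 2, -1],          # toy 2 x 4 base matrix of shifts
                [3, -1, 0, 4]]
      H = expand(shifts, 5)             # expanded with 5 x 5 circulants
      print(H.shape)                    # (10, 20); encoders exploit the circulant blocks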

  9. Information rates of probabilistically shaped coded modulation for a multi-span fiber-optic communication system with 64QAM

    NASA Astrophysics Data System (ADS)

    Fehenberger, Tobias

    2018-02-01

    This paper studies probabilistic shaping in a multi-span wavelength-division multiplexing optical fiber system with 64-ary quadrature amplitude modulation (QAM) input. In split-step fiber simulations and via an enhanced Gaussian noise model, three figures of merit are investigated: the signal-to-noise ratio (SNR), the achievable information rate (AIR) for capacity-achieving forward error correction (FEC) with bit-metric decoding, and the information rate achieved with low-density parity-check (LDPC) FEC. For the considered system parameters and different shaped input distributions, shaping is found to decrease the SNR by 0.3 dB yet simultaneously to increase the AIR by up to 0.4 bit per 4D symbol. The information rates of LDPC-coded modulation with shaped 64QAM input are improved by up to 0.74 bit per 4D symbol, which is larger than the shaping gain suggested by the AIRs. This additional increase is attributed to the reduced coding gap of the higher-rate code that is used for decoding the nonuniform QAM input.
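
    The shaped input distribution is not specified in the abstract; assuming the Maxwell-Boltzmann family that is standard in probabilistic shaping, the sketch below shows how the shaping parameter nu trades source entropy (rate) against average symbol energy on a 64QAM constellation.

      import numpy as np

      levels = np.arange(-7, 8, 2)                                 # 8 PAM levels per dimension
      X = np.array([a + 1j * b for a in levels for b in levels])   # the 64 QAM points

      def mb_distribution(nu):
          """Maxwell-Boltzmann input distribution P(x) proportional to exp(-nu |x|^2)."""
          p = np.exp(-nu * np.abs(X) ** 2)
          return p / p.sum()

      for nu in (0.0, 0.02, 0.05):                  # nu = 0 recovers uniform 64QAM
          p = mb_distribution(nu)
          entropy = -(p * np.log2(p)).sum()         # bits/symbol offered by the source
          energy = (p * np.abs(X) ** 2).sum()       # average energy falls as nu grows
          print(f"nu={nu:.2f}: H(X)={entropy:.3f} bit, E|X|^2={energy:.2f}")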

  10. High performance reconciliation for continuous-variable quantum key distribution with LDPC code

    NASA Astrophysics Data System (ADS)

    Lin, Dakai; Huang, Duan; Huang, Peng; Peng, Jinye; Zeng, Guihua

    2015-03-01

    Reconciliation is a significant procedure in a continuous-variable quantum key distribution (CV-QKD) system. It is employed to extract a secure secret key from the string resulting from transmission over the quantum channel between two users. However, the efficiency and speed of previous reconciliation algorithms are low, which limits the secure communication distance and the secure key rate of CV-QKD systems. In this paper, we propose a high-speed reconciliation algorithm employing a well-structured decoding scheme based on low-density parity-check (LDPC) codes; the complexity of the proposed algorithm is significantly reduced. By using a graphics processing unit (GPU), our method reaches a reconciliation speed of 25 Mb/s for a CV-QKD system, which is currently the highest reported speed and paves the way to high-speed CV-QKD.

  11. Design and performance investigation of LDPC-coded upstream transmission systems in IM/DD OFDM-PONs

    NASA Astrophysics Data System (ADS)

    Gong, Xiaoxue; Guo, Lei; Wu, Jingjing; Ning, Zhaolong

    2016-12-01

    In Intensity-Modulation Direct-Detection (IM/DD) Orthogonal Frequency Division Multiplexing Passive Optical Networks (OFDM-PONs), aside from the Subcarrier-to-Subcarrier Intermixing Interference (SSII) induced by square-law detection, the use of the same laser frequency for data sent from different Optical Network Units (ONUs) results in ONU-to-ONU Beating Interference (OOBI) at the receiver. To mitigate these interferences, we design a Low-Density Parity-Check (LDPC)-coded and spectrum-efficient upstream transmission system. A theoretical channel model is also derived in order to analyze the detrimental factors influencing system performance. Simulation results demonstrate that the receiver sensitivity is improved by 3.4 dB and 2.5 dB under QPSK and 8QAM, respectively, after 100 km Standard Single-Mode Fiber (SSMF) transmission. Furthermore, the spectral efficiency can be improved by about 50%.

  12. MIMO-OFDM System's Performance Using LDPC Codes for a Mobile Robot

    NASA Astrophysics Data System (ADS)

    Daoud, Omar; Alani, Omar

    This work deals with the performance of a Sniffer Mobile Robot (SNFRbot) based on spatially multiplexed wireless Orthogonal Frequency Division Multiplexing (OFDM) transmission technology. The use of Multi-Input Multi-Output (MIMO)-OFDM technology increases the wireless transmission rate without increasing transmission power or bandwidth. A generic multilayer architecture of the SNFRbot is proposed with low power consumption and low cost. Experimental results show the efficiency of sniffing deadly gases, sensing high temperatures and sending live video of the monitored situation. Moreover, simulation results show the performance achieved by tackling the Peak-to-Average Power Ratio (PAPR) problem of the used technology with Low Density Parity Check (LDPC) codes, and the effect of combating the PAPR on the bit error rate (BER) and the signal-to-noise ratio (SNR) over a Doppler-spread channel.

  13. A new LDPC decoding scheme for PDM-8QAM BICM coherent optical communication system

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Zhang, Wen-bo; Xi, Li-xia; Tang, Xian-feng; Zhang, Xiao-guang

    2015-11-01

    A new log-likelihood ratio (LLR) message estimation method is proposed for the polarization-division multiplexing eight-ary quadrature amplitude modulation (PDM-8QAM) bit-interleaved coded modulation (BICM) optical communication system. The formulation of the posterior probability is theoretically analyzed, and a way to reduce the pre-decoding bit error rate (BER) of the low-density parity-check (LDPC) decoder for PDM-8QAM constellations is presented. Simulation results show that the new scheme outperforms the traditional one: the post-decoding BER is reduced to 50% of that of the traditional post-decoding algorithm.

  14. Fixed-point Design of the Lattice-reduction-aided Iterative Detection and Decoding Receiver for Coded MIMO Systems

    DTIC Science & Technology

    2011-01-01

    Only a fragment of this record's abstract survives: "… reliability, e.g., Turbo codes [2] and low-density parity-check (LDPC) codes [3]. The challenge to apply both MIMO and ECC in wireless systems is on …". The report examines the fixed-point design of a lattice-reduction-aided iterative detection and decoding receiver for coded MIMO systems and illustrates the performance of coded LR-aided detectors.

  15. A Low-Complexity and High-Performance 2D Look-Up Table for LDPC Hardware Implementation

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh; Yang, Po-Hui; Lain, Jenn-Kaie; Chung, Tzu-Wen

    In this paper, we propose a low-complexity, high-efficiency two-dimensional look-up table (2D LUT) for carrying out the sum-product algorithm in the decoding of low-density parity-check (LDPC) codes. Instead of employing adders for the core operation when updating check node messages, in the proposed scheme, the main term and correction factor of the core operation are successfully merged into a compact 2D LUT. Simulation results indicate that the proposed 2D LUT not only attains close-to-optimal bit error rate performance but also enjoys a low complexity advantage that is suitable for hardware implementation.
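
    A minimal sketch of the idea (the quantization grid and range below are my own choices, not the paper's): the check-node core operation, the min term plus its correction factor, is tabulated once over quantized magnitudes so that the decoder replaces the adder-based computation with a single table read.

      import numpy as np

      def boxplus_mag(a, b):
          """Exact core operation for LLR magnitudes a, b >= 0: min term + correction."""
          return (np.minimum(a, b) + np.log1p(np.exp(-(a + b)))
                  - np.log1p(np.exp(-np.abs(a - b))))

      STEP, NLEV = 0.25, 64                             # 6-bit quantization, range [0, 16)
      grid = np.arange(NLEV) * STEP
      LUT = boxplus_mag(grid[:, None], grid[None, :])   # the 2D look-up table

      def boxplus(a, b):
          """Quantized check-node update; signs are handled separately as usual."""
          ia = min(int(abs(a) / STEP), NLEV - 1)
          ib = min(int(abs(b) / STEP), NLEV - 1)
          return np.sign(a) * np.sign(b) * LUT[ia, ib]

      print(boxplus(2.0, 3.0), boxplus_mag(2.0, 3.0))   # LUT entry vs. exact value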

  16. Measurement Techniques for Clock Jitter

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin; Schlesinger, Adam

    2012-01-01

    NASA is in the process of modernizing its communications infrastructure to accompany the development of a Crew Exploration Vehicle (CEV) to replace the shuttle. With this effort comes the opportunity to infuse more advanced coded modulation techniques, including low-density parity-check (LDPC) codes that offer greater coding gains than the current capability. However, in order to take full advantage of these codes, the ground segment receiver synchronization loops must be able to operate at a lower signal-to-noise ratio (SNR) than supported by equipment currently in use.

  17. A rate-compatible family of protograph-based LDPC codes built by expurgation and lengthening

    NASA Technical Reports Server (NTRS)

    Dolinar, Sam

    2005-01-01

    We construct a protograph-based rate-compatible family of low-density parity-check codes that cover a very wide range of rates from 1/2 to 16/17, perform within about 0.5 dB of their capacity limits for all rates, and can be decoded conveniently and efficiently with a common hardware implementation.

  18. Landsat Data Continuity Mission (LDCM) - Optimizing X-Band Usage

    NASA Technical Reports Server (NTRS)

    Garon, H. M.; Gal-Edd, J. S.; Dearth, K. W.; Sank, V. I.

    2010-01-01

    The NASA version of the low-density parity-check (LDPC) rate-7/8 code, shortened to the dimensions (8160, 7136), has been implemented as the forward error correction (FEC) scheme for the Landsat Data Continuity Mission (LDCM). This is the first flight application of this code. In order to place a 440 Msps link within the 375 MHz-wide X band, we found it necessary to heavily bandpass-filter the satellite transmitter output. Despite the significant amplitude and phase distortions that accompanied the spectral truncation, the mission-required BER is maintained at < 10^-12 with less than 2 dB of implementation loss. We utilized a band-pass filter designed to replicate the link distortions to demonstrate the viability of the link design; the same filter was then used to optimize the adaptive equalizer in the receiver employed at the terminus of the downlink. The excellent results obtained can be directly attributed to the implementation of the LDPC code and to the amplitude and phase compensation provided in the receiver. Similar results were obtained with receivers from several vendors.

  19. Quantum Kronecker sum-product low-density parity-check codes with finite rate

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Pryadko, Leonid P.

    2013-07-01

    We introduce an ansatz for quantum codes which gives the hypergraph-product (generalized toric) codes by Tillich and Zémor and the generalized bicycle codes by MacKay as limiting cases. The construction allows for both lower and upper bounds on the minimum distance, which scale as the square root of the block length. Many codes defined in this way have a finite rate and limited-weight stabilizer generators, an analog of classical low-density parity-check (LDPC) codes. Compared to the hypergraph-product codes, the resulting hyperbicycle codes generally have a wider range of parameters; in particular, they can have a higher rate while preserving the estimated error threshold.

  1. Accumulate-Repeat-Accumulate-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Thorpe, Jeremy

    2007-01-01

    Accumulate-repeat-accumulate-accumulate (ARAA) codes have been proposed, inspired by the recently proposed accumulate-repeat-accumulate (ARA) codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. ARAA codes can be regarded as serial turbo-like codes or as a subclass of low-density parity-check (LDPC) codes, and, like ARA codes, they have projected-graph or protograph representations; these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The objective in proposing ARAA codes as a subclass of ARA codes was to enhance the error-floor performance of ARA codes while maintaining simple encoding structures and a low maximum variable-node degree.

  2. A Simulation Testbed for Adaptive Modulation and Coding in Airborne Telemetry (Brief)

    DTIC Science & Technology

    2014-10-01

    Only slide fragments of this briefing survive. They describe a MATLAB GUI simulation testbed for adaptive modulation and coding in airborne telemetry, with SOQPSK and OFDM (802.11a-like) waveforms; selectable modulations (BPSK, QPSK, 16QAM, 64QAM), cyclic prefix lengths, and numbers of subcarriers; and LDPC coding at rates 1/2, 2/3, 3/4 and 4/5. (Distribution Statement A: approved for public release, distribution unlimited.)

  3. 500  Gb/s free-space optical transmission over strong atmospheric turbulence channels.

    PubMed

    Qu, Zhen; Djordjevic, Ivan B

    2016-07-15

    We experimentally demonstrate a high-spectral-efficiency, large-capacity free-space optical (FSO) transmission system using low-density parity-check (LDPC)-coded quadrature phase shift keying (QPSK) combined with orbital angular momentum (OAM) multiplexing. The strong atmospheric turbulence channel is emulated by two spatial light modulators on which four randomly generated azimuthal phase patterns yielding the Andrews spectrum are recorded. The validity of this approach is verified by reproducing the intensity distribution and irradiance correlation function (ICF) of the full-scale simulator, with excellent agreement among experimental, numerical, and analytical results. To reduce the phase distortion induced by the turbulence emulator, inexpensive wavefront-sensorless adaptive optics (AO) is used, and a large-girth LDPC code deals with the remaining channel impairments. To further improve the aggregate data rate, the OAM multiplexing is combined with WDM, and 500 Gb/s optical transmission over the strong atmospheric turbulence channels is demonstrated.

  4. A Novel Strategy Using Factor Graphs and the Sum-Product Algorithm for Satellite Broadcast Scheduling Problems

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh

    This paper presents a low-complexity algorithmic framework for finding a broadcasting schedule in a low-altitude satellite system, i.e., the satellite broadcast scheduling (SBS) problem, based on the recent modeling and computational methodology of factor graphs. Inspired by the huge success of low-density parity-check (LDPC) codes in the field of error-control coding, we transform the SBS problem into an LDPC-like problem through a factor graph, instead of using conventional neural-network approaches. Within this framework, soft information describing the probability that each satellite will broadcast to a terminal in a specific time slot is exchanged among the local processing nodes via the sum-product algorithm to iteratively optimize the broadcasting schedule. Numerical results show that the proposed approach not only obtains the optimal solution but also enjoys a low complexity suitable for integrated-circuit implementation.

  5. Optimum Boundaries of Signal-to-Noise Ratio for Adaptive Code Modulations

    DTIC Science & Technology

    2017-11-14

    Only fragments of this report survive: a citation of Pursley, M. B. and Royster, T. C., "Adaptive-rate nonbinary LDPC coding for frequency-hop communications," and a note that a very narrowband noise near the center frequency during USRP signal acquisition and generation can cause a high BER. (Final report; approved for public release, distribution unlimited.)

  6. Research on Formation of Microsatellite Communication with Genetic Algorithm

    PubMed Central

    Wu, Guoqiang; Bai, Yuguang; Sun, Zhaowei

    2013-01-01

    For a formation of three microsatellites that fly in the same orbit and perform three-dimensional mapping of terrain, this paper proposes a method for optimizing the space circular formation order based on an improved genetic algorithm, and provides an intersatellite direct-sequence spread-spectrum communication system. The calculation of the LEO formation-flying intersatellite links is guided by the special requirements of formation-flying microsatellite intersatellite links, and the transmitter power is confirmed through simulation. The proposed optimization method can keep the formation order steady for a long time under various perturbing forces. It is found that, when the distance is 1 km and the data rate is 1 Mbps, the input waveform matches the output waveform well, and LDPC coding improves the communication performance: the error-correction capability of the (512, 256) LDPC code is distinctly better than that of the (2, 1, 7) convolutional code. The designed system satisfies the communication requirements of microsatellites, so the presented method provides a significant theoretical foundation for formation flying and intersatellite communication. PMID:24078796

  7. LDPC decoder with a limited-precision FPGA-based floating-point multiplication coprocessor

    NASA Astrophysics Data System (ADS)

    Moberly, Raymond; O'Sullivan, Michael; Waheed, Khurram

    2007-09-01

    Implementing the sum-product algorithm in an FPGA with an embedded processor invites a tradeoff between computational precision and computational speed. The algorithm, known outside of the signal processing community as Pearl's belief propagation, is used for iterative soft-decision decoding of LDPC codes. We determined the feasibility of a coprocessor that performs the product computations. Our FPGA-based coprocessor design performs computer algebra with significantly less precision than the standard (e.g., integer, floating-point) operations of general-purpose processors. Using synthesis targeting a 3,168-LUT Xilinx FPGA, we show that key components of a decoder are feasible and that a full single-precision decoder could be constructed using a larger part. Soft-decision decoding by the iterative belief propagation algorithm is affected both positively and negatively by a reduction in the precision of the computation: reducing precision reduces the coding gain, but the limited-precision computation can operate faster. The proposed solution offers custom logic that performs computations with less precision yet uses the floating-point format to interface with the software. Simulation results show the achievable coding gain, and synthesis results help estimate the full capacity and performance of an FPGA-based coprocessor.

  8. Neural network decoder for quantum error correcting codes

    NASA Astrophysics Data System (ADS)

    Krastanov, Stefan; Jiang, Liang

    Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.

  9. A software reconfigurable optical multiband UWB system utilizing a bit-loading combined with adaptive LDPC code rate scheme

    NASA Astrophysics Data System (ADS)

    He, Jing; Dai, Min; Chen, Qinghui; Deng, Rui; Xiang, Changqing; Chen, Lin

    2017-07-01

    In this paper, an effective bit-loading algorithm combined with an adaptive LDPC code rate (ALCR) scheme is proposed and investigated in a software-reconfigurable multiband UWB-over-fiber system. To compensate the power fading and chromatic dispersion affecting the high-frequency sub-bands of the multiband OFDM UWB signal transmitted over standard single-mode fiber (SSMF), a Mach-Zehnder modulator (MZM) with negative chirp parameter is utilized. A negative power penalty of -1 dB for the 128QAM multiband OFDM UWB signal is measured at the hard-decision forward error correction (HD-FEC) limit of 3.8 × 10^-3 after 50 km SSMF transmission. The experimental results show that, compared to a fixed coding scheme with a code rate of 75%, the signal-to-noise ratio (SNR) is improved by 2.79 dB for the 128QAM multiband OFDM UWB system after 100 km SSMF transmission using the ALCR algorithm. Moreover, by employing bit-loading combined with the ALCR algorithm, the bit error rate (BER) performance of the system can be further improved. The simulation results show that, at the HD-FEC limit, the Q factor is improved by 3.93 dB at an SNR of 19.5 dB over 100 km SSMF transmission, compared to fixed modulation with an uncoded scheme at the same spectral efficiency (SE).
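
    The abstract does not detail the bit-loading algorithm itself; as a hedged sketch, a generic greedy (Hughes-Hartogs-style) loader of the kind such systems use assigns each successive bit to the subcarrier where it costs the least additional power.

      import numpy as np

      def bit_loading(snr_lin, total_bits, max_bits=7, gap_db=6.0):
          """Greedy bit allocation: the SNR gap folds in the target BER and coding."""
          gap = 10 ** (gap_db / 10)
          def power(bits, g):                       # power needed to carry 'bits' at gain g
              return gap * (2 ** bits - 1) / g
          bits = np.zeros(len(snr_lin), dtype=int)
          for _ in range(total_bits):
              cost = np.array([power(b + 1, g) - power(b, g) if b < max_bits else np.inf
                               for b, g in zip(bits, snr_lin)])
              bits[np.argmin(cost)] += 1            # cheapest incremental bit wins
          return bits

      snr = 10 ** (np.array([18.0, 15.0, 9.0, 3.0]) / 10)   # toy per-subcarrier SNRs
      print(bit_loading(snr, total_bits=12))                # strong tones carry more bits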

  10. Advanced error-prediction LDPC with temperature compensation for highly reliable SSDs

    NASA Astrophysics Data System (ADS)

    Tokutomi, Tsukasa; Tanakamaru, Shuhei; Iwasaki, Tomoko Ogura; Takeuchi, Ken

    2015-09-01

    To improve the reliability of NAND Flash memory based solid-state drives (SSDs), error-prediction LDPC (EP-LDPC) has been proposed for multi-level-cell (MLC) NAND Flash memory (Tanakamaru et al., 2012, 2013), which is effective for long retention times. However, EP-LDPC is not as effective for triple-level cell (TLC) NAND Flash memory, because TLC NAND Flash has higher error rates and is more sensitive to program-disturb error. Therefore, advanced error-prediction LDPC (AEP-LDPC) has been proposed for TLC NAND Flash memory (Tokutomi et al., 2014). AEP-LDPC can correct errors more accurately by precisely describing the error phenomena. In this paper, the effects of AEP-LDPC are investigated in a 2×nm TLC NAND Flash memory with temperature characterization. Compared with LDPC-with-BER-only, the SSD's data-retention time is increased by 3.4× and 9.5× at room-temperature (RT) and 85 °C, respectively. Similarly, the acceptable BER is increased by 1.8× and 2.3×, respectively. Moreover, AEP-LDPC can correct errors with pre-determined tables made at higher temperatures to shorten the measurement time before shipping. Furthermore, it is found that one table can cover behavior over a range of temperatures in AEP-LDPC. As a result, the total table size can be reduced to 777 kBytes, which makes this approach more practical.

  11. Sparsening Filter Design for Iterative Soft-Input Soft-Output Detectors

    DTIC Science & Technology

    2012-02-29

    Only fragments of this report survive: since the belief-propagation (BP) detector itself is unaltered from [1], the sparsening filter/detector structure can accommodate a system employing channel codes such as the LDPC encoding considered in [1], and it can readily be extended to the MIMO case with, for example, space-time coding as in [2,8]. For the filter design, the simplex method of [15] is used, as it is available in MATLAB via the "fminsearch" function, and the resulting cost surfaces are visualized.

  12. Joint Carrier-Phase Synchronization and LDPC Decoding

    NASA Technical Reports Server (NTRS)

    Simon, Marvin; Valles, Esteban

    2009-01-01

    A method has been proposed to increase the degree of synchronization of a radio receiver with the phase of a suppressed carrier signal modulated with a binary-phase-shift-keying (BPSK) or quaternary-phase-shift-keying (QPSK) signal representing a low-density parity-check (LDPC) code. This method is an extended version of the method described in Using LDPC Code Constraints to Aid Recovery of Symbol Timing (NPO-43112), NASA Tech Briefs, Vol. 32, No. 10 (October 2008), page 54. Both methods and the receiver architectures in which they would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless, in that they do not require transmission and reception of pilot signals. The proposed method calls for the use of what is known in the art as soft decision feedback to remove the modulation from a replica of the incoming signal prior to feeding this replica to a phase-locked loop (PLL) or other carrier-tracking stage in the receiver. Soft decision feedback refers to suitably processed versions of intermediate results of the iterative computations involved in the LDPC decoding process. Unlike a related prior method in which hard decision feedback (the final sequence of decoded symbols) is used to remove the modulation, the proposed method does not require estimation of the decoder error probability. In a basic digital implementation of the proposed method, the incoming signal (having carrier phase θ_c) plus noise would first be converted to in-phase (I) and quadrature (Q) baseband signals by mixing it with I and Q signals at the carrier frequency [ω_c/(2π)] generated by a local oscillator. The resulting demodulated signals would be processed through one-symbol-period integrate-and-dump filters, the outputs of which would be sampled and held, then multiplied by a soft-decision version of the baseband modulated signal. The resulting I and Q products consist of terms proportional to the cosine and sine of the carrier phase θ_c as well as correlated noise components. These products would be fed as inputs to a digital PLL that would include a number-controlled oscillator (NCO), which provides an estimate of the carrier phase θ_c.

  13. Low-Density Parity-Check Code Design Techniques to Simplify Encoding

    NASA Astrophysics Data System (ADS)

    Perez, J. M.; Andrews, K.

    2007-11-01

    This work describes a method for encoding low-density parity-check (LDPC) codes based on the accumulate-repeat-4-jagged-accumulate (AR4JA) scheme, using the low-density parity-check matrix H instead of the dense generator matrix G. The use of the H matrix to encode allows a significant reduction in memory consumption and provides the encoder design a great flexibility. Also described are new hardware-efficient codes, based on the same kind of protographs, which require less memory storage and area, allowing at the same time a reduction in the encoding delay.
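
    A minimal sketch of the general principle (generic back-substitution encoding from a parity-check matrix whose parity part is lower-triangular; the AR4JA encoder described in the article is more elaborate): with H = [A | T] and T lower-triangular over GF(2), the parity bits follow by forward substitution, so the dense generator matrix G is never formed.

      import numpy as np

      # toy H = [A | T] over GF(2); T is lower-triangular with a unit diagonal
      A = np.array([[1, 0, 1, 1],
                    [0, 1, 1, 0],
                    [1, 1, 0, 1]])
      T = np.array([[1, 0, 0],
                    [1, 1, 0],
                    [0, 1, 1]])

      def encode(info):
          """Solve T p = A u (mod 2) for the parity bits p by forward substitution."""
          s = A @ info % 2
          p = np.zeros(T.shape[0], dtype=int)
          for i in range(T.shape[0]):
              p[i] = (s[i] + T[i, :i] @ p[:i]) % 2
          return np.concatenate([info, p])

      c = encode(np.array([1, 0, 1, 1]))
      assert not (np.hstack([A, T]) @ c % 2).any()   # c satisfies every check of H
      print(c)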

  14. System on a Chip Real-Time Emulation (SOCRE)

    DTIC Science & Technology

    2006-09-01

    Only fragments of this report survive: the emulation platform included LDPC decoders and A/V and radio applications, and one of the key tasks was porting the BEE flow to emulation platforms and SOC technologies. Once a design has been described within Simulink, the designer runs the BEE design flow within MATLAB using the bee_xps interface.

  15. Irreducible normalizer operators and thresholds for degenerate quantum codes with sublinear distances

    NASA Astrophysics Data System (ADS)

    Pryadko, Leonid P.; Dumer, Ilya; Kovalev, Alexey A.

    2015-03-01

    We construct a lower (existence) bound for the threshold of scalable quantum computation which is applicable to all stabilizer codes, including degenerate quantum codes with sublinear distance scaling. The threshold is based on enumerating irreducible operators in the normalizer of the code, i.e., those that cannot be decomposed into a product of two such operators with non-overlapping support. For quantum LDPC codes with logarithmic or power-law distances, we get threshold values which are parametrically better than the existing analytical bound based on percolation. The new bound also gives a finite threshold when applied to other families of degenerate quantum codes, e.g., the concatenated codes. This research was supported in part by the NSF Grant PHY-1416578 and by the ARO Grant W911NF-11-1-0027.

  16. Coded Modulation in C and MATLAB

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon; Andrews, Kenneth S.

    2011-01-01

    This software, written separately in C and MATLAB as stand-alone packages with equivalent functionality, implements encoders and decoders for a set of nine error-correcting codes and modulators and demodulators for five modulation types. The software can be used as a single program to simulate the performance of such coded modulation. The error-correcting codes implemented are the nine accumulate repeat-4 jagged accumulate (AR4JA) low-density parity-check (LDPC) codes, which have been approved for international standardization by the Consultative Committee for Space Data Systems, and which are scheduled to fly on a series of NASA missions in the Constellation Program. The software implements the encoder and decoder functions, and contains compressed versions of generator and parity-check matrices used in these operations.

  17. Low-complexity video encoding method for wireless image transmission in capsule endoscope.

    PubMed

    Takizawa, Kenichi; Hamaguchi, Kiyoshi

    2010-01-01

    This paper presents a low-complexity video encoding method applicable to wireless image transmission in capsule endoscopes. The encoding method is based on Wyner-Ziv theory, in which information that would conventionally be exploited at the transmitter is instead treated as side information at the receiver. Complex processes in video encoding, such as motion-vector estimation, are therefore moved to the receiver side, which has a larger-capacity battery, and the encoding process reduces to decimating the channel-coded original data. We provide a performance evaluation for a low-density parity-check (LDPC) coding method in the AWGN channel.

  18. High performance and cost effective CO-OFDM system aided by polar code.

    PubMed

    Liu, Ling; Xiao, Shilin; Fang, Jiafei; Zhang, Lu; Zhang, Yunhao; Bi, Meihua; Hu, Weisheng

    2017-02-06

    A novel polar-coded coherent optical orthogonal frequency division multiplexing (CO-OFDM) system is proposed and demonstrated experimentally for the first time. The principle of the polar-coded CO-OFDM signal is illustrated theoretically, and a suitable polar decoding method is discussed. Results show that the polar-coded CO-OFDM signal achieves a net coding gain (NCG) of more than 10 dB at a bit error rate (BER) of 10^-3 over 25-Gb/s 480-km transmission in comparison with conventional CO-OFDM. Also, compared to a 25-Gb/s low-density parity-check (LDPC)-coded CO-OFDM 160-km system, the polar code provides an NCG of 0.88 dB at BER = 10^-3. Moreover, the polar code can greatly relax the laser linewidth requirement, yielding a more cost-effective CO-OFDM system.

  19. Performance of Low-Density Parity-Check Coded Modulation

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2010-01-01

    This paper reports the simulated performance of each of the nine accumulate-repeat-4-jagged-accumulate (AR4JA) low-density parity-check (LDPC) codes [3] when used in conjunction with binary phase-shift keying (BPSK), quadrature PSK (QPSK), 8-PSK, 16-ary amplitude PSK (16-APSK), and 32-APSK. We also report the performance under various mappings of bits to modulation symbols, 16-APSK and 32-APSK ring scalings, log-likelihood ratio (LLR) approximations, and decoder variations. One of the simple and well-performing LLR approximations can be expressed in a general equation that applies to all of the modulation types.

  1. Future capabilities for the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Berner, J. B.; Bryant, S. H.; Andrews, K. S.

    2004-01-01

    This paper looks at three new capabilities that are in different stages of development. First, turbo decoding, which provides improved telemetry performance for data rates up to about 1 Mbps, is discussed. Next, pseudo-noise ranging is presented; it has several advantages over the current sequential ranging, namely easier operations, improved performance, and the capability to be used in a regenerative implementation on a spacecraft. Finally, Low Density Parity Check (LDPC) decoding is discussed. LDPC codes can provide performance that matches or slightly exceeds that of turbo codes, but they are designed for use at data rates in the 10 Mbps range.

  2. 16QAM transmission with 5.2 bits/s/Hz spectral efficiency over transoceanic distance.

    PubMed

    Zhang, H; Cai, J-X; Batshon, H G; Davidson, C R; Sun, Y; Mazurczyk, M; Foursa, D G; Pilipetskii, A; Mohs, G; Bergano, Neal S

    2012-05-21

    We transmit 160 × 100G PDM RZ-16QAM channels with 5.2 bits/s/Hz spectral efficiency over 6,860 km. More than 3 billion 16QAM symbols, i.e., 12 billion bits, are processed in total. Using coded modulation and iterative decoding between a MAP decoder and an LDPC-based FEC, all channels are decoded with no remaining errors.

  3. Modified hybrid subcarrier/amplitude/ phase/polarization LDPC-coded modulation for 400 Gb/s optical transmission and beyond.

    PubMed

    Batshon, Hussam G; Djordjevic, Ivan; Xu, Lei; Wang, Ting

    2010-06-21

    In this paper, we present a modified coded hybrid subcarrier/amplitude/phase/polarization (H-SAPP) modulation scheme as a technique capable of achieving beyond-400 Gb/s single-channel transmission over optical channels. The modified H-SAPP scheme profits from the available resources in addition to geometry to increase the bandwidth efficiency of the transmission system, and so increases the aggregate rate of the system. In this report we present the modified H-SAPP scheme and focus on an example that carries 11 bits/symbol and can achieve 440 Gb/s transmission using components of 50 Giga-Symbol/s (GS/s).

  4. Frame Synchronization Without Attached Sync Markers

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2011-01-01

    We describe a method to synchronize codeword frames without making use of attached synchronization markers (ASMs). Instead, the synchronizer identifies the code structure present in the received symbols by operating the decoder for a handful of iterations at each possible symbol offset and forming an appropriate metric. This method is computationally more complex and does not perform as well as frame synchronizers that utilize an ASM; nevertheless, the new synchronizer acquires frame synchronization in about two seconds when using a 600 kbps software decoder, and would take about 15 milliseconds on prototype hardware. It also eliminates the need for the ASMs, which is an attractive feature for short uplink codes whose coding gain would be diminished by the overhead of ASM bits. The lack of ASMs also would simplify clock distribution for the AR4JA low-density parity-check (LDPC) codes and adds a small amount to the coding gain as well (up to 0.2 dB).

  5. Two high-density recording methods with run-length limited turbo code for holographic data storage system

    NASA Astrophysics Data System (ADS)

    Nakamura, Yusuke; Hoshizawa, Taku

    2016-09-01

    Two methods for increasing the data capacity of a holographic data storage system (HDSS) were developed. The first method is called “run-length-limited (RLL) high-density recording”. An RLL modulation has the same effect as enlarging the pixel pitch; namely, it optically reduces the hologram size. Accordingly, the method doubles the raw-data recording density. The second method is called “RLL turbo signal processing”. The RLL turbo code consists of RLL(1,∞) trellis modulation and an optimized convolutional code. The remarkable point of the developed turbo code is that it employs the RLL modulator and demodulator as parts of the error-correction process. The turbo code improves the capability of error correction more than a conventional LDPC code, even though interpixel interference is generated. These two methods will increase the data density 1.78-fold. Moreover, by simulation and experiment, a data density of 2.4 Tbit/in² is confirmed.

  6. Design space exploration of high throughput finite field multipliers for channel coding on Xilinx FPGAs

    NASA Astrophysics Data System (ADS)

    de Schryver, C.; Weithoffer, S.; Wasenmüller, U.; Wehn, N.

    2012-09-01

    Channel coding is a standard technique in all wireless communication systems. In addition to the typically employed methods like convolutional coding, turbo coding or low-density parity-check (LDPC) coding, algebraic codes are used in many cases; for example, outer BCH coding is applied in the DVB-S2 standard for satellite TV broadcasting. A key operation for BCH and the related Reed-Solomon codes is multiplication in finite fields (Galois fields), where extension fields of prime fields are used. Many architectures for multiplication in finite fields have been published over the last decades. This paper examines in detail four multiplier architectures that offer the potential for very high throughputs. We investigate the implementation performance of these multipliers on FPGA technology in the context of channel coding, studying their efficiency with respect to area, frequency and throughput, as well as configurability and scalability. Implementation data of the fully verified circuits are provided for a Xilinx Virtex-4 device after place and route.
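
    For reference, the operation these hardware multipliers implement can be sketched in a few lines of software. The reduction polynomial 0x11D (x^8 + x^4 + x^3 + x^2 + 1) below is an assumption for illustration; it is the field polynomial commonly used by Reed-Solomon codes, not necessarily the one studied in the paper.

      def gf256_mul(a, b, poly=0x11D):
          """Multiply a and b in GF(2^8): carry-less multiply, reduced mod poly."""
          r = 0
          while b:
              if b & 1:          # add (XOR) the current multiple of a
                  r ^= a
              b >>= 1
              a <<= 1
              if a & 0x100:      # degree reached 8: reduce by the field polynomial
                  a ^= poly
          return r

      assert gf256_mul(0x02, 0x02) == 0x04                      # x * x = x^2
      assert gf256_mul(0x53, 0xCA) == gf256_mul(0xCA, 0x53)     # commutativity
      print(hex(gf256_mul(0x53, 0xCA)))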

  7. Potts glass reflection of the decoding threshold for qudit quantum error correcting codes

    NASA Astrophysics Data System (ADS)

    Jiang, Yi; Kovalev, Alexey A.; Pryadko, Leonid P.

    We map the maximum likelihood decoding threshold for qudit quantum error correcting codes to the multicritical point in generalized Potts gauge glass models, extending the map constructed previously for qubit codes. An n-qudit quantum LDPC code, where a qudit can be involved in up to m stabilizer generators, corresponds to a ℤd Potts model with n interaction terms which can couple up to m spins each. We analyze general properties of the phase diagram of the constructed model, give several bounds on the location of the transitions, bounds on the energy density of extended defects (non-local analogs of domain walls), and discuss the correlation functions which can be used to distinguish different phases in the original and the dual models. This research was supported in part by the Grants: NSF PHY-1415600 (AAK), NSF PHY-1416578 (LPP), and ARO W911NF-14-1-0272 (LPP).

  8. Layered Wyner-Ziv video coding.

    PubMed

    Xu, Qian; Xiong, Zixiang

    2006-12-01

    Following recent theoretical works on successive Wyner-Ziv coding (WZC), we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered WZC for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic WZC when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that WZC gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding are more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks.

  9. High-Performance CCSDS AOS Protocol Implementation in FPGA

    NASA Technical Reports Server (NTRS)

    Clare, Loren P.; Torgerson, Jordan L.; Pang, Jackson

    2010-01-01

    The Consultative Committee for Space Data Systems (CCSDS) Advanced Orbiting Systems (AOS) space data link protocol provides a framing layer between channel coding, such as LDPC (low-density parity-check), and higher-layer link multiplexing protocols such as the CCSDS Encapsulation Service. Recent advancement in RF modem technology has allowed multi-megabit transmission over space links. With this increase in data rate, the CCSDS AOS protocol implementation needs to be optimized to both reduce energy consumption and operate at a high rate.

  10. SCaN Network Ground Station Receiver Performance for Future Service Support

    NASA Technical Reports Server (NTRS)

    Estabrook, Polly; Lee, Dennis; Cheng, Michael; Lau, Chi-Wung

    2012-01-01

    Objectives: examine the impact of providing the newly standardized CCSDS Low-Density Parity-Check (LDPC) codes in the SCaN return data service on the SCaN SN and DSN ground station receivers, namely the SN's current Integrated Receiver (IR), the DSN's current Downlink Telemetry and Tracking (DTT) receiver, and an early Commercial-Off-The-Shelf (COTS) prototype of the SN User Service Subsystem Component Replacement (USS CR) narrowband receiver; and motivate discussion of general issues of ground station hardware design that enable simple and cheap modifications in support of future services.

  11. Constellation labeling optimization for bit-interleaved coded APSK

    NASA Astrophysics Data System (ADS)

    Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

    This paper investigates constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in the Digital Video Broadcasting Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its merits of power and spectral efficiency together with robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A binary switching algorithm and a modified version of it are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining it with DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulation results validate the proposed constellation-labeling optimization scheme, which yields better performance than the conventional 32-APSK constellation defined in the DVB-S2 standard.

  12. Mechanisms of lectin and antibody-dependent polymorphonuclear leukocyte-mediated cytolysis.

    PubMed

    Tsunawaki, S; Ikenami, M; Mizuno, D; Yamazaki, M

    1983-04-01

    The mechanisms of tumor lysis by polymorphonuclear leukocytes (PMNs) were investigated. In antibody-dependent PMN-mediated cytolysis (ADPC), sensitized tumor cells were specifically lysed via Fc receptors on PMNs. On the other hand, lectin-dependent PMN-mediated cytolysis (LDPC) caused nonspecific lysis of several murine tumors after recognition of carbohydrate moieties on the cell membranes of both PMNs and tumor cells. Both ADPC and LDPC depended on glycolysis, and cytotoxicity was mediated by reactive oxygen species; LDPC was dependent on superoxide and ADPC on the myeloperoxidase system. The participation of reactive oxygen species in PMN cytotoxicity was also demonstrated by pharmacological triggering with phorbol myristate acetate. These results indicate that reactive oxygen species have an important role in tumor killing by PMNs and that ADPC and LDPC have partly different cytolytic processes as well as different recognition steps.

  13. Progressive transmission of images over fading channels using rate-compatible LDPC codes.

    PubMed

    Pan, Xiang; Banihashemi, Amir H; Cuhadar, Aysegul

    2006-12-01

    In this paper, we propose a combined source/channel coding scheme for transmission of images over fading channels. The proposed scheme employs rate-compatible low-density parity-check codes along with embedded image coders such as JPEG2000 and set partitioning in hierarchical trees (SPIHT). The assignment of channel coding rates to source packets is performed by a fast trellis-based algorithm. We examine the performance of the proposed scheme over correlated and uncorrelated Rayleigh flat-fading channels with and without side information. Simulation results for the expected peak signal-to-noise ratio of reconstructed images, which are within 1 dB of the capacity upper bound over a wide range of channel signal-to-noise ratios, show considerable improvement compared to existing results under similar conditions. We also study the sensitivity of the proposed scheme in the presence of channel estimation error at the transmitter and demonstrate that under most conditions our scheme is more robust compared to existing schemes.

  14. NASA Tech Briefs, October 2009

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Topics covered include: Light-Driven Polymeric Bimorph Actuators; Guaranteeing Failsafe Operation of Extended-Scene Shack-Hartmann Wavefront Sensor Algorithm; Cloud Water Content Sensor for Sounding Balloons and Small UAVs; Pixelized Device Control Actuators for Large Adaptive Optics; T-Slide Linear Actuators; G4FET Implementations of Some Logic Circuits; Electrically Variable or Programmable Nonvolatile Capacitors; System for Automated Calibration of Vector Modulators; Complementary Paired G4FETs as Voltage-Controlled NDR Device; Three MMIC Amplifiers for the 120-to-200 GHz Frequency Band; Low-Noise MMIC Amplifiers for 120 to 180 GHz; Using Ozone To Clean and Passivate Oxygen-Handling Hardware; Metal Standards for Waveguide Characterization of Materials; Two-Piece Screens for Decontaminating Granular Material; Mercuric Iodide Anticoincidence Shield for Gamma-Ray Spectrometer; Improved Method of Design for Folding Inflatable Shells; Ultra-Large Solar Sail; Cooperative Three-Robot System for Traversing Steep Slopes; Assemblies of Conformal Tanks; Microfluidic Pumps Containing Teflon[Trademark] AF Diaphragms; Transparent Conveyor of Dielectric Liquids or Particles; Multi-Cone Model for Estimating GPS Ionospheric Delays; High-Sensitivity GaN Microchemical Sensors; On the Divergence of the Velocity Vector in Real-Gas Flow; Progress Toward a Compact, Highly Stable Ion Clock; Instruments for Imaging from Far to Near; Reflectors Made from Membranes Stretched Between Beams; Integrated Risk and Knowledge Management Program -- IRKM-P; LDPC Codes with Minimum Distance Proportional to Block Size; Constructing LDPC Codes from Loop-Free Encoding Modules; MMICs with Radial Probe Transitions to Waveguides; Tests of Low-Noise MMIC Amplifier Module at 290 to 340 GHz; and Extending Newtonian Dynamics to Include Stochastic Processes.

  15. Design and Implementation of Secure and Reliable Communication using Optical Wireless Communication

    NASA Astrophysics Data System (ADS)

    Saadi, Muhammad; Bajpai, Ambar; Zhao, Yan; Sangwongngam, Paramin; Wuttisittikulkij, Lunchakorn

    2014-11-01

    Wireless networking increases flexibility in the home and office environment by connecting to the internet without wires, but at the cost of risks associated with data theft or the threat of malicious code loaded with the intention of harming the network. In this paper, we propose a novel method of establishing a secure and reliable communication link using optical wireless communication (OWC). For security, spatial-diversity transmission using two optical transmitters is employed, and reliability of the link is achieved by a newly proposed method for constructing the structured parity-check matrix of binary Low-Density Parity-Check (LDPC) codes. Experimental results show that a secure and reliable link between the transmitter and the receiver can be achieved using the proposed technique.

  16. 25 Tb/s transmission over 5,530 km using 16QAM at 5.2 b/s/Hz spectral efficiency.

    PubMed

    Cai, J-X; Batshon, H G; Zhang, H; Davidson, C R; Sun, Y; Mazurczyk, M; Foursa, D G; Sinkin, O; Pilipetskii, A; Mohs, G; Bergano, Neal S

    2013-01-28

    We transmit 250 × 100G PDM RZ-16QAM channels with 5.2 b/s/Hz spectral efficiency over 5,530 km using single-stage C-band EDFAs equalized to 40 nm. We use single-parity-check coded modulation, and all channels are decoded with no errors after iterative decoding between a MAP decoder and an LDPC-based FEC algorithm. We also observe that the optimum power spectral density is nearly independent of SE, signal baud rate or modulation format in a dispersion-uncompensated system.

  17. Performance of Low-Density Parity-Check Coded Modulation

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2011-02-01

    This article presents the simulated performance of a family of nine AR4JA low-density parity-check (LDPC) codes when used with each of five modulations. In each case, the decoder inputs are code-bit log-likelihood ratios computed from the received (noisy) modulation symbols using a general formula which applies to arbitrary modulations. Suboptimal soft-decision and hard-decision demodulators are also explored. Bit-interleaving and various mappings of bits to modulation symbols are considered. A number of subtle decoder algorithm details are shown to affect performance, especially in the error-floor region. Among these are quantization dynamic range and step size, clipping degree-one variable nodes, "Jones clipping" of variable nodes, approximations of the min* function, and partial hard-limiting of messages from check nodes. Using these decoder optimizations, all coded modulations simulated here are free of error floors down to codeword error rates below 10^-6. The purpose of generating this performance data is to aid system engineers in determining an appropriate code and modulation to use under specific power and bandwidth constraints, and to provide information needed to design a variable/adaptive coded modulation (VCM/ACM) system using the AR4JA codes.

  18. Quick-low-density parity check and dynamic threshold voltage optimization in 1X nm triple-level cell NAND flash memory with comprehensive analysis of endurance, retention-time, and temperature variation

    NASA Astrophysics Data System (ADS)

    Doi, Masafumi; Tokutomi, Tsukasa; Hachiya, Shogo; Kobayashi, Atsuro; Tanakamaru, Shuhei; Ning, Sheyang; Ogura Iwasaki, Tomoko; Takeuchi, Ken

    2016-08-01

    NAND flash memory's reliability degrades with increasing endurance, retention-time and/or temperature. After a comprehensive evaluation of 1X nm triple-level cell (TLC) NAND flash, two highly reliable techniques are proposed. The first proposal, quick low-density parity check (Quick-LDPC), requires only one cell read in order to accurately estimate a bit error rate (BER) that includes the effects of temperature, write and erase (W/E) cycles and retention-time. As a result, an 83% read-latency reduction is achieved compared to conventional AEP-LDPC, and W/E cycling is extended by 100% compared with conventional Bose-Chaudhuri-Hocquenghem (BCH) error-correcting code (ECC). The second proposal, dynamic threshold voltage optimization (DVO), has two parts: adaptive V_Ref shift (AVS) and V_TH space control (VSC). AVS reduces read errors and latency by adaptively optimizing the reference voltage (V_Ref) based on temperature, W/E cycles and retention-time; it stores the optimal V_Ref values in a table in order to enable one cell read. VSC further improves AVS by optimizing the voltage margins between V_TH states. DVO reduces the BER by 80%.

  19. Adaptive channel estimation for soft decision decoding over non-Gaussian optical channel

    NASA Astrophysics Data System (ADS)

    Xiang, Jing-song; Miao, Tao-tao; Huang, Sheng; Liu, Huan-lin

    2016-10-01

    An adaptive a priori log-likelihood ratio (LLR) estimation method is proposed for non-Gaussian channels in intensity-modulation/direct-detection (IM/DD) optical communication systems. Using a nonparametric histogram with weighted least-squares linear fitting in the tail regions, the LLR is estimated and used for soft-decision decoding of low-density parity-check (LDPC) codes. The method adapts well to the three main kinds of IM/DD optical channels, i.e., the chi-square channel, the Webb-Gaussian channel and the additive white Gaussian noise (AWGN) channel, and the performance penalty of the channel estimation is negligible.
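
    A simplified sketch of the general approach (plain per-bin histograms on a synthetic chi-square-like intensity channel; the paper's weighted least-squares fitting of the tail regions is omitted here):

      import numpy as np

      rng = np.random.default_rng(2)
      n = 100_000
      bits = rng.integers(0, 2, size=n)
      # toy non-Gaussian intensity channel: "1"s are noncentral-chi-square-like
      # after optical amplification, "0"s carry only noise
      y = np.where(bits == 1, rng.noncentral_chisquare(4, 40.0, n), rng.chisquare(4, n))

      edges = np.linspace(0.0, y.max(), 200)
      h0, _ = np.histogram(y[bits == 0], bins=edges, density=True)
      h1, _ = np.histogram(y[bits == 1], bins=edges, density=True)
      eps = 1e-9
      llr_table = np.log(h0 + eps) - np.log(h1 + eps)   # per-bin LLR estimate

      def llr(samples):
          """Look up the estimated LLR for new received samples."""
          idx = np.clip(np.searchsorted(edges, samples) - 1, 0, len(llr_table) - 1)
          return llr_table[idx]

      print(llr(np.array([2.0, 80.0])))   # positive favors "0", negative favors "1"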

  20. Analysis of soft-decision FEC on non-AWGN channels.

    PubMed

    Cho, Junho; Xie, Chongjin; Winzer, Peter J

    2012-03-26

    Soft-decision forward error correction (SD-FEC) schemes are typically designed for additive white Gaussian noise (AWGN) channels. In a fiber-optic communication system, noise may be neither circularly symmetric nor Gaussian, thus violating an important assumption underlying SD-FEC design. This paper quantifies the impact of non-AWGN noise on SD-FEC performance for such optical channels. We use a conditionally bivariate Gaussian noise model (CBGN) to analyze the impact of correlations among the signal's two quadrature components, and assess the effect of CBGN on SD-FEC performance using the density evolution of low-density parity-check (LDPC) codes. On a CBGN channel generating severely elliptic noise clouds, it is shown that more than 3 dB of coding gain are attainable by utilizing correlation information. Our analyses also give insights into potential improvements of the detection performance for fiber-optic transmission systems assisted by SD-FEC.
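
    The value of the correlation information can be seen by computing bit LLRs once with the full noise covariance and once with the correlation zeroed out (a circular-noise assumption). The sketch below uses QPSK and an arbitrary elliptic covariance purely for illustration; it is not the paper's density-evolution analysis.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    # QPSK points as 2-D (I, Q) vectors with an illustrative 2-bit labeling
    pts = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]]) / np.sqrt(2)
    labels = np.array([[0, 0], [1, 0], [1, 1], [0, 1]])

    cov_full = np.array([[0.30, 0.25], [0.25, 0.30]])   # elliptic, correlated noise
    cov_circ = np.diag(np.diag(cov_full))               # correlation ignored

    def bit_llr(r, cov, bit):
        lik = np.array([multivariate_normal.pdf(r, mean=p, cov=cov) for p in pts])
        return np.log(lik[labels[:, bit] == 0].sum()) - np.log(lik[labels[:, bit] == 1].sum())

    r = np.array([0.2, 0.4])  # one received (I, Q) sample
    print(bit_llr(r, cov_full, 0), bit_llr(r, cov_circ, 0))  # mismatched model distorts the LLR
    ```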

  1. Co-operation of digital nonlinear equalizers and soft-decision LDPC FEC in nonlinear transmission.

    PubMed

    Tanimura, Takahito; Oda, Shoichiro; Hoshida, Takeshi; Aoki, Yasuhiko; Tao, Zhenning; Rasmussen, Jens C

    2013-12-30

    We experimentally and numerically investigated the characteristics of 128 Gb/s dual-polarization quadrature phase-shift keying signals received with two types of nonlinear equalizers (NLEs) followed by soft-decision (SD) low-density parity-check (LDPC) forward error correction (FEC). Successful co-operation between SD-FEC and NLEs over various nonlinear transmission scenarios was demonstrated by optimizing the NLE parameters.

  2. Analysis on applicable error-correcting code strength of storage class memory and NAND flash in hybrid storage

    NASA Astrophysics Data System (ADS)

    Matsui, Chihiro; Kinoshita, Reika; Takeuchi, Ken

    2018-04-01

    A hybrid of storage class memory (SCM) and NAND flash is a promising technology for high-performance storage. Error correction is indispensable for SCM and NAND flash because their bit error rate (BER) increases with write/erase (W/E) cycles, data retention, and program/read disturb. In addition, scaling and multi-level cell technologies increase the BER. However, error-correcting code (ECC) degrades storage performance because of extra memory reading and encoding/decoding time. Therefore, the applicable ECC strength of SCM and NAND flash is evaluated independently by fixing the ECC strength of one memory in the hybrid storage. As a result, a weak BCH ECC with a small number of correctable bits is recommended for SCM in hybrid storage with large SCM capacity, because SCM is accessed frequently. In contrast, a strong, long-latency LDPC ECC can be applied to NAND flash in hybrid storage with large SCM capacity, because the large-capacity SCM improves the storage performance.

  3. Characterization of vibrissa germinative cells: transition of cell types.

    PubMed

    Osada, A; Kobayashi, K

    2001-12-01

    Germinative cells, small cell masses attached to the stalks of dermal papillae that are able to differentiate into the hair shaft and inner root sheath, form follicular bulb-like structures when co-cultured with dermal papilla cells. We studied the growth characteristics of germinative cells to determine the cell types in the vibrissa germinative tissue. Germinative tissues, attached to dermal papillae, were cultured on 3T3 feeder layers. The cultured keratinocytes were harvested and transferred, equally and for two passages, onto lined dermal papilla cell (LDPC) and/or 3T3 feeder layers. The resulting germinative cells were classified into three types under the present experimental conditions. Type 1 cells grow very well on either feeder layer, whereas Type 3 cells scarcely grow on either. Type 2 cells are very conspicuous and are reversible: they grow well on 3T3, but their growth is suppressed on LDPC feeder layers. Type 2 cells that grow well on 3T3 feeder layers are suppressed when transferred onto LDPC, and Type 2 cells that are suppressed on LDPC begin to grow again on 3T3. The transition of one cell type to another in vitro, and the in vivo cell types to which these germinative cell types correspond, are discussed. It was concluded that stem cells or their close progenitors reside in the germinative tissues of the vibrissa bulb except at late anagen-early catagen.

  4. On the photonic implementation of universal quantum gates, bell states preparation circuit and quantum LDPC encoders and decoders based on directional couplers and HNLF.

    PubMed

    Djordjevic, Ivan B

    2010-04-12

    The Bell states preparation circuit is a basic circuit required in quantum teleportation. We describe how to implement it in all-fiber technology. The basic building blocks for its implementation are directional couplers and highly nonlinear optical fiber (HNLF). Because quantum information processing is based on delicate superposition states, it is sensitive to quantum errors. In order to enable fault-tolerant quantum computing, the use of quantum error correction is unavoidable. We show how to implement encoders and decoders for sparse-graph quantum codes in all-fiber technology, and provide an illustrative example to demonstrate this implementation. We also show that an arbitrary set of universal quantum gates can be implemented based on directional couplers and HNLFs.

  5. A burst-mode photon counting receiver with automatic channel estimation and bit rate detection

    NASA Astrophysics Data System (ADS)

    Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.

    2016-04-01

    We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.

  6. Direct-detection Free-space Laser Transceiver Test-bed

    NASA Technical Reports Server (NTRS)

    Krainak, Michael A.; Chen, Jeffrey R.; Dabney, Philip W.; Ferrara, Jeffrey F.; Fong, Wai H.; Martino, Anthony J.; McGarry, Jan F.; Merkowitz, Stephen M.; Principe, Caleb M.; Sun, Xiaoli; et al.

    2008-01-01

    NASA Goddard Space Flight Center is developing a direct-detection free-space laser communications transceiver test bed. The laser transmitter is a master-oscillator power amplifier (MOPA) configuration using a 1060 nm wavelength laser-diode with a two-stage multi-watt Ytterbium fiber amplifier. Dual Mach-Zehnder electro-optic modulators provide an extinction ratio greater than 40 dB. The MOPA design delivered 10-W average power with low-duty-cycle PPM waveforms and achieved 1.7 kW peak power. We use pulse-position modulation format with a pseudo-noise code header to assist clock recovery and frame boundary identification. We are examining the use of low-density-parity-check (LDPC) codes for forward error correction. Our receiver uses an InGaAsP 1 mm diameter photocathode hybrid photomultiplier tube (HPMT) cooled with a thermo-electric cooler. The HPMT has 25% single-photon detection efficiency at 1064 nm wavelength with a dark count rate of 60,000/s at -22 degrees Celsius and a single-photon impulse response of 0.9 ns. We report on progress toward demonstrating a combined laser communications and ranging field experiment.

  7. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.

    PubMed

    Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang

    2018-01-15

    In order to improve the performance of the non-binary low-density parity-check (LDPC) hard-decision decoding algorithm and to reduce decoding complexity, a sum-of-the-magnitude hard-decision decoding algorithm based on loop update detection is proposed. This will also help ensure the reliability, stability, and high transmission rate of 5G mobile communication. The algorithm is based on the hard-decision decoding algorithm (HDA) and uses the soft information from the channel to calculate reliability, while the sum of the variable nodes' (VN) magnitudes is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and a loop update detection algorithm is introduced. The bits corresponding to the erroneous codeword are flipped multiple times, searched in the order of most likely error probability, to finally find the correct codeword. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm, under different hexadecimal numbers, by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel. Furthermore, the average number of decoding iterations is significantly reduced.
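
    For orientation, the hard-decision family this work builds on descends from Gallager's bit-flipping idea: flip the bits that participate in the most unsatisfied parity checks and repeat. The sketch below is a minimal binary version; the paper's algorithm operates on non-binary codes and adds magnitude-based reliability and loop update detection on top of this skeleton.

    ```python
    import numpy as np

    def bit_flip_decode(H, hard_bits, max_iters=50):
        """Gallager bit-flipping: flip the bits involved in the most
        unsatisfied parity checks until the syndrome is zero."""
        x = hard_bits.copy()
        for _ in range(max_iters):
            syndrome = H @ x % 2
            if not syndrome.any():
                return x, True
            votes = H.T @ syndrome          # unsatisfied-check count per bit
            x[votes == votes.max()] ^= 1    # flip the worst offenders
        return x, False

    # Toy (7,4) Hamming parity-check matrix and a single injected error (illustrative)
    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])
    word = np.zeros(7, dtype=int)
    word[2] ^= 1                            # corrupt the all-zero codeword
    print(bit_flip_decode(H, word))         # recovers the all-zero codeword
    ```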

  8. The serial message-passing schedule for LDPC decoding algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Mingshan; Liu, Shanshan; Zhou, Yuan; Jiang, Xue

    2015-12-01

    The conventional message-passing schedule for LDPC decoding algorithms is the so-called flooding schedule. It has the disadvantage that updated messages cannot be used until the next iteration, thus reducing the convergence speed. To address this, the layered decoding algorithm (LBP), based on a serial message-passing schedule, was proposed. In this paper, the decoding principle of the LBP algorithm is briefly introduced, and two improved algorithms are then proposed: the grouped serial decoding algorithm (Grouped LBP) and the semi-serial decoding algorithm. They improve the decoding speed of the LBP algorithm while maintaining good decoding performance.
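
    The speed-up of the serial schedule is easiest to see in code: each check row immediately writes its update back into the shared posterior LLRs, so later rows in the same iteration already benefit from it. Below is a hedged Python sketch of layered min-sum decoding with a toy parity-check matrix; it illustrates the plain LBP schedule only, not the grouped or semi-serial variants proposed in the paper.

    ```python
    import numpy as np

    def layered_min_sum(H, chan_llr, max_iters=20):
        """Serial-schedule (layered) min-sum decoding: posteriors are updated
        row by row, so each layer sees the newest messages immediately."""
        m, _ = H.shape
        rows = [np.flatnonzero(H[i]) for i in range(m)]
        c2v = [np.zeros(len(r)) for r in rows]        # stored check-to-variable messages
        post = chan_llr.astype(float).copy()          # shared posterior LLRs
        for _ in range(max_iters):
            for i in range(m):                        # one "layer" per check row
                idx = rows[i]
                vin = post[idx] - c2v[i]              # extrinsic variable-to-check inputs
                for k in range(len(idx)):
                    others = np.delete(vin, k)
                    c2v[i][k] = np.prod(np.sign(others)) * np.abs(others).min()
                post[idx] = vin + c2v[i]              # immediate write-back
            hard = (post < 0).astype(int)
            if not (H @ hard % 2).any():
                break
        return hard

    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 0, 0, 1, 1],
                  [0, 0, 1, 1, 0, 1]])
    print(layered_min_sum(H, np.array([-1.2, 0.8, 0.9, 1.1, 0.7, 1.0])))  # expect all zeros
    ```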

  9. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure.

    PubMed

    Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc

    2014-05-01

    Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, phase-correlated Feldkamp reconstruction (PCF), shows poor performance in the case of low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low-dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV), and investigate their potential to accurately determine the left ventricular volume at dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Micro-CT data of eight mice, each administered an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan, and the simulated data were processed the same way as the real mouse data sets. Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For the lower dose levels, which were simulated for the real mouse data sets, the HDTV algorithm shows the best performance: at 50 mGy, the deviation from the reference obtained at 500 mGy was less than 4%. The LDPC algorithm also provides reasonable results, with deviations of less than 10% at 50 mGy, while the PCF and MKB reconstructions show larger deviations even at higher dose levels. LDPC and HDTV increase CNR and allow for quantitative evaluations even at dose levels as low as 50 mGy. The left ventricular volumes illustrate that cardiac parameters can be accurately estimated at the lowest dose levels if sophisticated algorithms are used. This allows dose to be reduced by a factor of 10 compared to today's gold standard and opens new options for longitudinal studies of the heart.

  11. Implementation of continuous-variable quantum key distribution with discrete modulation

    NASA Astrophysics Data System (ADS)

    Hirano, Takuya; Ichikawa, Tsubasa; Matsubara, Takuto; Ono, Motoharu; Oguri, Yusuke; Namiki, Ryo; Kasai, Kenta; Matsumoto, Ryutaroh; Tsurumaru, Toyohiro

    2017-06-01

    We have developed a continuous-variable quantum key distribution (CV-QKD) system that employs discrete quadrature-amplitude modulation and homodyne detection of coherent states of light. We experimentally demonstrated automated secure key generation at a rate of 50 kbps when the quantum channel is a 10 km optical fibre. The CV-QKD system utilises a four-state and post-selection protocol and generates a key secure against the entangling cloner attack. We used a pulsed light source of 1550 nm wavelength with a repetition rate of 10 MHz. A commercially available balanced receiver is used to realise shot-noise-limited pulsed homodyne detection. We used a non-binary LDPC code for error correction (reverse reconciliation) and Toeplitz matrix multiplication for privacy amplification. A graphics processing unit card is used to accelerate the software-based post-processing.
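
    Privacy amplification by Toeplitz matrix multiplication, as mentioned above, is a binary matrix-vector product in which the matrix is fully determined by a public random seed of n_in + n_out - 1 bits. The sketch below is generic; the block lengths are arbitrary, and a real system would choose the output length from the estimated information leakage.

    ```python
    import numpy as np
    from scipy.linalg import toeplitz

    rng = np.random.default_rng(1)
    n_in, n_out = 1024, 256                       # reconciled / secret key lengths (arbitrary)
    seed = rng.integers(0, 2, n_in + n_out - 1)   # public random seed defining the hash

    col, row = seed[:n_out], np.concatenate((seed[:1], seed[n_out:]))
    T = toeplitz(col, row)                        # n_out x n_in binary Toeplitz matrix

    reconciled_key = rng.integers(0, 2, n_in)
    secret_key = T @ reconciled_key % 2           # universal hashing compresses out leaked info
    print(secret_key[:16])
    ```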

  12. NASA Tech Briefs, April 2009

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Topics covered include: Direct-Solve Image-Based Wavefront Sensing; Use of UV Sources for Detection and Identification of Explosives; Using Fluorescent Viruses for Detecting Bacteria in Water; Gradiometer Using Middle Loops as Sensing Elements in a Low-Field SQUID MRI System; Volcano Monitor: Autonomous Triggering of In-Situ Sensors; Wireless Fluid-Level Sensors for Harsh Environments; Interference-Detection Module in a Digital Radar Receiver; Modal Vibration Analysis of Large Castings; Structural/Radiation-Shielding Epoxies; Integrated Multilayer Insulation; Apparatus for Screening Multiple Oxygen-Reduction Catalysts; Determining Aliasing in Isolated Signal Conditioning Modules; Composite Bipolar Plate for Unitized Fuel Cell/Electrolyzer Systems; Spectrum Analyzers Incorporating Tunable WGM Resonators; Quantum-Well Thermophotovoltaic Cells; Bounded-Angle Iterative Decoding of LDPC Codes; Conversion from Tree to Graph Representation of Requirements; Parallel Hybrid Vehicle Optimal Storage System; and Anaerobic Digestion in a Flooded Densified Leachbed.

  13. A Regularization Approach to Blind Deblurring and Denoising of QR Barcodes.

    PubMed

    van Gennip, Yves; Athavale, Prashant; Gilles, Jérôme; Choksi, Rustum

    2015-09-01

    QR bar codes are prototypical images for which part of the image is a priori known (required patterns). Open source bar code readers, such as ZBar, are readily available. We exploit both these facts to provide and assess purely regularization-based methods for blind deblurring of QR bar codes in the presence of noise.

  14. Phonological Codes Constrain Output of Orthographic Codes via Sublexical and Lexical Routes in Chinese Written Production

    PubMed Central

    Wang, Cheng; Zhang, Qingfang

    2015-01-01

    To what extent do phonological codes constrain orthographic output in handwritten production? We investigated how phonological codes constrain the selection of orthographic codes via sublexical and lexical routes in Chinese written production. Participants wrote down picture names in a picture-naming task in Experiment 1 or response words in a symbol-word associative writing task in Experiment 2. A sublexical phonological property of picture names (phonetic regularity: regular vs. irregular) in Experiment 1 and a lexical phonological property of response words (homophone density: dense vs. sparse) in Experiment 2, as well as word frequency of the targets in both experiments, were manipulated. A facilitatory effect of word frequency was found in both experiments, in which words with high frequency were produced faster than those with low frequency. More importantly, we observed an inhibitory phonetic regularity effect, in which low-frequency picture names with regular first characters were slower to write than those with irregular ones, and an inhibitory homophone density effect, in which characters with dense homophone density were produced more slowly than those with sparse homophone density. Results suggested that phonological codes constrained handwritten production via lexical and sublexical routes. PMID:25879662

  15. A new Fortran 90 program to compute regular and irregular associated Legendre functions (new version announcement)

    NASA Astrophysics Data System (ADS)

    Schneider, Barry I.; Segura, Javier; Gil, Amparo; Guan, Xiaoxu; Bartschat, Klaus

    2018-04-01

    This is a revised and updated version of a modern Fortran 90 code to compute the regular P_l^m(x) and irregular Q_l^m(x) associated Legendre functions for all x ∈ (-1, +1) (on the cut) and |x| > 1 and integer degree (l) and order (m). The necessity to revise the code comes as a consequence of comments by Prof. James Bremer of the UC Davis Mathematics Department, who discovered errors in the code for large integer degree and order for the normalized regular Legendre functions on the cut.
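
    As a point of reference for the functions this code computes, the regular P_l^m(x) on the cut obey a stable forward three-term recurrence in the degree l. Below is a minimal unnormalized Python sketch of that recurrence; the announced Fortran code additionally handles |x| > 1, the irregular Q_l^m, normalization, and the large-degree scaling that motivated the revision.

    ```python
    import numpy as np

    def assoc_legendre_p(l, m, x):
        """Regular P_l^m(x) for -1 < x < 1 via the standard forward recurrence
        (l - m) P_l^m = (2l - 1) x P_{l-1}^m - (l + m - 1) P_{l-2}^m.
        Unnormalized, so it overflows for large l (the problem the new code fixes)."""
        # Seed: P_m^m = (-1)^m (2m-1)!! (1 - x^2)^{m/2}  (Condon-Shortley phase)
        pmm = (-1.0) ** m * np.prod(np.arange(1, 2 * m, 2.0)) * (1.0 - x * x) ** (m / 2.0)
        if l == m:
            return pmm
        plm, plm_prev = x * (2 * m + 1) * pmm, pmm    # P_{m+1}^m and P_m^m
        for ll in range(m + 2, l + 1):
            plm, plm_prev = ((2 * ll - 1) * x * plm - (ll + m - 1) * plm_prev) / (ll - m), plm
        return plm

    # Spot check against SciPy, which uses the same convention
    from scipy.special import lpmv
    print(assoc_legendre_p(5, 2, 0.3), lpmv(2, 5, 0.3))
    ```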

  16. PPLN-waveguide-based polarization entangled QKD simulator

    NASA Astrophysics Data System (ADS)

    Gariano, John; Djordjevic, Ivan B.

    2017-08-01

    We have developed a comprehensive simulator to study a polarization-entangled quantum key distribution (QKD) system that takes various imperfections into account. We assume that a type-II SPDC source using a PPLN-based nonlinear optical waveguide is used to generate entangled photon pairs, and the system implements the BB84 protocol using two mutually unbiased bases with two orthogonal polarizations in each basis. The entangled photon pairs are then simulated as being transmitted to both parties, Alice and Bob, through the optical channel and imperfect optical elements onto imperfect detectors. It is assumed that Eve has no control over the detectors and can only gain information from the public channel and the intercept-resend attack. The secure key rate (SKR) is calculated using an upper bound and by using actual code rates of LDPC codes implementable in FPGA hardware. After verifying the simulation results available in the literature for the ideal scenario, such as the pair generation rate and the number of errors due to multiple pairs, we introduce various imperfections. The results are then compared to previously reported experimental results where a BBO nonlinear crystal is used, and the improvements in SKR are determined for when a PPLN waveguide is used instead.

  18. Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding

    NASA Technical Reports Server (NTRS)

    Mahmoud, Saad; Hi, Jianjun

    2012-01-01

    The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is the ratio between signal amplitude and noise variance. Accurately estimating this ratio has been shown to yield as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a Simulation-Based Look-Up table. In the Pilot-Guided estimation method, the maximum-likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and the noise variance estimate is the difference between the mean of the squared received sequence and the square of the estimated signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be accumulated. The Blind estimation method's maximum-likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which suits faster-changing channels, but it is computationally expensive. The final method uses a look-up table based on prior simulation results to determine signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft-decision value, and the magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with that average deviation magnitude. This method is more complicated than the Pilot-Guided method due to the gain-control circuitry, but does not have the real-time computational complexity of the Blind estimation method. Each of these methods can provide an accurate estimate of the combining ratio, and the final selection of the estimation method depends on other design constraints.
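
    Both closed-form estimators described above are short in practice. The sketch below implements them for BPSK in AWGN; the power normalization, the fixed search depth, and the use of the whole frame as the "known" sequence in the demo are simplifying assumptions (a real receiver would use only the ASM symbols for the pilot-guided estimate).

    ```python
    import numpy as np

    def pilot_guided(rx, known_bits):
        """Amplitude = mean inner product with the known +/-1 sequence;
        variance = mean of the squared sequence minus the squared amplitude."""
        s = 1 - 2 * known_bits                    # bits {0,1} -> symbols {+1,-1}
        amp = np.mean(rx * s)
        var = np.mean(rx ** 2) - amp ** 2
        return 2 * amp / var                      # combining ratio that scales LLRs

    def blind(rx, iters=40):
        """Binary-search a in (0,1) on the power-normalized sequence so that
        a = mean(r * tanh(c(a) * r)) with c(a) = 2a / (1 - a^2)."""
        p = np.mean(rx ** 2)
        r = rx / np.sqrt(p)                       # normalize to unit power
        lo, hi = 1e-6, 1.0 - 1e-6
        for _ in range(iters):
            a = 0.5 * (lo + hi)
            if np.mean(r * np.tanh(2 * a / (1 - a * a) * r)) > a:
                lo = a
            else:
                hi = a
        return 2 * a / (1 - a * a) / np.sqrt(p)   # refer the ratio back to the raw scale

    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, 4096)
    rx = 0.8 * (1 - 2 * bits) + rng.normal(0, 0.5, bits.size)   # amplitude 0.8, variance 0.25
    print(pilot_guided(rx, bits), blind(rx))      # both should approach 2*0.8/0.25 = 6.4
    ```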

  19. Chimeric mitochondrial peptides from contiguous regular and swinger RNA.

    PubMed

    Seligmann, Hervé

    2016-01-01

    Previous mass spectrometry analyses described human mitochondrial peptides entirely translated from swinger RNAs, RNAs where polymerization systematically exchanged nucleotides. Exchanges follow one among 23 bijective transformation rules, nine symmetric exchanges (X ↔ Y, e.g. A ↔ C) and fourteen asymmetric exchanges (X → Y → Z → X, e.g. A → C → G → A), multiplying DNA's protein-coding potential by 24. Abrupt switches from regular to swinger polymerization produce chimeric RNAs. Here, human mitochondrial proteomic analyses assuming abrupt switches between regular and swinger transcription detect chimeric peptides encoded by part regular, part swinger RNA. Contiguous regular- and swinger-encoded residues within single peptides are stronger evidence for translation of swinger RNA than the previously detected, entirely swinger-encoded peptides: regular parts are positive controls matched with contiguous swinger parts, increasing confidence in the results. Chimeric peptides are 200× rarer than swinger peptides (3/100,000 versus 6/1,000). Among 186 peptides with more than 8 residues in each of their regular and swinger parts, the regular parts of eleven chimeric peptides correspond to six of the thirteen recognized mitochondrial protein-coding genes. Chimeric peptides matching partly regular proteins are rarer and less expressed than chimeric peptides matching non-coding sequences, suggesting targeted degradation of misfolded proteins. The present results strengthen the hypothesis that the short mitogenome encodes far more proteins than hitherto assumed. Entirely swinger-encoded proteins could exist.

  20. Schnek: A C++ library for the development of parallel simulation codes on regular grids

    NASA Astrophysics Data System (ADS)

    Schmitz, Holger

    2018-05-01

    A large number of algorithms across the field of computational physics are formulated on grids with a regular topology. We present Schnek, a library that enables fast development of parallel simulations on regular grids. Schnek contains a number of easy-to-use modules that greatly reduce the amount of administrative code for large-scale simulation codes. The library provides an interface for reading simulation setup files with a hierarchical structure. The structure of the setup file is translated into a hierarchy of simulation modules that the developer can specify. The reader parses and evaluates mathematical expressions and initialises variables or grid data. This enables developers to write modular and flexible simulation codes with minimal effort. Regular grids of arbitrary dimension are defined as well as mechanisms for defining physical domain sizes, grid staggering, and ghost cells on these grids. Ghost cells can be exchanged between neighbouring processes using MPI with a simple interface. The grid data can easily be written into HDF5 files using serial or parallel I/O.
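
    To illustrate the ghost-cell pattern that such libraries automate, here is a generic one-dimensional exchange using mpi4py. This shows only the underlying MPI idiom, not Schnek's C++ interface; the array size and rank-valued fill are arbitrary.

    ```python
    # Run with e.g.: mpiexec -n 4 python ghost_exchange.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    nlocal = 8
    u = np.full(nlocal + 2, float(rank))         # one ghost cell on each side
    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    # Send the rightmost interior cell right while receiving the left ghost, and vice versa.
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)

    print(rank, u[0], u[-1])                     # ghosts now hold the neighbours' boundary values
    ```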

  1. Accumulate repeat accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, A.; Divsalar, D.; Yao, K.

    2004-01-01

    In this paper we propose an innovative channel coding scheme called Accumulate Repeat Accumulate (ARA) codes. This class of codes can be viewed as turbo-like codes, namely a double serial concatenation of a rate-1 accumulator as an outer code, a regular or irregular repetition as a middle code, and a punctured accumulator as an inner code.

  2. APC: A New Code for Atmospheric Polarization Computations

    NASA Technical Reports Server (NTRS)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2014-01-01

    A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection and scattering by spherical particles or spheroids are included. A particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.

  3. Directed educational training improves coding and billing skills for residents.

    PubMed

    Benke, James R; Lin, Sandra Y; Ishman, Stacey L

    2013-03-01

    To determine if coding and billing acumen improves after a single directed educational training session. Case-control series. Fourteen otolaryngology practitioners, including trainees, each completed two clinical scenarios before and after a directed educational session covering basic skills and common mistakes in otolaryngology billing and coding. Ten practitioners had never coded before, while four regularly billed and coded in a clinical setting. Individuals with no previous billing experience had a mean score of 54% (median 55%) before the educational session, which was significantly lower than that of the experienced billers, who averaged 82% (median 83%, p=0.002). After the educational billing and coding session, the inexperienced billers' mean score improved to 62% (median 67%), which was still statistically lower than that of the experienced billers, who averaged 76% (median 75%, p=0.039). The inexperienced billers demonstrated a significant improvement in their total score after the intervention (p=0.019); however, the change observed in experienced billers before and after the educational intervention was not significant (p=0.469). Billing and coding skill improved after a single directed education session. Residents, who are not responsible for regular billing and coding, were found to have the greatest improvement in skill. However, providers who regularly bill and code had no significant improvement after this session. These data suggest that a single 90-minute billing and coding education session is effective in preparing those with limited experience to competently bill and code. Copyright © 2012. Published by Elsevier Ireland Ltd.

  4. 5 CFR 532.221 - Industries included in regular nonappropriated fund surveys.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Industries included in regular... CIVIL SERVICE REGULATIONS PREVAILING RATE SYSTEMS Prevailing Rate Determinations § 532.221 Industries... American Industry Classification System (NAICS) codes in all regular nonappropriated fund wage surveys...

  5. Statistical Learning as a Key to Cracking Chinese Orthographic Codes

    ERIC Educational Resources Information Center

    He, Xinjie; Tong, Xiuli

    2017-01-01

    This study examines statistical learning as a mechanism for Chinese orthographic learning among children in Grades 3-5. Using an artificial orthography, children were repeatedly exposed to positional, phonetic, and semantic regularities of radicals. Children showed statistical learning of all three regularities. Regularities' levels of consistency…

  6. Deep-space and near-Earth optical communications by coded orbital angular momentum (OAM) modulation.

    PubMed

    Djordjevic, Ivan B

    2011-07-18

    In order to achieve multi-gigabit transmission (projected for 2020) for use in interplanetary communications, a large number of time slots is needed in pulse-position modulation (PPM), typically used in deep-space applications, which imposes stringent requirements on system design and implementation. As an alternative that satisfies the high-bandwidth demands of future interplanetary communications while keeping system cost and power consumption reasonably low, in this paper we describe the use of orbital angular momentum (OAM) as an additional degree of freedom. OAM is associated with the azimuthal phase of the complex electric field. Because OAM eigenstates are orthogonal, they can be used as basis functions for N-dimensional signaling. OAM modulation and multiplexing can therefore be used, in combination with other degrees of freedom, to meet the high-bandwidth requirements of future deep-space and near-Earth optical communications. The main challenge for OAM deep-space communication is the link between a spacecraft probe and the Earth station, because in the presence of atmospheric turbulence the orthogonality between OAM states is no longer preserved. We show that, in combination with LDPC codes, OAM-based modulation schemes can operate even in the strong atmospheric turbulence regime. In addition, the spectral efficiency of the proposed scheme is N^2/log2(N) times better than that of PPM.

  7. Molecular cancer classification using a meta-sample-based regularized robust coding method.

    PubMed

    Wang, Shu-Lin; Sun, Liuchao; Fang, Jianwen

    2014-01-01

    Previous studies have demonstrated that machine-learning-based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinical diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with the high dimensionality and small sample size of typical GEP data. Recently the sparse representation (SR) method has been successfully applied to cancer classification. Nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data. In this paper we present meta-sample-based regularized robust coding classification (MRRCC), a novel and effective cancer classification technique that combines the idea of the meta-sample-based cluster method with the regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficient are respectively independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples and then encodes a testing sample as a sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient, while its prediction accuracy is equivalent to existing MSRC-based methods and better than other state-of-the-art dimension-reduction-based methods.
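
    The encode-then-compare-residuals pipeline shared by MSRC-style classifiers is easy to prototype: build meta-samples per class (here via a truncated SVD), code the test sample sparsely over the pooled meta-samples, and assign the class whose meta-samples reconstruct it best. The scikit-learn sketch below uses a plain l1 (Lasso) coder as a stand-in for the paper's regularized robust coding; the data, alpha, and k are synthetic placeholders.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def fit_meta_samples(X_by_class, k=5):
        """Meta-samples = top-k left singular vectors of each class's (features x samples) matrix."""
        return {c: np.linalg.svd(X.T, full_matrices=False)[0][:, :k]
                for c, X in X_by_class.items()}

    def classify(x, metas, alpha=0.01):
        D = np.hstack(list(metas.values()))            # dictionary of all meta-samples
        coef = Lasso(alpha=alpha, fit_intercept=False).fit(D, x).coef_
        best, start = None, 0
        for c, M in metas.items():                     # class-wise reconstruction residual
            part = coef[start:start + M.shape[1]]
            resid = np.linalg.norm(x - M @ part)
            if best is None or resid < best[1]:
                best = (c, resid)
            start += M.shape[1]
        return best[0]

    rng = np.random.default_rng(0)
    X_by_class = {0: rng.normal(0, 1, (20, 50)) + 2,   # 20 samples x 50 "genes" per class
                  1: rng.normal(0, 1, (20, 50)) - 2}
    metas = fit_meta_samples(X_by_class)
    print(classify(X_by_class[1][0], metas))           # expect class 1
    ```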

  8. Complex sparse spatial filter for decoding mixed frequency and phase coded steady-state visually evoked potentials.

    PubMed

    Morikawa, Naoki; Tanaka, Toshihisa; Islam, Md Rabiul

    2018-07-01

    Mixed frequency and phase coding (FPC) can significantly increase the number of commands in a steady-state visual evoked potential-based brain-computer interface (SSVEP-BCI). However, inconsistent phases of the SSVEP over channels within a trial and the existence of non-contributing channels due to noise can degrade the accurate detection of the stimulus frequency. We propose a novel command detection method based on a complex sparse spatial filter (CSSF) obtained by solving ℓ1- and ℓ2,1-regularization problems for a mixed-coded SSVEP-BCI. In particular, ℓ2,1-regularization (aka group sparsification) can lead to the rejection of electrodes that do not contribute to SSVEP detection. A calibration-data-based canonical correlation analysis (CCA) and the CSSF with ℓ1- and ℓ2,1-regularization were demonstrated for 16-target stimuli with eleven subjects. The results of statistical tests suggest that the proposed method with ℓ1- and ℓ2,1-regularization achieved the significantly highest ITR. The proposed approaches do not need any reference signals, automatically select prominent channels, and reduce the computational cost compared to other mixed frequency-phase coding (FPC)-based BCIs. The experimental results suggest that the proposed method can be used to implement a BCI effectively with reduced visual fatigue. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Breaking the Code of Silence.

    ERIC Educational Resources Information Center

    Halbig, Wolfgang W.

    2000-01-01

    Schools and communities must break the adolescent code of silence concerning threats of violence. Schools need character education stressing courage, caring, and responsibility; regular discussions of the school discipline code; formal security discussions with parents; 24-hour hotlines; and protocols for handling reports of potential violence.…

  10. Statistical regularities in art: Relations with visual coding and perception.

    PubMed

    Graham, Daniel J; Redies, Christoph

    2010-07-21

    Since at least 1935, vision researchers have used art stimuli to test human response to complex scenes. This is sensible given the "inherent interestingness" of art and its relation to the natural visual world. The use of art stimuli has remained popular, especially in eye tracking studies. Moreover, stimuli in common use by vision scientists are inspired by the work of famous artists (e.g., Mondrians). Artworks are also popular in vision science as illustrations of a host of visual phenomena, such as depth cues and surface properties. However, until recently, there has been scant consideration of the spatial, luminance, and color statistics of artwork, and even less study of ways that regularities in such statistics could affect visual processing. Furthermore, the relationship between regularities in art images and those in natural scenes has received little or no attention. In the past few years, there has been a concerted effort to study statistical regularities in art as they relate to neural coding and visual perception, and art stimuli have begun to be studied in rigorous ways, as natural scenes have been. In this minireview, we summarize quantitative studies of links between regular statistics in artwork and processing in the visual stream. The results of these studies suggest that art is especially germane to understanding human visual coding and perception, and it therefore warrants wider study. Copyright 2010 Elsevier Ltd. All rights reserved.

  11. The effects of articulatory suppression on word recognition in Serbian.

    PubMed

    Tenjović, Lazar; Lalović, Dejan

    2005-11-01

    The relatedness of phonological coding to articulatory mechanisms in visual word recognition varies across writing systems. While articulatory suppression (i.e., continuous verbalising during a visual word processing task) has a detrimental effect on the processing of Japanese words printed in the regular syllabic kana script, it has no such effect on the processing of irregular alphabetic English words. Besner (1990) proposed an experiment in the Serbian language, written in two regular but alphabetic scripts, Cyrillic and Roman, to disentangle the importance of script regularity vs. the syllabic-alphabetic dimension for the effects observed. Articulatory suppression had an equally detrimental effect in a lexical decision task for both alphabetically regular and distorted (by a mixture of the two alphabets) Serbian words, but comparisons of the articulatory suppression effect sizes obtained in Serbian with those obtained in English and Japanese suggest "alphabeticity-syllabicity" to be the more critical dimension in determining the relatedness of phonological coding and articulatory activity.

  12. Multispectral Image Compression Based on DSC Combined with CCSDS-IDC

    PubMed Central

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually operate on board satellites, where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with the DSC strategy of Slepian-Wolf (SW) coding based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches. PMID:25110741
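
    The per-band front end described above (sparsify with a 2-D DWT, then scan bit planes from most to least significant) can be sketched with PyWavelets in a few lines. The wavelet choice and unit quantization step are illustrative, and the Slepian-Wolf/QC-LDPC stage that removes inter-band redundancy is omitted.

    ```python
    import numpy as np
    import pywt

    band = np.random.default_rng(0).integers(0, 4096, (64, 64)).astype(float)  # stand-in band

    # 2-D DWT front end (the 'bior4.4' wavelet here is an illustrative choice)
    coeffs = pywt.wavedec2(band, 'bior4.4', level=3)
    flat, _ = pywt.coeffs_to_array(coeffs)

    q = np.round(flat).astype(np.int64)          # uniform quantization, step 1 (illustrative)
    mag, sign = np.abs(q), (q < 0)

    nplanes = int(mag.max()).bit_length()
    for p in range(nplanes - 1, -1, -1):         # MSB first, as a bit-plane encoder (BPE) scans
        plane = (mag >> p) & 1
        print(f"plane {p}: {plane.mean():.3f} fraction of ones")
    # A real BPE would entropy-code each plane (and the signs) instead of printing stats.
    ```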

  13. Continuous operation of four-state continuous-variable quantum key distribution system

    NASA Astrophysics Data System (ADS)

    Matsubara, Takuto; Ono, Motoharu; Oguri, Yusuke; Ichikawa, Tsubasa; Hirano, Takuya; Kasai, Kenta; Matsumoto, Ryutaroh; Tsurumaru, Toyohiro

    2016-10-01

    We report on the development of a continuous-variable quantum key distribution (CV-QKD) system based on discrete quadrature amplitude modulation (QAM) and homodyne detection of coherent states of light. We use a pulsed light source with a wavelength of 1550 nm and a repetition rate of 10 MHz. The CV-QKD system can continuously generate a secret key that is secure against the entangling cloner attack. The key generation rate is 50 kbps when the quantum channel is a 10 km optical fiber. The CV-QKD system we have developed utilizes the four-state and post-selection protocol [T. Hirano, et al., Phys. Rev. A 68, 042331 (2003)]; Alice randomly sends one of four states {|±α⟩, |±iα⟩}, and Bob randomly performs an x- or p-measurement by homodyne detection. A commercially available balanced receiver is used to realize shot-noise-limited pulsed homodyne detection. GPU cards are used to accelerate the software-based post-processing. We use a non-binary LDPC code for error correction (reverse reconciliation) and Toeplitz matrix multiplication for privacy amplification.

  15. Replacing the CCSDS Telecommand Protocol with the Next Generation Uplink (NGU)

    NASA Technical Reports Server (NTRS)

    Kazz, Greg J.; Greenberg, Ed; Burleigh, Scott C.

    2012-01-01

    The current CCSDS Telecommand (TC) Recommendations 1-3 have essentially been in use since the early 1960s. The purpose of this paper is to propose a successor protocol to TC. The current CCSDS recommendations can only accommodate telecommand rates up to approximately 1 Mbit/s. However, today's spacecraft are storehouses of software, including software for Field Programmable Gate Arrays (FPGAs), which are rapidly replacing unique hardware systems. Changes to flight software occasionally require uplinks to deliver very large volumes of data. In the opposite direction, high-rate downlink missions that use the acknowledged CCSDS File Delivery Protocol (CFDP) will increase the uplink data rate requirements. It is calculated that a 5 Mbit/s downlink could saturate a 4 kbit/s uplink with CFDP downlink responses: negative acknowledgements (NAKs), FINISHs, End-of-File (EOF), and acknowledgements (ACKs). Moreover, it is anticipated that uplink rates of 10 to 20 Mbit/s will be required to support manned missions. The current TC recommendations cannot meet these new demands. Specifically, they are very tightly coupled to the Bose-Chaudhuri-Hocquenghem (BCH) code in Ref. 2. This protocol requires that an uncorrectable BCH codeword delimit the TC frame and terminate the randomization process. This method greatly limits telecom performance since only the BCH code can support the protocol. More modern techniques such as the CCSDS Low Density Parity Check (LDPC) codes can support up to 6 times higher command data rates as long as sufficient power is available in the data. This paper will describe the proposed protocol format, trade-offs, and advantages offered, along with a discussion of how reliable communication takes place at higher nominal rates.

  16. High-Throughput Bit-Serial LDPC Decoder LSI Based on Multiple-Valued Asynchronous Interleaving

    NASA Astrophysics Data System (ADS)

    Onizawa, Naoya; Hanyu, Takahiro; Gaudet, Vincent C.

    This paper presents a high-throughput bit-serial low-density parity-check (LDPC) decoder that uses an asynchronous interleaver. Since consecutive log-likelihood message values on the interleaver are similar, node computations are continuously performed using the most recently arrived messages without significantly affecting bit-error rate (BER) performance. In the asynchronous interleaver, each message's arrival rate is based on the delay due to the wire length, so the decoding throughput is not restricted by the worst-case latency, which results in a higher average rate of computation. Moreover, the use of a multiple-valued data representation makes it possible to multiplex control signals and data from mutual nodes, thus minimizing the number of handshaking steps in the asynchronous interleaver and eliminating the clock signal entirely. As a result, the decoding throughput becomes 1.3 times faster than that of a bit-serial synchronous decoder in a 90 nm CMOS technology, at a comparable BER.

  17. Hypothesis of Lithocoding: Origin of the Genetic Code as a "Double Jigsaw Puzzle" of Nucleobase-Containing Molecules and Amino Acids Assembled by Sequential Filling of Apatite Mineral Cellules.

    PubMed

    Skoblikow, Nikolai E; Zimin, Andrei A

    2016-05-01

    The hypothesis of direct coding, which assumes direct contact of pairs of coding molecules with amino acid side chains in hollow unit cells (cellules) of a mineral with a regular crystal structure, is proposed. The coding nucleobase-containing molecules in each cellule (named "lithocodon") partially shield each other; the remaining free space determines the stereochemical character of the filling side chain. Apatite-group minerals are considered the most suitable for this type of coding (named "lithocoding"). A scheme of the cellule with certain stereometric parameters, providing for the isomeric selection of contacting molecules, is proposed. We modelled the filling of cellules with molecules involved in direct coding, with the possibility of coding for a group of stereochemically similar amino acids by a single combination of such molecules. The regular ordered arrangement of cellules enables the polymerization of amino acids and nucleobase-containing molecules in the same direction (named "lithotranslation"), preventing a shift of the coding. A table of the presumed "LithoCode" (possible and optimal lithocodon assignments for abiogenically synthesized α-amino acids involved in lithocoding and lithotranslation) is proposed. The magmatic nature of the mineral, abiogenic synthesis of organic molecules, and polymerization events are considered within the framework of the proposed "volcanic scenario".

  18. Low-dose cardio-respiratory phase-correlated cone-beam micro-CT of small animals.

    PubMed

    Sawall, Stefan; Bergner, Frank; Lapp, Robert; Mronz, Markus; Karolczak, Marek; Hess, Andreas; Kachelriess, Marc

    2011-03-01

    Micro-CT imaging of animal hearts typically requires a double gating procedure because scans during a breath-hold are not possible due to the long scan times and the high respiratory rates. Simultaneous respiratory and cardiac gating can be done either prospectively or retrospectively. True five-dimensional information can be retrieved either with retrospective gating or with prospective gating if several prospective gates are acquired. In any case, the amount of information available to reconstruct one volume for a given respiratory and cardiac phase is orders of magnitude lower than the total amount of information acquired. For example, the reconstruction of a volume from a 10% wide respiratory and a 20% wide cardiac window uses only 2% of the data acquired. Achieving an image quality similar to a non-gated scan would therefore require increasing the amount of data, and thereby the dose to the animal, by up to a factor of 50. To achieve the goal of low-dose phase-correlated (LDPC) imaging, the authors propose a highly efficient combination of slightly modified existing algorithms. In particular, the authors developed a variant of the McKinnon-Bates image reconstruction algorithm and combined it with bilateral filtering in up to five dimensions to significantly reduce image noise without impairing spatial or temporal resolution. The preliminary results indicate that the proposed LDPC reconstruction method typically reduces image noise by a factor of up to 6 (e.g., from 170 to 30 HU), with dose values in a range from 60 to 500 mGy. Compared to other publications that apply 250-1800 mGy for the same task [C. T. Badea et al., "4D micro-CT of the mouse heart," Mol. Imaging 4(2), 110-116 (2005); M. Drangova et al., "Fast retrospectively gated quantitative four-dimensional (4D) cardiac micro computed tomography imaging of free-breathing mice," Invest. Radiol. 42(2), 85-94 (2007); S. H. Bartling et al., "Retrospective motion gating in small animal CT of mice and rats," Invest. Radiol. 42(10), 704-714 (2007)], the authors' LDPC approach therefore achieves a more than tenfold improvement in dose usage. The LDPC reconstruction method improves phase-correlated imaging from highly undersampled data. Artifacts caused by sparse angular sampling are removed and image noise is decreased, while spatial and temporal resolution are preserved. Thus, the administered dose per animal can be decreased, allowing for long-term studies with reduced metabolic interference.

  19. On the performance of joint iterative detection and decoding in coherent optical channels with laser frequency fluctuations

    NASA Astrophysics Data System (ADS)

    Castrillón, Mario A.; Morero, Damián A.; Agazzi, Oscar E.; Hueda, Mario R.

    2015-08-01

    The joint iterative detection and decoding (JIDD) technique was proposed by Barbieri et al. (2007) with the objective of compensating the time-varying phase noise and constant frequency offset experienced in satellite communication systems. The application of JIDD to optical coherent receivers in the presence of laser frequency fluctuations has not been reported in the prior literature. Laser frequency fluctuations are caused by mechanical vibrations, power supply noise, and other mechanisms, and they significantly degrade the performance of the carrier phase estimator in high-speed intradyne coherent optical receivers. This work investigates the performance of the JIDD algorithm in multi-gigabit optical coherent receivers. We present bit error rate (BER) simulation results for non-differential polarization division multiplexing (PDM)-16QAM modulation in a 200 Gb/s coherent optical system that includes an LDPC code with 20% overhead and a net coding gain of 11.3 dB at BER = 10^-15. Our study shows that JIDD with a pilot rate ≤ 5% compensates for both laser phase noise and laser frequency fluctuation. Furthermore, since JIDD is used with non-differential modulation formats, we find that gains in excess of 1 dB can be achieved over existing solutions based on an explicit carrier phase estimator with differential modulation. The impact of fiber nonlinearities in dense wavelength division multiplexing (DWDM) systems is also investigated. Our results demonstrate that JIDD is an excellent candidate for application in next-generation high-speed optical coherent receivers.

  20. DVB-S2 Experiment over NASA's Space Network

    NASA Technical Reports Server (NTRS)

    Downey, Joseph A.; Evans, Michael A.; Tollis, Nicholas S.

    2017-01-01

    The commercial DVB-S2 standard was successfully demonstrated over NASA's Space Network (SN) and the Tracking and Data Relay Satellite System (TDRSS) during testing conducted September 20-22, 2016. This test was a joint effort between NASA Glenn Research Center (GRC) and Goddard Space Flight Center (GSFC) to evaluate the performance of DVB-S2 as an alternative to traditional NASA SN waveforms. Two distinct sets of tests were conducted: one was sourced from the Space Communication and Navigation (SCaN) Testbed, an external payload on the International Space Station, and the other was sourced from GRC's S-band ground station to emulate a Space Network user through TDRSS. In both cases, a commercial off-the-shelf (COTS) receiver made by Newtec was used to receive the signal at the White Sands Complex. Using the SCaN Testbed, peak data rates of 5.7 Mbps were demonstrated. Peak data rates of 33 Mbps were demonstrated over the GRC S-band ground station through a 10 MHz channel over TDRSS, using 32-amplitude phase shift keying (APSK) and a rate-8/9 low-density parity-check (LDPC) code. Advanced features of the DVB-S2 standard were evaluated, including variable and adaptive coding and modulation (VCM/ACM), as well as an adaptive digital pre-distortion (DPD) algorithm. These features provided additional data throughput and increased link performance reliability. This testing has shown that commercial standards are a viable, low-cost alternative for future Space Network users.
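
    The 33 Mbps figure is consistent with simple link arithmetic. A sketch (the 0.35 roll-off is our assumption; bandwidth, modulation, and code rate are from the abstract, and DVB-S2 framing overhead is ignored):

        def dvbs2_info_rate(bw_hz, bits_per_symbol, code_rate, rolloff=0.35):
            """Approximate information rate when the symbol rate is limited by
            the occupied bandwidth bw = Rs * (1 + rolloff)."""
            rs = bw_hz / (1.0 + rolloff)
            return rs * bits_per_symbol * code_rate

        # 10 MHz TDRSS channel, 32-APSK (5 bits/symbol), rate-8/9 LDPC
        print(f"{dvbs2_info_rate(10e6, 5, 8/9) / 1e6:.1f} Mbps")   # ~32.9 Mbps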

  1. Propagation of spiking regularity and double coherence resonance in feedforward networks.

    PubMed

    Men, Cong; Wang, Jiang; Qin, Ying-Mei; Deng, Bin; Tsang, Kai-Ming; Chan, Wai-Lok

    2012-03-01

    We systematically investigate the propagation of spiking regularity in noisy feedforward networks (FFNs) based on the FitzHugh-Nagumo neuron model. It is found that noise can modulate the transmission of firing rate and spiking regularity. Noise-induced synchronization and synfire-enhanced coherence resonance are also observed when signals propagate in noisy multilayer networks. Interestingly, double coherence resonance (DCR) with respect to the combination of synaptic input correlation and noise intensity is eventually attained after layer-by-layer processing in FFNs. Furthermore, inhibitory connections also play essential roles in shaping DCR phenomena. Several properties of the neuronal network, such as noise intensity, correlation of synaptic inputs, and inhibitory connections, can serve as control parameters in modulating both rate coding and the order of temporal coding.
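
    A single noisy FitzHugh-Nagumo neuron already exhibits the spiking-regularity measure (the coefficient of variation of interspike intervals) that propagates through such networks. A minimal sketch with common textbook parameters, not those of the paper (the network structure is omitted):

        import numpy as np

        def fhn_spike_times(i_ext=0.5, sigma=0.04, dt=0.01, steps=200_000, seed=1):
            """Euler-Maruyama integration of a noisy FitzHugh-Nagumo neuron:
            dv = (v - v**3/3 - w + I) dt + sigma dW;  dw = eps*(v + a - b*w) dt.
            Spikes are detected as upward crossings of v through 1.0."""
            a, b, eps = 0.7, 0.8, 0.08
            rng = np.random.default_rng(seed)
            v, w, above, spikes = -1.0, 1.0, False, []
            for k in range(steps):
                v_new = v + (v - v**3 / 3 - w + i_ext) * dt + sigma * np.sqrt(dt) * rng.normal()
                w += eps * (v + a - b * w) * dt
                v = v_new
                if v > 1.0 and not above:
                    spikes.append(k * dt); above = True
                elif v < 0.0:
                    above = False
            return np.array(spikes)

        isi = np.diff(fhn_spike_times())
        print("CV of interspike intervals:", isi.std() / isi.mean())  # lower = more regular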

  2. Uniform emergency codes: will they improve safety?

    PubMed

    2005-01-01

    There are pros and cons to uniform code systems, according to emergency medicine experts. Uniformity can be a benefit when ED nurses and other staff work at several facilities. It's critical that your staff understand not only what the codes stand for, but what they must do when codes are called. If your state institutes a new system, be sure to hold regular drills to familiarize your ED staff.

  3. Code-Switching in Judaeo-Arabic Documents from the Cairo Geniza

    ERIC Educational Resources Information Center

    Wagner, Esther-Miriam; Connolly, Magdalen

    2018-01-01

    This paper investigates code-switching and script-switching in medieval documents from the Cairo Geniza, written in Judaeo-Arabic (Arabic in Hebrew script), Hebrew, Arabic and Aramaic. Legal documents regularly show a macaronic style of Judaeo-Arabic, Aramaic and Hebrew, while in letters code-switching from Judaeo-Arabic to Hebrew is tied in with…

  4. The Nuremberg Code-A critique.

    PubMed

    Ghooi, Ravindra B

    2011-04-01

    The Nuremberg Code, drafted at the end of the Doctors' trial in Nuremberg in 1947, has been hailed as a landmark document in medical and research ethics. Close examination of this code reveals that it was based on the Guidelines for Human Experimentation of 1931. The resemblance between these documents is uncanny. It is unfortunate that the authors of the Nuremberg Code passed it off as their original work. There is evidence that the defendants at the trial did request that their actions be judged on the basis of the 1931 Guidelines, in force in Germany. The prosecutors, however, ignored the request and tried the defendants for crimes against humanity, and the judges included the Nuremberg Code as a part of the judgment. Six of the ten principles in the Nuremberg Code are derived from the 1931 Guidelines, and two of the four newly inserted principles are open to misinterpretation. There is little doubt that the Code was prepared after studying the Guidelines, but no reference was made to the Guidelines, for reasons that are not known. Using the Guidelines as a base document without giving due credit is plagiarism; as per our understanding of ethics today, this would be considered unethical. The Nuremberg Code has fallen by the wayside since, unlike the Declaration of Helsinki, it is not regularly reviewed and updated. The regular updating of some ethics codes is evidence of the evolving nature of human ethics.

  5. 77 FR 76078 - Regular Board of Directors Sunshine Act Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-26

    ..., DC 20005. STATUS: Open. CONTACT PERSON FOR MORE INFORMATION: Erica Hall, Assistant Corporate... Regular Board of Directors Meeting Minutes IV. Approval of the Finance, Budget & Program Committee Meeting... Corporate Secretary. [FR Doc. 2012-31163 Filed 12-21-12; 4:15 pm] BILLING CODE 7570-02-P ...

  6. The trellis complexity of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Lin, W.

    1995-01-01

    It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.
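
    For the conventional trellis, the edge-count measure mentioned above is straightforward to compute. A sketch (ours; the per-output-bit normalization is one common convention, and this is the conventional trellis, not the minimal one the article constructs):

        def conventional_trellis_edges_per_bit(k, n, memory):
            """Edge complexity of the conventional trellis of a rate-k/n
            convolutional code with total encoder memory `memory`:
            2**memory states, 2**k branches per state, n output bits per section."""
            edges_per_section = 2 ** (memory + k)
            return edges_per_section / n

        # Rate-1/2, memory-6 (constraint length 7) code: 64 states
        print(conventional_trellis_edges_per_bit(1, 2, 6))   # 128 edges / 2 bits = 64.0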

  7. The Nuremberg Code–A critique

    PubMed Central

    Ghooi, Ravindra B.

    2011-01-01

    The Nuremberg Code, drafted at the end of the Doctors' trial in Nuremberg in 1947, has been hailed as a landmark document in medical and research ethics. Close examination of this code reveals that it was based on the Guidelines for Human Experimentation of 1931. The resemblance between these documents is uncanny. It is unfortunate that the authors of the Nuremberg Code passed it off as their original work. There is evidence that the defendants at the trial did request that their actions be judged on the basis of the 1931 Guidelines, in force in Germany. The prosecutors, however, ignored the request and tried the defendants for crimes against humanity, and the judges included the Nuremberg Code as a part of the judgment. Six of the ten principles in the Nuremberg Code are derived from the 1931 Guidelines, and two of the four newly inserted principles are open to misinterpretation. There is little doubt that the Code was prepared after studying the Guidelines, but no reference was made to the Guidelines, for reasons that are not known. Using the Guidelines as a base document without giving due credit is plagiarism; as per our understanding of ethics today, this would be considered unethical. The Nuremberg Code has fallen by the wayside since, unlike the Declaration of Helsinki, it is not regularly reviewed and updated. The regular updating of some ethics codes is evidence of the evolving nature of human ethics. PMID:21731859

  8. 25 CFR 11.1206 - Obtaining a regular (non-emergency) order of protection.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Child Protection and Domestic Violence Procedures § 11... custody of any children involved when appropriate and provide for visitation rights, child support, and... 25 Indians 1 2012-04-01 2011-04-01 true Obtaining a regular (non-emergency) order of protection...

  9. 25 CFR 11.1206 - Obtaining a regular (non-emergency) order of protection.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Child Protection and Domestic Violence Procedures § 11... custody of any children involved when appropriate and provide for visitation rights, child support, and... 25 Indians 1 2014-04-01 2014-04-01 false Obtaining a regular (non-emergency) order of protection...

  10. 25 CFR 11.1206 - Obtaining a regular (non-emergency) order of protection.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Child Protection and Domestic Violence Procedures § 11... custody of any children involved when appropriate and provide for visitation rights, child support, and... 25 Indians 1 2013-04-01 2013-04-01 false Obtaining a regular (non-emergency) order of protection...

  11. Disruption of hierarchical predictive coding during sleep

    PubMed Central

    Strauss, Melanie; Sitt, Jacobo D.; King, Jean-Remi; Elbaz, Maxime; Azizi, Leila; Buiatti, Marco; Naccache, Lionel; van Wassenhove, Virginie; Dehaene, Stanislas

    2015-01-01

    When presented with an auditory sequence, the brain acts as a predictive-coding device that extracts regularities in the transition probabilities between sounds and detects unexpected deviations from these regularities. Does such prediction require conscious vigilance, or does it continue to unfold automatically in the sleeping brain? The mismatch negativity and P300 components of the auditory event-related potential, reflecting two steps of auditory novelty detection, have been inconsistently observed in the various sleep stages. To clarify whether these steps remain during sleep, we recorded simultaneous electroencephalographic and magnetoencephalographic signals during wakefulness and during sleep in normal subjects listening to a hierarchical auditory paradigm including short-term (local) and long-term (global) regularities. The global response, reflected in the P300, vanished during sleep, in line with the hypothesis that it is a correlate of high-level conscious error detection. The local mismatch response remained across all sleep stages (N1, N2, and REM sleep), but with an incomplete structure; compared with wakefulness, a specific peak reflecting prediction error vanished during sleep. Those results indicate that sleep leaves initial auditory processing and passive sensory response adaptation intact, but specifically disrupts both short-term and long-term auditory predictive coding. PMID:25737555

  12. High-Speed Large-Alphabet Quantum Key Distribution Using Photonic Integrated Circuits

    DTIC Science & Technology

    2014-01-28

    [Figure and table residue; recoverable content:] Operating-point tables relate photons per bin/frame, frame size, quantum symbol error rate (QSER), secure bits per photon (bpp), and the error-correcting code (ECC) to the secure key rate; the quoted operating point reaches 2.9 secure bpp using a layered LDPC ECC at a secure key rate of 7.3 Mbps. Abbreviations: PBS, polarizing beam splitter; TDC, time-to-digital converter. Figure 24: Operating…

  13. A Bit Stream Scalable Speech/Audio Coder Combining Enhanced Regular Pulse Excitation and Parametric Coding

    NASA Astrophysics Data System (ADS)

    Riera-Palou, Felip; den Brinker, Albertus C.

    2007-12-01

    This paper introduces a new audio and speech broadband coding technique based on the combination of a pulse excitation coder and a standardized parametric coder, namely, the MPEG-4 high-quality parametric coder. After presenting a series of enhancements to regular pulse excitation (RPE) to make it suitable for the modeling of broadband signals, it is shown how pulse and parametric coding complement each other and how they can be merged to yield a layered, bit-stream-scalable coder able to operate at different points in the quality/bit-rate plane. The performance of the proposed coder is evaluated in a listening test. The major result is that the extra functionality of bit stream scalability does not come at the price of reduced performance, since the coder is competitive with standardized coders (MP3, AAC, SSC).

  14. Sensory Coding by Cerebellar Mossy Fibres through Inhibition-Driven Phase Resetting and Synchronisation

    PubMed Central

    Holtzman, Tahl; Jörntell, Henrik

    2011-01-01

    Temporal coding of spike-times using oscillatory mechanisms allied to spike-time dependent plasticity could represent a powerful mechanism for neuronal communication. However, it is unclear how temporal coding is constructed at the single neuronal level. Here we investigate a novel class of highly regular, metronome-like neurones in the rat brainstem which form a major source of cerebellar afferents. Stimulation of sensory inputs evoked brief periods of inhibition that interrupted the regular firing of these cells leading to phase-shifted spike-time advancements and delays. Alongside phase-shifting, metronome cells also behaved as band-pass filters during rhythmic sensory stimulation, with maximal spike-stimulus synchronisation at frequencies close to the idiosyncratic firing frequency of each neurone. Phase-shifting and band-pass filtering serve to temporally align ensembles of metronome cells, leading to sustained volleys of near-coincident spike-times, thereby transmitting synchronised sensory information to downstream targets in the cerebellar cortex. PMID:22046297

  15. 25 CFR 11.1210 - Duration and renewal of a regular protection order.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false Duration and renewal of a regular protection order. 11.1210 Section 11.1210 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Child Protection and Domestic Violence Procedures § 11.1210...

  16. 75 FR 47452 - Civilian Health and Medical Program of the Uniformed Services (CHAMPUS); TRICARE Retired Reserve...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-06

    ... the Retired Reserve who are qualified for non-regular retirement, but are not yet 60 years of age, to qualify to purchase medical coverage equivalent to the TRICARE Standard (and Extra) benefit unless that... Code. Section 1076e allows members of the Retired Reserve who are qualified for non-regular retirement...

  17. 25 CFR 11.1210 - Duration and renewal of a regular protection order.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 1 2011-04-01 2011-04-01 false Duration and renewal of a regular protection order. 11.1210 Section 11.1210 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Child Protection and Domestic Violence Procedures § 11.1210...

  18. American School Counselor Association Ethical Code Changes Relevant to Family Work

    ERIC Educational Resources Information Center

    Bodenhorn, Nancy

    2005-01-01

    Professional organizations regularly review and revise their codes of ethics. The American School Counselor Association (ASCA) completed this task in June 2004. At the 2004 national conference, the leadership team, consisting of state presidents, past presidents, and president elects, voted to adopt the changes. These changes, as outlined in Table…

  19. Nonlinear detection for a high rate extended binary phase shift keying system.

    PubMed

    Chen, Xian-Qing; Wu, Le-Nan

    2013-03-28

    The algorithm and the results of a nonlinear detector using a machine learning technique called the support vector machine (SVM) on an efficient modulation system with high data rate and low energy consumption are presented in this paper. Simulation results showed that the performance achieved by the SVM detector is comparable to that of a conventional threshold decision (TD) detector. The two detectors detect the received signals together with the special impacting filter (SIF), which can improve the energy utilization efficiency. However, unlike the TD detector, the SVM detector concentrates not only on reducing the BER of the detector, but also on providing accurate posterior probability estimates (PPEs), which can be used as soft inputs of the LDPC decoder. The complexity of this detector is considered in this paper by using four features and simplifying the decision function. In addition, a bandwidth-efficient transmission is analyzed with both the SVM and TD detectors. The SVM detector is more robust to the sampling rate than the TD detector. We find that the SVM is suitable for extended binary phase shift keying (EBPSK) signal detection and can provide accurate posterior probabilities for LDPC decoding.
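
    The detector's use of posterior probability estimates as soft decoder inputs can be sketched with a generic SVM library. A toy one-dimensional channel stands in for the SIF output here, and the LLR convention is ours; the paper's detector uses four features and a simplified decision function:

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        bits = rng.integers(0, 2, 2000)
        x = (2.0 * bits - 1.0)[:, None] + 0.8 * rng.normal(size=(2000, 1))  # toy channel

        svm = SVC(kernel="rbf", probability=True).fit(x[:1000], bits[:1000])  # Platt scaling

        # Posterior probability estimates -> log-likelihood ratios for an LDPC decoder
        p1 = svm.predict_proba(x[1000:])[:, list(svm.classes_).index(1)]
        eps = 1e-12
        llr = np.log((1.0 - p1 + eps) / (p1 + eps))   # positive values favor bit 0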

  20. Nonlinear Detection for a High Rate Extended Binary Phase Shift Keying System

    PubMed Central

    Chen, Xian-Qing; Wu, Le-Nan

    2013-01-01

    The algorithm and the results of a nonlinear detector using a machine learning technique called the support vector machine (SVM) on an efficient modulation system with high data rate and low energy consumption are presented in this paper. Simulation results showed that the performance achieved by the SVM detector is comparable to that of a conventional threshold decision (TD) detector. The two detectors detect the received signals together with the special impacting filter (SIF), which can improve the energy utilization efficiency. However, unlike the TD detector, the SVM detector concentrates not only on reducing the BER of the detector, but also on providing accurate posterior probability estimates (PPEs), which can be used as soft inputs of the LDPC decoder. The complexity of this detector is considered in this paper by using four features and simplifying the decision function. In addition, a bandwidth-efficient transmission is analyzed with both the SVM and TD detectors. The SVM detector is more robust to the sampling rate than the TD detector. We find that the SVM is suitable for extended binary phase shift keying (EBPSK) signal detection and can provide accurate posterior probabilities for LDPC decoding. PMID:23539034

  1. Spinstand demonstration of areal density enhancement using two-dimensional magnetic recording (invited)

    NASA Astrophysics Data System (ADS)

    Lippman, Thomas; Brockie, Richard; Coker, Jon; Contreras, John; Galbraith, Rick; Garzon, Samir; Hanson, Weldon; Leong, Tom; Marley, Arley; Wood, Roger; Zakai, Rehan; Zolla, Howard; Duquette, Paul; Petrizzi, Joe

    2015-05-01

    Exponential growth of the areal density has driven the magnetic recording industry for almost sixty years. But now areal density growth is slowing down, suggesting that current technologies are reaching their fundamental limit. The next generation of recording technologies, namely, energy-assisted writing and bit-patterned media, remains just over the horizon. Two-Dimensional Magnetic Recording (TDMR) is a promising new approach, enabling continued areal density growth with only modest changes to the heads and recording electronics. We demonstrate a first generation implementation of TDMR by using a dual-element read sensor to improve the recovery of data encoded by a conventional low-density parity-check (LDPC) channel. The signals are combined with a 2D equalizer into a single modified waveform that is decoded by a standard LDPC channel. Our detection hardware can perform simultaneous measurement of the pre- and post-combined error rate information, allowing one set of measurements to assess the absolute areal density capability of the TDMR system as well as the gain over a conventional shingled magnetic recording system with identical components. We discuss areal density measurements using this hardware and demonstrate gains exceeding five percent based on experimental dual reader components.
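
    The signal-combination step can be illustrated with a one-dimensional, two-input linear equalizer trained by least squares on a known data waveform; this is a simplified stand-in for the paper's 2D equalizer (the tap count and training scheme are our assumptions):

        import numpy as np

        def train_two_reader_combiner(r1, r2, target, taps=11):
            """Solve for FIR weights that combine delayed taps of two read-back
            waveforms into one waveform approximating the known target."""
            half = taps // 2
            rows = [np.concatenate([r1[k - half:k + half + 1], r2[k - half:k + half + 1]])
                    for k in range(half, len(target) - half)]
            w, *_ = np.linalg.lstsq(np.array(rows), target[half:len(target) - half], rcond=None)
            return w   # apply with the same tap stacking to equalize new sectors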

  2. Chirp- and random-based coded ultrasonic excitation for localized blood-brain barrier opening

    PubMed Central

    Kamimura, HAS; Wang, S; Wu, S-Y; Karakatsani, ME; Acosta, C; Carneiro, AAO; Konofagou, EE

    2015-01-01

    Chirp- and random-based coded excitation methods have been proposed to reduce standing wave formation and improve focusing of transcranial ultrasound. However, no clear evidence has been shown to support the benefits of these ultrasonic excitation sequences in vivo. This study evaluates the chirp and periodic selection of random frequency (PSRF) coded-excitation methods for opening the blood-brain barrier (BBB) in mice. Three groups of mice (n=15) were injected with polydisperse microbubbles and sonicated in the caudate putamen using the chirp/PSRF coded (bandwidth: 1.5-1.9 MHz, peak negative pressure: 0.52 MPa, duration: 30 s) or standard ultrasound (frequency: 1.5 MHz, pressure: 0.52 MPa, burst duration: 20 ms, duration: 5 min) sequences. T1-weighted contrast-enhanced MRI scans were performed to quantitatively analyze focused-ultrasound-induced BBB opening. The mean opening volumes evaluated from the MRI were 9.38±5.71 mm3, 8.91±3.91 mm3 and 35.47±5.10 mm3 for the chirp, random and regular sonications, respectively. The mean cavitation levels were 55.40±28.43 V·s, 63.87±29.97 V·s and 356.52±257.15 V·s for the chirp, random and regular sonications, respectively. The chirp and PSRF coded pulsing sequences improved the BBB opening localization by inducing lower cavitation levels and smaller opening volumes compared to results of the regular sonication technique. Larger bandwidths were associated with more focused targeting but were limited by the frequency response of the transducer, the skull attenuation and the microbubbles' optimal frequency range. The coded methods could therefore facilitate highly localized drug delivery as well as benefit other transcranial ultrasound techniques that use higher pressure levels and higher precision to induce the necessary bioeffects in a brain region while avoiding damage to the surrounding healthy tissue. PMID:26394091
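
    The chirp sequence itself is simple to generate. A sketch of one sweep over the reported 1.5-1.9 MHz band (the sampling rate, sweep duration, and repetition scheme are our assumptions):

        import numpy as np
        from scipy.signal import chirp

        fs = 20e6                               # sampling rate, well above 1.9 MHz
        sweep_len = 1e-3                        # one 1 ms linear sweep, repeated as needed
        t = np.arange(0.0, sweep_len, 1.0 / fs)
        pulse = chirp(t, f0=1.5e6, f1=1.9e6, t1=sweep_len, method="linear")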

  3. Multiplier Architecture for Coding Circuits

    NASA Technical Reports Server (NTRS)

    Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.

    1986-01-01

    Multipliers based on a new algorithm for Galois-field (GF) arithmetic are regular and expandable. Pipeline structures are used for computing both multiplications and inverses. The designs are suitable for implementation in very-large-scale integrated (VLSI) circuits. This general type of inverter and multiplier architecture is especially useful in performing the finite-field arithmetic of Reed-Solomon error-correcting codes and of some cryptographic algorithms.
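
    The brief describes a VLSI architecture; the field arithmetic it implements can be shown in software. A sketch of GF(2^8) multiplication with a reduction polynomial commonly used by Reed-Solomon codes (the choice of 0x11D is our assumption, not stated in the brief):

        def gf256_mul(a, b, poly=0x11D):
            """Multiply in GF(2^8): shift-and-add (XOR) with modular reduction
            by x^8 + x^4 + x^3 + x^2 + 1 (0x11D)."""
            r = 0
            while b:
                if b & 1:
                    r ^= a
                b >>= 1
                a <<= 1
                if a & 0x100:
                    a ^= poly
            return r

        assert gf256_mul(0x02, 0x80) == 0x1D   # x * x^7 = x^8 = x^4+x^3+x^2+1 mod poly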

  4. Optimal design of FIR triplet halfband filter bank and application in image coding.

    PubMed

    Kha, H H; Tuan, H D; Nguyen, T Q

    2011-02-01

    This correspondence proposes an efficient semidefinite programming (SDP) method for the design of a class of linear phase finite impulse response triplet halfband filter banks whose filters have optimal frequency selectivity for a prescribed regularity order. The design problem is formulated as the minimization of the least square error subject to peak error constraints and regularity constraints. By using the linear matrix inequality characterization of the trigonometric semi-infinite constraints, it can then be exactly cast as a SDP problem with a small number of variables and, hence, can be solved efficiently. Several design examples of the triplet halfband filter bank are provided for illustration and comparison with previous works. Finally, the image coding performance of the filter bank is presented.

  5. A high throughput architecture for a low complexity soft-output demapping algorithm

    NASA Astrophysics Data System (ADS)

    Ali, I.; Wasenmüller, U.; Wehn, N.

    2015-11-01

    Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and therefore they are part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits, with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high-order modulation systems, and therefore low-complexity demapping algorithms are indispensable in low-power receivers. In the presence of multiple wireless communication standards, where each standard defines multiple modulation schemes, there is a need for an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is to achieve a very high throughput in doubly iterative systems, for instance, MIMO and code-aided synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low-complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high-throughput, flexible, and area-efficient architecture. We describe architectures to execute the investigated algorithms and implement them on an FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture, based on the best low-complexity algorithm identified, that delivers a throughput of 166 Msymbols/second for Gray-mapped 16-QAM modulation on a Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs, and 2 DSP48Es.
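
    A standard low-complexity demapping approach of the kind the paper evaluates is the max-log approximation, where each bit LLR is a difference of minimum squared distances. A sketch for Gray-mapped 16-QAM (the constellation labeling and LLR sign convention are ours):

        import numpy as np

        LEVELS = {0b00: -3, 0b01: -1, 0b11: 1, 0b10: 3}            # Gray code per axis
        CONST = np.array([complex(LEVELS[s >> 2], LEVELS[s & 3])
                          for s in range(16)]) / np.sqrt(10)        # unit average energy
        BITS = np.array([[(s >> i) & 1 for i in range(3, -1, -1)] for s in range(16)])

        def maxlog_llr(y, noise_var):
            """Max-log LLRs: for each bit, the difference between the minimum
            scaled distances to the bit=1 and bit=0 sub-constellations."""
            d2 = np.abs(y[:, None] - CONST[None, :]) ** 2 / noise_var   # (N, 16)
            return np.stack([d2[:, BITS[:, i] == 1].min(axis=1)
                             - d2[:, BITS[:, i] == 0].min(axis=1)       # >0 favors bit 0
                             for i in range(4)], axis=1)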

  6. NASA Tech Briefs, September 2009

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Topics covered include: Filtering Water by Use of Ultrasonically Vibrated Nanotubes; Computer Code for Nanostructure Simulation; Functionalizing CNTs for Making Epoxy/CNT Composites; Improvements in Production of Single-Walled Carbon Nanotubes; Progress Toward Sequestering Carbon Nanotubes in PmPV; Two-Stage Variable Sample-Rate Conversion System; Estimating Transmitted-Signal Phase Variations for Uplink Array Antennas; Board Saver for Use with Developmental FPGAs; Circuit for Driving Piezoelectric Transducers; Digital Synchronizer without Metastability; Compact, Low-Overhead, MIL-STD-1553B Controller; Parallel-Processing CMOS Circuitry for M-QAM and 8PSK TCM; Differential InP HEMT MMIC Amplifiers Embedded in Waveguides; Improved Aerogel Vacuum Thermal Insulation; Fluoroester Co-Solvents for Low-Temperature Li+ Cells; Using Volcanic Ash to Remove Dissolved Uranium and Lead; High-Efficiency Artificial Photosynthesis Using a Novel Alkaline Membrane Cell; Silicon Wafer-Scale Substrate for Microshutters and Detector Arrays; Micro-Horn Arrays for Ultrasonic Impedance Matching; Improved Controller for a Three-Axis Piezoelectric Stage; Nano-Pervaporation Membrane with Heat Exchanger Generates Medical-Grade Water; Micro-Organ Devices; Nonlinear Thermal Compensators for WGM Resonators; Dynamic Self-Locking of an OEO Containing a VCSEL; Internal Water Vapor Photoacoustic Calibration; Mid-Infrared Reflectance Imaging of Thermal-Barrier Coatings; Improving the Visible and Infrared Contrast Ratio of Microshutter Arrays; Improved Scanners for Microscopic Hyperspectral Imaging; Rate-Compatible LDPC Codes with Linear Minimum Distance; PrimeSupplier Cross-Program Impact Analysis and Supplier Stability Indicator Simulation Model; Integrated Planning for Telepresence With Time Delays; Minimizing Input-to-Output Latency in Virtual Environment; Battery Cell Voltage Sensing and Balancing Using Addressable Transformers; Gaussian and Lognormal Models of Hurricane Gust Factors; Simulation of Attitude and Trajectory Dynamics and Control of Multiple Spacecraft; Integrated Modeling of Spacecraft Touch-and-Go Sampling; Spacecraft Station-Keeping Trajectory and Mission Design Tools; Efficient Model-Based Diagnosis Engine; and DSN Simulator.

  7. Image transmission system using adaptive joint source and channel decoding

    NASA Astrophysics Data System (ADS)

    Liu, Weiliang; Daut, David G.

    2005-03-01

    In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error-resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source-decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel-decoded bits are then sent to the JPEG2000 decoder. Due to the error-resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder. The log-likelihood ratios (LLRs) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. That is, for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the non-source-controlled decoding method by up to 5 dB in terms of PSNR for various reconstructed images.
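
    The feedback step can be sketched as a reweighting of channel LLRs between iterations. The weighting schedule below is invented purely for illustration; the paper designs its factor from observed decoding statistics as a function of channel SNR:

        import numpy as np

        def reweight_llrs(llr, known_correct, known_error, snr_db):
            """Boost LLRs of bits the JPEG2000 decoder verified as correct and
            attenuate those flagged as erroneous; the weight grows as SNR drops."""
            w = 1.0 + 0.5 * max(0.0, 5.0 - snr_db)   # illustrative schedule only
            out = llr.copy()
            out[known_correct] *= w
            out[known_error] /= w
            return out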

  8. Inclusion of pressure and flow in a new 3D MHD equilibrium code

    NASA Astrophysics Data System (ADS)

    Raburn, Daniel; Fukuyama, Atsushi

    2012-10-01

    Flow and nonsymmetric effects can play a large role in plasma equilibria and energy confinement. A concept for such a 3D equilibrium code was developed and presented in 2011. The code is called the Kyoto ITerative Equilibrium Solver (KITES) [1], and the concept is based largely on the PIES code [2]. More recently, the work-in-progress KITES code was used to calculate force-free equilibria. Here, progress and results on the inclusion of pressure and flow in the code are presented. [1] Daniel Raburn and Atsushi Fukuyama, Plasma and Fusion Research: Regular Articles, 7:240381 (2012). [2] H. S. Greenside, A. H. Reiman, and A. Salas, J. Comput. Phys. 81(1):102-136 (1989).

  9. CFD analysis of turbopump volutes

    NASA Technical Reports Server (NTRS)

    Ascoli, Edward P.; Chan, Daniel C.; Darian, Armen; Hsu, Wayne W.; Tran, Ken

    1993-01-01

    An effort is underway to develop a procedure for the regular use of CFD analysis in the design of turbopump volutes. Airflow data to be taken at NASA Marshall will be used to validate the CFD code and overall procedure. Initial focus has been on preprocessing (geometry creation, translation, and grid generation). Volute geometries have been acquired electronically and imported into the CATIA CAD system and RAGGS (Rockwell Automated Grid Generation System) via the IGES standard. An initial grid topology has been identified and grids have been constructed for turbine inlet and discharge volutes. For CFD analysis of volutes to be used regularly, a procedure must be defined to meet engineering design needs in a timely manner. Thus, a compromise must be established between making geometric approximations, the selection of grid topologies, and possible CFD code enhancements. While the initial grid developed approximated the volute tongue with a zero thickness, final computations should more accurately account for the geometry in this region. Additionally, grid topologies will be explored to minimize skewness and high aspect ratio cells that can affect solution accuracy and slow code convergence. Finally, as appropriate, code modifications will be made to allow for new grid topologies in an effort to expedite the overall CFD analysis process.

  10. Exposing Vital Forensic Artifacts of USB Devices in the Windows 10 Registry

    DTIC Science & Technology

    2015-06-01

    Digital media devices are regularly seized pursuant to criminal investigations and Microsoft Windows is the most commonly encountered… digital footprints available on seized computers that assist in re-creating a crime scene and telling the story of the events that occurred. Part of this…

  11. Group-sparse representation with dictionary learning for medical image denoising and fusion.

    PubMed

    Li, Shutao; Yin, Haitao; Fang, Leyuan

    2012-12-01

    Recently, sparse representation has attracted a lot of interest in various areas. However, the standard sparse representation does not consider the intrinsic structure, i.e., that the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation that considers the geometrical structure of the space spanned by the atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of the atoms is modeled as the graph regularization. Then, combining group sparsity and graph regularization, the DL-GSGR is presented, which is solved by alternating the group sparse coding and dictionary updating. In this way, the group coherence of the learned dictionary can be made small enough that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to preserve the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of our proposed denoising and fusion approaches.

  12. A MATLAB based 3D modeling and inversion code for MT data

    NASA Astrophysics Data System (ADS)

    Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.

    2017-07-01

    The development of a MATLAB based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: grid generator code and modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs core computations in modular form - forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems like Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it more compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities the results of inversion for two complex models are presented.

  13. QR Codes as Mobile Learning Tools for Labor Room Nurses at the San Pablo Colleges Medical Center

    ERIC Educational Resources Information Center

    Del Rosario-Raymundo, Maria Rowena

    2017-01-01

    Purpose: The purpose of this paper is to explore the use of QR codes as mobile learning tools and examine factors that impact on their usefulness, acceptability and feasibility in assisting the nurses' learning. Design/Methodology/Approach: Study participants consisted of 14 regular, full-time, board-certified LR nurses. Over a two-week period,…

  14. Current Research on Non-Coding Ribonucleic Acid (RNA).

    PubMed

    Wang, Jing; Samuels, David C; Zhao, Shilin; Xiang, Yu; Zhao, Ying-Yong; Guo, Yan

    2017-12-05

    Non-coding ribonucleic acid (RNA) has without a doubt captured the interest of biomedical researchers. The ability to screen the entire human genome with high-throughput sequencing technology has greatly enhanced the identification, annotation and prediction of the functionality of non-coding RNAs. In this review, we discuss the current landscape of non-coding RNA research and quantitative analysis. Non-coding RNA will be categorized into two major groups by size: long non-coding RNAs and small RNAs. In long non-coding RNA, we discuss regular long non-coding RNA, pseudogenes and circular RNA. In small RNA, we discuss miRNA, transfer RNA, piwi-interacting RNA, small nucleolar RNA, small nuclear RNA, Y RNA, signal recognition particle RNA, and 7SK RNA. We elaborate on the origin, detection method, and potential association with disease, putative functional mechanisms, and public resources for these non-coding RNAs. We aim to provide readers with a complete overview of non-coding RNAs and incite additional interest in non-coding RNA research.

  15. CSTEM User Manual

    NASA Technical Reports Server (NTRS)

    Hartle, M.; McKnight, R. L.

    2000-01-01

    This manual is a combination of a user manual, theory manual, and programmer manual. The reader is assumed to have some previous exposure to the finite element method. This manual is written with the idea that the CSTEM (Coupled Structural Thermal Electromagnetic-Computer Code) user needs to have a basic understanding of what the code is actually doing in order to properly use the code. For that reason, the underlying theory and methods used in the code are described to a basic level of detail. The manual gives an overview of the CSTEM code: how the code came into existence, a basic description of what the code does, and the order in which it happens (a flowchart). Appendices provide a listing and very brief description of every file used by the CSTEM code, including the type of file it is, what routine regularly accesses the file, and what routine opens the file, as well as special features included in CSTEM.

  16. PARAVT: Parallel Voronoi tessellation code

    NASA Astrophysics Data System (ADS)

    González, R. E.

    2016-10-01

    In this study, we present a new open source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is aimed at astrophysical applications, where VT densities and neighbors are widely used. There are several serial Voronoi tessellation codes; however, no open-source, parallel implementations are available to handle the large number of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented with MPI, and the VT is computed using the Qhull library. Domain decomposition takes into account consistent boundary computation between tasks, and includes periodic conditions. In addition, the code computes the neighbor list, Voronoi density, Voronoi cell volume, and density gradient for each particle, as well as densities on a regular grid. The code implementation and user guide are publicly available at https://github.com/regonzar/paravt.
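
    The per-particle quantities PARAVT reports (neighbor lists and Voronoi densities) can be reproduced serially with standard tools. A 2D sketch using SciPy rather than the code's MPI/Qhull machinery (the boundary handling and the inverse-area density definition are our simplifications):

        import numpy as np
        from scipy.spatial import Voronoi, ConvexHull

        pts = np.random.rand(500, 2)
        vor = Voronoi(pts)

        # Neighbor list: pairs of input points sharing a Voronoi facet
        neighbors = {i: set() for i in range(len(pts))}
        for a, b in vor.ridge_points:
            neighbors[a].add(b); neighbors[b].add(a)

        # Voronoi density proxy: inverse cell area, bounded cells only
        density = np.full(len(pts), np.nan)
        for i, reg in enumerate(vor.point_region):
            verts = vor.regions[reg]
            if -1 not in verts and len(verts) > 2:                # skip unbounded cells
                density[i] = 1.0 / ConvexHull(vor.vertices[verts]).volume  # area in 2D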

  17. 78 FR 57525 - Suspension of Community Eligibility

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-19

    ... participation status of a community can be obtained from FEMA's Community Status Book (CSB). The CSB is..., Susp.. *do = Ditto. Code for reading third column: Emerg. --Emergency; Reg. --Regular; Susp. --Susp...

  18. 78 FR 69001 - Suspension of Community Eligibility

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-18

    ... participation status of a community can be obtained from FEMA's Community Status Book (CSB). The CSB is.... *-do- =Ditto. Code for reading third column: Emerg.--Emergency; Reg.--Regular; Susp.--Suspension. Dated...

  19. 77 FR 53775 - Suspension of Community Eligibility

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-04

    ... participation status of a community can be obtained from FEMA's Community Status Book (CSB). The CSB is...; September 5, 2012, Susp.. *do = Ditto. Code for reading third column: Emerg.--Emergency; Reg.--Regular; Susp...

  20. Parallel scalability and efficiency of vortex particle method for aeroelasticity analysis of bluff bodies

    NASA Astrophysics Data System (ADS)

    Tolba, Khaled Ibrahim; Morgenthal, Guido

    2018-01-01

    This paper presents an analysis of the scalability and efficiency of a simulation framework based on the vortex particle method. The code is applied for the numerical aerodynamic analysis of line-like structures. The numerical code runs on multicore CPU and GPU architectures using OpenCL framework. The focus of this paper is the analysis of the parallel efficiency and scalability of the method being applied to an engineering test case, specifically the aeroelastic response of a long-span bridge girder at the construction stage. The target is to assess the optimal configuration and the required computer architecture, such that it becomes feasible to efficiently utilise the method within the computational resources available for a regular engineering office. The simulations and the scalability analysis are performed on a regular gaming type computer.

  1. CodeSlinger: a case study in domain-driven interactive tool design for biomedical coding scheme exploration and use.

    PubMed

    Flowers, Natalie L

    2010-01-01

    CodeSlinger is a desktop application that was developed to aid medical professionals in the intertranslation, exploration, and use of biomedical coding schemes. The application was designed to provide a highly intuitive, easy-to-use interface that simplifies a complex business problem: a set of time-consuming, laborious tasks that were regularly performed by a group of medical professionals involving manually searching coding books, searching the Internet, and checking documentation references. A workplace observation session with a target user revealed the details of the current process and a clear understanding of the business goals of the target user group. These goals drove the design of the application's interface, which centers on searches for medical conditions and displays the codes found in the application's database that represent those conditions. The interface also allows the exploration of complex conceptual relationships across multiple coding schemes.

  2. Numerical and analytical bounds on threshold error rates for hypergraph-product codes

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Prabhakar, Sanjay; Dumer, Ilya; Pryadko, Leonid P.

    2018-06-01

    We study analytically and numerically the decoding properties of finite-rate hypergraph-product quantum low-density parity-check codes obtained from random (3,4)-regular Gallager codes, with a simple model of independent X and Z errors. Several nontrivial lower and upper bounds for the decodable region are constructed analytically by analyzing the properties of the homological difference, equal to minus the logarithm of the maximum-likelihood decoding probability for a given syndrome. Numerical results include an upper bound for the decodable region from specific heat calculations in associated Ising models and a minimum-weight decoding threshold of approximately 7%.
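
    The hypergraph-product construction itself is a few lines of linear algebra over GF(2). A sketch (ours), using small repetition-code checks in place of the random (3,4)-regular Gallager matrices the paper samples:

        import numpy as np

        def hypergraph_product(h1, h2):
            """CSS stabilizer matrices of the hypergraph-product code of two
            classical parity-check matrices, over GF(2)."""
            r1, n1 = h1.shape
            r2, n2 = h2.shape
            hx = np.hstack([np.kron(h1, np.eye(n2, dtype=int)),
                            np.kron(np.eye(r1, dtype=int), h2.T)]) % 2
            hz = np.hstack([np.kron(np.eye(n1, dtype=int), h2),
                            np.kron(h1.T, np.eye(r2, dtype=int))]) % 2
            assert not ((hx @ hz.T) % 2).any()   # X and Z stabilizers commute
            return hx, hz

        h = np.array([[1, 1, 0], [0, 1, 1]])     # toy classical check matrix
        hx, hz = hypergraph_product(h, h)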

  3. Paper-Based Textbooks with Audio Support for Print-Disabled Students.

    PubMed

    Fujiyoshi, Akio; Ohsawa, Akiko; Takaira, Takuya; Tani, Yoshiaki; Fujiyoshi, Mamoru; Ota, Yuko

    2015-01-01

    Utilizing invisible 2-dimensional codes and digital audio players with a 2-dimensional code scanner, we developed paper-based textbooks with audio support for students with print disabilities, called "multimodal textbooks." Multimodal textbooks can be read with the combination of the two modes: "reading printed text" and "listening to the speech of the text from a digital audio player with a 2-dimensional code scanner." Since multimodal textbooks look the same as regular textbooks and the price of a digital audio player is reasonable (about 30 euro), we think multimodal textbooks are suitable for students with print disabilities in ordinary classrooms.

  4. Functional interrogation of non-coding DNA through CRISPR genome editing

    PubMed Central

    Canver, Matthew C.; Bauer, Daniel E.; Orkin, Stuart H.

    2017-01-01

    Methodologies to interrogate non-coding regions have lagged behind coding regions despite comprising the vast majority of the genome. However, the rapid evolution of clustered regularly interspaced short palindromic repeats (CRISPR)-based genome editing has provided a multitude of novel techniques for laboratory investigation including significant contributions to the toolbox for studying non-coding DNA. CRISPR-mediated loss-of-function strategies rely on direct disruption of the underlying sequence or repression of transcription without modifying the targeted DNA sequence. CRISPR-mediated gain-of-function approaches similarly benefit from methods to alter the targeted sequence through integration of customized sequence into the genome as well as methods to activate transcription. Here we review CRISPR-based loss- and gain-of-function techniques for the interrogation of non-coding DNA. PMID:28288828

  5. The Role of the National Training Center during Full Mobilization

    DTIC Science & Technology

    1991-06-07

    …resources are proposed by this study. [Report documentation page residue; recoverable fields:] Subject terms: National Training Center (NTC); Training; Mobilization; Combat. Number of pages: 217. …Regular Army and a transfer of their roles to the Reserve Component. The end of the Cold War makes future mobilization needs less likely and argues for…

  6. Routines.

    DTIC Science & Technology

    1985-05-01

    Agre, P. E., "Routines," MIT Artificial Intelligence Laboratory, Cambridge, MA, May 1985. [OCR-garbled documentation page; recoverable fragments:] Process representation. …Regularities in the world give rise to regularities in the way… …source code can wreak substantial havoc…

  7. Sparse Coding and Counting for Robust Visual Tracking

    PubMed Central

    Liu, Risheng; Wang, Jing; Shang, Xiaoke; Wang, Yiyang; Su, Zhixun; Cai, Yu

    2016-01-01

    In this paper, we propose a novel sparse coding and counting method under a Bayesian framework for visual tracking. In contrast to existing methods, the proposed method employs the combination of the L0 and L1 norms to regularize the linear coefficients of an incrementally updated linear basis. The sparsity constraint enables the tracker to handle difficult challenges effectively, such as occlusion or image corruption. To achieve real-time processing, we propose a fast and efficient numerical algorithm for solving the proposed model. Although it is an NP-hard problem, the proposed accelerated proximal gradient (APG) approach is guaranteed to converge to a solution quickly. Besides, we provide a closed-form solution for combining the L0- and L1-regularized representation to obtain better sparsity. Experimental results on challenging video sequences demonstrate that the proposed method achieves state-of-the-art results in both accuracy and speed. PMID:27992474
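
    The accelerated proximal gradient idea is easiest to see in the pure L1 case, where the proximal operator is soft-thresholding. A sketch (ours; the paper's combined L0+L1 regularizer and its closed-form solution are not reproduced here):

        import numpy as np

        def fista_l1(A, b, lam, iters=200):
            """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
            L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
            x = z = np.zeros(A.shape[1]); t = 1.0
            for _ in range(iters):
                g = z - A.T @ (A @ z - b) / L                              # gradient step
                x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
                t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
                z = x_new + ((t - 1.0) / t_new) * (x_new - x)              # Nesterov momentum
                x, t = x_new, t_new
            return x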

  8. Astrophysics Source Code Library: Incite to Cite!

    NASA Astrophysics Data System (ADS)

    DuPrie, K.; Allen, A.; Berriman, B.; Hanisch, R. J.; Mink, J.; Nemiroff, R. J.; Shamir, L.; Shortridge, K.; Taylor, M. B.; Teuben, P.; Wallen, J. F.

    2014-05-01

    The Astrophysics Source Code Library (ASCL, http://ascl.net/) is an on-line registry of over 700 source codes that are of interest to astrophysicists, with more being added regularly. The ASCL actively seeks out codes as well as accepting submissions from the code authors, and all entries are citable and indexed by ADS. All codes have been used to generate results published in or submitted to a refereed journal and are available either via a download site or from an identified source. In addition to being the largest directory of scientist-written astrophysics programs available, the ASCL is also an active participant in the reproducible research movement, with presentations at various conferences, numerous blog posts, and a journal article. This poster provides a description of the ASCL and the changes that we are starting to see in the astrophysics community as a result of the work we are doing.

  9. Cervical vertebral maturation: An objective and transparent code staging system applied to a 6-year longitudinal investigation.

    PubMed

    Perinetti, Giuseppe; Bianchet, Alberto; Franchi, Lorenzo; Contardo, Luca

    2017-05-01

    To date, little information is available regarding individual cervical vertebral maturation (CVM) morphologic changes. Moreover, contrasting results regarding the repeatability of the CVM method call for the use of objective and transparent reporting procedures. In this study, we used a rigorous morphometric objective CVM code staging system, called the "CVM code," which was applied to a 6-year longitudinal circumpubertal analysis of individual CVM morphologic changes to find cases outside the reported norms and to analyze individual maturation processes. From the files of the Oregon Growth Study, 32 subjects (17 boys, 15 girls) with 6 annual lateral cephalograms taken from 10 to 16 years of age were included, for a total of 221 recordings. A customized cephalometric analysis was used, and each recording was converted into a CVM code according to the concavities of cervical vertebrae (C) C2 through C4 and the shapes of C3 and C4. The retrieved CVM codes, either falling within the reported norms (regular cases) or not (exception cases), were also converted into CVM stages. Overall, 31 exception cases (14%) were seen, with most of them accounting for pubertal CVM stage 4. The overall durations of CVM stages 2 to 4 were about 1 year, even though only 4 subjects had regular annual durations of CVM stages 2 to 5. Whereas the overall CVM changes are consistent with previous reports, intersubject variability must be considered when dealing with individual treatment timing. Future research on CVM may take advantage of the CVM code system. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  10. Kangaroo – A pattern-matching program for biological sequences

    PubMed Central

    2002-01-01

    Background Biologists are often interested in performing a simple database search to identify proteins or genes that contain a well-defined sequence pattern. Many databases do not provide straightforward or readily available query tools to perform simple searches, such as identifying transcription binding sites, protein motifs, or repetitive DNA sequences. However, in many cases simple pattern-matching searches can reveal a wealth of information. We present in this paper a regular expression pattern-matching tool that was used to identify short repetitive DNA sequences in human coding regions for the purpose of identifying potential mutation sites in mismatch repair deficient cells. Results Kangaroo is a web-based regular expression pattern-matching program that can search for patterns in DNA, protein, or coding region sequences in ten different organisms. The program is implemented to facilitate a wide range of queries with no restriction on the length or complexity of the query expression. The program is accessible on the web at http://bioinfo.mshri.on.ca/kangaroo/ and the source code is freely distributed at http://sourceforge.net/projects/slritools/. Conclusion A low-level simple pattern-matching application can prove to be a useful tool in many research settings. For example, Kangaroo was used to identify potential genetic targets in a human colorectal cancer variant that is characterized by a high frequency of mutations in coding regions containing mononucleotide repeats. PMID:12150718
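
    The kind of query described above reduces to a regular expression. A sketch of a mononucleotide-repeat search (the 8-repeat threshold and the example sequence are ours, for illustration only):

        import re

        seq = "ATGGCCAAAAAAAAAGCTTTTTTTTTTACGT"
        for m in re.finditer(r"A{8,}|C{8,}|G{8,}|T{8,}", seq):
            print(m.start(), m.group())   # position and run of each repeat found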

  11. Modelling Metamorphism by Abstract Interpretation

    NASA Astrophysics Data System (ADS)

    Dalla Preda, Mila; Giacobazzi, Roberto; Debray, Saumya; Coogan, Kevin; Townsend, Gregg M.

    Metamorphic malware apply semantics-preserving transformations to their own code in order to foil detection systems based on signature matching. In this paper, we consider the problem of automatically extracting metamorphic signatures from such malware. We introduce a semantics for self-modifying code, later called phase semantics, and prove its correctness by showing that it is an abstract interpretation of the standard trace semantics. Phase semantics precisely models the metamorphic code behavior by providing a set of traces of programs which correspond to the possible evolutions of the metamorphic code during execution. We show that metamorphic signatures can be automatically extracted by abstract interpretation of the phase semantics, and that regular metamorphism can be modelled as a finite state automata abstraction of the phase semantics.

  12. 5 CFR 532.223 - Establishments included in regular nonappropriated fund surveys.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... employees in the prescribed industries within a survey area must be included in the survey universe. Establishments in NAICS codes 4471, 4542, 71391, and 71395 must be included in the survey universe if they have...

  13. 5 CFR 532.223 - Establishments included in regular nonappropriated fund surveys.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... employees in the prescribed industries within a survey area must be included in the survey universe. Establishments in NAICS codes 4471, 4542, 71391, and 71395 must be included in the survey universe if they have...

  14. 5 CFR 532.223 - Establishments included in regular nonappropriated fund surveys.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... employees in the prescribed industries within a survey area must be included in the survey universe. Establishments in NAICS codes 4471, 4542, 71391, and 71395 must be included in the survey universe if they have...

  15. 5 CFR 532.223 - Establishments included in regular nonappropriated fund surveys.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... employees in the prescribed industries within a survey area must be included in the survey universe. Establishments in NAICS codes 4471, 4542, 71391, and 71395 must be included in the survey universe if they have...

  16. 5 CFR 532.223 - Establishments included in regular nonappropriated fund surveys.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... employees in the prescribed industries within a survey area must be included in the survey universe. Establishments in NAICS codes 4471, 4542, 71391, and 71395 must be included in the survey universe if they have...

  17. Functional interrogation of non-coding DNA through CRISPR genome editing.

    PubMed

    Canver, Matthew C; Bauer, Daniel E; Orkin, Stuart H

    2017-05-15

    Methodologies to interrogate non-coding regions have lagged behind coding regions despite comprising the vast majority of the genome. However, the rapid evolution of clustered regularly interspaced short palindromic repeats (CRISPR)-based genome editing has provided a multitude of novel techniques for laboratory investigation including significant contributions to the toolbox for studying non-coding DNA. CRISPR-mediated loss-of-function strategies rely on direct disruption of the underlying sequence or repression of transcription without modifying the targeted DNA sequence. CRISPR-mediated gain-of-function approaches similarly benefit from methods to alter the targeted sequence through integration of customized sequence into the genome as well as methods to activate transcription. Here we review CRISPR-based loss- and gain-of-function techniques for the interrogation of non-coding DNA. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. On Frequency Offset Estimation Using the iNET Preamble in Frequency Selective Fading Channels

    DTIC Science & Technology

    2014-03-01

    ASM fields; (bottom) the relationship between the indexes of the received samples r(n), the signal samples s(n), the preamble samples p(n) and the short...frequency offset estimators for SOQPSK-TG equipped with the iNET preamble and operating in ISI channels. Four of the five estimators examined here are...sync marker (ASM), and data bits (an LDPC codeword). The availability of a preamble introduces the possibility of data-aided synchronization in

  19. 77 FR 37446 - Advisory Committee on the Medical Uses of Isotopes: Meeting Notice

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-21

    ...; and (6) update on domestic production of molybdenum-99. The regular meeting agenda is subject to..., Advisory Committee Management Officer. [FR Doc. 2012-15173 Filed 6-20-12; 8:45 am] BILLING CODE 7590-01-P ...

  1. Bandwidth-Efficient Communication through 225 MHz Ka-band Relay Satellite Channel

    NASA Technical Reports Server (NTRS)

    Downey, Joseph A.; Downey, James M.; Reinhart, Richard C.; Evans, Michael A.; Mortensen, Dale J.

    2016-01-01

    The communications and navigation space infrastructure of the National Aeronautics and Space Administration (NASA) consists of a constellation of relay satellites (called Tracking and Data Relay Satellites (TDRS)) and a global set of ground stations to receive and deliver data to researchers around the world from mission spacecraft throughout the solar system. Planning is underway to enhance and transform the infrastructure over the coming decade. Key to the upgrade will be the simultaneous and efficient use of relay transponders to minimize cost and operations while supporting science and exploration spacecraft. Efficient use of transponders necessitates bandwidth-efficient communications to best use and maximize data throughput within the allocated spectrum. Experiments conducted with NASA's Space Communication and Navigation (SCaN) Testbed on the International Space Station provide a unique opportunity to evaluate advanced communication techniques, such as bandwidth-efficient modulations, in an operational flight system. Demonstrations of these new techniques in realistic flight conditions provide critical experience and reduce the risk of using these techniques in future missions. Efficient use of spectrum is enabled by using high-order modulations coupled with efficient forward error correction codes. This paper presents a high-rate, bandwidth-efficient waveform operating over the 225 MHz Ka-band service of the TDRS System (TDRSS). The testing explores the application of Gaussian Minimum Shift Keying (GMSK), 2/4/8-phase shift keying (PSK) and 16/32-amplitude PSK (APSK) providing over three bits-per-second-per-Hertz (3 b/s/Hz) modulation combined with various LDPC encoding rates to maximize throughput. With a symbol rate of 200 Mbaud, coded data rates of 1000 Mbps were tested in the laboratory and up to 800 Mbps over the TDRS 225 MHz channel. This paper presents the high-rate waveform design, channel characteristics, performance results, compensation techniques for filtering and equalization, and architecture considerations going forward for efficient use of NASA's infrastructure.

  2. Embedding QR codes in tumor board presentations, enhancing educational content for oncology information management.

    PubMed

    Siderits, Richard; Yates, Stacy; Rodriguez, Arelis; Lee, Tina; Rimmer, Cheryl; Roche, Mark

    2011-01-01

    Quick Response (QR) codes are standard in supply management and appear with increasing frequency in advertisements. They are now present regularly in healthcare informatics and education. These 2-dimensional square bar codes, originally designed by the Toyota car company, are free of license and have a published international standard. The codes can be generated by free online software and the resulting images incorporated into presentations. The images can be scanned by "smart" phones and tablets using either the iOS or Android platforms, which link the device with the information represented by the QR code (uniform resource locator or URL, online video, text, v-calendar entries, short message service [SMS] and formatted text). Once linked to the device, the information can be viewed at any time after the original presentation, saved in the device or to a Web-based "cloud" repository, printed, or shared with others via email or Bluetooth file transfer. This paper describes how we use QR codes in our tumor board presentations, discusses their benefits and how they differ from plain Web links, and explains how QR codes facilitate the distribution of educational content.

  3. Fast Sparse Coding for Range Data Denoising with Sparse Ridges Constraint.

    PubMed

    Gao, Zhi; Lao, Mingjie; Sang, Yongsheng; Wen, Fei; Ramesh, Bharath; Zhai, Ruifang

    2018-05-06

    Light detection and ranging (LiDAR) sensors have been widely deployed on intelligent systems such as unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) to perform localization, obstacle detection, and navigation tasks. Thus, research into range data processing with competitive performance in terms of both accuracy and efficiency has attracted increasing attention. Sparse coding has revolutionized signal processing and led to state-of-the-art performance in a variety of applications. However, dictionary learning, which plays the central role in sparse coding techniques, is computationally demanding, resulting in its limited applicability in real-time systems. In this study, we propose sparse coding algorithms with a fixed pre-learned ridge dictionary to realize range data denoising via leveraging the regularity of laser range measurements in man-made environments. Experiments on both synthesized data and real data demonstrate that our method obtains accuracy comparable to that of sophisticated sparse coding methods, but with much higher computational efficiency.
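
    The fixed-dictionary idea is easy to sketch: skip dictionary learning entirely and code the noisy signal over a dictionary chosen in advance. The Python fragment below uses a DCT dictionary and orthogonal matching pursuit as generic stand-ins for the authors' pre-learned ridge dictionary and solver; the signal, noise level, and sparsity are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.fft import dct
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(0)
    n = 128
    D = dct(np.eye(n), norm="ortho")   # fixed orthonormal DCT dictionary (no learning step)
    clean = np.sin(2 * np.pi * 4 * np.arange(n) / n)
    noisy = clean + 0.1 * rng.standard_normal(n)

    # Sparse coding with the fixed dictionary: keep only a few active atoms.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False)
    omp.fit(D, noisy)
    denoised = D @ omp.coef_
    print("error before:", np.linalg.norm(noisy - clean))
    print("error after: ", np.linalg.norm(denoised - clean))
    ```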

  4. 77 FR 28786 - Disaster Assistance; Crisis Counseling Regular Program; Amendment to Regulation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-16

    ... individuals can call the National Suicide Prevention Lifeline at 1-800-273-TALK or via the Web at http://www.suicidepreventionlifeline.org . Callers are routed to a suicide prevention call center near them based on the area code from...

  5. 76 FR 59768 - Office of Commercial Space Transportation (AST); Notice of Availability and Request for Comment...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-27

    ... with the National Environmental Policy Act (NEPA) of 1969, 42 United States Code 4321-4347 (as amended... regular business hours at the following location: McGinley Memorial Library, 317 Main Street, McGregor, TX...

  6. Effects of Irregular Bridge Columns and Feasibility of Seismic Regularity

    NASA Astrophysics Data System (ADS)

    Thomas, Abey E.

    2018-05-01

    Bridges with unequal column heights exhibit one of the main irregularities in bridge design, particularly while negotiating steep valleys, making such bridges vulnerable to seismic action. The desirable behaviour of bridge columns under seismic loading is that they perform in a regular fashion, i.e. the capacity of each column is utilized evenly. This type of behaviour is often missing when the column heights are unequal along the length of the bridge, leaving the short columns to bear the maximum lateral load. In the present study, the effects of unequal column height on the global seismic performance of bridges are studied using pushover analysis. Codes such as CalTrans (Engineering service center, earthquake engineering branch, 2013) and EC-8 (EN 1998-2: design of structures for earthquake resistance. Part 2: bridges, European Committee for Standardization, Brussels, 2005) suggest seismic regularity criteria for achieving a regular seismic performance level at all the bridge columns. The feasibility of adopting these seismic regularity criteria, along with those proposed in the literature, is assessed for bridges designed as per the Indian Standards.

  7. A denoising algorithm for CT image using low-rank sparse coding

    NASA Astrophysics Data System (ADS)

    Lei, Yang; Xu, Dong; Zhou, Zhengyang; Wang, Tonghe; Dong, Xue; Liu, Tian; Dhabaan, Anees; Curran, Walter J.; Yang, Xiaofeng

    2018-03-01

    We propose a CT image denoising method based on low-rank sparse coding. The proposed method constructs an adaptive dictionary of image patches and estimates the sparse coding regularization parameters using a Bayesian interpretation. A low-rank approximation approach is used to simultaneously construct the dictionary and achieve sparse representation through clustering similar image patches. A variable-splitting scheme and a quadratic optimization are used to reconstruct the CT image from the achieved sparse coefficients. We tested this denoising technology using phantom, brain and abdominal CT images. The experimental results show that the proposed method delivers state-of-the-art denoising performance, both in terms of objective criteria and visual quality.

  8. POLYSHIFT Communications Software for the Connection Machine System CM-200

    DOE PAGES

    George, William; Brickner, Ralph G.; Johnsson, S. Lennart

    1994-01-01

    We describe the use and implementation of a polyshift function PSHIFT for circular shifts and end-off shifts. Polyshift is useful in many scientific codes using regular grids, such as finite difference codes in several dimensions, multigrid codes, molecular dynamics computations, and lattice gauge physics computations such as quantum chromodynamics (QCD) calculations. Our implementation of the PSHIFT function on the Connection Machine systems CM-2 and CM-200 offers a speedup of up to a factor of 3–4 compared with CSHIFT when the local data motion within a node is small. The PSHIFT routine is included in the Connection Machine Scientific Software Library (CMSSL).
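
    For readers who have not met the two shift flavours: a circular shift wraps data around the grid boundary, while an end-off shift discards it and fills the vacated cells with a constant. A small numpy illustration of the semantics (not the CM-200 implementation):

    ```python
    import numpy as np

    a = np.arange(8)   # one row of a regular grid

    def cshift(x, k):
        """Circular shift: data leaving one end re-enters at the other."""
        return np.roll(x, k)

    def eoshift(x, k, fill=0):
        """End-off shift: data leaving the grid is discarded; 'fill' enters."""
        out = np.full_like(x, fill)
        if k > 0:
            out[k:] = x[:-k]
        elif k < 0:
            out[:k] = x[-k:]
        else:
            out[:] = x
        return out

    print(cshift(a, 2))   # [6 7 0 1 2 3 4 5]
    print(eoshift(a, 2))  # [0 0 0 1 2 3 4 5]
    ```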

  9. Determination of the turbulence integral model parameters for a case of a coolant angular flow in regular rod-bundle

    NASA Astrophysics Data System (ADS)

    Bayaskhalanov, M. V.; Vlasov, M. N.; Korsun, A. S.; Merinov, I. G.; Philippov, M. Ph

    2017-11-01

    Research results are presented on the dependence of the “k-ε” turbulence integral model (TIM) parameters on the angle of coolant flow in a regular smooth cylindrical rod bundle. The TIM is intended for defining effective impulse and heat transport coefficients in the averaged heat and mass transfer equations for regular rod structures in an anisotropic porous media approximation. The TIM equations are obtained by volume-averaging the “k-ε” turbulence model equations over a periodic cell of the rod bundle. The water flow across the rod bundle at angles from 15 to 75 degrees was simulated by means of the ANSYS CFX code. As a result, the dependence of the TIM parameters on the flow angle was obtained.

  10. Signal optimization and analysis using PASSER V-07 : training workshop: code IPR006.

    DOT National Transportation Integrated Search

    2011-01-01

    The objective of this project was to conduct one pilot workshop and five regular workshops to teach the effective use of the enhanced PASSER V-07 arterial signal timing optimization software. PASSER V-07 and materials for conducting a one-day trainin...

  11. [Quality management and strategic consequences of assessing documentation and coding under the German Diagnostic Related Groups system].

    PubMed

    Schnabel, M; Mann, D; Efe, T; Schrappe, M; V Garrel, T; Gotzen, L; Schaeg, M

    2004-10-01

    The introduction of the German Diagnostic Related Groups (D-DRG) system requires redesigning administrative patient management strategies. Wrong coding leads to inaccurate grouping and endangers the reimbursement of treatment costs. This situation emphasizes the roles of documentation and coding as factors of economic success. The aims of this study were to assess the quantity and quality of initial documentation and coding (ICD-10 and OPS-301) and to find operative strategies to improve efficiency as well as strategic means to ensure optimal documentation and coding quality. In a prospective study, documentation and coding quality were evaluated in a standardized way by weekly assessment. Clinical data from 1385 inpatients were processed for initial correctness and quality of documentation and coding. Principal diagnoses were found to be accurate in 82.7% of cases, inexact in 7.1%, and wrong in 10.1%. Effects on financial returns occurred in 16% of cases. Based on these findings, an optimized, interdisciplinary, and multiprofessional workflow on medical documentation, coding, and data control was developed. A workflow incorporating regular assessment of documentation and coding quality is required by the DRG system to ensure efficient accounting of hospital services. Interdisciplinary and multiprofessional cooperation is recognized to be an important factor in establishing an efficient workflow in medical documentation and coding.

  12. Bijective transformation circular codes and nucleotide exchanging RNA transcription.

    PubMed

    Michel, Christian J; Seligmann, Hervé

    2014-04-01

    The C(3) self-complementary circular code X identified in genes of prokaryotes and eukaryotes is a set of 20 trinucleotides enabling reading frame retrieval and maintenance, i.e. a framing code (Arquès and Michel, 1996; Michel, 2012, 2013). Some mitochondrial RNAs correspond to DNA sequences when RNA transcription systematically exchanges nucleotides (Seligmann, 2013a,b). We study here the 23 bijective transformation codes ΠX of X, which may code nucleotide exchanging RNA transcription as suggested by this mitochondrial observation. The 23 bijective transformation codes ΠX are C(3) trinucleotide circular codes, and seven of them are also self-complementary. Furthermore, several correlations are observed between the Reading Frame Retrieval (RFR) probability of the bijective transformation codes ΠX and their different biological properties related to their numbers of RNAs in GenBank's EST database, their polymerization rate, their number of amino acids and the chirality of the amino acids they code. Results suggest that the circular code X, with the functions of reading frame retrieval and maintenance in regular RNA transcription, may also have, through its bijective transformation codes ΠX, the same functions in nucleotide exchanging RNA transcription. Associations with properties such as amino acid chirality suggest that the RFR of X and its bijective transformations molded the origins of the genetic code's machinery. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  13. 5 CFR 532.417 - Within-grade increases.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Armed Forces, in the Regular or Reserve Corps of the Public Health Service after June 30, 1960, or as a... 532.417 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PREVAILING... Code. This does not apply to prevailing rate employees within a Department of Defense or Coast Guard...

  14. On Quality and Measures in Software Engineering

    ERIC Educational Resources Information Center

    Bucur, Ion I.

    2006-01-01

    Complexity measures are mainly used to estimate vital information about reliability and maintainability of software systems from regular analysis of the source code. Such measures also provide constant feedback during a software project to assist the control of the development procedure. There exist several models to classify a software product's…

  15. ATHENA 3D: A finite element code for ultrasonic wave propagation

    NASA Astrophysics Data System (ADS)

    Rose, C.; Rupin, F.; Fouquet, T.; Chassignole, B.

    2014-04-01

    The understanding of wave propagation phenomena requires the use of robust numerical models. 3D finite element (FE) models are generally prohibitively time consuming, but advances in processor speed and memory are making them more and more competitive. In this context, EDF R&D developed the 3D version of the well-validated FE code ATHENA2D. The code is dedicated to the simulation of wave propagation in all kinds of elastic media and, in particular, heterogeneous and anisotropic materials like welds. It is based on solving the elastodynamic equations in the calculation zone, expressed in terms of stress and particle velocities. A particularity of the code is that the calculation domain is discretized on a regular Cartesian 3D mesh, while a defect of complex geometry can be described on a separate (2D) mesh using the fictitious domains method. This combines the speed of regular-mesh computation with the capability of modelling arbitrarily shaped defects. Furthermore, time evolution is handled with a quasi-explicit scheme, so that only small local linear systems have to be solved. The final reduction in computation time comes from the fact that ATHENA3D has been parallelized and adapted to the use of HPC resources. In this paper, the validation of the 3D FE model is discussed. A cross-validation of ATHENA 3D and CIVA is proposed for several inspection configurations. The performance in terms of calculation time is also presented for both local computer and computation cluster use.

  16. Fast Sparse Coding for Range Data Denoising with Sparse Ridges Constraint

    PubMed Central

    Lao, Mingjie; Sang, Yongsheng; Wen, Fei; Zhai, Ruifang

    2018-01-01

    Light detection and ranging (LiDAR) sensors have been widely deployed on intelligent systems such as unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) to perform localization, obstacle detection, and navigation tasks. Thus, research into range data processing with competitive performance in terms of both accuracy and efficiency has attracted increasing attention. Sparse coding has revolutionized signal processing and led to state-of-the-art performance in a variety of applications. However, dictionary learning, which plays the central role in sparse coding techniques, is computationally demanding, resulting in its limited applicability in real-time systems. In this study, we propose sparse coding algorithms with a fixed pre-learned ridge dictionary to realize range data denoising via leveraging the regularity of laser range measurements in man-made environments. Experiments on both synthesized data and real data demonstrate that our method obtains accuracy comparable to that of sophisticated sparse coding methods, but with much higher computational efficiency. PMID:29734793

  17. Generic effective source for scalar self-force calculations

    NASA Astrophysics Data System (ADS)

    Wardell, Barry; Vega, Ian; Thornburg, Jonathan; Diener, Peter

    2012-05-01

    A leading approach to the modeling of extreme mass ratio inspirals involves the treatment of the smaller mass as a point particle and the computation of a regularized self-force acting on that particle. In turn, this computation requires knowledge of the regularized retarded field generated by the particle. A direct calculation of this regularized field may be achieved by replacing the point particle with an effective source and solving directly a wave equation for the regularized field. This has the advantage that all quantities are finite and require no further regularization. In this work, we present a method for computing an effective source which is finite and continuous everywhere, and which is valid for a scalar point particle in arbitrary geodesic motion in an arbitrary background spacetime. We explain in detail various technical and practical considerations that underlie its use in several numerical self-force calculations. We consider as examples the cases of a particle in a circular orbit about Schwarzschild and Kerr black holes, and also the case of a particle following a generic timelike geodesic about a highly spinning Kerr black hole. We provide numerical C code for computing an effective source for various orbital configurations about Schwarzschild and Kerr black holes.

  18. Numerical ‘health check’ for scientific codes: the CADNA approach

    NASA Astrophysics Data System (ADS)

    Scott, N. S.; Jézéquel, F.; Denis, C.; Chesneaux, J.-M.

    2007-04-01

    Scientific computation has unavoidable approximations built into its very fabric. One important source of error that is difficult to detect and control is round-off error propagation which originates from the use of finite precision arithmetic. We propose that there is a need to perform regular numerical 'health checks' on scientific codes in order to detect the cancerous effect of round-off error propagation. This is particularly important in scientific codes that are built on legacy software. We advocate the use of the CADNA library as a suitable numerical screening tool. We present a case study to illustrate the practical use of CADNA in scientific codes that are of interest to the Computer Physics Communications readership. In doing so we hope to stimulate a greater awareness of round-off error propagation and present a practical means by which it can be analyzed and managed.
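
    A two-line demonstration of the kind of silent damage such a health check hunts for (plain Python rather than the CADNA library itself): repeatedly accumulating 0.1 in double precision drifts measurably, while a compensated summation does not.

    ```python
    import math

    naive = 0.0
    for _ in range(10**7):
        naive += 0.1                # each addition inherits a small rounding error

    reference = 10**7 * 0.1         # one correctly rounded operation
    print(naive - reference)                      # nonzero: accumulated round-off drift
    print(math.fsum([0.1] * 10**7) - reference)   # 0.0: compensated (exact) summation
    ```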

  19. Surface micromachined counter-meshing gears discrimination device

    DOEpatents

    Polosky, Marc A.; Garcia, Ernest J.; Allen, James J.

    2000-12-12

    A surface micromachined Counter-Meshing Gears (CMG) discrimination device which functions as a mechanically coded lock. Each of two CMG has a first portion of its perimeter devoted to continuous driving teeth that mesh with respective pinion gears. Each CMG also has a second portion of its perimeter devoted to regularly spaced discrimination gear teeth that extend outwardly on at least one of three levels of the CMG. The discrimination gear teeth are designed so as to pass each other without interference only if the correct sequence of partial rotations of the CMG occurs in response to a coded series of rotations from the pinion gears. A 24-bit code is normally input to unlock the device. Once unlocked, the device provides a path for an energy or information signal to pass through the device. The device is designed to immediately lock up if any portion of the 24-bit code is incorrect.
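
    The lock-on-first-error behaviour translates directly into a small state machine; a toy software analogue (the 24-bit code value below is made up):

    ```python
    class CodedLock:
        """Toy analogue of the CMG discriminator: one wrong input bit
        latches the device into a permanently jammed state."""

        def __init__(self, code_bits):
            self.code = code_bits
            self.pos = 0
            self.jammed = False

        def input_bit(self, bit):
            if self.jammed or self.pos >= len(self.code):
                return
            if bit == self.code[self.pos]:
                self.pos += 1        # discrimination teeth pass without interference
            else:
                self.jammed = True   # teeth collide: device locks up immediately

        @property
        def unlocked(self):
            return not self.jammed and self.pos == len(self.code)

    secret = [int(b) for b in "101101001110010110100111"]   # hypothetical 24-bit code
    lock = CodedLock(secret)
    for b in secret:
        lock.input_bit(b)
    print(lock.unlocked)   # True; any single wrong bit would jam the lock instead
    ```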

  20. Sandia National Laboratories analysis code data base

    NASA Astrophysics Data System (ADS)

    Peterson, C. W.

    1994-11-01

    Sandia National Laboratories' mission is to solve important problems in the areas of national defense, energy security, environmental integrity, and industrial technology. The laboratories' strategy for accomplishing this mission is to conduct research to provide an understanding of the important physical phenomena underlying any problem, and then to construct validated computational models of the phenomena which can be used as tools to solve the problem. In the course of implementing this strategy, Sandia's technical staff has produced a wide variety of numerical problem-solving tools which they use regularly in the design, analysis, performance prediction, and optimization of Sandia components, systems, and manufacturing processes. This report provides the relevant technical and accessibility data on the numerical codes used at Sandia, including information on the technical competency or capability area that each code addresses, code 'ownership' and release status, and references describing the physical models and numerical implementation.

  1. The NASA-LeRC wind turbine sound prediction code

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1981-01-01

    Since regular operation of the DOE/NASA MOD-1 wind turbine began in October 1979, about 10 nearby households have complained of noise from the machine. Development of the NASA-LeRC wind turbine sound prediction code began in May 1980 as part of an effort to understand and reduce the noise generated by MOD-1. Tone sound levels predicted with this code are in generally good agreement with measured data taken in the vicinity of the MOD-1 wind turbine (less than 2 rotor diameters). Comparison in the far field indicates that propagation effects due to terrain and atmospheric conditions may be amplifying the actual sound levels by about 6 dB. Parametric analysis using the code has shown that the predominant contributions to MOD-1 rotor noise are: (1) the velocity deficit in the wake of the support tower; (2) the high rotor speed; and (3) off-column operation.

  2. Advanced Code-Division Multiplexers for Superconducting Detector Arrays

    NASA Astrophysics Data System (ADS)

    Irwin, K. D.; Cho, H. M.; Doriese, W. B.; Fowler, J. W.; Hilton, G. C.; Niemack, M. D.; Reintsema, C. D.; Schmidt, D. R.; Ullom, J. N.; Vale, L. R.

    2012-06-01

    Multiplexers based on the modulation of superconducting quantum interference devices are now regularly used in multi-kilopixel arrays of superconducting detectors for astrophysics, cosmology, and materials analysis. Over the next decade, much larger arrays will be needed. These larger arrays require new modulation techniques and compact multiplexer elements that fit within each pixel. We present a new in-focal-plane code-division multiplexer that provides multiplexing elements with the required scalability. This code-division multiplexer uses compact lithographic modulation elements that simultaneously multiplex both signal outputs and superconducting transition-edge sensor (TES) detector bias voltages. It eliminates the shunt resistor used to voltage bias TES detectors, greatly reduces power dissipation, allows different dc bias voltages for each TES, and makes all elements sufficiently compact to fit inside the detector pixel area. These in-focal plane code-division multiplexers can be combined with multi-GHz readout based on superconducting microresonators to scale to even larger arrays.
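
    Stripped of the SQUID hardware, code-division multiplexing is a few lines of linear algebra: each detector is modulated by a mutually orthogonal +/-1 code, the modulated signals share one line, and correlating with each code recovers the individual detectors. A numpy sketch under those generic assumptions (not the in-focal-plane implementation):

    ```python
    import numpy as np
    from scipy.linalg import hadamard

    n = 4                                        # detectors sharing one readout line
    W = hadamard(n)                              # orthogonal +/-1 Walsh codes, one row per detector
    signals = np.array([0.5, -1.0, 2.0, 0.25])   # instantaneous detector values

    # Modulate: during chip t the shared line carries sum_i W[i, t] * signals[i].
    line = W.T @ signals

    # Demodulate: correlate the line with each code; W @ W.T = n * I.
    recovered = (W @ line) / n
    print(recovered)                             # [ 0.5  -1.    2.    0.25]
    ```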

  3. Sparse coded image super-resolution using K-SVD trained dictionary based on regularized orthogonal matching pursuit.

    PubMed

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2015-01-01

    Image super-resolution (SR) plays a vital role in medical imaging, allowing a more efficient and effective diagnosis process. Usually, diagnosis is difficult and inaccurate from low-resolution (LR) and noisy images. Resolution enhancement through conventional interpolation methods strongly affects the precision of consequent processing steps, such as segmentation and registration. Therefore, we propose an efficient sparse coded image SR reconstruction technique using a trained dictionary. We apply a simple and efficient regularized version of orthogonal matching pursuit (ROMP) to seek the coefficients of the sparse representation. ROMP has the transparency and greediness of OMP and the robustness of L1-minimization, which enhance the dictionary learning process to capture feature descriptors such as oriented edges and contours from complex images like brain MRIs. The sparse coding part of the K-SVD dictionary training procedure is modified by substituting ROMP for OMP. The dictionary update stage allows simultaneously updating an arbitrary number of atoms and vectors of sparse coefficients. In SR reconstruction, ROMP is used to determine the vector of sparse coefficients for the underlying patch. The recovered representations are then applied to the trained dictionary, and finally, an optimization leads to high-quality, high-resolution output. Experimental results demonstrate that the super-resolution reconstruction quality of the proposed scheme is comparatively better than that of other state-of-the-art schemes.
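
    The K-SVD update stage mentioned above has a compact core: for each atom in turn, collect the signals whose sparse codes use it, form the residual with that atom's contribution removed, and replace atom and coefficients with the residual's leading singular pair. A minimal numpy sketch of one such atom update (shapes and variable names are illustrative):

    ```python
    import numpy as np

    def ksvd_atom_update(D, X, Y, k):
        """Update dictionary atom k (a column of D) and its coefficient row
        X[k, :] so that D @ X better approximates the signals Y (one per column)."""
        users = np.nonzero(X[k, :])[0]     # signals whose sparse code uses atom k
        if users.size == 0:
            return D, X
        # Residual of those signals with atom k's own contribution added back.
        E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]                  # new unit-norm atom: leading left singular vector
        X[k, users] = s[0] * Vt[0, :]      # matching coefficients for the signals that use it
        return D, X
    ```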

  4. Information-Theoretic Properties of Auditory Sequences Dynamically Influence Expectation and Memory

    ERIC Educational Resources Information Center

    Agres, Kat; Abdallah, Samer; Pearce, Marcus

    2018-01-01

    A basic function of cognition is to detect regularities in sensory input to facilitate the prediction and recognition of future events. It has been proposed that these implicit expectations arise from an internal predictive coding model, based on knowledge acquired through processes such as statistical learning, but it is unclear how different…

  5. Investigating Argumentation in Reading Groups: Combining Manual Qualitative Coding and Automated Corpus Analysis Tools

    ERIC Educational Resources Information Center

    O'Halloran, Kieran

    2011-01-01

    This article makes a contribution to understanding informal argumentation by focusing on the discourse of reading groups. Reading groups, an important cultural phenomenon in Britain and other countries, regularly meet in members' houses, in pubs or restaurants, in bookshops, workplaces, schools or prisons to share their experiences of reading…

  6. 76 FR 47647 - Proposed Collection; Comment Request for Regulation Project

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-05

    ... Code allows an individual engaged in a farming business to elect to reduce his or her regular tax liability by treating all or a portion of the current year's farming income as if it had been earned in... of automated collection techniques or other forms of information technology; and (e) estimates of...

  7. 76 FR 12369 - Notice of Lodging of Consent Decree Under the Clean Water Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-07

    ..., creating a database to track information relevant to compliance efforts, conducting regular internal and... notice. Comments should be addressed to the Assistant Attorney General, Environment and Natural Resources..., Environment & Natural Resources Division. [FR Doc. 2011-5017 Filed 3-4-11; 8:45 am] BILLING CODE 4410-15-P ...

  8. 5 CFR 532.509 - Pay for Sunday work.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Pay for Sunday work. 532.509 Section 532... SYSTEMS Premium Pay and Differentials § 532.509 Pay for Sunday work. A wage employee whose regular work... entitled to additional pay under the provisions of section 5544 of title 5, United States Code. [46 FR...

  9. Categories for Observing Language Arts Instruction (COLAI).

    ERIC Educational Resources Information Center

    Benterud, Julianna G.

    Designed to study individual use of time spent in reading during regularly scheduled language arts instruction in a natural classroom setting, this coding sheet consists of nine categories: (1) engagement, (2) area of language arts, (3) instructional setting, (4) partner (teacher or pupil(s)), (5) source of content, (6) type of unit, (7) assigned…

  10. Cognitive and metacognitive activity in mathematical problem solving: prefrontal and parietal patterns.

    PubMed

    Anderson, John R; Betts, Shawn; Ferris, Jennifer L; Fincham, Jon M

    2011-03-01

    Students were taught an algorithm for solving a new class of mathematical problems. Occasionally in the sequence of problems, they encountered exception problems that required that they extend the algorithm. Regular and exception problems were associated with different patterns of brain activation. Some regions showed a Cognitive pattern of being active only until the problem was solved and no difference between regular or exception problems. Other regions showed a Metacognitive pattern of greater activity for exception problems and activity that extended into the post-solution period, particularly when an error was made. The Cognitive regions included some of the parietal and prefrontal regions associated with the triple-code theory (Dehaene, S., Piazza, M., Pinel, P., & Cohen, L. (2003). Three parietal circuits for number processing. Cognitive Neuropsychology, 20, 487-506) and with algebra equation solving in the ACT-R theory (Anderson, J. R. (2005). Human symbol manipulation within an integrated cognitive architecture. Cognitive Science, 29, 313-342). Metacognitive regions included the superior prefrontal gyrus, the angular gyrus of the triple-code theory, and frontopolar regions.

  11. Spatially adaptive bases in wavelet-based coding of semi-regular meshes

    NASA Astrophysics Data System (ADS)

    Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter

    2010-05-01

    In this paper we present a wavelet-based coding approach for semi-regular meshes, which spatially adapts the employed wavelet basis in the wavelet transformation of the mesh. The spatially-adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow the reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion optimal manner by using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially-adaptive wavelet transform significantly decreases the energy of the wavelet coefficients for all subbands. Preliminary results show also that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit-rates. For the Venus and Rabbit test models the compression improvements add up to 1.47 dB and 0.95 dB, respectively.
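
    The rate-distortion optimal predictor choice reduces, per region, to minimizing the Lagrangian cost J = D + lambda * R over the candidate predictors. A compact sketch of that selection loop (region names, predictors, and their costs are invented placeholders):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Predictor:
        name: str
        distortion: dict   # region -> squared error when this predictor is used
        rate: dict         # region -> bits for side information and coefficients

    def select_predictors(regions, predictors, lam):
        """Per region, pick the predictor minimizing J = D + lam * R."""
        return {r: min(predictors, key=lambda p: p.distortion[r] + lam * p.rate[r])
                for r in regions}

    regions = ["head", "torso"]   # stand-ins for the mesh segmentation
    butterfly = Predictor("butterfly", {"head": 4.0, "torso": 1.0}, {"head": 10, "torso": 12})
    average   = Predictor("average",   {"head": 2.5, "torso": 3.0}, {"head": 11, "torso": 11})

    best = select_predictors(regions, [butterfly, average], lam=0.5)
    print({r: p.name for r, p in best.items()})   # {'head': 'average', 'torso': 'butterfly'}
    ```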

  12. A framework for employing femtosatellites in planetary science missions, including a proposed mission concept for Titan

    NASA Astrophysics Data System (ADS)

    Perez, Tracie Renea Conn

    Over the past 15 years, there has been a growing interest in femtosatellites, a class of tiny satellites having mass less than 100 grams. Research groups from Peru, Spain, England, Canada, and the United States have proposed femtosat designs and novel mission concepts for them. In fact, Peru made history in 2013 by releasing the first - and still only - femtosat tracked from LEO. However, femtosatellite applications in interplanetary missions have yet to be explored in detail. An interesting operations concept would be for a space probe to release numerous femtosatellites into orbit around a planetary object of interest, thereby augmenting the overall data collection capability of the mission. A planetary probe releasing hundreds of femtosats could complete an in-situ, simultaneous 3D mapping of a physical property of interest, achieving scientific investigations not possible for one probe operating alone. To study the technical challenges associated with such a mission, a conceptual mission design is proposed where femtosats are deployed from a host satellite orbiting Titan. The conceptual mission objective is presented: to study Titan's dynamic atmosphere. Then, the design challenges are addressed in turn. First, any science payload measurements that the femtosats provide are only useful if their corresponding locations can be determined. Specifically, what's required is a method of position determination for femtosatellites operating beyond Medium Earth Orbit and therefore beyond the help of GPS. A technique is presented which applies Kalman filter techniques to Doppler shift measurements, allowing for orbit determination of the femtosats. Several case studies are presented demonstrating the usefulness of this approach. Second, due to the inherent power and computational limitations of a femtosatellite design, establishing a radio link between each chipsat and the mothersat will be difficult. To provide coding gain, a particular form of forward error correction (FEC) known as low-density parity-check (LDPC) codes is recommended. A specific low-complexity encoder, and an accompanying decoder, have been implemented in the open-source software radio library, GNU Radio. Simulation results demonstrating bit error rate (BER) improvement are presented. Hardware for implementing the LDPC methods in a benchtop test is described and future work on this topic is suggested. Third, the power and spatial constraints of femtosatellite designs likely restrict the payload to one or two sensors. Therefore, it is desirable to extract as much useful scientific data as possible from secondary sources, such as radiometric data. Estimating the atmospheric density model from different measurement sources is simulated and results are presented. The overall goal for this effort is to advance the field of miniature spacecraft-based technology and to highlight the advantages of using femtosatellites in future planetary exploration missions. By addressing several subsystem design challenges in this context, such a femtosat mission concept is one step closer to being feasible.
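
    To give a flavour of the FEC step (a toy stand-in, not the GNU Radio LDPC implementation described above), the sketch below decodes a tiny (2,3)-regular parity-check code with Gallager-style bit flipping: compute the syndrome, flip the bit sitting in the most unsatisfied checks, repeat.

    ```python
    import numpy as np

    # A tiny (2,3)-regular parity-check matrix: every bit is in 2 checks and
    # every check covers 3 bits -- the same structure LDPC codes use at scale.
    H = np.array([[1, 1, 1, 0, 0, 0],
                  [1, 0, 0, 1, 1, 0],
                  [0, 1, 0, 1, 0, 1],
                  [0, 0, 1, 0, 1, 1]])

    def bit_flip_decode(r, H, max_iters=20):
        """Gallager bit flipping: flip the bit that participates in the most
        unsatisfied parity checks until the syndrome is zero."""
        r = r.copy()
        for _ in range(max_iters):
            syndrome = H @ r % 2
            if not syndrome.any():
                break                            # all parity checks satisfied
            r[np.argmax(H.T @ syndrome)] ^= 1    # flip the most-suspect bit
        return r

    rng = np.random.default_rng(7)
    codeword = np.zeros(6, dtype=int)    # the all-zero word belongs to every linear code
    received = codeword.copy()
    received[rng.integers(6)] ^= 1       # inject a single bit error
    print(bit_flip_decode(received, H))  # -> [0 0 0 0 0 0], the error is corrected
    ```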

  13. Regularization of the double period method for experimental data processing

    NASA Astrophysics Data System (ADS)

    Belov, A. A.; Kalitkin, N. N.

    2017-11-01

    In physical and technical applications, an important task is to process experimental curves measured with large errors. Such problems are solved by applying regularization methods, in which success depends on the mathematician's intuition. We propose an approximation based on the double period method developed for smooth nonperiodic functions. Tikhonov's stabilizer with a squared second derivative is used for regularization. As a result, the spurious oscillations are suppressed and the shape of an experimental curve is accurately represented. This approach offers a universal strategy for solving a broad class of problems. The method is illustrated by approximating cross sections of nuclear reactions important for controlled thermonuclear fusion. Tables recommended as reference data are obtained. These results are used to calculate the reaction rates, which are approximated in a way convenient for gasdynamic codes. These approximations are superior to previously known formulas in the covered temperature range and accuracy.
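
    The stabilizer itself has a compact linear-algebra form: minimizing ||f - y||^2 + lam * ||D2 f||^2 over the smoothed curve f, with D2 the discrete second-difference operator, gives the normal equations (I + lam * D2^T D2) f = y. A numpy sketch on made-up noisy data:

    ```python
    import numpy as np

    def tikhonov_smooth(y, lam):
        """Solve min_f ||f - y||^2 + lam * ||D2 f||^2 exactly via the normal
        equations; D2 is the discrete second-difference (curvature) operator."""
        n = len(y)
        D2 = np.diff(np.eye(n), n=2, axis=0)                 # (n-2) x n matrix
        return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

    rng = np.random.default_rng(2)
    x = np.linspace(0.0, 1.0, 200)
    truth = np.exp(-5.0 * x)                                 # smooth reference curve
    noisy = truth + 0.05 * rng.standard_normal(x.size)       # "experimental" errors
    smooth = tikhonov_smooth(noisy, lam=50.0)
    print(np.abs(noisy - truth).mean(), np.abs(smooth - truth).mean())
    ```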

  14. Structure-Based Low-Rank Model With Graph Nuclear Norm Regularization for Noise Removal.

    PubMed

    Ge, Qi; Jing, Xiao-Yuan; Wu, Fei; Wei, Zhi-Hui; Xiao, Liang; Shao, Wen-Ze; Yue, Dong; Li, Hai-Bo

    2017-07-01

    Nonlocal image representation methods, including group-based sparse coding and block-matching 3-D filtering, have shown their great performance in application to low-level tasks. The nonlocal prior is extracted from each group consisting of patches with similar intensities. Grouping patches based on intensity similarity, however, gives rise to disturbance and inaccuracy in estimation of the true images. To address this problem, we propose a structure-based low-rank model with graph nuclear norm regularization. We exploit the local manifold structure inside a patch and group the patches by the distance metric of manifold structure. With the manifold structure information, a graph nuclear norm regularization is established and incorporated into a low-rank approximation model. We then prove that the graph-based regularization is equivalent to a weighted nuclear norm and the proposed model can be solved by a weighted singular-value thresholding algorithm. Extensive experiments on additive white Gaussian noise removal and mixed noise removal demonstrate that the proposed method achieves a better performance than several state-of-the-art algorithms.
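
    The solver's core step, weighted singular-value thresholding, is short enough to show directly: shrink each singular value by its own weight and rebuild the low-rank estimate. A numpy sketch with illustrative data and weights:

    ```python
    import numpy as np

    def weighted_svt(M, weights):
        """Soft-threshold the i-th singular value of M by weights[i] and
        reconstruct, giving a (weighted) low-rank estimate of M."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U @ np.diag(np.maximum(s - weights, 0.0)) @ Vt

    rng = np.random.default_rng(3)
    patch_group = np.outer(rng.standard_normal(20), rng.standard_normal(20))  # rank-1 structure
    noisy = patch_group + 0.3 * rng.standard_normal((20, 20))
    estimate = weighted_svt(noisy, weights=np.full(20, 3.0))
    print(np.linalg.matrix_rank(estimate, tol=1e-6))   # typically 1: noise components removed
    ```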

  15. “Do your homework…and then hope for the best”: the challenges that medical tourism poses to Canadian family physicians’ support of patients’ informed decision-making

    PubMed Central

    2013-01-01

    Background Medical tourism—the practice where patients travel internationally to privately access medical care—may limit patients’ regular physicians’ abilities to contribute to the informed decision-making process. We address this issue by examining ways in which Canadian family doctors’ typical involvement in patients’ informed decision-making is challenged when their patients engage in medical tourism. Methods Focus groups were held with family physicians practicing in British Columbia, Canada. After receiving ethics approval, letters of invitation were faxed to family physicians in six cities. 22 physicians agreed to participate and focus groups ranged from two to six participants. Questions explored participants’ perceptions of and experiences with medical tourism. A coding scheme was created using inductive and deductive codes that captured issues central to analytic themes identified by the investigators. Extracts of the coded data that dealt with informed decision-making were shared among the investigators in order to identify themes. Four themes were identified, all of which dealt with the challenges that medical tourism poses to family physicians’ abilities to support medical tourists’ informed decision-making. Findings relevant to each theme were contrasted against the existing medical tourism literature so as to assist in understanding their significance. Results Four key challenges were identified: 1) confusion and tensions related to the regular domestic physician’s role in decision-making; 2) tendency to shift responsibility related to healthcare outcomes onto the patient because of the regular domestic physician’s reduced role in shared decision-making; 3) strains on the patient-physician relationship and corresponding concern around the responsibility of the foreign physician; and 4) regular domestic physicians’ concerns that treatments sought abroad may not be based on the best available medical evidence on treatment efficacy. Conclusions Medical tourism is creating new challenges for Canadian family physicians who now find themselves needing to carefully negotiate their roles and responsibilities in the informed decision-making process of their patients who decide to seek private treatment abroad as medical tourists. These physicians can and should be educated to enable their patients to look critically at the information available about medical tourism providers and to ask critical questions of patients deciding to access care abroad. PMID:24053385

  16. "Do your homework…and then hope for the best": the challenges that medical tourism poses to Canadian family physicians' support of patients' informed decision-making.

    PubMed

    Snyder, Jeremy; Crooks, Valorie A; Johnston, Rory; Dharamsi, Shafik

    2013-09-22

    Medical tourism-the practice where patients travel internationally to privately access medical care-may limit patients' regular physicians' abilities to contribute to the informed decision-making process. We address this issue by examining ways in which Canadian family doctors' typical involvement in patients' informed decision-making is challenged when their patients engage in medical tourism. Focus groups were held with family physicians practicing in British Columbia, Canada. After receiving ethics approval, letters of invitation were faxed to family physicians in six cities. 22 physicians agreed to participate and focus groups ranged from two to six participants. Questions explored participants' perceptions of and experiences with medical tourism. A coding scheme was created using inductive and deductive codes that captured issues central to analytic themes identified by the investigators. Extracts of the coded data that dealt with informed decision-making were shared among the investigators in order to identify themes. Four themes were identified, all of which dealt with the challenges that medical tourism poses to family physicians' abilities to support medical tourists' informed decision-making. Findings relevant to each theme were contrasted against the existing medical tourism literature so as to assist in understanding their significance. Four key challenges were identified: 1) confusion and tensions related to the regular domestic physician's role in decision-making; 2) tendency to shift responsibility related to healthcare outcomes onto the patient because of the regular domestic physician's reduced role in shared decision-making; 3) strains on the patient-physician relationship and corresponding concern around the responsibility of the foreign physician; and 4) regular domestic physicians' concerns that treatments sought abroad may not be based on the best available medical evidence on treatment efficacy. Medical tourism is creating new challenges for Canadian family physicians who now find themselves needing to carefully negotiate their roles and responsibilities in the informed decision-making process of their patients who decide to seek private treatment abroad as medical tourists. These physicians can and should be educated to enable their patients to look critically at the information available about medical tourism providers and to ask critical questions of patients deciding to access care abroad.

  17. Clusters in irregular areas and lattices.

    PubMed

    Wieczorek, William F; Delmerico, Alan M; Rogerson, Peter A; Wong, David W S

    2012-01-01

    Geographic areas of different sizes and shapes of polygons that represent counts or rate data are often encountered in social, economic, health, and other information. Often political or census boundaries are used to define these areas because the information is available only for those geographies. Therefore, these types of boundaries are frequently used to define neighborhoods in spatial analyses using geographic information systems and related approaches such as multilevel models. When point data can be geocoded, it is possible to examine the impact of polygon shape on spatial statistical properties, such as clustering. We utilized point data (alcohol outlets) to examine the issue of polygon shape and size on visualization and statistical properties. The point data were allocated to regular lattices (hexagons and squares) and census areas for zip-code tabulation areas and tracts. The number of units in the lattices was set to be similar to the number of tract and zip-code areas. A spatial clustering statistic and visualization were used to assess the impact of polygon shape for zip- and tract-sized units. Results showed substantial similarities and notable differences across shape and size. The specific circumstances of a spatial analysis that aggregates points to polygons will determine the size and shape of the areal units to be used. The irregular polygons of census units may reflect underlying characteristics that could be missed by large regular lattices. Future research to examine the potential for using a combination of irregular polygons and regular lattices would be useful.

  18. Information analysis of posterior canal afferents in the turtle, Trachemys scripta elegans.

    PubMed

    Rowe, Michael H; Neiman, Alexander B

    2012-01-24

    We have used sinusoidal and band-limited Gaussian noise stimuli along with information measures to characterize the linear and non-linear responses of morpho-physiologically identified posterior canal (PC) afferents and to examine the relationship between mutual information rate and other physiological parameters. Our major findings are: 1) spike generation in most PC afferents is effectively a stochastic renewal process, and spontaneous discharges are fully characterized by their first order statistics; 2) a regular discharge, as measured by normalized coefficient of variation (cv*), reduces intrinsic noise in afferent discharges at frequencies below the mean firing rate; 3) coherence and mutual information rates, calculated from responses to band-limited Gaussian noise, are jointly determined by gain and intrinsic noise (discharge regularity), the two major determinants of signal to noise ratio in the afferent response; 4) measures of optimal non-linear encoding were only moderately greater than optimal linear encoding, indicating that linear stimulus encoding is limited primarily by internal noise rather than by non-linearities; and 5) a leaky integrate and fire model reproduces these results and supports the suggestion that the combination of high discharge regularity and high discharge rates serves to extend the linear encoding range of afferents to higher frequencies. These results provide a framework for future assessments of afferent encoding of signals generated during natural head movements and for comparison with coding strategies used by other sensory systems. This article is part of a Special Issue entitled: Neural Coding. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. Decoding the encoding of functional brain networks: An fMRI classification comparison of non-negative matrix factorization (NMF), independent component analysis (ICA), and sparse coding algorithms.

    PubMed

    Xie, Jianwen; Douglas, Pamela K; Wu, Ying Nian; Brody, Arthur L; Anderson, Ariana E

    2017-04-15

    Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet other mathematical constraints provide alternate biologically-plausible frameworks for generating brain networks. Non-negative matrix factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms (L1 Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity to encode task-related brain networks are compared; the resulting brain networks within scan for different constraints are used as basis functions to encode observed functional activity. These encodings are then decoded using machine learning, by using the time series weights to predict within scan whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. The sparse coding algorithm of L1 Regularized Learning outperformed 4 variations of ICA (p<0.001) for predicting the task being performed within each scan using artifact-cleaned components. The NMF algorithms, which suppressed negative BOLD signal, had the poorest accuracy compared to the ICA and sparse coding algorithms. Holding constant the effect of the extraction algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy (p<0.001). Lower classification accuracy occurred when the extracted spatial maps contained more CSF regions (p<0.001). The success of sparse coding algorithms suggests that algorithms which enforce sparsity, discourage multitasking, and promote local specialization may capture better the underlying source processes than those which allow inexhaustible local processes such as ICA. Negative BOLD signal may capture task-related activations. Copyright © 2017 Elsevier B.V. All rights reserved.
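
    The comparison can be reproduced in miniature with off-the-shelf decompositions; the sketch below runs sklearn's FastICA, NMF, and sparse DictionaryLearning on synthetic data standing in for (time x voxel) fMRI matrices, with component counts chosen arbitrarily:

    ```python
    import numpy as np
    from sklearn.decomposition import DictionaryLearning, FastICA, NMF

    rng = np.random.default_rng(4)
    X = rng.random((100, 50))   # nonnegative stand-in for (time points x voxels) data

    models = {
        "ICA":    FastICA(n_components=5, random_state=0),
        "NMF":    NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0),
        "sparse": DictionaryLearning(n_components=5, alpha=1.0, random_state=0),
    }
    for name, model in models.items():
        weights = model.fit_transform(X)   # per-scan time-series "encodings"
        maps = model.components_           # spatial networks (one row per component)
        frac_zero = float((np.abs(maps) < 1e-3).mean())
        print(f"{name}: weights {weights.shape}, near-zero voxel fraction {frac_zero:.2f}")
    ```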

  20. Joint Inversion of Body-Wave Arrival Times and Surface-Wave Dispersion Data in the Wavelet Domain Constrained by Sparsity Regularization

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Fang, H.; Yao, H.; Maceira, M.; van der Hilst, R. D.

    2014-12-01

    Recently, Zhang et al. (2014, Pure and Applied Geophysics) have developed a joint inversion code incorporating body-wave arrival times and surface-wave dispersion data. The joint inversion code was based on the regional-scale version of the double-difference tomography algorithm tomoDD. The surface-wave inversion part uses the propagator matrix solver in the algorithm DISPER80 (Saito, 1988) for forward calculation of dispersion curves from layered velocity models and the related sensitivities. The application of the joint inversion code to the SAFOD site in central California shows that the fault structure is better imaged in the new model, which is able to fit both the body-wave and surface-wave observations adequately. Here we present a new joint inversion method that solves the model in the wavelet domain constrained by sparsity regularization. Compared to the previous method, it has the following advantages: (1) The method is both data- and model-adaptive. The velocity model can be represented by different wavelet coefficients at different scales, which are generally sparse. By constraining the model wavelet coefficients to be sparse, the inversion in the wavelet domain can inherently adapt to the data distribution, so that the model has higher spatial resolution in zones of good data coverage. Fang and Zhang (2014, Geophysical Journal International) have shown the superior performance of the wavelet-based double-difference seismic tomography method compared to the conventional method. (2) For the surface wave inversion, the joint inversion code takes advantage of the recent development of direct inversion of surface wave dispersion data for 3-D variations of shear wave velocity without the intermediate step of phase or group velocity maps (Fang et al., 2014, Geophysical Journal International). A fast marching method is used to compute, at each period, surface wave traveltimes and ray paths between sources and receivers. We will test the new joint inversion code at the SAFOD site to compare its performance against the previous code. We will also select another fault zone, such as the San Jacinto Fault Zone, to better image its structure.
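
    Between iterations, the sparsity constraint amounts to soft-thresholding the model's wavelet coefficients; a minimal 1-D sketch with PyWavelets (the sharp-interface "model" and the threshold value are illustrative assumptions):

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(5)
    x = np.linspace(0.0, 1.0, 256)
    model = np.where(x < 0.4, 1.0, 2.0)                  # sharp interface, smooth elsewhere
    rough = model + 0.2 * rng.standard_normal(x.size)    # noisy model update

    # Sparsity regularization: transform, soft-threshold the details, transform back.
    coeffs = pywt.wavedec(rough, "db4", level=4)
    coeffs[1:] = [pywt.threshold(c, value=0.3, mode="soft") for c in coeffs[1:]]
    regularized = pywt.waverec(coeffs, "db4")[: x.size]
    print(np.abs(rough - model).mean(), np.abs(regularized - model).mean())
    ```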

  1. 25 CFR 11.1206 - Obtaining a regular (non-emergency) order of protection.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Child Protection and Domestic Violence Procedures § 11... act of domestic violence occurred, the court may issue an order of protection. The order must meet the... committed the act of domestic violence to refrain from acts or threats of violence against the petitioner or...

  2. 78 FR 20670 - Office of the Director, National Institutes of Health, Notice of Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-05

    ..., Room 151. The meeting will include a discussion of policies and procedures that apply to the regular... attendance limited to space available. Individuals who plan to attend and need special assistance, such as..., Office of Federal Advisory Committee Policy. [FR Doc. 2013-07906 Filed 4-4-13; 8:45 am] BILLING CODE 4140...

  3. Can Mismatch Negativity Be Linked to Synaptic Processes? A Glutamatergic Approach to Deviance Detection

    ERIC Educational Resources Information Center

    Strelnikov, Kuzma

    2007-01-01

    This article aims to provide a theoretical framework to elucidate the neurophysiological underpinnings of deviance detection as reflected by mismatch negativity. A six-step model of the information processing necessary for deviance detection is proposed. In this model, predictive coding of learned regularities is realized by means of long-term…

  4. 25 CFR 11.1206 - Obtaining a regular (non-emergency) order of protection.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Child Protection and Domestic Violence Procedures § 11... act of domestic violence occurred, the court may issue an order of protection. The order must meet the... committed the act of domestic violence to refrain from acts or threats of violence against the petitioner or...

  5. Unraveling fabrication and calibration of wearable gas monitor for use under free-living conditions.

    PubMed

    Yue Deng; Cheng Chen; Tsow, Francis; Xiaojun Xian; Forzani, Erica

    2016-08-01

    Volatile organic compounds (VOCs) are organic chemicals with high vapor pressure at ordinary conditions. Some VOCs can be dangerous to human health, so it is important to determine real-time indoor and outdoor personal exposure to VOCs. To achieve this goal, our group has developed a wearable gas monitor with a complete sensor fabrication and calibration protocol for free-living conditions. Correction factors for calibrating the sensors, including sensitivity, aging effect, and temperature effect, are implemented in a Quick Response Code (QR code), so that the pre-calibrated quartz tuning fork (QTF) sensor can be used with the wearable monitor under free-living conditions.
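
    As a sketch of packaging calibration constants in a QR code, the snippet below encodes a small, hypothetical calibration record as JSON. The field names and values are invented, not the authors' actual format; the third-party qrcode package is assumed.

        # Encode hypothetical QTF calibration factors into a QR code image.
        import json
        import qrcode  # pip install qrcode[pil]

        calibration = {
            "sensor_id": "QTF-0042",           # hypothetical identifier
            "sensitivity_hz_per_ppm": 0.83,    # hypothetical sensitivity factor
            "aging_correction": 0.97,          # hypothetical aging correction
            "temp_coeff_per_C": -0.004,        # hypothetical temperature effect
        }
        img = qrcode.make(json.dumps(calibration))
        img.save("qtf_calibration.png")        # scanned later to configure the sensor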

  6. Benchmarking of a motion sensing system for medical imaging and radiotherapy

    NASA Astrophysics Data System (ADS)

    Barnes, Peter J.; Baldock, Clive; Meikle, Steven R.; Fulton, Roger R.

    2008-10-01

    We have tested the performance of an Optotrak Certus system, which optically tracks multiple markers, in both position and time. To do this, we have developed custom code which enables a range of testing protocols, and we make this code available to the community. We find that the Certus' positional accuracy is very high, around 20 µm at a distance of 2.8 m. In contrast, we find that its timing accuracy is typically no better than around 5-10% for typical data rates, whether one is using an ethernet connection or a dedicated SCSI link from the system to a host computer. However, with our code we are able to attach very accurate timestamps to the data frames, and in cases where regularly spaced data are not an absolute requirement, this will be more than adequate.
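
    A minimal sketch of host-side timestamping in this spirit (our invention, not the authors' released code): each incoming frame is tagged with a monotonic clock reading, so that irregular frame spacing can be measured and handled downstream.

        # Tag incoming data frames with monotonic host timestamps.
        import time

        def read_frames(source, n_frames):
            """Yield (timestamp_seconds, frame) pairs using a monotonic clock."""
            for _ in range(n_frames):
                frame = source()             # stand-in for one tracker data frame
                yield time.monotonic(), frame

        # usage with a dummy source
        frames = list(read_frames(lambda: {"x": 0.0, "y": 0.0, "z": 0.0}, 5))
        dt = [b[0] - a[0] for a, b in zip(frames, frames[1:])]
        print("inter-frame intervals (s):", dt)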

  7. A simplified procedure for correcting both errors and erasures of a Reed-Solomon code using the Euclidean algorithm

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Hsu, I. S.; Eastman, W. L.; Reed, I. S.

    1987-01-01

    It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation needed to decode a Reed-Solomon (RS) code. A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained, simultaneously and simply, by the Euclidean algorithm only. With this improved technique the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.
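
    The Euclidean step at the heart of this procedure is easy to exhibit in software. The sketch below (ours, not the paper's) works over a prime field GF(p) for readability, whereas practical RS decoders use GF(2^m); for errors-and-erasures, the initial condition would be replaced by the erasure locator and Forney syndrome polynomials, as the abstract describes. Extended Euclid is run on x^(2t) and a syndrome polynomial S(x) until the remainder drops below degree t, and the resulting multiplier and remainder satisfy the key equation Lambda(x)S(x) = Omega(x) mod x^(2t). The example S(x) is arbitrary.

        # Extended Euclidean (Sugiyama) solution of the RS key equation over GF(p).
        # Polynomials are coefficient lists, lowest degree first.
        p, t = 929, 3                    # illustrative field size and error capability

        def trim(a):
            while len(a) > 1 and a[-1] == 0:
                a.pop()
            return a

        def add(a, b):                   # polynomial addition mod p
            n = max(len(a), len(b))
            return trim([((a[i] if i < len(a) else 0) +
                          (b[i] if i < len(b) else 0)) % p for i in range(n)])

        def mul(a, b):                   # polynomial multiplication mod p
            out = [0] * (len(a) + len(b) - 1)
            for i, ai in enumerate(a):
                for j, bj in enumerate(b):
                    out[i + j] = (out[i + j] + ai * bj) % p
            return trim(out)

        def divmod_poly(a, b):           # polynomial division with remainder mod p
            a = a[:]
            q = [0] * max(1, len(a) - len(b) + 1)
            inv = pow(b[-1], p - 2, p)   # inverse of the leading coefficient
            while len(a) >= len(b) and any(a):
                d, c = len(a) - len(b), (a[-1] * inv) % p
                q[d] = c
                for i in range(len(b)):
                    a[i + d] = (a[i + d] - c * b[i]) % p
                trim(a)
            return trim(q), trim(a)

        def key_equation(S, t):
            r0, r1 = [0] * (2 * t) + [1], trim(S[:])   # r0 = x^(2t), r1 = S(x)
            t0, t1 = [0], [1]            # multipliers of S in the Euclid identity
            while len(r1) - 1 >= t:      # stop once deg(remainder) < t
                q, r = divmod_poly(r0, r1)
                r0, r1 = r1, r
                t0, t1 = t1, add(t0, [(-c) % p for c in mul(q, t1)])
            return t1, r1                # Lambda(x), Omega(x)

        S = [3, 141, 59, 26, 535, 89]    # arbitrary syndrome polynomial, degree 2t-1
        Lam, Om = key_equation(S, t)
        assert trim(mul(Lam, S)[:2 * t]) == Om   # Lambda*S = Omega (mod x^(2t))
        print("Lambda:", Lam, " Omega:", Om)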

  8. BBC users manual. [In LRLTRAN for CDC 7600 and STAR]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ltterst, R. F.; Sutcliffe, W. G.; Warshaw, S. I.

    1977-11-01

    BBC is a two-dimensional, multifluid Eulerian hydro-radiation code based on KRAKEN and some subsequent ideas. It was developed in the explosion group in T-Division as a basic two-dimensional code to which various types of physics can be added. For this reason BBC is a FORTRAN (LRLTRAN) code. In order to gain the 2-to-1 to 4-to-1 speed advantage of the STACKLIB software on the 7600's and to be able to execute at high speed on the STAR, the vector extensions of LRLTRAN (STARTRAN) are used throughout the code. Either cylindrical- or slab-type problems can be run on BBC. The grid is bounded by a rectangular band of boundary zones. The interfaces between the regular and boundary zones can be selected to be either rigid or nonrigid. The setup for BBC problems is described in the KEG Manual and LEG Manual. The difference equations are described in BBC Hydrodynamics. Basic input and output for BBC are described.

  9. Comparative testing of radiographic testing, ultrasonic testing and phased array advanced ultrasonic testing non destructive testing techniques in accordance with the AWS D1.5 bridge welding code : [summary].

    DOT National Transportation Integrated Search

    2014-02-01

    To ensure that Florida bridges remain safe and structurally secure for their 50-year-plus service life, they are inspected regularly. For steel bridges, welds critical to the bridge's integrity do not even leave the workshop unless they meet rigoro...

  10. A Guide for the Management of Special Education Programs. 1.0 Program Organization. Newday Operations Guide for Drug Dependent Minor Programs.

    ERIC Educational Resources Information Center

    Santa Cruz County Superintendent of Schools, CA.

    Presented is the first component, Program Organization, of a special day class educational program emphasizing rehabilitation, remedial instruction, and return to regular school programs for drug dependent minors. Included are statistics on drug use in California and the administrative code under which drug dependent minors are eligible for…

  11. Joint reconstruction of dynamic PET activity and kinetic parametric images using total variation constrained dictionary sparse coding

    NASA Astrophysics Data System (ADS)

    Yu, Haiqing; Chen, Shuhang; Chen, Yunmei; Liu, Huafeng

    2017-05-01

    Dynamic positron emission tomography (PET) is capable of providing both spatial and temporal information of radio tracers in vivo. In this paper, we present a novel joint estimation framework to reconstruct temporal sequences of dynamic PET images and the coefficients characterizing the system impulse response function, from which the associated parametric images of the system macro parameters for tracer kinetics can be estimated. The proposed algorithm, which combines statistical data measurement and tracer kinetic models, integrates dictionary sparse coding (DSC) into a total variation minimization-based algorithm for simultaneous reconstruction of the activity distribution and parametric map from measured emission sinograms. DSC, based on compartmental theory, provides biologically meaningful regularization, and total variation regularization is incorporated to provide edge-preserving guidance. We rely on techniques from minimization algorithms (the alternating direction method of multipliers) to first generate the estimated activity distributions with sub-optimal kinetic parameter estimates, and then recover the parametric maps given these activity estimates. These coupled iterative steps are repeated as necessary until convergence. Experiments with synthetic data, Monte Carlo generated data, and real patient data have been conducted, and the results are very promising.
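
    To convey the alternating structure in miniature: the toy sketch below (our illustration, not the paper's reconstruction code) alternates a gradient step on the data fidelity, a total variation denoising step, and a projection of each pixel's time-activity curve onto a small dictionary via sparse coding. All operators, sizes, and the dictionary are invented; scikit-image and scikit-learn are assumed available.

        # Toy alternating scheme: data fidelity + TV prior + dictionary sparse coding.
        import numpy as np
        from skimage.restoration import denoise_tv_chambolle
        from sklearn.decomposition import sparse_encode

        rng = np.random.default_rng(2)
        n_pix, n_t = 64, 12                        # 8x8 image, 12 time frames
        A = rng.random((96, n_pix))                # toy projection operator
        D = np.abs(rng.standard_normal((5, n_t)))  # 5 toy kinetic basis curves
        D /= np.linalg.norm(D, axis=1, keepdims=True)
        C_true = np.clip(rng.standard_normal((n_pix, 5)), 0, None)
        X_true = C_true @ D                        # activity built from the dictionary
        Y = A @ X_true + 0.05 * rng.standard_normal((96, n_t))

        X = np.zeros((n_pix, n_t))
        step = 1.0 / np.linalg.norm(A, 2) ** 2
        for _ in range(50):
            X -= step * (A.T @ (A @ X - Y))        # gradient step on data fidelity
            for k in range(n_t):                   # edge-preserving TV denoising
                X[:, k] = denoise_tv_chambolle(X[:, k].reshape(8, 8),
                                               weight=0.01).ravel()
            codes = sparse_encode(X, D, algorithm="omp", n_nonzero_coefs=2)
            X = codes @ D                          # kinetic (dictionary) projection
        print("relative error:",
              np.linalg.norm(X - X_true) / np.linalg.norm(X_true))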

  12. Diabetes Mellitus Coding Training for Family Practice Residents.

    PubMed

    Urse, Geraldine N

    2015-07-01

    Although physicians regularly use numeric coding systems such as the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) to describe patient encounters, coding errors are common. One of the most complicated diagnoses to code is diabetes mellitus. The ICD-9-CM currently has 39 separate codes for diabetes mellitus; this number will be expanded to more than 50 with the introduction of ICD-10-CM in October 2015. To assess the effect of a 1-hour focused presentation on ICD-9-CM codes on diabetes mellitus coding. A 1-hour focused lecture on the correct use of diabetes mellitus codes for patient visits was presented to family practice residents at Doctors Hospital Family Practice in Columbus, Ohio. To assess resident knowledge of the topic, a pretest and posttest were given to residents before and after the lecture, respectively. Medical records of all patients with diabetes mellitus who were cared for at the hospital 6 weeks before and 6 weeks after the lecture were reviewed and compared for the use of diabetes mellitus ICD-9 codes. Eighteen residents attended the lecture and completed the pretest and posttest. The mean (SD) percentage of correct answers was 72.8% (17.1%) for the pretest and 84.4% (14.6%) for the posttest, for an improvement of 11.6 percentage points (P≤.035). The percentage of total available codes used did not substantially change from before to after the lecture, but the use of the generic ICD-9-CM code for diabetes mellitus type II controlled (250.00) declined (58 of 176 [33%] to 102 of 393 [26%]) and the use of other codes increased, indicating a greater variety in codes used after the focused lecture. After a focused lecture on diabetes mellitus coding, resident coding knowledge improved. Review of medical record data did not reveal an overall change in the number of diabetic codes used after the lecture but did reveal a greater variety in the codes used.

  13. Cormack Research Project: Glasgow University

    NASA Technical Reports Server (NTRS)

    Skinner, Susan; Ryan, James M.

    1998-01-01

    The aim of this project was to investigate and improve upon existing methods of analysing data from COMPTEL on the Compton Gamma Ray Observatory for neutrons emitted during solar flares. In particular, a strategy for placing confidence intervals on neutron energy distributions, given the uncertainties in the response matrix, has been developed. We have also been able to demonstrate the superior performance of one of a range of possible statistical regularization strategies. A method of generating likely models of neutron energy distributions has also been developed as a tool to this end. The project involved solving an inverse problem with noise added to the data in various ways. To achieve this, pre-existing C code was used to run Fortran subroutines which performed statistical regularization on the data.
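
    As a generic illustration of this class of methods (a sketch of Tikhonov-style regularization, not the project's C/Fortran code), the snippet below inverts noisy data through a smoothing response matrix with a smoothness penalty; the response matrix, spectrum, and weight are all invented.

        # Regularized inversion: minimize ||R f - d||^2 + lam * ||L f||^2.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 40
        x = np.arange(n)
        R = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 3.0) ** 2)   # toy response matrix
        f_true = np.exp(-0.5 * ((x - 15) / 4.0) ** 2)               # toy neutron spectrum
        d = R @ f_true + 0.02 * rng.standard_normal(n)              # noisy observed data

        lam = 1e-2                                # regularization weight
        L = np.eye(n) - np.eye(n, k=1)            # first-difference smoothness operator
        f_hat = np.linalg.solve(R.T @ R + lam * L.T @ L, R.T @ d)   # regularized solution
        print("recovered peak near bin", int(np.argmax(f_hat)))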

  14. Swinger RNAs with sharp switches between regular transcription and transcription systematically exchanging ribonucleotides: Case studies.

    PubMed

    Seligmann, Hervé

    2015-09-01

    During RNA transcription, DNA nucleotides A, C, G, T are usually matched by ribonucleotides A, C, G and U. Occasionally, however, this rule does not apply: transcript-DNA homologies are detectable only by assuming systematic exchanges between ribonucleotides. Nine symmetric (X ↔ Y, e.g. A ↔ C) and fourteen asymmetric (X ↔ Y ↔ Z, e.g. A ↔ C ↔ G) exchanges exist, called swinger transcriptions. Putatively, polymerases occasionally stabilize in unspecified swinger conformations, possibly similar to transient conformations causing punctual misinsertions. This predicts chimeric transcripts, part regular and part swinger-transformed, reflecting polymerases switching to swinger polymerization conformation(s). Four chimeric GenBank transcripts (three from the human mitochondrion and one murine cytosolic) are described here: (a) the 5' and 3' extremities reflect regular polymerization while the intervening sequence exchanges systematically between ribonucleotides (swinger rule G ↔ U; transcript 1), with sharp switches between regular and swinger sequences; (b) the 5' half is 'normal' and the 3' half systematically exchanges ribonucleotides (swinger rule C ↔ G; transcript 2), with an intercalated sequence lacking homology; (c) the 3' extremity fits A ↔ G exchanges (10% of transcript length), the 5' half follows regular transcription, and the intervening region seems a mix of regular and A ↔ G transcriptions (transcript 3); (d) murine cytosolic transcript 4 switches to A ↔ U + C ↔ G and is fused with an A ↔ U + C ↔ G swinger-transformed precursor rRNA. In (c), the 5' and 3' extremities of each concomitant transcript match opposite genome strands. Transcripts 3 and 4 combine transcript fusions with partial swinger transcriptions. Occasional (usually sharp) switches between regular and swinger transcription reveal greater coding potential than detected until now and suggest stable polymerase swinger conformations. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
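
    The exchange rules themselves are simple symbol substitutions, as the toy snippet below illustrates on an invented sequence: one symmetric rule (G ↔ U) and one asymmetric three-nucleotide cycle.

        # Toy illustration of swinger exchange rules on an invented RNA sequence.
        regular = "AUGGCGUUACGGAU"

        swap_GU = str.maketrans("GU", "UG")       # symmetric exchange G <-> U
        print(regular.translate(swap_GU))         # AGUUCUGGACUUAG

        cycle_ACG = str.maketrans("ACG", "CGA")   # asymmetric cycle A -> C -> G -> A
        print(regular.translate(cycle_ACG))       # CUAAGAUUCGAACU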

  15. Comparison of CFD simulations with experimental data for a tanker model advancing in waves

    NASA Astrophysics Data System (ADS)

    Orihara, Hideo

    2011-03-01

    In this paper, CFD simulation results for a tanker model are compared with experimental data over a range of wave conditions to verify a capability to predict the sea-keeping performance of practical hull forms. CFD simulations are conducted using WISDAM-X code which is capable of unsteady RANS calculations in arbitrary wave conditions. Comparisons are made of unsteady surface pressures, added resistance and ship motions in regular waves for cases of fully-loaded and ballast conditions of a large tanker model. It is shown that the simulation results agree fairly well with the experimental data, and that WISDAM-X code can predict sea-keeping performance of practical hull forms.

  16. Visual communications and image processing '92; Proceedings of the Meeting, Boston, MA, Nov. 18-20, 1992

    NASA Astrophysics Data System (ADS)

    Maragos, Petros

    The topics discussed at the conference include hierarchical image coding, motion analysis, feature extraction and image restoration, video coding, and morphological and related nonlinear filtering. Attention is also given to vector quantization, morphological image processing, fractals and wavelets, architectures for image and video processing, image segmentation, biomedical image processing, and model-based analysis. Papers are presented on affine models for motion and shape recovery, filters for directly detecting surface orientation in an image, tracking of unresolved targets in infrared imagery using a projection-based method, adaptive-neighborhood image processing, and regularized multichannel restoration of color images using cross-validation. (For individual items see A93-20945 to A93-20951)

  17. Continuous integration and quality control for scientific software

    NASA Astrophysics Data System (ADS)

    Neidhardt, A.; Ettl, M.; Brisken, W.; Dassing, R.

    2013-08-01

    Modern software has to be stable, portable, fast and reliable. This is becoming more and more important for scientific software as well, but it requires a sophisticated way to inspect, check and evaluate the quality of source code with a suitable, automated infrastructure. A centralized server with a software repository and a version control system is one essential part, used to manage the code base and to control the different development versions. While each project can be compiled separately, the whole code base can also be compiled with one central “Makefile”. This is used to create automated, nightly builds. Additionally, all sources are inspected automatically with static code analysis and inspection tools, which check for well-known error situations, memory and resource leaks, performance issues, and style issues. In combination with an automatic documentation generator, it is possible to create the developer documentation directly from the code and the inline comments. All reports and generated information are presented as HTML pages on a Web server. Because this environment has increased the stability and quality of the software of the Geodetic Observatory Wettzell tremendously, it is now also available to scientific communities. One regular user is the developer group of the DiFX software correlator project.

  18. A Systolic VLSI Design of a Pipeline Reed-Solomon Decoder

    NASA Technical Reports Server (NTRS)

    Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.; Reed, I. S.

    1984-01-01

    A pipeline structure of a transform decoder similar to a systolic array was developed to decode Reed-Solomon (RS) codes. An important ingredient of this design is a modified Euclidean algorithm for computing the error locator polynomial. The computation of inverse field elements is completely avoided in this modification of Euclid's algorithm. The new decoder is regular and simple, and naturally suitable for VLSI implementation.

  19. A VLSI design of a pipeline Reed-Solomon decoder

    NASA Technical Reports Server (NTRS)

    Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.; Reed, I. S.

    1985-01-01

    A pipeline structure of a transform decoder similar to a systolic array was developed to decode Reed-Solomon (RS) codes. An important ingredient of this design is a modified Euclidean algorithm for computing the error locator polynomial. The computation of inverse field elements is completely avoided in this modification of Euclid's algorithm. The new decoder is regular and simple, and naturally suitable for VLSI implementation.

  20. Potential Uses of the Functional Account Code in Describing Job Requirements. Final Report for Period March 1974-June 1975.

    ERIC Educational Resources Information Center

    Wiley, Llewellyn N.

    A major problem in the utilization of personnel appears in the identification of skills and knowledges acquired in job assignments held in the past. Lack of regular job inventorying of Air Force personnel by individuals rather than samples makes it infeasible to use job inventories to recapture a given airman's record. The possibility of using the…

  1. SU-E-T-278: Realization of Dose Verification Tool for IMRT Plan Based On DPM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Jinfeng; Cao, Ruifen; Dai, Yumei

    Purpose: To build a Monte Carlo dose verification tool for IMRT plans by implementing an irradiation source model in the DPM code, extending the ability of DPM to calculate arbitrary incident angles and irregular, inhomogeneous fields. Methods: With the virtual source and the energy spectrum unfolded from accelerator measurement data, combined with optimized intensity maps, the dose distribution of the irradiated irregular, inhomogeneous field was calculated. The irradiation source model of the accelerator was replaced by a grid-based surface source. The contour and the intensity distribution of the surface source were optimized by the ARTS (Accurate/Advanced Radiotherapy System) optimization module based on the tumor configuration. The weight of each emitter was determined by the grid intensity, and its direction by the combination of the virtual source and the emitter's position. The photon energy spectrum unfolded from the accelerator measurement data was adjusted by compensating for the contaminating electron source. For verification, measured data and a realistic clinical IMRT plan were compared with the DPM dose calculation. Results: The regular field was verified by comparison with the measured data; the differences were acceptable (<2% inside the field, 2-3 mm in the penumbra). The dose calculation of the irregular field by DPM simulation was also compared with that of FSPB (Finite Size Pencil Beam), and the passing rate of the gamma analysis was 95.1% for a peripheral lung cancer case. The regular field and the irregular rotational field were both within the permitted error range. The computing time for regular fields was less than 2 h, and the peripheral lung cancer test took 160 min. Through parallel processing, the adapted DPM could complete the calculation of an IMRT plan within half an hour. Conclusion: The adapted, parallelized DPM code with the irradiation source model is faster than classic Monte Carlo codes. Its computational accuracy and speed satisfy clinical requirements, and it is expected to become a Monte Carlo dose verification tool for IMRT plans. Strategic Priority Research Program of the Chinese Academy of Sciences (XDA03040000); National Natural Science Foundation of China (81101132)

  2. Sensory and physicochemical evaluation of low-fat chicken mortadella with added native and modified starches.

    PubMed

    Prestes, R C; Silva, L B; Torri, A M P; Kubota, E H; Rosa, C S; Roman, S S; Kempka, A P; Demiate, I M

    2015-07-01

    The objective of this work was to evaluate the effect of adding different starches (native and modified) on the physicochemical, sensory, structural and microbiological characteristics of low-fat chicken mortadella. Two formulations containing native cassava and regular corn starch, coded CASS (5.0 % cassava starch) and CORN (5.0 % regular corn starch), and two formulations, one produced with a physically treated starch (MOD1, 2.5 % Novation 2300) and one with a chemically modified starch (MOD2, 2.5 % Thermtex), were studied. The following tests were performed: physicochemical characterization (moisture, ash, protein, starch and lipid contents, and water activity); cooling, freezing and reheating losses; texture (texture profile test); color coordinates (L*, a*, b*, C and h); microbiological evaluation; sensory evaluation (multiple comparison and preference tests); and histological evaluation (light microscopy). There was no significant difference (p > 0.05) in ash, protein, cooling loss, cohesiveness, or the preference test for the tested samples; the other evaluated parameters showed significant differences (p < 0.05). The histological study allowed for a qualitative evaluation of the relationship between the physical properties of the food and its microscopic structure. The best results were obtained for formulation MOD2 (2.5 % Thermtex). The addition of modified starch performed better than the native starch with respect to the evaluated technological parameters, mainly reheating losses, which demonstrates the good interaction of the modified starch with the structure of the product and the possibility of applying this type of starch in other functional meat products.

  3. [Quality assurance in coding expertise of hospital cases in the German DRG system. Evaluation of inter-rater reliability in MDK expertise].

    PubMed

    Huber, H; Brambrink, M; Funk, R; Rieger, M

    2012-10-01

    The purpose of this study was to evaluate differences in the G-DRG results for a hospital case between 2 independently coding MDK raters. The inter-rater reliability between the 2 raters was calculated by examining the coding of individual hospital cases. The reasons for non-agreement of the expert evaluations and suggestions to improve the process are discussed. From the expert evaluation pool of the MDK-WL, a random sample of 0.7% of the 57,375 expert evaluations was taken. Equality of distribution with the full pool was tested by the χ² test or, respectively, Fisher's exact test. For the total of 402 individual hospital cases, the G-DRG case sums were determined independently by 2 MDK experts, and the results were checked for each individual case for agreement or non-agreement. The corresponding confidence intervals with standard errors were analysed to test whether certain major diagnosis categories (MDC) were statistically significantly more affected by differing expert results than others. In 280 of the 402 tested hospital cases, the 2 MDK raters independently reached the same G-DRG result; in 122 cases the G-DRG case sums determined by the 2 raters differed (agreement 70%; CI 65.2-74.1). Different DRG results between the 2 experts occurred regularly across the entire MDC spectrum; no MDC chapter in which significant differences between the 2 raters arose could be identified. The results of our study demonstrate an almost 70% agreement in the evaluation of hospital cost accounts by 2 independently operating MDK raters. This result leaves room for improvement, and optimisation potentials can be recognised on the basis of the results. Potential for improvement was identified in regular further training, the expansion of binding internal coding recommendations, and the exchange of coding-relevant information among experts in internal forums. The presented model is in principle suitable for cross-border examinations within the MDK system, with the advantage that further trends could be uncovered by more variety and larger numbers of randomly selected cases. © Georg Thieme Verlag KG Stuttgart · New York.
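
    The headline agreement figure follows directly from the reported counts (280 of 402 identical G-DRG results); a quick sketch of the corresponding binomial confidence interval, assuming statsmodels is available:

        # Agreement proportion with a Wilson 95% confidence interval.
        from statsmodels.stats.proportion import proportion_confint

        agree, total = 280, 402
        low, high = proportion_confint(agree, total, alpha=0.05, method="wilson")
        print(f"agreement = {agree / total:.1%}, 95% CI = ({low:.1%}, {high:.1%})")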

  4. Geometric phase coded metasurface: from polarization dependent directive electromagnetic wave scattering to diffusion-like scattering.

    PubMed

    Chen, Ke; Feng, Yijun; Yang, Zhongjie; Cui, Li; Zhao, Junming; Zhu, Bo; Jiang, Tian

    2016-10-24

    Ultrathin metasurfaces comprising various sub-wavelength meta-particles offer promising advantages in controlling electromagnetic waves by spatially manipulating the wavefront characteristics across the interface. The recently proposed digital coding metasurface could even simplify the design and optimization procedures due to the digitalization of the meta-particle geometry. However, current attempts to implement digital metasurfaces still utilize several structural meta-particles to obtain certain electromagnetic responses and require time-consuming optimization, especially in multi-bit coding designs. In this regard, we present herein the use of a geometric-phase-based, single structured meta-particle with various orientations to achieve either 1-bit or multi-bit digital metasurfaces. Particular electromagnetic wave scattering patterns dependent on the incident polarization can be tailored by metasurfaces encoded with regular sequences. In contrast, polarization-insensitive diffusion-like scattering can also be achieved by digital metasurfaces encoded with randomly distributed coding sequences, leading to substantial suppression of backward scattering over a broad microwave frequency band. The proposed digital metasurfaces provide simple designs and reveal new opportunities for controlling electromagnetic wave scattering with or without polarization dependence.
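
    An illustrative numerical check of the contrast drawn here (our sketch, not the paper's simulations): the far-field array factor of a 1-bit coding metasurface is approximated by the 2-D FFT of its element phase distribution, so a random 0/π coding spreads energy into a diffusion-like pattern while a regular coding forms discrete lobes. Sizes and codings are invented.

        # Compare far-field energy concentration for regular vs. random 1-bit coding.
        import numpy as np

        rng = np.random.default_rng(4)
        N = 32                                               # 32 x 32 coding elements
        stripe = (np.indices((N, N)).sum(axis=0) // 4) % 2   # regular striped coding
        random_bits = rng.integers(0, 2, (N, N))             # random 1-bit coding

        for name, bits in [("regular", stripe), ("random", random_bits)]:
            aperture = np.exp(1j * np.pi * bits)             # element phase 0 or pi
            pattern = np.abs(np.fft.fftshift(np.fft.fft2(aperture, s=(256, 256)))) ** 2
            print(f"{name:8s} peak/total scattered energy: "
                  f"{pattern.max() / pattern.sum():.4f}")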

  5. Comparison of the thermal neutron scattering treatment in MCNP6 and GEANT4 codes

    NASA Astrophysics Data System (ADS)

    Tran, H. N.; Marchix, A.; Letourneau, A.; Darpentigny, J.; Menelle, A.; Ott, F.; Schwindling, J.; Chauvin, N.

    2018-06-01

    To ensure the reliability of simulation tools, verification and comparison should be performed regularly. This paper describes the work performed to compare the neutron transport treatment in MCNP6.1 and GEANT4-10.3 in the thermal energy range. The work focuses on the thermal neutron scattering processes for several materials that could be involved in the neutron source designs of Compact Accelerator-based Neutron Sources (CANS), such as beryllium metal, beryllium oxide, polyethylene, graphite, para-hydrogen, light water, heavy water, aluminium and iron. Both the thermal scattering law and the free gas model, taken from the evaluated data library ENDF/B-VII, were considered. It was observed that the GEANT4.10.03-patch2 version was not able to properly account for the coherent elastic process occurring in crystal lattices. This bug is corrected in this work, and the fix should be included in the next release of the code. Cross-section sampling and integral tests have been performed for both simulation codes, showing fair agreement between the two codes for most of the materials except for iron and aluminium.

  6. Simulation the spatial resolution of an X-ray imager based on zinc oxide nanowires in anodic aluminium oxide membrane by using MCNP and OPTICS Codes

    NASA Astrophysics Data System (ADS)

    Samarin, S. N.; Saramad, S.

    2018-05-01

    The spatial resolution of a detector is a very important parameter for X-ray imaging. A bulk scintillation detector, because of the spreading of light inside the scintillator, does not have good spatial resolution. Nanowire scintillators, because of their wave-guiding behavior, can prevent the spreading of light and can improve the spatial resolution of traditional scintillation detectors. The zinc oxide (ZnO) scintillator nanowire, with its simple construction by electrochemical deposition in the regular hexagonal structure of an anodic aluminium oxide membrane, has many advantages. The three-dimensional absorption of X-ray energy in the ZnO scintillator is simulated by a Monte Carlo transport code (MCNP). The transport, attenuation and scattering of the generated photons are simulated by a general-purpose scintillator light response simulation code (OPTICS). The results are compared with a previous publication which used a simulation code of the passage of particles through matter (Geant4). The results verify that this scintillator nanowire structure has a spatial resolution of less than one micrometer.

  7. Geometric phase coded metasurface: from polarization dependent directive electromagnetic wave scattering to diffusion-like scattering

    PubMed Central

    Chen, Ke; Feng, Yijun; Yang, Zhongjie; Cui, Li; Zhao, Junming; Zhu, Bo; Jiang, Tian

    2016-01-01

    Ultrathin metasurfaces comprising various sub-wavelength meta-particles offer promising advantages in controlling electromagnetic waves by spatially manipulating the wavefront characteristics across the interface. The recently proposed digital coding metasurface could even simplify the design and optimization procedures due to the digitalization of the meta-particle geometry. However, current attempts to implement digital metasurfaces still utilize several structural meta-particles to obtain certain electromagnetic responses and require time-consuming optimization, especially in multi-bit coding designs. In this regard, we present herein the use of a geometric-phase-based, single structured meta-particle with various orientations to achieve either 1-bit or multi-bit digital metasurfaces. Particular electromagnetic wave scattering patterns dependent on the incident polarization can be tailored by metasurfaces encoded with regular sequences. In contrast, polarization-insensitive diffusion-like scattering can also be achieved by digital metasurfaces encoded with randomly distributed coding sequences, leading to substantial suppression of backward scattering over a broad microwave frequency band. The proposed digital metasurfaces provide simple designs and reveal new opportunities for controlling electromagnetic wave scattering with or without polarization dependence. PMID:27775064

  8. An integrated runtime and compile-time approach for parallelizing structured and block structured applications

    NASA Technical Reports Server (NTRS)

    Agrawal, Gagan; Sussman, Alan; Saltz, Joel

    1993-01-01

    Scientific and engineering applications often involve structured meshes. These meshes may be nested (for multigrid codes) and/or irregularly coupled (called multiblock or irregularly coupled regular mesh problems). A combined runtime and compile-time approach for parallelizing these applications on distributed memory parallel machines in an efficient and machine-independent fashion is described. A runtime library which can be used to port these applications to distributed memory machines was designed and implemented; the library is currently implemented on several different systems. To further ease the task of application programmers, methods were developed for integrating this runtime library with compilers for HPF-like parallel programming languages. How this runtime library was integrated with the Fortran 90D compiler being developed at Syracuse University is discussed. Experimental results demonstrating the efficacy of our approach are presented for a multiblock Navier-Stokes solver template and a multigrid code. The results show that our primitives have low runtime communication overheads, and the compiler-parallelized codes perform within 20 percent of codes parallelized by manually inserting calls to the runtime library.

  9. Towards an Integrated QR Code Biosensor: Light-Driven Sample Acquisition and Bacterial Cellulose Paper Substrate.

    PubMed

    Yuan, Mingquan; Jiang, Qisheng; Liu, Keng-Ku; Singamaneni, Srikanth; Chakrabartty, Shantanu

    2018-06-01

    This paper addresses two key challenges toward an integrated forward error-correcting biosensor based on our previously reported self-assembled quick-response (QR) code. The first challenge involves the choice of the paper substrate for printing and self-assembling the QR code. We have compared four different substrates that includes regular printing paper, Whatman filter paper, nitrocellulose membrane and lab synthesized bacterial cellulose. We report that out of the four substrates bacterial cellulose outperforms the others in terms of probe (gold nanorods) and ink retention capability. The second challenge involves remote activation of the analyte sampling and the QR code self-assembly process. In this paper, we use light as a trigger signal and a graphite layer as a light-absorbing material. The resulting change in temperature due to infrared absorption leads to a temperature gradient that then exerts a diffusive force driving the analyte toward the regions of self-assembly. The working principle has been verified in this paper using assembled biosensor prototypes where we demonstrate higher sample flow rate due to light induced thermal gradients.

  10. Self-Powered Forward Error-Correcting Biosensor Based on Integration of Paper-Based Microfluidics and Self-Assembled Quick Response Codes.

    PubMed

    Yuan, Mingquan; Liu, Keng-Ku; Singamaneni, Srikanth; Chakrabartty, Shantanu

    2016-10-01

    This paper extends our previous work on silver-enhancement based self-assembling structures for designing reliable, self-powered biosensors with forward error correcting (FEC) capability. At the core of the proposed approach is the integration of paper-based microfluidics with quick response (QR) codes that can be optically scanned using a smart-phone. The scanned information is first decoded to obtain the location of a web-server which further processes the self-assembled QR image to determine the concentration of target analytes. The integration substrate for the proposed FEC biosensor is polyethylene and the patterning of the QR code on the substrate has been achieved using a combination of low-cost ink-jet printing and a regular ballpoint dispensing pen. A paper-based microfluidics channel has been integrated underneath the substrate for acquiring, mixing and flowing the sample to areas on the substrate where different parts of the code can self-assemble in presence of immobilized gold nanorods. In this paper we demonstrate the proof-of-concept detection using prototypes of QR encoded FEC biosensors.

  11. Inclusion of Children with Autism Spectrum Disorders: Listening and Hearing to Voices from the Grassroots.

    PubMed

    Majoko, Tawanda

    2016-04-01

    The current significantly high prevalence rates of autism spectrum disorder (ASD) coupled with the paradigm shift from exclusive to inclusive education warrants research on inclusion of children with ASD in mainstream classrooms in Zimbabwe. A qualitative methodology was used to interview 21 regular primary school teachers regarding social barriers and enablers of inclusion of 6-12 year old children with ASD in mainstream classrooms in Harare educational province of Zimbabwe. Data analysis comprised pattern coding and cross-case analysis. Social rejection, communication impairments and behavioural challenges of children with ASD interfered with inclusion in mainstream classrooms. Regular teachers' training, stakeholder collaboration and institutionalization of social support services and programmes would facilitate the inclusion of children with ASD in mainstream classrooms.

  12. India: Chronology of Recent Events

    DTIC Science & Technology

    2007-02-13

    Order Code RS21589, updated February 13, 2007. India: Chronology of Recent Events. K. Alan Kronstadt, Specialist in Asian Affairs, Foreign Affairs, Defense, and Trade Division. Summary: This report provides a reverse chronology of recent events involving India and India-U.S. relations. Sources include... India-U.S. Relations. This report will be updated regularly. 02/13/07 — Commerce Secretary Gutierrez began a two-day visit to New Delhi, where he...

  13. Visible Languages for Program Visualization

    DTIC Science & Technology

    1986-02-01

    [Fragmented excerpt. Recoverable content: table-of-contents entries ("The Presentation of Program Metadata", p. 39; "The Spatial Composition of Comments", p. 41; "The Typography of Punctuation", p. 42; "Typographic Encodings..."); and a figure note from Chapter 4, "Graphic Design of C Source Code and Comments", Section 4.3.1, "The Typography of Punctuation" (p. 41), observing that in the example the punctuation mark appears in 10 point regular Helvetica type and thus uses the same typographic parameters as...]

  14. Reprint Filing: A Profile-Based Solution

    PubMed Central

    Gass, David A.; Putnam, R. Wayne

    1983-01-01

    A reprint filing system based on practice profiles can give family physicians easy access to relevant medical information. The use of the ICHPPC classification and some supplemental categories provides a more practical coding mechanism than organ systems, textbook chapter titles or even Index Medicus subject headings. The system can be simply maintained, updated and improved, but users must regularly weed out unused information, and read widely to keep the reprints current. PMID:21283301

  15. Hospital Standardized Mortality Ratios: Sensitivity Analyses on the Impact of Coding

    PubMed Central

    Bottle, Alex; Jarman, Brian; Aylin, Paul

    2011-01-01

    Introduction Hospital standardized mortality ratios (HSMRs) are derived from administrative databases and cover 80 percent of in-hospital deaths with adjustment for available case mix variables. They have been criticized for being sensitive to issues such as clinical coding but on the basis of limited quantitative evidence. Methods In a set of sensitivity analyses, we compared regular HSMRs with HSMRs resulting from a variety of changes, such as a patient-based measure, not adjusting for comorbidity, not adjusting for palliative care, excluding unplanned zero-day stays ending in live discharge, and using more or fewer diagnoses. Results Overall, regular and variant HSMRs were highly correlated (ρ > 0.8), but differences of up to 10 points were common. Two hospitals were particularly affected when palliative care was excluded from the risk models. Excluding unplanned stays ending in same-day live discharge had the least impact despite their high frequency. The largest impacts were seen when capturing postdischarge deaths and using just five high-mortality diagnosis groups. Conclusions HSMRs in most hospitals changed by only small amounts from the various adjustment methods tried here, though small-to-medium changes were not uncommon. However, the position relative to funnel plot control limits could move in a significant minority even with modest changes in the HSMR. PMID:21790587

  16. A color-coded vision scheme for robotics

    NASA Technical Reports Server (NTRS)

    Johnson, Kelley Tina

    1991-01-01

    Most vision systems for robotic applications rely entirely on the extraction of information from gray-level images. Humans, however, regularly depend on color to discriminate between objects. Therefore, the inclusion of color in a robot vision system seems a natural extension of the existing gray-level capabilities. A method for robot object recognition using a color-coding classification scheme is discussed. The scheme is based on an algebraic system in which a two-dimensional color image is represented as a polynomial of two variables. The system is then used to find the color contour of objects. In a controlled environment, such as that of the in-orbit space station, a particular class of objects can thus be quickly recognized by its color.

  17. Genetically improved BarraCUDA.

    PubMed

    Langdon, W B; Lam, Brian Yee Hong

    2017-01-01

    BarraCUDA is an open source C program which uses the BWA algorithm in parallel with nVidia CUDA to align short next generation DNA sequences against a reference genome. Recently its source code was optimised using "Genetic Improvement". The genetically improved (GI) code is up to three times faster on short paired end reads from The 1000 Genomes Project and 60% more accurate on a short BioPlanet.com GCAT alignment benchmark. GPGPU BarraCUDA running on a single K80 Tesla GPU can align short paired end nextGen sequences up to ten times faster than bwa on a 12 core server. The speed up was such that the GI version was adopted and has been regularly downloaded from SourceForge for more than 12 months.

  18. Solving differential equations for Feynman integrals by expansions near singular points

    NASA Astrophysics Data System (ADS)

    Lee, Roman N.; Smirnov, Alexander V.; Smirnov, Vladimir A.

    2018-03-01

    We describe a strategy for solving differential equations for Feynman integrals by power series expansions near singular points and obtaining high-precision results for the corresponding master integrals. We consider Feynman integrals with two scales, i.e. non-trivially depending on one variable. The corresponding algorithm is oriented toward situations where a canonical form of the differential equations is impossible. We provide a computer code constructed with the help of our algorithm for a simple example of four-loop generalized sunset integrals with three equal non-zero masses and two zero masses. Our code gives values of the master integrals at any given point on the real axis with a required accuracy and a given order of expansion in the regularization parameter ɛ.

  19. Nada: A new code for studying self-gravitating tori around black holes

    NASA Astrophysics Data System (ADS)

    Montero, Pedro J.; Font, José A.; Shibata, Masaru

    2008-09-01

    We present a new two-dimensional numerical code called Nada designed to solve the full Einstein equations coupled to the general relativistic hydrodynamics equations. The code is mainly intended for studies of self-gravitating accretion disks (or tori) around black holes, although it is also suitable for regular spacetimes. Concerning technical aspects, the Einstein equations are formulated and solved using a reformulation of the standard 3+1 Arnowitt-Deser-Misner canonical formalism, the so-called Baumgarte-Shapiro-Shibata-Nakamura (BSSN) approach. A key feature of the code is that derivative terms in the spacetime evolution equations are computed using a fourth-order centered finite difference approximation, in conjunction with the Cartoon method to impose the axisymmetry condition under Cartesian coordinates (the choice in Nada) and the puncture/moving puncture approach to carry out black hole evolutions. Correspondingly, the general relativistic hydrodynamics equations are written in flux-conservative form and solved with high-resolution shock-capturing schemes. We perform and discuss a number of tests to assess the accuracy and expected convergence of the code, namely (single) black hole evolutions, shock tubes, evolutions of both spherical and rotating relativistic stars in equilibrium, and the gravitational collapse of a spherical relativistic star leading to the formation of a black hole. In addition, paving the way for specific applications of the code, we also present results from fully general relativistic numerical simulations of a system formed by a black hole surrounded by a self-gravitating torus in equilibrium.

  20. FPGA-accelerated algorithm for the regular expression matching system

    NASA Astrophysics Data System (ADS)

    Russek, P.; Wiatr, K.

    2015-01-01

    This article describes an algorithm to support a regular expression matching system. The goal was to achieve attractive performance with low energy consumption. The basic idea of the algorithm comes from the concept of the Bloom filter. It starts from the extraction of static sub-strings from the regular expressions. The algorithm is decomposed into parts intended to be executed by custom hardware and by the central processing unit (CPU). A pipelined custom processor architecture is proposed and the software algorithm explained accordingly. The software part of the algorithm was coded in C and runs on a processor from the ARM family. The hardware architecture was described in VHDL and implemented in a field programmable gate array (FPGA). The performance results and required resources of the above experiments are given. An example target application for the presented solution is computer and network security systems. The idea was tested on nearly 100,000 body-based viruses from the ClamAV virus database. The solution is intended for the emerging technology of clusters of low-energy computing nodes.
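
    A software-only sketch of the idea (the paper splits the work between an FPGA and the CPU; everything below, including the rules, static substrings, hashes and sizes, is invented for illustration): static substrings extracted from the regular expressions are indexed in a Bloom filter, the input is prefiltered against it, and the full regex engine runs only on Bloom hits.

        # Bloom-filter prefilter in front of an exact regular expression pass.
        import re
        from hashlib import blake2b

        RULES = [r"evil\d+payload", r"bad(stuff|things)", r"malware-[a-f0-9]{8}"]
        STATIC = ["evil", "payload", "badstuff", "badthings", "malware-"]  # by hand
        K, M, FRAG = 3, 1 << 16, 4        # hash count, filter bits, fragment length

        bits = bytearray(M // 8)

        def _hashes(s):
            for i in range(K):
                h = blake2b(s.encode(), salt=bytes([i] * 8)).digest()
                yield int.from_bytes(h[:4], "big") % M

        def bloom_add(s):
            for h in _hashes(s):
                bits[h // 8] |= 1 << (h % 8)

        def bloom_has(s):
            return all(bits[h // 8] & (1 << (h % 8)) for h in _hashes(s))

        for s in STATIC:                  # index fixed-length fragments of substrings
            for i in range(len(s) - FRAG + 1):
                bloom_add(s[i:i + FRAG])

        def scan(text):
            # cheap pass: is any fragment of the text possibly in the filter?
            if not any(bloom_has(text[i:i + FRAG])
                       for i in range(len(text) - FRAG + 1)):
                return []                 # definite miss: skip the regex engine
            return [r for r in RULES if re.search(r, text)]  # exact, expensive pass

        print(scan("xxx evil123payload yyy"))   # ['evil\\d+payload']
        print(scan("completely benign text"))   # []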

  1. [Attitudes towards the code of conduct for scientists among council members of the Japanese Society for Hygiene].

    PubMed

    Ikeda, Wakaha; Inaba, Yutaka; Yoshida, Katsumi; Takeshita, Tatsuya; Ogoshi, Kumiko; Okamoto, Kazushi

    2010-01-01

    The aim of this study was to clarify attitudes towards the code of conduct for scientists among council members of the Japanese Society for Hygiene (JSH). We also aimed to collect information to be used as baseline data for future studies. From November to December 2007, a self-administered questionnaire was sent to 439 council members of the Japanese Society for Hygiene; the valid response rate was 43.7% (n=192/439). The mean ages of the respondents were 56.2 years for males (n=171) and 53.0 years for females (n=19). Many council members were unfamiliar with the "Code of Conduct for Scientists" established by the Science Council of Japan, suggesting that most of the regular members were also unfamiliar with these guidelines. However, the high level of interest in the "Code of Conduct for Scientists" indicated a positive attitude towards learning about research ethics. Moreover, one-half of the respondents answered that JSH should establish a code of conduct for scientists. Some of the reasons given for requiring JSH to establish a code of conduct were: 1. Private information is prevalent in the field of hygiene. 2. It would establish the society's overall stance and encourage the individuality of academic societies. 3. Members have various backgrounds within the field of hygiene and should have a code of conduct different from that of their institution of affiliation. We clarified attitudes towards the Code of Conduct for Scientists among council members of the Japanese Society for Hygiene.

  2. Standard terminology and labeling of ocular tissue for transplantation.

    PubMed

    Armitage, W John; Ashford, Paul; Crow, Barbara; Dahl, Patricia; DeMatteo, Jennifer; Distler, Pat; Gopinathan, Usha; Madden, Peter W; Mannis, Mark J; Moffatt, S Louise; Ponzin, Diego; Tan, Donald

    2013-06-01

    To develop an internationally agreed terminology for describing ocular tissue grafts to improve the accuracy and reliability of information transfer, to enhance tissue traceability, and to facilitate the gathering of comparative global activity data, including denominator data for use in biovigilance analyses. ICCBBA, the international standards organization for terminology, coding, and labeling of blood, cells, and tissues, approached the major Eye Bank Associations to form an expert advisory group. The group met by regular conference calls to develop a standard terminology, which was released for public consultation and amended accordingly. The terminology uses broad definitions (Classes) with modifying characteristics (Attributes) to define each ocular tissue product. The terminology may be used within the ISBT 128 system to label tissue products with standardized bar codes enabling the electronic capture of critical data in the collection, processing, and distribution of tissues. Guidance on coding and labeling has also been developed. The development of a standard terminology for ocular tissue marks an important step for improving traceability and reducing the risk of mistakes due to transcription errors. ISBT 128 computer codes have been assigned and may now be used to label ocular tissues. Eye banks are encouraged to adopt this standard terminology and move toward full implementation of ISBT 128 nomenclature, coding, and labeling.

  3. Reply on Comment on "High resolution coherence analysis between planetary and climate oscillations" by S. Holm

    NASA Astrophysics Data System (ADS)

    Scafetta, Nicola

    2018-07-01

    Holm (ASR, 2018) claims that Scafetta (ASR 57, 2121-2135, 2016) is "irreproducible" because I would have left "undocumented" the values of two parameters (a reduced-rank index p and a regularization term δ) that he claimed were required by the Magnitude Squared Coherence Canonical Correlation Analysis (MSC-CCA). Yet, my analysis did not require these two parameters. In fact: (1) using the MSC-CCA reduced-rank option neither changes the result nor was needed, since Scafetta (2016) statistically evaluated the significance of the coherence spectral peaks; (2) the analysis algorithm neither contains nor needed the regularization term δ. Herein I show that Holm could not replicate Scafetta (2016) because he used different analysis algorithms. In fact, although Holm claimed to be using MSC-CCA, for his Figs. 2-4 he used a MatLab code labeled "gcs_cca_1D.m" (see paragraph 2 of his Section 3), which Holm also modified, that implements a different methodology known as the Generalized Coherence Spectrum using the Canonical Correlation Analysis (GCS-CCA). This code is herein demonstrated to be unreliable under specific statistical circumstances such as those required to replicate Scafetta (2016). On the contrary, the MSC-CCA method is stable and reliable. Moreover, Holm also could not replicate my result in his Fig. 5 because there he used the basic Welch MSC algorithm, erroneously equating it to MSC-CCA. Herein I clarify step-by-step how to proceed with the correct analysis, and I fully confirm the 95% significance of my results. I add data and codes to easily replicate my results.
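
    For reference, the plain Welch magnitude-squared coherence (the method at issue in Holm's Fig. 5) is readily computed with scipy; the toy series below, sharing a 60-sample oscillation, are our invention and not the climate or planetary records under discussion.

        # Welch magnitude-squared coherence between two noisy series.
        import numpy as np
        from scipy.signal import coherence

        rng = np.random.default_rng(5)
        t = np.arange(2048)
        common = np.sin(2 * np.pi * t / 60.0)            # shared 60-sample cycle
        x = common + 0.5 * rng.standard_normal(t.size)
        y = 0.8 * common + 0.5 * rng.standard_normal(t.size)

        f, Cxy = coherence(x, y, fs=1.0, nperseg=256)    # Welch MSC estimate
        i = Cxy[1:].argmax() + 1                         # skip the zero-frequency bin
        print(f"peak coherence {Cxy[i]:.2f} at period {1 / f[i]:.0f} samples")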

  4. Spinal cord injuries in Australian footballers.

    PubMed

    2003-07-01

    Acute spinal cord injury is a serious concern in football, particularly the rugby codes. This Australia-wide study covers the years 1986-1996 and data are compared with those from a previous identical study for 1960-1985. A retrospective review of 80 players with a documented acute spinal cord injury admitted to the six spinal cord injury units in Australia. Personal interview was carried out in 85% of the participants to determine the injury circumstances and the level of compensation. The severity of the neurological deficit and the functional recovery were determined (Frankel grade). The annual incidence of injuries for all codes combined did not change over the study period, but there was some decrease in rugby union and an increase in rugby league. In particular there was a significant decline in the incidence of adult rugby union injuries (P = 0.048). Scrum injuries in union have decreased subsequent to law changes in 1985, particularly in schoolboys, although ruck and maul injuries are increasing; 39% of scrum injuries occurred in players not in their regular position. Tackles were the most common cause of injury in league, with two-on-one tackles accounting for nearly half of these. Schoolboy injuries tended to mirror those in adults, but with a lower incidence. Over half of the players remain wheelchair-dependent, and 10% returned to near-normality. Six players (7.5%) died as a result of their injuries. The rugby codes must be made safer by appropriate preventative strategies and law changes. In particular, attention is necessary for tackle injuries in rugby league and players out of regular position in scrummage. Compensation for injured players is grossly inadequate. There is an urgent need to establish a national registry to analyse these injuries prospectively.

  5. Cookbook Recipe to Simulate Seawater Intrusion with Standard MODFLOW

    NASA Astrophysics Data System (ADS)

    Schaars, F.; Bakker, M.

    2012-12-01

    We developed a cookbook recipe to simulate steady interface flow in multi-layer coastal aquifers with regular groundwater codes such as standard MODFLOW. The main step in the recipe is a simple transformation of the hydraulic conductivities and thicknesses of the aquifers. Standard groundwater codes may be applied to compute the head distribution in the aquifer using the transformed parameters. For example, for flow in a single unconfined aquifer, the hydraulic conductivity needs to be multiplied by 41 and the base of the aquifer needs to be set to mean sea level (for a relative seawater density of 1.025). Once the head distribution is obtained, the Ghijben-Herzberg relationship is applied to compute the depth of the interface. The recipe may be applied to quite general settings, including spatially variable aquifer properties. Any standard groundwater code may be used, as long as it can simulate unconfined flow where the transmissivity is a linear function of the head. The proposed recipe is benchmarked successfully against a number of analytic and numerical solutions.
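
    The core of the recipe fits in a few lines; the sketch below shows the parameter transformation and the interface rule for a single unconfined aquifer, with the density ratio from the abstract and otherwise illustrative numbers.

        # Ghijben-Herzberg recipe for a single unconfined coastal aquifer.
        rho_f, rho_s = 1000.0, 1025.0            # fresh and sea water densities
        alpha = rho_f / (rho_s - rho_f)          # = 40 for relative density 1.025

        k = 10.0                                 # true hydraulic conductivity (m/d)
        k_transformed = (1 + alpha) * k          # recipe: multiply k by 41
        base_transformed = 0.0                   # recipe: aquifer base at sea level

        # After running the standard code (e.g., MODFLOW) with the transformed
        # inputs, convert each computed head h (m above sea level) to a depth:
        for h in [0.25, 0.5, 1.0]:
            z_interface = -alpha * h             # 40 m of depth per 1 m of head
            print(f"head {h:.2f} m -> interface at {z_interface:.1f} m")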

  6. RMG An Open Source Electronic Structure Code for Multi-Petaflops Calculations

    NASA Astrophysics Data System (ADS)

    Briggs, Emil; Lu, Wenchang; Hodak, Miroslav; Bernholc, Jerzy

    RMG (Real-space Multigrid) is an open source, density functional theory code for quantum simulations of materials. It solves the Kohn-Sham equations on real-space grids, which allows for natural parallelization via domain decomposition. Either subspace or Davidson diagonalization, coupled with multigrid methods, is used to accelerate convergence. RMG is a cross-platform open source package which has been used in the study of a wide range of systems, including semiconductors, biomolecules, and nanoscale electronic devices. It can optionally use GPU accelerators to improve performance on systems where they are available. The recently released versions (>2.0) support multiple GPUs per compute node, have improved performance and scalability, and offer enhanced accuracy and support for additional hardware platforms. New versions of the code are regularly released at http://www.rmgdft.org. The releases include binaries for Linux, Windows and Macintosh systems, automated builds for clusters using cmake, as well as versions adapted to the major supercomputing installations and platforms. Several recent, large-scale applications of RMG will be discussed.

  7. CosmosDG: An hp -adaptive Discontinuous Galerkin Code for Hyper-resolved Relativistic MHD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anninos, Peter; Lau, Cheuk; Bryant, Colton

    We have extended Cosmos++, a multidimensional unstructured adaptive mesh code for solving the covariant Newtonian and general relativistic radiation magnetohydrodynamic (MHD) equations, to accommodate both discrete finite volume and arbitrarily high-order finite element structures. The new finite element implementation, called CosmosDG, is based on a discontinuous Galerkin (DG) formulation, using both entropy-based artificial viscosity and slope limiting procedures for the regularization of shocks. High-order multistage forward Euler and strong-stability preserving Runge-Kutta time integration options complement high-order spatial discretization. We have also added flexibility in the code infrastructure allowing for both adaptive mesh and adaptive basis order refinement to be performed separately or simultaneously in a local (cell-by-cell) manner. We discuss in this report the DG formulation and present tests demonstrating the robustness, accuracy, and convergence of our numerical methods applied to special and general relativistic MHD, although we note that an equivalent capability currently also exists in CosmosDG for Newtonian systems.

  8. Neutron transport analysis for nuclear reactor design

    DOEpatents

    Vujic, Jasmina L.

    1993-01-01

    Replacing regular mesh-dependent ray tracing modules in a collision/transfer probability (CTP) code with a ray tracing module based upon combinatorial geometry of a modified geometrical module (GMC) provides a general geometry transfer theory code in two dimensions (2D) for analyzing nuclear reactor design and control. The primary modification of the GMC module involves generation of a fixed inner frame and a rotating outer frame, where the inner frame contains all reactor regions of interest, e.g., part of a reactor assembly, an assembly, or several assemblies, and the outer frame, with a set of parallel equidistant rays (lines) attached to it, rotates around the inner frame. The modified GMC module allows for determining for each parallel ray (line), the intersections with zone boundaries, the path length between the intersections, the total number of zones on a track, the zone and medium numbers, and the intersections with the outer surface, which parameters may be used in the CTP code to calculate collision/transfer probability and cross-section values.

  9. Neutron transport analysis for nuclear reactor design

    DOEpatents

    Vujic, J.L.

    1993-11-30

    Replacing regular mesh-dependent ray tracing modules in a collision/transfer probability (CTP) code with a ray tracing module based upon combinatorial geometry of a modified geometrical module (GMC) provides a general geometry transfer theory code in two dimensions (2D) for analyzing nuclear reactor design and control. The primary modification of the GMC module involves generation of a fixed inner frame and a rotating outer frame, where the inner frame contains all reactor regions of interest, e.g., part of a reactor assembly, an assembly, or several assemblies, and the outer frame, with a set of parallel equidistant rays (lines) attached to it, rotates around the inner frame. The modified GMC module allows for determining for each parallel ray (line), the intersections with zone boundaries, the path length between the intersections, the total number of zones on a track, the zone and medium numbers, and the intersections with the outer surface, which parameters may be used in the CTP code to calculate collision/transfer probability and cross-section values. 28 figures.

  10. CosmosDG: An hp-adaptive Discontinuous Galerkin Code for Hyper-resolved Relativistic MHD

    NASA Astrophysics Data System (ADS)

    Anninos, Peter; Bryant, Colton; Fragile, P. Chris; Holgado, A. Miguel; Lau, Cheuk; Nemergut, Daniel

    2017-08-01

    We have extended Cosmos++, a multidimensional unstructured adaptive mesh code for solving the covariant Newtonian and general relativistic radiation magnetohydrodynamic (MHD) equations, to accommodate both discrete finite volume and arbitrarily high-order finite element structures. The new finite element implementation, called CosmosDG, is based on a discontinuous Galerkin (DG) formulation, using both entropy-based artificial viscosity and slope limiting procedures for the regularization of shocks. High-order multistage forward Euler and strong-stability preserving Runge-Kutta time integration options complement high-order spatial discretization. We have also added flexibility in the code infrastructure allowing for both adaptive mesh and adaptive basis order refinement to be performed separately or simultaneously in a local (cell-by-cell) manner. We discuss in this report the DG formulation and present tests demonstrating the robustness, accuracy, and convergence of our numerical methods applied to special and general relativistic MHD, although we note that an equivalent capability currently also exists in CosmosDG for Newtonian systems.

  11. Expanding capacity and promoting inclusion in introductory computer science: a focus on near-peer mentor preparation and code review

    NASA Astrophysics Data System (ADS)

    Pon-Barry, Heather; Packard, Becky Wai-Ling; St. John, Audrey

    2017-01-01

    A dilemma within computer science departments is developing sustainable ways to expand capacity within introductory computer science courses while remaining committed to inclusive practices. Training near-peer mentors for peer code review is one solution. This paper describes the preparation of near-peer mentors for their role, with a focus on regular, consistent feedback via peer code review and inclusive pedagogy. Introductory computer science students provided consistently high ratings of the peer mentors' knowledge, approachability, and flexibility, and credited peer mentor meetings for their strengthened self-efficacy and understanding. Peer mentors noted the value of videotaped simulations with reflection, discussions of inclusion, and the cohort's weekly practicum for improving practice. Adaptations of peer mentoring for different types of institutions are discussed. Computer science educators, with hopes of improving the recruitment and retention of underrepresented groups, can benefit from expanding their peer support infrastructure and improving the quality of peer mentor preparation.

  12. A three-dimensional viscous/potential flow interaction analysis method for multi-element wings: Modifications to the potential flow code to allow part-span, high-lift devices and close-interference calculations

    NASA Technical Reports Server (NTRS)

    Maskew, B.

    1979-01-01

    The description of the modified code includes details of a doublet subpanel technique in which panels that are close to a velocity calculation point are replaced by a subpanel set. This treatment gives the effect of a higher panel density without increasing the number of unknowns. In particular, the technique removes the close approach problem of the earlier singularity model in which distortions occur in the detailed pressure calculation near panel corners. Removal of this problem allowed a complete wake relaxation and roll-up iterative procedure to be installed in the code. The geometry package developed for the new technique and also for the more general configurations is based on a multiple patch scheme. Each patch has a regular array of panels, but arbitrary relationships are allowed between neighboring panels at the edges of adjacent patches. This provides great versatility for treating general configurations.

  13. Investigation of photon attenuation coefficient of some building materials used in Turkey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dogan, B.; Altinsoy, N.

    In this study, some building materials regularly used in Turkey, such as concrete, gas concrete, pumice, and brick, have been investigated in terms of their mass attenuation coefficients at different gamma-ray energies. Measurements were carried out with a gamma spectrometer containing a NaI(Tl) detector, using a narrow-beam gamma-ray transmission geometry. The results are in good agreement with theoretical calculations from the XCOM code.
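
    For readers unfamiliar with the measurement, the mass attenuation coefficient follows from narrow-beam transmission via the Beer-Lambert law, I = I0·exp(-(mu/rho)·rho·t). The sketch below uses invented sample numbers, not values from the study.

        import math

        def mass_attenuation(i0, i, density, thickness):
            """mu/rho in cm^2/g from incident and transmitted intensities."""
            return math.log(i0 / i) / (density * thickness)

        # Hypothetical concrete sample: 5 cm thick, density 2.3 g/cm^3,
        # transmission I/I0 = 0.42 at some gamma energy.
        print(mass_attenuation(1.0, 0.42, 2.3, 5.0))  # ~0.0754 cm^2/g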

  14. A time-accurate high-resolution TVD scheme for solving the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Kim, Hyun Dae; Liu, Nan-Suey

    1992-01-01

    A total variation diminishing (TVD) scheme has been developed and incorporated into an existing time-accurate high-resolution Navier-Stokes code. The accuracy and the robustness of the resulting solution procedure have been assessed by performing many calculations in four different areas: shock tube flows, regular shock reflection, supersonic boundary layer, and shock boundary layer interactions. These numerical results compare well with corresponding exact solutions or experimental data.

  15. Mixed Methodology to Predict Social Meaning for Decision Support

    DTIC Science & Technology

    2013-09-01

    regular usage of Standard American English (SAE) that also ranges in use of stylistic features that identify users as members of certain street gangs...membership based solely on their use of language. While aspects of gang language, such as the stylistic tendencies of the language of graffiti (Adams and... stylistics of gang language online, as a mode of code switching that reflects the infrastructure of the larger gang community, has been little studied

  16. Regular group exercise contributes to balanced health in older adults in Japan: a qualitative study.

    PubMed

    Komatsu, Hiroko; Yagasaki, Kaori; Saito, Yoshinobu; Oguma, Yuko

    2017-08-22

    While community-wide interventions to promote physical activity have been encouraged in older adults, evidence of their effectiveness remains limited. We conducted a qualitative study among older adults participating in regular group exercise to understand their perceptions of the physical, mental, and social changes they underwent as a result of the physical activity. We conducted a qualitative study with purposeful sampling to explore the experiences of older adults who participated in regular group exercise as part of a community-wide physical activity intervention. Four focus group interviews were conducted between April and June of 2016 at community halls in Fujisawa City. The participants in the focus group interviews were 26 older adults with a mean age of 74.69 years (range: 66-86). The interviews were analysed using the constant comparative method in the grounded theory approach. We used qualitative research software NVivo10® to track the coding and manage the data. The finding 'regular group exercise contributes to balanced health in older adults' emerged as an overarching theme with seven categories (regular group exercise, functional health, active mind, enjoyment, social connectedness, mutual support, and expanding communities). Although the participants perceived that they were aging physically and cognitively, the regular group exercise helped them to improve or maintain their functional health and enjoy their lives. They felt socially connected and experienced a sense of security in the community through caring for others and supporting each other. As the older adults began to seek value beyond individuals, they gradually expanded their communities beyond geographical and generational boundaries. The participants achieved balanced health in the physical, mental, and social domains through regular group exercise as part of a community-wide physical activity intervention and contributed to expanding communities through social connectedness and mutual support. Health promotion through physical activity is being increasingly emphasized. The study results can help to develop effective physical activity programs for older adults in the community.

  17. Suntans and sun protection in Australian teen media: 1999 to 2000.

    PubMed

    McDermott, Liane J; Lowe, John B; Stanton, Warren R; Clavarino, Alexandra M

    2005-08-01

    In this study, the portrayal of tanned skin and sun protection in magazines, television programs, and movies popular with Australian adolescents was analyzed. Images of models in magazines (n = 1,791), regular/supporting characters in television programs (n = 867), and regular/supporting characters in cinema movies (n = 2,836) for the 12-month period August 1999 to July 2000 were coded and analyzed. A light tan was the most predominant tan level, and protective clothing was the most common sun protection measure displayed across all forms of media. There were significant associations between gender and tan levels in the television and movie samples. Although it is important to monitor the portrayal of tan levels and sun protection measures in media targeting adolescents, overall, the authors' findings revealed a media environment generally supportive of sun protection objectives.

  18. The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.

    PubMed

    Pang, Haotian; Liu, Han; Vanderbei, Robert

    2014-02-01

    We develop an R package, fastclime, for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as a stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.

  19. THR-TH: a high-temperature gas-cooled nuclear reactor core thermal hydraulics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vondy, D.R.

    1984-07-01

    The ORNL version of PEBBLE, the (RZ) pebble bed thermal hydraulics code, has been extended for application to a prismatic gas-cooled reactor core. The supplemental treatment is of one-dimensional coolant flow in up to a three-dimensional core description. Power density data from a neutronics and exposure calculation are used as the basic information for the thermal hydraulics calculation of heat removal. Two-dimensional neutronics results may be expanded for a three-dimensional hydraulics calculation. The geometric description for the hydraulics problem is the same as used by the neutronics code. A two-dimensional thermal cell model is used to predict temperatures in the fuel channel. This capability is available in the local BOLD VENTURE computation system for reactor core analysis, which can account for the effect of temperature feedback by nuclear cross-section correlation. Some enhancements have also been added to the original code to add pebble bed modeling flexibility and to generate useful auxiliary results. For example, an estimate is made of the distribution of fuel temperatures based on average and extreme conditions regularly calculated at a number of locations.

  20. Accurate orbit propagation in the presence of planetary close encounters

    NASA Astrophysics Data System (ADS)

    Amato, Davide; Baù, Giulio; Bombardelli, Claudio

    2017-09-01

    We present an efficient strategy for the numerical propagation of small Solar system objects undergoing close encounters with massive bodies. The trajectory is split into several phases, each of them being the solution of a perturbed two-body problem. Formulations regularized with respect to different primaries are employed in two subsequent phases. In particular, we consider the Kustaanheimo-Stiefel regularization and a novel set of non-singular orbital elements pertaining to the Dromo family. In order to test the proposed strategy, we perform ensemble propagations in the Earth-Sun Circular Restricted 3-Body Problem (CR3BP) using a variable step size and order multistep integrator and an improved version of Everhart's radau solver of 15th order. By combining the trajectory splitting with regularized equations of motion in short-term propagations (1 year), we gain up to six orders of magnitude in accuracy with respect to the classical Cowell's method for the same computational cost. Moreover, in the propagation of asteroid (99942) Apophis through its 2029 Earth encounter, the position error stays within 100 metres after 100 years. In general, to improve the performance of regularized formulations, the trajectory must be split between 1.2 and 3 Hill radii from the Earth. We also devise a robust iterative algorithm to stop the integration of regularized equations of motion at a prescribed physical time. The results rigorously hold in the CR3BP, and similar considerations may apply when considering more complex models. The methods and algorithms are implemented in the naples fortran 2003 code, which is available online as a GitHub repository.

  1. Three-dimensional time-dependent STAR reactor kinetics analyses coupled with RETRAN and MCPWR system response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feltus, M.A.

    1989-11-01

    The operation of a nuclear power plant must be regularly supported by various reactor dynamics and thermal-hydraulic analyses, which may include final safety analysis report (FSAR) design-basis calculations, and conservative and best-estimate analyses. The development and improvement of computer codes and analysis methodologies provide many advantages, including the ability to evaluate the effect of modeling simplifications and assumptions made in previous reactor kinetics and thermal-hydraulic calculations. This paper describes the results of using the RETRAN, MCPWR, and STAR codes in a tandem, predictive-corrective manner for three pressurized water reactor (PWR) transients: (a) loss of feedwater (LOF) anticipated transient without scram (ATWS), (b) station blackout ATWS, and (c) loss of total reactor coolant system (RCS) flow with a scram.

  2. A comparison of VLSI architectures for time and transform domain decoding of Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.; Truong, T. K.; Deutsch, L. J.; Satorius, E. H.; Reed, I. S.

    1988-01-01

    It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial needed to decode a Reed-Solomon (RS) code. It is shown that this algorithm can be used for both time and transform domain decoding by replacing its initial conditions with the Forney syndromes and the erasure locator polynomial. By this means both the errata locator polynomial and the errata evaluator polynomial can be obtained with the Euclidean algorithm. With these ideas, both time and transform domain Reed-Solomon decoders for correcting errors and erasures are simplified and compared. As a consequence, the architectures of Reed-Solomon decoders for correcting both errors and erasures can be made more modular, regular, simple, and naturally suitable for VLSI implementation.
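
    As a rough illustration of the key-equation step described above, the sketch below runs the Euclidean algorithm on the pair (x^(2t), S(x)) until the remainder degree drops below t, yielding the locator and evaluator polynomials up to a scale factor. For brevity it works over a prime field GF(P) instead of the GF(2^m) used in practice, with coefficient lists ordered lowest degree first; all names are illustrative, not from the paper.

        P = 929  # any prime; real RS decoders use GF(2^m)

        def trim(a):
            """Drop leading zero coefficients (lists are lowest degree first)."""
            while a and a[-1] % P == 0:
                a = a[:-1]
            return a

        def deg(a):
            return len(a) - 1          # degree of the zero polynomial [] is -1

        def sub(a, b):
            n = max(len(a), len(b))
            a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
            return trim([(x - y) % P for x, y in zip(a, b)])

        def mul(a, b):
            out = [0] * (len(a) + len(b) - 1) if a and b else []
            for i, x in enumerate(a):
                for j, y in enumerate(b):
                    out[i + j] = (out[i + j] + x * y) % P
            return trim(out)

        def divmod_poly(a, b):
            """Polynomial long division over GF(P)."""
            inv = pow(b[-1], P - 2, P)         # inverse of b's leading coefficient
            q = [0] * max(len(a) - len(b) + 1, 1)
            r = trim(list(a))
            while r and len(r) >= len(b):
                c = (r[-1] * inv) % P
                k = len(r) - len(b)
                q[k] = c
                r = sub(r, mul([0] * k + [c], b))
            return trim(q), r

        def key_equation(syndrome, t):
            """Euclid on (x^(2t), S(x)) until deg(remainder) < t."""
            r_prev, r = [0] * (2 * t) + [1], trim(list(syndrome))
            u_prev, u = [], [1]                # multipliers of S(x)
            while deg(r) >= t:
                quo, rem = divmod_poly(r_prev, r)
                r_prev, r = r, rem
                u_prev, u = u, sub(u_prev, mul(quo, u))
            return u, r                        # locator and evaluator, up to scale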

  3. Toward heterogeneity in feedforward network with synaptic delays based on FitzHugh-Nagumo model

    NASA Astrophysics Data System (ADS)

    Qin, Ying-Mei; Men, Cong; Zhao, Jia; Han, Chun-Xiao; Che, Yan-Qiu

    2018-01-01

    We focus on the role of heterogeneity in the propagation of firing patterns in a feedforward network (FFN). Effects of heterogeneities both in the parameters of neuronal excitability and in synaptic delays are investigated systematically. Neuronal heterogeneity is found to modulate firing rates and spiking regularity by changing the excitability of the network. Synaptic delays are strongly related to desynchronized and synchronized firing patterns of the FFN, indicating that synaptic delays may play a significant role in bridging rate coding and temporal coding. Furthermore, a quasi-coherence resonance (quasi-CR) phenomenon is observed in the parameter domain of connection probability and delay-heterogeneity. These phenomena enable a detailed characterization of neuronal heterogeneity in FFNs, which may play an indispensable role in reproducing the important properties of in vivo experiments.

  4. Improved accuracy of co-morbidity coding over time after the introduction of ICD-10 administrative data

    PubMed Central

    2011-01-01

    Background Co-morbidity information derived from administrative data needs to be validated to allow its regular use. We assessed evolution in the accuracy of coding for Charlson and Elixhauser co-morbidities at three time points over a 5-year period, following the introduction of the International Classification of Diseases, 10th Revision (ICD-10), coding of hospital discharges. Methods Cross-sectional time trend evaluation study of coding accuracy using hospital chart data of 3'499 randomly selected patients who were discharged in 1999, 2001 and 2003, from two teaching and one non-teaching hospital in Switzerland. We measured sensitivity, positive predictive values, and kappa values for agreement between administrative data coded with ICD-10 and chart data as the 'reference standard' for recording 36 co-morbidities. Results For the 17 Charlson co-morbidities, the sensitivity - median (min-max) - was 36.5% (17.4-64.1) in 1999, 42.5% (22.2-64.6) in 2001 and 42.8% (8.4-75.6) in 2003. For the 29 Elixhauser co-morbidities, the sensitivity was 34.2% (1.9-64.1) in 1999, 38.6% (10.5-66.5) in 2001 and 41.6% (5.1-76.5) in 2003. Between 1999 and 2003, sensitivity estimates increased for 30 co-morbidities and decreased for 6 co-morbidities. The increase in sensitivities was statistically significant for six conditions and the decrease significant for one. Kappa values increased for 29 co-morbidities and decreased for seven. Conclusions Accuracy of administrative data in recording clinical conditions improved slightly between 1999 and 2003. These findings are of relevance to all jurisdictions introducing new coding systems, because they demonstrate a phenomenon of improved administrative data accuracy that may relate to a coding 'learning curve' with the new coding system. PMID:21849089

  5. Improved accuracy of co-morbidity coding over time after the introduction of ICD-10 administrative data.

    PubMed

    Januel, Jean-Marie; Luthi, Jean-Christophe; Quan, Hude; Borst, François; Taffé, Patrick; Ghali, William A; Burnand, Bernard

    2011-08-18

    Co-morbidity information derived from administrative data needs to be validated to allow its regular use. We assessed evolution in the accuracy of coding for Charlson and Elixhauser co-morbidities at three time points over a 5-year period, following the introduction of the International Classification of Diseases, 10th Revision (ICD-10), coding of hospital discharges. Cross-sectional time trend evaluation study of coding accuracy using hospital chart data of 3'499 randomly selected patients who were discharged in 1999, 2001 and 2003, from two teaching and one non-teaching hospital in Switzerland. We measured sensitivity, positive predictive values, and kappa values for agreement between administrative data coded with ICD-10 and chart data as the 'reference standard' for recording 36 co-morbidities. For the 17 Charlson co-morbidities, the sensitivity - median (min-max) - was 36.5% (17.4-64.1) in 1999, 42.5% (22.2-64.6) in 2001 and 42.8% (8.4-75.6) in 2003. For the 29 Elixhauser co-morbidities, the sensitivity was 34.2% (1.9-64.1) in 1999, 38.6% (10.5-66.5) in 2001 and 41.6% (5.1-76.5) in 2003. Between 1999 and 2003, sensitivity estimates increased for 30 co-morbidities and decreased for 6 co-morbidities. The increase in sensitivities was statistically significant for six conditions and the decrease significant for one. Kappa values increased for 29 co-morbidities and decreased for seven. Accuracy of administrative data in recording clinical conditions improved slightly between 1999 and 2003. These findings are of relevance to all jurisdictions introducing new coding systems, because they demonstrate a phenomenon of improved administrative data accuracy that may relate to a coding 'learning curve' with the new coding system.

  6. Objective speech quality assessment and the RPE-LTP coding algorithm in different noise and language conditions.

    PubMed

    Hansen, J H; Nandkumar, S

    1995-01-01

    The formulation of reliable signal processing algorithms for speech coding and synthesis requires the selection of a prior criterion of performance. Though coding efficiency (bits/second) or computational requirements can be used, a final performance measure must always include speech quality. In this paper, three objective speech quality measures are considered with respect to quality assessment for American English, noisy American English, and noise-free versions of seven languages. The purpose is to determine whether objective quality measures can be used to quantify changes in quality for a given voice coding method, with a known subjective performance level, as background noise or language conditions are changed. The speech coding algorithm chosen is regular-pulse excitation with long-term prediction (RPE-LTP), which was adopted as the standard voice compression algorithm for the European Digital Mobile Radio system. Three areas are considered for objective quality assessment: (i) vocoder performance for American English in a noise-free environment, (ii) speech quality variation for three additive background noise sources, and (iii) noise-free performance for seven languages, namely English, Japanese, Finnish, German, Hindi, Spanish, and French. It is suggested that although existing objective quality measures will never replace subjective testing, they can be a useful means of assessing changes in performance, identifying areas for improvement in algorithm design, and augmenting subjective quality tests for voice coding/compression algorithms in noise-free, noisy, and/or non-English applications.

  7. Using the ALEGRA Code for Analysis of Quasi-Static Magnetization of Metals

    DTIC Science & Technology

    2015-09-01

    covariant Levi-Civita skew-symmetric tensor. Using tensorial notation permits one to present all the equations in the universal covariant (i.e., coordinate...tensors numerically coincide with the corresponding values of the Kronecker symbols δ_ij, δ^ij, δ^i_j. The Levi-Civita tensor z^ijk has the main com...simulations: body-fitted (left) and regular (right). 6.1 Spatial Discretization: Two mesh configurations were used: (1) a body-fitted irregular mesh

  8. Status Report on Speech Research: A Report on the Status and Progress of Studies on the Nature of Speech, Instrumentation for its Investigation, and Practical Applications, April 1-September 30, 1983.

    ERIC Educational Resources Information Center

    Studdert-Kennedy, Michael, Ed.; O'Brien, Nancy, Ed.

    Prepared as part of a regular series on the status and progress of studies on the nature of speech, instrumentation for its evaluation, and practical applications for speech research, this compilation contains 14 reports. Topics covered in the reports include the following: (1) phonetic coding and order memory in relation to reading proficiency,…

  9. Unsteady Flow About Porous Cambered Plates

    DTIC Science & Technology

    1988-06-01

    regular time intervals, and evolution of the vortex wake is calculated through the use of the velocities induced at each vortex location. Furthermore... [Figure 24: Wake Vortex Positions; plotted data not recoverable] ...Subject terms: Unsteady Flow, Discrete Vortex Analysis

  10. Multinational Counter-Piracy Operations: How Strategically Significant is the Gulf of Guinea to the Major Maritime Powers

    DTIC Science & Technology

    2015-12-01

    Piracy in the Gulf of Guinea regularly exceeded that of the Gulf of Aden between 2000 and 2007. But...flow of goods is the flow of services, which in today’s computer-centric world travels electronically in digital bits and bytes through fiber optic...piracy prosecutions, among others. Second-order costs include fisheries, food security and food price inflation, tourism, and environmental pollution

  11. Economic Concentration and the Federal Tax Code,

    DTIC Science & Technology

    1984-09-01

    Retained Earnings: The divergence of the individual from the corporate income tax rate...up to a 38.5 percent tax on retained earnings. After paying corporate income tax on their income, firms may distribute their earnings to shareholders...months) over net short-term capital losses. They are taxed at the regular corporate income tax rate on the excess of net short-term capital gains over

  12. Accelerating NBODY6 with graphics processing units

    NASA Astrophysics Data System (ADS)

    Nitadori, Keigo; Aarseth, Sverre J.

    2012-07-01

    We describe the use of graphics processing units (GPUs) for speeding up the code NBODY6, which is widely used for direct N-body simulations. Over the years, the N² nature of the direct force calculation has proved a barrier for extending the particle number. Following an early introduction of force polynomials and individual time steps, the calculation cost was first reduced by the introduction of a neighbour scheme. After a decade of GRAPE computers which speeded up the force calculation further, we are now in the era of GPUs where relatively small hardware systems are highly cost effective. A significant gain in efficiency is achieved by employing the GPU to obtain the so-called regular force which typically involves some 99 per cent of the particles, while the remaining local forces are evaluated on the host. However, the latter operation is performed up to 20 times more frequently and may still account for a significant cost. This effort is reduced by parallel SSE/AVX procedures where each interaction term is calculated using mainly single precision. We also discuss further strategies connected with coordinate and velocity prediction required by the integration scheme. This leaves hard binaries and multiple close encounters which are treated by several regularization methods. The present NBODY6-GPU code is well balanced for simulations in the particle range 10⁴ to 2 × 10⁵ for a dual-GPU system attached to a standard PC.
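
    The regular/irregular force split is easy to caricature in a few lines. The sketch below (plain numpy, softened gravity, an invented neighbour radius) separates the acceleration on one particle into near and far contributions; in an NBODY6-like integrator the cheap neighbour sum would be refreshed every short step and the expensive far-field sum only at the much longer regular steps.

        import numpy as np

        def split_accel(pos, mass, i, r_nb=0.1, soft=1e-3):
            """Return (irregular, regular) acceleration on particle i,
            split at the neighbour radius r_nb."""
            d = pos - pos[i]
            r2 = (d * d).sum(axis=1) + soft**2
            r2[i] = np.inf                       # exclude self-interaction
            contrib = mass[:, None] * d / r2[:, None] ** 1.5
            near = r2 < r_nb**2
            return contrib[near].sum(axis=0), contrib[~near].sum(axis=0)

        rng = np.random.default_rng(0)
        pos = rng.standard_normal((1000, 3))
        mass = np.full(1000, 1.0 / 1000)
        a_irr, a_reg = split_accel(pos, mass, 0)
        # a_irr would be refreshed every short ("irregular") step; a_reg only
        # at the long ("regular") steps and reused in between.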

  13. The reliability of cause-of-death coding in The Netherlands.

    PubMed

    Harteloh, Peter; de Bruin, Kim; Kardaun, Jan

    2010-08-01

    Cause-of-death statistics are a major source of information for epidemiological research and policy decisions. Information on the reliability of these statistics is important for interpreting trends in time or differences between populations. Variations in coding the underlying cause of death could hinder the attribution of observed differences to determinants of health. We therefore studied the reliability of cause-of-death statistics in The Netherlands. We performed a double coding study: death certificates from the month of May 2005 were coded again in 2007, and each death certificate was coded manually by four coders. Reliability was measured by calculating agreement between coders (intercoder agreement) and the consistency of each individual coder over time (intracoder agreement). Our analysis covered 10,833 death certificates. The intercoder agreement of the four coders on the underlying cause of death was 78%. In 2.2% of the cases coders agreed on a change of the code assigned in 2005. The mean intracoder agreement of the four coders was 89%. Agreement was associated with the specificity of the ICD-10 code (chapter, three digits, four digits), the age of the deceased, the number of coders, and the number of diseases reported on the death certificate. The reliability of cause-of-death statistics turned out to be high (>90%) for major causes of death such as cancers and acute myocardial infarction. For chronic diseases, such as diabetes and renal insufficiency, reliability was low (<70%). The reliability of cause-of-death statistics thus varies by ICD-10 code/chapter. A statistical office should provide coders with additional rules for coding diseases with low reliability and evaluate these rules regularly. Users of cause-of-death statistics should exercise caution when interpreting causes of death with low reliability, and studies of reliability should take into account the number of coders involved and the number of codes on a death certificate.
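
    The two agreement measures underlying such a double-coding study are raw percent agreement and a chance-corrected statistic such as Cohen's kappa. The sketch below shows both for two coders (the study used four, which calls for a generalization such as Fleiss' kappa); the ICD-10 codes are invented.

        from collections import Counter

        def percent_agreement(codes_a, codes_b):
            same = sum(a == b for a, b in zip(codes_a, codes_b))
            return same / len(codes_a)

        def cohens_kappa(codes_a, codes_b):
            n = len(codes_a)
            p_obs = percent_agreement(codes_a, codes_b)
            ca, cb = Counter(codes_a), Counter(codes_b)
            p_exp = sum(ca[k] * cb[k] for k in ca) / n**2   # chance agreement
            return (p_obs - p_exp) / (1 - p_exp)

        a = ["I21", "E11", "C34", "N18", "I21", "C34"]      # coder 1, ICD-10
        b = ["I21", "N18", "C34", "N18", "I25", "C34"]      # coder 2
        print(percent_agreement(a, b))                      # 0.667
        print(cohens_kappa(a, b))                           # ~0.571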

  14. Equality marker in the language of Bali

    NASA Astrophysics Data System (ADS)

    Wajdi, Majid; Subiyanto, Paulus

    2018-01-01

    The language of Bali can be counted among the most elaborate languages of the world because of its speech levels: like Javanese, it has low and high speech levels. The low and high speech levels of Balinese are language codes that can be used to express social relationships between or among its speakers. This paper focuses on describing, analyzing, and interpreting the use of the low code of Balinese in daily communication in the speech community of Pegayaman, Bali. Observational and documentation methods were applied to provide the data for the research, using recording and field-note techniques. Recordings of spoken language and passages from a Balinese novel were transcribed into written form to ease the analysis. Symmetric use of the low code expresses social equality between or among the participants involved in the communication; it also implies social intimacy between or among the speakers. The regular and patterned use of the low code of Balinese is not merely a communication strategy; it is a kind of communication agreement or contract between the participants. By using the low code during their social and communication activities, the participants share and express social equality and intimacy with one another.

  15. Revision of seismic design codes corresponding to building damages in the ``5.12'' Wenchuan earthquake

    NASA Astrophysics Data System (ADS)

    Wang, Yayong

    2010-06-01

    A large number of buildings were seriously damaged or collapsed in the “5.12” Wenchuan earthquake. Based on field surveys and studies of damage to different types of buildings, seismic design codes have been updated. This paper briefly summarizes some of the major revisions that have been incorporated into the “Standard for classification of seismic protection of building constructions GB50223-2008” and the “Code for Seismic Design of Buildings GB50011-2001.” The definition of the seismic fortification class for buildings has been revisited, and as a result, the seismic classifications for schools, hospitals and other buildings that hold large populations, such as evacuation shelters and information centers, have been upgraded in the GB50223-2008 Code. The main aspects of the revised GB50011-2001 code include: (a) modification of the seismic intensity specified for the Provinces of Sichuan, Shaanxi and Gansu; (b) basic conceptual design for retaining walls and building foundations in mountainous areas; (c) regularity of building configuration; (d) integration of masonry structures and pre-cast RC floors; (e) requirements for calculating and detailing stair shafts; and (f) limiting the use of single-bay RC frame structures. Some significant examples of damage in the epicenter areas are provided as a reference in the discussion of the consequences of collapse, the importance of duplicate structural systems, and the integration of RC and masonry structures.

  16. Comparison of ENDF/B-VII.1 and JEFF-3.2 in VVER-1000 operational data calculation

    NASA Astrophysics Data System (ADS)

    Frybort, Jan

    2017-09-01

    Safe operation of a nuclear reactor requires extensive calculational support. Operational data are determined by full-core calculations during the design phase of a fuel loading. The loading pattern and the design of fuel assemblies are adjusted to meet safety requirements and optimize reactor operation. The nodal diffusion code ANDREA is used for this task in the case of Czech VVER-1000 reactors. Nuclear data for this diffusion code are prepared regularly by the lattice code HELIOS; these calculations are conducted in 2D on the fuel assembly level. The macroscopic data can also be calculated by the Monte Carlo code Serpent, which can make use of alternative evaluated libraries. All calculations are affected by inherent uncertainties in nuclear data. It is therefore useful to compare full-core calculations based on two sets of diffusion data obtained from Serpent calculations with ENDF/B-VII.1 and JEFF-3.2 nuclear data, including the corresponding decay data and fission yield libraries. The comparison is based directly on the assembly-level macroscopic data and the resulting operational data. This study illustrates the effect of the evaluated nuclear data library on full-core calculations of a large PWR core: the level of difference that results exclusively from nuclear data selection helps to gauge the inherent uncertainties of such full-core calculations.

  17. Entanglement and area law with a fractal boundary in a topologically ordered phase

    NASA Astrophysics Data System (ADS)

    Hamma, Alioscia; Lidar, Daniel A.; Severini, Simone

    2010-01-01

    Quantum systems with short-range interactions are known to respect an area law for the entanglement entropy: the von Neumann entropy S associated with a bipartition scales with the size p of the boundary between the two parts. Here we study the case in which the boundary is a fractal. We consider the topologically ordered phase of the toric code with a magnetic field. When the field vanishes, it is possible to analytically compute the entanglement entropy for both regular and fractal bipartitions (A,B) of the system, and this yields an upper bound for the entire topological phase. When the A-B boundary is regular, we have S/p=1 for large p. When the boundary is a fractal of Hausdorff dimension D, we show that the entanglement between the two parts scales as S/p=γ⩽1/D, where γ depends on the fractal considered.

  18. Extending HPF for advanced data parallel applications

    NASA Technical Reports Server (NTRS)

    Chapman, Barbara; Mehrotra, Piyush; Zima, Hans

    1994-01-01

    The stated goal of High Performance Fortran (HPF) was to 'address the problems of writing data parallel programs where the distribution of data affects performance'. After examining the current version of the language we are led to the conclusion that HPF has not fully achieved this goal. While the basic distribution functions offered by the language - regular block, cyclic, and block cyclic distributions - can support regular numerical algorithms, advanced applications such as particle-in-cell codes or unstructured mesh solvers cannot be expressed adequately. We believe that this is a major weakness of HPF, significantly reducing its chances of becoming accepted in the numeric community. The paper discusses the data distribution and alignment issues in detail, points out some flaws in the basic language, and outlines possible future paths of development. Furthermore, we briefly deal with the issue of task parallelism and its integration with the data parallel paradigm of HPF.

  19. A gradient enhanced plasticity-damage microplane model for concrete

    NASA Astrophysics Data System (ADS)

    Zreid, Imadeddin; Kaliske, Michael

    2018-03-01

    Computational modeling of concrete poses two main types of challenges. The first is the mathematical description of local response for such a heterogeneous material under all stress states, and the second is the stability and efficiency of the numerical implementation in finite element codes. The paper at hand presents a comprehensive approach addressing both issues. Adopting the microplane theory, a combined plasticity-damage model is formulated and regularized by an implicit gradient enhancement. The plasticity part introduces a new microplane smooth 3-surface cap yield function, which provides a stable numerical solution within an implicit finite element algorithm. The damage part utilizes a split, which can describe the transition of loading between tension and compression. Regularization of the model by the implicit gradient approach eliminates the mesh sensitivity and numerical instabilities. Identification methods for model parameters are proposed and several numerical examples of plain and reinforced concrete are carried out for illustration.

  20. Regularized finite element modeling of progressive failure in soils within nonlocal softening plasticity

    NASA Astrophysics Data System (ADS)

    Huang, Maosong; Qu, Xie; Lü, Xilin

    2017-11-01

    By solving a nonlinear complementarity problem for the consistency condition, an improved implicit stress return iterative algorithm for a generalized over-nonlocal strain softening plasticity was proposed, and the consistent tangent matrix was obtained. The proposed algorithm was implemented in existing finite element codes, and it enables the nonlocal regularization of ill-posed boundary value problems caused by pressure-independent and pressure-dependent strain softening plasticity. The algorithm was verified by numerical modeling of strain localization in a plane strain compression test. The results showed that fast convergence can be achieved and that the mesh dependency caused by strain softening can be effectively eliminated. The influences of the hardening modulus and the material characteristic length on the simulation were obtained. The proposed algorithm was further used in simulations of the bearing capacity of a strip footing; the results are mesh-independent, and the progressive failure process of the soil was well captured.

  1. Incoherent dictionary learning for reducing crosstalk noise in least-squares reverse time migration

    NASA Astrophysics Data System (ADS)

    Wu, Juan; Bai, Min

    2018-05-01

    We propose to apply a novel incoherent dictionary learning (IDL) algorithm for regularizing the least-squares inversion in seismic imaging. IDL is proposed to overcome the drawback of traditional dictionary learning algorithms, which lose part of the texture information. First, the noisy image is divided into overlapping image patches, and some random patches are extracted for dictionary learning. Then, we apply the IDL technique to minimize the coherency between atoms during dictionary learning. Finally, the sparse representation problem is solved by a sparse coding algorithm, and the image is restored from those sparse coefficients. By reducing the correlation among atoms, it is possible to preserve most of the small-scale features in the image while removing much of the long-wavelength noise. The application of the IDL method to the regularization of seismic images from least-squares reverse time migration shows successful performance.
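
    The distinguishing ingredient of IDL is the coherence between atoms. As a rough sketch under our own assumptions (this is a generic decorrelation step, not the paper's exact algorithm), the snippet below measures mutual coherence and performs one step of reducing it: shrink the off-diagonal of the Gram matrix and project back to a rank-limited dictionary.

        import numpy as np

        def mutual_coherence(D):
            """Largest |inner product| between distinct unit-norm atoms."""
            Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
            G = Dn.T @ Dn
            np.fill_diagonal(G, 0.0)
            return np.abs(G).max()

        def decorrelate(D, shrink=0.9):
            """One generic step: shrink off-diagonal Gram entries, project back."""
            d, K = D.shape
            Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
            G = Dn.T @ Dn
            G = np.where(np.eye(K, dtype=bool), 1.0, shrink * G)
            w, V = np.linalg.eigh(G)              # G = V diag(w) V^T
            top = np.argsort(w)[-d:]              # keep the d largest modes
            Dnew = np.sqrt(np.clip(w[top], 0, None))[:, None] * V[:, top].T
            return Dnew / np.linalg.norm(Dnew, axis=0, keepdims=True)

        rng = np.random.default_rng(1)
        D = rng.standard_normal((64, 128))        # 64-dim atoms, 128 of them
        print(mutual_coherence(D), mutual_coherence(decorrelate(D)))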

  2. Pediatric office emergencies.

    PubMed

    Fuchs, Susan

    2013-10-01

    Pediatricians regularly see emergencies in the office, as well as children who require transfer to an emergency department or hospitalization. An office self-assessment is the first step in determining how to prepare for an emergency. The use of mock codes and skill drills makes office personnel feel less anxious about medical emergencies. Emergency information forms provide valuable, quick information about complex patients for emergency medical services and other physicians caring for patients. Furthermore, disaster planning should be part of an office preparedness plan.

  3. Computational multispectral video imaging [Invited].

    PubMed

    Wang, Peng; Menon, Rajesh

    2018-01-01

    Multispectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information to a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrated a spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
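
    The inversion step can be illustrated with the simplest flavor of regularization-based linear algebra, Tikhonov (ridge) least squares. The calibration matrix and measurement below are random stand-ins for the calibrated quantities in the paper.

        import numpy as np

        def tikhonov_solve(A, b, lam=1e-2):
            """argmin_x ||A x - b||^2 + lam ||x||^2, via the normal equations."""
            AtA = A.T @ A
            return np.linalg.solve(AtA + lam * np.eye(AtA.shape[0]), A.T @ b)

        rng = np.random.default_rng(0)
        A = rng.standard_normal((500, 300))     # calibrated code: sensor x unknowns
        x_true = rng.standard_normal(300)       # multispectral unknowns
        b = A @ x_true + 0.01 * rng.standard_normal(500)   # noisy sensor reading
        x_hat = tikhonov_solve(A, b)
        print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small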

  4. Geoethics and the Role of Professional Geoscience Societies

    NASA Astrophysics Data System (ADS)

    Kieffer, S. W.; Palka, J. M.; Geissman, J. W.; Mogk, D. W.

    2014-12-01

    Codes of Ethics (Conduct) for geoscientists are formulated primarily by professional societies, and the codes must be viewed in the context of the Goals (Missions, Values) of those societies. Our survey of the codes of approximately twenty-five societies reveals that most enumerate principles centered on practical issues of individual professional conduct, such as plagiarism, fabrication, and falsification, and the obligation of individuals to the profession and society at large. With the exception of statements regarding the ethics of peer review, there is relatively little regarding the ethical obligations of the societies themselves. In essence, the codes call for traditionally honorable behavior of individual members. It is striking that, given the geosciences' relevance to the future of Earth, most current codes fail to address our immediate obligations to the environment and Earth itself. We challenge professional organizations to consider the ethical obligations to Earth in both their statements of goals and their codes of ethics. Actions by societies could enhance the efforts of individual geoscientists to serve society, especially in matters related to hazards, resources, and planetary stewardship. Actions we suggest considering include: (1) issue timely position statements on topics in which there is expertise and consensus (some professional societies, such as AGU, GSA, AAAS, and the AMS, do this regularly; others not at all); (2) build databases of case studies regarding geoethics that can be used in university classes; (3) hold interdisciplinary panel discussions with ethicists, scientists, and policy makers at annual meetings; (4) foster publication in society journals of contributions relating to ethical questions; and (5) aggressively pursue the incorporation of geoethical issues in undergraduate and graduate curricula and in continuing professional development.

  5. SENR/NRPy+: Numerical relativity in singular curvilinear coordinate systems

    NASA Astrophysics Data System (ADS)

    Ruchlin, Ian; Etienne, Zachariah B.; Baumgarte, Thomas W.

    2018-03-01

    We report on a new open-source, user-friendly numerical relativity code package called SENR/NRPy+. Our code extends previous implementations of the BSSN reference-metric formulation to a much broader class of curvilinear coordinate systems, making it ideally suited to modeling physical configurations with approximate or exact symmetries. In the context of modeling black hole dynamics, it is orders of magnitude more efficient than other widely used open-source numerical relativity codes. NRPy+ provides a Python-based interface in which equations are written in natural tensorial form and output at arbitrary finite difference order as highly efficient C code, putting complex tensorial equations at the scientist's fingertips without the need for an expensive software license. SENR provides the algorithmic framework that combines the C codes generated by NRPy+ into a functioning numerical relativity code. We validate against two other established, state-of-the-art codes, and achieve excellent agreement. For the first time, in the context of moving puncture black hole evolutions, we demonstrate nearly exponential convergence of constraint violation and gravitational waveform errors to zero as the order of spatial finite difference derivatives is increased, while fixing the numerical grids at moderate resolution in a singular coordinate system. Such behavior outside the horizons is remarkable, as numerical errors do not converge to zero near punctures, and all points along the polar axis are coordinate singularities. The formulation addresses such coordinate singularities via cell-centered grids and a simple change of basis that analytically regularizes tensor components with respect to the coordinates. Future plans include extending this formulation to allow dynamical coordinate grids and a bispherical-like distribution of points to efficiently capture orbiting compact binary dynamics.

  6. Incoherent digital holograms acquired by interferenceless coded aperture correlation holography system without refractive lenses.

    PubMed

    Kumar, Manoj; Vijayakumar, A; Rosen, Joseph

    2017-09-14

    We present a lensless, interferenceless incoherent digital holography technique based on the principle of coded aperture correlation holography. The digital hologram acquired by this technique contains a three-dimensional image of the observed scene. Light diffracted by a point object (pinhole) is modulated using a random-like coded phase mask (CPM), and the intensity pattern is recorded and composed as a point spread hologram (PSH). A library of PSHs is created using the same CPM by moving the pinhole to all possible axial locations. Intensity diffracted through the same CPM from an object placed within the axial limits of the PSH library is recorded by a digital camera and composed as the object hologram. The image of the object at any axial plane is reconstructed by cross-correlating the object hologram with the corresponding component of the PSH library. The reconstruction noise attached to the image is suppressed by various methods. The reconstruction results of multiplane and thick objects obtained by this technique are compared with regular lens-based imaging.
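
    The reconstruction step amounts to a 2D cross-correlation of the object hologram with the PSH of the matching axial plane, conveniently done with FFTs. The sketch below is generic (the phase-only filter shown is one common way to sharpen the correlation peak, not necessarily the authors' choice), and the arrays are placeholders for recorded intensities.

        import numpy as np

        def cross_correlate(obj_holo, psh, phase_only=True):
            F_obj = np.fft.fft2(obj_holo)
            F_psh = np.fft.fft2(psh)
            if phase_only:
                F_psh = F_psh / (np.abs(F_psh) + 1e-12)  # keep phase, drop magnitude
            return np.fft.fftshift(np.fft.ifft2(F_obj * np.conj(F_psh)))

        # Reconstruct one axial plane: pick the PSH recorded for that plane
        # from the library and correlate.
        rng = np.random.default_rng(0)
        psh_library = {z: rng.random((256, 256)) for z in (0.0, 1.0, 2.0)}
        obj_holo = rng.random((256, 256))
        image_z1 = np.abs(cross_correlate(obj_holo, psh_library[1.0]))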

  7. The Continuous Intercomparison of Radiation Codes (CIRC): Phase I Cases

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros; Mlawer, Eli; Delamere, Jennifer; Shippert, Timothy; Turner, David D.; Miller, Mark A.; Minnis, Patrick; Clough, Shepard; Barker, Howard; Ellingson, Robert

    2007-01-01

    CIRC aspires to be the successor to ICRCCM (Intercomparison of Radiation Codes in Climate Models). It is envisioned as an evolving and regularly updated reference source for GCM-type radiative transfer (RT) code evaluation, with the principal goal of contributing to the improvement of RT parameterizations. CIRC is jointly endorsed by DOE's Atmospheric Radiation Measurement (ARM) program and the GEWEX Radiation Panel (GRP). CIRC's goal is to provide test cases for which GCM RT algorithms should be performing at their best, i.e., well characterized clear-sky and homogeneous, overcast cloudy cases. What distinguishes CIRC from previous intercomparisons is that its pool of cases is based on observed datasets. The bulk of the atmospheric and surface input as well as the radiative fluxes come from ARM observations as documented in the Broadband Heating Rate Profile (BBHRP) product. BBHRP also provides reference calculations from AER's RRTM RT algorithms that can be used to select the most suitable set of cases and to provide a first-order estimate of our ability to achieve radiative flux closure given the limitations in our knowledge of the atmospheric state.

  8. Periodontal status and its association with self-reported hypertension in non-medical staff in a university teaching hospital in Nigeria.

    PubMed

    Umeizudike, K A; Ayanbadejo, P O; Onajole, A T; Umeizudike, T I; Alade, G O

    2016-03-01

    A growing body of evidence suggests a relationship between periodontal disease and non-communicable systemic diseases whose prevalence is rising in developing countries, Nigeria inclusive. The aim was to determine the periodontal status and its association with self-reported hypertension among non-medical staff in a university teaching hospital in Nigeria. A cross-sectional study was conducted among non-medical staff using self-administered questionnaires and periodontal clinical examination between July and August 2013. Multivariate analysis was used to determine the independent variables associated with self-reported hypertension; p values < 0.05 were considered statistically significant. A total of 276 subjects were enrolled in the study. Shallow pockets (CPI code 3) constituted the predominant periodontal finding (46.7%), followed by calculus (CPI code 2) in 46%, bleeding gingiva (CPI code 1) in 3.3%, and deep pockets ≥ 6 mm (CPI code 4) in 2.2%. Self-reported hypertension was the most prevalent self-reported medical condition (18.1%) and was found to be associated with periodontitis, increasing age, lower education, and a positive family history of hypertension. Periodontal disease was highly prevalent in this study. Periodic periodontal examination and regular blood pressure assessment for non-medical staff are recommended.

  9. Evaluation of Cueing Innovation for Pressure Ulcer Prevention Using Staff Focus Groups.

    PubMed

    Yap, Tracey L; Kennerly, Susan; Corazzini, Kirsten; Porter, Kristie; Toles, Mark; Anderson, Ruth A

    2014-07-25

    The purpose of the manuscript is to describe long-term care (LTC) staff perceptions of a music cueing intervention designed to improve staff integration of pressure ulcer (PrU) prevention guidelines regarding consistent and regular movement of LTC residents a minimum of every two hours. The Diffusion of Innovation (DOI) model guided staff interviews about their perceptions of the intervention's characteristics, outcomes, and sustainability. This was a qualitative, observational study of staff perceptions of the PrU prevention intervention conducted in Midwestern U.S. LTC facilities (N = 45 staff members). One focus group was held in each of eight intervention facilities using a semi-structured interview protocol. Transcripts were analyzed using thematic content analysis, and summaries for each category were compared across groups. The a priori codes (observability, trialability, compatibility, relative advantage and complexity) described the innovation characteristics, and the sixth code, sustainability, was identified in the data. Within each code, two themes emerged as a positive or negative response regarding characteristics of the innovation. Moreover, within the sustainability code, a third theme emerged that was labeled "brainstormed ideas", focusing on strategies for improving the innovation. Cueing LTC staff using music offers a sustainable potential to improve PrU prevention practices, to increase resident movement, which can subsequently lead to a reduction in PrUs.

  10. Proposed scheme for parallel 10Gb/s VSR system and its verilog HDL realization

    NASA Astrophysics Data System (ADS)

    Zhou, Yi; Chen, Hongda; Zuo, Chao; Jia, Jiuchun; Shen, Rongxuan; Chen, Xiongbin

    2005-02-01

    This paper proposes a novel scheme for a 10 Gb/s parallel Very Short Reach (VSR) optical communication system. The optimized scheme properly manages the SDH/SONET redundant bytes and adjusts the positions of the error detecting and error correction bytes. Compared with the OIF-VSR4-01.0 proposal, the scheme has a coding process module. The SDH/SONET frames in the transmission direction are processed as follows: (1) The Framer-Serdes Interface (FSI) receives the 16×622.08 Mb/s STM-64 frame. (2) The STM-64 frame is byte-wise striped across 12 channels; all channels are data channels. During this process, the parity bytes and CRC bytes are generated in a similar way as in OIF-VSR4-01.0 and stored in the coding process module. (3) The coding process module regularly conveys the additional parity bytes and CRC bytes to all 12 data channels. (4) After 8B/10B coding, all 12 channels are transmitted to the parallel VCSEL array. The receive process is approximately the reverse of the transmission process. By applying this scheme to a 10 Gb/s VSR system, the frame size is reduced from 15552×12 bytes to 14040×12 bytes, and the system redundancy is reduced markedly.
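
    The byte-wise striping step is a straightforward round-robin distribution. The toy sketch below takes only the 12-lane count from the abstract; everything else, including the omission of parity/CRC insertion and 8B/10B coding (which would follow per lane), is our simplification.

        NUM_LANES = 12

        def stripe(frame: bytes):
            """Distribute frame bytes round-robin over the 12 lanes."""
            return [frame[i::NUM_LANES] for i in range(NUM_LANES)]

        def unstripe(lanes):
            """Reassemble the frame from the per-lane byte streams."""
            out = bytearray(sum(len(lane) for lane in lanes))
            for i, lane in enumerate(lanes):
                out[i::NUM_LANES] = lane
            return bytes(out)

        frame = bytes(range(48))          # stand-in for an STM-64 frame slice
        assert unstripe(stripe(frame)) == frame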

  11. Comparative sequence analysis of acid sensitive/resistance proteins in Escherichia coli and Shigella flexneri

    PubMed Central

    Manikandan, Selvaraj; Balaji, Seetharaaman; Kumar, Anil; Kumar, Rita

    2007-01-01

    The molecular basis for the survival of bacteria under extreme conditions in which growth is inhibited is a question of great current interest. A preliminary study was carried out to determine residue pattern conservation among the antiporters of enteric bacteria responsible for extreme acid sensitivity, especially in Escherichia coli and Shigella flexneri. Here we found molecular evidence of the relationship between E. coli and S. flexneri. Multiple sequence alignment of the gadC-coded acid-sensitive antiporter showed many conserved residue patterns at regular intervals in the N-terminal region. It was observed that as the alignment approaches the C-terminus, the number of conserved residues decreases, indicating that the N-terminal region of this protein plays a more active role than the carboxyl terminus. The motif FHLVFFLLLGG is well conserved within the entire gadC-coded protein at the amino terminus. The motif is also partially conserved among other antiporters (not coded by gadC) that are involved in acid sensitivity/resistance mechanisms. Phylogenetic cluster analysis confirms the relationship of Escherichia coli and Shigella flexneri: the gadC-coded proteins converge as a clade and diverge from other antiporters belonging to the amino acid-polyamine-organocation (APC) superfamily. PMID:21670792

  12. Recent developments in multidimensional transport methods for the APOLLO 2 lattice code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zmijarevic, I.; Sanchez, R.

    1995-12-31

    A usual method of preparation of homogenized cross sections for reactor coarse-mesh calculations is based on two-dimensional multigroup transport treatment of an assembly together with an appropriate leakage model and a reaction-rate-preserving homogenization technique. The current generation of assembly spectrum codes based on collision probability methods is capable of treating complex geometries (i.e., irregular meshes of arbitrary shape), thus avoiding the modeling error that was introduced in codes with traditional tracking routines. The power and architecture of current computers allow the treatment of spatial domains comprising several mutually interacting assemblies using a fine multigroup structure and retaining all geometric details of interest. Increasing safety requirements demand detailed two- and three-dimensional calculations for very heterogeneous problems such as control rod positioning, broken Pyrex rods, irregular compacting of mixed-oxide (MOX) pellets at a MOX-UO2 interface, and many others. An effort has been made to include accurate multidimensional transport methods in the APOLLO 2 lattice code. These include the extension to three-dimensional axially symmetric geometries of the general-geometry collision probability module TDT and the development of new two- and three-dimensional characteristics methods for regular Cartesian meshes. In this paper we discuss the main features of recently developed multidimensional methods that are currently being tested.

  13. Cyber-T web server: differential analysis of high-throughput data.

    PubMed

    Kayala, Matthew A; Baldi, Pierre

    2012-07-01

    The Bayesian regularization method for high-throughput differential analysis, described in Baldi and Long (A Bayesian framework for the analysis of microarray expression data: regularized t-test and statistical inferences of gene changes. Bioinformatics 2001;17:509-519) and implemented in the Cyber-T web server, is one of the most widely validated. Cyber-T implements a t-test using a Bayesian framework to compute a regularized variance of the measurements associated with each probe under each condition. This regularized estimate is derived by flexibly combining the empirical measurements with a prior, or background, derived from pooling measurements associated with probes in the same neighborhood. This approach flexibly addresses problems associated with low replication levels and technology biases, not only for DNA microarrays, but also for other technologies, such as protein arrays, quantitative mass spectrometry and next-generation sequencing (RNA-seq). Here we present an update to the Cyber-T web server, incorporating several useful new additions and improvements. Several preprocessing normalization options, including logarithmic and variance-stabilizing normalization (VSN) transforms, are included. To augment two-sample t-tests, a one-way analysis of variance is implemented. Several methods for multiple-test correction, including standard frequentist methods and a probabilistic mixture model treatment, are available. Diagnostic plots allow visual assessment of the results. The web server provides comprehensive documentation and example data sets. The Cyber-T web server, with R source code and data sets, is publicly available at http://cybert.ics.uci.edu/.
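
    As a sketch of the regularization idea, the snippet below blends a probe's empirical variance with a background variance pooled from probes of similar intensity. The weighting follows the regularized t-test form of Baldi and Long; the default nu0 and the dummy data are illustrative assumptions, and Cyber-T's exact defaults may differ.

    ```python
    import numpy as np

    def regularized_variance(x: np.ndarray, bg_var: float, nu0: float = 10.0) -> float:
        """Blend empirical and background variance for one probe/condition.

        x      -- replicate measurements for the probe
        bg_var -- background variance pooled from the probe's neighborhood
        nu0    -- pseudo-replicates credited to the prior (assumed default)
        """
        n = len(x)
        s2 = x.var(ddof=1) if n > 1 else 0.0
        # Few replicates -> the prior dominates; many replicates -> the data do.
        return (nu0 * bg_var + (n - 1) * s2) / (nu0 + n - 2)

    x = np.array([7.1, 7.4, 6.9])               # three replicates (dummy data)
    print(regularized_variance(x, bg_var=0.2))  # ~0.19
    ```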

  14. An audit of the nature and impact of clinical coding subjectivity variability and error in otolaryngology.

    PubMed

    Nouraei, S A R; Hudovsky, A; Virk, J S; Chatrath, P; Sandhu, G S

    2013-12-01

    To audit the accuracy of clinical coding in otolaryngology, assess the effectiveness of previously implemented interventions, and determine ways in which it can be further improved. Prospective clinician-auditor multidisciplinary audit of clinical coding accuracy. Elective and emergency ENT admissions and day-case activity. Concordance between initial coding and the clinician-auditor multidisciplinary team (MDT) coding in respect of primary and secondary diagnoses and procedures, healthcare resource groupings (HRGs) and tariffs. The audit of 3131 randomly selected otolaryngology patients between 2010 and 2012 resulted in 420 changes to the primary diagnosis (13%) and 417 changes to the primary procedure (13%). In 1420 cases (44%), there was at least one change to the initial coding, and 514 HRGs (16%) changed. There was an income variance of £343,169, or £109.46 per patient. The highest rates of HRG change were observed in head and neck surgery (in particular, skull-base surgery), in laryngology (within that, tracheostomy), and in emergency admissions (especially epistaxis management). A randomly selected sample of 235 patients from the audit was subjected to a second audit by a second clinician-auditor MDT. There were 12 further HRG changes (5%), and at least one further coding change occurred in 57 patients (24%). These changes were significantly lower than those observed in the pre-audit sample, but were also significantly greater than zero. Asking surgeons to 'code in theatre' and applying these codes to activity without further quality assurance resulted in an HRG error rate of 45%. The full audit sample was regrouped under HRG 3.5 and compared with a previous audit of 1250 patients performed between 2007 and 2008. This comparison showed a reduction in the baseline rate of HRG change from 16% during the first audit cycle to 9% in the current audit cycle (P < 0.001). Otolaryngology coding is complex and susceptible to subjectivity, variability and error. Coding variability can be reduced, but not eliminated, through regular education supported by an audit programme.

  15. Team communications in the operating room: talk patterns, sites of tension, and implications for novices.

    PubMed

    Lingard, Lorelei; Reznick, Richard; Espin, Sherry; Regehr, Glenn; DeVito, Isabella

    2002-03-01

    Although the communication that occurs within health care teams is important to both team function and the socialization of novices, the nature of team communication and its educational influence are not well documented. This study explored the nature of communications among operating room (OR) team members from surgery, nursing, and anesthesia to identify common communicative patterns, sites of tension, and their impact on novices. Paired researchers observed 128 hours of OR interactions during 35 procedures from four surgical divisions at one teaching hospital. Brief, unstructured interviews were conducted following each observation. Field notes were independently read by each researcher and coded for emergent themes in the grounded theory tradition. Coding consensus was achieved via regular discussion. Findings were returned to insider "experts" for their assessment of authenticity and adequacy. Patterns of communication were complex and socially motivated. Dominant themes were time, safety and sterility, resources, roles, and situation. Communicative tension arose regularly in relation to these themes. Each procedure had one to four "higher-tension" events, which often had a ripple effect, spreading tension to other participants and contexts. Surgical trainees responded to tension by withdrawing from the communication or mimicking the senior staff surgeon. Both responses had negative implications for their own team relations. Team communications in the OR follow observable patterns and are influenced by recurrent themes that suggest sites of team tension. Tension in team communication affects novices, who respond with behaviors that may intensify rather than resolve interprofessional conflict.

  16. Higher-order Fourier analysis over finite fields and applications

    NASA Astrophysics Data System (ADS)

    Hatami, Pooya

    Higher-order Fourier analysis is a powerful tool in the study of problems in additive and extremal combinatorics, for instance the study of arithmetic progressions in primes, where traditional Fourier analysis falls short. In recent years, higher-order Fourier analysis has found multiple applications in computer science in fields such as property testing and coding theory. In this thesis, we develop new tools within this theory with several new applications, such as a characterization theorem in algebraic property testing. One of our main contributions is a strong near-equidistribution result for regular collections of polynomials. The densities of small linear structures in subsets of Abelian groups can be expressed as certain analytic averages involving linear forms. Higher-order Fourier analysis examines such averages by approximating the indicator function of a subset by a function of a bounded number of polynomials. Then, to approximate the average, it suffices to know the joint distribution of the polynomials applied to the linear forms. We prove a near-equidistribution theorem that describes these distributions for the group F_p^n when p is a fixed prime. This fundamental fact was previously known only under various extra assumptions about the linear forms or the field size. We use this near-equidistribution theorem to settle a conjecture of Gowers and Wolf on the true complexity of systems of linear forms. Our next application is towards a characterization of testable algebraic properties. We prove that every locally characterized affine-invariant property of functions f : F_p^n → R, with n ∈ N, is testable. In fact, we prove that any such property P is proximity-obliviously testable. More generally, we show that any affine-invariant property that is closed under subspace restrictions and has "bounded complexity" is testable. We also prove that any property that can be described as the property of decomposing into a known structure of low-degree polynomials is locally characterized and is, hence, testable. We discuss several notions of regularity which allow us to deduce algorithmic versions of various regularity lemmas for polynomials by Green and Tao and by Kaufman and Lovett. We show that our algorithmic regularity lemmas for polynomials imply algorithmic versions of several results relying on regularity, such as decoding Reed-Muller codes beyond the list decoding radius (for certain structured errors), and prescribed polynomial decompositions. Finally, motivated by the definition of Gowers norms, we investigate norms defined by different systems of linear forms. We give necessary conditions on the structure of systems of linear forms that define norms. We prove that such norms can be one of only two types, and, assuming that the field size is sufficiently large, they are essentially equivalent to either a Gowers norm or an L_p norm.
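
    For reference, the Gowers uniformity norm U^k mentioned above has the standard definition below, for f : F_p^n → C, where C^{|S|} denotes |S|-fold complex conjugation:

    ```latex
    \[
      \|f\|_{U^k}^{2^k}
        = \mathop{\mathbb{E}}_{x,\,h_1,\dots,h_k \in \mathbb{F}_p^n}
          \;\prod_{S \subseteq \{1,\dots,k\}}
          \mathcal{C}^{\,|S|} f\Bigl(x + \sum_{i \in S} h_i\Bigr).
    \]
    ```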

  17. Differential coding of conspecific vocalizations in the ventral auditory cortical stream.

    PubMed

    Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B

    2014-03-26

    The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate coding of vocal stimuli in different subfields in macaque auditory cortex. We simultaneously measured auditory evoked potentials over a large swath of primary and higher-order auditory cortex along the supratemporal plane, chronically in three animals, using high-density microelectrocorticographic arrays. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to evoked potentials to conspecific vocalizations. We found a gradual decrease in the level of overall classification performance along the caudal to rostral axis. Furthermore, the performance in the caudal sectors was similar across individual stimuli, whereas the performance in the rostral sectors significantly differed for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, the classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in neural coding of conspecific vocalizations along the ventral auditory pathway.
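
    As a sketch of what a regularized multivariate classifier of this kind can look like, the snippet below fits a ridge-penalized linear classifier on trial-by-feature data. The shapes, penalty value, and labels are illustrative assumptions, not the authors' exact method.

    ```python
    import numpy as np

    def fit_ridge_classifier(X: np.ndarray, y: np.ndarray, lam: float = 1.0):
        """X: (trials, features) evoked potentials; y: integer stimulus labels."""
        n, d = X.shape
        classes = np.unique(y)
        Y = (y[:, None] == classes[None, :]).astype(float)  # one-hot targets
        # The ridge term keeps the d x d Gram matrix invertible when d >> n.
        W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
        return W, classes

    def predict(X, W, classes):
        return classes[np.argmax(X @ W, axis=1)]

    rng = np.random.default_rng(0)
    X = rng.standard_normal((60, 200))  # 60 trials x 200 channel-time features
    y = rng.integers(0, 4, size=60)     # 4 vocalization stimuli (dummy labels)
    W, classes = fit_ridge_classifier(X, y)
    print((predict(X, W, classes) == y).mean())  # training accuracy
    ```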

  18. Differential Coding of Conspecific Vocalizations in the Ventral Auditory Cortical Stream

    PubMed Central

    Saunders, Richard C.; Leopold, David A.; Mishkin, Mortimer; Averbeck, Bruno B.

    2014-01-01

    The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate coding of vocal stimuli in different subfields in macaque auditory cortex. We simultaneously measured auditory evoked potentials over a large swath of primary and higher-order auditory cortex along the supratemporal plane, chronically in three animals, using high-density microelectrocorticographic arrays. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to evoked potentials to conspecific vocalizations. We found a gradual decrease in the level of overall classification performance along the caudal to rostral axis. Furthermore, the performance in the caudal sectors was similar across individual stimuli, whereas the performance in the rostral sectors significantly differed for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, the classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in neural coding of conspecific vocalizations along the ventral auditory pathway. PMID:24672012

  19. Fostering Team Awareness in Earth System Modeling Communities

    NASA Astrophysics Data System (ADS)

    Easterbrook, S. M.; Lawson, A.; Strong, S.

    2009-12-01

    Existing Global Climate Models are typically managed and controlled at a single site, with varied levels of participation by scientists outside the core lab. As these models evolve to encompass a wider set of earth systems, this central control of the modeling effort becomes a bottleneck. But such models cannot evolve to become fully distributed open source projects unless they address the imbalance in the availability of communication channels: scientists at the core site have access to regular face-to-face communication with one another, while those at remote sites have access to only a subset of these conversations, e.g. formally scheduled teleconferences and user meetings. Because of this imbalance, critical decision making can be hidden from many participants, their code contributions can interact in unanticipated ways, and the community loses awareness of who knows what. We have documented some of these problems in a field study at one climate modeling centre, and have started to develop tools to overcome them. We report on one such tool, TracSNAP, which analyzes the social network of the scientists contributing code to the model by mining the data in an existing project code repository. The tool presents the results of this analysis to modelers and model users in several ways: recommendations of who has expertise on particular code modules, suggestions of code sections that are related to files being worked on, and visualizations of team communication patterns. The tool is currently available as a plugin for the Trac bug tracking system.
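
    TracSNAP's own sources are not shown here, but the sketch below illustrates the kind of repository mining involved: it builds a file co-modification network from "git log" output, whose strongest couplings suggest which contributors should coordinate. It assumes it is run inside a clone of the repository; the output-format flags are standard git.

    ```python
    import subprocess
    from collections import Counter
    from itertools import combinations

    # One "@author" marker per commit, followed by the files it touched.
    log = subprocess.run(
        ["git", "log", "--pretty=format:@%an", "--name-only"],
        capture_output=True, text=True, check=True,
    ).stdout

    edges = Counter()
    files: set[str] = set()
    for line in log.splitlines():
        if line.startswith("@"):          # new commit: flush previous file set
            edges.update(combinations(sorted(files), 2))
            files.clear()
        elif line.strip():
            files.add(line.strip())
    edges.update(combinations(sorted(files), 2))

    # Files that change together most often hint at hidden module coupling.
    for (a, b), n in edges.most_common(5):
        print(f"{n:4d}  {a} <-> {b}")
    ```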

  20. Violations of the International Code of Marketing of Breast-milk Substitutes: Indonesia context.

    PubMed

    Hidayana, Irma; Februhartanty, Judhiastuty; Parady, Vida A

    2017-01-01

    To measure compliance with the International Code of Marketing of Breast-milk Substitutes ('the Code') in Indonesia. The study was a cross-sectional survey using the Interagency Group on Breastfeeding Monitoring protocol. Public and private health facilities in six provinces on Java island in Indonesia. A total of 874 women (382 pregnant women and 492 breast-feeding mothers of infants below 6 months) and seventy-seven health workers were recruited from eighteen participating health facilities. The study also analysed a total of forty-four labels of breast-milk substitute products, twenty-seven television commercials for growing-up milk (for children >12 months) of nine brands and thirty-four print advertisements of fourteen brands. The study found that 20% of the women had received advice and information on the use of breast-milk substitutes and 72% had seen promotional materials for breast-milk substitutes. About 15% reported receiving free samples and 16% received gifts. Nearly a quarter of the health workers confirmed receiving visits from representatives of breast-milk substitute companies. Two health workers reported having received gifts from the companies. The most common labelling violations found were statements or visuals that discouraged breast-feeding and the absence of any mention that local climatic conditions were considered in setting the expiration date. Violations of the Code by health workers, breast-milk substitute companies and their representatives were found in all provinces studied. A regular monitoring system should be in place to ensure improved compliance with and enforcement of the Code.

  1. Assessment of polarization effect on aerosol retrievals from MODIS

    NASA Astrophysics Data System (ADS)

    Korkin, S.; Lyapustin, A.

    2010-12-01

    Light polarization affects the total intensity of scattered radiation. In this work, we compare aerosol retrievals performed by the MAIAC code [1] with and without taking polarization into account. The MAIAC retrievals are based on look-up tables (LUTs). For this work, MAIAC was run using two different LUTs, the first generated using the scalar code SHARM [2] and the second generated with the vector code Modified Vector Discrete Ordinates Method (MVDOM). MVDOM is a new code suitable for computations with highly anisotropic phase functions, including cirrus clouds and snow [3]. To this end, the solution of the vector radiative transfer equation (VRTE) is represented as a sum of anisotropic and regular components. The anisotropic component is evaluated in the Small Angle Modification of the Spherical Harmonics Method (MSH) [4]. The MSH is formulated in the frame of reference of the solar beam, where the z-axis lies along the solar beam direction. In this case, the MSH solution for the anisotropic part is nearly symmetric in azimuth and is computed analytically. In the scalar case, this solution coincides with the Goudsmit-Saunderson small-angle approximation [5]. To correct for the analytical separation of the anisotropic part of the signal, the transfer equation for the regular part contains a correction source function term [6]. Several examples of polarization impact on aerosol retrievals over different surface types will be presented. 1. Lyapustin A., Wang Y., Laszlo I., Kahn R., Korkin S., Remer L., Levy R., and Reid J. S. Multi-Angle Implementation of Atmospheric Correction (MAIAC): Part 2. Aerosol Algorithm. J. Geophys. Res., submitted (2010). 2. Lyapustin A., Muldashev T., Wang Y. Code SHARM: fast and accurate radiative transfer over spatially variable anisotropic surfaces. In: Light Scattering Reviews 5. Chichester: Springer, 205-247 (2010). 3. Budak V.P., Korkin S.V. On the solution of a vectorial radiative transfer equation in an arbitrary three-dimensional turbid medium with anisotropic scattering. JQSRT, 109, 220-234 (2008). 4. Budak V.P., Sarmin S.E. Solution of radiative transfer equation by the method of spherical harmonics in the small angle modification. Atmospheric and Oceanic Optics, 3, 898-903 (1990). 5. Goudsmit S., Saunderson J.L. Multiple scattering of electrons. Phys. Rev., 57, 24-29 (1940). 6. Budak V.P., Klyuykov D.A., Korkin S.V. Convergence acceleration of radiative transfer equation solution at strongly anisotropic scattering. In: Light Scattering Reviews 5. Chichester: Springer, 147-204 (2010).

  2. NASA Tech Briefs, October 2008

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Topics covered include: Control Architecture for Robotic Agent Command and Sensing; Algorithm for Wavefront Sensing Using an Extended Scene; CO2 Sensors Based on Nanocrystalline SnO2 Doped with CuO; Improved Airborne System for Sensing Wildfires; VHF Wide-Band, Dual-Polarization Microstrip-Patch Antenna; Onboard Data Processor for Change-Detection Radar Imaging; Using LDPC Code Constraints to Aid Recovery of Symbol Timing; System for Measuring Flexing of a Large Spaceborne Structure; Integrated Formation Optical Communication and Estimation System; Making Superconducting Welds between Superconducting Wires; Method for Thermal Spraying of Coatings Using Resonant-Pulsed Combustion; Coating Reduces Ice Adhesion; Hybrid Multifoil Aerogel Thermal Insulation; SHINE Virtual Machine Model for In-flight Updates of Critical Mission Software; Mars Image Collection Mosaic Builder; Providing Internet Access to High-Resolution Mars Images; Providing Internet Access to High-Resolution Lunar Images; Expressions Module for the Satellite Orbit Analysis Program Virtual Satellite; Small-Body Extensions for the Satellite Orbit Analysis Program (SOAP); Scripting Module for the Satellite Orbit Analysis Program (SOAP); XML-Based SHINE Knowledge Base Interchange Language; Core Technical Capability Laboratory Management System; MRO SOW Daily Script; Tool for Inspecting Alignment of Twinaxial Connectors; An ATP System for Deep-Space Optical Communication; Polar Traverse Rover Instrument; Expert System Control of Plant Growth in an Enclosed Space; Detecting Phycocyanin-Pigmented Microbes in Reflected Light; DMAC and NMP as Electrolyte Additives for Li-Ion Cells; Mass Spectrometer Containing Multiple Fixed Collectors; Waveguide Harmonic Generator for the SIM; Whispering Gallery Mode Resonator with Orthogonally Reconfigurable Filter Function; Stable Calibration of Raman Lidar Water-Vapor Measurements; Bimaterial Thermal Compensators for WGM Resonators; Root Source Analysis/ValuStream[Trade Mark] - A Methodology for Identifying and Managing Risks; Ensemble: an Architecture for Mission-Operations Software; Object Recognition Using Feature-and Color-Based Methods; On-Orbit Multi-Field Wavefront Control with a Kalman Filter; and The Interplanetary Overlay Networking Protocol Accelerator.

  3. NASA Tech Briefs, December 2009

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Topics include: A Deep Space Network Portable Radio Science Receiver; Detecting Phase Boundaries in Hard-Sphere Suspensions; Low-Complexity Lossless and Near-Lossless Data Compression Technique for Multispectral Imagery; Very-Long-Distance Remote Hearing and Vibrometry; Using GPS to Detect Imminent Tsunamis; Stream Flow Prediction by Remote Sensing and Genetic Programming; Pilotless Frame Synchronization Using LDPC Code Constraints; Radiometer on a Chip; Measuring Luminescence Lifetime With Help of a DSP; Modulation Based on Probability Density Functions; Ku Telemetry Modulator for Suborbital Vehicles; Photonic Links for High-Performance Arraying of Antennas; Reconfigurable, Bi-Directional Flexfet Level Shifter for Low-Power, Rad-Hard Integration; Hardware-Efficient Monitoring of I/O Signals; Video System for Viewing From a Remote or Windowless Cockpit; Spacesuit Data Display and Management System; IEEE 1394 Hub With Fault Containment; Compact, Miniature MMIC Receiver Modules for an MMIC Array Spectrograph; Waveguide Transition for Submillimeter-Wave MMICs; Magnetic-Field-Tunable Superconducting Rectifier; Bonded Invar Clip Removal Using Foil Heaters; Fabricating Radial Groove Gratings Using Projection Photolithography; Gratings Fabricated on Flat Surfaces and Reproduced on Non-Flat Substrates; Method for Measuring the Volume-Scattering Function of Water; Method of Heating a Foam-Based Catalyst Bed; Small Deflection Energy Analyzer for Energy and Angular Distributions; Polymeric Bladder for Storing Liquid Oxygen; Pyrotechnic Simulator/Stray-Voltage Detector; Inventions Utilizing Microfluidics and Colloidal Particles; RuO2 Thermometer for Ultra-Low Temperatures; Ultra-Compact, High-Resolution LADAR System for 3D Imaging; Dual-Channel Multi-Purpose Telescope; Objective Lens Optimized for Wavefront Delivery, Pupil Imaging, and Pupil Ghosting; CMOS Camera Array With Onboard Memory; Quickly Approximating the Distance Between Two Objects; Processing Images of Craters for Spacecraft Navigation; Adaptive Morphological Feature-Based Object Classifier for a Color Imaging System; Rover Slip Validation and Prediction Algorithm; Safety and Quality Training Simulator; Supply-Chain Optimization Template; Algorithm for Computing Particle/Surface Interactions; Cryogenic Pupil Alignment Test Architecture for Aberrated Pupil Images; and Thermal Transport Model for Heat Sink Design.

  4. Corporations now included under Section 189

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arlinghaus, B.P.; Anderson, D.T.

    1983-12-01

    This article examines some of the issues, including the "real property" question, that corporations may encounter in implementing the provisions of Code Section 189 and its regulations. The extension of Section 189 to regular corporations represents a significant change in congressional intent, since it was originally enacted as a reform measure and is now primarily a provision to raise revenue at a time when Congress is facing a large deficit. Code Section 189 was conceived and enacted in haste, however, and this expansion will undoubtedly have an adverse impact on capital investment at a time when stimulation is needed for the economy as a whole. The workload of the courts and the Internal Revenue Service will certainly increase. Careful drafting of the regulations could anticipate potential issues and clarify them in the drafting stage. The meanings of real property, capitalization rate, capitalization period, and self-constructed assets all need to be carefully addressed. 18 references.

  5. Flexible Automatic Discretization for Finite Differences: Eliminating the Human Factor

    NASA Astrophysics Data System (ADS)

    Pranger, Casper

    2017-04-01

    In the geophysical numerical modelling community, finite differences are (in part due to their small footprint) a popular spatial discretization method for PDEs in the regularly shaped continuum that is the earth. However, they rapidly become prone to programming mistakes as the physics increases in complexity. To eliminate opportunities for human error, we have designed an automatic discretization algorithm using Wolfram Mathematica, in which the user supplies symbolic PDEs, the number of spatial dimensions, and a choice of symbolic boundary conditions, and the script transforms this information into matrix and right-hand-side rules ready for use in a C++ code that will accept them. The symbolic PDEs are further used to automatically develop and perform manufactured-solution benchmarks, ensuring physical fidelity at all stages while providing pragmatic targets for numerical accuracy. We find that this procedure greatly accelerates code development and provides a great deal of flexibility in one's choice of physics.
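
    A minimal sketch of the same idea in Python/SymPy rather than Mathematica: derive finite-difference weights symbolically, so the discretization is generated from the operator instead of hand-coded. The stencil and derivative order are illustrative choices.

    ```python
    import sympy as sp

    h = sp.symbols("h", positive=True)
    # Weights for d^2/dx^2 on the stencil {x - h, x, x + h} about x0 = 0;
    # finite_diff_weights(...)[m][-1] holds the m-th-derivative weights
    # using the full point set.
    weights = sp.finite_diff_weights(2, [-h, 0, h], 0)[2][-1]
    print(weights)  # [h**(-2), -2/h**2, h**(-2)]
    ```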

  6. New technologies accelerate the exploration of non-coding RNAs in horticultural plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Degao; Mewalal, Ritesh; Hu, Rongbin

    Non-coding RNAs (ncRNAs), that is, RNAs not translated into proteins, are crucial regulators of a variety of biological processes in plants. While protein-encoding genes have been relatively well-annotated in sequenced genomes, accounting for a small portion of the genome space in plants, the universe of plant ncRNAs is rapidly expanding. Recent advances in experimental and computational technologies have generated a great momentum for discovery and functional characterization of ncRNAs. Here we summarize the classification and known biological functions of plant ncRNAs, review the application of next-generation sequencing (NGS) technology and ribosome profiling technology to ncRNA discovery in horticultural plants and discuss the application of new technologies, especially the new genome-editing tool clustered regularly interspaced short palindromic repeat (CRISPR)/CRISPR-associated protein 9 (Cas9) systems, to functional characterization of plant ncRNAs.

  7. Casemix Funding Optimisation: Working Together to Make the Most of Every Episode.

    PubMed

    Uzkuraitis, Carly; Hastings, Karen; Torney, Belinda

    2010-10-01

    Eastern Health, a large public Victorian healthcare network, conducted a WIES optimisation audit across its casemix-funded sites for separations in the 2009/2010 financial year. The audit was conducted using existing staff resources and resulted in a significant increase in casemix funding at minimal cost. It showcased the skill set of existing staff and brought substantial benefits to the coding and casemix team by demonstrating the combination of skills that makes clinical coders unique. The development of an internal web-based application allowed accurate and timely reporting of the audit results, providing the basis for a restructure of the coding and casemix service, along with approval for additional staffing resources and the inclusion of a regular auditing program focused on creating high-quality data for research, health services management and financial reimbursement.

  8. Tactile communication, cooperation, and performance: an ethological study of the NBA.

    PubMed

    Kraus, Michael W; Huang, Cassey; Keltner, Dacher

    2010-10-01

    Tactile communication, or physical touch, promotes cooperation between people, communicates distinct emotions, soothes in times of stress, and is used to make inferences of warmth and trust. Based on this conceptual analysis, we predicted that in group competition, physical touch would predict increases in both individual and group performance. In an ethological study, we coded the touch behavior of players from the National Basketball Association (NBA) during the 2008-2009 regular season. Consistent with hypotheses, early season touch predicted greater performance for individuals as well as teams later in the season. Additional analyses confirmed that touch predicted improved performance even after accounting for player status, preseason expectations, and early season performance. Moreover, coded cooperative behaviors between teammates explained the association between touch and team performance. Discussion focused on the contributions touch makes to cooperative groups and the potential implications for other group settings.

  9. New technologies accelerate the exploration of non-coding RNAs in horticultural plants

    PubMed Central

    Liu, Degao; Mewalal, Ritesh; Hu, Rongbin; Tuskan, Gerald A; Yang, Xiaohan

    2017-01-01

    Non-coding RNAs (ncRNAs), that is, RNAs not translated into proteins, are crucial regulators of a variety of biological processes in plants. While protein-encoding genes have been relatively well-annotated in sequenced genomes, accounting for a small portion of the genome space in plants, the universe of plant ncRNAs is rapidly expanding. Recent advances in experimental and computational technologies have generated a great momentum for discovery and functional characterization of ncRNAs. Here we summarize the classification and known biological functions of plant ncRNAs, review the application of next-generation sequencing (NGS) technology and ribosome profiling technology to ncRNA discovery in horticultural plants and discuss the application of new technologies, especially the new genome-editing tool clustered regularly interspaced short palindromic repeat (CRISPR)/CRISPR-associated protein 9 (Cas9) systems, to functional characterization of plant ncRNAs. PMID:28698797

  10. Sixteen years of ICPC use in Norwegian primary care: looking through the facts

    PubMed Central

    2010-01-01

    Background The International Classification for Primary Care (ICPC) standard aims to facilitate simultaneous and longitudinal comparisons of clinical primary care practice within and across country borders; it is also used for administrative purposes. This study evaluates the use of the original ICPC-1 and the more complete ICPC-2 Norwegian versions in electronic patient records. Methods We performed a retrospective study of approximately 1.5 million ICPC codes and diagnoses that were collected over a 16-year period at 12 primary care sites in Norway. In the first phase of this period (transition phase, 1992-1999) physicians were allowed not to use an ICPC code in their practice, while in the second phase (regular phase, 2000-2008) the use of an ICPC code was mandatory. The ICPC codes and diagnoses defined a problem event for each patient in the PROblem-oriented electronic MEDical record (PROMED). The main outcome measure of our analysis was the percentage of problem events in PROMEDs with inappropriate (or missing) ICPC codes and of diagnoses that did not map to the latest ICPC-2 classification. Specific problem areas (pneumonia, anaemia, tonsillitis and diabetes) were examined in the same context. Results Codes were missing in 6.2% of the problem events, incorrect codes were observed in 4.0% of the problem events, and a text mismatch between the diagnoses and the expected ICPC-2 diagnosis text occurred in 53.8% of the problem events. Missing codes were observed only during the transition phase, while incorrect and inappropriate codes were used throughout the 16-year period. The physicians created diagnoses that did not exist in ICPC. These 'new' diagnoses were used with varying frequency; many of them were used only once. Inappropriate ICPC-2 codes were also observed in the selected problem areas and in both phases. Conclusions Our results strongly suggest that physicians did not adhere to the ICPC standard due to its incompleteness, i.e., its lack of many clinically important diagnoses. This indicates that ICPC is inappropriate for the classification of problem events and the clinical practice in primary care. PMID:20181271

  11. Differential Activation of Fast-Spiking and Regular-Firing Neuron Populations During Movement and Reward in the Dorsal Medial Frontal Cortex

    PubMed Central

    Insel, Nathan; Barnes, Carol A.

    2015-01-01

    The medial prefrontal cortex is thought to be important for guiding behavior according to an animal's expectations. Efforts to decode the region have focused not only on the question of what information it computes, but also on how distinct circuit components become engaged during behavior. We find that the activity of regular-firing, putative projection neurons contains rich information about behavioral context, and their firing fields cluster around reward sites, while activity among putative inhibitory and fast-spiking neurons is most associated with movement and accompanying sensory stimulation. These dissociations were observed even between adjacent neurons with apparently reciprocal, inhibitory–excitatory connections. A smaller population of projection neurons with burst-firing patterns did not show clustered firing fields around rewards; these neurons, although heterogeneous, were generally less selective for behavioral context than regular-firing cells. The data suggest a network that tracks an animal's behavioral situation while, at the same time, regulating excitation levels to emphasize high-valued positions. In this scenario, the function of fast-spiking inhibitory neurons is to constrain network output relative to incoming sensory flow. This scheme could serve as a bridge between abstract sensorimotor information and single-dimensional codes for value, providing a neural framework to generate expectations from behavioral state. PMID:24700585

  12. Regulatory sequence analysis tools.

    PubMed

    van Helden, Jacques

    2003-07-01

    The web resource Regulatory Sequence Analysis Tools (RSAT) (http://rsat.ulb.ac.be/rsat) offers a collection of software tools dedicated to the prediction of regulatory sites in non-coding DNA sequences. These tools include sequence retrieval, pattern discovery, pattern matching, genome-scale pattern matching, feature-map drawing, random sequence generation and other utilities. Alternative formats are supported for the representation of regulatory motifs (strings or position-specific scoring matrices) and several algorithms are proposed for pattern discovery. RSAT currently holds >100 fully sequenced genomes and these data are regularly updated from GenBank.
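
    As an illustration of the pattern-matching primitive behind such tools, the sketch below scores a sequence against a toy position-specific scoring matrix (PSSM). The motif counts, pseudocount, and uniform background are made up; RSAT's own scoring details may differ.

    ```python
    import math

    counts = {  # toy 3-column motif: counts per base at each position
        "A": [8, 1, 0], "C": [1, 0, 9], "G": [0, 8, 1], "T": [1, 1, 0],
    }
    bg, pseudo = 0.25, 1.0
    total = 10 + 4 * pseudo   # per-column count total plus pseudocounts

    pssm = {b: [math.log2(((c + pseudo) / total) / bg) for c in col]
            for b, col in counts.items()}

    def scan(seq: str, width: int = 3):
        """Yield (position, log-odds score) for every window of the sequence."""
        for i in range(len(seq) - width + 1):
            yield i, sum(pssm[seq[i + j]][j] for j in range(width))

    for pos, score in scan("TTAGCACGC"):
        print(pos, round(score, 2))
    ```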

  13. General Reevaluation and Supplement to Environmental Impact Statement for Flood Control and Related Purposes. Red and Red Lake Rivers at East Grand Forks, Minnesota.

    DTIC Science & Technology

    1984-11-01

    ... participate in the project. The city has also entered the regular phase of the National Flood Insurance Program, adopted 23 September 1977. The State of ... releases to possible sites outside the area of city control/responsibility during periods of low flow. ... The Red Lake Watershed District has a current program ...

  14. The Clemson University, University Research Initiative Program in Discrete Mathematics and Computational Analysis

    DTIC Science & Technology

    1990-03-01

    Assmus, E. F., and J. D. Key, "Affine and projective planes", to appear in Discrete Math. (Special Coding Theory Issue). 5. Assmus, E. F., and J. D. Key, ... S. Locke, "The subchromatic number of a graph", Discrete Math. 74 (1989) 33-49. 24. Hedetniemi, S. T., and T. V. Wimer, "K-terminal recursive families", ... "Designs and geometries with Cayley", submitted to Journal of Symbolic Computation. 34. Key, J. D., "Regular sets in geometries", Annals of Discrete Math. 37 ...

  15. XMM-Newton Mobile Web Application

    NASA Astrophysics Data System (ADS)

    Ibarra, A.; Kennedy, M.; Rodríguez, P.; Hernández, C.; Saxton, R.; Gabriel, C.

    2013-10-01

    We present the first XMM-Newton mobile web application, coded using new web technologies such as HTML5, the jQuery Mobile framework, and the D3 data-driven JavaScript library. This new mobile web application focuses on re-formatted contents extracted directly from the XMM-Newton web pages, optimizing the contents for mobile devices. The main goals of this development were to reach all kinds of handheld devices and operating systems while minimizing software maintenance. The application has therefore been developed as a mobile web implementation rather than a more costly native application. New functionality will be added regularly.

  16. A pipeline design of a fast prime factor DFT on a finite field

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Hsu, In-Shek; Shao, H. M.; Reed, Irving S.; Shyu, Hsuen-Chyun

    1988-01-01

    A conventional prime factor discrete Fourier transform (DFT) algorithm is used to realize a discrete Fourier-like transform over the finite field GF(q^n). This algorithm is developed to compute cyclic convolutions of complex numbers and to decode Reed-Solomon codes. Such a pipeline fast prime factor DFT algorithm over GF(q^n) is regular, simple, expandable, and naturally suitable for VLSI implementation. An example illustrating the pipeline aspect of a 30-point transform over GF(q^n) is presented.
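
    A DFT-like transform over a finite field computes cyclic convolutions exactly, which is the property exploited here. The sketch below demonstrates this with a number-theoretic transform over a prime field; the parameters (p = 257, N = 16) are chosen for the example, not taken from the paper.

    ```python
    P = 257                      # prime with N | P - 1
    N = 16
    G = pow(3, (P - 1) // N, P)  # N-th root of unity (3 is a primitive root mod 257)

    def ntt(a, root):
        return [sum(a[j] * pow(root, i * j, P) for j in range(N)) % P
                for i in range(N)]

    def intt(A):
        a = ntt(A, pow(G, P - 2, P))      # inverse transform uses G^(-1)
        inv_n = pow(N, P - 2, P)
        return [(x * inv_n) % P for x in a]

    a = [1, 2, 3] + [0] * 13              # 1 + 2x + 3x^2
    b = [4, 5] + [0] * 14                 # 4 + 5x
    conv = intt([x * y % P for x, y in zip(ntt(a, G), ntt(b, G))])
    print(conv[:5])                       # [4, 13, 22, 15, 0]
    ```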

  17. Auditory temporal preparation induced by rhythmic cues during concurrent auditory working memory tasks.

    PubMed

    Cutanda, Diana; Correa, Ángel; Sanabria, Daniel

    2015-06-01

    The present study investigated whether participants can develop temporal preparation driven by auditory isochronous rhythms when concurrently performing an auditory working memory (WM) task. In Experiment 1, participants had to respond to an auditory target presented after a regular or an irregular sequence of auditory stimuli while concurrently performing a Sternberg-type WM task. Results showed that participants responded faster after regular compared with irregular rhythms and that this effect was not affected by WM load; however, the lack of a significant main effect of WM load made it difficult to draw any conclusion regarding the influence of the dual-task manipulation in Experiment 1. In order to enhance dual-task interference, Experiment 2 combined the auditory rhythm procedure with an auditory N-Back task, which required WM updating (monitoring and coding of the information) and was presumably more demanding than the mere rehearsal of the WM task used in Experiment 1. Results now clearly showed dual-task interference effects (slower reaction times [RTs] in the high- vs. the low-load condition). However, such interference did not affect temporal preparation induced by rhythms, with faster RTs after regular than after irregular sequences in the high-load and low-load conditions. These results revealed that secondary tasks demanding memory updating, relative to tasks just demanding rehearsal, produced larger interference effects on overall RTs in the auditory rhythm task. Nevertheless, rhythm regularity exerted a strong temporal preparation effect that survived the interference of the WM task even when both tasks competed for processing resources within the auditory modality.

  18. In-plane crashworthiness of bio-inspired hierarchical honeycombs

    DOE PAGES

    Yin, Hanfeng; Huang, Xiaofei; Scarpa, Fabrizio; ...

    2018-03-13

    Biological tissues like bone, wood, and sponge possess hierarchical cellular topologies, which are lightweight and feature an excellent energy absorption capability. Here we present a system of bio-inspired hierarchical honeycomb structures based on hexagonal, Kagome, and triangular tessellations. The hierarchical designs and a reference regular honeycomb configuration are subjected to simulated in-plane impact using the nonlinear finite element code LS-DYNA. The numerical simulation results show that the triangular hierarchical honeycomb provides the best performance of the three hierarchical honeycombs, absorbing more than twice the energy of the regular honeycomb under similar loading conditions. We also propose a parametric study correlating the microstructure parameters (hierarchical length ratio r and number of sub-cells N) to the energy absorption capacity of these hierarchical honeycombs. The triangular hierarchical honeycomb with N = 2 and r = 1/8 shows the highest energy absorption capacity among all the investigated cases, and this configuration could be employed as a benchmark for the design of future safety protective systems.

  19. In-plane crashworthiness of bio-inspired hierarchical honeycombs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yin, Hanfeng; Huang, Xiaofei; Scarpa, Fabrizio

    Biological tissues like bone, wood, and sponge possess hierarchical cellular topologies, which are lightweight and feature an excellent energy absorption capability. Here we present a system of bio-inspired hierarchical honeycomb structures based on hexagonal, Kagome, and triangular tessellations. The hierarchical designs and a reference regular honeycomb configuration are subjected to simulated in-plane impact using the nonlinear finite element code LS-DYNA. The numerical simulation results show that the triangular hierarchical honeycomb provides the best performance of the three hierarchical honeycombs, absorbing more than twice the energy of the regular honeycomb under similar loading conditions. We also propose a parametric study correlating the microstructure parameters (hierarchical length ratio r and number of sub-cells N) to the energy absorption capacity of these hierarchical honeycombs. The triangular hierarchical honeycomb with N = 2 and r = 1/8 shows the highest energy absorption capacity among all the investigated cases, and this configuration could be employed as a benchmark for the design of future safety protective systems.

  20. Cross-label Suppression: a Discriminative and Fast Dictionary Learning with Group Regularization.

    PubMed

    Wang, Xiudong; Gu, Yuantao

    2017-05-10

    This paper addresses image classification through efficiently learning a compact and discriminative dictionary. Given a structured dictionary with each atom (column of the dictionary matrix) related to some label, we propose a cross-label suppression constraint to enlarge the differences among representations for different classes. Meanwhile, we introduce group regularization to enforce representations to preserve the label properties of the original samples, meaning that representations for the same class are encouraged to be similar. Owing to the cross-label suppression, we do not resort to the frequently used ℓ0-norm or ℓ1-norm for coding, and obtain computational efficiency without losing discriminative power for categorization. Moreover, two simple classification schemes are developed to take full advantage of the learnt dictionary. Extensive experiments on six data sets covering face recognition, object categorization, scene classification, texture recognition and sport action categorization are conducted, and the results show that the proposed approach can outperform many recently presented dictionary algorithms in both recognition accuracy and computational efficiency.
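
    One plausible shape for such an objective, written in illustrative notation that is not necessarily the paper's exact formulation: Y_c collects the class-c training samples, D is the structured dictionary, X_c the corresponding codes, A_c selects the code rows of atoms not labeled c, and M_c is the mean code of class c.

    ```latex
    \[
      \min_{D,\,X}\ \sum_{c}\Bigl(
          \underbrace{\|Y_c - D X_c\|_F^2}_{\text{reconstruction}}
        + \lambda\,\underbrace{\|A_c X_c\|_F^2}_{\text{cross-label suppression}}
        + \mu\,\underbrace{\|X_c - M_c\|_F^2}_{\text{group regularization}}
      \Bigr)
    \]
    ```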

  1. Everyday listening questionnaire: correlation between subjective hearing and objective performance.

    PubMed

    Brendel, Martina; Frohne-Buechner, Carolin; Lesinski-Schiedat, Anke; Lenarz, Thomas; Buechner, Andreas

    2014-01-01

    Clinical experience has demonstrated that speech understanding by cochlear implant (CI) recipients has improved over recent years with the development of new technology. The Everyday Listening Questionnaire 2 (ELQ 2) was designed to collect information regarding the challenges faced by CI recipients in everyday listening. The aim of this study was to compare the self-assessments of CI users obtained with ELQ 2 against objective speech recognition measures, and to compare results between users of older and newer coding strategies. During their regular clinical review appointments, a group of representative adult CI recipients implanted with the Advanced Bionics implant system was asked to complete the questionnaire. The first 100 patients who agreed to participate in this survey were recruited, independently of processor generation and speech coding strategy. Correlations between subjectively scored hearing performance in everyday listening situations and objectively measured speech perception abilities were examined relative to the speech coding strategies used. When subjects were grouped by strategy, there were significant differences between users of older 'standard' strategies and users of the newer, currently available strategies (HiRes and HiRes 120), especially in the categories of telephone use and music perception. Significant correlations were found between certain subjective ratings and the objective speech perception data in noise. There is a good correlation between the subjective and objective data. Users of more recent speech coding strategies tend to have fewer problems in difficult hearing situations.

  2. Genomic structure of two ras family genes in the slime mold Physarum polycephalum.

    PubMed

    Trzcińska-Danielewicz, Joanna; Kozlowski, Piotr; Gierdal, Katarzyna; Wiejak, Jolanta; Jagielski, Adam; Toczko, Kazimierz; Fronk, Jan

    2002-08-01

    The genomic structure of two Physarum polycephalum ras family genes, Ppras2 and Pprap1, has been determined, including the upstream region of the latter. The genes are interrupted by three and four introns, respectively. The first intron of Ppras2 has the same location within the coding sequence as the first intron in another ras homolog from this organism, Ppras1 [Trzcińska-Danielewicz, J., Kozlowski, P., and Toczko, K. (1996). "Cloning and genomic sequence of the Physarum polycephalum Ppras1 gene, a homologue of the ras protooncogene", Gene 169, pp. 143-144]. All introns, ranging from 53 to ca. 460 base pairs, have the canonical 5' and 3' ends, are greatly enriched in pyrimidines in the coding strand, and have frequent pyrimidine-only tracts. These latter features seem to be responsible for the difficulties in cloning and sequencing parts of these genes. Short sequences shared with P. polycephalum transposon-like repeats are common in the introns, indicating a possible role of transposition in intron evolution. In all three ras family genes, phase-zero introns are located mostly between sequences coding for regular protein secondary-structure elements.

  3. Informed consent in human subject research: a comparison of current international and Nigerian guidelines.

    PubMed

    Fadare, Joseph O; Porteri, Corinna

    2010-03-01

    Informed consent is a basic requirement for the conduct of ethical research involving human subjects. Currently, the Helsinki Declaration of the World Medical Association and the International Ethical Guidelines for Biomedical Research of the Council for International Organizations of Medical Sciences (CIOMS) are widely accepted as international codes regulating human subject research and the informed consent sections of these documents are quite important. Debates on the applicability of these guidelines in different socio-cultural settings are ongoing and many workers have advocated the need for national or regional guidelines. Nigeria, a developing country, has recently adopted its national guideline regulating human subject research: the National Health Research Ethics Committee (NHREC) code. A content analysis of the three guidelines was done to see if the Nigerian guidelines confer any additional protection for research subjects. The concept of a Community Advisory Committee in the Nigerian guideline is a novel one that emphasizes research as a community burden and should promote a form of "research friendship" to foster the welfare of research participants. There is also the need for a regular update of the NHREC code so as to address some issues that were not considered in its current version.

  4. User input verification and test driven development in the NJOY21 nuclear data processing code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trainer, Amelia Jo; Conlin, Jeremy Lloyd; McCartney, Austin Paul

    Before physically meaningful data can be used in nuclear simulation codes, the data must be interpreted and manipulated by a nuclear data processing code so as to extract the relevant quantities (e.g., cross sections and angular distributions). Perhaps the most popular and widely trusted of these processing codes is NJOY, which has been developed and improved over the course of 10 major releases since its creation at Los Alamos National Laboratory in the mid-1970s. The current phase of NJOY development is the creation of NJOY21, which will be a vast improvement over its predecessor, NJOY2016. Designed to be fast, intuitive, accessible, and capable of handling both established and modern formats of nuclear data, NJOY21 will address many issues that NJOY users face, while remaining functional for those who prefer the existing format. Although early in its development, NJOY21 already provides validation checks of user input. By providing rapid and helpful responses to users while they write input files, NJOY21 will prove more intuitive and easier to use than any of its predecessors. Furthermore, during its development NJOY21 is subject to regular testing, such that its test coverage must strictly increase with the addition of any production code. This thorough testing will allow developers and NJOY users to establish confidence in NJOY21 as it gains functionality. This document discusses the current state of input checking and testing practices in NJOY21.
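
    In the test-driven spirit described above, a validation routine and its unit tests grow together. The sketch below shows the pattern in Python with pytest as a stand-in (NJOY21 itself is C++, and the card fields here are hypothetical).

    ```python
    import pytest

    def validate_card(card: dict) -> None:
        """Reject physically meaningless input before any processing runs."""
        if not 1 <= card.get("mat", 0) <= 9999:
            raise ValueError(f"MAT number out of range: {card.get('mat')}")
        if card.get("temperature", -1.0) < 0.0:
            raise ValueError("temperature must be non-negative")

    def test_rejects_bad_mat():
        with pytest.raises(ValueError, match="MAT number"):
            validate_card({"mat": 0, "temperature": 293.6})

    def test_accepts_valid_card():
        validate_card({"mat": 9228, "temperature": 293.6})  # 9228 = U-235 MAT
    ```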

  5. Motivators, barriers, and beliefs regarding physical activity in an older adult population.

    PubMed

    Costello, Ellen; Kafchinski, Marcia; Vrazel, JoEllen; Sullivan, Patricia

    2011-01-01

    Regular physical activity (PA) plays an important role in improving and maintaining one's health, especially as one ages. Although many older Americans are aware of the benefits of regular PA, the majority do not participate in regular PA that meets recommended guidelines. The purpose of this study was to gain insight into the motivators, barriers, and beliefs regarding PA of independent-living older adults with easy access to fitness facilities. In this qualitative design, focus group interviews were used to explore the individual perceptions of physically active and inactive older adults regarding PA and exercise. Thirty-one older adults, over age 60, participated in focus group discussions regarding PA beliefs and behaviors. Groups were homogeneous based on current PA behaviors. Demographic information was collected. Discussions were audiotaped, transcribed verbatim, and deidentified. Two researchers independently coded for emergent themes. Interrater reliability was established (κ = 0.89). Peer review was used to further ensure trustworthiness and credibility. No significant differences were noted in age, body mass index, or educational levels between the physically active and inactive groups. Differences in perceptions were noted between the groups regarding the construct of PA, barriers to participation in regular PA, and the components of an ideal PA program. Physically inactive persons had much lower fitness expectations of a physically active older adult, more perceived barriers to regular PA, and required individual tailoring of a PA program if they were going to participate. In addition, inactive persons were intimidated by the fitness facilities and concerned about slowing others down in a group exercise setting. Both groups shared similar motivators to participate in PA, such as maintaining health and socialization; however, inactive persons also described PA as needing to be purposeful and fun. Physically inactive persons perceived themselves to be physically active, as their perception of PA was grounded in a social context. Although both groups shared some barriers to regular PA participation, physically active individuals developed strategies to overcome them. Issues relating to self-efficacy and stages of change need to be explored to address the individual perceptions and needs of inactive older adults if initiation of, or long-term adherence to, a PA program is to be achieved.

  6. Integration of the radiation belt environment model into the space weather modeling framework

    NASA Astrophysics Data System (ADS)

    Glocer, A.; Toth, G.; Fok, M.; Gombosi, T.; Liemohn, M.

    2009-11-01

    We have integrated the Fok radiation belt environment (RBE) model into the Space Weather Modeling Framework (SWMF). RBE is coupled to the global magnetohydrodynamics component (represented by the Block-Adaptive-Tree Solar-wind Roe-type Upwind Scheme, BATS-R-US, code) and the Ionosphere Electrodynamics component of the SWMF, following initial results using the Weimer empirical model for the ionospheric potential. The radiation belt (RB) model solves the convection-diffusion equation of the plasma in the energy range of 10 keV to a few MeV. In stand-alone mode, RBE uses Tsyganenko's empirical models for the magnetic field and Weimer's empirical model for the ionospheric potential. In the SWMF, the BATS-R-US model provides the time-dependent magnetic field by efficiently tracing the closed magnetic field lines and passing the geometry and field-strength information to RBE at a regular cadence. The Ionosphere Electrodynamics component uses a two-dimensional vertical potential solver to provide new potential maps to the RBE model at regular intervals. We discuss the coupling algorithm and show some preliminary results with the coupled code. We run our newly coupled model for periods of steady solar wind conditions and compare our results to the RB model using an empirical magnetic field and potential model. We also simulate the RB for an active time period and find substantial differences in the RB model results when changing either the magnetic field or the electric field, including the creation of an outer belt enhancement via rapid inward transport on a time scale of tens of minutes.

  7. An emergence of coordinated communication in populations of agents.

    PubMed

    Kvasnicka, V; Pospichal, J

    1999-01-01

    The purpose of this article is to demonstrate that coordinated communication spontaneously emerges in a population composed of agents that are capable of specific cognitive activities. Internal states of agents are characterized by meaning vectors. Simple neural networks composed of one layer of hidden neurons perform cognitive activities of agents. An elementary communication act consists of the following: (a) two agents are selected, where one of them is declared the speaker and the other the listener; (b) the speaker codes a selected meaning vector onto a sequence of symbols and sends it to the listener as a message; and finally, (c) the listener decodes this message into a meaning vector and adapts his or her neural network such that the differences between speaker and listener meaning vectors are decreased. A Darwinian evolution enlarged by ideas from the Baldwin effect and Dawkins' memes is simulated by a simple version of an evolutionary algorithm without crossover. The agent fitness is determined by success of the mutual pairwise communications. It is demonstrated that agents in the course of evolution gradually do a better job of decoding received messages (they are closer to meaning vectors of speakers) and all agents gradually start to use the same vocabulary for the common communication. Moreover, if agent meaning vectors contain regularities, then these regularities are manifested also in messages created by agent speakers, that is, similar parts of meaning vectors are coded by similar symbol substrings. This observation is considered a manifestation of the emergence of a grammar system in the common coordinated communication.
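
    The elementary communication act lends itself to a compact sketch. The version below is a simplified stand-in, assuming linear encoder/decoder maps instead of the paper's one-hidden-layer networks and a plain gradient update in place of the evolutionary algorithm; all dimensions and the learning rate are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      MEANING_DIM, MSG_LEN, N_SYMBOLS, LR = 8, 4, 5, 0.1

      class Agent:
          def __init__(self):
              self.enc = rng.normal(size=(MSG_LEN, N_SYMBOLS, MEANING_DIM))
              self.dec = rng.normal(size=(MEANING_DIM, MSG_LEN * N_SYMBOLS))

          def speak(self, meaning):
              # code the meaning vector: per position, emit the highest-scoring symbol
              return np.argmax(self.enc @ meaning, axis=1)

          def _onehot(self, message):
              m = np.zeros((MSG_LEN, N_SYMBOLS))
              m[np.arange(MSG_LEN), message] = 1.0
              return m.ravel()

          def listen(self, message):
              # decode the message back into a meaning vector
              return self.dec @ self._onehot(message)

          def adapt(self, message, target_meaning):
              # move the decoded meaning toward the speaker's meaning vector
              m = self._onehot(message)
              err = target_meaning - self.dec @ m
              self.dec += LR * np.outer(err, m)

      speaker, listener = Agent(), Agent()
      meaning = rng.normal(size=MEANING_DIM)
      for _ in range(50):                       # repeated communication acts
          listener.adapt(speaker.speak(meaning), meaning)
      print(np.linalg.norm(meaning - listener.listen(speaker.speak(meaning))))  # shrinks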

  8. Various Approaches to Forward and Inverse Wide-Angle Seismic Modelling Tested on Data from DOBRE-4 Experiment

    NASA Astrophysics Data System (ADS)

    Janik, Tomasz; Środa, Piotr; Czuba, Wojciech; Lysynchuk, Dmytro

    2016-12-01

    The interpretation of seismic refraction and wide-angle reflection data usually involves the creation of a velocity model based on an inverse or forward modelling of the travel times of crustal and mantle phases using the ray theory approach. The modelling codes differ in terms of model parameterization, data used for modelling, regularization of the result, etc. It is helpful to know the capabilities, advantages and limitations of the code used compared to others. This work compares some popular 2D seismic modelling codes using the dataset collected along the seismic wide-angle profile DOBRE-4, where quite peculiar/uncommon reflected phases were observed in the wavefield. The 505 km long profile was realized in southern Ukraine in 2009, using 13 shot points and 230 recording stations. The most striking feature of the data is that double PMP phases with different reduced times (7.5-11 s) and different apparent velocities, intersecting each other, are observed in the seismic wavefield. They are interpreted as reflections from strongly dipping Moho segments with opposite dips. The modelling was carried out in two steps. In previous work by Starostenko et al. (2013), a trial-and-error forward model based on refracted and reflected phases (SEIS83 code) was published. An interesting feature is the high-amplitude (8-17 km) variability of the Moho depth in the form of downward and upward bends. This model is compared with results from other seismic inversion methods: the first-arrival tomography package FAST, based on first arrivals; the JIVE3D code, which can also use later refracted arrivals and reflections; and the forward and inversion code RAYINVR, using both refracted and reflected phases. Modelling with all the codes tested showed substantial variability of the Moho depth along the DOBRE-4 profile. However, the SEIS83 and RAYINVR packages seem to give the most mutually consistent results.

  9. The Purine Bias of Coding Sequences is Determined by Physicochemical Constraints on Proteins.

    PubMed

    Ponce de Leon, Miguel; de Miranda, Antonio Basilio; Alvarez-Valin, Fernando; Carels, Nicolas

    2014-01-01

    For this report, we analyzed protein secondary structures in relation to the statistics of the three nucleotide positions of codons. The purpose of this investigation was to find which properties at the ribosome, tRNA or protein level could explain the purine bias (Rrr) as it is observed in coding DNA. We found that the Rrr pattern is the consequence of a regularity (the codon structure) resulting from physicochemical constraints on proteins and thermodynamic constraints on the ribosomal machinery. The physicochemical constraints on proteins mainly come from the hydropathy and molecular weight (MW) of secondary structures as well as the energy cost of amino acid synthesis. These constraints appear through a network of statistical correlations, such as (i) the cost of amino acid synthesis, which favors a higher level of guanine in the first codon position, (ii) the constructive contribution of hydropathy alternation in proteins, (iii) the spatial organization of secondary structure in proteins according to solvent accessibility, (iv) the spatial organization of secondary structure according to amino acid hydropathy, (v) the statistical correlation of MW with protein secondary structures and their overall hydropathy, (vi) the statistical correlation of thymine in the second codon position with hydropathy and the energy cost of amino acid synthesis, and (vii) the statistical correlation of adenine in the second codon position with amino acid complexity and the MW of secondary protein structures. Amino acid physicochemical properties and functional constraints on proteins constitute a code that is translated into a purine bias within the coding DNA via tRNAs. In that sense, the Rrr pattern within coding DNA is the effect of information transfer on nucleotide composition from protein to DNA by selection according to the codon positions. Thus, coding DNA structure and ribosomal machinery co-evolved to minimize the energy cost of protein coding given the functional constraints on proteins.

  10. Classification of mislabelled microarrays using robust sparse logistic regression.

    PubMed

    Bootkrajang, Jakramate; Kabán, Ata

    2013-04-01

    Previous studies reported that labelling errors are not uncommon in microarray datasets. In such cases, the training set may become misleading, and the ability of classifiers to make reliable inferences from the data is compromised. Yet, few methods are currently available in the bioinformatics literature to deal with this problem. The few existing methods focus on data cleansing alone, without reference to classification, and their performance crucially depends on some tuning parameters. In this article, we develop a new method to detect mislabelled arrays simultaneously with learning a sparse logistic regression classifier. Our method may be seen as a label-noise robust extension of the well-known and successful Bayesian logistic regression classifier. To account for possible mislabelling, we formulate a label-flipping process as part of the classifier. The regularization parameter is automatically set using Bayesian regularization, which not only saves the computation time that cross-validation would take, but also eliminates any unwanted effects of label noise when setting the regularization parameter. Extensive experiments with both synthetic data and real microarray datasets demonstrate that our approach is able to counter the bad effects of labelling errors in terms of predictive performance, is effective at identifying marker genes, and simultaneously detects mislabelled arrays with high accuracy. The code is available from http://cs.bham.ac.uk/~jxb008. Supplementary data are available at Bioinformatics online.
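
    The label-flipping construction at the heart of the method is easy to sketch: the observed label is modelled as the true label passed through a noisy channel with flip probabilities gamma01 (0 flipped to 1) and gamma10 (1 flipped to 0), and the likelihood is maximized through that channel. The plain gradient loop and the L2 surrogate penalty below are illustrative simplifications of the paper's Bayesian treatment; all names are placeholders.

      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      def fit_robust_logreg(X, y, gamma01=0.05, gamma10=0.05, lam=0.1, lr=0.1, iters=500):
          w = np.zeros(X.shape[1])
          for _ in range(iters):
              p = sigmoid(X @ w)                            # P(true label = 1 | x)
              q = gamma01 * (1 - p) + (1 - gamma10) * p     # P(observed label = 1 | x)
              # chain rule through the flipping channel, plus an L2 surrogate
              # for the sparsity-inducing prior used in the paper
              dq_dz = (1 - gamma01 - gamma10) * p * (1 - p)
              grad = X.T @ ((y / q - (1 - y) / (1 - q)) * dq_dz) - lam * w
              w += lr * grad / len(y)
          return w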

  11. High speed and adaptable error correction for megabit/s rate quantum key distribution.

    PubMed

    Dixon, A R; Sato, H

    2014-12-02

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.

  12. High speed and adaptable error correction for megabit/s rate quantum key distribution

    PubMed Central

    Dixon, A. R.; Sato, H.

    2014-01-01

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90–94% of the ideal secure key rate over all fibre distances from 0–80 km. PMID:25450416
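
    For intuition, syndrome-based LDPC reconciliation can be sketched in a few lines: one side discloses the syndrome of its key, and the other flips bits until its own syndrome matches. The toy parity-check matrix and hard-decision bit-flipping decoder below are illustrative stand-ins only; a high-rate system like the one reported here uses soft-decision belief propagation on far larger codes, adapted to the operating conditions.

      import numpy as np

      H = np.array([[1, 1, 0, 1, 0, 0],      # toy 3x6 parity-check matrix
                    [0, 1, 1, 0, 1, 0],
                    [1, 0, 1, 0, 0, 1]])

      def reconcile(alice_bits, bob_bits, max_iters=20):
          """Alice discloses H @ a; Bob flips bits until his syndrome matches."""
          target = H @ alice_bits % 2            # sent over the public channel
          b = bob_bits.copy()
          for _ in range(max_iters):
              mismatch = (H @ b + target) % 2    # unsatisfied parity checks
              if not mismatch.any():
                  return b                       # Bob's bits now equal Alice's
              votes = H.T @ mismatch             # failing checks touching each bit
              b[np.argmax(votes)] ^= 1           # flip the most-suspect bit
          return b

      alice = np.array([1, 0, 1, 1, 0, 1])
      bob = alice.copy(); bob[2] ^= 1            # a single channel error
      print(np.array_equal(reconcile(alice, bob), alice))  # True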

  13. Transcriptome interrogation of human myometrium identifies differentially expressed sense-antisense pairs of protein-coding and long non-coding RNA genes in spontaneous labor at term

    PubMed Central

    Romero, Roberto; Tarca, Adi; Chaemsaithong, Piya; Miranda, Jezid; Chaiworapongsa, Tinnakorn; Jia, Hui; Hassan, Sonia S.; Kalita, Cynthia A.; Cai, Juan; Yeo, Lami; Lipovich, Leonard

    2014-01-01

    Objective The mechanisms responsible for normal and abnormal parturition are poorly understood. Myometrial activation leading to regular uterine contractions is a key component of labor. Dysfunctional labor (arrest of dilatation and/or descent) is a leading indication for cesarean delivery. Compelling evidence suggests that most of these disorders are functional in nature, and not the result of cephalopelvic disproportion. The methodology and the datasets afforded by the post-genomic era provide novel opportunities to understand and target gene functions in these disorders. In 2012, the ENCODE Consortium elucidated the extraordinary abundance and functional complexity of long non-coding RNA genes in the human genome. The purpose of the study was to identify differentially expressed long non-coding RNA genes in human myometrium in women in spontaneous labor at term. Materials and Methods Myometrium was obtained from women undergoing cesarean deliveries who were not in labor (n=19) and women in spontaneous labor at term (n=20). RNA was extracted and profiled using an Illumina® microarray platform. The analysis of the protein coding genes from this study has been previously reported. Here, we have used computational approaches to bound the extent of long non-coding RNA representation on this platform, and to identify co-differentially expressed and correlated pairs of long non-coding RNA genes and protein-coding genes sharing the same genomic loci. Results Upon considering more than 18,498 distinct lncRNA genes compiled nonredundantly from public experimental data sources, and interrogating 2,634 that matched Illumina microarray probes, we identified co-differential expression and correlation at two genomic loci that contain coding-lncRNA gene pairs: SOCS2-AK054607 and LMCD1-NR_024065 in women in spontaneous labor at term. This co-differential expression and correlation was validated by qRT-PCR, an independent experimental method. Intriguingly, one of the two lncRNA genes differentially expressed in term labor had a key genomic structure element, a splice site that lacked evolutionary conservation beyond primates. Conclusions We provide for the first time evidence for coordinated differential expression and correlation of cis-encoded antisense lncRNAs and protein-coding genes with known, as well as novel roles in pregnancy in the myometrium of women in spontaneous labor at term. PMID:24168098

  14. Topics in quantum cryptography, quantum error correction, and channel simulation

    NASA Astrophysics Data System (ADS)

    Luo, Zhicheng

    In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block length. For the third topic, we prove a regularized formula for the secret-key-assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement-assisted quantum communication capacity. Under the resource inequality framework, this formula yields a new family protocol, the private father protocol, which includes private classical communication without assisting secret keys as a child protocol. For the fourth topic, we study and solve the problem of classical channel simulation with quantum side information at the receiver. Our main theorem has two important corollaries: rate-distortion theory with quantum side information and common randomness distillation. Simple proofs of achievability of classical multi-terminal source coding problems can be made via a unified approach using the channel simulation theorem as a building block. The fully quantum generalization of the problem is also conjectured, with outer and inner bounds on the achievable rate pairs.

  15. Regular Cycles of Forward and Backward Signal Propagation in Prefrontal Cortex and in Consciousness

    PubMed Central

    Werbos, Paul J.; Davis, Joshua J. J.

    2016-01-01

    This paper addresses two fundamental questions: (1) Is it possible to develop mathematical neural network models which can explain and replicate the way in which higher-order capabilities like intelligence, consciousness, optimization, and prediction emerge from the process of learning (Werbos, 1994, 2016a; National Science Foundation, 2008)? and (2) How can we use and test such models in a practical way, to track, to analyze and to model high-frequency (≥ 500 Hz) many-channel data from recording the brain, just as econometrics sometimes uses models grounded in the theory of efficient markets to track real-world time-series data (Werbos, 1990)? This paper first reviews some of the prior work addressing question (1), and then reports new work performed in MATLAB analyzing spike-sorted and burst-sorted data on the prefrontal cortex from the Buzsaki lab (Fujisawa et al., 2008, 2015) which is consistent with a regular clock cycle of about 153.4 ms and with regular alternation between a forward pass of network calculations and a backwards pass, as in the general form of the backpropagation algorithm which one of us first developed in the period 1968–1974 (Werbos, 1994, 2006; Anderson and Rosenfeld, 1998). In business and finance, it is well known that adjustments for cycles of the year are essential to accurate prediction of time-series data (Box and Jenkins, 1970); in a similar way, methods for identifying and using regular clock cycles offer large new opportunities in neural time-series analysis. This paper demonstrates a few initial footprints on the large “continent” of this type of neural time-series analysis, and discusses a few of the many further possibilities opened up by this new approach to “decoding” the neural code (Heller et al., 1995). PMID:27965547

  16. Regular Cycles of Forward and Backward Signal Propagation in Prefrontal Cortex and in Consciousness.

    PubMed

    Werbos, Paul J; Davis, Joshua J J

    2016-01-01

    This paper addresses two fundamental questions: (1) Is it possible to develop mathematical neural network models which can explain and replicate the way in which higher-order capabilities like intelligence, consciousness, optimization, and prediction emerge from the process of learning (Werbos, 1994, 2016a; National Science Foundation, 2008)? and (2) How can we use and test such models in a practical way, to track, to analyze and to model high-frequency (≥ 500 Hz) many-channel data from recording the brain, just as econometrics sometimes uses models grounded in the theory of efficient markets to track real-world time-series data (Werbos, 1990)? This paper first reviews some of the prior work addressing question (1), and then reports new work performed in MATLAB analyzing spike-sorted and burst-sorted data on the prefrontal cortex from the Buzsaki lab (Fujisawa et al., 2008, 2015) which is consistent with a regular clock cycle of about 153.4 ms and with regular alternation between a forward pass of network calculations and a backwards pass, as in the general form of the backpropagation algorithm which one of us first developed in the period 1968-1974 (Werbos, 1994, 2006; Anderson and Rosenfeld, 1998). In business and finance, it is well known that adjustments for cycles of the year are essential to accurate prediction of time-series data (Box and Jenkins, 1970); in a similar way, methods for identifying and using regular clock cycles offer large new opportunities in neural time-series analysis. This paper demonstrates a few initial footprints on the large "continent" of this type of neural time-series analysis, and discusses a few of the many further possibilities opened up by this new approach to "decoding" the neural code (Heller et al., 1995).
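
    Because both copies of this abstract hinge on the alternation between a forward pass and a backward pass, a minimal backpropagation loop is sketched below for reference. It is a generic two-layer network with a squared-error loss; the sizes, seed and learning rate are arbitrary illustrative choices, not a model of the cortical analysis itself.

      import numpy as np

      rng = np.random.default_rng(1)
      W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=(1, 16))

      def forward(x):
          h = np.tanh(W1 @ x)              # forward pass: input -> hidden -> output
          return h, W2 @ h

      def backward(x, h, y_hat, y):
          dy = y_hat - y                   # derivative of 0.5 * (y_hat - y)**2
          dW2 = np.outer(dy, h)
          dh = W2.T @ dy * (1 - h**2)      # backward pass: chain rule through tanh
          return np.outer(dh, x), dW2

      x, y = rng.normal(size=8), np.array([0.5])
      for _ in range(100):                 # regular alternation of the two passes
          h, y_hat = forward(x)
          dW1, dW2 = backward(x, h, y_hat, y)
          W1 -= 0.01 * dW1
          W2 -= 0.01 * dW2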

  17. A biological network-based regularized artificial neural network model for robust phenotype prediction from gene expression data.

    PubMed

    Kang, Tianyu; Ding, Wei; Zhang, Luoyan; Ziemek, Daniel; Zarringhalam, Kourosh

    2017-12-19

    Stratification of patient subpopulations that respond favorably to treatment or experience an adverse reaction is an essential step toward the development of new personalized therapies and diagnostics. It is currently feasible to generate omic-scale biological measurements for all patients in a study, providing an opportunity for machine learning models to identify molecular markers for disease diagnosis and progression. However, the high variability of genetic background in human populations hampers the reproducibility of omic-scale markers. In this paper, we develop a biological network-based regularized artificial neural network model for prediction of phenotype from transcriptomic measurements in clinical trials. To improve model sparsity and the overall reproducibility of the model, we incorporate regularization for simultaneous shrinkage of gene sets based on active upstream regulatory mechanisms into the model. We benchmark our method against various regression, support vector machine and artificial neural network models and demonstrate the ability of our method in predicting the clinical outcomes using clinical trial data on acute rejection in kidney transplantation and response to Infliximab in ulcerative colitis. We show that integration of prior biological knowledge into the classifier, as developed in this paper, significantly improves the robustness and generalizability of predictions to independent datasets. We provide Java code for our algorithm along with a parsed version of the STRING DB database. In summary, we present a method for prediction of clinical phenotypes using baseline genome-wide expression data that makes use of prior biological knowledge on gene-regulatory interactions in order to increase robustness and reproducibility of omic-scale markers. The integrated group-wise regularization method increases the interpretability of biological signatures and gives stable performance estimates across independent test sets.
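
    The group-wise shrinkage can be pictured as a group-lasso-style proximal step on the first-layer weights, with each group collecting the genes downstream of one regulatory mechanism. This is a hedged, minimal rendering: the group indices and the penalty weight lam are placeholders, and the paper's exact formulation (and its Java implementation) may differ in detail.

      import numpy as np

      def group_prox(W, groups, lam):
          """Shrink (and possibly zero out) each gene group of first-layer weights W."""
          W = W.copy()
          for g in groups:                          # g: column indices of one gene set
              norm = np.linalg.norm(W[:, g])
              W[:, g] *= max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
          return W

      # usage: after each gradient step on the network loss, shrink group-wise, e.g.
      # W1 = group_prox(W1 - lr * grad_W1, groups=[[0, 1, 2], [3, 4]], lam=0.01)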

  18. Increased uptake of cervical screening by women with HIV infection in Auckland regardless of ethnicity, requirement for an interpreter or level of education.

    PubMed

    Lowe, Michele; Handy, Rupert; Ingram, Joan; Nisbet, Mitzi; Ritchie, Stephen; Thomas, Mark; Briggs, Simon

    2016-07-15

    Current guidelines recommend that women with HIV infection receive annual cervical smears. We evaluated the uptake of annual cervical smears by women with HIV infection under the care of the Infectious Disease Service at Auckland City Hospital. In an attempt to identify potential barriers to regularly receiving an annual cervical smear, we invited the women to complete a questionnaire. The responses from women who had regularly received an annual cervical smear were compared with those who had not. The proportion of women who had received a cervical smear increased from 44% in 2001, to 73% in 2010 (p=0.001). Ninety-three women (76%) completed the study questionnaire. No statistically significant differences were found in the questionnaire responses between the women who had regularly received an annual cervical smear and those who had not. The proportion of women in this cohort who received a cervical smear in 2010 is comparable with other studies of women with HIV infection in New Zealand and overseas. We have not been able to identify barriers that prevent women with HIV infection in Auckland regularly receiving an annual cervical smear. We plan to encourage women who have not received a cervical smear in the previous 2-year period to have a cervical smear performed when they attend the Infectious Disease Clinic, and will continue to notify the National Cervical Screening Programme that all women who are newly diagnosed with HIV infection should have an annual recall code attached to future cervical smear reports. We expect that these interventions will further increase the proportion of women with HIV infection in Auckland who receive an annual cervical smear.

  19. The Study on Network Examinational Database based on ASP Technology

    NASA Astrophysics Data System (ADS)

    Zhang, Yanfu; Han, Yuexiao; Zhou, Yanshuang

    This article introduces the structure of a general test database system based on ASP.NET technology, discussing the design of its function modules and their implementation. It focuses on the key technologies of the system: a web-based online editor control to solve the question-input problem, regular expressions to clean up HTML code, a genetic algorithm to optimize test-paper assembly, and Word automation tools to export the generated papers. The practical and effective design and implementation techniques described can serve as a reference for the development of similar systems.
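
    The HTML clean-up step mentioned above is straightforward to illustrate. The snippet below sketches it in Python rather than ASP/.NET for brevity; the patterns are illustrative, not the system's actual ones.

      import re

      def strip_html(text):
          text = re.sub(r"<script.*?</script>", "", text, flags=re.S | re.I)  # drop scripts
          text = re.sub(r"<[^>]+>", "", text)         # remove remaining tags
          return re.sub(r"\s+", " ", text).strip()    # collapse whitespace

      print(strip_html("<p>What is <b>2 + 2</b>?</p>"))   # -> "What is 2 + 2?"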

  20. Ethics and Childbirth Educators: Do Your Values Cause You Ethical Distress?

    PubMed Central

    Ondeck, Michele

    2009-01-01

    The Code of Ethics for Lamaze Certified Childbirth Educators outlines the ethical principles and standards that are derived from childbirth education's core values to assure quality and ethical practice. This article presents a summary of the history of ethics and medical ethics that informs a value-oriented decision-making process in childbirth education. The role of evidence in ethics is explored from the childbirth educator's viewpoint, and scenarios are used to reflect on situations that are examples of ethical distress. The conclusion is that the practice of ethics and ethical decision making includes regular reflection. PMID:19436591

  1. Ethics and childbirth educators: do your values cause you ethical distress?

    PubMed

    Ondeck, Michele

    2009-01-01

    The Code of Ethics for Lamaze Certified Childbirth Educators outlines the ethical principles and standards that are derived from childbirth education's core values to assure quality and ethical practice. This article presents a summary of the history of ethics and medical ethics that informs a value-oriented decision-making process in childbirth education. The role of evidence in ethics is explored from the childbirth educator's viewpoint, and scenarios are used to reflect on situations that are examples of ethical distress. The conclusion is that the practice of ethics and ethical decision making includes regular reflection.

  2. History of the Medical Library Association's credentialing program.

    PubMed Central

    Bell, J A

    1996-01-01

    Since the Medical Library Association (MLA) adopted the Code for the Training and Certification of Medical Librarians in 1949, MLA members have reviewed and revised the program regularly. This paper traces the history of MLA's professional recognition program to illustrate how the program has changed over time and to identify the issues that have surrounded it. These issues include the value of the program to individual members, cost to MLA, appropriate entry requirements, certification examinations, and recertification requirements. The development and operation of MLA's current credentialing program, the Academy of Health Information Professionals, is described in detail. PMID:8883980

  3. Evaluation of Swift Start TCP in Long-Delay Environment

    NASA Technical Reports Server (NTRS)

    Lawas-Grodek, Frances J.; Tran, Diepchi T.

    2004-01-01

    This report presents the test results of the Swift Start algorithm in single-flow and multiple-flow testbeds under the effects of high propagation delays, various slow bottlenecks, and small queue sizes. Although this algorithm estimates capacity and implements packet pacing, the finding was that the Swift Start algorithm is not applicable on a heavily congested link. The reason is that the bottleneck estimate is distorted by timeouts induced by retransmissions and by the expiration of delayed acknowledgment (ACK) timers, causing the modified Swift Start code to fall back to regular transmission control protocol (TCP) behavior.
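
    For reference, the two mechanisms named above are simple to sketch: packet-pair capacity estimation (bottleneck rate approximated by packet size over the inter-arrival gap of a back-to-back pair) and pacing a window evenly across one round-trip time. The numbers and function names below are illustrative, not taken from the report.

      def packet_pair_estimate(pkt_bytes, gap_s):
          # bottleneck capacity implied by the spacing of a back-to-back pair
          return pkt_bytes * 8 / gap_s              # bits per second

      def paced_interval(cwnd_pkts, rtt_s):
          # spread the congestion window evenly over one round-trip time
          return rtt_s / cwnd_pkts                  # seconds between sends

      cap = packet_pair_estimate(1500, 0.4e-3)      # 1500 B arriving 0.4 ms apart
      print(cap / 1e6, "Mbit/s")                    # -> 30.0 Mbit/s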

  4. Mod3DMT and EMTF: Free Software for MT Data Processing and Inversion

    NASA Astrophysics Data System (ADS)

    Egbert, G. D.; Kelbert, A.; Meqbel, N. M.

    2017-12-01

    "ModEM" was developed at Oregon State University as a modular system for inversion of electromagnetic (EM) geophysical data (Egbert and Kelbert, 2012; Kelbert et al., 2014). Although designed for more general (frequency domain) EM applications, and originally intended as a testbed for exploring inversion search and regularization strategies, our own initial uses of ModEM were for 3-D imaging of the deep crust and upper mantle at large scales. Since 2013 we have offered a version of the source code suitable for 3D magnetotelluric (MT) inversion on an "as is, user beware" basis for free for non-commercial applications. This version, which we refer to as Mod3DMT, has since been widely used by the international MT community. Over 250 users have registered to download the source code, and at least 50 MT studies in the refereed literature, covering locations around the globe at a range of spatial scales, cite use of ModEM for 3D inversion. For over 30 years I have also made MT processing software available for free use. In this presentation, I will discuss my experience with these freely available (but perhaps not truly open-source) computer codes. Although users are allowed to make modifications to the codes (on conditions that they provide a copy of the modified version) only a handful of users have tried to make any modification, and only rarely are modifications even reported, much less provided back to the developers.

  5. The Increased Sensitivity of Irregular Peripheral Canal and Otolith Vestibular Afferents Optimizes their Encoding of Natural Stimuli

    PubMed Central

    Schneider, Adam D.; Jamali, Mohsen; Carriot, Jerome; Chacron, Maurice J.

    2015-01-01

    Efficient processing of incoming sensory input is essential for an organism's survival. A growing body of evidence suggests that sensory systems have developed coding strategies that are constrained by the statistics of the natural environment. Consequently, it is necessary to first characterize neural responses to natural stimuli to uncover the coding strategies used by a given sensory system. Here we report for the first time the statistics of vestibular rotational and translational stimuli experienced by rhesus monkeys during natural (e.g., walking, grooming) behaviors. We find that these stimuli can reach intensities as high as 1500 deg/s and 8 G. Recordings from afferents during naturalistic rotational and linear motion further revealed strongly nonlinear responses in the form of rectification and saturation, which could not be accurately predicted by traditional linear models of vestibular processing. Accordingly, we used linear–nonlinear cascade models and found that these could accurately predict responses to naturalistic stimuli. Finally, we tested whether the statistics of natural vestibular signals constrain the neural coding strategies used by peripheral afferents. We found that both irregular otolith and semicircular canal afferents, because of their higher sensitivities, were more optimized for processing natural vestibular stimuli as compared with their regular counterparts. Our results therefore provide the first evidence supporting the hypothesis that the neural coding strategies used by the vestibular system are matched to the statistics of natural stimuli. PMID:25855169
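
    A linear-nonlinear (LN) cascade of the kind used here is easy to sketch: a linear temporal filter followed by a static nonlinearity that rectifies and saturates, exactly the behavior that purely linear models fail to capture. The filter shape, gain and rate limits below are illustrative stand-ins, not values fitted to the recordings.

      import numpy as np

      def ln_response(stimulus, kernel, gain, r_rest=50.0, r_max=200.0):
          drive = np.convolve(stimulus, kernel, mode="same")   # linear stage
          rate = r_rest + gain * drive                         # baseline plus drive
          return np.clip(rate, 0.0, r_max)   # static nonlinearity: rectify and saturate

      t = np.arange(0.0, 0.1, 1e-3)
      kernel = np.exp(-t / 0.01)             # toy 10 ms exponential filter
      stim = 1500 * np.sin(2 * np.pi * 5 * np.arange(0.0, 1.0, 1e-3))  # deg/s rotation
      rates = ln_response(stim, kernel, gain=0.05)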

  6. Non-coding RNAs and exercise: pathophysiological role and clinical application in the cardiovascular system.

    PubMed

    Gomes, Clarissa P C; de Gonzalo-Calvo, David; Toro, Rocio; Fernandes, Tiago; Theisen, Daniel; Wang, Da-Zhi; Devaux, Yvan

    2018-05-23

    There is overwhelming evidence that regular exercise training is protective against cardiovascular disease (CVD), the main cause of death worldwide. Despite the benefits of exercise, the intricacies of the underlying molecular mechanisms remain largely unknown. Non-coding RNAs (ncRNAs) have been recognized as a major regulatory network governing gene expression in several physiological processes and have emerged as pivotal modulators in a myriad of cardiovascular processes under physiological and pathological conditions. However, little is known about ncRNA expression and role in response to exercise. Revealing the molecular components and mechanisms of the link between exercise and health outcomes will catalyse discoveries of new biomarkers and therapeutic targets. Here we review the current understanding of the role of ncRNAs in exercise-induced adaptations, with a focus on the cardiovascular system, and address their potential role in clinical applications for CVD. Finally, considerations and perspectives for future studies will be proposed. © 2018 The Author(s). Published by Portland Press Limited on behalf of the Biochemical Society.

  7. Robust Joint Graph Sparse Coding for Unsupervised Spectral Feature Selection.

    PubMed

    Zhu, Xiaofeng; Li, Xuelong; Zhang, Shichao; Ju, Chunhua; Wu, Xindong

    2017-06-01

    In this paper, we propose a new unsupervised spectral feature selection model that embeds a graph regularizer into the framework of joint sparse regression to preserve the local structure of the data. To do this, we first extract the bases of the training data using previous dictionary learning methods and then map the original data into the basis space to generate their new representations, by proposing a novel joint graph sparse coding (JGSC) model. In JGSC, we first formulate the objective function by simultaneously taking subspace learning and joint sparse regression into account, then design a new optimization solution for the resulting objective function, and further prove the convergence of the proposed solution. Furthermore, we extend JGSC to a robust JGSC (RJGSC) by replacing the least-squares loss function with a robust loss function, achieving the same goals while avoiding the impact of outliers. Finally, experimental results on real data sets showed that both JGSC and RJGSC outperformed state-of-the-art algorithms in terms of k-nearest neighbor classification performance.
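
    As a rough guide to the structure of such a model, the snippet below evaluates one plausible form of the objective: an L2,1 regression loss, an L2,1 row-sparsity penalty, and a graph-Laplacian term that preserves local structure. The symbols (X, Y, W, L, alpha, beta) follow common usage rather than the paper's exact notation, and the paper contributes a dedicated iterative solver with a convergence proof rather than merely evaluating this quantity.

      import numpy as np

      def l21(M):
          return np.sum(np.linalg.norm(M, axis=1))    # sum of row norms -> row sparsity

      def jgsc_objective(X, Y, W, L, alpha, beta):
          fit = l21(X @ W - Y)                        # robust L2,1 regression loss
          sparsity = alpha * l21(W)                   # joint feature selection term
          graph = beta * np.trace(W.T @ X.T @ L @ X @ W)   # local-structure preservation
          return fit + sparsity + graph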

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berra, P.B.; Chung, S.M.; Hachem, N.I.

    This article presents techniques for managing a very large data/knowledge base to support multiple inference mechanisms for logic programming. Because evaluation of goals can require accessing data from the extensional database, or EDB, in very general ways, one must often resort to indexing on all fields of the extensional database facts. This presents a formidable management problem in that the index data may be larger than the EDB itself. This problem becomes even more serious in the case of very large data/knowledge bases (hundreds of gigabytes), since considerably more hardware will be required to process and store the index data. In order to reduce the amount of index data considerably without losing generality, the authors form a surrogate file, which is a hashing transformation of the facts. Superimposed code words (SCW), concatenated code words (CCW), and transformed inverted lists (TIL) are possible structures for the surrogate file. Since these transformations are quite regular and compact, the authors consider possible computer architectures for the processing of the surrogate file.
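
    The superimposed code word is the easiest of the three surrogate-file structures to sketch: hash each field of a fact to a sparse bit mask, OR the masks into one word, and answer a partially specified query by testing whether its mask is contained in a stored word. The width, hash choice and toy facts below are illustrative; spurious matches ("false drops") must still be screened against the EDB.

      import hashlib

      WIDTH, BITS_PER_FIELD = 64, 3

      def field_mask(value):
          mask, h = 0, hashlib.sha256(value.encode()).digest()
          for i in range(BITS_PER_FIELD):       # set a few pseudo-random bits
              mask |= 1 << (h[i] % WIDTH)
          return mask

      def scw(fact):
          word = 0
          for field in fact:                    # superimpose all field code words
              word |= field_mask(field)
          return word

      facts = [("parent", "tom", "bob"), ("parent", "bob", "ann")]
      surrogate = [scw(f) for f in facts]

      query = field_mask("parent") | field_mask("bob")    # partially specified goal
      print([f for f, w in zip(facts, surrogate) if w & query == query])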

  9. Grid scale drives the scale and long-term stability of place maps

    PubMed Central

    Mallory, Caitlin S; Hardcastle, Kiah; Bant, Jason S; Giocomo, Lisa M

    2018-01-01

    Medial entorhinal cortex (MEC) grid cells fire at regular spatial intervals and project to the hippocampus, where place cells are active in spatially restricted locations. One feature of the grid population is the increase in grid spatial scale along the dorsal-ventral MEC axis. However, the difficulty in perturbing grid scale without impacting the properties of other functionally-defined MEC cell types has obscured how grid scale influences hippocampal coding and spatial memory. Here, we use a targeted viral approach to knock out HCN1 channels selectively in MEC, causing grid scale to expand while leaving other MEC spatial and velocity signals intact. Grid scale expansion resulted in place scale expansion in fields located far from environmental boundaries, reduced long-term place field stability and impaired spatial learning. These observations, combined with simulations of a grid-to-place cell model and position decoding of place cells, illuminate how grid scale impacts place coding and spatial memory. PMID:29335607

  10. Unsupervised Transfer Learning via Multi-Scale Convolutional Sparse Coding for Biomedical Applications

    PubMed Central

    Chang, Hang; Han, Ju; Zhong, Cheng; Snijders, Antoine M.; Mao, Jian-Hua

    2017-01-01

    The capabilities of (I) learning transferable knowledge across domains; and (II) fine-tuning the pre-learned base knowledge towards tasks with considerably smaller data scale are extremely important. Many of the existing transfer learning techniques are supervised approaches, among which deep learning has the demonstrated power of learning domain-transferable knowledge with large-scale networks trained on massive amounts of labeled data. However, in many biomedical tasks, both the data and the corresponding labels can be very limited, where the unsupervised transfer learning capability is urgently needed. In this paper, we proposed a novel multi-scale convolutional sparse coding (MSCSC) method that (I) automatically learns filter banks at different scales in a joint fashion with enforced scale-specificity of learned patterns; and (II) provides an unsupervised solution for learning transferable base knowledge and fine-tuning it towards target tasks. Extensive experimental evaluation demonstrates the effectiveness of the proposed MSCSC in both regular and transfer learning tasks in various biomedical domains. PMID:28129148

  11. What makes computational open source software libraries successful?

    NASA Astrophysics Data System (ADS)

    Bangerth, Wolfgang; Heister, Timo

    2013-01-01

    Software is the backbone of scientific computing. Yet, while we regularly publish detailed accounts about the results of scientific software, and while there is a general sense of which numerical methods work well, our community is largely unaware of best practices in writing the large-scale, open source scientific software upon which our discipline rests. This is particularly apparent in the commonly held view that writing successful software packages is largely the result of simply ‘being a good programmer’ when in fact there are many other factors involved, for example the social skill of community building. In this paper, we consider what we have found to be the necessary ingredients for successful scientific software projects and, in particular, for software libraries upon which the vast majority of scientific codes are built today. In particular, we discuss the roles of code, documentation, communities, project management and licenses. We also briefly comment on the impact on academic careers of engaging in software projects.

  12. Harassment as an Ethics Issue

    NASA Astrophysics Data System (ADS)

    Holmes, Mary Anne; Marin-Spiotta, Erika; Schneider, Blair

    2017-04-01

    Harassment, sexual and otherwise, including bullying and discrimination, remains an ongoing problem in the science workforce. In response to monthly revelations of harassment in academic science in the U.S. in 2016, the American Geophysical Union (AGU) convened a workshop to discuss strategies for professional societies to address this pernicious practice. Participants included researchers on this topic and members from professional science societies, academia, and U.S. federal government agencies. We agreed on the following principles:
    - Harassment, discrimination and bullying most often occur between a superior (e.g., an advisor, professor, supervisor) and a student or early career professional, representing a power difference that disadvantages the less-powerful scientist.
    - Harassment drives excellent potential as well as current scientists from the field who would otherwise contribute to the advancement of science, engineering and technology.
    - Harassment, therefore, represents a form of scientific misconduct, and should be treated as plagiarism, falsification, and other forms of scientific misconduct are treated, with meaningful consequences.
    To address harassment and to change the culture of science, professional societies can and should: ensure that their Code of Ethics and/or Code of Conduct addresses harassment with clear definitions of what constitutes this behavior, including in academic, professional, conference and field settings; provide a clear and well-disseminated mechanism for reporting violations to the society; have a response person or team in the society that can assist those who feel affected by harassment; and provide a mechanism to revisit and update Codes on a regular basis. The Code should be disseminated widely to members and apply to all members and staff. A revised Code of Ethics is now being constructed by AGU, and will be ready for adoption in 2017. See http://harassment.agu.org/ for information updates.

  13. Rectified factor networks for biclustering of omics data.

    PubMed

    Clevert, Djork-Arné; Unterthiner, Thomas; Povysil, Gundula; Hochreiter, Sepp

    2017-07-15

    Biclustering has become a major tool for analyzing large datasets given as a matrix of samples times features and has been successfully applied in life sciences and e-commerce for drug design and recommender systems, respectively. Factor Analysis for Bicluster Acquisition (FABIA), one of the most successful biclustering methods, is a generative model that represents each bicluster by two sparse membership vectors: one for the samples and one for the features. However, FABIA is restricted to about 20 code units because of the high computational complexity of computing the posterior. Furthermore, code units are sometimes insufficiently decorrelated and sample membership is difficult to determine. We propose to use the recently introduced unsupervised Deep Learning approach Rectified Factor Networks (RFNs) to overcome the drawbacks of existing biclustering methods. RFNs efficiently construct very sparse, non-linear, high-dimensional representations of the input via their posterior means. RFN learning is a generalized alternating minimization algorithm based on the posterior regularization method which enforces non-negative and normalized posterior means. Each code unit represents a bicluster, where samples for which the code unit is active belong to the bicluster and features that have activating weights to the code unit belong to the bicluster. On 400 benchmark datasets and on three gene expression datasets with known clusters, RFN outperformed 13 other biclustering methods including FABIA. On data of the 1000 Genomes Project, RFN could identify DNA segments which indicate that interbreeding with other hominins started already before the ancestors of modern humans left Africa. https://github.com/bioinf-jku/librfn. djork-arne.clevert@bayer.com or hochreit@bioinf.jku.at. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
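
    The mapping from code units to biclusters described above can be sketched directly. In the snippet below, H and W stand in for a trained model's posterior mean codes and unit-to-feature weights, and the activity thresholds are arbitrary; RFN learning itself additionally enforces non-negative, normalized posterior means.

      import numpy as np

      rng = np.random.default_rng(0)
      H = np.maximum(rng.normal(size=(100, 10)), 0)   # codes: samples x units (rectified)
      W = rng.normal(size=(10, 500))                  # weights: units x features

      def bicluster(k, sample_thr=0.5, feat_thr=1.0):
          samples = np.where(H[:, k] > sample_thr)[0]   # samples where unit k is active
          features = np.where(W[k] > feat_thr)[0]       # features activating unit k
          return samples, features

      rows, cols = bicluster(0)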

  14. TOMO3D: 3-D joint refraction and reflection traveltime tomography parallel code for active-source seismic data—synthetic test

    NASA Astrophysics Data System (ADS)

    Meléndez, A.; Korenaga, J.; Sallarès, V.; Miniussi, A.; Ranero, C. R.

    2015-10-01

    We present a new 3-D traveltime tomography code (TOMO3D) for the modelling of active-source seismic data that uses the arrival times of both refracted and reflected seismic phases to derive the velocity distribution and the geometry of reflecting boundaries in the subsurface. This code is based on its popular 2-D version TOMO2D from which it inherited the methods to solve the forward and inverse problems. The traveltime calculations are done using a hybrid ray-tracing technique combining the graph and bending methods. The LSQR algorithm is used to perform the iterative regularized inversion to improve the initial velocity and depth models. In order to cope with an increased computational demand due to the incorporation of the third dimension, the forward problem solver, which takes most of the run time (~90 per cent in the test presented here), has been parallelized with a combination of multi-processing and message passing interface standards. This parallelization distributes the ray-tracing and traveltime calculations among available computational resources. The code's performance is illustrated with a realistic synthetic example, including a checkerboard anomaly and two reflectors, which simulates the geometry of a subduction zone. The code is designed to invert for a single reflector at a time. A data-driven layer-stripping strategy is proposed for cases involving multiple reflectors, and it is tested for the successive inversion of the two reflectors. Layers are bound by consecutive reflectors, and an initial velocity model for each inversion step incorporates the results from previous steps. This strategy poses simpler inversion problems at each step, allowing the recovery of strong velocity discontinuities that would otherwise be smoothened.
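
    The regularized LSQR update at the core of such codes can be sketched as one stacked least-squares solve: traveltime sensitivities on top, a weighted regularization operator underneath. The snippet uses SciPy's lsqr on random stand-in matrices; TOMO3D's actual sensitivities come from the hybrid ray tracing, and its regularizer is a smoothing/damping operator rather than the identity used here, so everything below is illustrative.

      import numpy as np
      from scipy.sparse import identity, random as sprandom, vstack
      from scipy.sparse.linalg import lsqr

      n_rays, n_cells, lam = 200, 100, 1.0
      A = sprandom(n_rays, n_cells, density=0.05, random_state=0)  # ray sensitivities
      dt = np.random.default_rng(0).normal(size=n_rays)            # traveltime residuals
      R = identity(n_cells)                 # trivial regularizer for illustration

      system = vstack([A, lam * R])
      rhs = np.concatenate([dt, np.zeros(n_cells)])
      dm = lsqr(system, rhs)[0]             # model update (velocity/depth perturbation)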

  15. Sustaining Open Source Communities through Hackathons - An Example from the ASPECT Community

    NASA Astrophysics Data System (ADS)

    Heister, T.; Hwang, L.; Bangerth, W.; Kellogg, L. H.

    2016-12-01

    The ecosystem surrounding a successful scientific open source software package combines both social and technical aspects. Much thought has been given to the technology side of writing sustainable software for large infrastructure projects and software libraries, but less about building the human capacity to perpetuate scientific software used in computational modeling. One effective format for building capacity is regular multi-day hackathons. Scientific hackathons bring together a group of science domain users and scientific software contributors to make progress on a specific software package. Innovation comes through the chance to work with established and new collaborations. Especially in the domain sciences with small communities, hackathons give geographically distributed scientists an opportunity to connect face-to-face. They foster lively discussions amongst scientists with different expertise, promote new collaborations, and increase transparency in both the technical and scientific aspects of code development. ASPECT is an open source, parallel, extensible finite element code to simulate thermal convection, that began development in 2011 under the Computational Infrastructure for Geodynamics. ASPECT hackathons for the past 3 years have grown the number of authors to >50, training new code maintainers in the process. Hackathons begin with leaders establishing project-specific conventions for development, demonstrating the workflow for code contributions, and reviewing relevant technical skills. Each hackathon expands the developer community. Over 20 scientists add >6,000 lines of code during the >1 week event. Participants grow comfortable contributing to the repository and over half continue to contribute afterwards. A high return rate of participants ensures continuity and stability of the group as well as mentoring for novice members. We hope to build other software communities on this model, but anticipate each to bring their own unique challenges.

  16. New Developments in Modeling MHD Systems on High Performance Computing Architectures

    NASA Astrophysics Data System (ADS)

    Germaschewski, K.; Raeder, J.; Larson, D. J.; Bhattacharjee, A.

    2009-04-01

    Modeling the wide range of time and length scales present in fluid models of plasmas like MHD and X-MHD (extended MHD, including two-fluid effects like the Hall term, electron inertia, and the electron pressure gradient) is challenging even on state-of-the-art supercomputers. In recent years, HPC capacity has continued to grow exponentially, but at the expense of making computer systems more and more difficult to program for maximum performance. In this paper, we will present a new approach to managing the complexity caused by the need to write efficient codes: separating the numerical description of the problem, in our case a discretized right-hand side (r.h.s.), from the actual implementation of efficiently evaluating it. An automatic code generator is used to describe the r.h.s. in a quasi-symbolic form while leaving the translation into efficient and parallelized code to a computer program itself. We implemented this approach for OpenGGCM (Open General Geospace Circulation Model), a model of the Earth's magnetosphere, which was accelerated by a factor of three on regular x86 architecture and a factor of 25 on the Cell BE architecture (commonly known for its deployment in Sony's PlayStation 3).
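
    The separation described here, a quasi-symbolic right-hand side handed to a generator that emits efficient inner-loop code, can be illustrated with an off-the-shelf symbolic toolkit. In the sketch below, sympy stands in for the project's own generator and a toy 1-D diffusion stencil stands in for the MHD r.h.s.; neither is OpenGGCM's actual machinery.

      import sympy as sp

      u_im1, u_i, u_ip1, dx = sp.symbols("u_im1 u_i u_ip1 dx")
      rhs = (u_ip1 - 2 * u_i + u_im1) / dx**2   # symbolic second-derivative stencil

      # emit a plain C expression for the inner loop; a real generator would also
      # produce the loop structure, parallelization, and target-specific tuning
      print(sp.ccode(rhs))   # e.g. (u_im1 - 2*u_i + u_ip1)/pow(dx, 2)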

  17. Pseudospectral method for gravitational wave collapse

    NASA Astrophysics Data System (ADS)

    Hilditch, David; Weyhausen, Andreas; Brügmann, Bernd

    2016-03-01

    We present a new pseudospectral code, bamps, for numerical relativity written with the evolution of collapsing gravitational waves in mind. We employ the first-order generalized harmonic gauge formulation. The relevant theory is reviewed, and the numerical method is critically examined and specialized for the task at hand. In particular, we investigate formulation parameters, and gauge- and constraint-preserving boundary conditions well suited to nonvanishing gauge source functions. Different types of axisymmetric twist-free moment-of-time-symmetry gravitational wave initial data are discussed. A treatment of the axisymmetric apparent horizon condition is presented with careful attention to regularity on axis. Our apparent horizon finder is then evaluated in a number of test cases. Moving on to evolutions, we investigate modifications to the generalized harmonic gauge constraint damping scheme to improve conservation in the strong-field regime. We demonstrate strong scaling of our pseudospectral penalty code. We employ the Cartoon method to efficiently evolve axisymmetric data in our 3+1-dimensional code. We perform test evolutions of the Schwarzschild spacetime perturbed by gravitational waves and by gauge pulses, both to demonstrate the use of our black-hole excision scheme and for comparison with earlier results. Finally, numerical evolutions of supercritical Brill waves are presented to demonstrate durability of the excision scheme for the dynamical formation of a black hole.

  18. Slot-like capacity and resource-like coding in a neural model of multiple-item working memory.

    PubMed

    Standage, Dominic; Pare, Martin

    2018-06-27

    For the past decade, research on the storage limitations of working memory has been dominated by two fundamentally different hypotheses. On the one hand, the contents of working memory may be stored in a limited number of 'slots', each with a fixed resolution. On the other hand, any number of items may be stored, but with decreasing resolution. These two hypotheses have been invaluable in characterizing the computational structure of working memory, but neither provides a complete account of the available experimental data, nor speaks to the neural basis of the limitations it characterizes. To address these shortcomings, we simulated a multiple-item working memory task with a cortical network model, the cellular resolution of which allowed us to quantify the coding fidelity of memoranda as a function of memory load, as measured by the discriminability, regularity and reliability of simulated neural spiking. Our simulations account for a wealth of neural and behavioural data from human and non-human primate studies, and they demonstrate that feedback inhibition lowers both capacity and coding fidelity. Because the strength of inhibition scales with the number of items stored by the network, increasing this number progressively lowers fidelity until capacity is reached. Crucially, the model makes specific, testable predictions for neural activity on multiple-item working memory tasks.

  19. Novel Scalable 3-D MT Inverse Solver

    NASA Astrophysics Data System (ADS)

    Kuvshinov, A. V.; Kruglyakov, M.; Geraskin, A.

    2016-12-01

    We present a new, robust and fast, three-dimensional (3-D) magnetotelluric (MT) inverse solver. As a forward modelling engine, the highly scalable solver extrEMe [1] is used. The (regularized) inversion is based on an iterative gradient-type optimization (quasi-Newton method) and exploits the adjoint sources approach for fast calculation of the gradient of the misfit. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT (single-site and/or inter-site) responses, and supports massive parallelization. Different parallelization strategies implemented in the code allow for optimal usage of available computational resources for a given problem setup. To parameterize an inverse domain, a mask approach is implemented, which means that one can merge any subset of forward modelling cells in order to account for the (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments, carried out on different platforms ranging from modern laptops to high-performance clusters, demonstrate practically linear scalability of the code up to thousands of nodes. 1. Kruglyakov, M., A. Geraskin, A. Kuvshinov, 2016. Novel accurate and scalable 3-D MT forward solver based on a contracting integral equation method, Computers and Geosciences, in press.
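
    The loop structure, quasi-Newton updates driven by adjoint-based gradients, can be sketched with a toy linear forward operator. Below, SciPy's L-BFGS-B plays the role of the quasi-Newton optimizer and the matrix G replaces the 3-D MT forward solve; the misfit, regularization weight and dimensions are all illustrative placeholders.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      G = rng.normal(size=(40, 20))          # stand-in forward operator
      d_obs = G @ rng.normal(size=20)        # synthetic observed responses
      lam = 0.1                              # regularization weight

      def misfit_and_grad(m):
          r = G @ m - d_obs                  # "forward solve" -> residual
          grad = G.T @ r + lam * m           # "adjoint solve" -> misfit gradient
          return 0.5 * r @ r + 0.5 * lam * m @ m, grad

      res = minimize(misfit_and_grad, np.zeros(20), jac=True, method="L-BFGS-B")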

  20. Frequency of GP communication addressing the patient's resources and coping strategies in medical interviews: a video-based observational study.

    PubMed

    Mjaaland, Trond A; Finset, Arnstein

    2009-07-01

    There is increasing focus on patient-centred communicative approaches in medical consultations, but few studies have shown the extent to which patients' positive coping strategies and psychological assets are addressed by general practitioners (GPs) on a regular day at the office. This study measures the frequency of GPs' use of questions and comments addressing their patients' coping strategies or resources. Twenty-four GPs were video-recorded in 145 consultations. The consultations were coded using a modified version of the Roter Interaction Analysis System. In this study, we also developed four additional coding categories based on cognitive therapy and solution-focused therapy: attribution, resources, coping, and solution-focused techniques. The reliability between coders was established, a factor analysis was applied to test the relationship between the communication categories, and a tentative validating exercise was performed by reversed coding. Cohen's kappa was 0.52 between coders. Only 2% of the utterances could be categorized as resource or coping oriented. Six GPs contributed 59% of these utterances. The factor analysis identified two factors, one task oriented and one patient oriented. The frequency of communication about coping and resources was very low. Communication skills training for GPs in this field is required. Further validating studies of this kind of measurement tool are warranted.
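
    The inter-coder reliability statistic quoted above is Cohen's kappa, which discounts the observed agreement between two coders by the agreement expected from their label frequencies alone. A minimal computation is sketched below on made-up toy labels.

      import numpy as np

      def cohens_kappa(a, b):
          a, b = np.asarray(a), np.asarray(b)
          labels = np.unique(np.concatenate([a, b]))
          p_o = np.mean(a == b)                                   # observed agreement
          p_e = sum(np.mean(a == l) * np.mean(b == l) for l in labels)  # chance level
          return (p_o - p_e) / (1 - p_e)

      coder1 = ["coping", "task", "coping", "resource", "task"]
      coder2 = ["coping", "task", "task", "resource", "task"]
      print(round(cohens_kappa(coder1, coder2), 2))   # -> 0.69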
