Lin, Changyu; Zou, Ding; Liu, Tao; Djordjevic, Ivan B
2016-08-08
A mutual information inspired nonbinary coded modulation design with non-uniform shaping is proposed. Instead of the traditional power-of-two signal constellation sizes, we design 5-QAM, 7-QAM and 9-QAM constellations, which can be used in adaptive optical networks. The non-uniform shaping and the LDPC code rate are jointly considered in the design, which yields a better-performing scheme at the same SNR values. A matched nonbinary (NB) LDPC code is used for this scheme, which further improves the coding gain and the overall performance. We analyze both coding performance and system SNR performance. We show that the proposed NB LDPC-coded 9-QAM has more than 2 dB gain in symbol SNR compared to traditional LDPC-coded star-8-QAM. On the other hand, the proposed NB LDPC-coded 5-QAM and 7-QAM have even better performance than LDPC-coded QPSK.
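The mutual-information-driven comparison above can be illustrated with a small Monte Carlo estimate of the achievable information rate of a probabilistically shaped constellation over an AWGN channel. This is a generic sketch: the 5-point constellation and its dyadic probabilities below are illustrative placeholders, not the authors' optimized constellation sets.

```python
import numpy as np

def mutual_information_mc(points, probs, snr_db, n_samples=200_000, rng=None):
    """Monte Carlo estimate of I(X;Y) in bits/symbol for a discrete input
    over a complex AWGN channel, with non-uniform input probabilities."""
    rng = np.random.default_rng(rng)
    points = np.asarray(points, dtype=complex)
    probs = np.asarray(probs, dtype=float)
    # Normalize average symbol energy to 1 so snr_db is Es/N0.
    es = np.sum(probs * np.abs(points) ** 2)
    points = points / np.sqrt(es)
    n0 = 10 ** (-snr_db / 10)
    idx = rng.choice(len(points), size=n_samples, p=probs)
    x = points[idx]
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(n_samples)
                               + 1j * rng.standard_normal(n_samples))
    y = x + noise
    # I(X;Y) = E[ log2 p(y|x) - log2 p(y) ], with p(y) = sum_j p_j p(y|x_j).
    d2 = np.abs(y[:, None] - points[None, :]) ** 2      # squared distances
    log_pyx = -d2 / n0                                   # up to a common constant
    log_pyx_sent = log_pyx[np.arange(n_samples), idx]
    log_py = np.log(np.sum(probs * np.exp(log_pyx), axis=1))
    return np.mean(log_pyx_sent - log_py) / np.log(2)

# Toy 5-point constellation with non-uniform (dyadic) probabilities.
pts = [0, 1, -1, 1j, -1j]
p = [0.5, 0.125, 0.125, 0.125, 0.125]
print(mutual_information_mc(pts, p, snr_db=6.0))
```

Sweeping the SNR and comparing against a uniform reference constellation of the same spectral efficiency reproduces the kind of shaping-gain comparison the abstract describes.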
LDPC-coded orbital angular momentum (OAM) modulation for free-space optical communication.
Djordjevic, Ivan B; Arabaci, Murat
2010-11-22
An orbital angular momentum (OAM) based LDPC-coded modulation scheme suitable for use in FSO communication is proposed. We demonstrate that the proposed scheme can operate in the strong atmospheric turbulence regime and enable 100 Gb/s optical transmission while employing 10 Gb/s components. Both binary and nonbinary LDPC-coded OAM modulations are studied. In addition to providing better BER performance, the nonbinary LDPC-coded modulation reduces overall decoder complexity and latency. The nonbinary LDPC-coded OAM modulation provides a net coding gain of 9.3 dB at a BER of 10^-8. The maximum-ratio combining scheme outperforms the corresponding equal-gain combining scheme by almost 2.5 dB.
Advanced GF(32) nonbinary LDPC coded modulation with non-uniform 9-QAM outperforming star 8-QAM.
Liu, Tao; Lin, Changyu; Djordjevic, Ivan B
2016-06-27
In this paper, we first describe a 9-symbol non-uniform signaling scheme based on Huffman coding, in which different symbols are transmitted with different probabilities. By using the Huffman procedure, a prefix code is designed to approach the optimal performance. Then, we introduce an algorithm to determine the optimal signal constellation sets for our proposed non-uniform scheme with the criterion of maximizing the constellation figure of merit (CFM). The proposed non-uniform polarization-multiplexed 9-QAM signaling scheme has the same spectral efficiency as conventional 8-QAM. Additionally, we propose a specially designed GF(32) nonbinary quasi-cyclic LDPC code for the coded modulation system based on the 9-QAM non-uniform scheme. Further, we study the efficiency of our proposed non-uniform 9-QAM combined with nonbinary LDPC coding, and demonstrate by Monte Carlo simulation that the proposed GF(32) nonbinary LDPC-coded 9-QAM scheme outperforms nonbinary LDPC-coded uniform 8-QAM by at least 0.8 dB.
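A minimal sketch of the Huffman-based probability assignment: each symbol is carried by a variable-length prefix codeword, so its probability is 2^(-codeword length) when the input bits are i.i.d. uniform. The particular length profile below is an assumption chosen only so that nine dyadic probabilities sum to one and the entropy equals 3 bits/symbol, matching 8-QAM's spectral efficiency; it is not necessarily the profile used in the paper.

```python
from math import log2

# Hypothetical codeword lengths for a 9-ary prefix (Huffman-style) code.
lengths = [2, 3, 3, 3, 3, 4, 4, 4, 4]

# Kraft sum must equal 1 for a complete prefix code.
probs = [2.0 ** -l for l in lengths]
assert abs(sum(probs) - 1.0) < 1e-12

# Average bits carried per transmitted symbol (= source entropy here).
entropy = -sum(p * log2(p) for p in probs)
avg_len = sum(p * l for p, l in zip(probs, lengths))
print("symbol probabilities:", probs)
print(f"entropy = {entropy} bits/symbol, average codeword length = {avg_len}")
# Both equal 3.0, i.e. the same spectral efficiency as uniform 8-QAM.
```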
FPGA implementation of concatenated non-binary QC-LDPC codes for high-speed optical transport.
Zou, Ding; Djordjevic, Ivan B
2015-06-01
In this paper, we propose a soft-decision-based FEC scheme that is the concatenation of a non-binary LDPC code and a hard-decision FEC code. The proposed NB-LDPC + RS scheme with an overhead of 27.06% provides a superior NCG of 11.9 dB at a post-FEC BER of 10^-15. As a result, the proposed NB-LDPC codes represent a strong soft-decision FEC candidate for beyond-100 Gb/s optical transmission systems.
Non-binary LDPC-coded modulation for high-speed optical metro networks with backpropagation
NASA Astrophysics Data System (ADS)
Arabaci, Murat; Djordjevic, Ivan B.; Saunders, Ross; Marcoccia, Roberto M.
2010-01-01
To simultaneously mitigate the linear and nonlinear channel impairments in high-speed optical communications, we propose the use of non-binary low-density parity-check coded modulation in combination with a coarse backpropagation method. By employing backpropagation, we reduce the memory in the channel and in return obtain significant reductions in the complexity of the channel equalizer, which grows exponentially with the channel memory. We then compensate for the remaining channel distortions using forward error correction based on non-binary LDPC codes. We propose the non-binary-LDPC-coded modulation scheme because, compared to the bit-interleaved binary-LDPC-coded modulation scheme employing turbo equalization, the proposed scheme lowers the computational complexity and latency of the overall system while providing impressively larger coding gains.
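A heavily simplified, single-polarization sketch of coarse digital backpropagation of the kind the abstract alludes to: the received field is propagated backward span by span, undoing dispersion in the frequency domain and then applying an inverse nonlinear phase rotation. The fiber parameters, step granularity, and sign convention used here are placeholder assumptions, not the authors' settings.

```python
import numpy as np

def coarse_backpropagation(rx, fs, n_spans, span_len_km, beta2_ps2_km,
                           gamma_w_km, steps_per_span=1):
    """One-step-per-span (coarse) digital backpropagation sketch."""
    n = len(rx)
    w = 2 * np.pi * np.fft.fftfreq(n, d=1 / fs)           # angular frequency, rad/s
    beta2 = beta2_ps2_km * 1e-24                           # ps^2/km -> s^2/km
    dz = span_len_km / steps_per_span
    field = np.asarray(rx, dtype=complex)
    for _ in range(n_spans * steps_per_span):
        # Undo chromatic dispersion of one step (sign convention is an assumption).
        field = np.fft.ifft(np.fft.fft(field) * np.exp(1j * beta2 / 2 * w**2 * dz))
        # Undo the Kerr nonlinear phase accumulated over the step.
        field = field * np.exp(-1j * gamma_w_km * np.abs(field)**2 * dz)
    return field

# Toy usage with placeholder parameters (QPSK samples, 10 x 80 km spans).
fs = 50e9
rng = np.random.default_rng(0)
signal = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=4096) / np.sqrt(2)
out = coarse_backpropagation(signal, fs, n_spans=10, span_len_km=80,
                             beta2_ps2_km=-21.7, gamma_w_km=1.3)
```

After this coarse inversion, the residual (now much shorter) channel memory is what the NB-LDPC coded modulation and reduced-complexity equalizer in the paper are left to handle.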
Zhang, Yequn; Arabaci, Murat; Djordjevic, Ivan B
2012-04-09
Leveraging the advanced coherent optical communication technologies, this paper explores the feasibility of using four-dimensional (4D) nonbinary LDPC-coded modulation (4D-NB-LDPC-CM) schemes for long-haul transmission in future optical transport networks. In contrast to our previous works on 4D-NB-LDPC-CM which considered amplified spontaneous emission (ASE) noise as the dominant impairment, this paper undertakes transmission in a more realistic optical fiber transmission environment, taking into account impairments due to dispersion effects, nonlinear phase noise, Kerr nonlinearities, and stimulated Raman scattering in addition to ASE noise. We first reveal the advantages of using 4D modulation formats in LDPC-coded modulation instead of conventional two-dimensional (2D) modulation formats used with polarization-division multiplexing (PDM). Then we demonstrate that 4D LDPC-coded modulation schemes with nonbinary LDPC component codes significantly outperform not only their conventional PDM-2D counterparts but also the corresponding 4D bit-interleaved LDPC-coded modulation (4D-BI-LDPC-CM) schemes, which employ binary LDPC codes as component codes. We also show that the transmission reach improvement offered by the 4D-NB-LDPC-CM over 4D-BI-LDPC-CM increases as the underlying constellation size and hence the spectral efficiency of transmission increases. Our results suggest that 4D-NB-LDPC-CM can be an excellent candidate for long-haul transmission in next-generation optical networks.
Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M
2010-02-01
In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose using a rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency due to symbol- instead of bit-level processing but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper shows, compared to its prior-art binary counterpart, the proposed NB-LDPC-CM scheme better addresses the needs of future OTNs, namely achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN.
Experimental study of non-binary LDPC coding for long-haul coherent optical QPSK transmissions.
Zhang, Shaoliang; Arabaci, Murat; Yaman, Fatih; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Inada, Yoshihisa; Ogata, Takaaki; Aoki, Yasuhiro
2011-09-26
The performance of a rate-0.8 4-ary LDPC code has been studied in a 50 GHz-spaced 40 Gb/s DWDM system with PDM-QPSK modulation. A net effective coding gain of 10 dB is obtained at a BER of 10^-6. With the aid of time-interleaving polarization multiplexing and MAP detection, 10,560 km transmission over legacy dispersion managed fiber is achieved without any countable errors. The proposed nonbinary quasi-cyclic LDPC code achieves an uncoded BER threshold at 4×10^-2. Potential issues like phase ambiguity and coding length are also discussed when implementing LDPC in current coherent optical systems. © 2011 Optical Society of America
NASA Astrophysics Data System (ADS)
Wang, Liming; Qiao, Yaojun; Yu, Qian; Zhang, Wenbo
2016-04-01
We introduce a watermark non-binary low-density parity-check (NB-LDPC) code scheme, which can estimate the time-varying noise variance by using prior information from watermark symbols, to improve the performance of NB-LDPC codes. Compared with the prior-art counterpart, the watermark scheme can bring an improvement of about 0.25 dB in net coding gain (NCG) at a bit error rate (BER) of 10^-6 and a 36.8-81% reduction in the number of iterations. The proposed scheme thus shows great potential in terms of error-correction performance and decoding efficiency.
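The key idea, estimating a time-varying noise variance from known watermark symbols interspersed in the frame and feeding it to the NB-LDPC decoder, can be sketched as below. The symbol spacing, window size, and BPSK mapping are illustrative assumptions rather than the authors' parameters.

```python
import numpy as np

def estimate_noise_variance(received, watermark_positions, watermark_symbols,
                            window=32):
    """Sliding-window noise-variance estimate from known (watermark) symbols.

    received            : 1-D array of received samples
    watermark_positions : indices of the known symbols inside `received`
    watermark_symbols   : transmitted values at those positions
    Returns a per-watermark-position variance estimate that tracks slow changes.
    """
    errors = np.abs(received[watermark_positions] - watermark_symbols) ** 2
    kernel = np.ones(window) / window            # moving average over `window` pilots
    return np.convolve(errors, kernel, mode="same")

# Toy example: BPSK frame with known pilots every 16 symbols, noise power ramping up.
rng = np.random.default_rng(1)
n = 4096
tx = rng.choice([-1.0, 1.0], size=n)
sigma2_true = np.linspace(0.1, 0.4, n)           # time-varying noise variance
rx = tx + rng.standard_normal(n) * np.sqrt(sigma2_true)
pos = np.arange(0, n, 16)
est = estimate_noise_variance(rx, pos, tx[pos])
print(est[20], est[-20])                         # roughly 0.1 near the start, 0.4 near the end
```

The resulting variance track is what would initialize the decoder's channel reliabilities instead of a single fixed noise figure.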
NASA Astrophysics Data System (ADS)
Bai, Cheng-lin; Cheng, Zhi-hui
2016-09-01
In order to further improve the carrier synchronization estimation range and accuracy at low signal-to-noise ratio (SNR), this paper proposes a code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check (NB-LDPC) codes to study the performance of the polarization-division-multiplexing coherent optical orthogonal frequency division multiplexing (PDM-CO-OFDM) system for quadrature phase shift keying (QPSK) and 16 quadrature amplitude modulation (16-QAM) formats. The simulation results indicate that this algorithm can greatly enlarge the frequency and phase offset estimation ranges and enhance the estimation accuracy, and the bit error rate (BER) performance of the system is improved effectively compared with that of a system employing the traditional NB-LDPC code-aided carrier synchronization algorithm.
FPGA implementation of advanced FEC schemes for intelligent aggregation networks
NASA Astrophysics Data System (ADS)
Zou, Ding; Djordjevic, Ivan B.
2016-02-01
In state-of-the-art fiber-optics communication systems, a fixed forward error correction (FEC) code and constellation size are employed. While it is important to closely approach the Shannon limit by using turbo product codes (TPC) and low-density parity-check (LDPC) codes with soft-decision decoding (SDD), rate-adaptive techniques, which enable increased information rates over short links and reliable transmission over long links, are likely to become more important with ever-increasing network traffic demands. In this invited paper, we describe a rate-adaptive non-binary LDPC coding technique and demonstrate, by FPGA-based emulation, its flexibility and good performance, exhibiting no error floor at BERs down to 10^-15 over the entire code rate range, making it a viable solution for next-generation high-speed intelligent aggregation networks.
Optimum Boundaries of Signal-to-Noise Ratio for Adaptive Code Modulations
2017-11-14
… 1510-1521, Feb. 2015. [2] Pursley, M. B. and Royster, T. C., "Adaptive-rate nonbinary LDPC coding for frequency-hop communications," IEEE … and this can cause a very narrowband noise near the center frequency during USRP signal acquisition and generation, which can cause a high BER … Final Report. Approved for public release; distribution is unlimited. Air Force Research Laboratory, Space Vehicles Directorate, 3550 Aberdeen Ave.
Improving soft FEC performance for higher-order modulations via optimized bit channel mappings.
Häger, Christian; Amat, Alexandre Graell I; Brännström, Fredrik; Alvarado, Alex; Agrell, Erik
2014-06-16
Soft forward error correction with higher-order modulations is often implemented in practice via the pragmatic bit-interleaved coded modulation paradigm, where a single binary code is mapped to a nonbinary modulation. In this paper, we study the optimization of the mapping of the coded bits to the modulation bits for a polarization-multiplexed fiber-optical system without optical inline dispersion compensation. Our focus is on protograph-based low-density parity-check (LDPC) codes which allow for an efficient hardware implementation, suitable for high-speed optical communications. The optimization is applied to the AR4JA protograph family, and further extended to protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full field simulations via the split-step Fourier method are used to verify the analysis. The results show performance gains of up to 0.25 dB, which translate into a possible extension of the transmission reach by roughly up to 8%, without significantly increasing the system complexity.
Qu, Zhen; Djordjevic, Ivan B
2017-08-15
We propose and experimentally demonstrate a two-stage cross-talk mitigation method in an orbital-angular-momentum (OAM)-based free-space optical communication system, which is enabled by combining spatial offset and low-density parity-check (LDPC) coded nonuniform signaling. Different from traditional OAM multiplexing, where the OAM modes are centrally aligned for copropagation, the adjacent OAM modes (OAM states 2 and -6 and OAM states -2 and 6) in our proposed scheme are spatially offset to mitigate the mode cross talk. Different from traditional rectangular modulation formats, which transmit equidistant signal points with uniform probability, the 5-quadrature amplitude modulation (5-QAM) and 9-QAM are introduced to relieve cross-talk-induced performance degradation. The 5-QAM and 9-QAM formats are based on the Huffman coding technique, which can potentially achieve great cross-talk tolerance by combining them with corresponding nonbinary LDPC codes. We demonstrate that cross talk can be reduced by 1.6 dB and 1 dB via spatial offset for OAM states ±2 and ±6, respectively. Compared to quadrature phase shift keying and 8-QAM formats, the LDPC-coded 5-QAM and 9-QAM are able to bring 1.1 dB and 5.4 dB performance improvements in the presence of atmospheric turbulence, respectively.
Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding
NASA Astrophysics Data System (ADS)
Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.
2016-03-01
In this paper, we investigate the joint design of quasi-cyclic low-density parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles, of both type I and type II, are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation effectively combines the cooperation gain and the channel coding gain, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than that of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
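For QC-LDPC codes built from circulant permutation matrices, a length-4 cycle exists exactly when two rows and two columns of the exponent matrix satisfy a simple modular condition. The sketch below checks that condition; it is the standard test for circulant-based QC-LDPC codes, stated independently of the paper's specific type-I/type-II classification, and the example exponent matrices are arbitrary.

```python
from itertools import combinations

def has_girth4(exponents, lift_size):
    """Check a QC-LDPC exponent matrix for length-4 cycles.

    exponents : 2-D list; each entry is a circulant shift, or None for a zero block
    lift_size : size of the circulant permutation matrices
    A 4-cycle exists iff e[i1][j1] - e[i1][j2] + e[i2][j2] - e[i2][j1] == 0 (mod lift_size)
    for some row pair (i1, i2) and column pair (j1, j2) with all four blocks nonzero.
    """
    rows, cols = len(exponents), len(exponents[0])
    for i1, i2 in combinations(range(rows), 2):
        for j1, j2 in combinations(range(cols), 2):
            e = (exponents[i1][j1], exponents[i1][j2],
                 exponents[i2][j2], exponents[i2][j1])
            if None in e:
                continue
            if (e[0] - e[1] + e[2] - e[3]) % lift_size == 0:
                return True
    return False

# Example: 2 x 3 exponent matrices lifted by circulants of size 7.
print(has_girth4([[0, 1, 2], [0, 2, 4]], 7))   # False: no 4-cycles
print(has_girth4([[0, 1, 2], [0, 1, 5]], 7))   # True: columns 0,1 give 0-1+1-0 = 0
```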
A novel concatenated code based on the improved SCG-LDPC code for optical transmission systems
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Xie, Ya; Wang, Lin; Huang, Sheng; Wang, Yong
2013-01-01
Based on optimizing and improving the construction method of the systematically constructed Gallager (SCG) (4, k) code, a novel SCG low-density parity-check SCG-LDPC(3969,3720) code suitable for optical transmission systems is constructed. A novel SCG-LDPC(6561,6240) code with a code rate of 95.1% is then constructed by increasing the length of the SCG-LDPC(3969,3720) code, so that the code rate of the LDPC codes can better meet the high requirements of optical transmission systems. A novel concatenated code is then constructed by concatenating the SCG-LDPC(6561,6240) code and the BCH(127,120) code, with an overall code rate of 94.5%. The simulation results and analyses show that the net coding gain (NCG) of the BCH(127,120)+SCG-LDPC(6561,6240) concatenated code is respectively 2.28 dB and 0.48 dB more than those of the classic RS(255,239) code and the SCG-LDPC(6561,6240) code at a bit error rate (BER) of 10^-7.
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-03-01
A novel lower-complexity construction scheme of quasi-cyclic low-density parity-check (QC-LDPC) codes for optical transmission systems is proposed based on the structure of the parity-check matrix for the Richardson-Urbanke (RU) algorithm. Furthermore, a novel irregular QC-LDPC(4288, 4020) code with a high code rate of 0.937 is constructed by this scheme. The simulation analyses show that the net coding gain (NCG) of the novel irregular QC-LDPC(4288, 4020) code is respectively 2.08 dB, 1.25 dB and 0.29 dB more than those of the classic RS(255, 239) code, the LDPC(32640, 30592) code and the irregular QC-LDPC(3843, 3603) code at a bit error rate (BER) of 10^-6. The irregular QC-LDPC(4288, 4020) code has lower encoding/decoding complexity than the LDPC(32640, 30592) code and the irregular QC-LDPC(3843, 3603) code. The proposed QC-LDPC(4288, 4020) code is thus well suited to the increasing demands of high-speed optical transmission systems.
Low-density parity-check codes for volume holographic memory systems.
Pishro-Nik, Hossein; Rahnavard, Nazanin; Ha, Jeongseok; Fekri, Faramarz; Adibi, Ali
2003-02-10
We investigate the application of low-density parity-check (LDPC) codes in volume holographic memory (VHM) systems. We show that a carefully designed irregular LDPC code has very good performance in VHM systems. We optimize high-rate LDPC codes for the nonuniform error pattern in holographic memories to substantially reduce the bit error rate. The prior knowledge of the noise distribution is used for designing as well as decoding the LDPC codes. We show that these codes have performance superior to that of Reed-Solomon (RS) codes and regular LDPC counterparts. Our simulations show that we can increase the maximum storage capacity of holographic memories by more than 50 percent if we use irregular LDPC codes with soft-decision decoding instead of the conventionally employed RS codes with hard-decision decoding. The performance of these LDPC codes is close to the information-theoretic capacity.
Implementation of continuous-variable quantum key distribution with discrete modulation
NASA Astrophysics Data System (ADS)
Hirano, Takuya; Ichikawa, Tsubasa; Matsubara, Takuto; Ono, Motoharu; Oguri, Yusuke; Namiki, Ryo; Kasai, Kenta; Matsumoto, Ryutaroh; Tsurumaru, Toyohiro
2017-06-01
We have developed a continuous-variable quantum key distribution (CV-QKD) system that employs discrete quadrature-amplitude modulation and homodyne detection of coherent states of light. We experimentally demonstrated automated secure key generation at a rate of 50 kbps when the quantum channel is a 10 km optical fibre. The CV-QKD system utilises a four-state and post-selection protocol and generates a secure key against the entangling cloner attack. We used a pulsed light source of 1550 nm wavelength with a repetition rate of 10 MHz. A commercially available balanced receiver is used to realise shot-noise-limited pulsed homodyne detection. We used a non-binary LDPC code for error correction (reverse reconciliation) and Toeplitz matrix multiplication for privacy amplification. A graphics processing unit (GPU) card is used to accelerate the software-based post-processing.
A good performance watermarking LDPC code used in high-speed optical fiber communication system
NASA Astrophysics Data System (ADS)
Zhang, Wenbo; Li, Chao; Zhang, Xiaoguang; Xi, Lixia; Tang, Xianfeng; He, Wenxue
2015-07-01
A watermarking LDPC code, which is a strategy designed to improve the performance of the traditional LDPC code, is introduced. By inserting some pre-defined watermarking bits into the original LDPC code, we can obtain a more accurate estimate of the noise level in the fiber channel. We then use this estimate to modify the probability distribution function (PDF) used in the initialization of the belief propagation (BP) decoding algorithm. This algorithm was tested in a 128 Gb/s PDM-DQPSK optical communication system, and the results showed that the watermarking LDPC code had better tolerance to polarization mode dispersion (PMD) and nonlinearity than the traditional LDPC code. Also, at the cost of about 2.4% redundancy for the watermarking bits, the decoding efficiency of the watermarking LDPC code is about twice that of the traditional one.
Ensemble Weight Enumerators for Protograph LDPC Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush
2006-01-01
Recently, LDPC codes with projected graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. The derived results on ensemble weight enumerators show that the linear-minimum-distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.
Design of ACM system based on non-greedy punctured LDPC codes
NASA Astrophysics Data System (ADS)
Lu, Zijun; Jiang, Zihong; Zhou, Lin; He, Yucheng
2017-08-01
In this paper, an adaptive coded modulation (ACM) scheme based on rate-compatible LDPC (RC-LDPC) codes is designed. The RC-LDPC codes are constructed by a non-greedy puncturing method which shows good performance in the high code rate region. Moreover, an incremental redundancy scheme for the LDPC-based ACM system over the AWGN channel is proposed. With this scheme, code rates vary from 2/3 to 5/6 and the complexity of the ACM system is lowered. Simulations show that the proposed ACM system achieves evident coding gains while providing higher throughput.
Zou, Ding; Djordjevic, Ivan B
2016-09-05
In this paper, we propose a rate-adaptive FEC scheme based on LDPC codes, together with its software-reconfigurable unified FPGA architecture. By FPGA emulation, we demonstrate that the proposed class of rate-adaptive LDPC codes based on shortening, with overheads from 25% to 42.9%, provides coding gains ranging from 13.08 dB to 14.28 dB at a post-FEC BER of 10^-15 for BPSK transmission. In addition, the proposed rate-adaptive LDPC coding combined with higher-order modulations has been demonstrated, including QPSK, 8-QAM, 16-QAM, 32-QAM, and 64-QAM, covering a wide range of signal-to-noise ratios. Furthermore, we apply unequal error protection by employing different LDPC codes on different bits in 16-QAM and 64-QAM, which results in an additional 0.5 dB gain compared to conventional LDPC-coded modulation with the same overall code rate.
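The rate adaptation by shortening mentioned above can be summarized with a small helper that computes the effective code rate and FEC overhead when s information positions of an (n, k) mother LDPC code are fixed to zero and not transmitted; the mother-code parameters in the example are placeholders, not the codes used in the paper.

```python
def shortened_code_parameters(n, k, shortened):
    """Effective parameters after shortening an (n, k) mother code.

    `shortened` information positions are set to known zeros at the encoder
    and removed before transmission, so both n and k shrink by the same amount.
    """
    k_eff = k - shortened
    n_eff = n - shortened
    rate = k_eff / n_eff
    overhead = (n_eff - k_eff) / k_eff          # parity bits per information bit
    return rate, overhead

# Example with a hypothetical (n, k) = (36000, 28800) mother code (rate 0.8, 25% OH).
for s in (0, 1800, 3600):
    r, oh = shortened_code_parameters(36000, 28800, s)
    print(f"shortened {s:5d}: rate = {r:.3f}, overhead = {100*oh:.1f}%")
```

Because the parity length stays fixed while the payload shrinks, increasing the amount of shortening raises the overhead, which is exactly the knob a rate-adaptive transceiver turns.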
LDPC coded OFDM over the atmospheric turbulence channel.
Djordjevic, Ivan B; Vasic, Bane; Neifeld, Mark A
2007-05-14
Low-density parity-check (LDPC) coded optical orthogonal frequency division multiplexing (OFDM) is shown to significantly outperform LDPC-coded on-off keying (OOK) over the atmospheric turbulence channel in terms of both coding gain and spectral efficiency. In the strong turbulence regime, at a bit-error rate of 10^-5, the coding gain improvement of the LDPC-coded single-sideband unclipped-OFDM system with 64 subcarriers is larger than the coding gain of the LDPC-coded OOK system by 20.2 dB for quadrature-phase-shift keying (QPSK) and by 23.4 dB for binary-phase-shift keying (BPSK).
A novel QC-LDPC code based on the finite field multiplicative group for optical communications
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Xu, Liang; Tong, Qing-zhen
2013-09-01
A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the finite field multiplicative group, which offers easier construction, more flexible adjustment of code length and code rate, and lower encoding/decoding complexity. Moreover, a regular QC-LDPC(5334,4962) code is constructed. The simulation results show that the constructed QC-LDPC(5334,4962) code achieves good error-correction performance over the additive white Gaussian noise (AWGN) channel with iterative sum-product algorithm (SPA) decoding. At a bit error rate (BER) of 10^-6, the net coding gain (NCG) of the constructed QC-LDPC(5334,4962) code is 1.8 dB, 0.9 dB and 0.2 dB more than those of the classic RS(255,239) code in ITU-T G.975, the LDPC(32640,30592) code in ITU-T G.975.1 and the SCG-LDPC(3969,3720) code constructed by the random method, respectively. It is therefore well suited for optical communication systems.
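As a companion to the construction described above, the expansion step common to all QC-LDPC constructions can be sketched directly: each entry of an exponent matrix is replaced by a cyclically shifted identity matrix (or an all-zero block), yielding the full binary parity-check matrix. The small exponent matrix below is illustrative, not the one derived from the finite-field multiplicative group in the paper.

```python
import numpy as np

def expand_qc_ldpc(exponents, lift_size):
    """Expand a QC-LDPC exponent matrix into a binary parity-check matrix.

    Each entry e >= 0 becomes the identity matrix cyclically shifted by e
    columns; entries equal to -1 become all-zero blocks.
    """
    exponents = np.asarray(exponents)
    rows, cols = exponents.shape
    h = np.zeros((rows * lift_size, cols * lift_size), dtype=np.uint8)
    eye = np.eye(lift_size, dtype=np.uint8)
    for i in range(rows):
        for j in range(cols):
            e = exponents[i, j]
            if e >= 0:
                block = np.roll(eye, shift=int(e), axis=1)   # shifted identity
                h[i*lift_size:(i+1)*lift_size, j*lift_size:(j+1)*lift_size] = block
    return h

# Toy exponent matrix (2 x 4 blocks) lifted by 5 -> a 10 x 20 binary H.
E = [[0, 1, 2, -1],
     [3, -1, 0, 4]]
H = expand_qc_ldpc(E, 5)
print(H.shape, H.sum(axis=1))   # (10, 20); every row has weight 3
```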
RETRACTED — PMD mitigation through interleaving LDPC codes with polarization scramblers
NASA Astrophysics Data System (ADS)
Han, Dahai; Chen, Haoran; Xi, Lixia
2012-11-01
The combination of forward error correction (FEC) and distributed fast polarization scramblers (D-FPSs) has proven to be an effective method to mitigate polarization mode dispersion (PMD) in high-speed optical fiber communication systems. Low-density parity-check (LDPC) codes are newly introduced into the PMD mitigation scheme with D-FPSs in this paper as one of the promising FEC codes to achieve better performance. The scrambling speed of the FPS for the LDPC(2040, 1903) coded system is discussed, and a reasonable speed of 10 MHz is obtained from the simulation results. For easy application in practical large-scale integrated (LSI) circuits, the number of iterations in decoding the LDPC codes is also investigated. The PMD tolerance and cut-off optical signal-to-noise ratio (OSNR) of the LDPC codes are compared with Reed-Solomon (RS) codes under different conditions. In the simulations, interleaving the LDPC codes brings an incremental improvement in error correction, and the PMD tolerance is 10 ps at OSNR = 11.4 dB. The results show that LDPC codes can substitute for traditional RS codes with D-FPSs, and all of the executable code files are open to researchers who have a practical LSI platform for PMD mitigation.
PMD mitigation through interleaving LDPC codes with polarization scramblers
NASA Astrophysics Data System (ADS)
Han, Dahai; Chen, Haoran; Xi, Lixia
2013-09-01
The combination of forward error correction (FEC) and distributed fast polarization scramblers (D-FPSs) has proven to be an effective method to mitigate polarization mode dispersion (PMD) in high-speed optical fiber communication systems. Low-density parity-check (LDPC) codes are newly introduced into the PMD mitigation scheme with D-FPSs in this article as one of the promising FEC codes to achieve better performance. The scrambling speed of the FPS for the LDPC(2040, 1903) coded system is discussed, and a reasonable speed of 10 MHz is obtained from the simulation results. For easy application in practical large-scale integrated (LSI) circuits, the number of iterations in decoding the LDPC codes is also investigated. The PMD tolerance and cut-off optical signal-to-noise ratio (OSNR) of the LDPC codes are compared with Reed-Solomon (RS) codes under different conditions. In the simulations, interleaving the LDPC codes brings an incremental improvement in error correction, and the PMD tolerance is 10 ps at OSNR = 11.4 dB. The results show that LDPC codes can substitute for traditional RS codes with D-FPSs, and all of the executable code files are open to researchers who have a practical LSI platform for PMD mitigation.
A novel construction method of QC-LDPC codes based on CRT for optical communications
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-05-01
A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the Chinese remainder theorem (CRT). The method can not only increase the code length without reducing the girth, but also greatly enhance the code rate, so it is easy to construct high-rate codes. The simulation results show that, at a bit error rate (BER) of 10^-7, the net coding gain (NCG) of the regular QC-LDPC(4851, 4546) code is respectively 2.06 dB, 1.36 dB, 0.53 dB and 0.31 dB more than those of the classic RS(255, 239) code in ITU-T G.975, the LDPC(32640, 30592) code in ITU-T G.975.1, the QC-LDPC(3664, 3436) code constructed by the improved combining construction method based on the CRT, and the irregular QC-LDPC(3843, 3603) code constructed by the construction method based on the Galois field (GF(q)) multiplicative group. Furthermore, all these five codes have the same code rate of 0.937. Therefore, the regular QC-LDPC(4851, 4546) code constructed by the proposed construction method has excellent error-correction performance and is well suited for optical transmission systems.
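The CRT-based idea of combining component exponents defined modulo two coprime circulant sizes into one exponent for a larger circulant can be illustrated with the standard Chinese-remainder reconstruction below; the component values are arbitrary examples, not the paper's codes.

```python
def crt_combine(r1, m1, r2, m2):
    """Combine residues r1 (mod m1) and r2 (mod m2), with gcd(m1, m2) = 1,
    into the unique residue modulo m1*m2 (Chinese remainder theorem)."""
    inv = pow(m1, -1, m2)                 # modular inverse (Python 3.8+)
    return (r1 + m1 * ((r2 - r1) * inv % m2)) % (m1 * m2)

# Example: shifts from two component circulants of coprime sizes 31 and 32
# map to a single shift for a circulant of size 31 * 32 = 992.
e = crt_combine(7, 31, 20, 32)
print(e, e % 31, e % 32)                  # combined exponent, then 7 and 20
```

Because the mapping is a bijection between residue pairs and residues modulo the product, short-cycle structure of the component exponent matrices carries over in a controlled way, which is why the combined code length grows without reducing the girth.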
Construction of a new regular LDPC code for optical transmission systems
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Tong, Qing-zhen; Xu, Liang; Huang, Sheng
2013-05-01
A novel construction method of the parity-check matrix for regular low-density parity-check (LDPC) codes is proposed. A novel regular systematically constructed Gallager SCG-LDPC(3969,3720) code with a code rate of 93.7% and a redundancy of 6.69% is constructed. The simulation results show that the net coding gain (NCG) and the distance from the Shannon limit of the novel SCG-LDPC(3969,3720) code can respectively be improved by about 1.93 dB and 0.98 dB at a bit error rate (BER) of 10^-8, compared with those of the classic RS(255,239) code in the ITU-T G.975 recommendation and the LDPC(32640,30592) code in the ITU-T G.975.1 recommendation with the same code rate of 93.7% and the same redundancy of 6.69%. Therefore, the proposed regular SCG-LDPC(3969,3720) code has excellent performance and is well suited for high-speed long-haul optical transmission systems.
NASA Astrophysics Data System (ADS)
Huang, Sheng; Ao, Xiang; Li, Yuan-yuan; Zhang, Rui
2016-09-01
In order to meet the needs of the rapid development of high-speed optical communication systems, a construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes based on the multiplicative group of a finite field is proposed. The Tanner graph of the parity-check matrix of the code constructed by this method has no cycles of length 4, ensuring that the obtained code has good distance properties. Simulation results show that at a bit error rate (BER) of 10^-6, in the same simulation environment, the net coding gain (NCG) of the proposed QC-LDPC(3780, 3540) code with a code rate of 93.7% is improved by 2.18 dB and 1.6 dB respectively compared with those of the RS(255, 239) code in ITU-T G.975 and the LDPC(32640, 30592) code in ITU-T G.975.1. In addition, the NCG of the proposed QC-LDPC(3780, 3540) code is respectively 0.2 dB and 0.4 dB higher compared with those of the SG-QC-LDPC(3780, 3540) code based on two different subgroups of a finite field and the AS-QC-LDPC(3780, 3540) code based on two arbitrary sets of a finite field. Thus, the proposed QC-LDPC(3780, 3540) code can be well applied in optical communication systems.
Fast QC-LDPC code for free space optical communication
NASA Astrophysics Data System (ADS)
Wang, Jin; Zhang, Qi; Udeh, Chinonso Paschal; Wu, Rangzhong
2017-02-01
Free-space optical (FSO) communication systems use the atmosphere as the propagation medium, so atmospheric turbulence effects lead to multiplicative noise related to the signal intensity. In order to suppress the signal fading induced by multiplicative noise, we propose a fast quasi-cyclic (QC) low-density parity-check (LDPC) code for FSO communication systems. As linear block codes based on sparse matrices, QC-LDPC codes can perform extremely close to the Shannon limit. Current studies of LDPC codes in FSO communications mainly focus on the Gaussian channel and the Rayleigh channel; in this study, the LDPC code is designed over the atmospheric turbulence channel, which is neither Gaussian nor Rayleigh and is closer to the practical situation. Based on the characteristics of the atmospheric channel, modeled by the log-normal and K distributions, we design a special QC-LDPC code and derive the log-likelihood ratio (LLR). An irregular QC-LDPC code for fast encoding with variable rates is proposed in this paper. The proposed code achieves the excellent performance of LDPC codes and exhibits high efficiency at low rates, stability at high rates and a small number of iterations. The results of belief propagation (BP) decoding show that the bit error rate (BER) decreases markedly as the signal-to-noise ratio (SNR) increases. Therefore, LDPC channel coding can effectively improve the performance of FSO systems. Moreover, the post-decoding BER keeps decreasing as the SNR increases, without exhibiting an error floor.
Discussion on LDPC Codes and Uplink Coding
NASA Technical Reports Server (NTRS)
Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio
2007-01-01
This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes, and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.
Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More
NASA Technical Reports Server (NTRS)
Kou, Yu; Lin, Shu; Fossorier, Marc
1999-01-01
Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes had been reported, and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high-rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite-geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit-flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
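The remark about linear feedback shift registers corresponds to standard systematic cyclic encoding: the parity bits are the remainder of x^(n-k)·m(x) divided by the generator polynomial g(x). A bit-level software sketch of that division is below, shown on the small (7,4) cyclic Hamming code purely for illustration; a finite-geometry LDPC generator polynomial would be used the same way.

```python
def cyclic_encode(msg_bits, gen_bits, n):
    """Systematic encoding of a cyclic (n, k) code over GF(2).

    msg_bits : list of k message bits (highest-degree coefficient first)
    gen_bits : generator polynomial coefficients, highest degree first
    Returns the n-bit codeword = message followed by parity,
    where parity = remainder of x^(n-k) * m(x) divided by g(x).
    """
    k = len(msg_bits)
    r = n - k                                   # parity length = deg g(x)
    reg = list(msg_bits) + [0] * r              # x^(n-k) * m(x)
    for i in range(k):                          # long division modulo 2
        if reg[i]:
            for j, g in enumerate(gen_bits):
                reg[i + j] ^= g
    return list(msg_bits) + reg[k:]             # systematic codeword

# (7,4) cyclic Hamming code with g(x) = x^3 + x + 1  ->  [1, 0, 1, 1]
print(cyclic_encode([1, 0, 0, 0], [1, 0, 1, 1], n=7))   # [1, 0, 0, 0, 1, 0, 1]
```

In hardware the same division is performed by a linear feedback shift register whose taps are the coefficients of g(x), which is why cyclic structure makes encoding of these long codes cheap.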
FPGA implementation of high-performance QC-LDPC decoder for optical communications
NASA Astrophysics Data System (ADS)
Zou, Ding; Djordjevic, Ivan B.
2015-01-01
Forward error correction is one of the key technologies enabling next-generation high-speed fiber optical communications. Quasi-cyclic (QC) low-density parity-check (LDPC) codes have been considered as one of the promising candidates due to their large coding gain and low implementation complexity. In this paper, we present our designed QC-LDPC code with girth 10 and 25% overhead based on a pairwise balanced design. By FPGA-based emulation, we demonstrate that the 5-bit soft-decision LDPC decoder can achieve an 11.8 dB net coding gain with no error floor at a BER of 10^-15, avoiding the use of any outer code or post-processing method. We believe that the proposed single QC-LDPC code is a promising solution for 400 Gb/s optical communication systems and beyond.
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Zhou, Guang-xiang; Gao, Wen-chun; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-01-01
According to the requirements of the increasing development of optical transmission systems, a novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes based on a subgroup of the finite field multiplicative group is proposed. This construction method can effectively avoid girth-4 cycles and has advantages such as simpler construction, easier implementation, lower encoding/decoding complexity, better girth properties and more flexible adjustment of the code length and code rate. The simulation results show that the error-correction performance of the QC-LDPC(3780, 3540) code with a code rate of 93.7% constructed by the proposed method is excellent; its net coding gain is respectively 0.3 dB, 0.55 dB, 1.4 dB and 1.98 dB higher than those of the QC-LDPC(5334, 4962) code constructed by the method based on the inverse element characteristics of the finite field multiplicative group, the SCG-LDPC(3969, 3720) code constructed by the systematically constructed Gallager (SCG) random construction method, the LDPC(32640, 30592) code in ITU-T G.975.1 and the classic RS(255, 239) code widely used in optical transmission systems in ITU-T G.975, at a bit error rate (BER) of 10^-7. Therefore, the constructed QC-LDPC(3780, 3540) code is well suited for optical transmission systems.
Efficient Signal, Code, and Receiver Designs for MIMO Communication Systems
2003-06-01
… Concatenation of a tilted-QAM inner code with an LDPC outer code with a two-component iterative soft-decision decoder … coding for AWGN channels has long been studied. There are well-known soft-decision codes like turbo codes and LDPC codes that can approach capacity to … bits) low-density parity-check (LDPC) code. 2. The coded bits are randomly interleaved so that nearby bits go through different sub-channels, and are …
Optical LDPC decoders for beyond 100 Gbits/s optical transmission.
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2009-05-01
We present an optical low-density parity-check (LDPC) decoder suitable for implementation above 100 Gbits/s, which provides large coding gains when based on large-girth LDPC codes. We show that a basic building block, the probabilities multiplier circuit, can be implemented using a Mach-Zehnder interferometer, and we propose a corresponding probabilistic-domain sum-product algorithm (SPA). We perform simulations of a fully parallel implementation employing girth-10 LDPC codes and the proposed SPA. The girth-10 LDPC(24015,19212) code of rate 0.8 outperforms the BCH(128,113)xBCH(256,239) turbo-product code of rate 0.82 by 0.91 dB (for binary phase-shift keying at 100 Gbits/s and a bit error rate of 10^-9), and provides a net effective coding gain of 10.09 dB.
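A small sketch of the probability-domain sum-product updates that a "probabilities multiplier" building block supports for a binary LDPC code: the variable node multiplies incoming probabilities and normalizes, and the check node combines them through the parity rule. This is the textbook SPA in the probability domain, not a model of the Mach-Zehnder circuit itself.

```python
import numpy as np

def variable_node_update(channel_p1, incoming_p1):
    """Probability that the variable equals 1, given the channel value and the
    incoming check-to-variable messages (all expressed as P(bit = 1))."""
    p1 = channel_p1 * np.prod(incoming_p1)
    p0 = (1 - channel_p1) * np.prod(1 - np.asarray(incoming_p1))
    return p1 / (p0 + p1)                       # normalization

def check_node_update(incoming_p1):
    """Probability that a bit must equal 1 for the parity check to be satisfied,
    given P(bit = 1) of the other bits participating in the check."""
    prod = np.prod(1 - 2 * np.asarray(incoming_p1))
    return 0.5 * (1 - prod)

# Toy messages.
print(variable_node_update(0.7, [0.6, 0.8]))    # ~0.93
print(check_node_update([0.1, 0.2, 0.3]))       # ~0.40
```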
Wu, Menglong; Han, Dahai; Zhang, Xiang; Zhang, Feng; Zhang, Min; Yue, Guangxin
2014-03-10
We have implemented a modified low-density parity-check (LDPC) codec algorithm in an ultraviolet (UV) communication system. Simulations are conducted with measured parameters to evaluate the LDPC-based UV system performance. Moreover, LDPC (960, 480) and RS (18, 10) codes are implemented and tested via a non-line-of-sight (NLOS) UV test bed. The experimental results are in agreement with the simulations and suggest that, for the given power and a 10^-3 bit error rate (BER), in comparison with an uncoded system, the average communication distance increases by 32% with the RS code and by 78% with the LDPC code.
Throughput Optimization Via Adaptive MIMO Communications
2006-05-30
End-to-end MATLAB packet simulation platform. Low-density parity-check code (LDPCC). Field trials with Silvus DSP MIMO testbed. High mobility … incorporate advanced LDPC (low-density parity-check) codes. Realizing that the power of LDPC codes comes at the price of decoder complexity, we also … Channel coding: binary convolutional code or LDPC; packet length: 0 to 2^16-1 bytes; coding rate: 1/2, 2/3, 3/4, 5/6; MIMO channel training length: 0-4 symbols.
NASA Astrophysics Data System (ADS)
He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin
2015-09-01
In this paper, a Golay complementary training sequence (TS)-based symbol synchronization scheme is proposed and experimentally demonstrated in a multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system with a variable-rate low-density parity-check (LDPC) code. Meanwhile, the coding gain and spectral efficiency of the variable-rate LDPC-coded MB-OFDM UWBoF system are investigated. By utilizing the non-periodic auto-correlation property of the Golay complementary pair, the start point of the LDPC-coded MB-OFDM UWB signal can be estimated accurately. After 100 km standard single-mode fiber (SSMF) transmission, at a bit error rate of 1×10^-3, the experimental results show that short-block-length 64QAM-LDPC coding provides a coding gain of 4.5 dB, 3.8 dB and 2.9 dB for code rates of 62.5%, 75% and 87.5%, respectively.
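The synchronization idea rests on the defining property of a Golay complementary pair: the aperiodic autocorrelations of the two sequences sum to a delta function. The sketch below generates such a pair recursively, verifies the property, and locates a frame start by a simple correlation with the concatenated pair; the sequence length, noise level, and this simplified correlator are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def golay_pair(m):
    """Binary (+/-1) Golay complementary pair of length 2**m (recursive rule)."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(m):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def aperiodic_autocorr(x):
    full = np.correlate(x, x, mode="full")
    return full[len(x) - 1:]                      # non-negative lags only

a, b = golay_pair(5)                              # length-32 pair
s = aperiodic_autocorr(a) + aperiodic_autocorr(b)
print(s[:4])                                      # [64. 0. 0. 0.] -> delta-like

# Frame synchronization: correlate the received stream with the training sequence
# and pick the correlation peak as the estimated start point.
rng = np.random.default_rng(2)
start = 100
ts = np.concatenate([a, b])                       # training sequence sent back-to-back
rx = np.concatenate([0.3 * rng.standard_normal(start), ts,
                     0.3 * rng.standard_normal(200)])
rx += 0.1 * rng.standard_normal(len(rx))
corr = np.correlate(rx, ts, mode="valid")
print(int(np.argmax(np.abs(corr))))               # ~100, the estimated start point
```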
Cooperative MIMO communication at wireless sensor network: an error correcting code approach.
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller. Also the lower encoding rate for LDPC code offers better error characteristics.
High-throughput GPU-based LDPC decoding
NASA Astrophysics Data System (ADS)
Chang, Yang-Lang; Chang, Cheng-Chun; Huang, Min-Yu; Huang, Bormin
2010-08-01
Low-density parity-check (LDPC) codes are linear block codes known to approach the Shannon limit via the iterative sum-product algorithm. LDPC codes have been adopted in most current communication systems such as DVB-S2, WiMAX, Wi-Fi and 10GBASE-T. The need for reliable and flexible communication links across a wide variety of communication standards and configurations has inspired demand for high-performance and flexible computing. Accordingly, finding a fast and reconfigurable development platform for designing high-throughput LDPC decoders has become important, especially for rapidly changing communication standards and configurations. In this paper, a new graphics-processing-unit (GPU) LDPC decoding platform with asynchronous data transfer is proposed to realize this practical implementation. Experimental results showed that the proposed GPU-based decoder achieved a 271x speedup compared to its CPU-based counterpart. It can serve as a high-throughput LDPC decoder.
Low-Density Parity-Check (LDPC) Codes Constructed from Protographs
NASA Astrophysics Data System (ADS)
Thorpe, J.
2003-08-01
We introduce a new class of low-density parity-check (LDPC) codes constructed from a template called a protograph. The protograph serves as a blueprint for constructing LDPC codes of arbitrary size whose performance can be predicted by analyzing the protograph. We apply standard density evolution techniques to predict the performance of large protograph codes. Finally, we use a randomized search algorithm to find good protographs.
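A minimal sketch of the "copy-and-permute" (lifting) operation: each edge type of the protograph is replicated N times and the copies are joined through a random permutation, producing the parity-check matrix of a derived code N times larger. The base matrix below is a generic example, not one of the optimized protographs from the article, and it is restricted to 0/1 entries; protographs with parallel edge types additionally need collision-free permutations per edge.

```python
import numpy as np

def lift_protograph(base, n_copies, rng=None):
    """Copy-and-permute lifting of a 0/1 protograph base (bi-adjacency) matrix.

    base[i][j] = 1 means an edge between check type i and variable type j;
    each such edge type is expanded into `n_copies` edges connected through
    an independent random permutation.
    """
    rng = np.random.default_rng(rng)
    base = np.asarray(base)
    m, n = base.shape
    h = np.zeros((m * n_copies, n * n_copies), dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            if base[i, j]:
                perm = rng.permutation(n_copies)
                h[i * n_copies + np.arange(n_copies), j * n_copies + perm] = 1
    return h

# Protograph with 2 check types and 4 variable types, lifted by a factor of 8.
B = [[1, 1, 1, 0],
     [0, 1, 1, 1]]
H = lift_protograph(B, 8, rng=0)
print(H.shape)                                    # (16, 32)
```

Because the derived graph inherits the protograph's local structure, density evolution on the small protograph predicts the behavior of the much larger lifted code, which is the analysis route the abstract describes.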
A Simulation Testbed for Adaptive Modulation and Coding in Airborne Telemetry
2014-05-29
… its modulation waveforms and LDPC for the FEC codes. It also uses several sets of published telemetry channel sounding data as its channel models. Within the context … low-density parity-check (LDPC) codes with tunable code rates, and both static and dynamic telemetry channel models are included. In an effort to maximize the …
Spatially coupled low-density parity-check error correction for holographic data storage
NASA Astrophysics Data System (ADS)
Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro
2017-09-01
A spatially coupled low-density parity-check (SC-LDPC) code was considered for holographic data storage, and its superiority was studied by simulation. The simulations show that the performance of the SC-LDPC code depends on the lifting number; when the lifting number is over 100, the SC-LDPC code shows better error correctability than an irregular LDPC code. The SC-LDPC code is applied to the 5:9 modulation code, which is one of the differential codes. In simulation, the error-free point is near 2.8 dB, and error rates of over 10^-1 can be corrected. Based on these simulation results, this error correction code was applied to actual holographic data storage test equipment. The results showed that an error rate of 8×10^-2 can be corrected; furthermore, the code works effectively and shows good error correctability.
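The spatial coupling itself can be sketched at the base-matrix level: a block base matrix B is split into components B0 and B1 with B0 + B1 = B, and copies are chained along a diagonal band over the coupling length. The split, coupling length, and memory-1 structure below are illustrative assumptions, not the construction used for the storage experiments.

```python
import numpy as np

def spatially_couple(b0, b1, coupling_len):
    """Build a spatially coupled base matrix from the split B = B0 + B1.

    Copies of [B0; B1] are placed along a diagonal band over `coupling_len`
    positions, producing the characteristic termination at both chain ends.
    """
    b0, b1 = np.asarray(b0), np.asarray(b1)
    mb, nb = b0.shape
    rows = (coupling_len + 1) * mb               # extra block row for termination
    cols = coupling_len * nb
    coupled = np.zeros((rows, cols), dtype=b0.dtype)
    for t in range(coupling_len):
        coupled[t*mb:(t+1)*mb, t*nb:(t+1)*nb] += b0
        coupled[(t+1)*mb:(t+2)*mb, t*nb:(t+1)*nb] += b1
    return coupled

# Split a (1 x 3) regular base matrix B = [2, 2, 2] into B0 = B1 = [1, 1, 1].
B0 = [[1, 1, 1]]
B1 = [[1, 1, 1]]
print(spatially_couple(B0, B1, coupling_len=4))   # 5 x 12 banded base matrix
```

The coupled base matrix (lifted by the "lifting number" discussed in the abstract) gives the decoder the wave-like decoding behavior that underlies the improved error correctability.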
Parallel Subspace Subcodes of Reed-Solomon Codes for Magnetic Recording Channels
ERIC Educational Resources Information Center
Wang, Han
2010-01-01
Read channel architectures based on a single low-density parity-check (LDPC) code are being considered for the next generation of hard disk drives. However, LDPC-only solutions suffer from the error floor problem, which may compromise reliability, if not handled properly. Concatenated architectures using an LDPC code plus a Reed-Solomon (RS) code…
Protograph LDPC Codes for the Erasure Channel
NASA Technical Reports Server (NTRS)
Pollara, Fabrizio; Dolinar, Samuel J.; Divsalar, Dariush
2006-01-01
This viewgraph presentation reviews the use of protograph Low Density Parity Check (LDPC) codes for erasure channels. A protograph is a Tanner graph with a relatively small number of nodes. A "copy-and-permute" operation can be applied to the protograph to obtain larger derived graphs of various sizes. For very high code rates and short block sizes, a low asymptotic threshold criterion is not the best approach to designing LDPC codes. Simple protographs with much regularity and low maximum node degrees appear to be the best choices. Quantized-rateless protograph LDPC codes can be built by careful design of the protograph such that multiple puncturing patterns will still permit message-passing decoding to proceed.
Evaluation of large girth LDPC codes for PMD compensation by turbo equalization.
Minkov, Lyubomir L; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Kueppers, Franko
2008-08-18
Large-girth quasi-cyclic LDPC codes have been experimentally evaluated for use in PMD compensation by turbo equalization for a 10 Gb/s NRZ optical transmission system, observing one sample per bit. The net effective coding gain improvement of the girth-10, rate-0.906 code of length 11936 over the maximum a posteriori probability (MAP) detector for a differential group delay of 125 ps is 6.25 dB at a BER of 10^-6. The girth-10 LDPC code of rate 0.8 outperforms the girth-10 code of rate 0.906 by 2.75 dB, and provides a net effective coding gain improvement of 9 dB at the same BER. It is experimentally determined that girth-10 LDPC codes of length around 15000 approach the channel capacity limit within 1.25 dB.
Coded Cooperation for Multiway Relaying in Wireless Sensor Networks †
Si, Zhongwei; Ma, Junyang; Thobaben, Ragnar
2015-01-01
Wireless sensor networks have been considered as an enabling technology for constructing smart cities. One important feature of wireless sensor networks is that the sensor nodes collaborate in some manner for communications. In this manuscript, we focus on the model of multiway relaying with full data exchange where each user wants to transmit and receive data to and from all other users in the network. We derive the capacity region for this specific model and propose a coding strategy through coset encoding. To obtain good performance with practical codes, we choose spatially-coupled LDPC (SC-LDPC) codes for the coded cooperation. In particular, for the message broadcasting from the relay, we construct multi-edge-type (MET) SC-LDPC codes by repeatedly applying coset encoding. Due to the capacity-achieving property of the SC-LDPC codes, we prove that the capacity region can theoretically be achieved by the proposed MET SC-LDPC codes. Numerical results with finite node degrees are provided, which show that the achievable rates approach the boundary of the capacity region in both binary erasure channels and additive white Gaussian noise channels. PMID:26131675
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-15
In order to improve the performance of hard-decision decoding algorithms for non-binary low-density parity-check (NB-LDPC) codes and to reduce the decoding complexity, a sum-of-the-magnitude hard-decision decoding algorithm based on loop update detection is proposed. This also helps ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard-decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitudes is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable nodes is considered and a loop update detection algorithm is introduced. The bits corresponding to the erroneous codeword are flipped multiple times, searched in the order of the most likely error probability, to finally find the correct codeword. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.
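The flipping-based hard-decision idea can be illustrated with the simplest binary analogue (Gallager-style bit flipping on a binary LDPC code): compute the syndrome, flip the bits participating in the most unsatisfied checks, and repeat. The nonbinary, reliability-weighted scheme in the paper refines exactly this loop; the sketch below is only the binary analogue, not the proposed algorithm.

```python
import numpy as np

def bit_flip_decode(H, hard_bits, max_iter=50):
    """Gallager-style bit-flipping decoding of a binary LDPC-like code.

    H         : (m, n) binary parity-check matrix (numpy array of 0/1)
    hard_bits : length-n hard-decision vector from the channel
    """
    x = hard_bits.copy()
    for _ in range(max_iter):
        syndrome = H.dot(x) % 2
        if not syndrome.any():
            return x, True                         # all checks satisfied
        votes = H.T.dot(syndrome)                  # unsatisfied checks per bit
        x = np.where(votes == votes.max(), 1 - x, x)   # flip the worst bits
    return x, False

# Toy example: (7,4) Hamming code viewed as a small sparse code, one bit error.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = np.zeros(7, dtype=int)                  # all-zero codeword sent
received[2] = 1                                    # single bit flip
decoded, ok = bit_flip_decode(H, received)
print(decoded, ok)                                 # all zeros, True
```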
Batshon, Hussam G; Djordjevic, Ivan; Schmidt, Ted
2010-09-13
We propose a subcarrier-multiplexed four-dimensional LDPC bit-interleaved coded modulation scheme that is capable of achieving beyond 480 Gb/s single-channel transmission over optical channels. The subcarrier-multiplexed four-dimensional LDPC coded modulation scheme outperforms the corresponding dual-polarization schemes by up to 4.6 dB in OSNR at a BER of 10^-8.
Self-Configuration and Localization in Ad Hoc Wireless Sensor Networks
2010-08-31
… Goddard. I. SUMMARY OF CONTRIBUTIONS: We explored the error mechanisms of iterative decoding of low-density parity-check (LDPC) codes. This work has resulted … important problems in the area of channel coding, as their unpredictable behavior has impeded the deployment of LDPC codes in many real-world applications. We … tree-based decoders of LDPC codes, including the extrinsic tree decoder, and an investigation into their performance and bounding capabilities [5], [6] …
FPGA implementation of low complexity LDPC iterative decoder
NASA Astrophysics Data System (ADS)
Verma, Shivani; Sharma, Sanjay
2016-07-01
Low-density parity-check (LDPC) codes, proposed by Gallager, emerged as a class of codes which can yield very good performance on the additive white Gaussian noise channel as well as on the binary symmetric channel. LDPC codes have gained importance due to their capacity-approaching property and excellent performance in noisy channels. The belief propagation (BP) algorithm and its approximations, most notably min-sum, are popular iterative decoding algorithms used for LDPC and turbo codes. The trade-off between hardware complexity and decoding throughput is a critical factor in the implementation of a practical decoder. This article presents an introduction to LDPC codes and their various decoding algorithms, followed by the realisation of an LDPC decoder using a simplified message-passing algorithm and a partially parallel decoder architecture. The simplified message-passing algorithm is proposed as a trade-off between low decoding complexity and decoder performance; it greatly reduces the routing and check-node complexity of the decoder. The partially parallel decoder architecture offers high speed and reduced complexity. The improved design of the decoder achieves a maximum symbol throughput of 92.95 Mbps with a maximum of 18 decoding iterations. The article presents the implementation of a 9216-bit, rate-1/2, (3, 6) LDPC decoder on a Xilinx XC3D3400A device from the Spartan-3A DSP family.
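Simplified message passing of the kind referenced above is typically a min-sum-style approximation of belief propagation, where the check-node update replaces the exact tanh rule with a sign-and-minimum rule. The sketch below contrasts the two check-node rules for a single extrinsic message, independently of the article's specific simplification; the normalization factor is a common but assumed choice.

```python
import numpy as np

def check_node_exact(llrs_in):
    """Exact sum-product check-node output LLR (tanh rule) for one extrinsic edge,
    given the incoming LLRs of all *other* edges on the check."""
    t = np.tanh(np.asarray(llrs_in, dtype=float) / 2)
    return 2 * np.arctanh(np.prod(t))

def check_node_min_sum(llrs_in, scale=0.75):
    """Min-sum approximation (with an optional normalization factor `scale`)."""
    llrs_in = np.asarray(llrs_in, dtype=float)
    sign = np.prod(np.sign(llrs_in))
    return scale * sign * np.min(np.abs(llrs_in))

msgs = [1.2, -0.4, 2.5]
print(check_node_exact(msgs))       # approx -0.18
print(check_node_min_sum(msgs))     # -0.30
```

Replacing products of tanh values with a comparison tree is what removes most of the routing and check-node arithmetic in hardware, at a small cost in coding gain that the normalization factor partially recovers.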
Scalable video transmission over Rayleigh fading channels using LDPC codes
NASA Astrophysics Data System (ADS)
Bansal, Manu; Kondi, Lisimachos P.
2005-03-01
In this paper, we investigate the important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining good decoded video quality and resilience to channel impairments. Our system consists of a video codec based on the 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity-check (LDPC) codes for channel error protection. The first method uses the serial concatenation of a constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. A cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use a product code structure consisting of a constant-rate LDPC/CRC code across the rows of the "blocks" of source data and an erasure-correcting systematic Reed-Solomon (RS) code as the column code. In both schemes introduced here, we use fixed-length source packets protected with unequal forward error correction coding, ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. A rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions, and both of the proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform more conventional schemes such as those employing RCPC/CRC.
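The rate-distortion optimized allocation described above can be illustrated with a generic Lagrangian selection loop: for a given multiplier lambda, each source layer independently picks the channel-code rate minimizing expected distortion plus lambda times the rate cost. The distortion gains, loss probabilities, and packet size below are made-up placeholders, not values from the paper.

```python
# Hypothetical per-layer statistics: distortion reduction if the layer is decoded
# and, for each candidate channel-code rate, the probability the layer is lost.
layers = [
    {"gain": 10.0, "loss": {0.5: 0.001, 0.75: 0.02, 0.9: 0.10}},   # most important
    {"gain": 4.0,  "loss": {0.5: 0.001, 0.75: 0.02, 0.9: 0.10}},
    {"gain": 1.5,  "loss": {0.5: 0.001, 0.75: 0.02, 0.9: 0.10}},   # least important
]
PACKET_BITS = 1000            # source bits carried per layer packet (placeholder)

def allocate(lmbda):
    """Per layer, pick the code rate minimizing expected distortion + lambda * rate cost."""
    choice = []
    for layer in layers:
        best = min(layer["loss"],
                   key=lambda r: layer["gain"] * layer["loss"][r]
                                 + lmbda * PACKET_BITS / r)
        choice.append(best)
    return choice

# Sweeping lambda trades protection (low code rates) against the channel-bit budget;
# more important layers hold on to stronger protection longer.
for lam in (1e-5, 1e-4, 1e-3):
    print(lam, allocate(lam))
```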
Error floor behavior study of LDPC codes for concatenated codes design
NASA Astrophysics Data System (ADS)
Chen, Weigang; Yin, Liuguo; Lu, Jianhua
2007-11-01
Error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is statistically studied with experimental results on a hardware evaluation platform. The results present the distribution of residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small when the quantized sum-product (SP) algorithm is used. Therefore, an LDPC code may serve as the inner code in a concatenated coding system with a high-rate outer code, and thus an ultra-low error floor can be achieved. This conclusion is also verified by the experimental results.
Crosstalk eliminating and low-density parity-check codes for photochromic dual-wavelength storage
NASA Astrophysics Data System (ADS)
Wang, Meicong; Xiong, Jianping; Jian, Jiqi; Jia, Huibo
2005-01-01
Multi-wavelength storage is an approach to increasing memory density, with crosstalk as a problem to be dealt with. Based on an investigation of LDPC codes in optical data storage, we apply low-density parity-check (LDPC) codes as error-correcting codes in photochromic dual-wavelength optical storage. A proper method is applied to reduce the crosstalk, and simulation results show that this operation improves the bit error rate (BER) performance. At the same time, we can conclude that LDPC codes outperform RS codes in the crosstalk channel.
Entanglement-assisted quantum quasicyclic low-density parity-check codes
NASA Astrophysics Data System (ADS)
Hsieh, Min-Hsiu; Brun, Todd A.; Devetak, Igor
2009-03-01
We investigate the construction of quantum low-density parity-check (LDPC) codes from classical quasicyclic (QC) LDPC codes with girth greater than or equal to 6. We have shown that the classical codes in the generalized Calderbank-Shor-Steane construction do not need to satisfy the dual-containing property as long as preshared entanglement is available to both sender and receiver. We can use this to avoid the many four-cycles which typically arise in dual-containing LDPC codes. The advantage of such quantum codes comes from the use of efficient decoding algorithms such as the sum-product algorithm (SPA). It is well known that in the SPA, cycles of length 4 make successive decoding iterations highly correlated and hence limit the decoding performance. We show the principle of constructing quantum QC-LDPC codes which require only small amounts of initial shared entanglement.
New nonbinary quantum codes with larger distance constructed from BCH codes over 𝔽q2
NASA Astrophysics Data System (ADS)
Xu, Gen; Li, Ruihu; Fu, Qiang; Ma, Yuena; Guo, Luobin
2017-03-01
This paper concentrates on the construction of new nonbinary quantum error-correcting codes (QECCs) from three classes of narrow-sense imprimitive BCH codes over the finite field 𝔽q2 (q ≥ 3 is an odd prime power). By a careful analysis of the properties of cyclotomic cosets in the defining set T of these BCH codes, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing BCH codes is determined to be much larger than the result given by Aly et al. [S. A. Aly, A. Klappenecker and P. K. Sarvepalli, IEEE Trans. Inf. Theory 53, 1183 (2007)] for each code length. Thus, families of new nonbinary QECCs are constructed, and the newly obtained QECCs have larger distance than those in the previous literature.
Iterative decoding of SOVA and LDPC product code for bit-patterned media recording
NASA Astrophysics Data System (ADS)
Jeong, Seongkwon; Lee, Jaejin
2018-05-01
The demand for high-density storage systems has increased due to the exponential growth of data. Bit-patterned media recording (BPMR) is one of the promising technologies to achieve areal densities of 1 Tbit/in2 and higher. To increase the areal density in BPMR, the spacing between islands needs to be reduced, yet this aggravates inter-symbol interference and inter-track interference and degrades the bit error rate performance. In this paper, we propose a decision feedback scheme using a low-density parity-check (LDPC) product code for BPMR. This scheme can improve the decoding performance using an iterative approach that exchanges extrinsic information and log-likelihood ratio values between the iterative soft-output Viterbi algorithm and the LDPC product code. Simulation results show that the proposed LDPC product code can offer 1.8 dB and 2.3 dB gains over a single LDPC code at densities of 2.5 and 3 Tb/in2, respectively, at a bit error rate of 10-6.
NASA Astrophysics Data System (ADS)
Gao, Jian; Wang, Yongkang
2018-01-01
Structural properties of u-constacyclic codes over the ring F_p + uF_p are given, where p is an odd prime and u^2 = 1. Under a special Gray map from F_p + uF_p to F_p^2, some new non-binary quantum codes are obtained from this class of constacyclic codes.
Product code optimization for determinate state LDPC decoding in robust image transmission.
Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G
2006-08-01
We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.
An efficient decoding for low density parity check codes
NASA Astrophysics Data System (ADS)
Zhao, Ling; Zhang, Xiaolin; Zhu, Manjie
2009-12-01
Low density parity check (LDPC) codes are a class of forward-error-correction codes. They are among the best-known codes capable of achieving low bit error rates (BER) approaching Shannon's capacity limit. Recently, LDPC codes have been adopted by the European Digital Video Broadcasting (DVB-S2) standard and have also been proposed for the emerging IEEE 802.16 fixed and mobile broadband wireless-access standard. The Consultative Committee for Space Data Systems (CCSDS) has also recommended using LDPC codes in deep-space and near-Earth communications. It is clear that LDPC codes will be widely used in wired and wireless communication, magnetic recording, optical networking, DVB, and other fields in the near future. Efficient hardware implementation of LDPC codes is therefore of great interest, since LDPC codes are being considered for a wide range of applications. This paper presents an efficient partially parallel decoder architecture suited for quasi-cyclic (QC) LDPC codes, using the belief propagation algorithm for decoding. Algorithmic transformation and architectural-level optimization are incorporated to reduce the critical path. First, the check matrix of the LDPC code is analyzed to find the relationship between the row weight and the column weight. The sharing level of the check node updating units (CNU) and the variable node updating units (VNU) is then determined according to this relationship. After that, the CNUs and VNUs are rearranged and divided into several smaller parts; with the help of some assistant logic circuits, these smaller parts can be grouped into CNUs during check node update processing and into VNUs during variable node update processing. These smaller parts are called node update kernel units (NKU) and the assistant logic circuits are called node update auxiliary units (NAU). With the NAUs' help, the two steps of the iteration are completed by the NKUs, which brings a great reduction in hardware resources. Meanwhile, efficient techniques have been developed to reduce the computation delay of the node processing units and to minimize hardware overhead for parallel processing. This method may be applied not only to regular LDPC codes but also to irregular ones. Based on the proposed architecture, a (7493, 6096) irregular QC-LDPC code decoder is described in the Verilog hardware description language and implemented on an Altera Stratix II EP2S130 field programmable gate array (FPGA). The implementation results show that over 20% of the logic core size can be saved compared with conventional partially parallel decoder architectures, without any performance degradation. At a decoding clock of 100 MHz, the proposed decoder achieves a maximum (source data) decoding throughput of 133 Mb/s at 18 iterations.
NASA Technical Reports Server (NTRS)
Ni, Jianjun David
2011-01-01
This presentation briefly discusses a research effort on mitigation techniques for pulsed radio frequency interference (RFI) on a low-density parity-check (LDPC) code. This problem is of considerable interest in the context of providing reliable communications to a space vehicle that might suffer severe degradation due to pulsed RFI sources such as large radars. The LDPC code is one of the modern forward-error-correction (FEC) codes whose decoding performance approaches the Shannon limit. The LDPC code studied here is the AR4JA (2048, 1024) code recommended by the Consultative Committee for Space Data Systems (CCSDS), which has been chosen for some spacecraft designs. Even though this code is designed as a powerful FEC code for the additive white Gaussian noise channel, simulation data and test results show that the performance of this LDPC decoder is severely degraded when exposed to the pulsed RFI specified in the spacecraft's transponder specifications. An analysis (through modeling and simulation) has been conducted to evaluate the impact of the pulsed RFI, and a few implementation techniques have been investigated to mitigate the pulsed RFI impact by reshuffling the soft-decision data available at the input of the LDPC decoder. The simulation results show that the LDPC decoding performance in terms of codeword error rate (CWER) under pulsed RFI can be improved by up to four orders of magnitude through a simple soft-decision-data reshuffle scheme. This study reveals that an error floor of LDPC decoding performance appears around CWER=1E-4 when the proposed technique is applied to mitigate the pulsed RFI impact. The mechanism causing this error floor remains unknown; further investigation is necessary.
Construction of Protograph LDPC Codes with Linear Minimum Distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
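Independently of the specific check-splitting construction described above, protograph-based codes are typically expanded by a copy-and-permute (lifting) operation, often realized with circulant permutations. The sketch below illustrates only that generic expansion step, under an assumed toy base matrix and shift values, not the construction from the cited report:

```python
import numpy as np

def lift_protograph(base, Z, shifts):
    """Expand a protograph base matrix into a QC-LDPC parity-check matrix.
    Each base entry b >= 1 becomes the modulo-2 sum of b circulant permutation
    matrices of size Z x Z; a zero entry becomes the Z x Z zero block.
    shifts[r][c] lists the cyclic shifts used for base entry (r, c)."""
    I = np.eye(Z, dtype=int)
    rows = []
    for r, row in enumerate(base):
        blocks = []
        for c, b in enumerate(row):
            block = np.zeros((Z, Z), dtype=int)
            for s in (shifts[r][c] if b else []):
                block = (block + np.roll(I, s, axis=1)) % 2
            blocks.append(block)
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

# toy example: a 2x4 protograph lifted by Z = 5 (shift values chosen arbitrarily)
base = [[1, 1, 1, 1],
        [1, 2, 0, 1]]
shifts = [[[0], [1], [2], [3]],
          [[4], [1, 3], [], [2]]]
H = lift_protograph(base, 5, shifts)
print(H.shape)  # (10, 20)
```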
On the reduced-complexity of LDPC decoders for ultra-high-speed optical transmission.
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2010-10-25
We propose two reduced-complexity (RC) LDPC decoders, which can be used in combination with large-girth LDPC codes to enable ultra-high-speed serial optical transmission. We show that the optimally attenuated RC min-sum algorithm performs only 0.46 dB (at a BER of 10(-9)) worse than the conventional sum-product algorithm, while having lower storage memory requirements and much lower latency. We further study the use of RC LDPC decoding algorithms in multilevel coded modulation with coherent detection and show that with RC decoding algorithms we can achieve a net coding gain larger than 11 dB at BERs below 10(-9).
The application of LDPC code in MIMO-OFDM system
NASA Astrophysics Data System (ADS)
Liu, Ruian; Zeng, Beibei; Chen, Tingting; Liu, Nan; Yin, Ninghao
2018-03-01
The combination of MIMO and OFDM technology has become one of the key technologies of fourth-generation mobile communication; it can overcome the frequency-selective fading of the wireless channel, increase system capacity and improve frequency utilization. Error-correcting coding introduced into the system can further improve its performance. The LDPC (low-density parity-check) code is an error-correcting code that can improve system reliability and anti-interference ability, and its decoding is simple and easy to implement. This paper mainly discusses the application of LDPC codes in MIMO-OFDM systems.
Simultaneous chromatic dispersion and PMD compensation by using coded-OFDM and girth-10 LDPC codes.
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2008-07-07
Low-density parity-check (LDPC)-coded orthogonal frequency division multiplexing (OFDM) is studied as an efficient coded modulation scheme suitable for simultaneous chromatic dispersion and polarization mode dispersion (PMD) compensation. We show that, for an aggregate rate of 10 Gb/s, accumulated dispersion over 6500 km of SMF and a differential group delay of 100 ps can be simultaneously compensated with a penalty within 1.5 dB (with respect to the back-to-back configuration) when training-sequence-based channel estimation and girth-10 LDPC codes of rate 0.8 are employed.
Protograph LDPC Codes Over Burst Erasure Channels
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
In this paper we design high-rate protograph-based LDPC codes suitable for binary erasure channels. To simplify the encoder and decoder implementation for high-data-rate transmission, the structure of the codes is based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes with an iterative decoding threshold that approaches the capacity of binary erasure channels. The other class is designed for short block sizes based on maximizing the minimum stopping set size. For high code rates and short blocks, the second class outperforms the first class.
NASA Astrophysics Data System (ADS)
He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin; Su, Jinshu
2015-01-01
To improve the transmission performance of multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband (UWB) over optical fiber, a pre-coding scheme based on low-density parity-check (LDPC) codes is adopted and experimentally demonstrated in an intensity-modulation and direct-detection MB-OFDM UWB over fiber system. Meanwhile, a symbol synchronization and pilot-aided channel estimation scheme is implemented at the receiver of the MB-OFDM UWB over fiber system. The experimental results show that the LDPC pre-coding scheme works effectively in the MB-OFDM UWB over fiber system. After 70 km standard single-mode fiber (SSMF) transmission, at a bit error rate of 1 × 10-3, the receiver sensitivity is improved by about 4 dB when the LDPC code rate is 75%.
Construction of type-II QC-LDPC codes with fast encoding based on perfect cyclic difference sets
NASA Astrophysics Data System (ADS)
Li, Ling-xiang; Li, Hai-bing; Li, Ji-bi; Jiang, Hua
2017-09-01
In view of the problems that the encoding complexity of quasi-cyclic low-density parity-check (QC-LDPC) codes is high and the minimum distance is not large enough, which degrades the error-correction performance, new irregular type-II QC-LDPC codes based on perfect cyclic difference sets (CDSs) are constructed. The parity-check matrices of these type-II QC-LDPC codes consist of zero matrices (weight 0), circulant permutation matrices (CPMs, weight 1) and circulant matrices of weight 2 (W2CMs). The introduction of W2CMs into the parity-check matrices makes it possible to achieve a larger minimum distance, which improves the error-correction performance of the codes. The Tanner graphs of these codes have no girth-4 cycles, so they have excellent decoding convergence characteristics. In addition, because the parity-check matrices have a quasi-dual-diagonal structure, a fast encoding algorithm can reduce the encoding complexity effectively. Simulation results show that the new type-II QC-LDPC codes achieve excellent error-correction performance and exhibit no error floor over the additive white Gaussian noise (AWGN) channel with sum-product algorithm (SPA) iterative decoding.
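For readers unfamiliar with cyclic difference sets, the basic building block is a circulant whose first row is the incidence vector of the difference set; constructions like the one above combine such blocks with CPMs and W2CMs. A tiny illustrative sketch, using the classical (7, 3, 1) difference set rather than anything from the cited paper:

```python
import numpy as np

def circulant_from_difference_set(D, v):
    """Build a v x v circulant whose first row has ones at the positions of the
    cyclic difference set D (mod v); each later row is a cyclic shift of the
    first.  Such circulants are girth-friendly building blocks for QC-LDPC
    parity-check matrices."""
    first = np.zeros(v, dtype=int)
    first[list(D)] = 1
    return np.array([np.roll(first, i) for i in range(v)])

# the classical (7, 3, 1) perfect difference set {1, 2, 4} mod 7
H = circulant_from_difference_set({1, 2, 4}, 7)
print(H)
print("row weights:", H.sum(axis=1))  # every row has weight 3
```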
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-01
In order to improve the performance of hard-decision decoding of non-binary low-density parity-check (LDPC) codes and to reduce decoding complexity, a sum-of-magnitudes hard-decision decoding algorithm based on loop update detection is proposed. This also supports the reliability, stability and high transmission rates required by 5G mobile communication. The algorithm builds on the hard-decision decoding algorithm (HDA) and uses the soft information from the channel to calculate reliability, while the sum of the variable nodes' (VN) magnitudes is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable nodes is taken into account and a loop update detection algorithm is introduced. The bits corresponding to erroneous code words are flipped multiple times, searched in order of decreasing error probability, until the correct code word is found. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm by about 2.2 dB and 2.35 dB, respectively, for different hexadecimal codes at a bit error rate (BER) of 10-5 over an additive white Gaussian noise (AWGN) channel. Furthermore, the average number of decoding iterations is significantly reduced. PMID:29342963
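The flipping idea underlying this family of hard-decision decoders can be illustrated with a much simpler binary analogue: flip the bits that participate in the most unsatisfied checks and repeat. The sketch below is only that simplified analogue (Gallager-style bit flipping), not the nonbinary sum-of-magnitudes algorithm of the paper; the matrix and codeword are toy values.

```python
import numpy as np

def bit_flipping_decode(H, y, max_iters=50):
    """Hard-decision bit flipping: each iteration, flip the bits touching the
    largest number of unsatisfied parity checks, until the syndrome is zero."""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H.dot(x) % 2
        if not syndrome.any():
            return x, True                 # valid codeword found
        unsat = syndrome.dot(H)            # unsatisfied-check count per bit
        worst = unsat.max()
        if worst == 0:
            break
        x = (x + (unsat == worst).astype(int)) % 2
    return x, False

# toy example with the (7, 4) Hamming parity-check matrix and one bit error
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
received = np.zeros(7, dtype=int)
received[2] ^= 1                           # flip one bit of the all-zero codeword
decoded, ok = bit_flipping_decode(H, received)
print(decoded, ok)
```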
Unitals and ovals of symmetric block designs in LDPC and space-time coding
NASA Astrophysics Data System (ADS)
Andriamanalimanana, Bruno R.
2004-08-01
An approach to the design of LDPC (low density parity check) error-correction and space-time modulation codes involves starting with known mathematical and combinatorial structures, and deriving code properties from structure properties. This paper reports on an investigation of unital and oval configurations within generic symmetric combinatorial designs, not just classical projective planes, as the underlying structure for classes of space-time LDPC outer codes. Of particular interest are the encoding and iterative (sum-product) decoding gains that these codes may provide. Various small-length cases have been numerically implemented in Java and Matlab for a number of channel models.
High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution
NASA Astrophysics Data System (ADS)
Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin
2016-01-01
Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycles. In this paper, by combining the PEG algorithm and the quasi-cyclic construction method, we design long-block-length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation using slice reconciliation based on multilevel coding/multistage decoding, with an efficiency of 93.7%.
Continuous operation of four-state continuous-variable quantum key distribution system
NASA Astrophysics Data System (ADS)
Matsubara, Takuto; Ono, Motoharu; Oguri, Yusuke; Ichikawa, Tsubasa; Hirano, Takuya; Kasai, Kenta; Matsumoto, Ryutaroh; Tsurumaru, Toyohiro
2016-10-01
We report on the development of a continuous-variable quantum key distribution (CV-QKD) system based on discrete quadrature amplitude modulation (QAM) and homodyne detection of coherent states of light. We use a pulsed light source with a wavelength of 1550 nm and a repetition rate of 10 MHz. The CV-QKD system can continuously generate a secret key that is secure against the entangling cloner attack. The key generation rate is 50 kbps when the quantum channel is a 10 km optical fiber. The CV-QKD system we have developed utilizes the four-state and post-selection protocol [T. Hirano, et al., Phys. Rev. A 68, 042331 (2003)]; Alice randomly sends one of four states {|±α⟩, |±iα⟩}, and Bob randomly performs x- or p-measurement by homodyne detection. A commercially available balanced receiver is used to realize shot-noise-limited pulsed homodyne detection. GPU cards are used to accelerate the software-based post-processing. We use a non-binary LDPC code for error correction (reverse reconciliation) and Toeplitz matrix multiplication for privacy amplification.
A Scalable Architecture of a Structured LDPC Decoder
NASA Technical Reports Server (NTRS)
Lee, Jason Kwok-San; Lee, Benjamin; Thorpe, Jeremy; Andrews, Kenneth; Dolinar, Sam; Hamkins, Jon
2004-01-01
We present a scalable decoding architecture for a certain class of structured LDPC codes. The codes are designed using a small (n,r) protograph that is replicated Z times to produce a decoding graph for a (Z x n, Z x r) code. Using this architecture, we have implemented a decoder for a (4096,2048) LDPC code on a Xilinx Virtex-II 2000 FPGA, and achieved decoding speeds of 31 Mbps with 10 fixed iterations. The implemented message-passing algorithm uses an optimized 3-bit non-uniform quantizer that operates with 0.2dB implementation loss relative to a floating point decoder.
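The 3-bit non-uniform quantizer mentioned above maps each incoming LLR to one of eight reconstruction levels via non-uniform decision thresholds. A hedged sketch with made-up threshold and level values (the optimized values from the cited decoder are not reproduced here):

```python
import numpy as np

def quantize_llr(llr, thresholds, levels):
    """Map a soft LLR to one of 2^b reconstruction levels using non-uniform
    decision thresholds (b = 3 here)."""
    idx = np.searchsorted(thresholds, llr)
    return levels[idx]

# 7 symmetric thresholds and 8 reconstruction levels for a 3-bit quantizer
thresholds = np.array([-4.0, -2.0, -1.0, 0.0, 1.0, 2.0, 4.0])
levels     = np.array([-6.0, -3.0, -1.5, -0.5, 0.5, 1.5, 3.0, 6.0])
for llr in (-5.3, -0.2, 0.7, 3.1):
    print(llr, "->", quantize_llr(llr, thresholds, levels))
```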
NASA Technical Reports Server (NTRS)
Cheng, Michael K.; Lyubarev, Mark; Nakashima, Michael A.; Andrews, Kenneth S.; Lee, Dennis
2008-01-01
Low-density parity-check (LDPC) codes are the state of the art in forward error correction (FEC) technology and exhibit capacity-approaching performance. The Jet Propulsion Laboratory (JPL) has designed a family of LDPC codes that are similar in structure and therefore lead to a single decoder implementation. The Accumulate-Repeat-by-4-Jagged-Accumulate (AR4JA) design offers a family of codes with rates 1/2, 2/3, 4/5 and lengths 1024, 4096, 16384 information bits. Performance is less than one dB from capacity for all combinations. Integrating a stand-alone LDPC decoder with a commercial-off-the-shelf (COTS) receiver poses additional challenges compared with building a single receiver-decoder unit from scratch. In this work, we outline the issues and show that these additional challenges can be overcome with simple solutions. To demonstrate that an LDPC decoder can be made to work seamlessly with a COTS receiver, we interface an AR4JA LDPC decoder developed on a field-programmable gate array (FPGA) with a modern high-data-rate receiver and measure the combined receiver-decoder performance. Through optimizations that include an improved frame synchronizer and different soft-symbol scaling algorithms, we show that a combined implementation loss of less than one dB is possible and, therefore, most of the coding gain evident in theory can also be obtained in practice. Our techniques can benefit any modem that utilizes an advanced FEC code.
Constructing LDPC Codes from Loop-Free Encoding Modules
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher; Thorpe, Jeremy; Andrews, Kenneth
2009-01-01
A method of constructing certain low-density parity-check (LDPC) codes by use of relatively simple loop-free coding modules has been developed. The subclasses of LDPC codes to which the method applies includes accumulate-repeat-accumulate (ARA) codes, accumulate-repeat-check-accumulate codes, and the codes described in Accumulate-Repeat-Accumulate-Accumulate Codes (NPO-41305), NASA Tech Briefs, Vol. 31, No. 9 (September 2007), page 90. All of the affected codes can be characterized as serial/parallel (hybrid) concatenations of such relatively simple modules as accumulators, repetition codes, differentiators, and punctured single-parity check codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. These codes can also be characterized as hybrid turbo-like codes that have projected graph or protograph representations (for example, see figure); these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The present method comprises two related submethods for constructing LDPC codes from simple loop-free modules with circulant permutations. The first submethod is an iterative encoding method based on the erasure-decoding algorithm. The computations required by this method are well organized because they involve a parity-check matrix having a block-circulant structure. The second submethod involves the use of block-circulant generator matrices. The encoders of this method are very similar to those of recursive convolutional codes. Some encoders according to this second submethod have been implemented in a small field-programmable gate array that operates at a speed of 100 megasymbols per second. By use of density evolution (a computational-simulation technique for analyzing performances of LDPC codes), it has been shown through some examples that as the block size goes to infinity, low iterative decoding thresholds close to channel capacity limits can be achieved for the codes of the type in question having low maximum variable node degrees. The decoding thresholds in these examples are lower than those of the best-known unstructured irregular LDPC codes constrained to have the same maximum node degrees. Furthermore, the present method enables the construction of codes of any desired rate with thresholds that stay uniformly close to their respective channel capacity thresholds.
Protograph based LDPC codes with minimum distance linearly growing with block size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance linearly increasing in block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
LDPC Codes with Minimum Distance Proportional to Block Size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy
2009-01-01
Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. Any size LDPC code can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides Field Programmable Gate Array (FPGA) hardware performance simulations for this code family. In addition, the illustration provides minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) to achieve zero error rates as the code block size goes to infinity for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.
NASA Astrophysics Data System (ADS)
Pan, Xiaolong; Liu, Bo; Zheng, Jianglong; Tian, Qinghua
2016-08-01
We propose and demonstrate a low-complexity Reed-Solomon-based low-density parity-check (RS-LDPC) code with an adaptive puncturing decoding algorithm for elastic optical transmission systems. Part of the received code and the corresponding columns of the parity-check matrix can be punctured, adapting the parity-check matrix during decoding to reduce computational complexity. The results show that the complexity of the proposed decoding algorithm is reduced by 30% compared with the regular RS-LDPC system. The optimized code rate of the RS-LDPC code can be obtained after five iterations.
Structured Low-Density Parity-Check Codes with Bandwidth Efficient Modulation
NASA Technical Reports Server (NTRS)
Cheng, Michael K.; Divsalar, Dariush; Duy, Stephanie
2009-01-01
In this work, we study the performance of structured Low-Density Parity-Check (LDPC) Codes together with bandwidth efficient modulations. We consider protograph-based LDPC codes that facilitate high-speed hardware implementations and have minimum distances that grow linearly with block sizes. We cover various higher-order modulations such as 8-PSK, 16-APSK, and 16-QAM. During demodulation, a demapper transforms the received in-phase and quadrature samples into reliability information that feeds the binary LDPC decoder. We will compare various low-complexity demappers and provide simulation results for assorted coded-modulation combinations on the additive white Gaussian noise and independent Rayleigh fading channels.
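A common low-complexity demapper of the kind compared in such studies is the max-log demapper, which forms each bit LLR from the squared distances to the nearest constellation points having that bit equal to 0 and to 1. A minimal sketch using Gray-labeled QPSK as a stand-in (the constellation, labeling, and the convention that LLR > 0 favors bit 0 are assumptions, not values from the cited work):

```python
import numpy as np

def maxlog_bit_llrs(r, constellation, bit_labels, noise_var):
    """Max-log demapper: for each bit position, LLR = (min squared distance over
    symbols labeled 1 minus min over symbols labeled 0) / noise variance."""
    d2 = np.abs(r - constellation) ** 2
    llrs = np.empty(bit_labels.shape[1])
    for b in range(bit_labels.shape[1]):
        d0 = d2[bit_labels[:, b] == 0].min()
        d1 = d2[bit_labels[:, b] == 1].min()
        llrs[b] = (d1 - d0) / noise_var
    return llrs

# Gray-labeled QPSK as a tiny example
constellation = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
bit_labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])
print(maxlog_bit_llrs(0.9 + 0.2j, constellation, bit_labels, noise_var=0.5))
```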
Accumulate repeat accumulate codes
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate codes' (ARA). This class of codes can be viewed as serial turbo-like codes or as a subclass of Low Density Parity Check (LDPC) codes; thus, belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder structure for this class can be viewed as a precoded Repeat Accumulate (RA) code or as a precoded Irregular Repeat Accumulate (IRA) code, where an accumulator is simply chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure when represented as LDPC codes. Based on density evolution for LDPC codes, through some examples of ARA codes, we show that for maximum variable node degree 5 a minimum bit SNR as low as 0.08 dB from channel capacity for rate 1/2 can be achieved as the block size goes to infinity. Thus, for a fixed low maximum variable node degree, its threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high-rate code close to rate 1 can be obtained, with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. ARA codes also have a projected graph, or protograph, representation that allows for high-speed decoder implementation.
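The encoder structure described above (an accumulator precoder, a repetition stage, and a second accumulator) can be sketched in a few lines. The sketch below omits the interleaver and the puncturing that a real ARA design would include, so it illustrates only the chain of operations, not the actual code family:

```python
import numpy as np

def accumulate(bits):
    """Running XOR (rate-1 accumulator), the 1/(1+D) block in RA/ARA codes."""
    return np.bitwise_xor.accumulate(bits)

def ara_encode(info, repeat=3):
    """Minimal, unpunctured Accumulate-Repeat-Accumulate sketch: precode the
    information bits with an accumulator, repeat each bit, then accumulate
    again.  A real design would interleave between the last two stages."""
    precoded = accumulate(np.asarray(info, dtype=int) % 2)
    repeated = np.repeat(precoded, repeat)
    # interleaver omitted for brevity; identity permutation assumed
    return accumulate(repeated)

print(ara_encode([1, 0, 1, 1]))
```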
PMD compensation in fiber-optic communication systems with direct detection using LDPC-coded OFDM.
Djordjevic, Ivan B
2007-04-02
The possibility of polarization-mode dispersion (PMD) compensation in fiber-optic communication systems with direct detection using a simple channel estimation technique and low-density parity-check (LDPC)-coded orthogonal frequency division multiplexing (OFDM) is demonstrated. It is shown that even for differential group delay (DGD) of 4/BW (BW is the OFDM signal bandwidth), the degradation due to the first-order PMD can be completely compensated for. Two classes of LDPC codes designed based on two different combinatorial objects (difference systems and product of combinatorial designs) suitable for use in PMD compensation are introduced.
NASA Astrophysics Data System (ADS)
Nakamura, Yasuaki; Okamoto, Yoshihiro; Osawa, Hisashi; Aoi, Hajime; Muraoka, Hiroaki
We evaluate the performance of the write-margin for the low-density parity-check (LDPC) coding and iterative decoding system in the bit-patterned media (BPM) R/W channel affected by the write-head field gradient, the media switching field distribution (SFD), the demagnetization field from adjacent islands and the island position deviation. It is clarified that the LDPC coding and iterative decoding system in R/W channel using BPM at 3 Tbit/inch2 has a write-margin of about 20%.
Cooperative optimization and its application in LDPC codes
NASA Astrophysics Data System (ADS)
Chen, Ke; Rong, Jian; Zhong, Xiaochun
2008-10-01
Cooperative optimization is a new approach for finding the global optima of complicated functions of many variables. The proposed algorithm is a class of message-passing algorithms and has solid theoretical foundations. It can achieve good coding gains over the sum-product algorithm for LDPC codes. For (6561, 4096) LDPC codes, the proposed algorithm achieves a 2.0 dB gain over the sum-product algorithm at a BER of 4×10-7. The decoding complexity of the proposed algorithm is lower than that of the sum-product algorithm; furthermore, it achieves a much lower error floor once Eb/No exceeds 1.8 dB.
A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications.
Revathy, M; Saravanan, R
2015-01-01
Low-density parity-check (LDPC) codes have been adopted in the latest digital video broadcasting, broadband wireless access (WiMAX), and fourth-generation wireless standards. In this paper, we propose a highly efficient low-density parity-check (LDPC) decoder architecture for low-power applications. This study also considers the design and analysis of the check node and variable node units and the Euclidean orthogonal generator in the LDPC decoder architecture. The Euclidean orthogonal generator is used to reduce the error rate of the proposed LDPC architecture and can be incorporated between the check and variable node architectures. The proposed decoder design is synthesized on the Xilinx 9.2i platform and simulated using ModelSim, targeting 45 nm devices. The synthesis report shows that the proposed architecture greatly reduces power consumption and hardware utilization compared with different conventional architectures.
DNA Barcoding through Quaternary LDPC Codes
Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar
2015-01-01
For many parallel applications of Next-Generation Sequencing (NGS) technologies, short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine scale with regard to barcode size (BCH) or have intrinsically poor error-correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10-2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10-9 at the expense of a rate of read losses just in the order of 10-6. PMID:26492348
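The step that turns a quaternary (GF(4)) LDPC codeword into a nucleotide barcode is a fixed symbol-to-base mapping. A toy sketch with a hypothetical mapping (the actual assignment used in the cited work may differ):

```python
# Map GF(4) code symbols to DNA bases and back; the mapping below is
# hypothetical and used only to illustrate how a quaternary LDPC codeword
# becomes a nucleotide barcode.
SYMBOL_TO_BASE = {0: "A", 1: "C", 2: "G", 3: "T"}
BASE_TO_SYMBOL = {b: s for s, b in SYMBOL_TO_BASE.items()}

def symbols_to_barcode(symbols):
    return "".join(SYMBOL_TO_BASE[s] for s in symbols)

def barcode_to_symbols(barcode):
    return [BASE_TO_SYMBOL[b] for b in barcode]

codeword = [0, 3, 1, 2, 2, 0, 1, 3]      # one GF(4) codeword (toy length)
barcode = symbols_to_barcode(codeword)
print(barcode)                            # ATCGGACT
assert barcode_to_symbols(barcode) == codeword
```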
LDPC-PPM Coding Scheme for Optical Communication
NASA Technical Reports Server (NTRS)
Barsoum, Maged; Moision, Bruce; Divsalar, Dariush; Fitz, Michael
2009-01-01
In a proposed coding-and-modulation/demodulation-and-decoding scheme for a free-space optical communication system, an error-correcting code of the low-density parity-check (LDPC) type would be concatenated with a modulation code that consists of a mapping of bits to pulse-position-modulation (PPM) symbols. Hence, the scheme is denoted LDPC-PPM. This scheme could be considered a competitor of a related prior scheme in which an outer convolutional error-correcting code is concatenated with an interleaving operation, a bit-accumulation operation, and a PPM inner code. Both the prior and present schemes can be characterized as serially concatenated pulse-position modulation (SCPPM) coding schemes. Figure 1 represents a free-space optical communication system based on either the present LDPC-PPM scheme or the prior SCPPM scheme. At the transmitting terminal, the original data (u) are processed by an encoder into blocks of bits (a), and the encoded data are mapped to PPM of an optical signal (c). For the purpose of design and analysis, the optical channel in which the PPM signal propagates is modeled as a Poisson point process. At the receiving terminal, the arriving optical signal (y) is demodulated to obtain an estimate (a^) of the coded data, which is then processed by a decoder to obtain an estimate (u^) of the original data.
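The modulation-code step described above, mapping groups of coded bits to pulse-position-modulation symbols, can be sketched as follows (a toy illustration; the slot ordering and bit grouping are assumptions, not the mapping of the cited scheme):

```python
import numpy as np

def bits_to_ppm(bits, M):
    """Map groups of log2(M) coded bits to M-ary PPM symbols: each symbol is
    a single pulse placed in one of M time slots."""
    k = int(np.log2(M))
    bits = np.asarray(bits, dtype=int).reshape(-1, k)
    # interpret each bit group as the index of the occupied slot
    slot_idx = bits.dot(1 << np.arange(k - 1, -1, -1))
    frames = np.zeros((len(slot_idx), M), dtype=int)
    frames[np.arange(len(slot_idx)), slot_idx] = 1
    return frames

coded_bits = [1, 0, 1, 1, 0, 0, 1, 1]     # 8 coded bits -> two 16-PPM symbols
print(bits_to_ppm(coded_bits, 16))
```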
Djordjevic, Ivan B
2007-08-06
We describe a coded power-efficient transmission scheme based on the repetition MIMO principle suitable for communication over the atmospheric turbulence channel, and determine its channel capacity. The proposed scheme employs Q-ary pulse-position modulation. We further study how to approach the channel capacity limits using low-density parity-check (LDPC) codes. Component LDPC codes are designed using the concept of pairwise-balanced designs. Contrary to several recent publications, bit-error rates and channel capacities are reported assuming non-ideal photodetection. The atmospheric turbulence channel is modeled using the Gamma-Gamma distribution function due to Al-Habash et al. Excellent bit-error rate performance improvement over the uncoded case is found.
Optimal Codes for the Burst Erasure Channel
NASA Technical Reports Server (NTRS)
Hamkins, Jon
2010-01-01
Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure protection. As can be seen, the simple interleaved RS codes have substantially lower inefficiency over a wide range of transmission lengths.
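The block-interleaving idea behind the near-optimal burst protection claimed above is that sending an array of MDS codewords column by column spreads a contiguous burst so that each codeword sees at most one erasure. A minimal sketch using single-parity-check rows (illustrative only; an RS row code would be used when longer per-row bursts must be tolerated):

```python
import numpy as np

def spc_encode_block(data_rows):
    """Append one parity symbol (XOR of the row) to each row: a single-parity-
    check (SPC) code, which can recover exactly one erased symbol per row."""
    data_rows = np.asarray(data_rows, dtype=int) % 2
    parity = data_rows.sum(axis=1) % 2
    return np.column_stack([data_rows, parity])

def transmit_column_wise(block):
    """Block interleaving: send the array column by column, so a contiguous
    burst of erasures hits at most one symbol per row."""
    return block.T.reshape(-1)

def recover_single_erasure(row, erased_pos):
    """Recover one erased symbol in an SPC codeword from the parity constraint."""
    row[erased_pos] = np.delete(row, erased_pos).sum() % 2
    return row

block = spc_encode_block([[1, 0, 1, 1],
                          [0, 1, 1, 0],
                          [1, 1, 0, 0]])
stream = transmit_column_wise(block)
print(stream)   # a burst of up to 3 consecutive erasures (one per row) is correctable
```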
Bounded-Angle Iterative Decoding of LDPC Codes
NASA Technical Reports Server (NTRS)
Dolinar, Samuel; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush
2009-01-01
Bounded-angle iterative decoding is a modified version of conventional iterative decoding, conceived as a means of reducing undetected-error rates for short low-density parity-check (LDPC) codes. For a given code, bounded-angle iterative decoding can be implemented by means of a simple modification of the decoder algorithm, without redesigning the code. Bounded-angle iterative decoding is based on a representation of received words and code words as vectors in an n-dimensional Euclidean space (where n is an integer).
Short-Block Protograph-Based LDPC Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher
2010-01-01
Short-block low-density parity-check (LDPC) codes of a special type are intended to be especially well suited for potential applications that include transmission of command and control data, cellular telephony, data communications in wireless local area networks, and satellite data communications. [In general, LDPC codes belong to a class of error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels.] The codes of the present special type exhibit low error floors, low bit and frame error rates, and low latency (in comparison with related prior codes). These codes also achieve low maximum rate of undetected errors over all signal-to-noise ratios, without requiring the use of cyclic redundancy checks, which would significantly increase the overhead for short blocks. These codes have protograph representations; this is advantageous in that, for reasons that exceed the scope of this article, the applicability of protograph representations makes it possible to design high-speed iterative decoders that utilize belief-propagation algorithms.
Protograph LDPC Codes with Node Degrees at Least 3
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher
2006-01-01
In this paper we present protograph codes with a small number of degree-3 nodes and one high-degree node. The iterative decoding thresholds for the proposed rate-1/2 codes are lower, by about 0.2 dB, than those of the best known irregular LDPC codes with degree at least 3. The main motivation is to gain linear minimum distance and achieve a low error floor, and also to construct rate-compatible protograph-based LDPC codes of fixed block length that simultaneously achieve a low iterative decoding threshold and linear minimum distance. We start with a rate-1/2 protograph LDPC code with degree-3 nodes and one high-degree node. Higher rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes. This is equivalent to constraint combining in the protograph. The case where all constraints are combined corresponds to the highest rate code. This constraint must be connected to nodes of degree at least three for the graph to have linear minimum distance. Thus, having node degree at least 3 at rate 1/2 guarantees that the linear minimum distance property is preserved for higher rates. Through examples we show that iterative decoding thresholds as low as 0.544 dB can be achieved for small protographs with node degrees at least three. A family of low- to high-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
Rate-compatible protograph LDPC code families with linear minimum distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J. (Inventor); Jones, Christopher R. (Inventor)
2012-01-01
Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds.
LDPC Codes--Structural Analysis and Decoding Techniques
ERIC Educational Resources Information Center
Zhang, Xiaojie
2012-01-01
Low-density parity-check (LDPC) codes have been the focus of much research over the past decade thanks to their near Shannon limit performance and to their efficient message-passing (MP) decoding algorithms. However, the error floor phenomenon observed in MP decoding, which manifests itself as an abrupt change in the slope of the error-rate curve,…
Rate-Compatible Protograph LDPC Codes
NASA Technical Reports Server (NTRS)
Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)
2014-01-01
Digital communication coding methods resulting in rate-compatible low density parity-check (LDPC) codes built from protographs. Described digital coding methods start with a desired code rate and a selection of the numbers of variable nodes and check nodes to be used in the protograph. Constraints are set to satisfy a linear minimum distance growth property for the protograph. All possible edges in the graph are searched for the minimum iterative decoding threshold and the protograph with the lowest iterative decoding threshold is selected. Protographs designed in this manner are used in decode and forward relay channels.
Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes
NASA Astrophysics Data System (ADS)
Jing, Lin; Brun, Todd; Quantum Research Team
Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Hagiwara et al. (2007) presented a method to construct parity-check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
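The role of the two orthogonal matrices Hc and Hd is the usual CSS commutation condition: the X-type and Z-type parity-check matrices must satisfy Hx · Hz^T = 0 over GF(2). A tiny checker with toy matrices (not the quasi-cyclic matrices of the cited construction):

```python
import numpy as np

def css_check(Hx, Hz):
    """Verify the CSS commutation condition Hx @ Hz.T == 0 over GF(2) for a
    candidate pair of classical parity-check matrices."""
    return not np.any(Hx.dot(Hz.T) % 2)

Hx = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
               [0, 0, 1, 1, 1, 1, 0, 0]])
Hz = np.array([[1, 1, 0, 0, 0, 0, 1, 1],
               [0, 0, 0, 0, 1, 1, 1, 1]])
print(css_check(Hx, Hz))  # True for this toy pair
```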
Using LDPC Code Constraints to Aid Recovery of Symbol Timing
NASA Technical Reports Server (NTRS)
Jones, Christopher; Villasnor, John; Lee, Dong-U; Vales, Esteban
2008-01-01
A method of utilizing information available in the constraints imposed by a low-density parity-check (LDPC) code has been proposed as a means of aiding the recovery of symbol timing in the reception of a binary-phase-shift-keying (BPSK) signal representing such a code in the presence of noise, timing error, and/or Doppler shift between the transmitter and the receiver. This method and the receiver architecture in which it would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless in that they do not require transmission and reception of pilot signals. Acquisition and tracking of a signal of the type described above have traditionally been performed upstream of, and independently of, decoding and have typically involved utilization of a phase-locked loop (PLL). However, the LDPC decoding process, which is iterative, provides information that can be fed back to the timing-recovery receiver circuits to improve performance significantly over that attainable in the absence of such feedback. Prior methods of coupling LDPC decoding with timing recovery had focused on the use of output code words produced as the iterations progress. In contrast, in the present method, one exploits the information available from the metrics computed for the constraint nodes of an LDPC code during the decoding process. In addition, the method involves the use of a waveform model that captures, better than do the waveform models of the prior methods, distortions introduced by receiver timing errors and transmitter/ receiver motions. An LDPC code is commonly represented by use of a bipartite graph containing two sets of nodes. In the graph corresponding to an (n,k) code, the n variable nodes correspond to the code word symbols and the n-k constraint nodes represent the constraints that the code places on the variable nodes in order for them to form a valid code word. The decoding procedure involves iterative computation of values associated with these nodes. A constraint node represents a parity-check equation using a set of variable nodes as inputs. A valid decoded code word is obtained if all parity-check equations are satisfied. After each iteration, the metrics associated with each constraint node can be evaluated to determine the status of the associated parity check. Heretofore, normally, these metrics would be utilized only within the LDPC decoding process to assess whether or not variable nodes had converged to a codeword. In the present method, it is recognized that these metrics can be used to determine accuracy of the timing estimates used in acquiring the sampled data that constitute the input to the LDPC decoder. In fact, the number of constraints that are satisfied exhibits a peak near the optimal timing estimate. Coarse timing estimation (or first-stage estimation as described below) is found via a parametric search for this peak. The present method calls for a two-stage receiver architecture illustrated in the figure. The first stage would correct large time delays and frequency offsets; the second stage would track random walks and correct residual time and frequency offsets. In the first stage, constraint-node feedback from the LDPC decoder would be employed in a search algorithm in which the searches would be performed in successively narrower windows to find the correct time delay and/or frequency offset. 
The second stage would include a conventional first-order PLL with a decision-aided timing-error detector that would utilize, as its decision aid, decoded symbols from the LDPC decoder. The method has been tested by means of computational simulations in cases involving various timing and frequency errors. The results of the simulations show performance approaching that obtained in the ideal case of perfect timing in the receiver.
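As a rough illustration of the constraint-node metric described above, the sketch below scores each candidate timing offset by the number of satisfied parity checks of the hard-decision vector; the offset with the most satisfied checks serves as the coarse (first-stage) estimate. The variable names (`H` for the parity-check matrix, `rx` for the matched-filter output, `sps` samples per symbol) are assumptions for illustration, not taken from the report.

```python
import numpy as np

def satisfied_checks(H, hard_bits):
    """Number of parity-check equations satisfied by a hard-decision bit vector."""
    return int(np.sum((H @ hard_bits) % 2 == 0))

def coarse_timing_search(H, rx, sps, offsets):
    """Score each candidate timing offset (in samples) by the number of satisfied
    LDPC constraints of the resulting hard decisions; the peak marks the coarse
    (first-stage) timing estimate."""
    n = H.shape[1]
    scores = []
    for off in offsets:
        samples = rx[off:off + n * sps:sps]      # one sample per code symbol
        bits = (samples < 0).astype(int)         # BPSK mapping: +1 -> 0, -1 -> 1
        scores.append(satisfied_checks(H, bits))
    best = offsets[int(np.argmax(scores))]
    return best, scores
```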
DOE Office of Scientific and Technical Information (OSTI.GOV)
CHERTKOV, MICHAEL; STEPANOV, MIKHAIL
2007-01-10
The authors discuss the performance of Low-Density Parity-Check (LDPC) codes decoded by Linear Programming (LP) decoding at moderate and large Signal-to-Noise Ratios (SNR). The Frame-Error-Rate (FER) dependence on SNR and the noise-space landscape of the coding/decoding scheme are analyzed by a combination of the previously introduced instanton/pseudo-codeword-search method and a new 'dendro' trick. To reduce the complexity of LP decoding for a code with high-degree checks (≥ 5), they introduce its dendro-LDPC counterpart, that is, a code performing identically to the original one under Maximum-A-Posteriori (MAP) decoding but having reduced (down to three) check connectivity degree. Analyzing a number of popular LDPC codes and their dendro versions operating over the Additive-White-Gaussian-Noise (AWGN) channel, they observed two qualitatively different regimes: (i) the error floor sets in early, at relatively low SNR, and (ii) the FER decays faster with increasing SNR at moderate SNR than at the largest SNR. They explain these regimes in terms of the pseudo-codeword spectra of the codes.
FPGA-based LDPC-coded APSK for optical communication systems.
Zou, Ding; Lin, Changyu; Djordjevic, Ivan B
2017-02-20
In this paper, with the aid of mutual information and generalized mutual information (GMI) capacity analyses, it is shown that geometrically shaped APSK that mimics an optimal Gaussian distribution with equiprobable signaling, together with the corresponding Gray-mapping rules, can approach the Shannon limit more closely than conventional quadrature amplitude modulation (QAM) over a certain range of FEC overhead for both 16-APSK and 64-APSK. The field programmable gate array (FPGA) based LDPC-coded APSK emulation is conducted on block interleaver-based and bit interleaver-based systems; the results verify a significant improvement in hardware-efficient bit interleaver-based systems. In bit interleaver-based emulation, the LDPC-coded 64-APSK outperforms 64-QAM, in terms of symbol signal-to-noise ratio (SNR), by 0.1 dB, 0.2 dB, and 0.3 dB at spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz, respectively. It is found by emulation that LDPC-coded 64-APSK for spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz is 1.6 dB, 1.7 dB, and 2.2 dB away from the GMI capacity.
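For readers unfamiliar with APSK geometry, the following sketch builds a conventional 4+12-point 16-APSK constellation normalized to unit average energy. The ring ratio is an illustrative value; the Gaussian-mimicking radii and Gray mapping optimized in the paper are not reproduced here.

```python
import numpy as np

def apsk16(ring_ratio=2.7):
    """Build a conventional 4+12 16-APSK constellation (4 inner, 12 outer points),
    normalized to unit average symbol energy. The ring ratio is illustrative."""
    inner = 1.0 * np.exp(1j * (np.pi / 4 + 2 * np.pi * np.arange(4) / 4))
    outer = ring_ratio * np.exp(1j * (np.pi / 12 + 2 * np.pi * np.arange(12) / 12))
    pts = np.concatenate([inner, outer])
    return pts / np.sqrt(np.mean(np.abs(pts) ** 2))
```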
Channel coding for underwater acoustic single-carrier CDMA communication system
NASA Astrophysics Data System (ADS)
Liu, Lanjun; Zhang, Yonglei; Zhang, Pengcheng; Zhou, Lin; Niu, Jiong
2017-01-01
CDMA is an effective multiple access protocol for underwater acoustic networks, and channel coding can effectively reduce the bit error rate (BER) of an underwater acoustic communication system. To meet the requirements of underwater acoustic mobile networks based on CDMA, an underwater acoustic single-carrier CDMA communication system (UWA/SCCDMA) based on the direct-sequence spread spectrum is proposed, and its channel coding scheme is studied based on convolutional, RA, turbo, and LDPC coding, respectively. The implementation steps of the Viterbi algorithm for convolutional coding, the BP and min-sum algorithms for RA coding, the Log-MAP and SOVA algorithms for turbo coding, and the sum-product algorithm for LDPC coding are given. A UWA/SCCDMA simulation system based on Matlab is designed. Simulation results show that the UWA/SCCDMA systems based on RA, turbo, and LDPC coding perform well, with BER below 10^-6 in the underwater acoustic channel at low signal-to-noise ratios (SNR) from -12 dB to -10 dB, about two orders of magnitude lower than that of convolutional coding. The system based on turbo coding with the Log-MAP algorithm has the best performance.
NASA Astrophysics Data System (ADS)
Ma, Fanghui; Gao, Jian; Fu, Fang-Wei
2018-06-01
Let R = F_q + vF_q + v^2F_q be a finite non-chain ring, where q is an odd prime power and v^3 = v. In this paper, we propose two methods of constructing quantum codes from (α + βv + γv^2)-constacyclic codes over R. The first one is obtained via the Gray map and the Calderbank-Shor-Steane construction from Euclidean dual-containing (α + βv + γv^2)-constacyclic codes over R. The second one is obtained via the Gray map and the Hermitian construction from Hermitian dual-containing (α + βv + γv^2)-constacyclic codes over R. As an application, some new non-binary quantum codes are obtained.
Rate-compatible protograph LDPC code families with linear minimum distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J (Inventor); Jones, Christopher R. (Inventor)
2012-01-01
Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds, and families of such codes of different rates can be decoded efficiently using a common decoding architecture.
Transmission over UWB channels with OFDM system using LDPC coding
NASA Astrophysics Data System (ADS)
Dziwoki, Grzegorz; Kucharczyk, Marcin; Sulek, Wojciech
2009-06-01
A hostile wireless environment requires the use of sophisticated signal processing methods. The paper concerns Ultra Wideband (UWB) transmission over Personal Area Networks (PAN), including the MB-OFDM specification of the physical layer. In the presented work, the transmission system with OFDM modulation is combined with an LDPC encoder/decoder. Additionally, the frame and bit error rates (FER and BER) of the system are decreased by using results from the LDPC decoder in a kind of turbo equalization algorithm for better channel estimation. A computational block using an evolutionary strategy, from the genetic algorithms family, is also used in the presented system; it is placed after the SPA (Sum-Product Algorithm) decoder and is conditionally turned on in the decoding process. The result is increased effectiveness of the whole system, especially a lower FER. The system was tested with two types of LDPC codes, differing in the type of parity-check matrix: randomly generated, and constructed deterministically and optimized for the practical decoder architecture implemented in the FPGA device.
Encoders for block-circulant LDPC codes
NASA Technical Reports Server (NTRS)
Andrews, Kenneth; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
In this paper, we present two encoding methods for block-circulant LDPC codes. The first is an iterative encoding method based on the erasure decoding algorithm, and the computations required are well organized due to the block-circulant structure of the parity check matrix. The second method uses block-circulant generator matrices, and the encoders are very similar to those for recursive convolutional codes. Some encoders of the second type have been implemented in a small Field Programmable Gate Array (FPGA) and operate at 100 Msymbols/second.
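A minimal sketch of the second (generator-matrix-based) approach, under the assumption that each b×b block of the generator matrix is a binary circulant stored by its first row: multiplying by a circulant reduces to XOR-accumulating cyclic shifts, which is what makes shift-register encoders natural. The data layout and function names are illustrative, not taken from the paper.

```python
import numpy as np

def circulant_mul(first_row, u):
    """Multiply the row vector u by a binary circulant over GF(2), where row i of
    the circulant is first_row cyclically shifted right by i positions."""
    first_row = np.asarray(first_row, dtype=int)
    out = np.zeros(len(first_row), dtype=int)
    for i, bit in enumerate(u):
        if bit:
            out ^= np.roll(first_row, i)
    return out

def encode_block_circulant(G_first_rows, msg, b):
    """Encode msg with a block-circulant generator matrix whose (i, j) block is the
    b x b circulant described by its first row G_first_rows[i][j]."""
    k_blocks, n_blocks = len(G_first_rows), len(G_first_rows[0])
    cw = np.zeros(n_blocks * b, dtype=int)
    for i in range(k_blocks):
        u = msg[i * b:(i + 1) * b]
        for j in range(n_blocks):
            cw[j * b:(j + 1) * b] ^= circulant_mul(G_first_rows[i][j], u)
    return cw
```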
Strategic and Tactical Decision-Making Under Uncertainty
2006-01-03
message passing algorithms. In recent work we applied this method to the problem of joint decoding of a low-density parity-check (LDPC) code and a partial... "Joint Decoding of LDPC Codes and Partial-Response Channels." IEEE Transactions on Communications, Vol. 54, No. 7, 1149-1153, 2006. P. Pakzad and V...
NASA Astrophysics Data System (ADS)
Drăghici, S.; Proştean, O.; Răduca, E.; Haţiegan, C.; Hălălae, I.; Pădureanu, I.; Nedeloni, M.; (Barboni Haţiegan, L.
2017-01-01
In this paper, a method is shown for associating a set of characteristic functions with an LDPC code, together with functions that describe the density evolution of the messages passed along the edges of a Tanner graph. Graphic representations of the density evolution are shown, and the study and simulation of the likelihood threshold, which marks the asymptotic boundary between which codes are decodable, were carried out using MathCad V14 software.
Pilotless Frame Synchronization Using LDPC Code Constraints
NASA Technical Reports Server (NTRS)
Jones, Christopher; Villasenor, John
2009-01-01
A method of pilotless frame synchronization has been devised for low-density parity-check (LDPC) codes. In pilotless frame synchronization, there are no pilot symbols; instead, the offset is estimated by exploiting selected aspects of the structure of the code. The advantage of pilotless frame synchronization is that the bandwidth of the signal is reduced by an amount associated with elimination of the pilot symbols. The disadvantage is an increase in the amount of receiver data processing needed for frame synchronization.
Qin, Heng; Zuo, Yong; Zhang, Dong; Li, Yinghui; Wu, Jian
2017-03-06
Through a slight modification of the typical photomultiplier tube (PMT) receiver output statistics, a generalized received-response model considering both scattered propagation and random detection is presented to investigate the impact of inter-symbol interference (ISI) on the link data rate of short-range non-line-of-sight (NLOS) ultraviolet communication. Good agreement with experimental results is shown by numerical simulation. Based on the received-response characteristics, a heuristic check-matrix construction algorithm for low-density parity-check (LDPC) codes is further proposed to approach the data rate bound derived for a delayed-sampling (DS) binary pulse position modulation (PPM) system. Compared to conventional LDPC coding methods, a better bit error ratio (BER), below 10^-5, is achieved for short-range NLOS UVC systems operating at a data rate of 2 Mbps.
Design and implementation of a channel decoder with LDPC code
NASA Astrophysics Data System (ADS)
Hu, Diqing; Wang, Peng; Wang, Jianzong; Li, Tianquan
2008-12-01
Because Toshiba quit the competition, there is only one blue-laser disc standard: Blu-ray Disc (BD), which satisfies the demands of high-density video programs. However, almost all the related patents are held by large companies such as Sony and Philips, so substantial licensing fees must be paid whenever products use BD. As our own high-density optical disc storage system, the Next-Generation Versatile Disc (NVD) proposes a new data format and error correction code with independent intellectual property rights and high cost performance; it offers higher coding efficiency than DVD and a 12 GB capacity, which can meet the demands of playing high-density video programs. In this paper, we develop a channel encoding process based on Low-Density Parity-Check (LDPC) codes, and an application scheme using a Q-matrix-based LDPC encoder is applied in NVD's channel decoder. Combined with the embedded-system portability of the SOPC platform, we have implemented all the decoding modules in an FPGA. Tests were carried out in the NVD experimental environment. Although there are conflicts between LDPC codes and the Run-Length-Limited (RLL) modulation codes frequently used in optical storage systems, the system is provided as a suitable solution. At the same time, it overcomes the instability and inextensibility of the former NVD decoding system, which was implemented in hardware.
Wang, Andong; Zhu, Long; Chen, Shi; Du, Cheng; Mo, Qi; Wang, Jian
2016-05-30
Mode-division multiplexing over fibers has attracted increasing attention over the last few years as a potential solution to further increase fiber transmission capacity. In this paper, we demonstrate the viability of orbital angular momentum (OAM) modes transmission over a 50-km few-mode fiber (FMF). By analyzing mode properties of eigen modes in an FMF, we study the inner mode group differential modal delay (DMD) in FMF, which may influence the transmission capacity in long-distance OAM modes transmission and multiplexing. To mitigate the impact of large inner mode group DMD in long-distance fiber-based OAM modes transmission, we use low-density parity-check (LDPC) codes to increase the system reliability. By evaluating the performance of LDPC-coded single OAM mode transmission over 50-km fiber, significant coding gains of >4 dB, 8 dB and 14 dB are demonstrated for 1-Gbaud, 2-Gbaud and 5-Gbaud quadrature phase-shift keying (QPSK) signals, respectively. Furthermore, in order to verify and compare the influence of DMD in long-distance fiber transmission, single OAM mode transmission over 10-km FMF is also demonstrated in the experiment. Finally, we experimentally demonstrate OAM multiplexing and transmission over a 50-km FMF using LDPC-coded 1-Gbaud QPSK signals to compensate the influence of mode crosstalk and DMD in the 50 km FMF.
On the reduced-complexity of LDPC decoders for beyond 400 Gb/s serial optical transmission
NASA Astrophysics Data System (ADS)
Djordjevic, Ivan B.; Xu, Lei; Wang, Ting
2010-12-01
Two reduced-complexity (RC) LDPC decoders are proposed, which can be used in combination with large-girth LDPC codes to enable beyond-400 Gb/s serial optical transmission. We show that the optimally attenuated RC min-sum algorithm performs only 0.45 dB worse than the conventional sum-product algorithm, while having lower storage memory requirements and much lower latency. We further evaluate the proposed algorithms for use in beyond-400 Gb/s serial optical transmission in combination with a PolMUX 32-IPQ-based signal constellation and show that low BERs can be achieved at medium optical SNRs, while achieving a net coding gain above 11.4 dB.
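The check-node update of an attenuated (normalized) min-sum decoder of the kind mentioned above can be written compactly as below. The attenuation factor of 0.8 is a typical illustrative value, not necessarily the optimal value found in the paper.

```python
import numpy as np

def attenuated_min_sum_check(v2c, alpha=0.8):
    """Check-node update of the attenuated (normalized) min-sum algorithm.
    v2c: variable-to-check LLR messages on the edges of one check node.
    Returns the check-to-variable messages for the same edges."""
    v2c = np.asarray(v2c, dtype=float)
    signs = np.where(v2c < 0, -1.0, 1.0)
    total_sign = np.prod(signs)
    mags = np.abs(v2c)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]
    out = alpha * min1 * total_sign * signs        # sign product excludes own edge
    out[order[0]] = alpha * min2 * total_sign * signs[order[0]]
    return out
```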
Low Density Parity Check Codes: Bandwidth Efficient Channel Coding
NASA Technical Reports Server (NTRS)
Fong, Wai; Lin, Shu; Maki, Gary; Yeh, Pen-Shu
2003-01-01
Low Density Parity Check (LDPC) Codes provide near-Shannon-capacity performance for NASA missions. These codes have high coding rates, R = 0.82 and 0.875, with moderate code lengths, n = 4096 and 8176. Their decoders have inherently parallel structures, which allows for high-speed implementation. Two codes based on Euclidean Geometry (EG) were selected for flight ASIC implementation. These codes are cyclic and quasi-cyclic in nature and therefore have a simple encoder structure, which results in power and size benefits. These codes also have a large minimum distance, as much as d_min = 65, giving them powerful error-correcting capabilities and very low error floors. This paper will present the development of the LDPC flight encoder and decoder, its applications, and status.
Statistical mechanics of broadcast channels using low-density parity-check codes.
Nakamura, Kazutaka; Kabashima, Yoshiyuki; Morelos-Zaragoza, Robert; Saad, David
2003-03-01
We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time sharing methods when algebraic codes are used. The statistical physics based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC based time sharing codes while the best performance, when received transmissions are optimally decoded, is bounded by the time sharing limit.
Accumulate-Repeat-Accumulate-Accumulate-Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Thorpe, Jeremy
2004-01-01
Inspired by recently proposed Accumulate-Repeat-Accumulate (ARA) codes [15], in this paper we propose a channel coding scheme called Accumulate-Repeat-Accumulate-Accumulate (ARAA) codes. These codes can be seen as serial turbo-like codes or as a subclass of Low Density Parity Check (LDPC) codes, and they have a projected graph or protograph representation; this allows for a high-speed iterative decoder implementation using belief propagation. An ARAA code can be viewed as a precoded Repeat-and-Accumulate (RA) code with puncturing in concatenation with another accumulator, where simply an accumulator is chosen as the precoder; thus ARAA codes have a very fast encoder structure. Using density evolution on their associated protographs, we find examples of rate-1/2 ARAA codes with maximum variable node degree 4 for which a minimum bit-SNR as low as 0.21 dB from the channel capacity limit can be achieved as the block size goes to infinity. Such a low threshold cannot be achieved by RA or Irregular RA (IRA) or unstructured irregular LDPC codes with the same constraint on the maximum variable node degree. Furthermore by puncturing the accumulators we can construct families of higher rate ARAA codes with thresholds that stay close to their respective channel capacity thresholds uniformly. Iterative decoding simulation results show comparable performance with the best-known LDPC codes but with very low error floor even at moderate block sizes.
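To make the encoder structure concrete, here is a toy Accumulate-Repeat-Accumulate-Accumulate chain (precoding accumulator, repetition, permutation, two accumulators). Puncturing and the protograph-specific edge connections are intentionally omitted, so this is only a structural sketch under simplifying assumptions, not the codes analyzed in the paper.

```python
import numpy as np

def accumulate(bits):
    """1/(1+D) accumulator over GF(2): running XOR of the input sequence."""
    return np.bitwise_xor.accumulate(np.asarray(bits, dtype=int))

def araa_encode_sketch(msg, q=3, seed=0):
    """Toy ARAA chain: precode with an accumulator, repeat each bit q times,
    permute, then accumulate twice. No puncturing, no protograph structure."""
    rng = np.random.default_rng(seed)
    precoded = accumulate(msg)
    repeated = np.repeat(precoded, q)
    permuted = repeated[rng.permutation(len(repeated))]
    return accumulate(accumulate(permuted))
```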
LDPC-based iterative joint source-channel decoding for JPEG2000.
Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane
2007-02-01
A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.
Yang, Qi; Al Amin, Abdullah; Chen, Xi; Ma, Yiran; Chen, Simin; Shieh, William
2010-08-02
High-order modulation formats and advanced error correcting codes (ECC) are two promising techniques for improving the performance of ultrahigh-speed optical transport networks. In this paper, we present record receiver sensitivity for 107 Gb/s CO-OFDM transmission via constellation expansion to 16-QAM and rate-1/2 LDPC coding. We also show the single-channel transmission of a 428-Gb/s CO-OFDM signal over 960-km standard-single-mode-fiber (SSMF) without Raman amplification.
LDPC product coding scheme with extrinsic information for bit patterned media recording
NASA Astrophysics Data System (ADS)
Jeong, Seongkwon; Lee, Jaejin
2017-05-01
Since the density limit of the current perpendicular magnetic storage system will soon be reached, bit patterned media recording (BPMR) is a promising candidate for the next generation storage system to achieve an areal density beyond 1 Tb/in2. Each recording bit is stored in a fabricated magnetic island and the space between the magnetic islands is nonmagnetic in BPMR. To approach recording densities of 1 Tb/in2, the spacing of the magnetic islands must be less than 25 nm. Consequently, severe inter-symbol interference (ISI) and inter-track interference (ITI) occur. ITI and ISI degrade the performance of BPMR. In this paper, we propose a low-density parity check (LDPC) product coding scheme that exploits extrinsic information for BPMR. This scheme shows an improved bit error rate performance compared to that in which one LDPC code is used.
Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique
NASA Astrophysics Data System (ADS)
Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi
Reducing the power dissipation of LDPC code decoders is a major challenge in applying them to practical digital communication systems. In this paper, we propose a low-power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) the intermediate message compression technique enables the decoder to reduce the required memory capacity and write power dissipation; (ii) a clock-gated, shift-register-based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of these two techniques enables the decoder to reduce the power dissipation while maintaining the decoding throughput. Simulation results show that the proposed architecture improves the power efficiency by up to 52% and 18% compared to decoders based on the overlapped schedule and the rapid convergence schedule, respectively, without the proposed techniques.
45 Gb/s low complexity optical front-end for soft-decision LDPC decoders.
Sakib, Meer Nazmus; Moayedi, Monireh; Gross, Warren J; Liboiron-Ladouceur, Odile
2012-07-30
In this paper, a low-complexity and energy-efficient 45 Gb/s soft-decision optical front-end to be used with soft-decision low-density parity-check (LDPC) decoders is demonstrated. The results show that the optical front-end exhibits net coding gains of 7.06 and 9.62 dB at post-forward-error-correction bit error rates of 10(-7) and 10(-12) for the long-block-length LDPC(32768,26803) code. The gain over a hard-decision front-end is 1.9 dB for this code. It is shown that the soft-decision circuit can also be used as a 2-bit flash-type analog-to-digital converter (ADC) in conjunction with equalization schemes. At a bit rate of 15 Gb/s, using RS(255,239), LDPC(672,336), (672,504), (672,588), and (1440,1344) codes with a 6-tap finite impulse response (FIR) equalizer results in optical power savings of 3, 5, 7, 9.5, and 10.5 dB, respectively. The 2-bit flash ADC consumes only 2.71 W at 32 GSamples/s; at 45 GSamples/s the power consumption is estimated to be 4.95 W.
High-efficiency reconciliation for continuous variable quantum key distribution
NASA Astrophysics Data System (ADS)
Bai, Zengliang; Yang, Shenshen; Li, Yongmin
2017-04-01
Quantum key distribution (QKD) is the most mature application of quantum information technology. Information reconciliation is a crucial step in QKD and significantly affects the final secret key rates shared between two legitimate parties. We analyze and compare various construction methods of low-density parity-check (LDPC) codes and design high-performance irregular LDPC codes with a block length of 10^6. Starting from these good codes and exploiting the slice reconciliation technique based on multilevel coding and multistage decoding, we realize high-efficiency Gaussian key reconciliation with efficiency higher than 95% for signal-to-noise ratios above 1. Our demonstrated method can be readily applied in continuous variable QKD.
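Reconciliation efficiency is usually quoted against the Gaussian channel capacity; a simplified per-dimension definition, with the code rate and SNR as assumed inputs, is sketched below. The multilevel slice construction used in the paper is not modeled here.

```python
import numpy as np

def reconciliation_efficiency(code_rate, snr):
    """Simplified per-dimension efficiency beta = R / C(s), where
    C(s) = 0.5 * log2(1 + s) is the capacity of one real AWGN channel use."""
    return code_rate / (0.5 * np.log2(1.0 + snr))

# Example: a rate-0.48 code operated at SNR = 1 (capacity 0.5 bit/dimension)
# corresponds to an efficiency of 0.96, i.e. 96%.
print(reconciliation_efficiency(0.48, 1.0))
```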
Memory-efficient decoding of LDPC codes
NASA Technical Reports Server (NTRS)
Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon
2005-01-01
We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, simulations show that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer operates with less than 0.1 dB quantization loss.
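The design criterion, maximizing the mutual information between the transmitted bit and the quantized message, can be estimated by Monte Carlo as below. The sketch applies the criterion to the channel LLRs of a BPSK/AWGN setup for simplicity, whereas the paper optimizes the decoder's internal messages; the thresholds passed in would be the parameters being optimized, and all names are illustrative assumptions.

```python
import numpy as np

def quantized_llr_mi(snr_db, thresholds, n=200_000, seed=1):
    """Monte-Carlo estimate of the mutual information between a BPSK bit and its
    quantized channel LLR for a given set of sorted quantizer thresholds."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    sigma = np.sqrt(1.0 / (2.0 * snr))            # Es/N0 convention with Es = 1
    bits = rng.integers(0, 2, n)
    y = (1.0 - 2.0 * bits) + sigma * rng.normal(size=n)
    llr = 2.0 * y / sigma ** 2
    cells = np.digitize(llr, thresholds)
    mi = 0.0
    for b in (0, 1):
        for c in range(len(thresholds) + 1):
            p_joint = np.mean((bits == b) & (cells == c))
            if p_joint > 0:
                mi += p_joint * np.log2(p_joint / (np.mean(bits == b) * np.mean(cells == c)))
    return mi
```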
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2008-09-15
We present two PMD compensation schemes suitable for use in multilevel (M>=2) block-coded modulation schemes with coherent detection. The first scheme is based on a BLAST-type polarization-interference cancellation scheme, and the second is based on iterative polarization cancellation. Both schemes use LDPC codes as channel codes. The proposed PMD compensation schemes are evaluated by employing coded-OFDM and coherent detection. When used in combination with girth-10 LDPC codes, these schemes outperform polarization-time-coding-based OFDM by 1 dB at a BER of 10(-9) and provide two times higher spectral efficiency. The proposed schemes perform comparably and are able to compensate even 1200 ps of differential group delay with negligible penalty.
Statistical physics inspired energy-efficient coded-modulation for optical communications.
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2012-04-15
Because Shannon's entropy can be obtained by Stirling's approximation of the thermodynamic entropy, statistical physics energy minimization methods are directly applicable to signal constellation design. We demonstrate that statistical physics inspired energy-efficient (EE) signal constellation designs, in combination with large-girth low-density parity-check (LDPC) codes, significantly outperform conventional LDPC-coded polarization-division multiplexed quadrature amplitude modulation schemes. We also describe an EE signal constellation design algorithm. Finally, we propose the discrete-time implementation of a D-dimensional transceiver and the corresponding EE polarization-division multiplexed system.
Photonic entanglement-assisted quantum low-density parity-check encoders and decoders.
Djordjevic, Ivan B
2010-05-01
I propose encoder and decoder architectures for entanglement-assisted (EA) quantum low-density parity-check (LDPC) codes suitable for all-optical implementation. I show that two basic gates needed for EA quantum error correction, namely, controlled-NOT (CNOT) and Hadamard gates can be implemented based on Mach-Zehnder interferometer. In addition, I show that EA quantum LDPC codes from balanced incomplete block designs of unitary index require only one entanglement qubit to be shared between source and destination.
On the optimum signal constellation design for high-speed optical transport networks.
Liu, Tao; Djordjevic, Ivan B
2012-08-27
In this paper, we first describe an optimum signal constellation design algorithm, optimum in the MMSE sense and called MMSE-OSCD, for a channel-capacity-achieving source distribution. Secondly, we introduce a feedback-channel-capacity-inspired optimum signal constellation design (FCC-OSCD) to further improve the performance of MMSE-OSCD, motivated by the fact that feedback channel capacity is higher than that of systems without feedback. The constellations obtained by FCC-OSCD are, however, OSNR dependent. The optimization is performed jointly with regular quasi-cyclic low-density parity-check (LDPC) code design. The coded-modulation scheme so obtained, in combination with polarization-multiplexing, is suitable as an enabling technology for both 400 Gb/s and multi-Tb/s optical transport. Using a large-girth LDPC code, we demonstrate by Monte Carlo simulations that a 32-ary signal constellation obtained by FCC-OSCD outperforms the previously proposed optimized 32-ary CIPQ signal constellation by 0.8 dB at a BER of 10(-7). On the other hand, the LDPC-coded 16-ary FCC-OSCD outperforms 16-QAM by 1.15 dB at the same BER.
Capacity Maximizing Constellations
NASA Technical Reports Server (NTRS)
Barsoum, Maged; Jones, Christopher
2010-01-01
Some non-traditional signal constellations have been proposed for transmission of data over the Additive White Gaussian Noise (AWGN) channel using such channel-capacity-approaching codes as low-density parity-check (LDPC) or turbo codes. Computational simulations have shown performance gains of more than 1 dB over traditional constellations. These gains could be translated to bandwidth-efficient communications, variously, over longer distances, using less power, or using smaller antennas. The proposed constellations have been used in a bit-interleaved coded modulation system employing state-of-the-art LDPC codes. In computational simulations, these constellations were shown to afford performance gains over traditional constellations as predicted by the gap between the parallel decoding capacity of the constellations and the Gaussian capacity.
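The capacity gap referred to above is measured against constellation-constrained mutual information, which can be estimated for any candidate constellation by Monte Carlo as in the sketch below. This estimator computes the symbol-wise (joint) mutual information with uniform inputs, a close cousin of the parallel-decoding capacity discussed in the abstract rather than the authors' exact metric; all names are assumptions.

```python
import numpy as np

def constellation_mi(points, snr_db, n=100_000, seed=2):
    """Monte-Carlo estimate of the mutual information (bits/symbol) of a 2-D
    constellation with uniform inputs over the AWGN channel."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=complex)
    pts = pts / np.sqrt(np.mean(np.abs(pts) ** 2))      # unit average energy
    m = len(pts)
    n0 = 10 ** (-snr_db / 10)                            # Es = 1, so N0 = 1 / SNR
    idx = rng.integers(0, m, n)
    noise = np.sqrt(n0 / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))
    y = pts[idx] + noise
    # log p(y|x_j) up to a common constant that cancels in the ratio below
    log_p = -np.abs(y[:, None] - pts[None, :]) ** 2 / n0
    log_num = log_p[np.arange(n), idx]
    mx = log_p.max(axis=1)
    log_sum = mx + np.log(np.exp(log_p - mx[:, None]).sum(axis=1))
    return np.log2(m) - np.mean(log_sum - log_num) / np.log(2)
```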
Multiple component codes based generalized LDPC codes for high-speed optical transport.
Djordjevic, Ivan B; Wang, Ting
2014-07-14
A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes, derived from multiple component codes. We then show that several recently proposed classes of LDPC codes such as convolutional and spatially-coupled codes can be described using the concept of GLDPC coding, which indicates that the GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaption, to adjust the error correction strength depending on the optical channel conditions.
Maximum likelihood decoding analysis of Accumulate-Repeat-Accumulate Codes
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
Repeat-Accumulate (RA) codes are the simplest turbo-like codes that achieve good performance. However, they cannot compete with turbo codes or low-density parity-check (LDPC) codes as far as performance is concerned. Accumulate-Repeat-Accumulate (ARA) codes, a subclass of LDPC codes, are obtained by adding a precoder in front of RA codes with puncturing, where an accumulator is chosen as the precoder. These codes not only are very simple, but also achieve excellent performance with iterative decoding. In this paper, the performance of these codes with maximum likelihood (ML) decoding is analyzed and compared to random codes by means of very tight bounds. The weight distribution of some simple ARA codes is obtained, and through the existing tightest bounds we show that the ML SNR threshold of ARA codes approaches the performance of random codes very closely. We also show that the use of the precoder improves the SNR threshold, while the interleaving gain remains unchanged with respect to the RA code with puncturing.
Performance optimization of PM-16QAM transmission system enabled by real-time self-adaptive coding.
Qu, Zhen; Li, Yao; Mo, Weiyang; Yang, Mingwei; Zhu, Shengxiang; Kilper, Daniel C; Djordjevic, Ivan B
2017-10-15
We experimentally demonstrate self-adaptive coded 5×100 Gb/s WDM polarization-multiplexed 16 quadrature amplitude modulation transmission over a 100 km fiber link, enabled by a real-time control plane. The real-time optical signal-to-noise ratio (OSNR) is measured using an optical performance monitoring device. The OSNR measurement is processed and fed back using control-plane logic and messaging to the transmitter side for code adaptation, where the binary data are adaptively encoded with three types of large-girth low-density parity-check (LDPC) codes with code rates of 0.8, 0.75, and 0.7. The total code-adaptation latency is measured to be 2273 ms. Compared with transmission without adaptation, average net capacity improvements of 102%, 36%, and 7.5% are obtained, respectively, by adaptive LDPC coding.
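A control-plane code-adaptation step of the kind described reduces, at its core, to a lookup from the monitored OSNR to one of the available code rates. The OSNR breakpoints in this sketch are placeholders, not values from the experiment.

```python
def select_code_rate(osnr_db):
    """Map a monitored OSNR reading to one of the three available LDPC code rates.
    The OSNR breakpoints are illustrative placeholders, not values from the paper."""
    if osnr_db >= 20.0:
        return 0.8
    if osnr_db >= 18.0:
        return 0.75
    return 0.7
```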
Performance analysis of LDPC codes on OOK terahertz wireless channels
NASA Astrophysics Data System (ADS)
Chun, Liu; Chang, Wang; Jun-Cheng, Cao
2016-02-01
Atmospheric absorption, scattering, and scintillation are the major causes of degraded transmission quality in terahertz (THz) wireless communications. An error control coding scheme based on low-density parity-check (LDPC) codes with a soft-decision decoding algorithm is proposed to improve the bit-error-rate (BER) performance of an on-off keying (OOK) modulated THz signal transmitted through the atmospheric channel. The THz wave propagation characteristics and a channel model in the atmosphere are set up. Numerical simulations validate the strong performance of LDPC codes against atmospheric fading and demonstrate their huge potential for future ultra-high-speed (beyond Gbps) THz communications. Project supported by the National Key Basic Research Program of China (Grant No. 2014CB339803), the National High Technology Research and Development Program of China (Grant No. 2011AA010205), the National Natural Science Foundation of China (Grant Nos. 61131006, 61321492, and 61204135), the Major National Development Project of Scientific Instrument and Equipment (Grant No. 2011YQ150021), the National Science and Technology Major Project (Grant No. 2011ZX02707), the International Collaboration and Innovation Program on High Mobility Materials Engineering of the Chinese Academy of Sciences, and the Shanghai Municipal Commission of Science and Technology (Grant No. 14530711300).
NASA Astrophysics Data System (ADS)
Chang, Chun; Huang, Benxiong; Xu, Zhengguang; Li, Bin; Zhao, Nan
2018-02-01
Three soft-input soft-output (SISO) detection methods for dual-polarized quadrature duobinary (DP-QDB), namely maximum-logarithmic maximum-a-posteriori-probability (Max-log-MAP) based detection, soft-output Viterbi algorithm (SOVA) based detection, and a proposed SISO detection, all of which can be combined with SISO decoding, are presented. The three detection methods are investigated by simulation at 128 Gb/s in five-channel wavelength-division-multiplexing uncoded and low-density parity-check (LDPC) coded DP-QDB systems. Max-log-MAP-based detection requires the returning-to-initial-states (RTIS) process despite having the best performance. When an LDPC code with a code rate of 0.83 is used, the detecting-and-decoding scheme with the proposed SISO detection does not need RTIS and has better bit error rate (BER) performance than the scheme with SOVA-based detection; the former can reduce the optical signal-to-noise ratio (OSNR) requirement (at BER = 10^-5) by 2.56 dB relative to the latter. The application of the SISO iterative detection in LDPC-coded DP-QDB systems makes a good trade-off among transmission efficiency, OSNR requirement, and transmission distance, compared with the other two SISO methods.
NASA Technical Reports Server (NTRS)
Simon, Marvin; Valles, Esteban; Jones, Christopher
2008-01-01
This paper addresses the carrier-phase estimation problem under low SNR conditions as are typical of turbo- and LDPC-coded applications. In previous publications by the first author, closed-loop carrier synchronization schemes for error-correction coded BPSK and QPSK modulation were proposed that were based on feeding back hard data decisions at the input of the loop, the purpose being to remove the modulation prior to attempting to track the carrier phase as opposed to the more conventional decision-feedback schemes that incorporate such feedback inside the loop. In this paper, we consider an alternative approach wherein the extrinsic soft information from the iterative decoder of turbo or LDPC codes is instead used as the feedback.
Percolation bounds for decoding thresholds with correlated erasures in quantum LDPC codes
NASA Astrophysics Data System (ADS)
Hamilton, Kathleen; Pryadko, Leonid
Correlations between errors can dramatically affect decoding thresholds, in some cases eliminating the threshold altogether. We analyze the existence of a threshold for quantum low-density parity-check (LDPC) codes in the case of correlated erasures. When erasures are positively correlated, the corresponding multi-variate Bernoulli distribution can be modeled in terms of cluster errors, where qubits in clusters of various size can be marked all at once. In a code family with distance scaling as a power law of the code length, erasures can be always corrected below percolation on a qubit adjacency graph associated with the code. We bound this correlated percolation transition by weighted (uncorrelated) percolation on a specially constructed cluster connectivity graph, and apply our recent results to construct several bounds for the latter. This research was supported in part by the NSF Grant PHY-1416578 and by the ARO Grant W911NF-14-1-0272.
Manimegalai, C T; Gauni, Sabitha; Kalimuthu, K
2017-12-04
Wireless body area networks (WBAN) are a breakthrough technology in healthcare areas such as hospitals and telemedicine. The human body is a complex mixture of different tissues, and the propagation of electromagnetic signals is expected to differ in each of these tissues; this forms the basis for the WBAN channel, which is different from other environments. In this paper, knowledge of the Ultra Wide Band (UWB) channel is exploited in a WBAN (IEEE 802.15.6) system. Measurements of the channel parameters are taken over the frequency range of 3.1-10.6 GHz. The proposed system transmits data at up to 480 Mbps by using LDPC-coded, APSK-modulated, differential space-time-frequency coded MB-OFDM to increase the throughput and power efficiency.
An LDPC decoder architecture for wireless sensor network applications.
Biroli, Andrea Dario Giancarlo; Martina, Maurizio; Masera, Guido
2012-01-01
The pervasive use of wireless sensors in a growing spectrum of human activities reinforces the need for devices with low energy dissipation. In this work, coded communication between a couple of wireless sensor devices is considered as a method to reduce the dissipated energy per transmitted bit with respect to uncoded communication. Different Low Density Parity Check (LDPC) codes are considered to this purpose and post layout results are shown for a low-area low-energy decoder, which offers percentage energy savings with respect to the uncoded solution in the range of 40%-80%, depending on considered environment, distance and bit error rate.
On the Existence of t-Identifying Codes in Undirected De Bruijn Networks
2015-08-04
remaining cases remain open. Additionally, we show that the eccentricity of the undirected non-binary de Bruijn graph is n.
Rate-Compatible LDPC Codes with Linear Minimum Distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel
2009-01-01
A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either fixed input block size or fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first mentioned submethod is useful for applications in which there are requirements for rate-compatible codes that have fixed input block sizes. These are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-blocksize submethod is useful for applications in which framing constraints are imposed on the physical layers of affected communication systems. An example of such a system is one that conforms to one of many new wireless-communication standards that involve the use of orthogonal frequency-division modulation
Adaptive software-defined coded modulation for ultra-high-speed optical transport
NASA Astrophysics Data System (ADS)
Djordjevic, Ivan B.; Zhang, Yequn
2013-10-01
In optically-routed networks, different wavelength channels carrying the traffic to different destinations can have quite different optical signal-to-noise ratios (OSNRs), and the signal is differently impacted by various channel impairments. Regardless of the data destination, an optical transport system (OTS) must provide the target bit-error rate (BER) performance. To provide the target BER regardless of the data destination, we adjust the forward error correction (FEC) strength. Depending on the information obtained from the monitoring channels, we select the appropriate code rate matching the OSNR range into which the current channel OSNR falls. To avoid frame synchronization issues, we keep the codeword length fixed independent of the FEC code being employed. The common denominator is the employment of quasi-cyclic (QC-) LDPC codes in FEC. For high-speed implementation, low-complexity LDPC decoding algorithms are needed, and some of them will be described in this invited paper. Instead of conventional QAM-based modulation schemes, we employ the signal constellations obtained by the optimum signal constellation design (OSCD) algorithm. To improve the spectral efficiency, we perform the simultaneous rate adaptation and signal constellation size selection so that the product of number of bits per symbol × code rate is closest to the channel capacity. Further, we describe the advantages of using 4D signaling instead of polarization-division multiplexed (PDM) QAM, by using the 4D MAP detection, combined with LDPC coding, in a turbo equalization fashion. Finally, to solve the problems related to the limited bandwidth of information infrastructure, high energy consumption, and heterogeneity of optical networks, we describe an adaptive energy-efficient hybrid coded-modulation scheme, which in addition to amplitude, phase, and polarization state employs the spatial modes as additional basis functions for multidimensional coded-modulation.
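The joint rate and constellation-size selection criterion (bits per symbol × code rate closest to, but not exceeding, the estimated capacity) can be expressed as a small search. The candidate constellation sizes and code rates below are illustrative assumptions, not the sets used in the paper.

```python
import numpy as np

def select_mod_and_rate(capacity_bits,
                        sizes=(4, 8, 16, 32, 64),
                        rates=(0.7, 0.75, 0.8, 0.85, 0.9)):
    """Pick (constellation size, code rate) so that bits/symbol * code rate is as
    close as possible to, without exceeding, the estimated channel capacity."""
    best, best_se = None, -1.0
    for m in sizes:
        for r in rates:
            se = np.log2(m) * r
            if se <= capacity_bits and se > best_se:
                best, best_se = (m, r), se
    return best, best_se
```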
NASA Technical Reports Server (NTRS)
Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio
2007-01-01
This slide presentation reviews the objectives, meeting goals and overall NASA goals for the NASA Data Standards Working Group. The presentation includes information on the technical progress surrounding the objective, short LDPC codes, and the general results on the Pu-Pw tradeoff.
An Efficient Downlink Scheduling Strategy Using Normal Graphs for Multiuser MIMO Wireless Systems
NASA Astrophysics Data System (ADS)
Chen, Jung-Chieh; Wu, Cheng-Hsuan; Lee, Yao-Nan; Wen, Chao-Kai
Inspired by the success of the low-density parity-check (LDPC) codes in the field of error-control coding, in this paper we propose transforming the downlink multiuser multiple-input multiple-output scheduling problem into an LDPC-like problem using the normal graph. Based on the normal graph framework, soft information, which indicates the probability that each user will be scheduled to transmit packets at the access point through a specified angle-frequency sub-channel, is exchanged among the local processors to iteratively optimize the multiuser transmission schedule. Computer simulations show that the proposed algorithm can efficiently schedule simultaneous multiuser transmission which then increases the overall channel utilization and reduces the average packet delay.
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Tong, Qing-zhen; Huang, Sheng; Wang, Yong
2013-11-01
An effective hierarchical reliable belief propagation (HRBP) decoding algorithm is proposed according to the structural characteristics of systematically constructed Gallager low-density parity-check (SCG-LDPC) codes. The novel decoding algorithm combines layered iteration with reliability judgment, and can greatly reduce the number of variable nodes involved in the subsequent iteration process and accelerate the convergence rate. Simulation results for the SCG-LDPC(3969,3720) code show that the novel HRBP decoding algorithm can greatly reduce the computational load while maintaining performance compared with the traditional belief propagation (BP) algorithm. The bit error rate (BER) of the HRBP algorithm is considerable at a threshold value of 15, but in the subsequent iteration process the number of variable nodes for the HRBP algorithm can be reduced by about 70% at high signal-to-noise ratio (SNR) compared with the BP algorithm. When the threshold value is further increased, the HRBP algorithm gradually degenerates into the layered-BP algorithm, but at a BER of 10^-7 and a maximum of 30 iterations, the net coding gain (NCG) of the HRBP algorithm is 0.2 dB more than that of the BP algorithm, and the average number of iterations can be reduced by about 40% at high SNR. Therefore, the novel HRBP decoding algorithm is well suited to optical communication systems.
Adaptive transmission based on multi-relay selection and rate-compatible LDPC codes
NASA Astrophysics Data System (ADS)
Su, Hualing; He, Yucheng; Zhou, Lin
2017-08-01
In order to adapt to dynamically changing channel conditions and improve the transmission reliability of the system, a cooperation system combining rate-compatible low-density parity-check (RC-LDPC) codes with a multi-relay selection protocol is proposed. In the traditional relay selection protocol, only the channel state information (CSI) of the source-relay links and the CSI of the relay-destination links are considered. The multi-relay selection protocol proposed in this paper additionally takes the CSI between relays into account in order to obtain more opportunities for collaboration. Additionally, the ideas of hybrid automatic repeat request (HARQ) and rate compatibility are introduced. Simulation results show that the transmission reliability of the system can be significantly improved by the proposed protocol.
Liu, Tao; Djordjevic, Ivan B
2014-12-29
In this paper, we first describe an optimal signal constellation design algorithm suitable for coherent optical channels dominated by linear phase noise. Then, we modify this algorithm to make it suitable for channels dominated by nonlinear phase noise. In the optimization procedure, the proposed algorithm uses the cumulative log-likelihood function instead of the Euclidean distance. Further, an LDPC-coded modulation scheme is proposed to be used in combination with the signal constellations obtained by the proposed algorithm. Monte Carlo simulations indicate that the LDPC-coded modulation schemes employing the new constellation sets, obtained by our new signal constellation design algorithm, significantly outperform corresponding QAM constellations in terms of transmission distance and have better nonlinearity tolerance.
NASA Astrophysics Data System (ADS)
Jiang, Xue-Qin; Huang, Peng; Huang, Duan; Lin, Dakai; Zeng, Guihua
2017-02-01
Achieving information-theoretic security with practical complexity is of great interest for continuous-variable quantum key distribution in the postprocessing procedure. In this paper, we propose a reconciliation scheme based on punctured low-density parity-check (LDPC) codes. Compared to the well-known multidimensional reconciliation scheme, the present scheme has lower time complexity. Especially when the chosen punctured LDPC code achieves the Shannon capacity, the proposed reconciliation scheme can remove the information that has been leaked to an eavesdropper in the quantum transmission phase. Therefore, no information is leaked to the eavesdropper after the reconciliation stage. This indicates that the privacy amplification algorithm of the postprocessing procedure is no longer needed after the reconciliation process. These features lead to a higher secret key rate, optimal performance, and availability for the involved quantum key distribution scheme.
Bilayer Protograph Codes for Half-Duplex Relay Channels
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; VanNguyen, Thuy; Nosratinia, Aria
2013-01-01
Direct to Earth return links are limited by the size and power of lander devices. A standard alternative is provided by a two-hop return link: a proximity link (from lander to orbiter relay) and a deep-space link (from orbiter relay to Earth). Using this additional link and a proposed coding for relay channels, one can obtain a more reliable signal. Although significant progress has been made in the relay coding problem, existing codes must be painstakingly optimized to match a single set of channel conditions, many of them do not offer easy encoding, and most of them do not have a structured design. A high-performing LDPC (low-density parity-check) code for the relay channel addresses simultaneously two important issues: a code structure that allows low encoding complexity, and a flexible rate-compatible code that allows matching to various channel conditions. Most of the previous high-performance LDPC codes for the relay channel are tightly optimized for a given channel quality, and are not easily adapted without extensive re-optimization for various channel conditions. This code for the relay channel combines structured design and easy encoding with rate compatibility to allow adaptation to the three links involved in the relay channel, and furthermore offers very good performance. The proposed code is constructed by synthesizing a bilayer structure with a protograph. In addition to the contribution to relay encoding, an improved family of protograph codes was produced for the point-to-point AWGN (additive white Gaussian noise) channel whose high-rate members enjoy thresholds that are within 0.07 dB of capacity. These LDPC relay codes address three important issues in an integrative manner: low encoding complexity, modular structure allowing for easy design, and rate compatibility so that the code can be easily matched to a variety of channel conditions without extensive re-optimization. The main problem of half-duplex relay coding can be reduced to the simultaneous design of two codes at two rates and two SNRs (signal-to-noise ratios), such that one is a subset of the other. This problem can be addressed by forceful optimization, but a clever method of addressing it is via the bilayer lengthened (BL) LDPC structure. This method uses a bilayer Tanner graph to make the two codes while using a concept of "parity forwarding" with subsequent successive decoding that removes the need to directly address the issue of uneven SNRs among the symbols of a given codeword. This method is attractive in that it addresses some of the main issues in the design of relay codes, but it does not by itself give rise to highly structured codes with simple encoding, nor does it give rate-compatible codes. The main contribution of this work is to construct a class of codes that simultaneously possess a bilayer parity-forwarding mechanism, while also benefiting from the properties of protograph codes: easy encoding, a modular design, and rate compatibility.
Nonlinear demodulation and channel coding in EBPSK scheme.
Chen, Xianqing; Wu, Lenan
2012-01-01
Extended binary phase shift keying (EBPSK) is an efficient modulation technique, and a special impacting filter (SIF) is used in its demodulator to improve the bit error rate (BER) performance. However, the conventional threshold decision cannot achieve the optimum performance, and the SIF makes it more difficult to obtain the posterior probability for LDPC decoding. In this paper, we concentrate not only on reducing the BER of demodulation, but also on providing accurate posterior probability estimates (PPEs). A new approach for nonlinear demodulation based on a support vector machine (SVM) classifier is introduced. The SVM method, which selects only a few sampling points from the filter output, is used for obtaining PPEs. The simulation results show that accurate posterior probabilities can be obtained with this method and that the BER performance can be improved significantly by applying LDPC codes. Moreover, we analyze the effect of obtaining the posterior probability with different methods and different sampling rates, and show that the SVM method has more advantages under bad conditions and is less sensitive to the sampling rate than other methods. Thus, SVM is an effective method for EBPSK demodulation and for obtaining the posterior probability for LDPC decoding.
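A minimal sketch of an SVM-based soft demodulator of the kind described, using scikit-learn's probability outputs and converting them to log-likelihood ratios for the LDPC decoder. The feature matrix of sampled filter outputs and the training procedure are assumptions for illustration, not taken from the paper.

```python
import numpy as np
from sklearn.svm import SVC

def train_soft_demodulator(train_features, train_bits):
    """Fit an SVM classifier on sampled SIF-output features (one row per symbol)
    with Platt scaling enabled so that posterior probabilities are available."""
    return SVC(kernel="rbf", probability=True).fit(train_features, train_bits)

def posterior_llrs(clf, features, eps=1e-12):
    """Convert the SVM posterior P(bit = 1 | features) into LLRs
    log P(bit = 0) / P(bit = 1) for use as LDPC decoder input."""
    col = list(clf.classes_).index(1)
    p1 = clf.predict_proba(features)[:, col]
    return np.log((1.0 - p1 + eps) / (p1 + eps))
```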
Accumulate Repeat Accumulate Coded Modulation
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
In this paper we propose an innovative coded modulation scheme called 'Accumulate Repeat Accumulate Coded Modulation' (ARA coded modulation). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes that are combined with high level modulation. Thus at the decoder belief propagation can be used for iterative decoding of ARA coded modulation on a graph, provided a demapper transforms the received in-phase and quadrature samples to reliability of the bits.
Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei
2009-03-01
Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.
Joint Schemes for Physical Layer Security and Error Correction
ERIC Educational Resources Information Center
Adamo, Oluwayomi
2011-01-01
The major challenges facing resource-constrained wireless devices are error resilience, security, and speed. Three joint schemes are presented in this research, which could be broadly divided into error-correction based and cipher based. The error-correction based ciphers take advantage of the properties of LDPC codes and the Nordstrom-Robinson code. A…
Encoders for block-circulant LDPC codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)
2009-01-01
Methods and apparatus to encode message input symbols in accordance with an accumulate-repeat-accumulate code with repetition three or four are disclosed. Block circulant matrices are used. A first method and apparatus make use of the block-circulant structure of the parity check matrix. A second method and apparatus use block-circulant generator matrices.
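To illustrate why block-circulant structure simplifies encoding (the actual patented constructions are not reproduced here), the following Python sketch builds a toy systematic generator from circulant blocks and shows that the parity computation reduces to cyclic shifts and XORs of the message block.

# Toy block-circulant encoder sketch; circulant size and first rows are illustrative.
import numpy as np

def circulant(first_row):
    """Binary circulant matrix: row k is first_row cyclically shifted right by k."""
    return np.array([np.roll(first_row, k) for k in range(len(first_row))], dtype=int)

Q = 8                                    # circulant block size (illustrative)
msg = np.random.default_rng(1).integers(0, 2, Q)

# Systematic generator G = [ I  C1  C2 ] made of QxQ circulants (toy parity part).
C1 = circulant([1, 1, 0, 1, 0, 0, 0, 0])
C2 = circulant([1, 0, 0, 1, 1, 0, 0, 0])
G = np.hstack([np.eye(Q, dtype=int), C1, C2])

codeword = msg @ G % 2                   # dense multiply, for reference

def mul_circulant(m, first_row):
    """Same product as m @ circulant(first_row) % 2, using only shifts and XORs."""
    out = np.zeros_like(m)
    for k, bit in enumerate(m):
        if bit:
            out ^= np.roll(first_row, k)
    return out

parity1 = mul_circulant(msg, np.array([1, 1, 0, 1, 0, 0, 0, 0]))
assert np.array_equal(parity1, codeword[Q:2 * Q])
print(codeword)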
NASA Astrophysics Data System (ADS)
Fehenberger, Tobias
2018-02-01
This paper studies probabilistic shaping in a multi-span wavelength-division multiplexing optical fiber system with 64-ary quadrature amplitude modulation (QAM) input. In split-step fiber simulations and via an enhanced Gaussian noise model, three figures of merit are investigated: signal-to-noise ratio (SNR), achievable information rate (AIR) for capacity-achieving forward error correction (FEC) with bit-metric decoding, and the information rate achieved with low-density parity-check (LDPC) FEC. For the considered system parameters and different shaped input distributions, shaping is found to decrease the SNR by 0.3 dB yet simultaneously to increase the AIR by up to 0.4 bit per 4D-symbol. The information rates of LDPC-coded modulation with shaped 64QAM input are improved by up to 0.74 bit per 4D-symbol, which is larger than the shaping gain when considering AIRs. This increase is attributed to the reduced coding gap of the higher-rate code that is used for decoding the nonuniform QAM input.
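A common way to generate shaped QAM inputs of the kind studied above is a Maxwell-Boltzmann distribution over the constellation; the Python sketch below computes such a distribution for square 64-QAM and its entropy. The shaping parameter and normalization are illustrative choices, not values from the paper.

# Hedged sketch: Maxwell-Boltzmann shaping P(x) ∝ exp(-λ|x|²) over square 64-QAM.
import numpy as np
from itertools import product

pam = np.array([-7, -5, -3, -1, 1, 3, 5, 7], dtype=float)
const = np.array([complex(i, q) for i, q in product(pam, pam)])   # 64-QAM points

def mb_distribution(lmbda):
    w = np.exp(-lmbda * np.abs(const) ** 2)
    return w / w.sum()

lmbda = 0.02                                                       # illustrative λ
p = mb_distribution(lmbda)
entropy = -np.sum(p * np.log2(p))                                  # shaped source entropy
avg_energy = np.sum(p * np.abs(const) ** 2)                        # before power normalization
print(f"H(X) = {entropy:.3f} bit/symbol (uniform 64-QAM: 6.000), E|X|^2 = {avg_energy:.2f}")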
High performance reconciliation for continuous-variable quantum key distribution with LDPC code
NASA Astrophysics Data System (ADS)
Lin, Dakai; Huang, Duan; Huang, Peng; Peng, Jinye; Zeng, Guihua
2015-03-01
Reconciliation is a significant procedure in a continuous-variable quantum key distribution (CV-QKD) system. It is employed to extract a secure secret key from the string obtained through the quantum channel between two users. However, the efficiency and the speed of previous reconciliation algorithms are low. These problems limit the secure communication distance and the secure key rate of CV-QKD systems. In this paper, we propose a high-speed reconciliation algorithm that employs a well-structured decoding scheme based on low density parity-check (LDPC) codes. The complexity of the proposed algorithm is significantly reduced. By using a graphics processing unit (GPU) device, our method may reach a reconciliation speed of 25 Mb/s for a CV-QKD system, which is currently the highest level reported and paves the way to high-speed CV-QKD.
Design and performance investigation of LDPC-coded upstream transmission systems in IM/DD OFDM-PONs
NASA Astrophysics Data System (ADS)
Gong, Xiaoxue; Guo, Lei; Wu, Jingjing; Ning, Zhaolong
2016-12-01
In Intensity-Modulation Direct-Detection (IM/DD) Orthogonal Frequency Division Multiplexing Passive Optical Networks (OFDM-PONs), aside from Subcarrier-to-Subcarrier Intermixing Interferences (SSII) induced by square-law detection, the use of the same laser frequency for upstream data transmission from Optical Network Units (ONUs) results in ONU-to-ONU Beating Interferences (OOBI) at the receiver. To mitigate those interferences, we design a Low-Density Parity Check (LDPC)-coded and spectrum-efficient upstream transmission system. A theoretical channel model is also derived in order to analyze the detrimental factors influencing system performance. Simulation results demonstrate that the receiver sensitivity is improved by 3.4 dB and 2.5 dB under QPSK and 8QAM, respectively, after 100 km Standard Single-Mode Fiber (SSMF) transmission. Furthermore, the spectrum efficiency can be improved by about 50%.
MIMO-OFDM System's Performance Using LDPC Codes for a Mobile Robot
NASA Astrophysics Data System (ADS)
Daoud, Omar; Alani, Omar
This work deals with the performance of a Sniffer Mobile Robot (SNFRbot)-based spatial multiplexed wireless Orthogonal Frequency Division Multiplexing (OFDM) transmission technology. The use of Multi-Input Multi-Output (MIMO)-OFDM technology increases the wireless transmission rate without increasing transmission power or bandwidth. A generic multilayer architecture of the SNFRbot is proposed with low power and low cost. Some experimental results are presented and show the efficiency of sniffing deadly gases, sensing high temperatures and sending live videos of the monitored situation. Moreover, simulation results show the achieved performance by tackling the Peak-to-Average Power Ratio (PAPR) problem of the used technology using Low Density Parity Check (LDPC) codes; and the effect of combating the PAPR on the bit error rate (BER) and the signal to noise ratio (SNR) over a Doppler spread channel.
A new LDPC decoding scheme for PDM-8QAM BICM coherent optical communication system
NASA Astrophysics Data System (ADS)
Liu, Yi; Zhang, Wen-bo; Xi, Li-xia; Tang, Xian-feng; Zhang, Xiao-guang
2015-11-01
A new log-likelihood ratio (LLR) message estimation method is proposed for the polarization-division multiplexing eight quadrature amplitude modulation (PDM-8QAM) bit-interleaved coded modulation (BICM) optical communication system. The formulation of the posterior probability is theoretically analyzed, and a way to reduce the pre-decoding bit error rate (BER) of the low density parity check (LDPC) decoder for PDM-8QAM constellations is presented. Simulation results show that it outperforms the traditional scheme: the new post-decoding BER is reduced to 50% of that of the traditional post-decoding algorithm.
Fixed-Point Design of the Lattice-Reduction-Aided Iterative Detection and Decoding Receiver for Coded MIMO Systems
2011-01-01
Channel codes such as turbo codes [2] and low-density parity-check (LDPC) codes [3] improve reliability, and a key challenge is applying both MIMO and error-control coding in wireless systems; the report illustrates the performance of coded lattice-reduction-aided detectors.
Liu, Yanfang; Liao, Huidan; Liu, Ying; Guo, Juanjuan; Sun, Yi; Fu, Xiaoliang; Xiao, Ding; Cai, Jifeng; Lan, Lingmei; Xie, Pingli; Zha, Lagabaiyila
2017-04-01
Nonbinary single-nucleotide polymorphisms (SNPs) are potential forensic genetic markers because their discrimination power is greater than that of normal binary SNPs and because they can be detected in highly degraded samples. We previously developed a nonbinary SNP multiplex typing assay. In this study, we selected an additional 20 nonbinary SNPs from the NCBI SNP database and verified them through pyrosequencing. These 20 nonbinary SNPs were analyzed using the fluorescent-labeled SNaPshot multiplex SNP typing method. The allele frequencies and genetic parameters of these 20 nonbinary SNPs were determined among 314 unrelated individuals from Han populations from China. The total power of discrimination was 0.9999999999994, and the cumulative probability of exclusion was 0.9986. Moreover, combining this 20-nonbinary-SNP assay with the 20-nonbinary-SNP assay we previously developed showed that the cumulative probability of exclusion of the 40 nonbinary SNPs was 0.999991 and that no significant linkage disequilibrium was observed among the 40 nonbinary SNPs. Thus, we concluded that this new system consisting of 20 new nonbinary SNPs could provide highly informative polymorphic data for forensic applications and would serve as a potentially valuable supplement to forensic DNA analysis. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A Low-Complexity and High-Performance 2D Look-Up Table for LDPC Hardware Implementation
NASA Astrophysics Data System (ADS)
Chen, Jung-Chieh; Yang, Po-Hui; Lain, Jenn-Kaie; Chung, Tzu-Wen
In this paper, we propose a low-complexity, high-efficiency two-dimensional look-up table (2D LUT) for carrying out the sum-product algorithm in the decoding of low-density parity-check (LDPC) codes. Instead of employing adders for the core operation when updating check node messages, in the proposed scheme, the main term and correction factor of the core operation are successfully merged into a compact 2D LUT. Simulation results indicate that the proposed 2D LUT not only attains close-to-optimal bit error rate performance but also enjoys a low complexity advantage that is suitable for hardware implementation.
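The idea of merging the main term and the correction factor of the check-node core operation into a single quantized table can be sketched as follows in Python; the quantization step and table size are arbitrary illustrative choices rather than the values used by the authors.

# Sketch: pre-merge min(|a|,|b|) and the Jacobian correction into one 2D LUT.
import numpy as np

STEP, SIZE = 0.25, 64                        # LLR quantization step and table dimension (assumed)

def core_exact(a, b):
    """Check-node core ('boxplus' magnitude) on LLR magnitudes a, b >= 0."""
    return min(a, b) + np.log1p(np.exp(-(a + b))) - np.log1p(np.exp(-abs(a - b)))

mags = np.arange(SIZE) * STEP
LUT = np.array([[core_exact(x, y) for y in mags] for x in mags])   # built once

def core_lut(a, b):
    i = min(int(round(abs(a) / STEP)), SIZE - 1)
    j = min(int(round(abs(b) / STEP)), SIZE - 1)
    return np.sign(a) * np.sign(b) * LUT[i, j]

print(core_exact(1.3, 2.1), core_lut(1.3, 2.1))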
Measurement Techniques for Clock Jitter
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin; Schlesinger, Adam
2012-01-01
NASA is in the process of modernizing its communications infrastructure to accompany the development of a Crew Exploration Vehicle (CEV) to replace the shuttle. With this effort comes the opportunity to infuse more advanced coded modulation techniques, including low-density parity-check (LDPC) codes that offer greater coding gains than the current capability. However, in order to take full advantage of these codes, the ground segment receiver synchronization loops must be able to operate at a lower signal-to-noise ratio (SNR) than supported by equipment currently in use.
Heck, Nicholas C; Mirabito, Lucas A; LeMaire, Kelly; Livingston, Nicholas A; Flentje, Annesa
2017-01-01
The current study examined the frequency with which randomized controlled trials (RCTs) of behavioral and psychological interventions for anxiety and depression include data pertaining to participant sexual orientation and nonbinary gender identities. Using systematic review methodology, the databases PubMed and PsycINFO were searched to identify RCTs published in 2004, 2009, and 2014. Random selections of 400 articles per database per year (2,400 articles in total) were considered for inclusion in the review. Articles meeting inclusion criteria were read and coded by the research team to identify whether the trial reported data pertaining to participant sexual orientation and nonbinary gender identities. Additional trial characteristics were also identified and indexed in our database (e.g., sample size, funding source). Of the 232 articles meeting inclusion criteria, only 1 reported participants' sexual orientation, and zero articles included nonbinary gender identities. A total of 52,769 participants were represented in the trials, 93 of which were conducted in the United States, and 43 acknowledged the National Institutes of Health as a source of funding. Despite known mental health disparities on the basis of sexual orientation and nonbinary gender identification, researchers evaluating interventions for anxiety and depression are not reporting on these important demographic characteristics. Reporting practices must change to ensure that our interventions generalize to lesbian, gay, bisexual, and transgender persons. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
A rate-compatible family of protograph-based LDPC codes built by expurgation and lengthening
NASA Technical Reports Server (NTRS)
Dolinar, Sam
2005-01-01
We construct a protograph-based rate-compatible family of low-density parity-check codes that cover a very wide range of rates from 1/2 to 16/17, perform within about 0.5 dB of their capacity limits for all rates, and can be decoded conveniently and efficiently with a common hardware implementation.
The Reed-Solomon encoders: Conventional versus Berlekamp's architecture
NASA Technical Reports Server (NTRS)
Perlman, M.; Lee, J. J.
1982-01-01
Concatenated coding was adopted for interplanetary space missions. Concatenated coding was employed with a convolutional inner code and a Reed-Solomon (RS) outer code for spacecraft telemetry. Conventional RS encoders are compared with those that incorporate two architectural features which approximately halve the number of multiplications of a set of fixed arguments by any RS codeword symbol. The fixed arguments and the RS symbols are taken from a nonbinary finite field. Each set of multiplications is bit-serially performed and completed during one (bit-serial) symbol shift. All firmware employed by conventional RS encoders is eliminated.
Landsat Data Continuity Mission (LDCM) - Optimizing X-Band Usage
NASA Technical Reports Server (NTRS)
Garon, H. M.; Gal-Edd, J. S.; Dearth, K. W.; Sank, V. I.
2010-01-01
The NASA version of the low-density parity check (LDPC) 7/8-rate code, shortened to the dimensions of (8160, 7136), has been implemented as the forward error correction (FEC) scheme for the Landsat Data Continuity Mission (LDCM). This is the first flight application of this code. In order to place a 440 Msps link within the 375 MHz wide X band, we found it necessary to heavily bandpass filter the satellite transmitter output. Despite the significant amplitude and phase distortions that accompanied the spectral truncation, the mission-required BER is maintained at < 10^-12 with less than 2 dB of implementation loss. We utilized a band-pass filter designed ostensibly to replicate the link distortions to demonstrate link design viability. The same filter was then used to optimize the adaptive equalizer in the receiver employed at the terminus of the downlink. The excellent results we obtained could be directly attributed to the implementation of the LDPC code and the amplitude and phase compensation provided in the receiver. Similar results were obtained with receivers from several vendors.
NASA Astrophysics Data System (ADS)
Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi
2017-12-01
We analyze the achievable information rates (AIRs) for coded modulation schemes with QAM constellations with both bit-wise and symbol-wise decoders, corresponding to the case where a binary code is used in combination with a higher-order modulation using the bit-interleaved coded modulation (BICM) paradigm and to the case where a nonbinary code over a field matched to the constellation size is used, respectively. In particular, we consider hard decision decoding, which is the preferable option for fiber-optic communication systems where decoding complexity is a concern. Recently, Liga et al. analyzed the AIRs for bit-wise and symbol-wise decoders considering what the authors called a hard decision decoder which, however, exploits soft information of the transition probabilities of the discrete-input discrete-output channel resulting from the hard detection. As such, the complexity of the decoder is essentially the same as the complexity of a soft decision decoder. In this paper, we analyze instead the AIRs for the standard hard decision decoder, commonly used in practice, where the decoding is based on the Hamming distance metric. We show that if standard hard decision decoding is used, bit-wise decoders yield significantly higher AIRs than symbol-wise decoders. As a result, contrary to the conclusion by Liga et al., binary decoders together with the BICM paradigm are preferable for spectrally-efficient fiber-optic systems. We also design binary and nonbinary staircase codes and show that, in agreement with the AIRs, binary codes yield better performance.
Quantum Kronecker sum-product low-density parity-check codes with finite rate
NASA Astrophysics Data System (ADS)
Kovalev, Alexey A.; Pryadko, Leonid P.
2013-07-01
We introduce an ansatz for quantum codes which gives the hypergraph-product (generalized toric) codes by Tillich and Zémor and generalized bicycle codes by MacKay as limiting cases. The construction allows for both the lower and the upper bounds on the minimum distance; they scale as a square root of the block length. Many thus defined codes have a finite rate and limited-weight stabilizer generators, an analog of classical low-density parity-check (LDPC) codes. Compared to the hypergraph-product codes, hyperbicycle codes generally have a wider range of parameters; in particular, they can have a higher rate while preserving the estimated error threshold.
Accumulate-Repeat-Accumulate-Accumulate Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Samuel; Thorpe, Jeremy
2007-01-01
Accumulate-repeat-accumulate-accumulate (ARAA) codes have been proposed, inspired by the recently proposed accumulate-repeat-accumulate (ARA) codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. ARAA codes can be regarded as serial turbo-like codes or as a subclass of low-density parity-check (LDPC) codes, and, like ARA codes, they have projected graph or protograph representations; these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The objective in proposing ARAA codes as a subclass of ARA codes was to enhance the error-floor performance of ARA codes while maintaining simple encoding structures and a low maximum variable node degree.
A Simulation Testbed for Adaptive Modulation and Coding in Airborne Telemetry (Brief)
2014-10-01
The briefing describes a MATLAB GUI-based simulation testbed for adaptive modulation and coding in airborne telemetry, with an OFDM physical layer (802.11a-like) supporting BPSK, QPSK, 16-QAM, and 64-QAM modulations, configurable cyclic prefix lengths and numbers of subcarriers, SOQPSK waveforms, and LDPC coding at rates 1/2, 2/3, 3/4, and 4/5. Approved for public release; distribution unlimited.
Nonlinear, nonbinary cyclic group codes
NASA Technical Reports Server (NTRS)
Solomon, G.
1992-01-01
New cyclic group codes of length 2^m - 1 over (m - j)-bit symbols are introduced. These codes can be systematically encoded and decoded algebraically. The code rates are very close to Reed-Solomon (RS) codes and are much better than Bose-Chaudhuri-Hocquenghem (BCH) codes (a former alternative). The binary (m - j)-tuples are identified with a subgroup of the binary m-tuples which represents the field GF(2^m). Encoding is systematic and involves a two-stage procedure consisting of the usual linear feedback register (using the division or check polynomial) and a small table lookup. For low rates, a second shift-register encoding operation may be invoked. Decoding uses the RS error-correcting procedures for the m-tuple codes for m = 4, 5, and 6.
NASA Technical Reports Server (NTRS)
Lin, Shu; Rhee, Dojun
1996-01-01
This paper is concerned with the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel. In the construction of multilevel concatenated modulation codes, block modulation codes are used as the inner codes. Various types of codes (block or convolutional, binary or nonbinary) are being considered as the outer codes. In particular, we focus on the special case for which Reed-Solomon (RS) codes are used as the outer codes. For this special case, a systematic algebraic technique for constructing q-level concatenated block modulation codes is proposed. Codes have been constructed for certain specific values of q and compared with the single-level concatenated block modulation codes using the same inner codes. A multilevel closest coset decoding scheme for these codes is proposed.
500 Gb/s free-space optical transmission over strong atmospheric turbulence channels.
Qu, Zhen; Djordjevic, Ivan B
2016-07-15
We experimentally demonstrate a high-spectral-efficiency, large-capacity, featured free-space-optical (FSO) transmission system by using low-density, parity-check (LDPC) coded quadrature phase shift keying (QPSK) combined with orbital angular momentum (OAM) multiplexing. The strong atmospheric turbulence channel is emulated by two spatial light modulators on which four randomly generated azimuthal phase patterns yielding the Andrews spectrum are recorded. The validity of such an approach is verified by reproducing the intensity distribution and irradiance correlation function (ICF) from the full-scale simulator. Excellent agreement of experimental, numerical, and analytical results is found. To reduce the phase distortion induced by the turbulence emulator, the inexpensive wavefront sensorless adaptive optics (AO) is used. To deal with remaining channel impairments, a large-girth LDPC code is used. To further improve the aggregate data rate, the OAM multiplexing is combined with WDM, and 500 Gb/s optical transmission over the strong atmospheric turbulence channels is demonstrated.
NASA Astrophysics Data System (ADS)
Chen, Jung-Chieh
This paper presents a low-complexity algorithmic framework for finding a broadcasting schedule in a low-altitude satellite system, i.e., the satellite broadcast scheduling (SBS) problem, based on the recent modeling and computational methodology of factor graphs. Inspired by the huge success of low density parity check (LDPC) codes in the field of error control coding, in this paper we transform the SBS problem into an LDPC-like problem through a factor graph instead of using the conventional neural network approaches to solve the SBS problem. Based on the factor graph framework, the soft information, describing the probability that each satellite will broadcast information to a terminal at a specific time slot, is exchanged among the local processing units in the proposed framework via the sum-product algorithm to iteratively optimize the satellite broadcasting schedule. Numerical results show that the proposed approach not only can obtain the optimal solution but also enjoys a low complexity suitable for integrated-circuit implementation.
Research on Formation of Microsatellite Communication with Genetic Algorithm
Wu, Guoqiang; Bai, Yuguang; Sun, Zhaowei
2013-01-01
For the formation of three microsatellites which fly in the same orbit and perform three-dimensional solid mapping of terrain, this paper proposes an optimizing design method for the space circular formation order based on an improved genetic algorithm and provides an intersatellite direct spread spectrum communication system. The calculating equation of LEO formation-flying satellite intersatellite links is guided by the special requirements of formation-flying microsatellite intersatellite links, and the transmitter power is also confirmed throughout the simulation. The method of space circular formation order optimizing design based on the improved genetic algorithm is given, and it can keep the formation order steady for a long time under various perturbing impulses. The intersatellite direct spread spectrum communication system is also provided. It can be found that, when the distance is 1 km and the data rate is 1 Mbps, the input waveform matches well with the output waveform. LDPC coding can improve the communication performance: the error-correcting capability of the (512, 256) LDPC code is distinctly better than that of the (2, 1, 7) convolutional code. The designed system can satisfy the communication requirements of microsatellites. So, the presented method provides a significant theory foundation for formation flying and intersatellite communication. PMID:24078796
LDPC decoder with a limited-precision FPGA-based floating-point multiplication coprocessor
NASA Astrophysics Data System (ADS)
Moberly, Raymond; O'Sullivan, Michael; Waheed, Khurram
2007-09-01
Implementing the sum-product algorithm in an FPGA with an embedded processor invites us to consider a tradeoff between computational precision and computational speed. The algorithm, known outside of the signal processing community as Pearl's belief propagation, is used for iterative soft-decision decoding of LDPC codes. We determined the feasibility of a coprocessor that will perform product computations. Our FPGA-based coprocessor design performs its computations with significantly less precision than the standard (e.g., integer, floating-point) operations of general purpose processors. Using synthesis, targeting a 3,168 LUT Xilinx FPGA, we show that key components of a decoder are feasible and that the full single-precision decoder could be constructed using a larger part. Soft-decision decoding by the iterative belief propagation algorithm is impacted both positively and negatively by a reduction in the precision of the computation. Reducing precision reduces the coding gain, but the limited-precision computation can operate faster. A proposed solution offers custom logic to perform computations with less precision, yet uses the floating-point format to interface with the software. Simulation results show the achievable coding gain. Synthesis results help to project the full capacity and performance of an FPGA-based coprocessor.
Neural network decoder for quantum error correcting codes
NASA Astrophysics Data System (ADS)
Krastanov, Stefan; Jiang, Liang
Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.
"I would have preferred more options": accounting for non-binary youth in health research.
Frohard-Dourlent, Hélène; Dobson, Sarah; Clark, Beth A; Doull, Marion; Saewyc, Elizabeth M
2017-01-01
As a research team focused on vulnerable youth, we increasingly need to find ways to acknowledge non-binary genders in health research. Youth have become more vocal about expanding notions of gender beyond traditional categories of boy/man and girl/woman. Integrating non-binary identities into established research processes is a complex undertaking in a culture that often assumes gender is a binary variable. In this article, we present the challenges at every stage of the research process and questions we have asked ourselves to consider non-binary genders in our work. As researchers, how do we interrogate the assumptions that have made non-binary lives invisible? What challenges arise when attempting to transform research practices to incorporate non-binary genders? Why is it crucial that researchers consider these questions at each step of the research process? We draw on our own research experiences to highlight points of tensions and possibilities for change. Improving access to inclusive health-care for non-binary people, and non-binary youth in particular, is part of creating a more equitable healthcare system. We argue that increased and improved access to inclusive health-care can be supported by research that acknowledges and includes people of all genders. © 2016 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
He, Jing; Dai, Min; Chen, Qinghui; Deng, Rui; Xiang, Changqing; Chen, Lin
2017-07-01
In this paper, an effective bit-loading algorithm combined with an adaptive LDPC code rate (ALCR) algorithm is proposed and investigated in a software-reconfigurable multiband UWB-over-fiber system. To compensate the power fading and chromatic dispersion affecting the high-frequency part of the multiband OFDM UWB signal transmitted over standard single-mode fiber (SSMF), a Mach-Zehnder modulator (MZM) with a negative chirp parameter is utilized. In addition, a negative power penalty of -1 dB for the 128-QAM multiband OFDM UWB signal is measured at the hard-decision forward error correction (HD-FEC) limit of 3.8 × 10^-3 after 50 km SSMF transmission. The experimental results show that, compared to the fixed coding scheme with a code rate of 75%, the signal-to-noise ratio (SNR) is improved by 2.79 dB for the 128-QAM multiband OFDM UWB system after 100 km SSMF transmission using the ALCR algorithm. Moreover, by employing bit-loading combined with the ALCR algorithm, the bit error rate (BER) performance of the system can be further improved. The simulation results show that, at the HD-FEC limit, the Q factor is improved by 3.93 dB at an SNR of 19.5 dB over 100 km SSMF transmission, compared to the fixed modulation with an uncoded scheme at the same spectral efficiency (SE).
Finite-connectivity spin-glass phase diagrams and low-density parity check codes.
Migliorini, Gabriele; Saad, David
2006-02-01
We obtain phase diagrams of regular and irregular finite-connectivity spin glasses. Contact is first established between properties of the phase diagram and the performance of low-density parity check (LDPC) codes within the replica symmetric (RS) ansatz. We then study the location of the dynamical and critical transition points of these systems within the one step replica symmetry breaking theory (RSB), extending similar calculations that have been performed in the past for the Bethe spin-glass problem. We observe that the location of the dynamical transition line does change within the RSB theory, in comparison with the results obtained in the RS case. For LDPC decoding of messages transmitted over the binary erasure channel we find, at zero temperature and rate , an RS critical transition point at while the critical RSB transition point is located at , to be compared with the corresponding Shannon bound . For the binary symmetric channel we show that the low temperature reentrant behavior of the dynamical transition line, observed within the RS ansatz, changes its location when the RSB ansatz is employed; the dynamical transition point occurs at higher values of the channel noise. Possible practical implications to improve the performance of the state-of-the-art error correcting codes are discussed.
Advanced error-prediction LDPC with temperature compensation for highly reliable SSDs
NASA Astrophysics Data System (ADS)
Tokutomi, Tsukasa; Tanakamaru, Shuhei; Iwasaki, Tomoko Ogura; Takeuchi, Ken
2015-09-01
To improve the reliability of NAND Flash memory based solid-state drives (SSDs), error-prediction LDPC (EP-LDPC) has been proposed for multi-level-cell (MLC) NAND Flash memory (Tanakamaru et al., 2012, 2013), which is effective for long retention times. However, EP-LDPC is not as effective for triple-level cell (TLC) NAND Flash memory, because TLC NAND Flash has higher error rates and is more sensitive to program-disturb error. Therefore, advanced error-prediction LDPC (AEP-LDPC) has been proposed for TLC NAND Flash memory (Tokutomi et al., 2014). AEP-LDPC can correct errors more accurately by precisely describing the error phenomena. In this paper, the effects of AEP-LDPC are investigated in a 2×nm TLC NAND Flash memory with temperature characterization. Compared with LDPC-with-BER-only, the SSD's data-retention time is increased by 3.4× and 9.5× at room-temperature (RT) and 85 °C, respectively. Similarly, the acceptable BER is increased by 1.8× and 2.3×, respectively. Moreover, AEP-LDPC can correct errors with pre-determined tables made at higher temperatures to shorten the measurement time before shipping. Furthermore, it is found that one table can cover behavior over a range of temperatures in AEP-LDPC. As a result, the total table size can be reduced to 777 kBytes, which makes this approach more practical.
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Lin, Shu
2000-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
An Interactive Concatenated Turbo Coding System
NASA Technical Reports Server (NTRS)
Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc
1999-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
Sparsening Filter Design for Iterative Soft-Input Soft-Output Detectors
2012-02-29
Since the BP detector itself is unaltered from [1], the filter/detector structure can accommodate a system employing channel codes such as LDPC coding considered in [1], and it can readily be extended to the MIMO case with, for example, space-time coding as in [2,8]. The filter design employs the simplex method of [15], available in MATLAB via the fminsearch function, and the report also examines the resulting cost surfaces.
Joint Carrier-Phase Synchronization and LDPC Decoding
NASA Technical Reports Server (NTRS)
Simon, Marvin; Valles, Esteban
2009-01-01
A method has been proposed to increase the degree of synchronization of a radio receiver with the phase of a suppressed carrier signal modulated with a binary-phase-shift-keying (BPSK) or quaternary-phase-shift-keying (QPSK) signal representing a low-density parity-check (LDPC) code. This method is an extended version of the method described in Using LDPC Code Constraints to Aid Recovery of Symbol Timing (NPO-43112), NASA Tech Briefs, Vol. 32, No. 10 (October 2008), page 54. Both methods and the receiver architectures in which they would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless in that they do not require transmission and reception of pilot signals. The proposed method calls for the use of what is known in the art as soft decision feedback to remove the modulation from a replica of the incoming signal prior to feeding this replica to a phase-locked loop (PLL) or other carrier-tracking stage in the receiver. Soft decision feedback refers to suitably processed versions of intermediate results of iterative computations involved in the LDPC decoding process. Unlike a related prior method in which hard decision feedback (the final sequence of decoded symbols) is used to remove the modulation, the proposed method does not require estimation of the decoder error probability. In a basic digital implementation of the proposed method, the incoming signal, having carrier phase θ_c plus noise, would first be converted to in-phase (I) and quadrature (Q) baseband signals by mixing it with I and Q signals at the carrier frequency [ω_c/(2π)] generated by a local oscillator. The resulting demodulated signals would be processed through one-symbol-period integrate-and-dump filters, the outputs of which would be sampled and held, then multiplied by a soft-decision version of the baseband modulated signal. The resulting I and Q products consist of terms proportional to the cosine and sine of the carrier phase θ_c as well as correlated noise components. These products would be fed as inputs to a digital PLL that would include a number-controlled oscillator (NCO), which provides an estimate of the carrier phase, θ_c.
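The soft-decision-feedback idea can be illustrated with a toy first-order loop in Python: the received sample is de-rotated by the current phase estimate, a soft symbol (here a tanh of a crude LLR, standing in for the LDPC decoder's soft output) strips the BPSK modulation, and the resulting error term drives the phase update. The signal model, noise level, and loop gain are assumptions, not the referenced design.

# Toy pilotless carrier-phase tracker with soft-decision feedback (illustrative only).
import numpy as np

rng = np.random.default_rng(2)
theta_c, kp = 0.3, 0.05                        # true carrier phase, loop gain (assumed)
theta_hat = 0.0
for _ in range(200):
    bit = rng.choice([-1.0, 1.0])              # transmitted BPSK symbol
    r = bit * np.exp(1j * theta_c) + 0.1 * (rng.standard_normal() + 1j * rng.standard_normal())
    y = r * np.exp(-1j * theta_hat)            # de-rotate with current phase estimate
    llr = 2.0 * y.real / 0.01                  # crude LLR from the I branch (sigma^2 = 0.01 per dim)
    soft_sym = np.tanh(llr / 2.0)              # soft-decision feedback (decoder surrogate)
    err = soft_sym * y.imag                    # Costas-like phase error with soft symbol
    theta_hat += kp * err                      # first-order loop update
print(f"estimated phase {theta_hat:.3f} rad (true {theta_c:.3f})")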
Low-Density Parity-Check Code Design Techniques to Simplify Encoding
NASA Astrophysics Data System (ADS)
Perez, J. M.; Andrews, K.
2007-11-01
This work describes a method for encoding low-density parity-check (LDPC) codes based on the accumulate-repeat-4-jagged-accumulate (AR4JA) scheme, using the low-density parity-check matrix H instead of the dense generator matrix G. The use of the H matrix to encode allows a significant reduction in memory consumption and provides the encoder design with great flexibility. Also described are new hardware-efficient codes, based on the same kind of protographs, which require less memory storage and area, allowing at the same time a reduction in the encoding delay.
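One standard way to encode from H rather than G, in the spirit of the method summarized above, is to arrange H as [A | T] with T lower triangular and obtain the parity bits by forward substitution over GF(2); the small matrices in this Python sketch are toy examples, not AR4JA matrices.

# Toy sketch of encoding directly from a parity-check matrix H = [A | T].
import numpy as np

A = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0]])
T = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 1]])            # lower triangular with unit diagonal
H = np.hstack([A, T])

def encode(msg):
    s = A @ msg % 2                  # syndrome contribution of the message bits
    p = np.zeros(T.shape[0], dtype=int)
    for i in range(T.shape[0]):      # forward substitution over GF(2)
        p[i] = (s[i] + T[i, :i] @ p[:i]) % 2
    return np.concatenate([msg, p])

cw = encode(np.array([1, 0, 1]))
assert not (H @ cw % 2).any()        # valid codeword: H·c = 0
print(cw)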
System on a Chip Real-Time Emulation (SOCRE)
2006-09-01
The emulation platform included LDPC decoders along with audio/video and radio applications, and one of the key tasks was porting the BEE flow to emulation platforms and SoC technologies. Once a design has been described within Simulink, the designer runs the BEE design flow within MATLAB using the bee_xps interface.
NASA Astrophysics Data System (ADS)
Pryadko, Leonid P.; Dumer, Ilya; Kovalev, Alexey A.
2015-03-01
We construct a lower (existence) bound for the threshold of scalable quantum computation which is applicable to all stabilizer codes, including degenerate quantum codes with sublinear distance scaling. The threshold is based on enumerating irreducible operators in the normalizer of the code, i.e., those that cannot be decomposed into a product of two such operators with non-overlapping support. For quantum LDPC codes with logarithmic or power-law distances, we get threshold values which are parametrically better than the existing analytical bound based on percolation. The new bound also gives a finite threshold when applied to other families of degenerate quantum codes, e.g., the concatenated codes. This research was supported in part by the NSF Grant PHY-1416578 and by the ARO Grant W911NF-11-1-0027.
Coded Modulation in C and MATLAB
NASA Technical Reports Server (NTRS)
Hamkins, Jon; Andrews, Kenneth S.
2011-01-01
This software, written separately in C and MATLAB as stand-alone packages with equivalent functionality, implements encoders and decoders for a set of nine error-correcting codes and modulators and demodulators for five modulation types. The software can be used as a single program to simulate the performance of such coded modulation. The error-correcting codes implemented are the nine accumulate repeat-4 jagged accumulate (AR4JA) low-density parity-check (LDPC) codes, which have been approved for international standardization by the Consultative Committee for Space Data Systems, and which are scheduled to fly on a series of NASA missions in the Constellation Program. The software implements the encoder and decoder functions, and contains compressed versions of generator and parity-check matrices used in these operations.
Low-complexity video encoding method for wireless image transmission in capsule endoscope.
Takizawa, Kenichi; Hamaguchi, Kiyoshi
2010-01-01
This paper presents a low-complexity video encoding method applicable to wireless image transmission in capsule endoscopes. The encoding method is based on Wyner-Ziv theory, in which information that would conventionally be exploited at the transmitter is instead treated as side information at the receiver. Therefore, complex processes in video encoding, such as motion vector estimation, are moved to the receiver side, which has a larger-capacity battery. As a result, the encoding process reduces to channel coding the original data and decimating the coded output. We provide a performance evaluation for a low-density parity-check (LDPC) coding method in the AWGN channel.
Healthcare Experiences Among Young Adults Who Identify as Genderqueer or Nonbinary.
Lykens, James E; LeBlanc, Allen J; Bockting, Walter O
2018-04-01
Research on healthcare among gender-diverse populations has largely focused on people who describe their gender in binary terms, either as trans men or trans women. This qualitative study examined the healthcare experiences of young adults who identify as genderqueer or nonbinary (GQ/NB). Participants (N = 10) were interviewed about experiences seeking and accessing healthcare. All were young adults (ages 23-33) in the San Francisco Bay area who had accessed healthcare at least once in the prior 6 months. A semistructured interview guide elicited conversations about gender identity and experiences of healthcare. Interview transcripts were analyzed using emergent coding analysis to identify themes. Participants faced unique challenges even at clinics specializing in gender-affirming healthcare. They felt misunderstood by providers who approached them from a binary transgender perspective and consequently often did not receive care sensitive to nonbinary identities. In response to this perceived bias, participants sometimes "borrowed" a binary transgender label to receive care, modified the healthcare they were prescribed, or went without healthcare. The GQ/NB young adults in our study regularly felt disrespected and frustrated as they sought and accessed healthcare. Participants felt that the binary transgender narrative pressured them to conform to binary medical narratives throughout healthcare interactions. GQ/NB young adults have unique healthcare needs but often do not feel understood by their providers. There is a need for existing healthcare systems to serve GQ/NB young adults more effectively.
NASA Technical Reports Server (NTRS)
Scott, Peter J.
1989-01-01
ZED, an editing program for the DEC VAX computer, is a simple, powerful line editor for text, program source code, and nonbinary data. It excels in the processing of text by use of procedure files. It also features versatile search qualifiers, global changes, conditionals, online help, hexadecimal mode, space compression, looping, logical combinations of search strings, journaling, visible control characters, and automatic detabbing. Users of the Cambridge implementation devised such ZED procedures as chess games, calculators, and programs for evaluating pi. Written entirely in C.
Nonbinary Tree-Based Phylogenetic Networks.
Jetten, Laura; van Iersel, Leo
2018-01-01
Rooted phylogenetic networks are used to describe evolutionary histories that contain non-treelike evolutionary events such as hybridization and horizontal gene transfer. In some cases, such histories can be described by a phylogenetic base-tree with additional linking arcs, which can, for example, represent gene transfer events. Such phylogenetic networks are called tree-based. Here, we consider two possible generalizations of this concept to nonbinary networks, which we call tree-based and strictly-tree-based nonbinary phylogenetic networks. We give simple graph-theoretic characterizations of tree-based and strictly-tree-based nonbinary phylogenetic networks. Moreover, we show for each of these two classes that it can be decided in polynomial time whether a given network is contained in the class. Our approach also provides a new view on tree-based binary phylogenetic networks. Finally, we discuss two examples of nonbinary phylogenetic networks in biology and show how our results can be applied to them.
High performance and cost effective CO-OFDM system aided by polar code.
Liu, Ling; Xiao, Shilin; Fang, Jiafei; Zhang, Lu; Zhang, Yunhao; Bi, Meihua; Hu, Weisheng
2017-02-06
A novel polar coded coherent optical orthogonal frequency division multiplexing (CO-OFDM) system is proposed and demonstrated through experiment for the first time. The principle of a polar coded CO-OFDM signal is illustrated theoretically, and a suitable polar decoding method is discussed. Results show that the polar coded CO-OFDM signal achieves a net coding gain (NCG) of more than 10 dB at a bit error rate (BER) of 10^-3 over 25-Gb/s 480-km transmission in comparison with conventional CO-OFDM. Also, compared to the 25-Gb/s low-density parity-check (LDPC) coded CO-OFDM 160-km system, the polar code provides an NCG of 0.88 dB at a BER of 10^-3. Moreover, the polar code can substantially relax the laser linewidth requirement, yielding a more cost-effective CO-OFDM system.
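For readers unfamiliar with polar codes, the core encoding operation referenced above is the recursive polar transform; the Python sketch below applies it to a short block with an arbitrarily chosen frozen-bit set (a real design would rank bit-channel reliabilities, and the bit-reversal permutation is omitted).

# Minimal polar encoding sketch: x = u · F^{⊗n} with F = [[1,0],[1,1]].
import numpy as np

def polar_transform(u):
    """Apply the n-fold Kronecker power of [[1,0],[1,1]] to bit vector u (length 2^n)."""
    x = np.array(u, dtype=int)
    step = 1
    while step < len(x):
        for i in range(0, len(x), 2 * step):
            x[i:i + step] ^= x[i + step:i + 2 * step]   # butterfly stage
        step *= 2
    return x

N, frozen = 8, {0, 1, 2, 4}            # illustrative block length and frozen set
info_bits = [1, 0, 1, 1]
u = np.zeros(N, dtype=int)
u[[i for i in range(N) if i not in frozen]] = info_bits
print(polar_transform(u))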
Exact Algorithms for Duplication-Transfer-Loss Reconciliation with Non-Binary Gene Trees.
Kordi, Misagh; Bansal, Mukul S
2017-06-01
Duplication-Transfer-Loss (DTL) reconciliation is a powerful method for studying gene family evolution in the presence of horizontal gene transfer. DTL reconciliation seeks to reconcile gene trees with species trees by postulating speciation, duplication, transfer, and loss events. Efficient algorithms exist for finding optimal DTL reconciliations when the gene tree is binary. In practice, however, gene trees are often non-binary due to uncertainty in the gene tree topologies, and DTL reconciliation with non-binary gene trees is known to be NP-hard. In this paper, we present the first exact algorithms for DTL reconciliation with non-binary gene trees. Specifically, we (i) show that the DTL reconciliation problem for non-binary gene trees is fixed-parameter tractable in the maximum degree of the gene tree, (ii) present an exponential-time, but in-practice efficient, algorithm to track and enumerate all optimal binary resolutions of a non-binary input gene tree, and (iii) apply our algorithms to a large empirical data set of over 4700 gene trees from 100 species to study the impact of gene tree uncertainty on DTL-reconciliation and to demonstrate the applicability and utility of our algorithms. The new techniques and algorithms introduced in this paper will help biologists avoid incorrect evolutionary inferences caused by gene tree uncertainty.
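One ingredient of the enumeration step described above can be sketched independently of the DTL cost model: listing every binary resolution of a single multifurcating node. The Python sketch below does this by grafting children one at a time onto every edge; the child names are illustrative, and the reconciliation-cost scoring from the paper is not reproduced.

# Enumerate all binary resolutions of one polytomy; trees are nested 2-tuples.
def binary_resolutions(children):
    """Yield all binary trees over the given child subtrees."""
    if len(children) == 1:
        yield children[0]
        return
    first, rest = children[0], children[1:]
    for tree in binary_resolutions(rest):
        # graft `first` next to every subtree (i.e., onto every edge) of `tree`
        yield from _graft(first, tree)

def _graft(leaf, tree):
    yield (leaf, tree)                     # attach above the root
    if isinstance(tree, tuple) and len(tree) == 2:
        left, right = tree
        for t in _graft(leaf, left):
            yield (t, right)
        for t in _graft(leaf, right):
            yield (left, t)

polytomy = ["a", "b", "c", "d"]            # a node with four children
resolutions = list(binary_resolutions(polytomy))
print(len(resolutions), "resolutions; (2k-3)!! = 15 for k = 4")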
Performance of Low-Density Parity-Check Coded Modulation
NASA Technical Reports Server (NTRS)
Hamkins, Jon
2010-01-01
This paper reports the simulated performance of each of the nine accumulate-repeat-4-jagged-accumulate (AR4JA) low-density parity-check (LDPC) codes [3] when used in conjunction with binary phase-shift-keying (BPSK), quadrature PSK (QPSK), 8-PSK, 16-ary amplitude PSK (16-APSK), and 32-APSK. We also report the performance under various mappings of bits to modulation symbols, 16-APSK and 32-APSK ring scalings, log-likelihood ratio (LLR) approximations, and decoder variations. One of the simple and well-performing LLR approximations can be expressed in a general equation that applies to all of the modulation types.
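A simple LLR approximation of the kind alluded to above is the max-log rule, which applies to any constellation and labeling; the Python sketch below evaluates it for an illustrative 8-PSK with natural labeling (the paper's specific mappings and ring scalings are not reproduced).

# Max-log LLRs for an arbitrary constellation; 8-PSK and its labeling are assumptions.
import numpy as np

M, m = 8, 3
points = np.exp(2j * np.pi * np.arange(M) / M)                     # 8-PSK
labels = np.array([list(map(int, f"{k:03b}")) for k in range(M)])  # natural labeling (assumed)

def maxlog_llrs(r, sigma2):
    d2 = np.abs(r - points) ** 2
    llrs = np.empty(m)
    for b in range(m):
        llrs[b] = (d2[labels[:, b] == 1].min() - d2[labels[:, b] == 0].min()) / sigma2
    return llrs                                                    # >0 favors bit 0

print(maxlog_llrs(0.9 + 0.3j, sigma2=0.1))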
Future capabilities for the Deep Space Network
NASA Technical Reports Server (NTRS)
Berner, J. B.; Bryant, S. H.; Andrews, K. S.
2004-01-01
This paper will look at three new capabilities that are in different stages of development. First, turbo decoding, which provides improved telemetry performance for data rates up to about 1 Mbps, will be discussed. Next, pseudo-noise ranging will be presented. Pseudo-noise ranging has several advantages over the current sequential ranging, namely easier operations, improved performance, and the capability to be used in a regenerative implementation on a spacecraft. Finally, Low Density Parity Check decoding will be discussed. LDPC codes can provide performance that matches or slightly exceeds that of turbo codes, but are designed for use in the 10 Mbps range.
16QAM transmission with 5.2 bits/s/Hz spectral efficiency over transoceanic distance.
Zhang, H; Cai, J-X; Batshon, H G; Davidson, C R; Sun, Y; Mazurczyk, M; Foursa, D G; Pilipetskii, A; Mohs, G; Bergano, Neal S
2012-05-21
We transmit 160 x 100 G PDM RZ 16 QAM channels with 5.2 bits/s/Hz spectral efficiency over 6,860 km. There are more than 3 billion 16 QAM symbols, i.e., 12 billion bits, processed in total. Using coded modulation and iterative decoding between a MAP decoder and an LDPC-based FEC, all channels are decoded with no remaining errors.
High-speed architecture for the decoding of trellis-coded modulation
NASA Technical Reports Server (NTRS)
Osborne, William P.
1992-01-01
Since 1971, when the Viterbi Algorithm was introduced as the optimal method of decoding convolutional codes, improvements in circuit technology, especially VLSI, have steadily increased its speed and practicality. Trellis-Coded Modulation (TCM) combines convolutional coding with higher level modulation (non-binary source alphabet) to provide forward error correction and spectral efficiency. For binary codes, the current state-of-the-art is a 64-state Viterbi decoder on a single CMOS chip, operating at a data rate of 25 Mbps. Recently, there has been an interest in increasing the speed of the Viterbi Algorithm by improving the decoder architecture, or by reducing the algorithm itself. Designs employing new architectural techniques are now in existence, however these techniques are currently applied to simpler binary codes, not to TCM. The purpose of this report is to discuss TCM architectural considerations in general, and to present the design, at the logic gate level, of a specific TCM decoder which applies these considerations to achieve high-speed decoding.
Batshon, Hussam G; Djordjevic, Ivan; Xu, Lei; Wang, Ting
2010-06-21
In this paper, we present a modified coded hybrid subcarrier/amplitude/phase/polarization (H-SAPP) modulation scheme as a technique capable of achieving beyond 400 Gb/s single-channel transmission over optical channels. The modified H-SAPP scheme profits from the available resources in addition to geometry to increase the bandwidth efficiency of the transmission system, and so increases the aggregate rate of the system. In this report we present the modified H-SAPP scheme and focus on an example that carries 11 bits/symbol and can achieve 440 Gb/s transmission using 50-GSymbol/s (GS/s) components.
Frame Synchronization Without Attached Sync Markers
NASA Technical Reports Server (NTRS)
Hamkins, Jon
2011-01-01
We describe a method to synchronize codeword frames without making use of attached synchronization markers (ASMs). Instead, the synchronizer identifies the code structure present in the received symbols by operating the decoder for a handful of iterations at each possible symbol offset and forming an appropriate metric. This method is computationally more complex and does not perform as well as frame synchronizers that utilize an ASM; nevertheless, the new synchronizer acquires frame synchronization in about two seconds when using a 600 kbps software decoder, and would take about 15 milliseconds on prototype hardware. It also eliminates the need for the ASMs, which is an attractive feature for short uplink codes whose coding gain would be diminished by the overhead of ASM bits. The lack of ASMs also would simplify clock distribution for the AR4JA low-density parity-check (LDPC) codes and adds a small amount to the coding gain as well (up to 0.2 dB).
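A much-simplified stand-in for the synchronizer metric (using parity-check satisfaction of hard decisions instead of a few decoder iterations, and a toy (7,4) code instead of an AR4JA code) can convey the idea: score every candidate symbol offset by how many parity checks are satisfied and pick the best. A Python sketch:

# Toy ASM-less frame synchronization by scoring parity-check satisfaction per offset.
import numpy as np

rng = np.random.default_rng(3)
P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])            # (7,4) toy code
H = np.hstack([P.T, np.eye(3, dtype=int)])

frames = [rng.integers(0, 2, 4) @ G % 2 for _ in range(50)]
stream = np.concatenate(frames)
true_offset = 3
stream = np.roll(stream, true_offset)               # unknown symbol offset to recover

def score(offset):
    """Fraction of parity checks satisfied when frames are cut at this offset."""
    x = np.roll(stream, -offset)
    words = x[: (len(x) // 7) * 7].reshape(-1, 7)
    return np.mean((words @ H.T % 2) == 0)

best = max(range(7), key=score)
print("recovered offset:", best, "(true:", true_offset, ")")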
On the Complexity of Duplication-Transfer-Loss Reconciliation with Non-Binary Gene Trees.
Kordi, Misagh; Bansal, Mukul S
2017-01-01
Duplication-Transfer-Loss (DTL) reconciliation has emerged as a powerful technique for studying gene family evolution in the presence of horizontal gene transfer. DTL reconciliation takes as input a gene family phylogeny and the corresponding species phylogeny, and reconciles the two by postulating speciation, gene duplication, horizontal gene transfer, and gene loss events. Efficient algorithms exist for finding optimal DTL reconciliations when the gene tree is binary. However, gene trees are frequently non-binary. With such non-binary gene trees, the reconciliation problem seeks to find a binary resolution of the gene tree that minimizes the reconciliation cost. Given the prevalence of non-binary gene trees, many efficient algorithms have been developed for this problem in the context of the simpler Duplication-Loss (DL) reconciliation model. Yet, no efficient algorithms exist for DTL reconciliation with non-binary gene trees and the complexity of the problem remains unknown. In this work, we resolve this open question by showing that the problem is, in fact, NP-hard. Our reduction applies to both the dated and undated formulations of DTL reconciliation. By resolving this long-standing open problem, this work will spur the development of both exact and heuristic algorithms for this important problem.
NASA Astrophysics Data System (ADS)
Kurceren, Ragip; Modestino, James W.
1998-12-01
The use of forward error-control (FEC) coding, possibly in conjunction with ARQ techniques, has emerged as a promising approach for video transport over ATM networks for cell-loss recovery and/or bit error correction, such as might be required for wireless links. Although FEC provides cell-loss recovery capabilities, it also introduces transmission overhead which can possibly cause additional cell losses. A methodology is described to maximize the number of video sources multiplexed at a given quality of service (QoS), measured in terms of decoded cell loss probability, using interlaced FEC codes. The transport channel is modelled as a block interference channel (BIC) and the multiplexer as a single-server, deterministic-service, finite-buffer queue supporting N users. Based upon an information-theoretic characterization of the BIC and large deviation bounds on the buffer overflow probability, the described methodology provides theoretically achievable upper limits on the number of sources multiplexed. Performance of specific coding techniques using interlaced nonbinary Reed-Solomon (RS) codes and binary rate-compatible punctured convolutional (RCPC) codes is illustrated.
NASA Astrophysics Data System (ADS)
Nakamura, Yusuke; Hoshizawa, Taku
2016-09-01
Two methods for increasing the data capacity of a holographic data storage system (HDSS) were developed. The first method is called “run-length-limited (RLL) high-density recording”. An RLL modulation has the same effect as enlarging the pixel pitch; namely, it optically reduces the hologram size. Accordingly, the method doubles the raw-data recording density. The second method is called “RLL turbo signal processing”. The RLL turbo code consists of RLL(1,∞) trellis modulation and an optimized convolutional code. The remarkable point of the developed turbo code is that it employs the RLL modulator and demodulator as parts of the error-correction process. The turbo code improves the capability of error correction more than a conventional LDPC code, even though interpixel interference is generated. These two methods will increase the data density 1.78-fold. Moreover, by simulation and experiment, a data density of 2.4 Tbit/in.² is confirmed.
NASA Astrophysics Data System (ADS)
de Schryver, C.; Weithoffer, S.; Wasenmüller, U.; Wehn, N.
2012-09-01
Channel coding is a standard technique in all wireless communication systems. In addition to the typically employed methods like convolutional coding, turbo coding or low density parity check (LDPC) coding, algebraic codes are used in many cases. For example, outer BCH coding is applied in the DVB-S2 standard for satellite TV broadcasting. A key operation for BCH and the related Reed-Solomon codes are multiplications in finite fields (Galois Fields), where extension fields of prime fields are used. A lot of architectures for multiplications in finite fields have been published over the last decades. This paper examines four different multiplier architectures in detail that offer the potential for very high throughputs. We investigate the implementation performance of these multipliers on FPGA technology in the context of channel coding. We study the efficiency of the multipliers with respect to area, frequency and throughput, as well as configurability and scalability. The implementation data of the fully verified circuits are provided for a Xilinx Virtex-4 device after place and route.
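For reference, the finite-field operation these architectures accelerate can be sketched in software as a shift-and-add (carry-less) multiplication with polynomial reduction; the GF(2^8) field and the primitive polynomial 0x11D below are common Reed-Solomon choices and are assumptions, not necessarily the fields or polynomials studied in the paper.

# A small software sketch of multiplication in GF(2^8), the operation the
# hardware multipliers in this paper accelerate. The primitive polynomial
# 0x11D (x^8 + x^4 + x^3 + x^2 + 1) is a common choice for Reed-Solomon
# codes over GF(256); the paper's fields/polynomials may differ.

def gf256_mul(a: int, b: int, poly: int = 0x11D) -> int:
    """Carry-less multiply a*b modulo the given degree-8 polynomial."""
    result = 0
    while b:
        if b & 1:                 # add (XOR) a for every set bit of b
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:             # reduce when the degree reaches 8
            a ^= poly
    return result

# Quick sanity checks: 1 is the multiplicative identity, and x * x = x^2.
assert gf256_mul(1, 0xAB) == 0xAB
assert gf256_mul(0x02, 0x02) == 0x04
assert gf256_mul(0x80, 0x02) == (0x100 ^ 0x11D)  # x^8 reduces to 0x1D

Hardware multipliers trade this bit-serial loop for parallel (e.g., Mastrovito or Karatsuba-style) structures, which is what the paper compares on FPGA.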
Potts glass reflection of the decoding threshold for qudit quantum error correcting codes
NASA Astrophysics Data System (ADS)
Jiang, Yi; Kovalev, Alexey A.; Pryadko, Leonid P.
We map the maximum likelihood decoding threshold for qudit quantum error correcting codes to the multicritical point in generalized Potts gauge glass models, extending the map constructed previously for qubit codes. An n-qudit quantum LDPC code, where a qudit can be involved in up to m stabilizer generators, corresponds to a ℤ_d Potts model with n interaction terms which can couple up to m spins each. We analyze general properties of the phase diagram of the constructed model, give several bounds on the location of the transitions, bounds on the energy density of extended defects (non-local analogs of domain walls), and discuss the correlation functions which can be used to distinguish different phases in the original and the dual models. This research was supported in part by the Grants: NSF PHY-1415600 (AAK), NSF PHY-1416578 (LPP), and ARO W911NF-14-1-0272 (LPP).
Layered Wyner-Ziv video coding.
Xu, Qian; Xiong, Zixiang
2006-12-01
Following recent theoretical works on successive Wyner-Ziv coding (WZC), we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered WZC for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic WZC when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that WZC gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding are more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks.
Quantum image coding with a reference-frame-independent scheme
NASA Astrophysics Data System (ADS)
Chapeau-Blondeau, François; Belin, Etienne
2016-07-01
For binary images, or bit planes of non-binary images, we investigate the possibility of a quantum coding decodable by a receiver in the absence of reference frames shared with the emitter. Direct image coding with one qubit per pixel and non-aligned frames leads to decoding errors equivalent to a quantum bit-flip noise increasing with the misalignment. We show the feasibility of frame-invariant coding by using for each pixel a qubit pair prepared in one of two controlled entangled states. With just one common axis shared between the emitter and receiver, exact decoding for each pixel can be obtained by means of two two-outcome projective measurements operating separately on each qubit of the pair. With strictly no alignment information between the emitter and receiver, exact decoding can be obtained by means of a two-outcome projective measurement operating jointly on the qubit pair. In addition, the frame-invariant coding is shown much more resistant to quantum bit-flip noise compared to the direct non-invariant coding. For a cost per pixel of two (entangled) qubits instead of one, complete frame-invariant image coding and enhanced noise resistance are thus obtained.
Trans*versing the DMZ: A Non-Binary Autoethnographic Exploration of Gender and Masculinity
ERIC Educational Resources Information Center
Stewart, Dafina-Lazarus
2017-01-01
Using an abductive, critical-poststructuralist autoethnographic approach, I consider the ways in which masculine of centre, non-binary/genderqueer trans* identities transverse the poles of socializing binary gender systems, structures, and norms which inform higher education. In this paper, I assert that non-binary genderqueer identities are…
van Iersel, Leo; Kelk, Steven; Lekić, Nela; Scornavacca, Celine
2014-05-05
Reticulate events play an important role in determining evolutionary relationships. The problem of computing the minimum number of such events to explain discordance between two phylogenetic trees is a hard computational problem. Even for binary trees, exact solvers struggle to solve instances with reticulation number larger than 40-50. Here we present CycleKiller and NonbinaryCycleKiller, the first methods to produce solutions verifiably close to optimality for instances with hundreds or even thousands of reticulations. Using simulations, we demonstrate that these algorithms run quickly for large and difficult instances, producing solutions that are very close to optimality. As a spin-off from our simulations we also present TerminusEst, which is the fastest exact method currently available that can handle nonbinary trees: this is used to measure the accuracy of the NonbinaryCycleKiller algorithm. All three methods are based on extensions of previous theoretical work (SIDMA 26(4):1635-1656, TCBB 10(1):18-25, SIDMA 28(1):49-66) and are publicly available. We also apply our methods to real data.
High-Performance CCSDS AOS Protocol Implementation in FPGA
NASA Technical Reports Server (NTRS)
Clare, Loren P.; Torgerson, Jordan L.; Pang, Jackson
2010-01-01
The Consultative Committee for Space Data Systems (CCSDS) Advanced Orbiting Systems (AOS) space data link protocol provides a framing layer between channel coding such as LDPC (low-density parity-check) and higher-layer link multiplexing protocols such as CCSDS Encapsulation Service, which is described in the following article. Recent advancement in RF modem technology has allowed multi-megabit transmission over space links. With this increase in data rate, the CCSDS AOS protocol implementation needs to be optimized to both reduce energy consumption and operate at a high rate.
SCaN Network Ground Station Receiver Performance for Future Service Support
NASA Technical Reports Server (NTRS)
Estabrook, Polly; Lee, Dennis; Cheng, Michael; Lau, Chi-Wung
2012-01-01
Objectives: examine the impact of providing the newly standardized CCSDS Low Density Parity Check (LDPC) codes to the SCaN return data service on the SCaN SN and DSN ground station receivers: the SN's current receiver, the Integrated Receiver (IR); the DSN's current receiver, the Downlink Telemetry and Tracking (DTT) receiver; and an early commercial-off-the-shelf (COTS) prototype of the SN User Service Subsystem Component Replacement (USS CR) Narrow Band Receiver. A further objective is to motivate discussion of general issues in ground station hardware design that would enable simple and inexpensive modifications to support future services.
Constellation labeling optimization for bit-interleaved coded APSK
NASA Astrophysics Data System (ADS)
Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe
2016-05-01
This paper investigates the constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in the Digital Video Broadcasting Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its merits of power and spectral efficiency together with robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A binary switching algorithm and its modified version are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulated results validate the proposed constellation labeling optimization scheme, which yields better performance than the conventional 32-APSK constellation defined in the DVB-S2 standard.
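The binary switching idea can be sketched as a local search over label assignments; the pairwise-swap variant, the surrogate distance-based cost and the 8-point ring constellation below are illustrative assumptions, not the paper's exact combined Euclidean-distance/mutual-information cost or its 32-APSK setup.

# A hedged sketch of a binary-switching-style labeling optimization. The cost
# (label bit differences weighted by inverse squared Euclidean distance) is
# only a surrogate for the paper's combined cost, and the constellation is a
# hypothetical 8-point ring rather than 32-APSK.
import itertools
import numpy as np

def labeling_cost(points, labels):
    """Penalize label pairs that differ in many bits on close symbols."""
    cost = 0.0
    for i, j in itertools.combinations(range(len(points)), 2):
        hamming = bin(labels[i] ^ labels[j]).count("1")
        cost += hamming / np.abs(points[i] - points[j]) ** 2
    return cost

def binary_switching(points, labels, max_iter=1000):
    """Greedily swap pairs of labels while any swap lowers the cost."""
    labels = list(labels)
    best = labeling_cost(points, labels)
    for _ in range(max_iter):
        improved = False
        for i, j in itertools.combinations(range(len(labels)), 2):
            labels[i], labels[j] = labels[j], labels[i]
            c = labeling_cost(points, labels)
            if c < best - 1e-12:
                best, improved = c, True
            else:
                labels[i], labels[j] = labels[j], labels[i]  # undo the swap
        if not improved:
            break
    return labels, best

# Hypothetical 8-point ring; APSK rings would be handled the same way.
pts = np.exp(2j * np.pi * np.arange(8) / 8)
lab, cost = binary_switching(pts, list(range(8)))
print(lab, cost)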
Mechanisms of lectin and antibody-dependent polymorphonuclear leukocyte-mediated cytolysis.
Tsunawaki, S; Ikenami, M; Mizuno, D; Yamazaki, M
1983-04-01
The mechanisms of tumor lysis by polymorphonuclear leukocytes (PMNs) were investigated. In antibody-dependent PMN-mediated cytolysis (ADPC), sensitized tumor cells were specifically lysed via Fc receptors on PMNs. On the other hand, lectin-dependent PMN-mediated cytolysis (LDPC) caused nonspecific lysis of several murine tumors after recognition of carbohydrate moieties on the cell membrane of both PMNs and tumor cells. Both ADPC and LDPC depended on glycolysis, and cytotoxicity was mediated by reactive oxygen species; LDPC was dependent on superoxide and ADPC on the myeloperoxidase system. The participation of reactive oxygen species in PMN cytotoxicity was also demonstrated by pharmacological triggering with phorbol myristate acetate. These results indicate that reactive oxygen species have an important role in tumor killing by PMNs and that ADPC and LDPC have partly different cytolytic processes as well as different recognition steps.
Progressive transmission of images over fading channels using rate-compatible LDPC codes.
Pan, Xiang; Banihashemi, Amir H; Cuhadar, Aysegul
2006-12-01
In this paper, we propose a combined source/channel coding scheme for transmission of images over fading channels. The proposed scheme employs rate-compatible low-density parity-check codes along with embedded image coders such as JPEG2000 and set partitioning in hierarchical trees (SPIHT). The assignment of channel coding rates to source packets is performed by a fast trellis-based algorithm. We examine the performance of the proposed scheme over correlated and uncorrelated Rayleigh flat-fading channels with and without side information. Simulation results for the expected peak signal-to-noise ratio of reconstructed images, which are within 1 dB of the capacity upper bound over a wide range of channel signal-to-noise ratios, show considerable improvement compared to existing results under similar conditions. We also study the sensitivity of the proposed scheme in the presence of channel estimation error at the transmitter and demonstrate that under most conditions our scheme is more robust compared to existing schemes.
Non-binary or genderqueer genders.
Richards, Christina; Bouman, Walter Pierre; Seal, Leighton; Barker, Meg John; Nieder, Timo O; T'Sjoen, Guy
2016-01-01
Some people have a gender which is neither male nor female and may identify as both male and female at one time, as different genders at different times, as no gender at all, or dispute the very idea of only two genders. The umbrella terms for such genders are 'genderqueer' or 'non-binary' genders. Such gender identities outside of the binary of female and male are increasingly being recognized in legal, medical and psychological systems and diagnostic classifications in line with the emerging presence and advocacy of these groups of people. Population-based studies show a small percentage--but a sizable proportion in terms of raw numbers--of people who identify as non-binary. While such genders have been extant historically and globally, they remain marginalized, and as such--while not being disorders or pathological in themselves--people with such genders remain at risk of victimization and of minority or marginalization stress as a result of discrimination. This paper therefore reviews the limited literature on this field and considers ways in which (mental) health professionals may assist the people with genderqueer and non-binary gender identities and/or expressions they may see in their practice. Treatment options and associated risks are discussed.
NASA Tech Briefs, October 2009
NASA Technical Reports Server (NTRS)
2009-01-01
Topics covered include: Light-Driven Polymeric Bimorph Actuators; Guaranteeing Failsafe Operation of Extended-Scene Shack-Hartmann Wavefront Sensor Algorithm; Cloud Water Content Sensor for Sounding Balloons and Small UAVs; Pixelized Device Control Actuators for Large Adaptive Optics; T-Slide Linear Actuators; G4FET Implementations of Some Logic Circuits; Electrically Variable or Programmable Nonvolatile Capacitors; System for Automated Calibration of Vector Modulators; Complementary Paired G4FETs as Voltage-Controlled NDR Device; Three MMIC Amplifiers for the 120-to-200 GHz Frequency Band; Low-Noise MMIC Amplifiers for 120 to 180 GHz; Using Ozone To Clean and Passivate Oxygen-Handling Hardware; Metal Standards for Waveguide Characterization of Materials; Two-Piece Screens for Decontaminating Granular Material; Mercuric Iodide Anticoincidence Shield for Gamma-Ray Spectrometer; Improved Method of Design for Folding Inflatable Shells; Ultra-Large Solar Sail; Cooperative Three-Robot System for Traversing Steep Slopes; Assemblies of Conformal Tanks; Microfluidic Pumps Containing Teflon[Trademark] AF Diaphragms; Transparent Conveyor of Dielectric Liquids or Particles; Multi-Cone Model for Estimating GPS Ionospheric Delays; High-Sensitivity GaN Microchemical Sensors; On the Divergence of the Velocity Vector in Real-Gas Flow; Progress Toward a Compact, Highly Stable Ion Clock; Instruments for Imaging from Far to Near; Reflectors Made from Membranes Stretched Between Beams; Integrated Risk and Knowledge Management Program -- IRKM-P; LDPC Codes with Minimum Distance Proportional to Block Size; Constructing LDPC Codes from Loop-Free Encoding Modules; MMICs with Radial Probe Transitions to Waveguides; Tests of Low-Noise MMIC Amplifier Module at 290 to 340 GHz; and Extending Newtonian Dynamics to Include Stochastic Processes.
Where Kinsey, Christ, and Tila Tequila meet: discourse and the sexual (non)-binary.
Callis, April S
2014-01-01
Drawing on 80 interviews and 17 months of participant observation in Lexington, Kentucky, this article details how individuals drew on three areas of national and local discourse to conceptualize sexuality. Media, popular science, and religious discourses can be viewed as portraying sexuality bifocally--as both a binary of heterosexual/homosexual and as a non-binary that encompasses fluidity. However, individuals in Lexington drew on each of these areas of discourse differently. Religion was thought to produce a binary vision of sexuality, whereas popular science accounts were understood as both binary and not. The media was understood as portraying non-binary identities that were not viable, thus strengthening the sexual binary. These differing points of view led identities such as bisexual and queer to lack cultural intelligibility.
Design and Implementation of Secure and Reliable Communication using Optical Wireless Communication
NASA Astrophysics Data System (ADS)
Saadi, Muhammad; Bajpai, Ambar; Zhao, Yan; Sangwongngam, Paramin; Wuttisittikulkij, Lunchakorn
2014-11-01
Wireless networking increases flexibility in the home and office environment by providing internet connectivity without wires, but at the cost of risks such as data theft or the loading of malicious code intended to harm the network. In this paper, we propose a novel method of establishing a secure and reliable communication link using optical wireless communication (OWC). For security, spatial-diversity-based transmission using two optical transmitters is used, and link reliability is achieved by a newly proposed method for constructing structured parity-check matrices for binary Low Density Parity Check (LDPC) codes. Experimental results show that a secure and reliable link between the transmitter and the receiver can be achieved using the proposed technique.
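One widely used way to obtain a structured parity-check matrix, shown here only as a hedged illustration of the general idea (the paper's specific construction may differ), is to expand a small base matrix of circulant shift values into a quasi-cyclic binary matrix.

# A minimal sketch of expanding a base matrix of circulant shifts into a
# structured (quasi-cyclic) LDPC parity-check matrix. The base matrix and
# lifting size below are hypothetical toy values, not the paper's design.
import numpy as np

def expand_qc_ldpc(base, z):
    """Replace each base entry with a z-by-z block:
    -1 -> all-zero block, s >= 0 -> identity cyclically shifted by s."""
    rows, cols = base.shape
    H = np.zeros((rows * z, cols * z), dtype=np.uint8)
    I = np.eye(z, dtype=np.uint8)
    for r in range(rows):
        for c in range(cols):
            s = base[r, c]
            if s >= 0:
                H[r*z:(r+1)*z, c*z:(c+1)*z] = np.roll(I, s, axis=1)
    return H

# Toy 2x4 base matrix with lifting size z = 4 gives an 8x16 binary H.
base = np.array([[0,  1, -1,  2],
                 [3, -1,  0,  1]])
H = expand_qc_ldpc(base, z=4)
print(H.shape, H.sum(axis=1))   # (8, 16) and row weights of 3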
25 Tb/s transmission over 5,530 km using 16QAM at 5.2 b/s/Hz spectral efficiency.
Cai, J-X; Batshon, H G; Zhang, H; Davidson, C R; Sun, Y; Mazurczyk, M; Foursa, D G; Sinkin, O; Pilipetskii, A; Mohs, G; Bergano, Neal S
2013-01-28
We transmit 250x100G PDM RZ-16QAM channels with 5.2 b/s/Hz spectral efficiency over 5,530 km using single-stage C-band EDFAs equalized to 40 nm. We use single parity check coded modulation and all channels are decoded with no errors after iterative decoding between a MAP decoder and an LDPC based FEC algorithm. We also observe that the optimum power spectral density is nearly independent of SE, signal baud rate or modulation format in a dispersion uncompensated system.
Performance of Low-Density Parity-Check Coded Modulation
NASA Astrophysics Data System (ADS)
Hamkins, J.
2011-02-01
This article presents the simulated performance of a family of nine AR4JA low-density parity-check (LDPC) codes when used with each of five modulations. In each case, the decoder inputs are codebit log-likelihood ratios computed from the received (noisy) modulation symbols using a general formula which applies to arbitrary modulations. Suboptimal soft-decision and hard-decision demodulators are also explored. Bit-interleaving and various mappings of bits to modulation symbols are considered. A number of subtle decoder algorithm details are shown to affect performance, especially in the error floor region. Among these are quantization dynamic range and step size, clipping degree-one variable nodes, "Jones clipping" of variable nodes, approximations of the min* function, and partial hard-limiting messages from check nodes. Using these decoder optimizations, all coded modulations simulated here are free of error floors down to codeword error rates below 10^{-6}. The purpose of generating this performance data is to aid system engineers in determining an appropriate code and modulation to use under specific power and bandwidth constraints, and to provide information needed to design a variable/adaptive coded modulation (VCM/ACM) system using the AR4JA codes.
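The general LLR formula referred to above can be sketched directly: for any constellation and labeling, the LLR of each code bit is the log-ratio of summed Gaussian likelihoods over the constellation points whose label carries a 0 versus a 1 in that position. The QPSK constellation, Gray labels and noise variance below are illustrative, not the article's AR4JA/modulation settings.

# Exact per-bit LLRs (log P(bit=0|y)/P(bit=1|y) with equiprobable symbols)
# for one received complex symbol y over an AWGN channel; sigma2 is the
# noise variance per real dimension. Works for an arbitrary constellation.
import numpy as np

def bit_llrs(y, points, labels, nbits, sigma2):
    metrics = -np.abs(y - points) ** 2 / (2.0 * sigma2)   # log-likelihoods
    llrs = np.empty(nbits)
    for i in range(nbits):
        bit = (labels >> i) & 1
        # log-sum-exp over points with bit i = 0 minus those with bit i = 1
        llrs[i] = (np.logaddexp.reduce(metrics[bit == 0]) -
                   np.logaddexp.reduce(metrics[bit == 1]))
    return llrs

# Example: Gray-labeled QPSK, one noisy symbol.
pts = np.array([1+1j, -1+1j, -1-1j, 1-1j]) / np.sqrt(2)
labels = np.array([0b00, 0b01, 0b11, 0b10])
print(bit_llrs(0.9 + 0.2j, pts, labels, nbits=2, sigma2=0.1))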
NASA Astrophysics Data System (ADS)
Doi, Masafumi; Tokutomi, Tsukasa; Hachiya, Shogo; Kobayashi, Atsuro; Tanakamaru, Shuhei; Ning, Sheyang; Ogura Iwasaki, Tomoko; Takeuchi, Ken
2016-08-01
NAND flash memory’s reliability degrades with increasing endurance, retention-time and/or temperature. After a comprehensive evaluation of 1X nm triple-level cell (TLC) NAND flash, two highly reliable techniques are proposed. The first proposal, quick low-density parity check (Quick-LDPC), requires only one cell read in order to accurately estimate a bit-error rate (BER) that includes the effects of temperature, write and erase (W/E) cycles and retention-time. As a result, 83% read latency reduction is achieved compared to conventional AEP-LDPC. Also, W/E cycling is extended by 100% compared with conventional Bose-Chaudhuri-Hocquenghem (BCH) error-correcting code (ECC). The second proposal, dynamic threshold voltage optimization (DVO), has two parts: adaptive V_Ref shift (AVS) and V_TH space control (VSC). AVS reduces read error and latency by adaptively optimizing the reference voltage (V_Ref) based on temperature, W/E cycles and retention-time. AVS stores the optimal V_Ref's in a table in order to enable one cell read. VSC further improves AVS by optimizing the voltage margins between V_TH states. DVO reduces BER by 80%.
Optimizations of a Hardware Decoder for Deep-Space Optical Communications
NASA Technical Reports Server (NTRS)
Cheng, Michael K.; Nakashima, Michael A.; Moision, Bruce E.; Hamkins, Jon
2007-01-01
The National Aeronautics and Space Administration has developed a capacity approaching modulation and coding scheme that comprises a serial concatenation of an inner accumulate pulse-position modulation (PPM) and an outer convolutional code [or serially concatenated PPM (SCPPM)] for deep-space optical communications. Decoding of this code uses the turbo principle. However, due to the nonbinary property of SCPPM, a straightforward application of classical turbo decoding is very inefficient. Here, we present various optimizations applicable in hardware implementation of the SCPPM decoder. More specifically, we feature a Super Gamma computation to efficiently handle parallel trellis edges, a pipeline-friendly 'maxstar top-2' circuit that reduces the max-only approximation penalty, a low-latency cyclic redundancy check circuit for window-based decoders, and a high-speed algorithmic polynomial interleaver that leads to memory savings. Using the featured optimizations, we implement a 6.72 megabits-per-second (Mbps) SCPPM decoder on a single field-programmable gate array (FPGA). Compared to the current data rate of 256 kilobits per second from Mars, the SCPPM coded scheme represents a throughput increase of more than twenty-six fold. Extension to a 50-Mbps decoder on a board with multiple FPGAs follows naturally. We show through hardware simulations that the SCPPM coded system can operate within 1 dB of the Shannon capacity at nominal operating conditions.
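The max* operation whose approximation the "maxstar top-2" circuit addresses is the Jacobian logarithm; a floating-point sketch of the exact operation and of the max-only approximation is given below (the hardware uses fixed-point look-up tables for the correction term, which this does not model).

# The "max*" (Jacobian logarithm) operation used in turbo-style decoding,
# together with the max-only approximation whose penalty the paper's
# "maxstar top-2" circuit is designed to reduce. Purely illustrative.
import math

def max_star(a: float, b: float) -> float:
    """Exact: log(exp(a) + exp(b)) = max(a, b) + log(1 + exp(-|a - b|))."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_only(a: float, b: float) -> float:
    """Max-log approximation: drops the correction term."""
    return max(a, b)

print(max_star(1.0, 1.2), max_only(1.0, 1.2))   # about 1.798 vs 1.2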
Adaptive channel estimation for soft decision decoding over non-Gaussian optical channel
NASA Astrophysics Data System (ADS)
Xiang, Jing-song; Miao, Tao-tao; Huang, Sheng; Liu, Huan-lin
2016-10-01
An adaptive a priori log-likelihood ratio (LLR) estimation method is proposed for non-Gaussian channels in intensity modulation/direct detection (IM/DD) optical communication systems. Using a nonparametric histogram and weighted least-squares linear fitting in the tail regions, the LLR is estimated and used for soft-decision decoding of low-density parity-check (LDPC) codes. This method adapts well to the three main kinds of IM/DD optical channel, i.e., the chi-square channel, the Webb-Gaussian channel and the additive white Gaussian noise (AWGN) channel. The performance penalty of the channel estimation is negligible.
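A simplified sketch of the general approach (histogram-based log-likelihoods with a weighted least-squares linear fit in a tail region) is given below; the bin counts, tail fraction, Gaussian-like training data and single upper-tail fit are illustrative assumptions rather than the paper's exact procedure or channel models.

# Estimate per-class log-likelihoods from histograms of training samples,
# extend the sparsely populated upper tail with a weighted least-squares
# line, and form LLRs for soft decoding. All parameters are illustrative.
import numpy as np

def fitted_log_pdf(samples, grid, bins=64, tail_frac=0.15):
    """Histogram-based log-pdf on `grid`, with linear tail extrapolation."""
    counts, edges = np.histogram(samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    good = counts > 0
    logp = np.interp(grid, centers[good], np.log(counts[good]))
    # Weighted LS line through the upper-tail bins (weights = bin density).
    n_tail = max(2, int(tail_frac * bins))
    idx = np.where(good)[0][-n_tail:]
    slope, intercept = np.polyfit(centers[idx], np.log(counts[idx]), 1,
                                  w=counts[idx])
    upper = grid > centers[idx[0]]
    logp[upper] = slope * grid[upper] + intercept
    return logp

# Illustrative IM/DD-like data: received intensities for bit 0 and bit 1.
rng = np.random.default_rng(0)
s0, s1 = rng.normal(1.0, 0.3, 20000), rng.normal(2.0, 0.4, 20000)
grid = np.linspace(0.0, 3.5, 200)
llr = fitted_log_pdf(s0, grid) - fitted_log_pdf(s1, grid)  # log p(y|0)/p(y|1)
print(llr[:5])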
Analysis of soft-decision FEC on non-AWGN channels.
Cho, Junho; Xie, Chongjin; Winzer, Peter J
2012-03-26
Soft-decision forward error correction (SD-FEC) schemes are typically designed for additive white Gaussian noise (AWGN) channels. In a fiber-optic communication system, noise may be neither circularly symmetric nor Gaussian, thus violating an important assumption underlying SD-FEC design. This paper quantifies the impact of non-AWGN noise on SD-FEC performance for such optical channels. We use a conditionally bivariate Gaussian noise model (CBGN) to analyze the impact of correlations among the signal's two quadrature components, and assess the effect of CBGN on SD-FEC performance using the density evolution of low-density parity-check (LDPC) codes. On a CBGN channel generating severely elliptic noise clouds, it is shown that more than 3 dB of coding gain are attainable by utilizing correlation information. Our analyses also give insights into potential improvements of the detection performance for fiber-optic transmission systems assisted by SD-FEC.
Ansara, Y Gavriel
2015-10-01
Recent Australian legislative and policy changes can benefit people of trans and/or non-binary experience (e.g. men assigned female with stereotypically 'female' bodies, women assigned male with stereotypically 'male' bodies, and people who identify as genderqueer, agender [having no gender], bi-gender [having two genders] or another gender option). These populations often experience cisgenderism, which previous research defined as 'the ideology that invalidates people's own understanding of their genders and bodies'. Some documented forms of cisgenderism include pathologising (treating people's genders and bodies as disordered) and misgendering (disregarding people's own understanding and classifications of their genders and bodies). This system of classifying people's lived experiences of gender and body invalidation is called the cisgenderism framework. Applying the cisgenderism framework in the ageing and aged care sector can enhance service providers' ability to meet the needs of older people of trans and/or non-binary experience. © 2015 AJA Inc.
Analysis of Optical CDMA Signal Transmission: Capacity Limits and Simulation Results
NASA Astrophysics Data System (ADS)
Garba, Aminata A.; Yim, Raymond M. H.; Bajcsy, Jan; Chen, Lawrence R.
2005-12-01
We present performance limits of the optical code-division multiple-access (OCDMA) networks. In particular, we evaluate the information-theoretical capacity of the OCDMA transmission when single-user detection (SUD) is used by the receiver. First, we model the OCDMA transmission as a discrete memoryless channel, evaluate its capacity when binary modulation is used in the interference-limited (noiseless) case, and extend this analysis to the case when additive white Gaussian noise (AWGN) is corrupting the received signals. Next, we analyze the benefits of using nonbinary signaling for increasing the throughput of optical CDMA transmission. It turns out that up to a fourfold increase in the network throughput can be achieved with practical numbers of modulation levels in comparison to the traditionally considered binary case. Finally, we present BER simulation results for channel-coded binary and M-ary OCDMA transmission systems. In particular, we apply turbo codes concatenated with Reed-Solomon codes so that up to several hundred concurrent optical CDMA users can be supported at low target bit error rates. We observe that unlike conventional OCDMA systems, turbo-empowered OCDMA can allow overloading (more active users than the length of the spreading sequences) with good bit error rate system performance.
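The single-user-detection capacity evaluation reduces to computing the mutual information of a discrete memoryless channel; the sketch below shows that computation for a toy binary-input transition matrix, which is a placeholder rather than the interference statistics derived in the paper.

# Mutual information I(X;Y) in bits for a discrete memoryless channel given
# an input pmf and a row-stochastic transition matrix P(y|x). The 2x2 matrix
# below is an illustrative toy channel, not the paper's OCDMA statistics.
import numpy as np

def mutual_information(p_x, p_y_given_x):
    p_x = np.asarray(p_x, dtype=float)
    W = np.asarray(p_y_given_x, dtype=float)
    p_y = p_x @ W
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(W > 0, W / p_y, 1.0)
        terms = np.where(W > 0, W * np.log2(ratio), 0.0)
    return float(p_x @ terms.sum(axis=1))

# Binary-input example: a Z-like channel where '1' chips survive with p=0.8.
W = np.array([[1.0, 0.0],
              [0.2, 0.8]])
priors = np.linspace(0.01, 0.99, 99)
rates = [mutual_information([1 - q, q], W) for q in priors]
print(max(rates))   # approximate capacity of this toy channel under SUD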
Co-operation of digital nonlinear equalizers and soft-decision LDPC FEC in nonlinear transmission.
Tanimura, Takahito; Oda, Shoichiro; Hoshida, Takeshi; Aoki, Yasuhiko; Tao, Zhenning; Rasmussen, Jens C
2013-12-30
We experimentally and numerically investigated the characteristics of 128 Gb/s dual polarization - quadrature phase shift keying signals received with two types of nonlinear equalizers (NLEs) followed by soft-decision (SD) low-density parity-check (LDPC) forward error correction (FEC). Successful co-operation between SD-FEC and the NLEs over various nonlinear transmission scenarios was demonstrated by optimizing the NLE parameters.
NASA Astrophysics Data System (ADS)
Matsui, Chihiro; Kinoshita, Reika; Takeuchi, Ken
2018-04-01
A hybrid of storage class memory (SCM) and NAND flash is a promising technology for high performance storage. Error correction is inevitable on SCM and NAND flash because their bit error rate (BER) increases with write/erase (W/E) cycles, data retention, and program/read disturb. In addition, scaling and multi-level cell technologies increase BER. However, error-correcting code (ECC) degrades storage performance because of extra memory reading and encoding/decoding time. Therefore, applicable ECC strength of SCM and NAND flash is evaluated independently by fixing ECC strength of one memory in the hybrid storage. As a result, weak BCH ECC with small correctable bit is recommended for the hybrid storage with large SCM capacity because SCM is accessed frequently. In contrast, strong and long-latency LDPC ECC can be applied to NAND flash in the hybrid storage with large SCM capacity because large-capacity SCM improves the storage performance.
Gianella, Sara; Sonya Haw, J; Blumenthal, Jill; Sullivan, Brooke; Smith, Davey
2018-04-17
Transgender and gender-nonbinary (trans/GNB) individuals are disproportionally affected by human immunodeficiency virus (HIV), yet they are not adequately represented in HIV research and often underserved in clinical care. By building on community strengths and addressing structural, psychological and biological challenges, we can improve the engagement of trans/GNB people in research and ultimately improve prevention, testing, and care for this population. Here, we review the current state of the science related to HIV for trans/GNB people and discuss next steps to expand research that aims to improve the lives and well-being of trans/GNB persons.
Stolzer, Maureen; Lai, Han; Xu, Minli; Sathaye, Deepa; Vernot, Benjamin; Durand, Dannie
2012-09-15
Gene duplication (D), transfer (T), loss (L) and incomplete lineage sorting (I) are crucial to the evolution of gene families and the emergence of novel functions. The history of these events can be inferred via comparison of gene and species trees, a process called reconciliation, yet current reconciliation algorithms model only a subset of these evolutionary processes. We present an algorithm to reconcile a binary gene tree with a nonbinary species tree under a DTLI parsimony criterion. This is the first reconciliation algorithm to capture all four evolutionary processes driving tree incongruence and the first to reconcile non-binary species trees with a transfer model. Our algorithm infers all optimal solutions and reports complete, temporally feasible event histories, giving the gene and species lineages in which each event occurred. It is fixed-parameter tractable, with polytime complexity when the maximum species outdegree is fixed. Application of our algorithms to prokaryotic and eukaryotic data show that use of an incomplete event model has substantial impact on the events inferred and resulting biological conclusions. Our algorithms have been implemented in Notung, a freely available phylogenetic reconciliation software package, available at http://www.cs.cmu.edu/~durand/Notung. mstolzer@andrew.cmu.edu.
A m-ary linear feedback shift register with binary logic
NASA Technical Reports Server (NTRS)
Perlman, M. (Inventor)
1973-01-01
A family of m-ary linear feedback shift registers with binary logic is disclosed. Each m-ary linear feedback shift register with binary logic generates a binary representation of a nonbinary recurring sequence, producible with an m-ary linear feedback shift register without binary logic in which m is greater than 2. The state table of an m-ary linear feedback shift register without binary logic, utilizing sum modulo m feedback, is first tabulated for a given initial state. The entries in the state table are coded in binary and the binary entries are used to set the initial states of the stages of a plurality of binary shift registers. A single feedback logic unit is employed which provides a separate feedback binary digit to each binary register as a function of the states of corresponding stages of the binary registers.
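A software sketch of the underlying sequence generator is given below: an m-ary LFSR with sum-modulo-m feedback, plus the binary coding of its symbols that the binary-logic realization stores. The modulus, taps and initial state are hypothetical, not those of the disclosed registers.

# Generate the nonbinary recurring sequence of an m-ary LFSR with
# sum-modulo-m feedback, and show the binary-coded form of its symbols.
# Modulus, taps and initial state are hypothetical illustration values.

def mary_lfsr(m, taps, state, length):
    """Feedback = sum of tapped stages mod m; stages shift toward the output."""
    state = list(state)
    out = []
    for _ in range(length):
        out.append(state[-1])                       # output stage
        fb = sum(state[t] for t in taps) % m        # sum-modulo-m feedback
        state = [fb] + state[:-1]                   # shift, insert feedback
    return out

def to_binary_coded(symbols, m):
    """Binary representation of each m-ary symbol (what the binary
    registers of the binary-logic implementation would hold)."""
    width = max(1, (m - 1).bit_length())
    return [format(s, f"0{width}b") for s in symbols]

seq = mary_lfsr(m=3, taps=(0, 1), state=(1, 0), length=8)   # ternary example
print(seq)                      # [0, 1, 1, 2, 0, 2, 2, 1], period 8 = 3**2 - 1
print(to_binary_coded(seq, 3))  # two binary digits per ternary symbol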
Characterization of vibrissa germinative cells: transition of cell types.
Osada, A; Kobayashi, K
2001-12-01
Germinative cells, small cell masses attached to the stalks of dermal papillae that are able to differentiate into the hair shaft and inner root sheath, form follicular bulb-like structures when co-cultured with dermal papilla cells. We studied the growth characteristics of germinative cells to determine the cell types in the vibrissa germinative tissue. Germinative tissues, attaching to dermal papillae, were cultured on 3T3 feeder layers. The cultured keratinocytes were harvested and transferred, equally and for two passages, onto lined dermal papilla cells (LDPC) and/or 3T3 feeder layers. The resulting germinative cells were classified into three types in the present experimental condition. Type 1 cells grow very well on either feeder layer, whereas Type 3 cells scarcely grow on either feeder layer. Type 2 cells are very conspicuous and are reversible. They grow well on 3T3 but growth is suppressed on LDPC feeder layers. The Type 2 cells that grow well on 3T3 feeder layers, however, are suppressed when transferred onto LDPC and the Type 2 cells that are suppressed on LDPC begin to grow again on 3T3. The transition of one cell type to another in vitro and the cell types that these germinative cell types correspond to in vivo are discussed. It was concluded that stem cells or their close progenitors reside in the germinative tissues of the vibrissa bulb except at late anagen-early catagen.
Djordjevic, Ivan B
2010-04-12
The Bell states preparation circuit is a basic circuit required in quantum teleportation. We describe how to implement it in all-fiber technology. The basic building blocks for its implementation are directional couplers and highly nonlinear optical fiber (HNLF). Because the quantum information processing is based on delicate superposition states, it is sensitive to quantum errors. In order to enable fault-tolerant quantum computing the use of quantum error correction is unavoidable. We show how to implement in all-fiber technology encoders and decoders for sparse-graph quantum codes, and provide an illustrative example to demonstrate this implementation. We also show that arbitrary set of universal quantum gates can be implemented based on directional couplers and HNLFs.
Serial binary interval ratios improve rhythm reproduction.
Wu, Xiang; Westanmo, Anders; Zhou, Liang; Pan, Junhao
2013-01-01
Musical rhythm perception is a natural human ability that involves complex cognitive processes. Rhythm refers to the organization of events in time, and musical rhythms have an underlying hierarchical metrical structure. The metrical structure induces the feeling of a beat and the extent to which a rhythm induces the feeling of a beat is referred to as its metrical strength. Binary ratios are the most frequent interval ratio in musical rhythms. Rhythms with hierarchical binary ratios are better discriminated and reproduced than rhythms with hierarchical non-binary ratios. However, it remains unclear whether a superiority of serial binary over non-binary ratios in rhythm perception and reproduction exists. In addition, how different types of serial ratios influence the metrical strength of rhythms remains to be elucidated. The present study investigated serial binary vs. non-binary ratios in a reproduction task. Rhythms formed with exclusively binary (1:2:4:8), non-binary integer (1:3:5:6), and non-integer (1:2.3:5.3:6.4) ratios were examined within a constant meter. The results showed that the 1:2:4:8 rhythm type was more accurately reproduced than the 1:3:5:6 and 1:2.3:5.3:6.4 rhythm types, and the 1:2.3:5.3:6.4 rhythm type was more accurately reproduced than the 1:3:5:6 rhythm type. Further analyses showed that reproduction performance was better predicted by the distribution pattern of event occurrences within an inter-beat interval, than by the coincidence of events with beats, or the magnitude and complexity of interval ratios. Whereas rhythm theories and empirical data emphasize the role of the coincidence of events with beats in determining metrical strength and predicting rhythm performance, the present results suggest that rhythm processing may be better understood when the distribution pattern of event occurrences is taken into account. These results provide new insights into the mechanisms underlining musical rhythm perception.
A burst-mode photon counting receiver with automatic channel estimation and bit rate detection
NASA Astrophysics Data System (ADS)
Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.
2016-04-01
We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.
Direct-detection Free-space Laser Transceiver Test-bed
NASA Technical Reports Server (NTRS)
Krainak, Michael A.; Chen, Jeffrey R.; Dabney, Philip W.; Ferrara, Jeffrey F.; Fong, Wai H.; Martino, Anthony J.; McGarry, Jan F.; Merkowitz, Stephen M.; Principe, Caleb M.; Sun, Siaoli;
2008-01-01
NASA Goddard Space Flight Center is developing a direct-detection free-space laser communications transceiver test bed. The laser transmitter is a master-oscillator power amplifier (MOPA) configuration using a 1060 nm wavelength laser-diode with a two-stage multi-watt Ytterbium fiber amplifier. Dual Mach-Zehnder electro-optic modulators provide an extinction ratio greater than 40 dB. The MOPA design delivered 10-W average power with low-duty-cycle PPM waveforms and achieved 1.7 kW peak power. We use pulse-position modulation format with a pseudo-noise code header to assist clock recovery and frame boundary identification. We are examining the use of low-density-parity-check (LDPC) codes for forward error correction. Our receiver uses an InGaAsP 1 mm diameter photocathode hybrid photomultiplier tube (HPMT) cooled with a thermo-electric cooler. The HPMT has 25% single-photon detection efficiency at 1064 nm wavelength with a dark count rate of 60,000/s at -22 degrees Celsius and a single-photon impulse response of 0.9 ns. We report on progress toward demonstrating a combined laser communications and ranging field experiment.
Non-binary Colour Modulation for Display Device Based on Phase Change Materials.
Ji, Hong-Kai; Tong, Hao; Qian, Hang; Hui, Ya-Juan; Liu, Nian; Yan, Peng; Miao, Xiang-Shui
2016-12-19
A reflective-type display device based on phase change materials is attractive because of its ultrafast response time and high resolution compared with a conventional display device. This paper proposes and demonstrates a unique display device in which multicolour changing can be achieved on a single device by the selective crystallization of double layer phase change materials. The optical contrast is optimized by the availability of a variety of film thicknesses of two phase change layers. The device exhibits a low sensitivity to the angle of incidence, which is important for display and colour consistency. The non-binary colour rendering on a single device is demonstrated for the first time using optical excitation. The device shows the potential for ultrafast display applications.
The serial message-passing schedule for LDPC decoding algorithms
NASA Astrophysics Data System (ADS)
Liu, Mingshan; Liu, Shanshan; Zhou, Yuan; Jiang, Xue
2015-12-01
The conventional message-passing schedule for LDPC decoding algorithms is the so-called flooding schedule. Its disadvantage is that updated messages cannot be used until the next iteration, which reduces the convergence speed. To address this, the layered belief propagation (LBP) algorithm, based on a serial message-passing schedule, has been proposed. In this paper the decoding principle of the LBP algorithm is briefly introduced, and two improved algorithms are then proposed: the grouped serial decoding algorithm (grouped LBP) and the semi-serial decoding algorithm. They improve the decoding speed of the LBP algorithm while maintaining good decoding performance.
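The difference between the schedules can be seen in a compact min-sum sketch: with the layered (serial) schedule each check row updates the posterior LLRs immediately, so later rows within the same iteration already see the new information. The toy parity-check matrix, channel LLRs and plain (unscaled) min-sum update below are illustrative assumptions, not the paper's decoders.

# Layered (serial-schedule) min-sum LDPC decoding on a toy code. A real
# decoder adds scaling or offsets, syndrome-based early termination and
# fixed-point arithmetic; this only illustrates the schedule.
import numpy as np

def layered_min_sum(H, llr_ch, n_iter=10):
    m, n = H.shape
    R = np.zeros((m, n))              # check-to-variable messages
    L = llr_ch.astype(float).copy()   # posterior (total) LLRs
    rows = [np.flatnonzero(H[i]) for i in range(m)]
    for _ in range(n_iter):
        for i in range(m):            # one "layer" per check node
            idx = rows[i]
            q = L[idx] - R[i, idx]            # extrinsic variable-to-check inputs
            s = np.where(q < 0, -1.0, 1.0)    # signs (zero treated as +)
            mags = np.abs(q)
            order = np.argsort(mags)
            m1, m2 = mags[order[0]], mags[order[1]]
            ext_mag = np.where(np.arange(len(idx)) == order[0], m2, m1)
            new = np.prod(s) * s * ext_mag    # min over the other positions
            R[i, idx] = new
            L[idx] = q + new                  # reused immediately (serial schedule)
    return L, (L < 0).astype(int)

# Toy parity-check matrix and one received word's channel LLRs.
H = np.array([[1, 1, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 1],
              [0, 1, 0, 1, 1, 1]])
llr = np.array([2.0, -0.5, 1.5, 3.0, 0.7, 1.2])
posterior, hard = layered_min_sum(H, llr)
print(hard)   # decodes to the all-zero codeword for this toy input

The flooding schedule would instead compute all check-node updates from the previous iteration's posteriors before applying any of them, which is what delays convergence.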
Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc
2014-05-01
Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance in case of low dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV) and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Micro-CT data of eight mice, each administered with an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithm yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels which were simulated for real mouse data sets, the HDTV algorithm shows the best performance. At 50 mGy, the deviation from the reference obtained at 500 mGy were less than 4%. Also the LDPC algorithm provides reasonable results with deviation less than 10% at 50 mGy while PCF and MKB reconstruction show larger deviations even at higher dose levels. LDPC and HDTV increase CNR and allow for quantitative evaluations even at dose levels as low as 50 mGy. The left ventricular volumes exemplarily illustrate that cardiac parameters can be accurately estimated at lowest dose levels if sophisticated algorithms are used. This allows to reduce dose by a factor of 10 compared to today's gold standard and opens new options for longitudinal studies of the heart.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maier, Joscha, E-mail: joscha.maier@dkfz.de; Sawall, Stefan; Kachelrieß, Marc
2014-05-15
Purpose: Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance in case of low dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV) and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Methods: Micro-CT data of eight mice, each administered with an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Results: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithm yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels which were simulated for real mouse data sets, the HDTV algorithm shows the best performance. At 50 mGy, the deviation from the reference obtained at 500 mGy were less than 4%. Also the LDPC algorithm provides reasonable results with deviation less than 10% at 50 mGy while PCF and MKB reconstruction show larger deviations even at higher dose levels. Conclusions: LDPC and HDTV increase CNR and allow for quantitative evaluations even at dose levels as low as 50 mGy. The left ventricular volumes exemplarily illustrate that cardiac parameters can be accurately estimated at lowest dose levels if sophisticated algorithms are used. This allows to reduce dose by a factor of 10 compared to today's gold standard and opens new options for longitudinal studies of the heart.
NASA Technical Reports Server (NTRS)
2009-01-01
Topics covered include: Direct-Solve Image-Based Wavefront Sensing; Use of UV Sources for Detection and Identification of Explosives; Using Fluorescent Viruses for Detecting Bacteria in Water; Gradiometer Using Middle Loops as Sensing Elements in a Low-Field SQUID MRI System; Volcano Monitor: Autonomous Triggering of In-Situ Sensors; Wireless Fluid-Level Sensors for Harsh Environments; Interference-Detection Module in a Digital Radar Receiver; Modal Vibration Analysis of Large Castings; Structural/Radiation-Shielding Epoxies; Integrated Multilayer Insulation; Apparatus for Screening Multiple Oxygen-Reduction Catalysts; Determining Aliasing in Isolated Signal Conditioning Modules; Composite Bipolar Plate for Unitized Fuel Cell/Electrolyzer Systems; Spectrum Analyzers Incorporating Tunable WGM Resonators; Quantum-Well Thermophotovoltaic Cells; Bounded-Angle Iterative Decoding of LDPC Codes; Conversion from Tree to Graph Representation of Requirements; Parallel Hybrid Vehicle Optimal Storage System; and Anaerobic Digestion in a Flooded Densified Leachbed.
PPLN-waveguide-based polarization entangled QKD simulator
NASA Astrophysics Data System (ADS)
Gariano, John; Djordjevic, Ivan B.
2017-08-01
We have developed a comprehensive simulator to study the polarization entangled quantum key distribution (QKD) system, which takes various imperfections into account. We assume that a type-II SPDC source using a PPLN-based nonlinear optical waveguide is used to generate entangled photon pairs and implements the BB84 protocol, using two mutually unbiased bases with two orthogonal polarizations in each basis. The entangled photon pairs are then simulated to be transmitted to both parties, Alice and Bob, through the optical channel, imperfect optical elements and onto the imperfect detectors. It is assumed that Eve has no control over the detectors, and can only gain information from the public channel and the intercept-resend attack. The secure key rate (SKR) is calculated using an upper bound and by using actual code rates of LDPC codes implementable in FPGA hardware. After verifying the simulation results for the ideal scenario, such as the pair generation rate and the number of errors due to multiple pairs, against values available in the literature, we then introduce various imperfections. Then, the results are compared to previously reported experimental results where a BBO nonlinear crystal is used, and the improvements in SKRs are determined for when a PPLN waveguide is used instead.
Identifying Gender Minority Patients' Health And Health Care Needs In Administrative Claims Data.
Progovac, Ana M; Cook, Benjamin Lê; Mullin, Brian O; McDowell, Alex; Sanchez R, Maria Jose; Wang, Ye; Creedon, Timothy B; Schuster, Mark A
2018-03-01
Health care utilization patterns for gender minority Medicare beneficiaries (those who are transgender or gender nonbinary people) are largely unknown. We identified gender minority beneficiaries using a diagnosis-code algorithm and compared them to a 5 percent random sample of non-gender minority beneficiaries from the period 2009-14 in terms of mental health and chronic diseases, use of preventive and mental health care, hospitalizations, and emergency department (ED) visits. Gender minority beneficiaries experienced more disability and mental illness. When we adjusted for age and mental health, we found that they used more mental health care. And when we adjusted for age and chronic conditions, we found that they were more likely to be hospitalized and to visit the ED. There were several small but significant differences in preventive care use. Findings were similar for disabled and older cohorts. These findings underscore the need to capture gender identity in health data to better address this population's health needs.
Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding
NASA Technical Reports Server (NTRS)
Mahmoud, Saad; Hi, Jianjun
2012-01-01
The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is a ratio between signal amplitude and noise variance. Accurately estimating this ratio has shown as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a pilot-guided estimation method, a blind estimation method, and a simulation-based look-up table. In the pilot-guided estimation method, the maximum likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and the estimate of the noise variance is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be accumulated. The blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which is better for faster-changing channels than the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulation results to determine signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft-decision value. The magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of the deviation. This method is more complicated than the pilot-guided method due to the gain control circuitry, but does not have the real-time computational complexity of the blind estimation method. Each of these methods can provide an accurate estimate of the combining ratio, and the final selection of the estimation method depends on other design constraints.
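A numerical sketch of the first two estimators, under an assumed BPSK/AWGN model with the combining ratio taken as amplitude over noise variance (some conventions use twice that), is given below; it follows the description above but is not the presentation's implementation.

# Pilot-guided and blind combining-ratio estimators for an assumed BPSK/AWGN
# model. Here the combining ratio is taken as amplitude / noise variance, and
# all symbols are treated as known pilots in the pilot-guided case for
# simplicity; these are illustrative assumptions.
import numpy as np

def pilot_guided(received_asm, known_asm):
    """Amplitude = mean inner product with the known ASM symbols;
    noise variance = mean received power minus squared amplitude."""
    a = np.mean(received_asm * known_asm)
    var = np.mean(received_asm ** 2) - a ** 2
    return a / var                      # combining ratio estimate

def blind(received, iters=40):
    """Bisection on the normalized amplitude a in (0,1) for the
    self-consistency equation a = mean(y * tanh(a*y / (1 - a^2)))."""
    P = np.mean(received ** 2)
    y = received / np.sqrt(P)           # unit-power normalization
    f = lambda a: np.mean(y * np.tanh(a * y / (1.0 - a * a))) - a
    lo, hi = 1e-3, 1.0 - 1e-3
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    a = 0.5 * (lo + hi)
    return (a / (1.0 - a * a)) / np.sqrt(P)   # undo the normalization

# Synthetic check: amplitude 1, noise variance 0.5 -> true ratio 2.0.
rng = np.random.default_rng(1)
bits = rng.choice([-1.0, 1.0], size=20000)
y = bits + rng.normal(0.0, np.sqrt(0.5), size=20000)
print(pilot_guided(y, bits), blind(y))   # both should be near 2.0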
Study of a co-designed decision feedback equalizer, deinterleaver, and decoder
NASA Technical Reports Server (NTRS)
Peile, Robert E.; Welch, Loyd
1990-01-01
A technique that promises better quality data from band limited channels at lower received power in digital transmission systems is presented. Data transmission in such systems often suffers from intersymbol interference (ISI) and noise. Two separate techniques, channel coding and equalization, have caused considerable advances in the state of communication systems, and both concern themselves with removing the undesired effects of a communication channel. Equalizers mitigate the ISI whereas coding schemes are used to incorporate error-correction. In the past, most of the research in these two areas has been carried out separately. However, the individual techniques have strengths and weaknesses that are complementary in many applications: an integrated approach realizes gains in excess of those of a simple juxtaposition. Coding schemes have been successfully used in cascade with linear equalizers, which in the absence of ISI provide excellent performance. However, when both ISI and the noise level are relatively high, nonlinear receivers like the decision feedback equalizer (DFE) perform better. The DFE has its drawbacks: it suffers from error propagation. The technique presented here takes advantage of interleaving to integrate the two approaches so that the error propagation in the DFE can be reduced with the help of error correction provided by the decoder. The results of simulations carried out for both binary and non-binary channels confirm that significant gain can be obtained by codesigning equalizer and decoder. Although only systems with time-invariant channels and a simple DFE with linear filters were considered, the technique is fairly general and can easily be modified for more sophisticated equalizers to obtain even larger gains.
The mental health of Canadian transgender youth compared with the Canadian population
Veale, Jaimie F.; Watson, Ryan J.; Peter, Tracey; Saewyc, Elizabeth M.
2017-01-01
Objectives This study documents the prevalence of mental health concerns among Canadian transgender youth and makes comparisons with cisgender or mostly-cisgender population-based studies. This study also compares gender identity subgroups (transgender girls/women, boys/men, and non-binary) and age subgroups (14–18 year olds and 19–25 year olds) on mental health outcomes. Methods A nonprobability sample of 923 transgender youth from across Canada completed a bilingual online survey. Participants were recruited through community organizations, healthcare settings, social media, and the researchers’ networks. Mental health measures were drawn from the British Columbia Adolescent Health Survey and the Canadian Community Health Survey. Results Transgender youth had a higher risk of reporting psychological distress, self-harm, major depressive episode, suicidal ideation, and suicide attempts. Risk ratios ranged from 3.8 to 16.1. Transgender boys/men and non-binary youth were most likely to report self-harm, and non-binary youth also reported lower overall mental health. Rates of self-harm and suicide were lower in the 19–25 age group than in the 14–18 age group, but reported overall mental health was the same across these age groups. Conclusions Although a notable minority of transgender youth reported good mental health, this study shows that the mental health disparities faced by transgender youth in Canada are considerable. Policy Implications These findings underscore the need for policies and laws protecting transgender people from discrimination, training in transgender competency for mental healthcare providers, and further development of transgender-specific interventions to promote positive mental health and reduce mental health problems among transgender youth. PMID:28007056
Latychevskaia, T; Chushkin, Y; Fink, H-W
2016-10-01
In coherent diffractive imaging, the resolution of the reconstructed object is limited by the numerical aperture of the experimental setup. We present here a theoretical and numerical study for achieving super-resolution by postextrapolation of coherent diffraction images, such as diffraction patterns or holograms. We demonstrate that a diffraction pattern can unambiguously be extrapolated from only a fraction of the entire pattern and that the ratio of the extrapolated signal to the originally available signal is linearly proportional to the oversampling ratio. Although there could in principle be other methods to achieve extrapolation, we devote our discussion to employing iterative phase retrieval methods and demonstrate their limits. We present two numerical studies; namely, the extrapolation of diffraction patterns of nonbinary objects and that of phase objects, together with a discussion of the optimal extrapolation procedure. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
MAJIQ-SPEL: Web-tool to interrogate classical and complex splicing variations from RNA-Seq data.
Green, Christopher J; Gazzara, Matthew R; Barash, Yoseph
2017-09-11
Analysis of RNA sequencing (RNA-Seq) data has highlighted the fact that most genes undergo alternative splicing (AS) and that these patterns are tightly regulated. Many of these events are complex, resulting in numerous possible isoforms that quickly become difficult to visualize, interpret, and experimentally validate. To address these challenges we developed MAJIQ-SPEL, a web-tool that takes as input local splicing variations (LSVs) quantified from RNA-Seq data and provides users with visualization and quantification of the gene isoforms associated with those LSVs. Importantly, MAJIQ-SPEL is able to handle both classical (binary) and complex, non-binary, splicing variations. Using a matching primer design algorithm it also suggests to users possible primers for experimental validation by RT-PCR and displays those, along with the matching protein domains affected by the LSV, on the UCSC Genome Browser for further downstream analysis. Program and code will be available at http://majiq.biociphers.org/majiq-spel. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Health, disability and quality of life among trans people in Sweden-a web-based survey.
Zeluf, Galit; Dhejne, Cecilia; Orre, Carolina; Nilunger Mannheimer, Louise; Deogan, Charlotte; Höijer, Jonas; Ekéus Thorson, Anna
2016-08-30
Swedish research concerning the general health of trans people is scarce. Despite the diversity of the group, most Swedish research has focused on gender dysphoric people seeking medical help for their gender incongruence, or on outcomes after medical gender-confirming interventions. This paper examines self-rated health, self-reported disability and quality of life among a diverse group of trans people including trans feminine, trans masculine, and gender nonbinary people (identifying with a gender in between male and female, or with neither of these genders) as well as people self-identifying as transvestites. Participants self-selected anonymously into a web-based survey conducted in 2014. Univariable and multivariable regression analyses were performed. Three backward selection regression models were conducted in order to identify significant variables for the outcomes self-rated health, self-reported disability and quality of life. Study participants included 796 individuals, between 15 and 94 years of age, who live in Sweden. Respondents represented a heterogeneous group with regard to trans experience, with the majority being gender nonbinary (44 %), followed by trans masculine (24 %), trans feminine (19 %) and transvestites (14 %). A fifth of the respondents reported poor self-rated health, 53 % reported a disability and 44 % reported quality of life scores below the median cut-off value of 6 (out of 10). Nonbinary gender identity (adjusted Odds Ratio (aOR) = 2.19; 95 % CI: 1.24, 3.84), negative health care experiences (aOR = 1.92; 95 % CI: 1.26, 2.91) and not accessing legal gender recognition (aOR = 3.06; 95 % CI: 1.64, 5.72) were significant predictors of self-rated health. Being gender nonbinary (aOR = 2.18; 95 % CI: 1.35, 3.54) and a history of negative health care experiences (aOR = 2.33; 95 % CI: 1.54, 3.52) were, in addition, associated with self-reported disability. Lastly, not accessing legal gender recognition (aOR = 0.32; 95 % CI: 0.17, 0.61) and a history of negative health care experiences (aOR = 0.56; 95 % CI: 0.36, 0.88) were associated with lower quality of life. The results of this study demonstrate that the general health of trans respondents is related to vulnerabilities that are unique to trans people in addition to other well-known health determinants.
Bai, Benfeng; Laukkanen, Janne; Kuittinen, Markku; Siitonen, Samuli
2010-10-01
We propose and investigate the use of slanted surface-relief gratings with nonbinary profiles as high-efficiency broadband couplers for light guides. First, a Chandezon-method-based rigorous numerical formulation is presented for modeling the slanted gratings with overhanging profiles. Then, two typical types of slanted grating couplers--a sinusoidal one and a trapezoidal one--are studied and optimized numerically, both exhibiting a high coupling efficiency of over 50% across the full band of a white LED under normal illumination with unpolarized light. Reasonable structural parameters with good tolerances have been obtained for the optimized designs. It is found that the performance of the couplers depends little on the grating profile shape, but primarily on the grating period and the slant angle of the ridge. The underlying mechanism is analyzed by the equivalence rules of gratings, which provide useful guidelines for the design and fabrication of the couplers. A preliminary investigation has been performed on the fabrication and replication of the slanted overhanging grating couplers, which shows the feasibility of fabrication with mature microfabrication techniques and the prospect of mass production.
Deep-space and near-Earth optical communications by coded orbital angular momentum (OAM) modulation.
Djordjevic, Ivan B
2011-07-18
In order to achieve multi-gigabit transmission (projected for 2020) for use in interplanetary communications, a large number of time slots in pulse-position modulation (PPM), typically used in deep-space applications, is needed, which imposes stringent requirements on system design and implementation. As an alternative satisfying the high-bandwidth demands of future interplanetary communications, while keeping the system cost and power consumption reasonably low, in this paper we describe the use of orbital angular momentum (OAM) as an additional degree of freedom. The OAM is associated with the azimuthal phase of the complex electric field. Because OAM eigenstates are orthogonal, they can be used as basis functions for N-dimensional signaling. The OAM modulation and multiplexing can, therefore, be used, in combination with other degrees of freedom, to meet the high-bandwidth requirements of future deep-space and near-Earth optical communications. The main challenge for OAM deep-space communication is the link between a spacecraft probe and the Earth station, because in the presence of atmospheric turbulence the orthogonality between OAM states is no longer preserved. We will show that, in combination with LDPC codes, the OAM-based modulation schemes can operate even under the strong atmospheric turbulence regime. In addition, the spectral efficiency of the proposed scheme is N^2/log2(N) times better than that of PPM.
ERIC Educational Resources Information Center
Lewis, Katherine
2017-01-01
The primary purpose of this study is to investigate and understand gender diverse individuals' retrospective accounts of their experiences of school and to interpret these experiences under the influence of deconstruction. A second purpose is to use these experiences to inform a model of gender-inclusive education. In this qualitative study,…
The Reliability and Validity of the Anima-Animus Continuum Scale
ERIC Educational Resources Information Center
Chow, Peter; Jeffery, Mikayla
2018-01-01
The conventional dichotomization of sex and gender into male and female has been found to be far from adequate in representing the amount of variance that exists within each biological sex. Expressions of gender and sexuality have gone beyond our imagination as more and more countries are beginning to allow a nonbinary/intersexed option on their…
2006-05-31
dynamics (MD) and kinetic Monte Carlo (KMC) procedures. In 2D surface modeling our calculations project speedups of 9 orders of magnitude at 300 degrees... programming is used to perform customized statistical mechanics by bridging the different time scales of MD and KMC quickly and well. Speedups in
ERIC Educational Resources Information Center
Hoyer, Grace
2013-01-01
The purpose of this study was to explore transcultural self-efficacy in nursing education administrators/faculty and to gain understanding of confidence related to lesbian, gay, bisexual, transgender (LGBT) issues. The 83-item Transcultural Self-Efficacy Tool (TSET) with an additional 11 LGBT questions was administered to 535 nursing education…
Multispectral Image Compression Based on DSC Combined with CCSDS-IDC
Li, Jin; Xing, Fei; Sun, Ting; You, Zheng
2014-01-01
Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually operate on a satellite where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by a DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with a Slepian-Wolf (SW) DSC strategy based on QC-LDPC codes in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches. PMID:25110741
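The per-band front end of this pipeline, sparsify each band with a DWT and then expose the coefficients plane by plane to a bit-plane encoder, can be illustrated compactly. The Python sketch below uses a one-level Haar transform as a stand-in for the CCSDS-IDC wavelet and simply extracts magnitude bit planes; the actual BPE and the QC-LDPC-based Slepian-Wolf coupling between bands are not reproduced here.

```python
import numpy as np

def haar_dwt2(band):
    """One-level 2D Haar DWT (stand-in for the DWT step; even-sized input assumed)."""
    a = (band[:, 0::2] + band[:, 1::2]) / 2.0        # horizontal average
    d = (band[:, 0::2] - band[:, 1::2]) / 2.0        # horizontal detail
    ll = (a[0::2, :] + a[1::2, :]) / 2.0
    lh = (a[0::2, :] - a[1::2, :]) / 2.0
    hl = (d[0::2, :] + d[1::2, :]) / 2.0
    hh = (d[0::2, :] - d[1::2, :]) / 2.0
    return ll, lh, hl, hh

def magnitude_bit_planes(coeffs, n_planes=8):
    """Most-significant-first bit planes of the rounded coefficient magnitudes,
    i.e. the raw material a bit-plane encoder (BPE) would scan."""
    mags = np.abs(np.rint(coeffs)).astype(np.int64)
    return [(mags >> p) & 1 for p in range(n_planes - 1, -1, -1)]

band = np.random.default_rng(3).integers(0, 256, (64, 64)).astype(float)
ll, lh, hl, hh = haar_dwt2(band)
planes = magnitude_bit_planes(hh)
print("nonzero bits per plane:", [int(p.sum()) for p in planes])
```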
Arikan and Alamouti matrices based on fast block-wise inverse Jacket transform
NASA Astrophysics Data System (ADS)
Lee, Moon Ho; Khan, Md Hashem Ali; Kim, Kyeong Jin
2013-12-01
Recently, Lee and Hou (IEEE Signal Process Lett 13: 461-464, 2006) proposed one-dimensional and two-dimensional fast algorithms for block-wise inverse Jacket transforms (BIJTs). Their BIJTs are not true inverse Jacket transforms from a mathematical point of view because their inverses do not satisfy the usual condition, i.e., the multiplication of a matrix with its inverse matrix is not equal to the identity matrix. Therefore, we mathematically propose a fast block-wise inverse Jacket transform of orders N = 2^k, 3^k, 5^k, and 6^k, where k is a positive integer. Based on the Kronecker product of the successive lower-order Jacket matrices and the basis matrix, the fast algorithms for realizing these transforms are obtained. Due to the simple inverses and fast algorithms of Arikan polar binary and Alamouti multiple-input multiple-output (MIMO) non-binary matrices, which are obtained from BIJTs, they can be applied in areas such as 3GPP physical-layer permutation matrix design for ultra mobile broadband, first-order q-ary Reed-Muller code design, diagonal channel design, diagonal subchannel decomposition for interference alignment, and 4G MIMO long-term evolution Alamouti precoding design.
Replacing the CCSDS Telecommand Protocol with the Next Generation Uplink (NGU)
NASA Technical Reports Server (NTRS)
Kazz, Greg J.; Greenberg, Ed; Burleigh, Scott C.
2012-01-01
The current CCSDS Telecommand (TC) Recommendations 1-3 have essentially been in use since the early 1960s. The purpose of this paper is to propose a successor protocol to TC. The current CCSDS recommendations can only accommodate telecommand rates up to approximately 1 Mbit/s. However, today's spacecraft are storehouses of software, including software for Field Programmable Gate Arrays (FPGAs), which are rapidly replacing unique hardware systems. Changes to flight software occasionally require uplinks to deliver very large volumes of data. In the opposite direction, high-rate downlink missions that use the acknowledged CCSDS File Delivery Protocol (CFDP)4 will increase the uplink data rate requirements. It is calculated that a 5 Mbit/s downlink could saturate a 4 kbit/s uplink with CFDP downlink responses: negative acknowledgements (NAKs), FINISHs, End-of-File (EOF), and Acknowledgements (ACKs). Moreover, it is anticipated that uplink rates of 10 to 20 Mbit/s will be required to support manned missions. The current TC recommendations cannot meet these new demands. Specifically, they are very tightly coupled to the Bose-Chaudhuri-Hocquenghem (BCH) code in Ref. 2. This protocol requires that an uncorrectable BCH codeword delimit the TC frame and terminate the randomization process. This method greatly limits telecom performance since only the BCH code can support the protocol. More modern techniques such as the CCSDS Low Density Parity Check (LDPC)5 codes can provide a minimum performance gain of up to 6 times higher command data rates as long as sufficient power is available in the data. This paper will describe the proposed protocol format, trade-offs, and advantages offered, along with a discussion of how reliable communications takes place at higher nominal rates.
High-Throughput Bit-Serial LDPC Decoder LSI Based on Multiple-Valued Asynchronous Interleaving
NASA Astrophysics Data System (ADS)
Onizawa, Naoya; Hanyu, Takahiro; Gaudet, Vincent C.
This paper presents a high-throughput bit-serial low-density parity-check (LDPC) decoder that uses an asynchronous interleaver. Since consecutive log-likelihood message values on the interleaver are similar, node computations are continuously performed by using the most recently arrived messages without significantly affecting bit-error rate (BER) performance. In the asynchronous interleaver, each message's arrival rate is based on the delay due to the wire length, so that the decoding throughput is not restricted by the worst-case latency, which results in a higher average rate of computation. Moreover, the use of a multiple-valued data representation makes it possible to multiplex control signals and data from mutual nodes, thus minimizing the number of handshaking steps in the asynchronous interleaver and eliminating the clock signal entirely. As a result, the decoding throughput becomes 1.3 times faster than that of a bit-serial synchronous decoder under a 90nm CMOS technology, at a comparable BER.
ERIC Educational Resources Information Center
Seelman, Kristie L.
2013-01-01
For years, transgender activists and their allies have spoken out about the oppression that transgender and gender non-conforming people experience in relation to societal systems and institutions, due to policies and practices that do not acknowledge non-binary experiences of gender, that do not recognize that one's gender may change over time or…
Low-dose cardio-respiratory phase-correlated cone-beam micro-CT of small animals.
Sawall, Stefan; Bergner, Frank; Lapp, Robert; Mronz, Markus; Karolczak, Marek; Hess, Andreas; Kachelriess, Marc
2011-03-01
Micro-CT imaging of animal hearts typically requires a double gating procedure because scans during a breath-hold are not possible due to the long scan times and the high respiratory rates. Simultaneous respiratory and cardiac gating can either be done prospectively or retrospectively. True five-dimensional information can be retrieved either with retrospective gating or with prospective gating if several prospective gates are acquired. In any case, the amount of information available to reconstruct one volume for a given respiratory and cardiac phase is orders of magnitude lower than the total amount of information acquired. For example, the reconstruction of a volume from a 10% wide respiratory and a 20% wide cardiac window uses only 2% of the data acquired. Achieving a similar image quality as a nongated scan would therefore require increasing the amount of data, and thereby the dose to the animal, by up to a factor of 50. To achieve the goal of low-dose phase-correlated (LDPC) imaging, the authors propose to use a highly efficient combination of slightly modified existing algorithms. In particular, the authors developed a variant of the McKinnon-Bates image reconstruction algorithm and combined it with bilateral filtering in up to five dimensions to significantly reduce image noise without impairing spatial or temporal resolution. The preliminary results indicate that the proposed LDPC reconstruction method typically reduces image noise by a factor of up to 6 (e.g., from 170 to 30 HU), while the dose values lie in a range from 60 to 500 mGy. Compared to other publications that apply 250-1800 mGy for the same task [C. T. Badea et al., "4D micro-CT of the mouse heart," Mol. Imaging 4(2), 110-116 (2005); M. Drangova et al., "Fast retrospectively gated quantitative four-dimensional (4D) cardiac micro computed tomography imaging of free-breathing mice," Invest. Radiol. 42(2), 85-94 (2007); S. H. Bartling et al., "Retrospective motion gating in small animal CT of mice and rats," Invest. Radiol. 42(10), 704-714 (2007)], the authors' LDPC approach therefore achieves a more than tenfold improvement in dose usage. The LDPC reconstruction method improves phase-correlated imaging from highly undersampled data. Artifacts caused by sparse angular sampling are removed and the image noise is decreased, while spatial and temporal resolution are preserved. Thus, the administered dose per animal can be decreased, allowing for long-term studies with reduced metabolic interference.
NASA Astrophysics Data System (ADS)
Castrillón, Mario A.; Morero, Damián A.; Agazzi, Oscar E.; Hueda, Mario R.
2015-08-01
The joint iterative detection and decoding (JIDD) technique has been proposed by Barbieri et al. (2007) with the objective of compensating the time-varying phase noise and constant frequency offset experienced in satellite communication systems. The application of JIDD to optical coherent receivers in the presence of laser frequency fluctuations has not been reported in prior literature. Laser frequency fluctuations are caused by mechanical vibrations, power supply noise, and other mechanisms. They significantly degrade the performance of the carrier phase estimator in high-speed intradyne coherent optical receivers. This work investigates the performance of the JIDD algorithm in multi-gigabit optical coherent receivers. We present simulation results of bit error rate (BER) for non-differential polarization division multiplexing (PDM)-16QAM modulation in a 200 Gb/s coherent optical system that includes an LDPC code with 20% overhead and net coding gain of 11.3 dB at BER = 10^-15. Our study shows that JIDD with a pilot rate ≤ 5% compensates for both laser phase noise and laser frequency fluctuation. Furthermore, since JIDD is used with non-differential modulation formats, we find that gains in excess of 1 dB can be achieved over existing solutions based on an explicit carrier phase estimator with differential modulation. The impact of the fiber nonlinearities in dense wavelength division multiplexing (DWDM) systems is also investigated. Our results demonstrate that JIDD is an excellent candidate for application in next generation high-speed optical coherent receivers.
DVB-S2 Experiment over NASA's Space Network
NASA Technical Reports Server (NTRS)
Downey, Joseph A.; Evans, Michael A.; Tollis, Nicholas S.
2017-01-01
The commercial DVB-S2 standard was successfully demonstrated over NASA's Space Network (SN) and the Tracking and Data Relay Satellite System (TDRSS) during testing conducted September 20-22, 2016. This test was a joint effort between NASA Glenn Research Center (GRC) and Goddard Space Flight Center (GSFC) to evaluate the performance of DVB-S2 as an alternative to traditional NASA SN waveforms. Two distinct sets of tests were conducted: one was sourced from the Space Communication and Navigation (SCaN) Testbed, an external payload on the International Space Station, and the other was sourced from GRC's S-band ground station to emulate a Space Network user through TDRSS. In both cases, a commercial off-the-shelf (COTS) receiver made by Newtec was used to receive the signal at the White Sands Complex. Using SCaN Testbed, peak data rates of 5.7 Mbps were demonstrated. Peak data rates of 33 Mbps were demonstrated from the GRC S-band ground station through a 10 MHz channel over TDRSS, using 32-amplitude phase shift keying (APSK) and a rate-8/9 low-density parity-check (LDPC) code. Advanced features of the DVB-S2 standard were evaluated, including variable and adaptive coding and modulation (VCM/ACM), as well as an adaptive digital pre-distortion (DPD) algorithm. These features provided additional data throughput and increased link performance reliability. This testing has shown that commercial standards are a viable, low-cost alternative for future Space Network users.
ERIC Educational Resources Information Center
Nicolazzo, Z.
2016-01-01
Being labeled as "abnormal, or deviant, or not being one of the rest of us" has real effects for one's life chances. Trans* people are one such group who have continually been codified as abnormal, abject, weird, deceptive, and social pariahs. The purpose of the following study was to explore how the concepts of passing, realness, and…
Castro-Chavez, Fernando
2012-01-01
Background Three binary representations of the genetic code according to the ancient I Ching of Fu-Xi will be presented, depending on their defragging capabilities by pairing based on three biochemical properties of the nucleic acids: H-bonds, Purine/Pyrimidine rings, and the Keto-enol/Amino-imino tautomerism, yielding the last pair a 32/32 single-strand self-annealed genetic code and I Ching tables. Methods Our working tool is the ancient binary I Ching's resulting genetic code chromosomes defragged by vertical and by horizontal pairing, reverse engineered into non-binaries of 2D rotating 4×4×4 circles and 8×8 squares and into one 3D 100% symmetrical 16×4 tetrahedron coupled to a functional tetrahedron with apical signaling and central hydrophobicity (codon formula: 4[1(1)+1(3)+1(4)+4(2)]; 5:5, 6:6 in man) forming a stella octangula, and compared to Nirenberg's 16×4 codon table (1965) pairing the first two nucleotides of the 64 codons in axis y. Results One horizontal and one vertical defragging had the start Met at the center. Two, both horizontal and vertical pairings produced two pairs of 2×8×4 genetic code chromosomes naturally arranged (M and I), rearranged by semi-introversion of central purines or pyrimidines (M' and I') and by clustering hydrophobic amino acids; their quasi-identity was disrupted by amino acids with odd codons (Met and Tyr pairing to Ile and TGA Stop); in all instances, the 64-grid 90° rotational ability was restored. Conclusions We defragged three I Ching representations of the genetic code while emphasizing Nirenberg's historical finding. The synthetic genetic code chromosomes obtained reflect the protective strategy of enzymes with a similar function, having both humans and mammals a biased G-C dominance of three H-bonds in the third nucleotide of their most used codons per amino acid, as seen in one chromosome of the i, M and M' genetic codes, while a two H-bond A-T dominance was found in their complementary chromosome, as seen in invertebrates and plants. The reverse engineering of chromosome I' into 2D rotating circles and squares was undertaken, yielding a 100% symmetrical 3D geometry which was coupled to a previously obtained genetic code tetrahedron in order to differentiate the start methionine from the methionine that is acting as a codifying non-start codon. PMID:23431415
ZED- A LINE EDITOR FOR THE DEC VAX
NASA Technical Reports Server (NTRS)
Scott, P. J.
1994-01-01
The ZED editor for the DEC VAX is a simple, yet powerful line editor for text, program source code, and non-binary data. Line editors can be superior to screen editors in some cases, such as executing complex multiple or conditional commands, or editing via slow modem lines. ZED excels in the area of text processing by using procedure files. For example, such procedures can reformat a file of addresses or remove all comment lines from a FORTRAN program. In addition to command files, ZED also features versatile search qualifiers, global changes, conditionals, on-line help, hexadecimal mode, space compression, looping, logical combinations of search strings, journaling, visible control characters, and automatic detabbing. The ZED editor was originally developed at Cambridge University and has been continuously enhanced since 1976. Users of the Cambridge implementation have devised such elaborate ZED procedures as chess games, calculators, and programs for evaluating Pi. This implementation of ZED strives to maintain the characteristics of the Cambridge editor. A complete ZED manual is included on the tape. ZED is written entirely in C for either batch or interactive execution on the DEC VAX under VMS 4.X and requires 80,896 bytes of memory. This program was released in 1988 and updated in 1989.
High-Speed Large-Alphabet Quantum Key Distribution Using Photonic Integrated Circuits
2014-01-28
[Figure and table residue from the report: abbreviations PBS (polarizing beam splitter) and TDC (time-to-digital converter); an operating-point table with columns for extra loss, photons per frame, frame size, QSER, secure bits per photon, ECC, and secure key rate, with one legible row reporting 1.3 photons/frame, frame size 16, QSER 9.5%, 2.9 secure bits per photon using a layered LDPC code, and a 7.3 Mbps secure key rate; caption of Figure 24 ("Operating...") truncated.]
Nonlinear detection for a high rate extended binary phase shift keying system.
Chen, Xian-Qing; Wu, Le-Nan
2013-03-28
The algorithm and the results of a nonlinear detector using a machine learning technique called the support vector machine (SVM) on an efficient modulation system with high data rate and low energy consumption are presented in this paper. Simulation results showed that the performance achieved by the SVM detector is comparable to that of a conventional threshold decision (TD) detector. The two detectors detect the received signals together with the special impacting filter (SIF) that can improve the energy utilization efficiency. However, unlike the TD detector, the SVM detector concentrates not only on reducing the BER of the detector, but also on providing accurate posterior probability estimates (PPEs), which can be used as soft inputs of the LDPC decoder. The complexity of this detector is considered in this paper by using four features and simplifying the decision function. In addition, a bandwidth-efficient transmission is analyzed with both the SVM and TD detectors. The SVM detector is more robust to the sampling rate than the TD detector. We find that the SVM is suitable for extended binary phase shift keying (EBPSK) signal detection and can provide accurate posterior probabilities for LDPC decoding.
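The key idea, use the SVM's posterior probability estimates as soft input to an LDPC decoder, can be pictured with a small sketch. Below is an illustrative Python example using scikit-learn: the toy channel, the hand-picked features, and the logit conversion are my own assumptions and do not reproduce the paper's SIF/EBPSK front end or its four specific features.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Toy stand-in for the detection problem: binary symbols observed through a
# noisy, mildly nonlinear channel.
bits = rng.integers(0, 2, 2000)
x = 2.0 * bits - 1.0
obs = np.tanh(1.5 * x) + 0.6 * rng.normal(size=x.size)
features = np.column_stack([obs, obs ** 2])          # simple illustrative features

svm = SVC(kernel="rbf", probability=True).fit(features[:1500], bits[:1500])

# Posterior probability estimates (PPEs) -> LLRs usable as LDPC soft input.
p1 = svm.predict_proba(features[1500:])[:, list(svm.classes_).index(1)]
eps = 1e-12
llr = np.log((1.0 - p1 + eps) / (p1 + eps))          # positive LLR favours bit 0
print("mean LLR for transmitted 0s:", llr[bits[1500:] == 0].mean())
print("mean LLR for transmitted 1s:", llr[bits[1500:] == 1].mean())
```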
NASA Astrophysics Data System (ADS)
Lippman, Thomas; Brockie, Richard; Coker, Jon; Contreras, John; Galbraith, Rick; Garzon, Samir; Hanson, Weldon; Leong, Tom; Marley, Arley; Wood, Roger; Zakai, Rehan; Zolla, Howard; Duquette, Paul; Petrizzi, Joe
2015-05-01
Exponential growth of the areal density has driven the magnetic recording industry for almost sixty years. But now areal density growth is slowing down, suggesting that current technologies are reaching their fundamental limit. The next generation of recording technologies, namely, energy-assisted writing and bit-patterned media, remains just over the horizon. Two-Dimensional Magnetic Recording (TDMR) is a promising new approach, enabling continued areal density growth with only modest changes to the heads and recording electronics. We demonstrate a first generation implementation of TDMR by using a dual-element read sensor to improve the recovery of data encoded by a conventional low-density parity-check (LDPC) channel. The signals are combined with a 2D equalizer into a single modified waveform that is decoded by a standard LDPC channel. Our detection hardware can perform simultaneous measurement of the pre- and post-combined error rate information, allowing one set of measurements to assess the absolute areal density capability of the TDMR system as well as the gain over a conventional shingled magnetic recording system with identical components. We discuss areal density measurements using this hardware and demonstrate gains exceeding five percent based on experimental dual reader components.
Files synchronization from a large number of insertions and deletions
NASA Astrophysics Data System (ADS)
Ellappan, Vijayan; Kumari, Savera
2017-11-01
Synchronization between different versions of files is becoming a major issue that many applications face. To make such applications more efficient, an economical algorithm is developed from the previously used "File Loading Algorithm". I extend this algorithm in three ways: first, it deals with non-binary files; second, a backup is generated for uploaded files; and lastly, each file is synchronized under insertions and deletions. A user can reconstruct a file from the former file while minimizing the error, and the scheme also provides interactive communication without disturbance. The drawback of the previous system is overcome by using synchronization, in which multiple copies of each file/record are created, stored in a backup database, and efficiently restored in case of any unwanted deletion or loss of data. That is, we introduce a protocol that user B may use to reconstruct file X from file Y with suitably low probability of error. Synchronization algorithms find numerous areas of use, including data storage, file sharing, source code control systems, and cloud applications. For example, cloud storage services such as Dropbox synchronize between local copies and cloud backups each time users make changes to local versions. Similarly, synchronization tools are necessary on mobile devices. Specialized synchronization algorithms are used for video and sound editing. Synchronization tools are also capable of performing data duplication.
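The reconstruction idea, build the new version of a file out of blocks the other side already holds plus a small set of literal patches, can be sketched in a few lines. The Python sketch below is a generic rsync-style illustration under my own simplifications (fixed block boundaries, no rolling hash); it is not the paper's algorithm.

```python
import hashlib

def block_signatures(data: bytes, block: int = 1024) -> dict:
    """Hash fixed-size blocks of the old file (works on binary or non-binary data)."""
    return {hashlib.sha1(data[i:i + block]).hexdigest(): i
            for i in range(0, len(data), block)}

def delta(new: bytes, signatures: dict, block: int = 1024) -> list:
    """Very simplified delta: reuse blocks the receiver already has, send the rest.
    Real synchronizers use rolling hashes so matches are found at any offset,
    which is what makes them robust to insertions and deletions."""
    ops, i = [], 0
    while i < len(new):
        chunk = new[i:i + block]
        key = hashlib.sha1(chunk).hexdigest()
        if key in signatures:
            ops.append(("copy", signatures[key], len(chunk)))   # receiver copies locally
        else:
            ops.append(("literal", chunk))                      # sender ships raw bytes
        i += block
    return ops

old = b"the quick brown fox jumps over the lazy dog" * 50
new = old[:1000] + b" inserted text " + old[1000:]
print(delta(new, block_signatures(old))[:3])
```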
A high throughput architecture for a low complexity soft-output demapping algorithm
NASA Astrophysics Data System (ADS)
Ali, I.; Wasenmüller, U.; Wehn, N.
2015-11-01
Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and are therefore part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits, with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high-order modulation systems, and therefore low-complexity demapping algorithms are indispensable in low-power receivers. In the presence of multiple wireless communication standards, where each standard defines multiple modulation schemes, there is a need for an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is to achieve a very high throughput in doubly iterative systems, for instance, MIMO and code-aided synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low-complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high-throughput, flexible, and area-efficient architecture. We describe architectures to execute the investigated algorithms. We implement these architectures on an FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture, based on the best low-complexity algorithm identified, delivering a high throughput of 166 Msymbols/second for Gray-mapped 16-QAM modulation on a Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs and 2 DSP48Es.
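For context on what such a demapper computes, here is a minimal max-log LLR demapper for Gray-mapped 16-QAM in Python. It is only a reference-style sketch of the computation the paper's architecture approximates: the hardware-friendly low-complexity algorithms it evaluates replace the search below with piecewise-linear expressions, and the specific bit labeling here is an assumption.

```python
import numpy as np

# One axis of a Gray-mapped 16-QAM constellation: 4-PAM levels carrying 2 bits each.
LEVELS = np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(10.0)   # unit average symbol energy
BITS = [(0, 0), (0, 1), (1, 1), (1, 0)]                     # Gray labels per level (assumed)

def maxlog_llrs_axis(y, noise_var):
    """Max-log LLRs for the two bits on one real axis (noise_var per dimension)."""
    metrics = -(y - LEVELS) ** 2 / (2.0 * noise_var)         # log-likelihood up to a constant
    llrs = []
    for b in (0, 1):
        m0 = max(m for m, bits in zip(metrics, BITS) if bits[b] == 0)
        m1 = max(m for m, bits in zip(metrics, BITS) if bits[b] == 1)
        llrs.append(m0 - m1)                                 # positive LLR favours bit 0
    return llrs

def demap_16qam(symbols, noise_var):
    """4 soft bits per complex 16-QAM observation (I-axis bits first, then Q-axis)."""
    out = []
    for s in np.atleast_1d(symbols):
        out.extend(maxlog_llrs_axis(s.real, noise_var))
        out.extend(maxlog_llrs_axis(s.imag, noise_var))
    return np.array(out)

print(demap_16qam(np.array([0.9 + 0.2j]), noise_var=0.05))
```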
NASA Tech Briefs, September 2009
NASA Technical Reports Server (NTRS)
2009-01-01
Topics covered include: Filtering Water by Use of Ultrasonically Vibrated Nanotubes; Computer Code for Nanostructure Simulation; Functionalizing CNTs for Making Epoxy/CNT Composites; Improvements in Production of Single-Walled Carbon Nanotubes; Progress Toward Sequestering Carbon Nanotubes in PmPV; Two-Stage Variable Sample-Rate Conversion System; Estimating Transmitted-Signal Phase Variations for Uplink Array Antennas; Board Saver for Use with Developmental FPGAs; Circuit for Driving Piezoelectric Transducers; Digital Synchronizer without Metastability; Compact, Low-Overhead, MIL-STD-1553B Controller; Parallel-Processing CMOS Circuitry for M-QAM and 8PSK TCM; Differential InP HEMT MMIC Amplifiers Embedded in Waveguides; Improved Aerogel Vacuum Thermal Insulation; Fluoroester Co-Solvents for Low-Temperature Li+ Cells; Using Volcanic Ash to Remove Dissolved Uranium and Lead; High-Efficiency Artificial Photosynthesis Using a Novel Alkaline Membrane Cell; Silicon Wafer-Scale Substrate for Microshutters and Detector Arrays; Micro-Horn Arrays for Ultrasonic Impedance Matching; Improved Controller for a Three-Axis Piezoelectric Stage; Nano-Pervaporation Membrane with Heat Exchanger Generates Medical-Grade Water; Micro-Organ Devices; Nonlinear Thermal Compensators for WGM Resonators; Dynamic Self-Locking of an OEO Containing a VCSEL; Internal Water Vapor Photoacoustic Calibration; Mid-Infrared Reflectance Imaging of Thermal-Barrier Coatings; Improving the Visible and Infrared Contrast Ratio of Microshutter Arrays; Improved Scanners for Microscopic Hyperspectral Imaging; Rate-Compatible LDPC Codes with Linear Minimum Distance; PrimeSupplier Cross-Program Impact Analysis and Supplier Stability Indicator Simulation Model; Integrated Planning for Telepresence With Time Delays; Minimizing Input-to-Output Latency in Virtual Environment; Battery Cell Voltage Sensing and Balancing Using Addressable Transformers; Gaussian and Lognormal Models of Hurricane Gust Factors; Simulation of Attitude and Trajectory Dynamics and Control of Multiple Spacecraft; Integrated Modeling of Spacecraft Touch-and-Go Sampling; Spacecraft Station-Keeping Trajectory and Mission Design Tools; Efficient Model-Based Diagnosis Engine; and DSN Simulator.
Image transmission system using adaptive joint source and channel decoding
NASA Astrophysics Data System (ADS)
Liu, Weiliang; Daut, David G.
2005-03-01
In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source-decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel-decoded bits are then sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder. The log-likelihood ratios (LLRs) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. That is, for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the non-source-controlled decoding method by up to 5 dB in terms of PSNR for various reconstructed images.
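The feedback step can be pictured with a small sketch: after a tentative decoding, the JPEG2000 error-resilience checks flag bit positions as correct or erroneous, and the corresponding channel LLRs are scaled before the next sum-product iteration. The SNR-dependent weighting function in the Python sketch below is an illustrative placeholder of my own; the paper derives its own factor from observed decoding statistics.

```python
import numpy as np

def reweight_llrs(llrs, known_correct, known_error, snr_db):
    """Scale channel LLRs between LDPC iterations using source-decoder feedback.

    known_correct / known_error: index arrays of bits flagged by the JPEG2000
    error-resilience checks after a tentative decoding.  The SNR-dependent
    weight below is an assumed mapping, not the paper's exact function.
    """
    weight = 1.0 + 0.25 * max(0.0, 6.0 - snr_db)   # larger correction at lower SNR
    out = np.asarray(llrs, dtype=float).copy()
    out[known_correct] *= weight                   # reinforce bits the source decoder trusts
    out[known_error] /= weight                     # attenuate bits flagged as erroneous
    return out

llrs = np.array([1.2, -0.3, 0.8, -2.0])
print(reweight_llrs(llrs, known_correct=[0], known_error=[1], snr_db=2.0))
```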
Suicide risk in trans populations: An application of minority stress theory.
Tebbe, Elliot A; Moradi, Bonnie
2016-10-01
Drawing on minority stress theory, the present study tested the relations of minority stressors (i.e., experiences of prejudice and discrimination, internalized antitrans attitudes, fear of antitrans stigma), social support (i.e., friend, family, and significant other support), and substance use (i.e., drug and alcohol use) with depression and suicide risk in a sample of trans individuals. Depression was examined as a mediator of the relations of minority stressors and social support with suicide risk; drug and alcohol use were examined as direct correlates of suicide risk. Participants were 335 trans-identified individuals, diverse in gender identities (e.g., trans men, trans women, nonbinary gender identities). They were recruited using online social networks and they completed the study survey online via Qualtrics. Structural equation modeling was used to test hypothesized relations. Depression fully mediated the relations of perceived experiences of discrimination, fear of antitrans stigma, and friend support with suicide risk, and partially mediated the relation of internalized antitrans attitudes with suicide. Drug use was positively associated with suicide risk, whereas alcohol use was not linked with suicide risk. Exploratory comparisons across gender subgroups suggested that the pattern of relations among study variables was consistent across trans men, trans women, and individuals with nonbinary gender identities. These findings point to minority stressors, friend support, and drug use as potentially fruitful targets of prevention and intervention efforts to reduce depression and suicide risk in trans populations. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Genders and Individual Treatment Progress in (Non-)Binary Trans Individuals.
Koehler, Andreas; Eyssel, Jana; Nieder, Timo O
2018-01-01
Health care for transgender and transsexual (ie, trans) individuals has long been based on a binary understanding of gender (ie, feminine vs masculine). However, the existence of non-binary or genderqueer (NBGQ) genders is increasingly recognized by academic and/or health care professionals. To gain insight into the individual health care experiences and needs of binary and NBGQ individuals to improve their health care outcomes and experience. Data were collected using an online survey study on experiences with trans health care. The non-clinical sample consisted of 415 trans individuals. An individual treatment progress score was calculated to report and compare participants' individual progress toward treatment completion and consider the individual treatment needs and definitions of completed treatment (ie, amount and types of different treatments needed to complete one's medical transition). Main outcome measures were (i) general and trans-related sociodemographic data and (ii) received and planned treatments. Participants reported binary (81.7%) and different NBGQ (18.3%) genders. The 2 groups differed significantly in basic demographic data (eg, mean age; P < .05). NBGQ participants reported significantly fewer received treatments compared with binary participants. For planned treatments, binary participants reported more treatments related to primary sex characteristics only. Binary participants required more treatments for a completed treatment than NBGQ participants (6.0 vs 4.0). There were no differences with regard to individual treatment progress score. Because traditional binary-focused treatment practice could have hindered NBGQ individuals from accessing trans health care or sufficiently articulating their needs, health care professionals are encouraged to provide a holistic and individual treatment approach and acknowledge genders outside the gender binary to address their needs appropriately. Because the study was made inclusive for non-patients and individuals who decided against trans health care, bias from a participant-patient double role was prevented, which is the reason the results are likely to have a higher level of validity than a clinical sample. However, because of the anonymity of an online survey, it remains unclear whether NBGQ individuals live according to their gender identity in their everyday life. The study highlights the broad spectrum of genders in trans-individuals and associated health care needs and provides a novel approach to measure individual treatment progress in trans individuals. Koehler A, Eyssel J, Nieder TO. Genders and Individual Treatment Progress in (Non-)Binary Trans Individuals. J Sex Med 2018;15:102-113. Copyright © 2017 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.
Flentje, Annesa; Bacca, Cristina L; Cochran, Bryan N
2015-02-01
Lesbian, gay, bisexual, and transgender individuals are at higher risk for substance use and substance use disorders than heterosexual individuals and are more likely to seek substance use treatment, yet sexual orientation and gender identity are frequently not reported in the research literature. The purpose of this study was to identify if sexual orientation and gender identity are being reported in the recent substance use literature, and if this has changed over time. The PsycINFO and PubMed databases were searched for articles released in 2007 and 2012 using the term "substance abuse" and 200 articles were randomly selected from each time period and database. Articles were coded for the presence or absence of sexual orientation and gender identity information. Participants' sexual orientation was reported in 3.0% and 4.9% of the 2007 and 2.3% and 6.5% of the 2012 sample, in PsycINFO and PubMed sample articles, respectively, while non-binary gender identity was reported in 0% and 1.0% of the 2007 sample and 2.3% and 1.9% of the 2012 PsycINFO and PubMed sample articles. There were no differences in rates of reporting over time. Sexual orientation and gender identity are rarely reported in the substance abuse literature, and there has not been a change in reporting practices between 2007 and 2012. Recommendations for future investigators in reporting sexual orientation and gender identity are included. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Sexual Violence on Campus: Differences Across Gender and Sexual Minority Status.
Martin-Storey, Alexa; Paquette, Geneviève; Bergeron, Manon; Dion, Jacinthe; Daigneault, Isabelle; Hébert, Martine; Ricci, Sandrine
2018-06-01
Sexual violence is a pervasive problem on university campuses. Although previous work has documented greater vulnerability for sexual violence among sexual and gender minority students, little is known about contextual variation in vulnerability to this kind of violence. The goals of the current study were (1) to identify vulnerability among sexual and gender minority students with regard to sexual violence, and (2) to explore if the context of this violence differs across sexual and gender minority status. Undergraduate students (ages 18-24) from six francophone universities in Quebec, Canada (N = 4,264) completed online questionnaires regarding their experience of sexual violence, as well as the context of these acts (e.g., the gender of the perpetrator, the status of the perpetrator, and the location of the violence). They also provided information regarding their sexual and gender minority status. Binary logistic regressions were conducted to assess for variation in experiencing sexual violence across sexual and gender minority status. Transgender/nonbinary students generally reported higher levels of sexual violence than their cisgender peers, while variation occurred with regard to vulnerability across sexual identity subgroups. Few differences in context were observed across sexual minority identity. Transgender/nonbinary students were significantly more likely to report sexual violence in athletic contexts and during volunteering activities compared to their cisgender peers. Findings highlight the higher levels of vulnerability for sexual violence among gender minority and some sexual minority university students. They also point to the contexts in which such violence occurs, suggesting specific strategies for prevention. Copyright © 2018 The Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Quantization of Gaussian samples at very low SNR regime in continuous variable QKD applications
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina
2016-09-01
The main problem for information reconciliation in continuous variable Quantum Key Distribution (QKD) at low Signal to Noise Ratio (SNR) is the quantization and assignment of labels to the samples of the Gaussian Random Variables (RVs) observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective SNR and exacerbating the problem. This paper looks at the quantization problem of the Gaussian samples in the very low SNR regime from an information theoretic point of view. We look at the problem of two-bit-per-sample quantization of the Gaussian RVs at Alice and Bob and derive expressions for the mutual information between the bit strings that result from this quantization. The quantization threshold for the Most Significant Bit (MSB) should be chosen based on the maximization of the mutual information between the quantized bit strings. Furthermore, while the LSB strings at Alice and Bob are balanced in the sense that their entropy is close to maximum, this is not the case for the second most significant bit even under the optimal threshold. We show that with two-bit quantization at an SNR of -3 dB we achieve 75.8% of the maximal achievable mutual information between Alice and Bob; hence, as the number of quantization bits increases beyond 2 bits, the number of additional useful bits that can be extracted for secret key generation decreases rapidly. Furthermore, the error rates between the bit strings at Alice and Bob at the same significant-bit level are rather high, demanding very powerful error correcting codes. While our calculations and simulations show that the mutual information between the LSB at Alice and Bob is 0.1044 bits, that at the MSB level is only 0.035 bits. Hence, it is only by looking at the bits jointly that we are able to achieve a mutual information of 0.2217 bits, which is 75.8% of the maximum achievable. The implication is that only by coding both MSB and LSB jointly can we hope to get close to this 75.8% limit. Hence, non-binary codes are essential to achieve acceptable performance.
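As a rough check of the kind of figure quoted above, the Monte Carlo sketch below quantizes correlated Gaussian samples at Alice and Bob with two bits each and scans the magnitude threshold for the largest joint mutual information at -3 dB SNR; the result can be compared against the 0.2217-bit value reported. The additive-noise channel model and the labeling (sign bit plus a magnitude bit) are my own assumptions and may differ from the paper's exact setup.

```python
import numpy as np

def quantize_2bit(x, t):
    """Two-bit label per sample: a sign bit and a magnitude bit |x| >= t."""
    return 2 * (x >= 0).astype(int) + (np.abs(x) >= t).astype(int)

def mutual_information(a, b, n=4):
    """Mutual information (bits) between two discrete label streams."""
    joint = np.bincount(a * n + b, minlength=n * n).reshape(n, n).astype(float)
    joint /= joint.sum()
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])))

rng = np.random.default_rng(7)
snr = 10 ** (-3 / 10)                                # -3 dB
alice = rng.normal(0.0, 1.0, 500_000)
bob = alice + rng.normal(0.0, 1.0 / np.sqrt(snr), alice.size)
scale = np.sqrt(1.0 + 1.0 / snr)                     # Bob's std relative to Alice's

best_mi, best_t = max(
    (mutual_information(quantize_2bit(alice, t), quantize_2bit(bob, t * scale)), t)
    for t in np.linspace(0.1, 2.0, 20))
print(f"best 2-bit mutual information ~ {best_mi:.3f} bits at threshold {best_t:.2f}")
```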
On Frequency Offset Estimation Using the iNET Preamble in Frequency Selective Fading Channels
2014-03-01
ASM fields; (bottom) the relationship between the indexes of the received samples r(n), the signal samples s(n), the preamble samples p(n) and the short... frequency offset estimators for SOQPSK-TG equipped with the iNET preamble and operating in ISI channels. Four of the five estimators examined here are... sync marker (ASM), and data bits (an LDPC codeword). The availability of a preamble introduces the possibility of data-aided synchronization in
Bandwidth-Efficient Communication through 225 MHz Ka-band Relay Satellite Channel
NASA Technical Reports Server (NTRS)
Downey, Joseph; Downey, James; Reinhart, Richard C.; Evans, Michael Alan; Mortensen, Dale John
2016-01-01
The communications and navigation space infrastructure of the National Aeronautics and Space Administration (NASA) consists of a constellation of relay satellites (called Tracking and Data Relay Satellites (TDRS)) and a global set of ground stations to receive and deliver data to researchers around the world from mission spacecraft throughout the solar system. Planning is underway to enhance and transform the infrastructure over the coming decade. Key to the upgrade will be the simultaneous and efficient use of relay transponders to minimize cost and operations while supporting science and exploration spacecraft. Efficient use of transponders necessitates bandwidth efficient communications to best use and maximize data throughput within the allocated spectrum. Experiments conducted with NASA's Space Communication and Navigation (SCaN) Testbed on the International Space Station provide a unique opportunity to evaluate advanced communication techniques, such as bandwidth-efficient modulations, in an operational flight system. Demonstrations of these new techniques in realistic flight conditions provide critical experience and reduce the risk of using these techniques in future missions. Efficient use of spectrum is enabled by using high-order modulations coupled with efficient forward error correction codes. This paper presents a high-rate, bandwidth-efficient waveform operating over the 225 MHz Ka-band service of the TDRS System (TDRSS). The testing explores the application of Gaussian Minimum Shift Keying (GMSK), 2/4/8-phase shift keying (PSK) and 16/32-amplitude PSK (APSK) providing over three bits-per-second-per-Hertz (3 b/s/Hz) modulation combined with various LDPC encoding rates to maximize throughput. With a symbol rate of 200 Mbaud, coded data rates of 1000 Mbps were tested in the laboratory and up to 800 Mbps over the TDRS 225 MHz channel. This paper will present on the high-rate waveform design, channel characteristics, performance results, compensation techniques for filtering and equalization, and architecture considerations going forward for efficient use of NASA's infrastructure.
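The throughput arithmetic behind such a waveform is simple: information rate equals symbol rate times bits per symbol times code rate. The short Python sketch below applies it to the 200 Mbaud symbol rate quoted above; the specific modulation/LDPC-rate pairings shown are illustrative assumptions, since the abstract does not list the exact combinations tested.

```python
# Back-of-the-envelope information rates for a 200 Mbaud waveform.
# The mode/code-rate pairings below are illustrative, not the experiment's actual set.
symbol_rate = 200e6                       # 200 Mbaud into the 225 MHz TDRS channel

def info_rate_mbps(bits_per_symbol, code_rate):
    """Information rate = symbol rate * bits/symbol * FEC code rate."""
    return symbol_rate * bits_per_symbol * code_rate / 1e6

for name, bps, rate in [("GMSK", 1, 7 / 8), ("8-PSK", 3, 7 / 8), ("32-APSK", 5, 8 / 9)]:
    print(f"{name:8s} {info_rate_mbps(bps, rate):7.1f} Mbps")
```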
Partial Treatment Requests and Underlying Motives of Applicants for Gender Affirming Interventions.
Beek, Titia F; Kreukels, Baudewijntje P C; Cohen-Kettenis, Peggy T; Steensma, Thomas D
2015-11-01
Historically, only individuals with a cross-gender identity who wanted to receive full treatment were eligible for "complete sex reassignment," consisting of feminizing/masculinizing hormone treatment and several surgical interventions including genital surgery (full treatment). Currently, it is unclear what motives underlie a request for hormones only, surgery only, or a combination of hormones and surgery (e.g., a mastectomy) but no genital surgery (partial treatment). The aims of this study were (i) to describe treatment requests of applicants at a specialized gender identity clinic in the Netherlands; and (ii) to explore the motives underlying a partial treatment request, including the role of (non-binary) gender identity. Information was collected on all 386 adults who applied for treatment at the Center of Expertise on Gender Dysphoria of the VU University Medical Center in Amsterdam, the Netherlands, in the year 2013. Treatment requests were available for 360 individuals: 233 natal men (64.7%) and 127 natal women (35.3%). Treatment requests were systematically collected during assessment. Individuals were classified as either desiring a full or partial treatment. The motives behind a partial treatment request were collected and categorized as well. The majority of applicants at our gender identity clinic requested full treatment. Among those who requested partial treatment, the most reported underlying motive was surgical risks/outcomes. Only a small number of applicants requested partial treatment to bring their body into alignment with their non-binary gender identity. It becomes clear that partial treatment is requested by a substantial number of applicants. This emphasizes the need for gender identity clinics to provide information about the medical possibilities and limitations, and careful introduction and evaluation of non-standard treatment options. © 2015 International Society for Sexual Medicine.
NASA Astrophysics Data System (ADS)
Almog, Assaf; Garlaschelli, Diego
2014-09-01
The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information.
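A toy illustration of the binary/non-binary relation described above, using synthetic one-factor returns rather than real market data: days on which most stocks move in the same direction tend to coincide with large aggregate increments.

```python
import numpy as np

# Toy illustration (synthetic one-factor returns, not real market data) of the
# binary/non-binary relation described above: the binary projection (signs of
# increments) carries information about the magnitude of the aggregate move.
rng = np.random.default_rng(0)
n_stocks, n_days = 100, 2000
market = rng.standard_normal(n_days)                      # common "market mode"
returns = 0.5 * market[None, :] + rng.standard_normal((n_stocks, n_days))

signs = np.sign(returns)                                   # binary projection
alignment = np.abs(signs.mean(axis=0))                     # directional agreement per day
aggregate = np.abs(returns.mean(axis=0))                   # magnitude of aggregate increment

print(np.corrcoef(alignment, aggregate)[0, 1])             # strongly positive
```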
Deutsch, Madeline B
2016-06-01
An accurate estimate of the number of transgender and gender nonconforming people is essential to inform policy and funding priorities and decisions. Historical reports of population sizes of 1 in 4000 to 1 in 50,000 have been based on clinical populations and likely underestimate the size of the transgender population. More recent population-based studies have found a 10- to 100-fold increase in population size. Studies that estimate population size should be population based, employ the two-step method to allow for collection of both gender identity and sex assigned at birth, and include measures to capture the range of transgender people with nonbinary gender identities.
NASA Astrophysics Data System (ADS)
Perez, Tracie Renea Conn
Over the past 15 years, there has been a growing interest in femtosatellites, a class of tiny satellites having mass less than 100 grams. Research groups from Peru, Spain, England, Canada, and the United States have proposed femtosat designs and novel mission concepts for them. In fact, Peru made history in 2013 by releasing the first - and still only - femtosat tracked from LEO. However, femtosatellite applications in interplanetary missions have yet to be explored in detail. An interesting operations concept would be for a space probe to release numerous femtosatellites into orbit around a planetary object of interest, thereby augmenting the overall data collection capability of the mission. A planetary probe releasing hundreds of femtosats could complete an in-situ, simultaneous 3D mapping of a physical property of interest, achieving scientific investigations not possible for one probe operating alone. To study the technical challenges associated with such a mission, a conceptual mission design is proposed where femtosats are deployed from a host satellite orbiting Titan. The conceptual mission objective is presented: to study Titan's dynamic atmosphere. Then, the design challenges are addressed in turn. First, any science payload measurements that the femtosats provide are only useful if their corresponding locations can be determined. Specifically, what is required is a method of position determination for femtosatellites operating beyond Medium Earth Orbit and therefore beyond the help of GPS. A technique is presented which applies Kalman filter techniques to Doppler shift measurements, allowing for orbit determination of the femtosats. Several case studies are presented demonstrating the usefulness of this approach. Second, due to the inherent power and computational limitations of a femtosatellite design, establishing a radio link between each chipsat and the mothersat will be difficult. To provide coding gain, a particular class of forward error correction (FEC), low-density parity-check (LDPC) codes, is recommended. A specific low-complexity encoder, and accompanying decoder, have been implemented in the open-source software radio library, GNU Radio. Simulation results demonstrating bit error rate (BER) improvement are presented. Hardware for implementing the LDPC methods in a benchtop test is described and future work on this topic is suggested. Third, the power and spatial constraints of femtosatellite designs likely restrict the payload to one or two sensors. Therefore, it is desired to extract as much useful scientific data as possible from secondary sources, such as radiometric data. Estimating the atmospheric density model from different measurement sources is simulated; results are presented. The overall goal for this effort is to advance the field of miniature spacecraft-based technology and to highlight the advantages of using femtosatellites in future planetary exploration missions. By addressing several subsystem design challenges in this context, such a femtosat mission concept is one step closer to being feasible.
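As a sketch of the Doppler-based estimation idea mentioned above (the thesis applies Kalman filtering to Doppler shift measurements for orbit determination; the reduced scalar example and the carrier frequency below are assumptions, not the thesis implementation):

```python
import numpy as np

# Toy sketch (assumptions, not the thesis implementation): infer a chipsat's
# radial velocity relative to the mothersat from noisy Doppler-shift
# measurements with a scalar Kalman filter, using f_doppler = -f_carrier * v_r / c.
C = 299_792_458.0          # m/s
F_CARRIER = 437e6          # Hz, assumed crosslink carrier frequency (illustrative)

def kalman_radial_velocity(doppler_hz, meas_std_hz=5.0, process_std=0.05, dt=1.0):
    """Estimate radial velocity [m/s] from a sequence of Doppler measurements."""
    h = -F_CARRIER / C                 # measurement model: f_d = h * v_r
    v_est, p = 0.0, 1e6                # initial state and variance
    estimates = []
    for z in doppler_hz:
        p += (process_std * dt) ** 2   # predict (random-walk velocity model)
        k = p * h / (h * h * p + meas_std_hz ** 2)   # Kalman gain
        v_est += k * (z - h * v_est)   # update with the Doppler residual
        p *= (1.0 - k * h)
        estimates.append(v_est)
    return np.array(estimates)

# Synthetic check: a true radial velocity of 1.2 km/s gives f_d of about -1.75 kHz.
true_v = 1200.0
meas = -F_CARRIER / C * true_v + np.random.default_rng(1).normal(0, 5.0, 200)
print(kalman_radial_velocity(meas)[-1])   # should settle near 1200 m/s
```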
Gender as a historical kind: a tale of two genders?
Godman, Marion
2018-01-01
Is there anything that members of each binary category of gender have in common? Even many non-essentialists find the lack of unity within a gender worrying, as it undermines the basis for a common political agenda for women. One promising proposal for achieving unity is by means of a shared historical lineage of cultural reproduction with past binary models of gender (e.g. Bach in Ethics 122:231-272, 2012). I demonstrate how such an account is likely to take on board different binary and also non-binary systems of gender. This implies that all individuals construed as members of the category "women" are in fact not members of the same historical kind after all! I then consider different possible means of modifying the account but conclude negatively: the problem runs deeper than has been appreciated thus far.
YouTube as a Site of Counternarratives to Transnormativity.
Miller, Jordan F
2018-06-12
In this article, I discuss how transnormativity can be disrupted by not exaggerating the physical aspects of medical transition and by engaging in conversations around consequential sources of tension within gender and sexual minority communities, namely linguistic understandings of trans and gendered racism within white, gay cisgender communities toward trans communities of color. This study is based on qualitative interviews with six trans YouTubers; these interviews were complemented by analyses of these YouTubers' videos and select comments on these videos. With this exploratory study, I aim to provide nuance to existing lesbian, gay, bisexual and trans (LGBT) YouTube literature through highlighting the experiences of non-binary trans vloggers and trans vloggers of color, regardless of medical transition status, as well as contribute a transfeminist analysis to ongoing conversations around transnormativity within sociology, cultural and media studies, and queer and trans theory.
Rotational properties of the binary and non-binary populations in the trans-Neptunian belt
NASA Astrophysics Data System (ADS)
Thirouin, A.; Noll, K. S.; Ortiz, J. L.; Morales, N.
2014-09-01
We present results for the short-term variability of binary trans-Neptunian objects (BTNOs). We performed CCD photometric observations using the 3.58 m Telescopio Nazionale Galileo (TNG), the 1.5 m Sierra Nevada Observatory (OSN) telescope, and the 1.23 m Centro Astronómico Hispano Alemán (CAHA) telescope at Calar Alto Observatory. We present results based on five years of observations and report the short-term variability of six BTNOs. Our sample contains three classical objects: (174567) 2003 MW12, or Varda, (120347) 2004 SB60, or Salacia, and 2002 VT130; one detached disk object: (229762) 2007 UK126; and two resonant objects: (341520) 2007 TY430 and (38628) 2000 EB173, or Huya. For each target, possible rotational periods and/or photometric amplitudes are reported. We also derived some physical properties from their light curves, such as density, primary and secondary sizes, and albedo. We compiled and analyzed a vast light curve database for TNOs including centaurs to determine the light-curve amplitude and spin frequency distributions for the binary and non-binary populations. The mean rotational periods, from the Maxwellian fits to the frequency distributions, are 8.63 ± 0.52 h for the entire sample, 8.37 ± 0.58 h for the sample without the binary population, and 10.11 ± 1.19 h for the binary population alone. Because the centaurs are collisionally more evolved, their rotational periods might not be so primordial. We computed a mean rotational period, from the Maxwellian fit, of 8.86 ± 0.58 h for the sample without the centaur population, and of 8.64 ± 0.67 h considering a sample without the binary and the centaur populations. According to this analysis, regular TNOs spin faster than binaries, which is compatible with the tidal interaction of the binaries. Finally, we examined possible formation models for several systems studied in this work and by our team in previous papers. Appendix A is available in electronic form at http://www.aanda.org. Full Table 1 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/569/A3
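The mean periods quoted above come from Maxwellian fits to the spin-frequency distributions; a small sketch of that procedure under an assumed convention (maximum-likelihood scale estimate, mean period taken as the inverse of the mean fitted frequency; the input values are placeholders, not the paper's data):

```python
import numpy as np

# Sketch under an assumed convention (not necessarily the paper's): fit a
# Maxwellian to a sample of rotation frequencies (cycles/day) by maximum
# likelihood and report the period corresponding to the mean fitted frequency.
def maxwellian_mean_period(periods_hours):
    freqs = 24.0 / np.asarray(periods_hours)          # cycles/day
    a = np.sqrt(np.mean(freqs**2) / 3.0)              # MLE of the Maxwellian scale
    mean_freq = 2.0 * a * np.sqrt(2.0 / np.pi)        # mean of a Maxwellian distribution
    return 24.0 / mean_freq                           # hours

# Toy input; real results (e.g. 8.63 +/- 0.52 h for the full sample) come from
# the compiled TNO/centaur light-curve database, not from these numbers.
print(maxwellian_mean_period([6.5, 7.9, 8.4, 9.3, 10.8, 12.1]))
```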
NASA Tech Briefs, October 2008
NASA Technical Reports Server (NTRS)
2008-01-01
Topics covered include: Control Architecture for Robotic Agent Command and Sensing; Algorithm for Wavefront Sensing Using an Extended Scene; CO2 Sensors Based on Nanocrystalline SnO2 Doped with CuO; Improved Airborne System for Sensing Wildfires; VHF Wide-Band, Dual-Polarization Microstrip-Patch Antenna; Onboard Data Processor for Change-Detection Radar Imaging; Using LDPC Code Constraints to Aid Recovery of Symbol Timing; System for Measuring Flexing of a Large Spaceborne Structure; Integrated Formation Optical Communication and Estimation System; Making Superconducting Welds between Superconducting Wires; Method for Thermal Spraying of Coatings Using Resonant-Pulsed Combustion; Coating Reduces Ice Adhesion; Hybrid Multifoil Aerogel Thermal Insulation; SHINE Virtual Machine Model for In-flight Updates of Critical Mission Software; Mars Image Collection Mosaic Builder; Providing Internet Access to High-Resolution Mars Images; Providing Internet Access to High-Resolution Lunar Images; Expressions Module for the Satellite Orbit Analysis Program Virtual Satellite; Small-Body Extensions for the Satellite Orbit Analysis Program (SOAP); Scripting Module for the Satellite Orbit Analysis Program (SOAP); XML-Based SHINE Knowledge Base Interchange Language; Core Technical Capability Laboratory Management System; MRO SOW Daily Script; Tool for Inspecting Alignment of Twinaxial Connectors; An ATP System for Deep-Space Optical Communication; Polar Traverse Rover Instrument; Expert System Control of Plant Growth in an Enclosed Space; Detecting Phycocyanin-Pigmented Microbes in Reflected Light; DMAC and NMP as Electrolyte Additives for Li-Ion Cells; Mass Spectrometer Containing Multiple Fixed Collectors; Waveguide Harmonic Generator for the SIM; Whispering Gallery Mode Resonator with Orthogonally Reconfigurable Filter Function; Stable Calibration of Raman Lidar Water-Vapor Measurements; Bimaterial Thermal Compensators for WGM Resonators; Root Source Analysis/ValuStream[Trade Mark] - A Methodology for Identifying and Managing Risks; Ensemble: an Architecture for Mission-Operations Software; Object Recognition Using Feature-and Color-Based Methods; On-Orbit Multi-Field Wavefront Control with a Kalman Filter; and The Interplanetary Overlay Networking Protocol Accelerator.
NASA Tech Briefs, December 2009
NASA Technical Reports Server (NTRS)
2009-01-01
Topics include: A Deep Space Network Portable Radio Science Receiver; Detecting Phase Boundaries in Hard-Sphere Suspensions; Low-Complexity Lossless and Near-Lossless Data Compression Technique for Multispectral Imagery; Very-Long-Distance Remote Hearing and Vibrometry; Using GPS to Detect Imminent Tsunamis; Stream Flow Prediction by Remote Sensing and Genetic Programming; Pilotless Frame Synchronization Using LDPC Code Constraints; Radiometer on a Chip; Measuring Luminescence Lifetime With Help of a DSP; Modulation Based on Probability Density Functions; Ku Telemetry Modulator for Suborbital Vehicles; Photonic Links for High-Performance Arraying of Antennas; Reconfigurable, Bi-Directional Flexfet Level Shifter for Low-Power, Rad-Hard Integration; Hardware-Efficient Monitoring of I/O Signals; Video System for Viewing From a Remote or Windowless Cockpit; Spacesuit Data Display and Management System; IEEE 1394 Hub With Fault Containment; Compact, Miniature MMIC Receiver Modules for an MMIC Array Spectrograph; Waveguide Transition for Submillimeter-Wave MMICs; Magnetic-Field-Tunable Superconducting Rectifier; Bonded Invar Clip Removal Using Foil Heaters; Fabricating Radial Groove Gratings Using Projection Photolithography; Gratings Fabricated on Flat Surfaces and Reproduced on Non-Flat Substrates; Method for Measuring the Volume-Scattering Function of Water; Method of Heating a Foam-Based Catalyst Bed; Small Deflection Energy Analyzer for Energy and Angular Distributions; Polymeric Bladder for Storing Liquid Oxygen; Pyrotechnic Simulator/Stray-Voltage Detector; Inventions Utilizing Microfluidics and Colloidal Particles; RuO2 Thermometer for Ultra-Low Temperatures; Ultra-Compact, High-Resolution LADAR System for 3D Imaging; Dual-Channel Multi-Purpose Telescope; Objective Lens Optimized for Wavefront Delivery, Pupil Imaging, and Pupil Ghosting; CMOS Camera Array With Onboard Memory; Quickly Approximating the Distance Between Two Objects; Processing Images of Craters for Spacecraft Navigation; Adaptive Morphological Feature-Based Object Classifier for a Color Imaging System; Rover Slip Validation and Prediction Algorithm; Safety and Quality Training Simulator; Supply-Chain Optimization Template; Algorithm for Computing Particle/Surface Interactions; Cryogenic Pupil Alignment Test Architecture for Aberrated Pupil Images; and Thermal Transport Model for Heat Sink Design.
Hicks, Amanda; Hogan, William R.; Rutherford, Michael; Malin, Bradley; Xie, Mengjun; Fellbaum, Christiane; Yin, Zhijun; Fabbri, Daniel; Hanna, Josh; Bian, Jiang
2015-01-01
The Institute of Medicine (IOM) recommends that health care providers collect data on gender identity. If these data are to be useful, they should utilize terms that characterize gender identity in a manner that is 1) sensitive to transgender and gender non-binary individuals (trans* people) and 2) semantically structured to render associated data meaningful to the health care professionals. We developed a set of tools and approaches for analyzing Twitter data as a basis for generating hypotheses on language used to identify gender and discuss gender-related issues across regions and population groups. We offer sample hypotheses regarding regional variations in the usage of certain terms such as ‘genderqueer’, ‘genderfluid’, and ‘neutrois’ and their usefulness as terms on intake forms. While these hypotheses cannot be directly validated with Twitter data alone, our data and tools help to formulate testable hypotheses and design future studies regarding the adequacy of gender identification terms on intake forms. PMID:26958196
Use of limited data to construct Bayesian networks for probabilistic risk assessment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groth, Katrina M.; Swiler, Laura Painton
2013-03-01
Probabilistic Risk Assessment (PRA) is a fundamental part of safety/quality assurance for nuclear power and nuclear weapons. Traditional PRA very effectively models complex hardware system risks using binary probabilistic models. However, traditional PRA models are not flexible enough to accommodate non-binary soft-causal factors, such as digital instrumentation & control, passive components, aging, common cause failure, and human errors. Bayesian Networks offer the opportunity to incorporate these risks into the PRA framework. This report describes the results of an early career LDRD project titled "Use of Limited Data to Construct Bayesian Networks for Probabilistic Risk Assessment". The goal of the work was to establish the capability to develop Bayesian Networks from sparse data, and to demonstrate this capability by producing a data-informed Bayesian Network for use in Human Reliability Analysis (HRA) as part of nuclear power plant Probabilistic Risk Assessment (PRA). This report summarizes the research goal and major products of the research.
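One generic way to inform a Bayesian-network node from sparse data is a conjugate Beta update of each conditional probability; the sketch below is only an illustration of that idea, not the specific methodology developed in the LDRD report.

```python
# Illustrative sketch only: place a Beta prior on a conditional-probability-table
# entry and update it with the few observed counts. This is a generic conjugate
# update, not the report's actual data-informed HRA model.
def beta_posterior_mean(failures, trials, prior_a=0.5, prior_b=0.5):
    """Posterior mean of P(error) under a Jeffreys Beta(0.5, 0.5) prior."""
    return (prior_a + failures) / (prior_a + prior_b + trials)

# Hypothetical CPT entry P(human error | high stress) from only 2 errors in 15 tasks:
print(beta_posterior_mean(failures=2, trials=15))   # ~0.156
```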
Arayasirikul, Sean; Chen, Yea-Hung; Jin, Harry; Wilson, Erin
2015-01-01
Respondent-driven sampling (RDS) peer referral has been proven to be an effective recruitment method for hard-to-reach populations; however, its application in diverse populations is limited. Recruitment occurred in two phases: RDS-only followed by development and implementation of an online social network strategy in combination with RDS peer referral (RDS+SNS). Compared to RDS-only, RDS+SNS reached a sample that was younger (χ2=9.19, P = .03), more likely to identify with a non-binary gender identity (χ2=10.4247, P = .03), with less housing instability (50.5% vs. 68.6%, χ2=9.0038, P = .002) and less sex work (19.7% vs. 31.4%, χ2=5.0798, P = .02). Additionally, we describe lessons learned as a result of implementing our online social network strategy. Our findings underscore the importance of integrating Internet-driven strategies to meet challenges in sample diversity and recruitment of young transwomen. PMID:26499337
High speed and adaptable error correction for megabit/s rate quantum key distribution.
Dixon, A R; Sato, H
2014-12-02
Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.
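For intuition on how reconciliation efficiency maps to a fraction of the ideal key rate, here is a sketch under standard asymptotic BB84-style assumptions; the QBER and efficiency values are illustrative, and the paper's exact security model may differ.

```python
import math

# Sketch under standard asymptotic BB84-style assumptions (not the paper's exact
# security analysis): the error-correction (reconciliation) efficiency f leaks
# f*h2(QBER) bits per sifted bit, reducing the secret key rate below the ideal.
def h2(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def key_rate_fraction(qber, f):
    ideal = 1.0 - 2.0 * h2(qber)            # perfect error correction (f = 1)
    real = 1.0 - h2(qber) - f * h2(qber)    # leakage of f*h2(QBER) during reconciliation
    return real / ideal

print(key_rate_fraction(qber=0.05, f=1.10))   # roughly 0.93 for these example values
```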
A study of binary Kuiper belt objects
NASA Astrophysics Data System (ADS)
Kern, Susan Diane
2006-06-01
About 10^5 bodies larger than 100 km in diameter (Jewitt 1998) reside in the Kuiper Belt, beyond the orbit of Neptune. Since 1992 observational surveys have discovered over one thousand of these objects, believed to be fossil remnants of events that occurred nearly 4.5 billion years ago. Sixteen of these objects are currently known to be binaries, and many more are expected to be discovered. As part of the Deep Ecliptic Survey (DES) I have helped catalog nearly one third of the known Kuiper Belt object (KBO) population, and used that database for further physical studies. Recovery observations for dynamical studies of newly discovered objects with the Magellan telescopes and a high resolution imager, MagIC, revealed three binaries, 88611 (2001QT297), 2003QY90, and 2005EO304. One binary was found in the discovery observations, 2003UN284. Lightcurve measurements of these, and other non-binary KBOs, were obtained to look for unique rotational characteristics. Eleven of thirty-three objects, excluding the binaries, were found to have measurable variability. One of these objects, 2002GW32 has a particularly large amplitude (> 1 magnitude) of variability, and 2002GP32 has a relatively short (~3.3 hours, single-peaked) lightcurve. Among the binary population all the observed objects showed some level of variation. The secondary of 88611 was fit with a single-peaked period of 5.5±0.02 hours while the primary component appears to be non-variable above the measurement errors (0.05 magnitudes). Neither component appears to be color variable. The components of 2003QY90 are both highly variable yielding single-peaked rotation periods of 3.5±1.1 and 7.2±2.9 hours with amplitudes of 0.34±0.06 and 0.90±0.18 magnitudes, respectively. The rotation periods are comparable to those of other non-binary KBOs although distinct from that of an identified contact binary. Orbits and partial orbits for Kuiper belt binaries (KBBs) show a wide range of eccentricities, and an increasing number of binaries with decreasing binary semi-major axis. These characteristics exclude the formation models proposed by Funato et al. (2003) and Weidenschilling (2002), respectively. Conversely, the formation models of Astakhov et al. (2005) and Goldreich et al. (2002) appear to describe the observations, at least in part. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
Mastrodicasa, Domenico; Elgavish, Gabriel A; Schoepf, U Joseph; Suranyi, Pal; van Assen, Marly; Albrecht, Moritz H; De Cecco, Carlo N; van der Geest, Rob J; Hardy, Rayphael; Mantini, Cesare; Griffith, L Parkwood; Ruzsics, Balazs; Varga-Szemes, Akos
2018-02-15
Binary threshold-based quantification techniques ignore myocardial infarct (MI) heterogeneity, yielding substantial misquantification of MI. To assess the technical feasibility of MI quantification using percent infarct mapping (PIM), a prototype nonbinary algorithm, in patients with suspected MI. Prospective cohort. POPULATION: Patients (n = 171) with suspected MI referred for cardiac MRI. Inversion recovery balanced steady-state free-precession for late gadolinium enhancement (LGE) and modified Look-Locker inversion recovery (MOLLI) T1-mapping on a 1.5T system. Infarct volume (IV) and infarct fraction (IF) were quantified by two observers based on manual delineation, binary approaches (2-5 standard deviations [SD] and full-width at half-maximum [FWHM] thresholds) in LGE images, and by applying the PIM algorithm in T1 and LGE images (PIM-T1; PIM-LGE). IV and IF were analyzed using repeated measures analysis of variance (ANOVA). Agreement between the approaches was determined with Bland-Altman analysis. Interobserver agreement was assessed by intraclass correlation coefficient (ICC) analysis. MI was observed in 89 (54.9%) patients, and 185 (38%) short-axis slices. IF with 2, 3, 4, 5SDs and FWHM techniques were 15.7 ± 6.6, 13.4 ± 5.6, 11.6 ± 5.0, 10.8 ± 5.2, and 10.0 ± 5.2%, respectively. The 5SD and FWHM techniques had the best agreement with manual IF (9.9 ± 4.8%) determination (bias 1.0 and 0.2%; P = 0.1426 and P = 0.8094, respectively). The 2SD and 3SD algorithms significantly overestimated manual IF (9.9 ± 4.8%; both P < 0.0001). PIM-LGE measured significantly lower IF (7.8 ± 3.7%) compared to manual values (P < 0.0001). PIM-LGE, however, showed the best agreement with the PIM-T1 reference (7.6 ± 3.6%, P = 0.3156). Interobserver agreement was rated good to excellent for IV (ICCs between 0.727-0.820) and fair to good for IF (0.589-0.736). The application of the PIM-LGE technique for MI quantification in patients is feasible. PIM-LGE, with its ability to account for voxelwise MI content, provides significantly smaller IF than any thresholding technique and shows excellent agreement with the T1-based reference. Level of Evidence: 2. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2018. © 2018 International Society for Magnetic Resonance in Medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoban, Matty J.; Department of Computer Science, University of Oxford, Wolfson Building, Parks Road, Oxford OX1 3QD; Wallman, Joel J.
We consider general settings of Bell inequality experiments with many parties, where each party chooses from a finite number of measurement settings each with a finite number of outcomes. We investigate the constraints that Bell inequalities place upon the correlations possible in local hidden variable theories using a geometrical picture of correlations. We show that local hidden variable theories can be characterized in terms of limited computational expressiveness, which allows us to characterize families of Bell inequalities. The limited computational expressiveness for many settings (each with many outcomes) generalizes previous results about the many-party situation each with a choice of two possible measurements (each with two outcomes). Using this computational picture we present generalizations of the Popescu-Rohrlich nonlocal box for many parties and nonbinary inputs and outputs at each site. Finally, we comment on the effect of preprocessing on measurement data in our generalized setting and show that it becomes problematic outside of the binary setting, in that it allows local hidden variable theories to simulate maximally nonlocal correlations such as those of these generalized Popescu-Rohrlich nonlocal boxes.
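For reference, a small sketch of the standard two-party, binary Popescu-Rohrlich box that the paper generalizes: its outputs satisfy a XOR b = x AND y, so it wins the CHSH game on every input pair, whereas deterministic local-hidden-variable strategies win on at most 3 of the 4 input pairs.

```python
import itertools

# Standard binary PR-box illustration (the paper's nonbinary, many-party boxes
# generalize this): local deterministic strategies reach at most a 3/4 winning
# fraction in the CHSH game, while the PR box wins with probability 1.
def chsh_win_fraction(strategy_a, strategy_b):
    """Fraction of input pairs (x, y) for which a(x) XOR b(y) == x AND y."""
    wins = sum((strategy_a[x] ^ strategy_b[y]) == (x & y)
               for x in (0, 1) for y in (0, 1))
    return wins / 4

best_lhv = max(chsh_win_fraction(a, b)
               for a in itertools.product((0, 1), repeat=2)
               for b in itertools.product((0, 1), repeat=2))
pr_box = 1.0   # PR-box outputs are correlated so that a XOR b = x AND y always holds
print(best_lhv, pr_box)   # 0.75 vs 1.0
```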
Development and characterisation of FPGA modems using forward error correction for FSOC
NASA Astrophysics Data System (ADS)
Mudge, Kerry A.; Grant, Kenneth J.; Clare, Bradley A.; Biggs, Colin L.; Cowley, William G.; Manning, Sean; Lechner, Gottfried
2016-05-01
In this paper we report on the performance of a free-space optical communications (FSOC) modem implemented in FPGA, with data rate variable up to 60 Mbps. To combat the effects of atmospheric scintillation, a 7/8-rate low-density parity-check (LDPC) forward error correction code is implemented along with custom bit and frame synchronisation and a variable-length interleaver. We report on the systematic performance evaluation of an optical communications link employing the FPGA modems using a laboratory test-bed to simulate the effects of atmospheric turbulence. Log-normal fading is imposed onto the transmitted free-space beam using a custom LabVIEW program and an acousto-optic modulator. The scintillation index, transmitted optical power and the scintillation bandwidth can all be independently varied, allowing testing over a wide range of optical channel conditions. In particular, bit-error-ratio (BER) performance for different interleaver lengths is investigated as a function of the scintillation bandwidth. The laboratory results are compared to field measurements over 1.5 km.
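A minimal sketch of the log-normal fading emulation described above, under assumed parameters: irradiance samples are drawn with unit mean and a chosen scintillation index (the normalized irradiance variance), which is how such fades are commonly generated in software.

```python
import numpy as np

# Sketch of a software log-normal fading generator (assumed parameters, not the
# paper's LabVIEW implementation): irradiance I with E[I] = 1 and
# Var[I]/E[I]^2 equal to the chosen scintillation index.
def lognormal_fading(n_samples, scint_index, rng=np.random.default_rng(0)):
    sigma2 = np.log(1.0 + scint_index)          # log-irradiance variance
    return rng.lognormal(mean=-sigma2 / 2.0, sigma=np.sqrt(sigma2), size=n_samples)

fades = lognormal_fading(100_000, scint_index=0.4)
print(fades.mean(), fades.var())                # ~1.0 and ~0.4
```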
Establishment and characterization of a unique 1 microm diameter liver-derived progenitor cell line.
Aravalli, Rajagopal N; Behnan Sahin, M; Cressman, Erik N K; Steer, Clifford J
2010-01-01
Liver-derived progenitor cells (LDPCs) are recently identified novel stem/progenitor cells from healthy, unmanipulated adult rat livers. They are distinct from other known liver stem/progenitor cells such as the oval cells. In this study, we have generated a LDPC cell line RA1 by overexpressing the simian virus 40 (SV40) large T antigen (TAg) in primary LDPCs. This cell line was propagated continuously for 55 passages in culture, after which it became senescent. Interestingly, following transformation with SV40 TAg, LDPCs decreased in size significantly and the propagating cells measured 1 microm in diameter. RA1 cells proliferated in vitro with a doubling time of 5-7 days, and expressed cell surface markers of LDPCs. In this report, we describe the characterization of this novel progenitor cell line that might serve as a valuable model to study liver cell functions and stem cell origin of liver cancers. Copyright 2009 Elsevier Inc. All rights reserved.
Mental Health Disparities Among Canadian Transgender Youth.
Veale, Jaimie F; Watson, Ryan J; Peter, Tracey; Saewyc, Elizabeth M
2017-01-01
This study documented the prevalence of mental health problems among transgender youth in Canada and made comparisons with population-based studies. This study also compared gender identity subgroups and age subgroups (14-18 and 19-25). A nonprobability sample of 923 transgender youth from Canada completed an online survey. Participants were recruited through community organizations, health care settings, social media, and researchers' networks. Mental health measures were drawn from the British Columbia Adolescent Health Survey and the Canadian Community Health Survey. Transgender youth had a higher risk of reporting psychological distress, self-harm, major depressive episodes, and suicide. For example, 65% of transgender 14- to 18-year olds seriously considered suicide in the past year compared with 13% in the British Columbia Adolescent Health Survey, and only a quarter of participants reported their mental health was good or excellent. Transgender boys/men and nonbinary youth were most likely to report self-harm and overall mental health remained stable across age subgroups. Although a notable minority of transgender youth did not report negative health outcomes, this study shows the mental health disparities faced by transgender youth in Canada are considerable. Copyright © 2016 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
The Local Stellar Velocity Field via Vector Spherical Harmonics
NASA Technical Reports Server (NTRS)
Makarov, V. V.; Murphy, D. W.
2007-01-01
We analyze the local field of stellar tangential velocities for a sample of 42,339 nonbinary Hipparcos stars with accurate parallaxes, using a vector spherical harmonic formalism. We derive simple relations between the parameters of the classical linear model (Ogorodnikov-Milne) of the local systemic field and low-degree terms of the general vector harmonic decomposition. Taking advantage of these relationships, we determine the solar velocity with respect to the local stars of (V(sub X), V(sub Y), V(sub Z)) = (10.5, 18.5, 7.3) +/- 0.1 km s(exp -1), not corrected for the asymmetric drift with respect to the local standard of rest. If only stars more distant than 100 pc are considered, the peculiar solar motion is (V(sub X), V(sub Y), V(sub Z)) = (9.9, 15.6, 6.9) +/- 0.2 km s(exp -1). The adverse effects of harmonic leakage, which occurs between the reflex solar motion represented by the three electric vector harmonics in the velocity space and higher degree harmonics in the proper-motion space, are eliminated in our analysis by direct subtraction of the reflex solar velocity in its tangential components for each star...
NASA Astrophysics Data System (ADS)
Liu, Wei; Chen, Shu-Ming; Zhang, Jian; Wu, Chun-Wang; Wu, Wei; Chen, Ping-Xing
2015-03-01
It is widely believed that Shor's factoring algorithm provides a driving force to boost quantum computing research. However, a serious obstacle to its binary implementation is the large number of quantum gates. Non-binary quantum computing is an efficient way to reduce the required number of elemental gates. Here, we propose optimization schemes for Shor's algorithm implementation and take a ternary version for factorizing 21 as an example. The optimized factorization is achieved by a two-qutrit quantum circuit, which consists of only two single-qutrit gates and one ternary controlled-NOT gate. This two-qutrit quantum circuit is then encoded into the nine lower vibrational states of an ion trapped in a weakly anharmonic potential. Optimal control theory (OCT) is employed to derive the manipulation electric field for transferring the encoded states. The ternary Shor's algorithm can be implemented in one single step. Numerical simulation results show that the accuracy of the state transformations is about 0.9919. Project supported by the National Natural Science Foundation of China (Grant No. 61205108) and the High Performance Computing (HPC) Foundation of National University of Defense Technology, China.
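For context, the classical number theory that turns a found period into the factors of 21 can be sketched as follows; this is only the conventional post-processing step of Shor's algorithm (the quantum circuit performs the period finding), not the ternary implementation described above.

```python
from math import gcd

# Classical illustration for N = 21: brute-force the order r of a mod N (the
# step the quantum circuit accelerates), then extract factors via gcd.
def shor_classical_postprocessing(N, a):
    g = gcd(a, N)
    if g != 1:
        return g, N // g                        # the base already shares a factor
    r, x = 1, a % N
    while x != 1:                               # find the multiplicative order of a mod N
        x = (x * a) % N
        r += 1
    if r % 2:                                   # odd order: would need another base
        return None
    y = pow(a, r // 2, N)
    p, q = gcd(y - 1, N), gcd(y + 1, N)
    if 1 < p < N:
        return p, N // p
    return (q, N // q) if 1 < q < N else None

print(shor_classical_postprocessing(21, 2))     # order of 2 mod 21 is 6 -> factors 7 and 3
```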
Man, You Might Look Like a Woman—If a Child Is Next to You
Brielmann, Aenne A.; Gaetano, Justin; Stolarova, Margarita
2015-01-01
Gender categorization seems prone to a pervasive bias: Persons about whom null or ambiguous gender information is available are more often considered male than female. Our study assessed whether such a male-bias is present in non-binary choice tasks and whether it can be altered by social contextual information. Participants were asked to report their perception of an adult figure’s gender in three context conditions: (1) alone, (2) passively besides a child, or (3) actively helping a child (n = 10 pictures each). The response options male, female and I don’t know were provided. As a result, participants attributed male gender to most figures and rarely used the I don’t know option in all conditions, but were more likely to attribute female gender to the same adult figure if it was shown with a child. If such social contextual information was provided in the first rather than the second block of the experiment, subsequent female gender attributions increased for adult figures shown alone. Additionally, female gender attributions for actively helping relative to passive adults were made more often. Thus, we provide strong evidence that gender categorization can be altered by social context even if the subject of gender categorization remains identical. PMID:26435759
Barr, Sebastian M; Budge, Stephanie L; Adelson, Jill L
2016-01-01
This study examined transgender community belongingness as a mediator between strength of transgender identity and well-being. A total of 571 transgender adults (n = 209 transgender women, n = 217 transgender men, and n = 145 nonbinary-identified individuals) completed an online survey assessing transgender community belongingness, strength of transgender identity (operationalized as the extent to which a person self-categorizes their identity as transgender and the extent to which they believe their gender transition to be important to their self-definition), and well-being (using measures of self-esteem, satisfaction with life, and psychological well-being). Structural equation modeling was used to analyze the data. When controlling for participants' income, age, and stage of gender transition, transgender community belongingness fully mediated the relationship between strength of transgender identity and well-being. Strength of transgender identity was indirectly and positively related to well-being through community belongingness, but was not directly related to well-being. Results suggest that transgender community belongingness is an important construct in the mental health of transgender people. The strength of a person's transgender identity also appears to be a significant construct in transgender people's well-being via its relationship with transgender community belongingness. Implications of the findings are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Kim, Sungjin; Jinich, Adrián; Aspuru-Guzik, Alán
2017-04-24
We propose a multiple descriptor multiple kernel (MultiDK) method for efficient molecular discovery using machine learning. We show that the MultiDK method improves both the speed and accuracy of molecular property prediction. We apply the method to the discovery of electrolyte molecules for aqueous redox flow batteries. Using multiple-type (as opposed to single-type) descriptors, we obtain more relevant features for machine learning. Following the principle of "wisdom of the crowds", the combination of multiple-type descriptors significantly boosts prediction performance. Moreover, by employing multiple kernels (more than one kernel function for a set of the input descriptors), MultiDK exploits nonlinear relations between molecular structure and properties better than a linear regression approach. The multiple kernels consist of a Tanimoto similarity kernel and a linear kernel for a set of binary descriptors and a set of nonbinary descriptors, respectively. Using MultiDK, we achieve an average performance of r^2 = 0.92 with a test set of molecules for solubility prediction. We also extend MultiDK to predict pH-dependent solubility and apply it to a set of quinone molecules with different ionizable functional groups to assess their performance as flow battery electrolytes.
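A simplified sketch of the multiple-kernel idea described above (a Tanimoto kernel on binary fingerprints plus a linear kernel on non-binary descriptors, combined in kernel ridge regression); the data, equal kernel weighting, and regularization here are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Simplified MultiDK-style sketch: Tanimoto kernel on binary fingerprints plus
# a linear kernel on continuous descriptors, summed and used in kernel ridge
# regression. All data and hyperparameters below are placeholders.
def tanimoto_kernel(A, B):
    """A, B: {0,1} matrices of shape (n_samples, n_bits)."""
    dot = A @ B.T
    na, nb = A.sum(1)[:, None], B.sum(1)[None, :]
    return dot / (na + nb - dot)

def linear_kernel(X, Y):
    return X @ Y.T

def multidk_fit_predict(Fp_tr, X_tr, y_tr, Fp_te, X_te, lam=1e-2):
    K_tr = tanimoto_kernel(Fp_tr, Fp_tr) + linear_kernel(X_tr, X_tr)
    K_te = tanimoto_kernel(Fp_te, Fp_tr) + linear_kernel(X_te, X_tr)
    alpha = np.linalg.solve(K_tr + lam * np.eye(len(y_tr)), y_tr)   # kernel ridge
    return K_te @ alpha

rng = np.random.default_rng(0)
Fp = rng.integers(0, 2, (40, 64))            # toy binary fingerprints
X = rng.standard_normal((40, 5))             # toy continuous descriptors
y = X[:, 0] + 0.1 * Fp.sum(1)                # synthetic "solubility" target
print(multidk_fit_predict(Fp[:30], X[:30], y[:30], Fp[30:], X[30:])[:3])
```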
Bednarczyk, Robert A; Whitehead, Jennifer L; Stephenson, Rob
2017-06-01
While national human papillomavirus (HPV) vaccination estimates exist by sex, little is known about HPV vaccination rates by gender identity. We conducted a self-administered, anonymous online cross-sectional survey, with recruitment through Facebook ads, of lesbian, gay, bisexual, and transgender individuals in rural areas of the US. We compared HPV vaccine recommendation and uptake by self-reported sex assigned at birth and current gender identity. Six hundred sixty respondents were age eligible for HPV vaccination: 84% reported gender identity aligned with their sex assigned at birth, while 10% reported gender identity the differed from their sex assigned at birth; an additional 6% reported non-binary gender identity. Only 14% of male sex assigned at birth and 44% of female sex assigned at birth received HPV vaccine, similar to estimates by current gender identity. Transgender respondents' HPV vaccination experience mirrored that of cisgender respondents with regard to sex assigned at birth. Providers may base HPV vaccine recommendations on individuals' sex assigned at birth, which may impact transgender individuals' vaccine coverage. Future HPV vaccine uptake studies should account for gender identity. With sex-specific catch-up HPV vaccination recommendations, the role of gender identity on provider recommendation and reimbursement needs to be addressed. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Katz-Wise, Sabra L; Reisner, Sari L; Hughto, Jaclyn White; Keo-Meier, Colton L
2016-01-01
This study characterized sexual orientation identities and sexual fluidity in attractions in a community-based sample of self-identified transgender and gender-nonconforming adults in Massachusetts. Participants were recruited in 2013 using bimodal methods (online and in person) to complete a one-time, Web-based quantitative survey that included questions about sexual orientation identity and sexual fluidity. Multivariable logistic regression models estimated adjusted risk ratios (aRRs) and 95% confidence intervals (95% CIs) to examine the correlates of self-reported changes in attractions ever in lifetime among the whole sample (n = 452) and after transition among those who reported social gender transition (n = 205). The sample endorsed diverse sexual orientation identities: 42.7% queer, 19.0% other nonbinary, 15.7% bisexual, 12.2% straight, and 10.4% gay/lesbian. Overall, 58.2% reported having experienced changes in sexual attractions in their lifetime. In adjusted models, trans masculine individuals were more likely than trans feminine individuals to report sexual fluidity in their lifetime (aRR = 1.69; 95% CI = 1.34, 2.12). Among those who transitioned, 64.6% reported a change in attractions posttransition, and trans masculine individuals were less likely than trans feminine individuals to report sexual fluidity (aRR = 0.44; 95% CI = 0.28, 0.69). Heterogeneity of sexual orientation identities and sexual fluidity in attractions are the norm rather than the exception among gender minority people.
Lindroth, Malin
2016-12-01
With a focus on sexual health and rights, this study describes how transgender people experience meetings with health care professionals. Transgender people face prejudice and discrimination worldwide. Little is known of their experiences in sexual health-promoting settings. Within a descriptive design, 20 persons aged 18-74 and identifying as transgender and nonbinary were interviewed. The results were analysed with constructivist grounded theory. Disrespect among health care professionals is the core category connected to the experiences in the result; transgender people experience estrangement, expectations and eviction in different sexual health-promoting settings. Transgender knowledge needs to be increased in general, in both specialised transgender health care and many other health care settings, to prevent transgender peoples' experiences of estrangement. Moreover, an increased knowledge of, and respect for, sexual health and rights is needed to prevent transgender peoples' exposure to gender binary, cis- and heteronormative expectations. In addition, access to sexual health care is essential following gender-confirmatory care as well to avoid transgender peoples' experiences of eviction from the health care system. Nurses have an important role to play in striving for equity and justice within health care. This study describes how health care professionals appear to be disrespectful and suggestions of how this can be avoided are made. © 2016 John Wiley & Sons Ltd.
Throughput and latency programmable optical transceiver by using DSP and FEC control.
Tanimura, Takahito; Hoshida, Takeshi; Kato, Tomoyuki; Watanabe, Shigeki; Suzuki, Makoto; Morikawa, Hiroyuki
2017-05-15
We propose and experimentally demonstrate a proof-of-concept of a programmable optical transceiver that enables simultaneous optimization of multiple programmable parameters (modulation format, symbol rate, power allocation, and FEC) for satisfying throughput, signal quality, and latency requirements. The proposed optical transceiver also accommodates multiple sub-channels that can transport different optical signals with different requirements. The many degrees of freedom among the parameters often make it difficult to find the optimum combination due to an explosion of the number of combinations. The proposed optical transceiver reduces the number of combinations and finds feasible sets of programmable parameters by using constraints on the parameters combined with a precise analytical model. For precise BER prediction with a specified set of parameters, we model the sub-channel BER as a function of OSNR, modulation formats, symbol rates, and power difference between sub-channels. Next, we formulate simple constraints on the parameters and combine the constraints with the analytical model to seek feasible sets of programmable parameters. Finally, we experimentally demonstrate end-to-end operation of the proposed optical transceiver in an offline manner, including low-density parity-check (LDPC) FEC encoding and decoding, under a specific use case with a latency-sensitive application and 40-km transmission.
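To make the constraint-based pruning concrete, here is a toy sketch in the spirit of the search described above; the modulation set, FEC rates, frame size, latency model, and OSNR requirement are all invented placeholders, not the paper's analytical model.

```python
from itertools import product

# Toy sketch of constraint-based pruning over transceiver parameter sets; every
# numeric model below is an invented placeholder, not the paper's BER model.
MODULATIONS = {"QPSK": 2, "8QAM": 3, "16QAM": 4}        # bits per symbol
SYMBOL_RATES_BAUD = [16e9, 32e9]
FEC_RATES = [0.8, 0.9]
FRAME_BITS = 50e3                                        # assumed FEC frame size

def feasible_configs(min_throughput_bps, max_latency_s, max_required_osnr_db):
    configs = []
    for (mod, bps), rs, rc in product(MODULATIONS.items(), SYMBOL_RATES_BAUD, FEC_RATES):
        throughput = rs * bps * rc                       # information rate
        latency = FRAME_BITS / throughput + (2e-6 if rc < 0.85 else 1e-6)  # toy FEC latency
        required_osnr = 3.0 * bps - 2.0 * rc             # toy OSNR requirement in dB
        if (throughput >= min_throughput_bps and latency <= max_latency_s
                and required_osnr <= max_required_osnr_db):
            configs.append((mod, rs, rc, throughput))
    return configs

for cfg in feasible_configs(min_throughput_bps=50e9, max_latency_s=5e-6,
                            max_required_osnr_db=11.0):
    print(cfg)
```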
NASA Technical Reports Server (NTRS)
Simons, Rainee N.; Wintucky, Edwin G.; Landon, David G.; Sun, Jun Y.; Winn, James S.; Laraway, Stephen; McIntire, William K.; Metz, John L.; Smith, Francis J.
2011-01-01
The paper presents the first ever research and experimental results regarding the combination of a software-defined multi-Gbps modem and a broadband high-power space amplifier when tested with an extended form of the industry-standard DVB-S2 and LDPC rate 9/10 FEC codec. The modem supports waveforms including QPSK, 8-PSK, 16-APSK, 32-APSK, 64-APSK, and 128-QAM. The broadband high-power amplifier is a space-qualified traveling-wave tube (TWT), which has a passband greater than 3 GHz at 33 GHz, output power of 200 W and efficiency greater than 60 percent. The modem and the TWTA together enabled an unprecedented data rate of 20 Gbps with a low BER of 10(exp -9). The presented results include a plot of the received waveform constellation, BER vs. E(sub b)/N(sub 0) and implementation loss for each of the modulation types tested. The above results when included in an RF link budget analysis show that NASA's payload data rate can be increased by at least an order of magnitude (greater than 10X) over current state-of-practice, limited only by the spacecraft EIRP, ground receiver G/T, range, and available spectrum or bandwidth.
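A back-of-envelope sketch of the kind of link-budget arithmetic the last sentence refers to (Eb/N0 from EIRP, G/T, free-space path loss, and data rate); every numeric input below is a placeholder, not a NASA system value.

```python
import math

# Back-of-envelope link budget in dB form; all inputs are illustrative
# placeholders, not values from the paper or any NASA system.
BOLTZMANN_DBW = -228.6   # 10*log10(k), in dBW/K/Hz

def eb_n0_db(eirp_dbw, g_over_t_dbk, range_m, freq_hz, data_rate_bps, misc_loss_db=2.0):
    wavelength = 299_792_458.0 / freq_hz
    fspl_db = 20 * math.log10(4 * math.pi * range_m / wavelength)   # free-space path loss
    return (eirp_dbw + g_over_t_dbk - fspl_db - BOLTZMANN_DBW
            - 10 * math.log10(data_rate_bps) - misc_loss_db)

# e.g. a GEO-relay-like distance, 26 GHz downlink, 20 Gbps payload data rate
print(eb_n0_db(eirp_dbw=60, g_over_t_dbk=40, range_m=4.0e7, freq_hz=26e9,
               data_rate_bps=20e9))   # roughly 11 dB for these example numbers
```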
Ka-Band, Multi-Gigabit-Per-Second Transceiver
NASA Technical Reports Server (NTRS)
Simons, Rainee N.; Wintucky, Edwin G.; Smith, Francis J.; Harris, Johnny M.; Landon, David G.; Haddadin, Osama S.; McIntire, William K.; Sun, June Y.
2011-01-01
A document discusses a multi-Gigabit-per-second, Ka-band transceiver with a software-defined modem (SDM) capable of digitally encoding/decoding data and compensating for linear and nonlinear distortions in the end-to-end system, including the traveling-wave tube amplifier (TWTA). This innovation can increase data rates of space-to-ground communication links, and has potential application to NASA's future space-based Earth observation system. The SDM incorporates an extended version of the industry-standard DVB-S2, and LDPC rate 9/10 FEC codec. The SDM supports a suite of waveforms, including QPSK, 8-PSK, 16-APSK, 32-APSK, 64-APSK, and 128-QAM. The Ka-band TWTA delivers an output power on the order of 200 W with efficiency greater than 60%, and a passband of at least 3 GHz. The modem and the TWTA together enable a data rate of 20 Gbps with a low bit error rate (BER). The payload data rates for spacecraft in NASA's integrated space communications network can be increased by an order of magnitude (>10X) over current state-of-practice. This innovation enhances the data rate by using bandwidth-efficient modulation techniques, which transmit a higher number of bits per Hertz of bandwidth than the currently used quadrature phase shift keying (QPSK) waveforms.
Carrotte, Elise R; Vella, Alyce M; Bowring, Anna L; Douglass, Caitlin; Hellard, Margaret E; Lim, Megan S C
2016-07-27
Heteronormativity describes a set of norms and assumptions pertaining to heterosexual identities and binary gender. In 2015, we conducted our annual Sex, Drugs and Rock'n'Roll study, an online health survey of over 1000 Victorians aged 15-29 years. Feedback from participants suggested that our survey contained heteronormative language. In response to this, we aimed to make inclusive changes to our survey via consultation with young gender and sexually diverse (GSD) people. We conducted two semi-structured focus groups in Melbourne with a total of 16 participants (age range: 21-28 years). Participants were mostly cisgender women, and there were two transgender participants and one non-binary participant. Participants also had a range of sexual identities including lesbian, queer, bisexual, pansexual, and asexual. Focus group discussions were transcribed verbatim and analysed thematically. Most participants indicated heteronormativity affects their lives in multiple ways, noting its impacts on access to sexual healthcare, invalidating sexual experiences and miscommunication in forms and surveys. Overall, participants emphasised the need for sexual health research to avoid assumptions about behaviour, to be clear and eliminate question ambiguity and avoiding treating gender as binary. Participants also discussed how the Sex, Drugs and Rock'n'Roll survey could address a range of sexual behaviours and experiences, rather than focusing on penetrative sex, which many participants found invalidating. Our findings have important implications for future health surveys aimed at general populations. We present recommendations that encourage research to be more inclusive to ensure data collection from GSD participants is respectful and rigorous.
Beckwith, Noor; Reisner, Sari L; Zaslow, Shayne; Mayer, Kenneth H; Keuroghlian, Alex S
2017-01-01
Purpose: Gender-affirming surgeries and hormone therapy are medically necessary treatments to alleviate gender dysphoria; however, significant gaps exist in the research and clinical literature on surgery utilization and age of hormone therapy initiation among transgender adults. Methods: We conducted a retrospective review of electronic health record data from a random sample of 201 transgender patients of ages 18-64 years who presented for primary care between July 1, 2010 and June 30, 2015 (inclusive) at an urban community health center in Boston, MA. Fifty percent in our analyses were trans masculine (TM), 50% trans feminine, and 24% reported a genderqueer/nonbinary gender identity. Regression models were fit to assess demographic, gender identity-related, sexual history, and mental health correlates of gender-affirming surgery and of age of hormone therapy initiation. Results: Overall, 95% of patients were prescribed hormones by their primary care provider, and the mean age of initiation of masculinizing or feminizing hormone prescriptions was 31.8 years (SD=11.1). Younger age of initiation of hormone prescriptions was associated with being TM, being a student, identifying as straight/heterosexual, having casual sexual partners, and not having past alcohol use disorder. Approximately one-third (32%) had a documented history of gender-affirming surgery. Factors associated with increased odds of surgery were older age, higher income levels, not identifying as bisexual, and not having a current psychotherapist. Conclusion: This study extends our understanding of prevalence and factors associated with gender-affirming treatments among transgender adults seeking primary care. Findings can inform future interventions to expand delivery of clinical care for transgender patients.
Fractional labelmaps for computing accurate dose volume histograms
NASA Astrophysics Data System (ADS)
Sunderland, Kyle; Pinter, Csaba; Lasso, Andras; Fichtinger, Gabor
2017-03-01
PURPOSE: In radiation therapy treatment planning systems, structures are represented as parallel 2D contours. For treatment planning algorithms, structures must be converted into labelmap (i.e. 3D image denoting structure inside/outside) representations. This is often done by triangulating a surface from the contours, which is then converted into a binary labelmap. This surface-to-binary-labelmap conversion can cause large errors in small structures. Binary labelmaps are often represented using one byte per voxel, meaning a large amount of memory is unused. Our goal is to develop a fractional labelmap representation containing non-binary values, allowing more information to be stored in the same amount of memory. METHODS: We implemented an algorithm in 3D Slicer, which converts surfaces to fractional labelmaps by creating 216 binary labelmaps, changing the labelmap origin on each iteration. The binary labelmap values are summed to create the fractional labelmap. In addition, an algorithm is implemented in the SlicerRT toolkit that calculates dose volume histograms (DVH) using fractional labelmaps. RESULTS: We found that with manually segmented RANDO head and neck structures, fractional labelmaps represented structure volume up to 19.07% (average 6.81%) more accurately than binary labelmaps, while occupying the same amount of memory. When compared to baseline DVH from treatment planning software, DVH from fractional labelmaps had agreement acceptance percent (1% ΔD, 1% ΔV) up to 57.46% higher (average 4.33%) than DVH from binary labelmaps. CONCLUSION: Fractional labelmaps promise to be an effective method for structure representation, allowing considerably more information to be stored in the same amount of memory.
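The core of the method above, accumulating sub-voxel-shifted binary rasterizations into a fractional occupancy map and then weighting the DVH by that occupancy, can be illustrated with a small sketch. The snippet below is not the authors' 3D Slicer/SlicerRT implementation; it substitutes an analytic sphere for a segmented surface and a random array for the dose grid, purely to show the summation and the fractional-weighted DVH.

```python
# Minimal sketch: build a fractional labelmap for an analytic sphere by summing
# binary labelmaps rasterized at 6 x 6 x 6 = 216 sub-voxel origin offsets, then
# weight a cumulative dose-volume histogram (DVH) by the fractional occupancy.
import numpy as np

def binary_labelmap_sphere(shape, spacing, center, radius, origin_offset):
    """Binary labelmap of a sphere, with voxel centers shifted by origin_offset."""
    zz, yy, xx = np.indices(shape)
    coords = np.stack([xx, yy, zz], axis=-1) * spacing + origin_offset
    return (np.linalg.norm(coords - center, axis=-1) <= radius).astype(np.float32)

def fractional_labelmap_sphere(shape, spacing, center, radius, n=6):
    frac = np.zeros(shape, dtype=np.float32)
    offsets = (np.arange(n) + 0.5) / n - 0.5          # sub-voxel offsets in [-0.5, 0.5)
    for dz in offsets:
        for dy in offsets:
            for dx in offsets:
                frac += binary_labelmap_sphere(shape, spacing, center, radius,
                                               np.array([dx, dy, dz]) * spacing)
    return frac / n ** 3                               # occupancy in [0, 1]

def dvh(dose, labelmap, bins=100):
    """Cumulative DVH: fraction of (fractional) structure volume receiving >= dose."""
    weights = labelmap.ravel()
    d = dose.ravel()
    edges = np.linspace(0.0, d.max(), bins + 1)
    hist, _ = np.histogram(d, bins=edges, weights=weights)
    cum = np.cumsum(hist[::-1])[::-1] / weights.sum()
    return edges[:-1], cum

shape, spacing = (32, 32, 32), 1.0
frac = fractional_labelmap_sphere(shape, spacing,
                                  center=np.array([16.0, 16.0, 16.0]), radius=5.0)
dose = np.random.default_rng(0).uniform(0.0, 60.0, shape)   # placeholder dose grid
print(dvh(dose, frac)[1][:5])
```

With n = 6 offsets per axis, the sketch reproduces the 216 binary labelmaps mentioned above.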
It is complicated: gender and sexual orientation identity in LGBTQ youth.
Bosse, Jordon D; Chiodo, Lisa
2016-12-01
To explore the variations of sexual orientation and gender identity as well as the intersections of those identities in a sample of lesbian, gay, bisexual, transgender, queer, and questioning youth. Identity development is a key task of adolescence. Among the multiple identities that young people navigate are sexual orientation and gender identity. Challenges with solidifying and integrating aspects of one's identity can contribute to poor physical and mental health outcomes. Cross-sectional descriptive survey. A convenience sample was recruited via collaborations with community organisations and Internet groups who provide information and services for LGBTQ youth under the age of 25. Of the 175 respondents, one-third of the sample reported a gender identity that was not congruent with their sex assigned at birth. Those assigned female sex at birth reported noncongruent gender identities as well as fluid and nonbinary identities such as genderqueer and agender more frequently than respondents assigned male at birth. Individuals with noncongruent gender identities were more likely to identify with a sexual orientation other than lesbian, gay or bisexual than individuals with gender identities congruent with their sex assigned at birth. Adolescent sexual orientation and gender identity are complex and nuanced. Nurse scientists and clinical nurses can contribute to understanding of these identities, their meaning to the young person and the unique health implications by regularly inquiring about sexual orientation and gender identity in their practice. Nurses in clinical practice need to be aware of the sometimes complicated nature of adolescent identity and its related terminology so that they can ask relevant questions and provide culturally safe care. © 2016 John Wiley & Sons Ltd.
Outness, Stigma, and Primary Health Care Utilization among Rural LGBT Populations.
Whitehead, J; Shaver, John; Stephenson, Rob
2016-01-01
Prior studies have noted significant health disadvantages experienced by LGBT (lesbian, gay, bisexual, and transgender) populations in the US. While several studies have identified that fears or experiences of stigma and disclosure of sexual orientation and/or gender identity to health care providers are significant barriers to health care utilization for LGBT people, these studies have concentrated almost exclusively on urban samples. Little is known about the impact of stigma specifically for rural LGBT populations, who may have less access to quality, LGBT-sensitive care than LGBT people in urban centers. LGBT individuals residing in rural areas of the United States were recruited online to participate in a survey examining the relationship between stigma, disclosure and "outness," and utilization of primary care services. Data were collected and analyzed regarding LGBT individuals' demographics, health care access, health risk factors, health status, outness to social contacts and primary care provider, and anticipated, internalized, and enacted stigmas. Higher scores on stigma scales were associated with lower utilization of health services for the transgender & non-binary group, while higher levels of disclosure of sexual orientation were associated with greater utilization of health services for cisgender men. The results demonstrate the role of stigma in shaping access to primary health care among rural LGBT people and point to the need for interventions focused towards decreasing stigma in health care settings or increasing patients' disclosure of orientation or gender identity to providers. Such interventions have the potential to increase utilization of primary and preventive health care services by LGBT people in rural areas.
The Mental Health of Transgender Youth: Advances in Understanding.
Connolly, Maureen D; Zervos, Marcus J; Barone, Charles J; Johnson, Christine C; Joseph, Christine L M
2016-11-01
This review provides an update on the growing body of research related to the mental health of transgender youth that has emerged since the 2011 publication of the Institute of Medicine report on the health of lesbian, gay, bisexual, and transgender people. The databases PubMed and Ovid Medline were searched for studies that were published from January 2011 to March 2016 in English. The following search terms were used: transgender, gender nonconforming, gender minority, gender queer, and gender dysphoria. Age limits included the terms youth, child, children, teenager*, and adolescen*. The combined search produced 654 articles of potential relevance. The resulting abstracts went through a tiered elimination system, and the remaining 15 articles, which presented quantitative data related to the prevalence of transgender youth and their mental health, were included in the present review. In addition to providing new estimates of the number of young people who identify as transgender (.17%-1.3%), studies since 2011 have shown that transgender youth have higher rates of depression, suicidality and self-harm, and eating disorders when compared with their peers. Gender-affirming medical therapy and supported social transition in childhood have been shown to correlate with improved psychological functioning for gender-variant children and adolescents. Recent research has demonstrated increased rates of psychiatric morbidity among transgender youth compared to their peers. Future work is needed to understand those youth who identify as gender nonbinary, improve methods to capture and understand diverse gender identities and related health disparities, and delineate the social determinants of such disparities. Copyright © 2016 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Nieder, Timo O; Köhler, Andreas; Eyssel, Jana; Briken, Peer
2017-09-01
Both internationally and nationally, the objective has been set to offer specialized, coordinated, and interdisciplinary treatment to individuals with rare clinical needs, such as trans individuals (e. g., transsexual, transgender). The Interdisciplinary Transgender Health Care Center Hamburg (ITHCCH) is the first and only center in Germany to integrate all disciplines relevant to trans healthcare (THC).The research project seeks to generate valid information to support quality development and assure high-quality treatment at the ITHCCH. This was done by (a) investigating needs and concerns of trans individuals regarding interdisciplinary THC, and (b) analyzing attitudes and interests of key stakeholders.Using a participatory approach involving a work group (representatives of trans support groups and local THC professionals), researchers developed an online survey focusing on trans individuals' needs and concerns. Data from N = 415 trans-identified participants were analyzed using quantitative and qualitative methods. In addition, a short survey was used to record key stakeholders' attitudes and interests towards the ITHCCH.Healthcare offers accessed by trans individuals as part of transition related treatment vary in focus and number. For example, take-up numbers of genital surgery differ between binary and non-binary trans individuals. Crucial aspects impacting on THC quality are structural characteristics, communication/social aspects, individuality, and professionalism/quality.To ensure successful, high-quality interdisciplinary THC, feedback from (potential) patients and stakeholders is crucial. In addition, both structural development and optimizing individuality and flexibility throughout the treatment process are key. This poses a considerable challenge to the sector of THC provision.
Geometric correction methods for Timepix based large area detectors
NASA Astrophysics Data System (ADS)
Zemlicka, J.; Dudak, J.; Karch, J.; Krejci, F.
2017-01-01
X-ray micro radiography with hybrid pixel detectors provides a versatile tool for object inspection in various fields of science. It has proven especially suitable for samples with low intrinsic attenuation contrast (e.g. soft tissue in biology, plastics in material sciences, thin paint layers in cultural heritage, etc.). The limited size of a single Medipix-type detector (1.96 cm2) was recently overcome by the construction of the large-area WidePIX detectors, assembled from Timepix chips equipped with edgeless silicon sensors. The largest device built so far consists of 100 chips and provides a fully sensitive area of 14.3 × 14.3 cm2 without any physical gaps between sensors. The pixel resolution of this device is 2560 × 2560 pixels (6.5 Mpix). The unique modular detector layout requires special processing of the acquired data to avoid image distortions. It is necessary to apply several geometric compensations after the standard correction methods typical for this type of pixel detector (i.e. flat-field and beam-hardening corrections). The proposed geometric compensations cover both conceptual features and the assembly misalignment of individual chip rows in large area detectors based on Timepix assemblies. The former deals with the larger border pixels of individual edgeless sensors and their behaviour, while the latter grapples with shifts, tilts and steps between detector rows. The real position of all pixels is defined in a Cartesian coordinate system and, together with a non-binary reliability mask, is used for the final image interpolation. The results of geometric corrections for test wire phantoms and paleobotanic material are presented in this article.
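Of the corrections listed above, the flat-field step is generic enough to sketch; the geometric compensations themselves depend on the WidePIX assembly and are not reproduced. The snippet below is an illustrative open-beam/dark-frame flat-field correction with an optional per-pixel reliability mask; all array names and test data are assumptions, not the authors' pipeline.

```python
# Minimal sketch of a standard flat-field correction with masking of unreliable
# pixels. Projection, open-beam, and dark frames are assumed to have identical shape.
import numpy as np

def flat_field_correct(projection, open_beam, dark, reliability_mask=None):
    """Return -log transmission; unreliable pixels are set to NaN for later interpolation."""
    num = projection.astype(np.float64) - dark
    den = open_beam.astype(np.float64) - dark
    valid = den > 0
    if reliability_mask is not None:
        valid &= reliability_mask > 0
    out = np.full(projection.shape, np.nan)
    out[valid] = -np.log(np.clip(num[valid] / den[valid], 1e-6, None))
    return out

rng = np.random.default_rng(1)
dark = rng.poisson(5.0, (256, 256)).astype(float)
open_beam = dark + rng.poisson(1000.0, (256, 256))
projection = dark + rng.poisson(600.0, (256, 256))
corrected = flat_field_correct(projection, open_beam, dark)
print(np.nanmean(corrected))
```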
NASA Astrophysics Data System (ADS)
Wang, Yao; Vijaya Kumar, B. V. K.
2017-05-01
The increased track density in bit patterned media recording (BPMR) causes increased inter-track interference (ITI), which degrades the bit error rate (BER) performance. In order to mitigate the effect of the ITI, signals from multiple tracks can be equalized by a 2D equalizer with 1D target. Usually, the 2D fixed equalizer coefficients are obtained by using a pseudo-random bit sequence (PRBS) for training. In this study, a 2D variable equalizer is proposed, where various sets of 2D equalizer coefficients are predetermined and stored for different ITI patterns besides the usual PRBS training. For data detection, as the ITI patterns are unknown in the first global iteration, the main and adjacent tracks are equalized with the conventional 2D fixed equalizer, detected with Bahl-Cocke-Jelinek-Raviv (BCJR) detector and decoded with low-density parity-check (LDPC) decoder. Then using the estimated bit information from main and adjacent tracks, the ITI pattern for each island of the main track can be estimated and the corresponding 2D variable equalizers are used to better equalize the bits on the main track. This process is executed iteratively by feeding back the main track information. Simulation results indicate that for both single-track and two-track detection, the proposed 2D variable equalizer can achieve better BER and frame error rate (FER) compared to that with the 2D fixed equalizer.
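A minimal sketch of the selection step described above, picking a predetermined 2D equalizer according to the estimated ITI pattern of the adjacent-track bits and applying it to the main track, is given below. The 3 × 3 equalizers are random placeholders, and the BCJR detection and LDPC decoding stages of the scheme are omitted.

```python
# Minimal sketch (toy 3x3 equalizers, random placeholder coefficient sets) of
# per-island selection of a 2D variable equalizer keyed by the estimated ITI
# pattern, compared with a single 2D fixed equalizer.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)

# One predetermined coefficient set per ITI pattern of the (upper, lower) adjacent bits.
variable_equalizers = {pat: rng.normal(0.0, 0.2, (3, 3)) + np.eye(3) / 3.0
                       for pat in [(-1, -1), (-1, 1), (1, -1), (1, 1)]}
fixed_equalizer = np.eye(3) / 3.0   # conventional 2D fixed equalizer (placeholder)

def equalize_main_track(readback, est_upper, est_lower):
    """readback: 3 x N samples (upper/main/lower tracks); est_*: estimated adjacent bits."""
    n = readback.shape[1]
    fixed_out = convolve2d(readback, fixed_equalizer, mode="same")[1]   # main-track row
    variable_out = np.empty(n)
    for k in range(n):
        eq = variable_equalizers[(int(est_upper[k]), int(est_lower[k]))]
        variable_out[k] = convolve2d(readback, eq, mode="same")[1, k]
    return fixed_out, variable_out

readback = rng.normal(0.0, 1.0, (3, 16))
bits = rng.choice([-1, 1], size=(2, 16))
print(equalize_main_track(readback, bits[0], bits[1])[1])
```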
Self-Reported Changes in Attractions and Social Determinants of Mental Health in Transgender Adults.
Katz-Wise, Sabra L; Reisner, Sari L; White Hughto, Jaclyn M; Budge, Stephanie L
2017-07-01
This study examined associations between changes in self-reported attractions and mental health in a community-based sample of self-identified transgender adults. Participants were purposively recruited in 2013 using bimodal sampling methods and completed a one-time survey. Multivariable logistic regression models estimated adjusted risk ratios and 95 % confidence intervals to examine associations between changes in attractions and mental health outcomes (lifetime self-harm, suicide attempts, depression diagnosis; past-week clinically significant depressive distress assessed via CES-D 10) among the entire sample (N = 452; 285 female-to-male spectrum, 167 male-to-female spectrum) and after gender transition among those who had socially transitioned (n = 205; 156 female-to-male spectrum, 49 male-to-female spectrum). Models were adjusted for known population social determinants (age, race/ethnicity, gender identity, socioeconomic status, sexual orientation identity), transgender-specific determinants (age of transgender realization, social transition, medical transition, visual gender nonconformity, non-binary gender identification), and survey mode (online vs. in-person sampling). Lifetime changes in attractions were significantly associated with increased probability of all mental health outcomes; individuals reporting any change in attractions were more likely than individuals not reporting changes to indicate lifetime self-harm, suicide attempts, depression diagnosis, and current depressive distress (all ps < .05). Changes in attractions post-social transition were not significantly associated with mental health outcomes. Many, but not all, population and transgender-specific social determinants were significantly associated with mental health in the full sample and among those who had socially transitioned. Clinical implications of findings about changes in attractions and mental health are discussed for transgender individuals.
Linear game non-contextuality and Bell inequalities—a graph-theoretic approach
NASA Astrophysics Data System (ADS)
Rosicka, M.; Ramanathan, R.; Gnaciński, P.; Horodecki, K.; Horodecki, M.; Horodecki, P.; Severini, S.
2016-04-01
We study the classical and quantum values of a class of one- and two-party unique games, that generalizes the well-known XOR games to the case of non-binary outcomes. In the bipartite case the generalized XOR (XOR-d) games we study are a subclass of the well-known linear games. We introduce a ‘constraint graph’ associated to such a game, with the constraints defining the game represented by an edge-coloring of the graph. We use the graph-theoretic characterization to relate the task of finding equivalent games to the notion of signed graphs and switching equivalence from graph theory. We relate the problem of computing the classical value of single-party anti-correlation XOR games to finding the edge bipartization number of a graph, which is known to be MaxSNP hard, and connect the computation of the classical value of XOR-d games to the identification of specific cycles in the graph. We construct an orthogonality graph of the game from the constraint graph and study its Lovász theta number as a general upper bound on the quantum value even in the case of single-party contextual XOR-d games. XOR-d games possess appealing properties for use in device-independent applications such as randomness of the local correlated outcomes in the optimal quantum strategy. We study the possibility of obtaining quantum algebraic violation of these games, and show that no finite XOR-d game possesses the property of pseudo-telepathy leaving the frequently used chained Bell inequalities as the natural candidates for such applications. We also show this lack of pseudo-telepathy for multi-party XOR-type inequalities involving two-body correlation functions.
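For intuition, the classical value of a small XOR-d game can be computed by brute force over deterministic strategies, as in the sketch below. The d = 3 game with target f(x, y) = xy mod 3 is only an illustrative example, not one of the games analyzed in the paper.

```python
# Minimal sketch: brute-force classical value of a two-party XOR-d game with
# winning predicate a + b = f(x, y) (mod d), maximized over deterministic strategies.
import itertools
import numpy as np

def classical_value(f, d):
    """f: |X| x |Y| array of targets in Z_d. Returns the maximum winning probability."""
    nx, ny = f.shape
    best = 0.0
    for alice in itertools.product(range(d), repeat=nx):       # a(x) for each question x
        for bob in itertools.product(range(d), repeat=ny):     # b(y) for each question y
            wins = sum((alice[x] + bob[y]) % d == f[x, y]
                       for x in range(nx) for y in range(ny))
            best = max(best, wins / (nx * ny))
    return best

d = 3
f = np.array([[(x * y) % d for y in range(d)] for x in range(d)])
print(classical_value(f, d))   # classical value of this toy XOR-3 game
```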
Valentine, Sarah E; Peitzmeier, Sarah M; King, Dana S; O'Cleirigh, Conall; Marquez, Samantha M; Presley, Cara; Potter, Jennifer
2017-08-01
We investigated the odds of intimate partner violence (IPV) among primary care patients across subgroups of transgender and gender nonconforming (TGNC) individuals relative to cisgender women, and cisgender sexual minority men and women relative to cisgender heterosexual men and women. Participants completed an IPV screener as part of routine primary care visits at an urban community health center (N = 7572). Electronic medical record data were pooled for all patients who received the IPV screener January 1 to December 31, 2014. Overall, 3.6% of the sample reported experiencing physical or sexual IPV in the past year. Compared to cisgender women (past-year prevalence 2.7%), all TGNC subgroups reported elevated odds of physical or sexual IPV, including transgender women (past-year prevalence 12.1%; adjusted odds ratio [AOR] = 5.0, 95% confidence interval [CI] = 2.9-8.6), transgender men (6.6%; AOR = 2.4, 95% CI: 1.2-4.6), gender non-binary individuals (8.2%, AOR = 3.1, 95% CI = 1.7-5.4), and TGNC individuals who did not report their gender identity (9.1%; AOR = 3.7, 95% CI = 2.2-6.3). The prevalence of isolation-related IPV and controlling behaviors was also high in some TGNC groups. Our findings support that IPV is prevalent across genders and sexual orientations. Clinical guidelines for IPV screening should be expanded to include TGNC individuals and not just cisgender women. Future research could explore the complex patterns by which individuals of different genders are at increased risk for different types of IPV, and investigate the best ways to screen TGNC patients and support TGNC survivors.
NASA Astrophysics Data System (ADS)
Abd-El-Barr, Mostafa
2010-12-01
The use of non-binary (multiple-valued) logic in the synthesis of digital systems can lead to savings in chip area. Advances in very large scale integration (VLSI) technology have enabled the successful implementation of multiple-valued logic (MVL) circuits. A number of heuristic algorithms for the synthesis of (near-)minimal sum-of-products (two-level) realisations of MVL functions have been reported in the literature. The direct cover (DC) technique is one such algorithm. The ant colony optimisation (ACO) algorithm is a meta-heuristic that uses constructive greediness to explore a large solution space in finding (near-)optimal solutions. The ACO algorithm mimics the behaviour of real ants, which find the shortest path to food sources. We have previously introduced an ACO-based heuristic for the synthesis of two-level MVL functions. In this article, we introduce the ACO-DC hybrid technique for the synthesis of multi-level MVL functions. The basic idea is to use an ant to decompose a given MVL function into a number of levels and then synthesise each sub-function using a DC-based technique. The results obtained using the proposed approach are compared to those obtained using existing techniques reported in the literature. A benchmark set consisting of 50,000 randomly generated 2-variable 4-valued functions is used in the comparison. The proposed ACO-DC technique is shown to produce efficient realisations in terms of the average number of gates (as a measure of chip area) needed for the synthesis of a given MVL function.
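The pheromone mechanics that the ACO component relies on can be sketched generically. The snippet below is a plain ACO skeleton over orderings of items with a made-up cost matrix; it is not the authors' ACO-DC hybrid, which encodes MVL decompositions and scores them with a direct-cover synthesiser.

```python
# Generic ant-colony-optimisation skeleton: ants build item orderings guided by
# pheromone/cost, pheromone evaporates each iteration, and good tours deposit more.
import numpy as np

def aco(cost, n_ants=20, n_iter=50, evaporation=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = cost.shape[0]
    pheromone = np.ones((n, n))
    best_order, best_cost = None, np.inf
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            order = [int(rng.integers(n))]
            while len(order) < n:
                last = order[-1]
                remaining = [j for j in range(n) if j not in order]
                weights = pheromone[last, remaining] / (cost[last, remaining] + 1e-9)
                order.append(int(rng.choice(remaining, p=weights / weights.sum())))
            c = float(sum(cost[order[i], order[i + 1]] for i in range(n - 1)))
            tours.append((order, c))
            if c < best_cost:
                best_order, best_cost = order, c
        pheromone *= (1.0 - evaporation)                 # evaporation
        for order, c in tours:                           # deposit proportional to quality
            for i in range(n - 1):
                pheromone[order[i], order[i + 1]] += 1.0 / c
    return best_order, best_cost

cost = np.abs(np.random.default_rng(1).normal(1.0, 0.5, (6, 6))) + 0.1
print(aco(cost))
```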
Demant, Daniel; Hides, Leanne; White, Katherine M; Kavanagh, David J
2018-06-01
Research shows disproportionate levels of substance use among sexual minority young people. A range of reasons for these disparities has been suggested, including connectedness to and participation in the LGBT community. Little is known about how these constructs are related to substance use involvement in sexual minority (sub)groups or how these relationships are affected by other factors. 1266 young sexual minority Australians completed a cross-sectional online survey. Multiple regressions were conducted to assess associations of connectedness to and participation in the LGBT community with substance use involvement, before and after controlling for other factors such as substance use motives, psychological distress, wellbeing, resilience, minority stress, and age. Most participants identified as homosexual (57%, n=726) and male (54%, n=683). In the overall sample, participation in and connectedness to the LGBT community were significantly associated with increased substance use involvement before (F(2,1263)=35.930, p≤0.001, R2=0.052) and after controlling for other variables (F(8,1095)=33.538, p≤0.001, R2=0.191), with meaningfully higher effect sizes for participation than for connectedness. After controlling for other variables, connectedness only remained significant for homosexuals. Effect sizes for participation were higher for females than males, and for bisexuals than homosexuals. However, participation in the LGBT community was not associated with substance use in participants identifying with a non-binary gender identity. In conclusion, substance use involvement was associated with participation in the LGBT community, but connectedness to the LGBT community only had a weak association with substance use involvement in the homosexual subgroup. Copyright © 2018 Elsevier Ltd. All rights reserved.
Martins, Marcelo Ramos; Schleder, Adriana Miralles; Droguett, Enrique López
2014-12-01
This article presents an iterative six-step risk analysis methodology based on hybrid Bayesian networks (BNs). In typical risk analysis, systems are usually modeled as discrete and Boolean variables with constant failure rates via fault trees. Nevertheless, in many cases, it is not possible to perform an efficient analysis using only discrete and Boolean variables. The approach put forward by the proposed methodology makes use of BNs and incorporates recent developments that facilitate the use of continuous variables whose values may have any probability distributions. Thus, this approach makes the methodology particularly useful in cases where the available data for quantification of hazardous events probabilities are scarce or nonexistent, there is dependence among events, or when nonbinary events are involved. The methodology is applied to the risk analysis of a regasification system of liquefied natural gas (LNG) on board an FSRU (floating, storage, and regasification unit). LNG is becoming an important energy source option and the world's capacity to produce LNG is surging. Large reserves of natural gas exist worldwide, particularly in areas where the resources exceed the demand. Thus, this natural gas is liquefied for shipping and the storage and regasification process usually occurs at onshore plants. However, a new option for LNG storage and regasification has been proposed: the FSRU. As very few FSRUs have been put into operation, relevant failure data on FSRU systems are scarce. The results show the usefulness of the proposed methodology for cases where the risk analysis must be performed under considerable uncertainty. © 2014 Society for Risk Analysis.
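Although the article's methodology uses hybrid BNs with continuous variables, the basic enumeration step for a discrete fragment is easy to sketch. The toy leak/ignition/fire model below, with made-up probabilities, only illustrates how a BN yields event probabilities and posteriors when failure data are encoded as conditional tables; it is not the article's FSRU regasification model.

```python
# Illustrative discrete Bayesian-network fragment evaluated by exhaustive enumeration
# of the joint distribution. All probabilities are made up for demonstration.
import itertools

p_leak = {True: 1e-3, False: 1 - 1e-3}
p_ignition_given_leak = {True: 0.05, False: 1e-4}          # P(ignition=True | leak)
p_fire_given = {(True, True): 0.9, (True, False): 0.0,     # P(fire=True | leak, ignition)
                (False, True): 0.01, (False, False): 0.0}

def joint(leak, ignition, fire):
    p = p_leak[leak]
    p *= p_ignition_given_leak[leak] if ignition else 1 - p_ignition_given_leak[leak]
    p *= p_fire_given[(leak, ignition)] if fire else 1 - p_fire_given[(leak, ignition)]
    return p

p_fire = sum(joint(l, i, True) for l, i in itertools.product([True, False], repeat=2))
p_leak_given_fire = sum(joint(True, i, True) for i in [True, False]) / p_fire
print(f"P(fire) = {p_fire:.2e}, P(leak | fire) = {p_leak_given_fire:.3f}")
```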
NASA Technical Reports Server (NTRS)
2009-01-01
Topics covered include: Aligning a Receiving Antenna Array to Reduce Interference; Collecting Ground Samples for Balloon-Borne Instruments; Tethered Pyrotechnic Apparatus for Acquiring a Ground Sample; Enhanced Video-Oculography System; Joint Carrier-Phase Synchronization and LDPC Decoding; Dual-Polarization, Sideband-Separating, Balanced Receiver for 1.5 THz; Modular Battery Charge Controller; Efficient Multiplexer FPGA Block Structures Based on G4FETs; VLSI Microsystem for Rapid Bioinformatic Pattern Recognition; Low-Noise Amplifier for 100 to 180 GHz; Improved Fabrication of Ceramic Matrix Composite/Foam Core Integrated Structures; Inert Welding/Brazing Gas Filters and Dryers; Fabricating Copper Nanotubes by Electrodeposition; Reducing Aerodynamic Drag on Empty Open Cargo Vehicles; Rotary Percussive Auto-Gopher for Deep Drilling and Sampling; More About Reconfigurable Exploratory Robotic Vehicles; Thermostatic Valves Containing Silicone-Oil Actuators; Improving Heat Flux Performance of Flat Surface in Spray-Cooling Systems; Treating Fibrous Insulation to Reduce Thermal Conductivity; Silica-Aerogel Composites Opacified with La0.7Sr0.3MnO3; Cyclic Oxidation Behavior of CuCrAl Cold-Sprayed Coatings for Reusable Launch Vehicles; Ceramic Fiber Structures for Cryogenic Load-Bearing Applications; Elastomer Reinforced with Carbon Nanotubes; Biologically Inspired Purification and Dispersion of SWCNTs; A Technique for Adjusting Eigenfrequencies of WGM Resonators; Low-Pressure, Field-Ionizing Mass Spectrometer; Modifying Operating Cycles to Increase Stability in a LITS; Chamber for Simulating Martian and Terrestrial Environments; Algorithm for Detecting a Bright Spot in an Image; Extreme Programming: Maestro Style; Adaptive Behavior for Mobile Robots; Protocol for Communication Networking for Formation Flying; Planning Complex Sequences Using Compressed Representations; and Self-Supervised Learning of Terrain Traversability from Proprioceptive Sensors.
Generating code adapted for interlinking legacy scalar code and extended vector code
Gschwind, Michael K
2013-06-04
Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.
Bandwidth efficient coding for satellite communications
NASA Technical Reports Server (NTRS)
Lin, Shu; Costello, Daniel J., Jr.; Miller, Warner H.; Morakis, James C.; Poland, William B., Jr.
1992-01-01
An error control coding scheme was devised to achieve large coding gain and high reliability by using coded modulation with reduced decoding complexity. To achieve a 3 to 5 dB coding gain and moderate reliability, the decoding complexity is quite modest. In fact, to achieve a 3 dB coding gain, the decoding complexity is quite simple, no matter whether trellis coded modulation or block coded modulation is used. However, to achieve coding gains exceeding 5 dB, the decoding complexity increases drastically, and the implementation of the decoder becomes very expensive and impractical. The use of coded modulation in conjunction with concatenated (or cascaded) coding is proposed. A good short bandwidth efficient modulation code is used as the inner code and a relatively powerful Reed-Solomon code is used as the outer code. With properly chosen inner and outer codes, a concatenated coded modulation scheme not only can achieve large coding gains and high reliability with good bandwidth efficiency but also can be practically implemented. This combination of coded modulation and concatenated coding offers a way of achieving the best of three worlds: reliability and coding gain, bandwidth efficiency, and decoding complexity.
A genetic scale of reading frame coding.
Michel, Christian J
2014-08-21
The reading frame coding (RFC) of codes (sets) of trinucleotides is a genetic concept which has been largely ignored during the last 50 years. A first objective is the definition of a new and simple statistical parameter PrRFC for analysing the probability (efficiency) of reading frame coding (RFC) of any trinucleotide code. A second objective is to reveal different classes and subclasses of trinucleotide codes involved in reading frame coding: the circular codes of 20 trinucleotides and the bijective genetic codes of 20 trinucleotides coding the 20 amino acids. This approach allows us to propose a genetic scale of reading frame coding which ranges from 1/3 with the random codes (RFC probability identical in the three frames) to 1 with the comma-free circular codes (RFC probability maximal in the reading frame and null in the two shifted frames). This genetic scale shows, in particular, the reading frame coding probabilities of the 12,964,440 circular codes (PrRFC=83.2% in average), the 216 C(3) self-complementary circular codes (PrRFC=84.1% in average) including the code X identified in eukaryotic and prokaryotic genes (PrRFC=81.3%) and the 339,738,624 bijective genetic codes (PrRFC=61.5% in average) including the 52 codes without permuted trinucleotides (PrRFC=66.0% in average). Otherwise, the reading frame coding probabilities of each trinucleotide code coding an amino acid with the universal genetic code are also determined. The four amino acids Gly, Lys, Phe and Pro are coded by codes (not circular) with RFC probabilities equal to 2/3, 1/2, 1/2 and 2/3, respectively. The amino acid Leu is coded by a circular code (not comma-free) with a RFC probability equal to 18/19. The 15 other amino acids are coded by comma-free circular codes, i.e. with RFC probabilities equal to 1. The identification of coding properties in some classes of trinucleotide codes studied here may bring new insights in the origin and evolution of the genetic code. Copyright © 2014 Elsevier Ltd. All rights reserved.
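The comma-free property invoked above (reading-frame coding probability equal to 1) can be checked mechanically: no code word may be read across the junction of any two concatenated code words. The sketch below implements that standard test for an arbitrary trinucleotide set; the example codes are illustrative and are not the code X or the amino-acid codes analysed in the paper.

```python
# Minimal sketch: comma-freeness test for a set (code) of trinucleotides.
from itertools import product

def is_comma_free(code):
    """code: iterable of 3-letter strings over {A, C, G, T}."""
    code = set(code)
    for w1, w2 in product(code, repeat=2):
        pair = w1 + w2
        # Trinucleotides read in the two shifted frames across the junction.
        if pair[1:4] in code or pair[2:5] in code:
            return False
    return True

print(is_comma_free({"AAC", "ATC", "GTC"}))   # True for this small candidate code
print(is_comma_free({"AAA"}))                 # AAAAAA contains AAA in shifted frames -> False
```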
Error-correction coding for digital communications
NASA Astrophysics Data System (ADS)
Clark, G. C., Jr.; Cain, J. B.
This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
Trace-shortened Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Solomon, G.
1994-01-01
Reed-Solomon (RS) codes have been part of standard NASA telecommunications systems for many years. RS codes are character-oriented error-correcting codes, and their principal use in space applications has been as outer codes in concatenated coding systems. However, for a given character size, say m bits, RS codes are limited to a length of, at most, 2^m. It is known in theory that longer character-oriented codes would be superior to RS codes in concatenation applications, but until recently no practical class of 'long' character-oriented codes had been discovered. In 1992, however, Solomon discovered an extensive class of such codes, which are now called trace-shortened Reed-Solomon (TSRS) codes. In this article, we will continue the study of TSRS codes. Our main result is a formula for the dimension of any TSRS code, as a function of its error-correcting power. Using this formula, we will give several examples of TSRS codes, some of which look very promising as candidate outer codes in high-performance coded telecommunications systems.
Performance Analysis of New Binary User Codes for DS-CDMA Communication
NASA Astrophysics Data System (ADS)
Usha, Kamle; Jaya Sankar, Kottareddygari
2016-03-01
This paper analyzes new binary spreading codes through their correlation properties and also presents their performance over an additive white Gaussian noise (AWGN) channel. The proposed codes are constructed using Gray and inverse Gray codes. In this paper, the construction of 2n-length binary user codes by appending an n-bit Gray code with its n-bit inverse Gray code is discussed. Like Walsh codes, these binary user codes are available in sizes that are powers of two; additionally, code sets of length 6 and their even multiples are also available. The simple construction technique and the generation of code sets of different sizes are the salient features of the proposed codes. Walsh codes and Gold codes are considered for comparison in this paper as these are popularly used for synchronous and asynchronous multi-user communications, respectively. In the current work, the auto- and cross-correlation properties of the proposed codes are compared with those of Walsh codes and Gold codes. The performance of the proposed binary user codes for both synchronous and asynchronous direct sequence CDMA communication over an AWGN channel is also discussed in this paper. The proposed binary user codes are found to be suitable for both synchronous and asynchronous DS-CDMA communication.
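A sketch of the construction and its zero-lag correlations is given below, under two assumptions of mine: the n-bit Gray word of an index i is i XOR (i >> 1), and the "inverse Gray" word is the Gray-decoded value of i, with chips mapped 0 → +1 and 1 → -1. The mapping actually used by the authors may differ.

```python
# Minimal sketch: 2n-chip binary user codes built by appending an n-bit Gray word
# to an n-bit inverse-Gray word, followed by their normalized zero-lag correlations.
import numpy as np

def gray(i):
    return i ^ (i >> 1)

def inverse_gray(g):
    b = 0
    while g:           # Gray decode: cumulative XOR of right shifts
        b ^= g
        g >>= 1
    return b

def to_chips(value, n):
    """MSB-first bits of value, mapped 0 -> +1, 1 -> -1."""
    return np.array([1 - 2 * ((value >> (n - 1 - k)) & 1) for k in range(n)])

def user_code(i, n):
    return np.concatenate([to_chips(gray(i), n), to_chips(inverse_gray(i), n)])

n = 4
codes = np.array([user_code(i, n) for i in range(2 ** n)])
cross = codes @ codes.T / (2 * n)          # normalized zero-lag auto/cross-correlations
print(np.round(cross, 2))                  # diagonal entries (autocorrelation peaks) equal 1
```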
Unaligned instruction relocation
Bertolli, Carlo; O'Brien, John K.; Sallenave, Olivier H.; Sura, Zehra N.
2018-01-23
In one embodiment, a computer-implemented method includes receiving source code to be compiled into an executable file for an unaligned instruction set architecture (ISA). Aligned assembled code is generated, by a computer processor. The aligned assembled code complies with an aligned ISA and includes aligned processor code for a processor and aligned accelerator code for an accelerator. A first linking pass is performed on the aligned assembled code, including relocating a first relocation target in the aligned accelerator code that refers to a first object outside the aligned accelerator code. Unaligned assembled code is generated in accordance with the unaligned ISA and includes unaligned accelerator code for the accelerator and unaligned processor code for the processor. A second linking pass is performed on the unaligned assembled code, including relocating a second relocation target outside the unaligned accelerator code that refers to an object in the unaligned accelerator code.
Linear chirp phase perturbing approach for finding binary phased codes
NASA Astrophysics Data System (ADS)
Li, Bing C.
2017-05-01
Binary phased codes have many applications in communication and radar systems. These applications require binary phased codes to have low sidelobes in order to reduce interference and false detections. Barker codes are the ones that satisfy these requirements, and they have the lowest maximum sidelobes. However, Barker codes have very limited code lengths (equal to or less than 13), while many applications, including low probability of intercept radar and spread spectrum communication, require much longer code lengths. The conventional techniques for finding binary phased codes in the literature include exhaustive search, neural networks, and evolutionary methods, and they all require very expensive computation for large code lengths. Therefore these techniques are limited to finding binary phased codes with small code lengths (less than 100). In this paper, by analyzing Barker code, linear chirp, and P3 phases, we propose a new approach to find binary codes. Experiments show that the proposed method is able to find long, low-sidelobe binary phased codes (code length >500) with reasonable computational cost.
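The starting point of the approach, quantizing a linear-chirp (P3-like) phase sequence to binary phases and measuring the peak autocorrelation sidelobe, can be sketched as below. The perturbation/search step that refines the code is not reproduced; the snippet only shows the seed construction and the sidelobe metric.

```python
# Minimal sketch: quantize a P3-style quadratic (linear-chirp) phase to a +/-1 code
# and compute its peak aperiodic autocorrelation sidelobe.
import numpy as np

def chirp_binary_code(length):
    k = np.arange(length)
    phase = np.pi * k ** 2 / length               # P3-style quadratic phase
    return np.where(np.cos(phase) >= 0.0, 1, -1)  # quantize to binary phases 0 / pi

def peak_sidelobe(code):
    n = len(code)
    corr = np.correlate(code, code, mode="full")  # aperiodic autocorrelation
    sidelobes = np.delete(corr, n - 1)            # drop the zero-lag mainlobe
    return np.max(np.abs(sidelobes))

code = chirp_binary_code(128)
print(len(code), peak_sidelobe(code))
```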
Accumulate repeat accumulate codes
NASA Technical Reports Server (NTRS)
Abbasfar, A.; Divsalar, D.; Yao, K.
2004-01-01
In this paper we propose an innovative channel coding scheme called Accumulate Repeat Accumulate (ARA) codes. This class of codes can be viewed as turbo-like codes, namely a double serial concatenation of a rate-1 accumulator as an outer code, a regular or irregular repetition as a middle code, and a punctured accumulator as an inner code.
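A minimal encoder sketch of that serial structure is shown below: a rate-1 accumulator (running mod-2 sum), repetition by q, a fixed pseudo-random interleaver, and a second accumulator whose output is punctured to set the rate. The repetition factor, interleaver, and puncturing pattern are illustrative choices, not the ensembles studied in the paper.

```python
# Minimal sketch of an accumulate-repeat-accumulate style encoder chain.
import numpy as np

def accumulate(bits):
    return np.cumsum(bits) % 2                     # running XOR (rate-1 accumulator)

def ara_encode(info_bits, q=3, puncture_keep=2, seed=0):
    rng = np.random.default_rng(seed)
    outer = accumulate(info_bits)                  # outer rate-1 accumulator (precoder)
    repeated = np.repeat(outer, q)                 # middle repetition code
    interleaved = repeated[rng.permutation(repeated.size)]
    inner = accumulate(interleaved)                # inner accumulator
    return inner[::puncture_keep]                  # puncture: keep every k-th output bit

info = np.random.default_rng(1).integers(0, 2, 16)
codeword = ara_encode(info)
print(info.size, codeword.size)                    # overall rate ~ puncture_keep / q
```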
Ensemble coding of face identity is not independent of the coding of individual identity.
Neumann, Markus F; Ng, Ryan; Rhodes, Gillian; Palermo, Romina
2018-06-01
Information about a group of similar objects can be summarized into a compressed code, known as ensemble coding. Ensemble coding of simple stimuli (e.g., groups of circles) can occur in the absence of detailed exemplar coding, suggesting dissociable processes. Here, we investigate whether a dissociation would still be apparent when coding facial identity, where individual exemplar information is much more important. We examined whether ensemble coding can occur when exemplar coding is difficult, as a result of large sets or short viewing times, or whether the two types of coding are positively associated. We found a positive association, whereby both ensemble and exemplar coding were reduced for larger groups and shorter viewing times. There was no evidence for ensemble coding in the absence of exemplar coding. At longer presentation times, there was an unexpected dissociation, where exemplar coding increased yet ensemble coding decreased, suggesting that robust information about face identity might suppress ensemble coding. Thus, for face identity, we did not find the classic dissociation-of access to ensemble information in the absence of detailed exemplar information-that has been used to support claims of distinct mechanisms for ensemble and exemplar coding.
Performance and structure of single-mode bosonic codes
NASA Astrophysics Data System (ADS)
Albert, Victor V.; Noh, Kyungjoo; Duivenvoorden, Kasper; Young, Dylan J.; Brierley, R. T.; Reinhold, Philip; Vuillot, Christophe; Li, Linshu; Shen, Chao; Girvin, S. M.; Terhal, Barbara M.; Jiang, Liang
2018-03-01
The early Gottesman, Kitaev, and Preskill (GKP) proposal for encoding a qubit in an oscillator has recently been followed by cat- and binomial-code proposals. Numerically optimized codes have also been proposed, and we introduce codes of this type here. These codes have yet to be compared using the same error model; we provide such a comparison by determining the entanglement fidelity of all codes with respect to the bosonic pure-loss channel (i.e., photon loss) after the optimal recovery operation. We then compare achievable communication rates of the combined encoding-error-recovery channel by calculating the channel's hashing bound for each code. Cat and binomial codes perform similarly, with binomial codes outperforming cat codes at small loss rates. Despite not being designed to protect against the pure-loss channel, GKP codes significantly outperform all other codes for most values of the loss rate. We show that the performance of GKP and some binomial codes increases monotonically with increasing average photon number of the codes. In order to corroborate our numerical evidence of the cat-binomial-GKP order of performance occurring at small loss rates, we analytically evaluate the quantum error-correction conditions of those codes. For GKP codes, we find an essential singularity in the entanglement fidelity in the limit of vanishing loss rate. In addition to comparing the codes, we draw parallels between binomial codes and discrete-variable systems. First, we characterize one- and two-mode binomial as well as multiqubit permutation-invariant codes in terms of spin-coherent states. Such a characterization allows us to introduce check operators and error-correction procedures for binomial codes. Second, we introduce a generalization of spin-coherent states, extending our characterization to qudit binomial codes and yielding a multiqudit code.
76 FR 77549 - Lummi Nation-Title 20-Code of Laws-Liquor Code
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-13
... DEPARTMENT OF THE INTERIOR Bureau of Indian Affairs Lummi Nation--Title 20--Code of Laws--Liquor... amendment to Lummi Nation's Title 20--Code of Laws--Liquor Code. The Code regulates and controls the... this amendment to Title 20--Lummi Nation Code of Laws--Liquor Code by Resolution 2011-038 on March 1...
NASA Technical Reports Server (NTRS)
Hinds, Erold W. (Principal Investigator)
1996-01-01
This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
Beam-dynamics codes used at DARHT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ekdahl, Jr., Carl August
Several beam simulation codes are used to help gain a better understanding of beam dynamics in the DARHT LIAs. The most notable of these fall into the following categories: for beam production – the Tricomp Trak orbit tracking code and the LSP particle-in-cell (PIC) code; for beam transport and acceleration – the XTR static envelope and centroid code, the LAMDA time-resolved envelope and centroid code, and the LSP-Slice PIC code; for coasting-beam transport to target – the LAMDA time-resolved envelope code and the LSP-Slice PIC code. These codes are also being used to inform the design of Scorpius.
Amino acid codes in mitochondria as possible clues to primitive codes
NASA Technical Reports Server (NTRS)
Jukes, T. H.
1981-01-01
Differences between mitochondrial codes and the universal code indicate that an evolutionary simplification has taken place, rather than a return to a more primitive code. However, these differences make it evident that the universal code is not the only code possible, and therefore earlier codes may have differed markedly from the present code. The present universal code is probably a 'frozen accident.' The change in CUN codons from leucine to threonine (Neurospora vs. yeast mitochondria) indicates that neutral or near-neutral changes occurred in the corresponding proteins when this code change took place, caused presumably by a mutation in a tRNA gene.
Some partial-unit-memory convolutional codes
NASA Technical Reports Server (NTRS)
Abdel-Ghaffar, K.; Mceliece, R. J.; Solomon, G.
1991-01-01
The results of a study on a class of error correcting codes called partial unit memory (PUM) codes are presented. This class of codes, though not entirely new, has until now remained relatively unexplored. The possibility of using the well developed theory of block codes to construct a large family of promising PUM codes is shown. The performance of several specific PUM codes are compared with that of the Voyager standard (2, 1, 6) convolutional code. It was found that these codes can outperform the Voyager code with little or no increase in decoder complexity. This suggests that there may very well be PUM codes that can be used for deep space telemetry that offer both increased performance and decreased implementational complexity over current coding systems.
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1977-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
NASA Technical Reports Server (NTRS)
Lee, L. N.
1976-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
You've Written a Cool Astronomy Code! Now What Do You Do with It?
NASA Astrophysics Data System (ADS)
Allen, Alice; Accomazzi, A.; Berriman, G. B.; DuPrie, K.; Hanisch, R. J.; Mink, J. D.; Nemiroff, R. J.; Shamir, L.; Shortridge, K.; Taylor, M. B.; Teuben, P. J.; Wallin, J. F.
2014-01-01
Now that you've written a useful astronomy code for your soon-to-be-published research, you have to figure out what you want to do with it. Our suggestion? Share it! This presentation highlights the means and benefits of sharing your code. Make your code citable -- submit it to the Astrophysics Source Code Library and have it indexed by ADS! The Astrophysics Source Code Library (ASCL) is a free online registry of source codes of interest to astronomers and astrophysicists. With over 700 codes, it is continuing its rapid growth, with an average of 17 new codes a month. The editors seek out codes for inclusion; indexing by ADS improves the discoverability of codes and provides a way to cite codes as separate entries, especially codes without papers that describe them.
Box codes of lengths 48 and 72
NASA Technical Reports Server (NTRS)
Solomon, G.; Jin, Y.
1993-01-01
A self-dual code of length 48 and dimension 24, with Hamming distance essentially equal to 12, is constructed here. There are only six code words of weight eight. All the other code words have weights that are multiples of four and have a minimum weight equal to 12. This code may be encoded systematically and arises from a strict binary representation of the (8,4;5) Reed-Solomon (RS) code over GF(64). The code may be considered as six interrelated (8,7;2) codes. The Mattson-Solomon representation of the cyclic decomposition of these codes and their parity sums are used to detect an odd number of errors in any of the six codes. These may then be used in a correction algorithm for hard or soft decision decoding. A (72,36;15) box code was constructed from a (63,35;8) cyclic code. The theoretical justification is presented herein. A second (72,36;15) code is constructed from an inner (63,27;16) Bose-Chaudhuri-Hocquenghem (BCH) code and expanded to length 72 using box code algorithms for extension. This code was simulated and verified to have a minimum distance of 15 with even weight words congruent to zero modulo four. The decoding for hard and soft decision is still more complex than for the first code constructed above. Finally, an (8,4;5) RS code over GF(512) in the binary representation of the (72,36;15) box code gives rise to a (72,36;16*) code with nine words of weight eight, and all the rest have weights greater than or equal to 16.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1989-01-01
Two aspects of the work for NASA are examined: the construction of multi-dimensional phase modulation trellis codes and a performance analysis of these codes. A complete list of all the best trellis codes for use with phase modulation is included. LxMPSK signal constellations are included for M = 4, 8, and 16 and L = 1, 2, 3, and 4. Spectral efficiencies range from 1 bit/channel symbol (equivalent to rate 1/2 coded QPSK) to 3.75 bits/channel symbol (equivalent to 15/16 coded 16-PSK). The parity check polynomials, rotational invariance properties, free distance, path multiplicities, and coding gains are given for all codes. These codes are considered to be the best candidates for implementation of a high speed decoder for satellite transmission. The design of a hardware decoder for one of these codes, viz., the 16-state 3x8-PSK code with free distance 4.0 and coding gain 3.75 dB, is discussed. An exhaustive simulation study of the multi-dimensional phase modulation trellis codes is also presented. This study was motivated by the fact that coding gains quoted for almost all codes found in the literature are in fact only asymptotic coding gains, i.e., the coding gain at very high signal to noise ratios (SNRs) or very low BER. These asymptotic coding gains can be obtained directly from a knowledge of the free distance of the code. On the other hand, real coding gains at BERs in the range of 10^-2 to 10^-6, where these codes are most likely to operate in a concatenated system, must be determined by simulation.
Combinatorial neural codes from a mathematical coding theory perspective.
Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L
2013-07-01
Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
1975-09-01
This report assumes a familiarity with the GIFT and MAGIC computer codes. The EDIT-COMGEOM code is a FORTRAN computer code that converts the target description data used in the MAGIC computer code into the target description data that can be used in the GIFT computer code.
Impacts of Model Building Energy Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athalye, Rahul A.; Sivaraman, Deepak; Elliott, Douglas B.
The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states which have codes which are fundamentally different from the national model energy codes or which do not have state-wide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code's requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.
NASA Technical Reports Server (NTRS)
Rajpal, Sandeep; Rhee, Do Jun; Lin, Shu
1997-01-01
The first part of this paper presents a simple and systematic technique for constructing multidimensional M-ary phase shift keying (MPSK) trellis coded modulation (TCM) codes. The construction is based on a multilevel concatenation approach in which binary convolutional codes with good free branch distances are used as the outer codes and block MPSK modulation codes are used as the inner codes (or the signal spaces). Conditions on phase invariance of these codes are derived and a multistage decoding scheme for these codes is proposed. The proposed technique can be used to construct good codes for both the additive white Gaussian noise (AWGN) and fading channels, as is shown in the second part of this paper.
User's manual for Axisymmetric Diffuser Duct (ADD) code. Volume 1: General ADD code description
NASA Technical Reports Server (NTRS)
Anderson, O. L.; Hankins, G. B., Jr.; Edwards, D. E.
1982-01-01
This User's Manual contains a complete description of the computer codes known as the AXISYMMETRIC DIFFUSER DUCT code or ADD code. It includes a list of references which describe the formulation of the ADD code and comparisons of calculations with experimental flows. The input/output and general use of the code are described in the first volume. The second volume contains a detailed description of the code, including the global structure of the code, a list of FORTRAN variables, and descriptions of the subroutines. The third volume contains a detailed description of the CODUCT code, which generates coordinate systems for arbitrary axisymmetric ducts.
Patel, Mehul D; Rose, Kathryn M; Owens, Cindy R; Bang, Heejung; Kaufman, Jay S
2012-03-01
Occupational data are a common source of workplace exposure and socioeconomic information in epidemiologic research. We compared the performance of two occupation coding methods, an automated software program and a manual coder, using occupation and industry titles from U.S. historical records. We collected parental occupational data from 1920-40s birth certificates, Census records, and city directories on 3,135 deceased individuals in the Atherosclerosis Risk in Communities (ARIC) study. Unique occupation-industry narratives were assigned codes by a manual coder and by the Standardized Occupation and Industry Coding software program. We calculated agreement between coding methods of classification into major Census occupational groups. The automated coding software assigned codes to 71% of occupations and 76% of industries. Of this subset coded by software, 73% of occupation codes and 69% of industry codes matched between automated and manual coding. For major occupational groups, agreement improved to 89% (kappa = 0.86). Automated occupational coding is a cost-efficient alternative to manual coding; however, some manual coding is still required for incomplete information. We found substantial variability between coders in the assignment of detailed occupations, although less so for major groups.
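The agreement figures above combine raw percent agreement with Cohen's kappa, which discounts agreement expected by chance. A minimal Python sketch of both statistics follows; the toy labels are hypothetical and are not drawn from the ARIC data.

from collections import Counter

def agreement_and_kappa(codes_a, codes_b):
    """Percent agreement and Cohen's kappa for two coders' category labels."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    p_obs = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    cats = set(freq_a) | set(freq_b)
    p_exp = sum((freq_a[c] / n) * (freq_b[c] / n) for c in cats)  # chance agreement
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return p_obs, kappa

# Hypothetical major-group labels from an automated coder and a manual coder:
auto   = ["clerical", "farming", "service",  "clerical", "managerial"]
manual = ["clerical", "farming", "clerical", "clerical", "managerial"]
print(agreement_and_kappa(auto, manual))   # (0.8, ~0.71)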
Convolutional coding techniques for data protection
NASA Technical Reports Server (NTRS)
Massey, J. L.
1975-01-01
Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
Alternative Fuels Data Center: Codes and Standards Resources
Codes and standards resources, including Vehicle and Infrastructure Codes and Standards Charts for biodiesel, electric, ethanol, natural gas, and propane vehicles.
The study on dynamic cadastral coding rules based on kinship relationship
NASA Astrophysics Data System (ADS)
Xu, Huan; Liu, Nan; Liu, Renyi; Lu, Jingfeng
2007-06-01
Cadastral coding rules are an important supplement to the existing national and local standard specifications for building cadastral database. After analyzing the course of cadastral change, especially the parcel change with the method of object-oriented analysis, a set of dynamic cadastral coding rules based on kinship relationship corresponding to the cadastral change is put forward and a coding format composed of street code, block code, father parcel code, child parcel code and grandchild parcel code is worked out within the county administrative area. The coding rule has been applied to the development of an urban cadastral information system called "ReGIS", which is not only able to figure out the cadastral code automatically according to both the type of parcel change and the coding rules, but also capable of checking out whether the code is spatiotemporally unique before the parcel is stored in the database. The system has been used in several cities of Zhejiang Province and got a favorable response. This verifies the feasibility and effectiveness of the coding rules to some extent.
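As a rough illustration of the coding format described above, the sketch below concatenates street, block, father-parcel, child-parcel, and grandchild-parcel components and performs a uniqueness check before storage. The field widths and helper names are illustrative assumptions, not the ReGIS specification.

def build_parcel_code(street, block, father, child, grandchild):
    """Concatenate the five hierarchical components into one parcel code.
    The zero-padded field widths are illustrative assumptions only."""
    return f"{street:03d}{block:03d}{father:04d}{child:03d}{grandchild:03d}"

existing = {build_parcel_code(1, 2, 18, 0, 0)}   # the father parcel, already stored

new_code = build_parcel_code(1, 2, 18, 1, 0)     # first child parcel after a split
assert new_code not in existing                  # spatio-temporal uniqueness check
print(new_code)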
NASA Technical Reports Server (NTRS)
Lin, Shu; Rhee, Dojun; Rajpal, Sandeep
1993-01-01
This report presents a low-complexity and high performance concatenated coding scheme for high-speed satellite communications. In this proposed scheme, the NASA Standard Reed-Solomon (RS) code over GF(2(exp 8)) is used as the outer code and the second-order Reed-Muller (RM) code of Hamming distance 8 is used as the inner code. The RM inner code has a very simple trellis structure and is decoded with the soft-decision Viterbi decoding algorithm. It is shown that the proposed concatenated coding scheme achieves an error performance which is comparable to that of the NASA TDRS concatenated coding scheme in which the NASA Standard rate-1/2 convolutional code of constraint length 7 and d sub free = 10 is used as the inner code. However, the proposed RM inner code has much smaller decoding complexity, less decoding delay, and much higher decoding speed. Consequently, the proposed concatenated coding scheme is suitable for reliable high-speed satellite communications, and it may be considered as an alternate coding scheme for the NASA TDRS system.
Construction of self-dual codes in the Rosenbloom-Tsfasman metric
NASA Astrophysics Data System (ADS)
Krisnawati, Vira Hari; Nisa, Anzi Lina Ukhtin
2017-12-01
Linear codes are among the most basic and useful objects in coding theory. Generally, a linear code is a code over a finite field equipped with the Hamming metric. Among the most interesting families of codes, the family of self-dual codes is particularly important because it contains some of the best known error-correcting codes. The Hamming metric has been generalized to the Rosenbloom-Tsfasman metric (RT-metric). The inner product in the RT-metric differs from the Euclidean inner product used to define duality in the Hamming metric, and most codes that are self-dual in the Hamming metric are not self-dual in the RT-metric. The generator matrix is central to constructing a code because it contains a basis of the code. Therefore, in this paper we give some theorems and methods for constructing self-dual codes in the RT-metric by considering properties of the inner product and the generator matrix. We also illustrate each kind of construction with examples.
Two Perspectives on the Origin of the Standard Genetic Code
NASA Astrophysics Data System (ADS)
Sengupta, Supratim; Aggarwal, Neha; Bandhu, Ashutosh Vishwa
2014-12-01
The origin of a genetic code made it possible to create ordered sequences of amino acids. In this article we provide two perspectives on code origin by carrying out simulations of code-sequence coevolution in finite populations with the aim of examining how the standard genetic code may have evolved from more primitive code(s) encoding a small number of amino acids. We determine the efficacy of the physico-chemical hypothesis of code origin in the absence and presence of horizontal gene transfer (HGT) by allowing a diverse collection of code-sequence sets to compete with each other. We find that in the absence of horizontal gene transfer, natural selection between competing codes distinguished by differences in the degree of physico-chemical optimization is unable to explain the structure of the standard genetic code. However, for certain probabilities of the horizontal transfer events, a universal code emerges having a structure that is consistent with the standard genetic code.
NASA Astrophysics Data System (ADS)
Jellali, Nabiha; Najjar, Monia; Ferchichi, Moez; Rezig, Houria
2017-07-01
In this paper, a new two-dimensional spectral/spatial code family, named two-dimensional dynamic cyclic shift codes (2D-DCS), is introduced. The 2D-DCS codes are derived from the dynamic cyclic shift code for the spectral and spatial coding. The proposed system can fully eliminate the multiple access interference (MAI) by using the MAI cancellation property. The effects of shot noise, phase-induced intensity noise, and thermal noise are considered in analyzing the code performance. In comparison with existing two-dimensional (2D) codes, such as 2D perfect difference (2D-PD), 2D Extended Enhanced Double Weight (2D-Extended-EDW), and 2D hybrid (2D-FCC/MDW) codes, the numerical results show that our proposed codes have the best performance. By keeping the same code length and increasing the spatial code, the performance of our 2D-DCS system is enhanced: it provides higher data rates while using lower transmitted power and a smaller spectral width.
Chen, Chien P; Braunstein, Steve; Mourad, Michelle; Hsu, I-Chow J; Haas-Kogan, Daphne; Roach, Mack; Fogh, Shannon E
2015-01-01
Accurate International Classification of Diseases (ICD) diagnosis coding is critical for patient care, billing purposes, and research endeavors. In this single-institution study, we evaluated our baseline ICD-9 (9th revision) diagnosis coding accuracy, identified the most common errors contributing to inaccurate coding, and implemented a multimodality strategy to improve radiation oncology coding. We prospectively studied ICD-9 coding accuracy in our radiation therapy--specific electronic medical record system. Baseline ICD-9 coding accuracy was obtained from chart review targeting ICD-9 coding accuracy of all patients treated at our institution between March and June of 2010. To improve performance an educational session highlighted common coding errors, and a user-friendly software tool, RadOnc ICD Search, version 1.0, for coding radiation oncology specific diagnoses was implemented. We then prospectively analyzed ICD-9 coding accuracy for all patients treated from July 2010 to June 2011, with the goal of maintaining 80% or higher coding accuracy. Data on coding accuracy were analyzed and fed back monthly to individual providers. Baseline coding accuracy for physicians was 463 of 661 (70%) cases. Only 46% of physicians had coding accuracy above 80%. The most common errors involved metastatic cases, whereby primary or secondary site ICD-9 codes were either incorrect or missing, and special procedures such as stereotactic radiosurgery cases. After implementing our project, overall coding accuracy rose to 92% (range, 86%-96%). The median accuracy for all physicians was 93% (range, 77%-100%) with only 1 attending having accuracy below 80%. Incorrect primary and secondary ICD-9 codes in metastatic cases showed the most significant improvement (10% vs 2% after intervention). Identifying common coding errors and implementing both education and systems changes led to significantly improved coding accuracy. This quality assurance project highlights the potential problem of ICD-9 coding accuracy by physicians and offers an approach to effectively address this shortcoming. Copyright © 2015. Published by Elsevier Inc.
Spread Spectrum Visual Sensor Network Resource Management Using an End-to-End Cross-Layer Design
2011-02-01
Coding: In this work, we use rate-compatible punctured convolutional (RCPC) codes for channel coding [11]. Using RCPC codes allows us to utilize Viterbi's... [11] J. Hagenauer, "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE Trans. Commun., vol. 36, no. 4, pp. 389... source coding rate, a channel coding rate, and a power level to all nodes in the
New quantum codes derived from a family of antiprimitive BCH codes
NASA Astrophysics Data System (ADS)
Liu, Yang; Li, Ruihu; Lü, Liangdong; Guo, Luobin
The Bose-Chaudhuri-Hocquenghem (BCH) codes have been studied for more than 57 years and have found wide application in classical communication systems and quantum information theory. In this paper, we study the construction of quantum codes from a family of q^2-ary BCH codes with length n = q^(2m) + 1 (also called antiprimitive BCH codes in the literature), where q >= 4 is a power of 2 and m >= 2. By a detailed analysis of some useful properties of q^2-ary cyclotomic cosets modulo n, Hermitian dual-containing conditions for a family of non-narrow-sense antiprimitive BCH codes are presented, which are similar to those of q^2-ary primitive BCH codes. Consequently, via the Hermitian construction, a family of new quantum codes can be derived from these dual-containing BCH codes. Some of these new antiprimitive quantum BCH codes are comparable with those derived from primitive BCH codes.
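The q^2-ary cyclotomic cosets modulo n that drive the dual-containment analysis are straightforward to enumerate: the coset of s is the orbit of s under multiplication by q^2 modulo n. Below is a small Python sketch for the parameter family quoted above (q = 4, m = 2, so n = 257); it is an illustration of the bookkeeping, not the paper's derivation.

def cyclotomic_cosets(q2, n):
    """q^2-ary cyclotomic cosets modulo n: C_s = {s * q2^j mod n}."""
    seen, cosets = set(), []
    for s in range(n):
        if s in seen:
            continue
        coset, x = [], s
        while x not in coset:       # follow the orbit until it closes
            coset.append(x)
            x = (x * q2) % n
        cosets.append(sorted(coset))
        seen.update(coset)
    return cosets

q, m = 4, 2                         # example parameters from the family above
q2, n = q * q, q ** (2 * m) + 1     # q^2 = 16, n = q^(2m) + 1 = 257
cosets = cyclotomic_cosets(q2, n)
print(len(cosets), cosets[:3])      # 65 cosets; e.g. [0], [1, 16, 241, 256], ...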
Constructions for finite-state codes
NASA Technical Reports Server (NTRS)
Pollara, F.; Mceliece, R. J.; Abdel-Ghaffar, K.
1987-01-01
A class of codes called finite-state (FS) codes is defined and investigated. These codes, which generalize both block and convolutional codes, are defined by their encoders, which are finite-state machines with parallel inputs and outputs. A family of upper bounds on the free distance of a given FS code is derived from known upper bounds on the minimum distance of block codes. A general construction for FS codes is then given, based on the idea of partitioning a given linear block code into cosets of one of its subcodes, and it is shown that in many cases the FS codes constructed in this way have a d sub free which is as large as possible. These codes are found without the need for lengthy computer searches, and have potential applications for future deep-space coding systems. The issue of catastrophic error propagation (CEP) for FS codes is also investigated.
NASA Astrophysics Data System (ADS)
Kong, Gyuyeol; Choi, Sooyong
2017-09-01
An enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding is proposed for four-level holographic data storage systems. While the previous four-ary modulation codes focus on preventing maximum two-dimensional intersymbol interference patterns, the proposed four-ary modulation code aims at maximizing the coding gains for better bit error rate performances. For achieving significant coding gains from the four-ary modulation codes, we design a new 2/3 four-ary modulation code in order to enlarge the free distance on the trellis through extensive simulation. The free distance of the proposed four-ary modulation code is extended from 1.21 to 2.04 compared with that of the conventional four-ary modulation code. The simulation result shows that the proposed four-ary modulation code has more than 1 dB gains compared with the conventional four-ary modulation code.
Aiello, Francesco A; Judelson, Dejah R; Messina, Louis M; Indes, Jeffrey; FitzGerald, Gordon; Doucet, Danielle R; Simons, Jessica P; Schanzer, Andres
2016-08-01
Vascular surgery procedural reimbursement depends on accurate procedural coding and documentation. Despite the critical importance of correct coding, there has been a paucity of research focused on the effect of direct physician involvement. We hypothesize that direct physician involvement in procedural coding will lead to improved coding accuracy, increased work relative value unit (wRVU) assignment, and increased physician reimbursement. This prospective observational cohort study evaluated procedural coding accuracy of fistulograms at an academic medical institution (January-June 2014). All fistulograms were coded by institutional coders (traditional coding) and by a single vascular surgeon whose codes were verified by two institution coders (multidisciplinary coding). The coding methods were compared, and differences were translated into revenue and wRVUs using the Medicare Physician Fee Schedule. Comparison between traditional and multidisciplinary coding was performed for three discrete study periods: baseline (period 1), after a coding education session for physicians and coders (period 2), and after a coding education session with implementation of an operative dictation template (period 3). The accuracy of surgeon operative dictations during each study period was also assessed. An external validation at a second academic institution was performed during period 1 to assess and compare coding accuracy. During period 1, traditional coding resulted in a 4.4% (P = .004) loss in reimbursement and a 5.4% (P = .01) loss in wRVUs compared with multidisciplinary coding. During period 2, no significant difference was found between traditional and multidisciplinary coding in reimbursement (1.3% loss; P = .24) or wRVUs (1.8% loss; P = .20). During period 3, traditional coding yielded a higher overall reimbursement (1.3% gain; P = .26) than multidisciplinary coding. This increase, however, was due to errors by institution coders, with six inappropriately used codes resulting in a higher overall reimbursement that was subsequently corrected. Assessment of physician documentation showed improvement, with decreased documentation errors at each period (11% vs 3.1% vs 0.6%; P = .02). Overall, between period 1 and period 3, multidisciplinary coding resulted in a significant increase in additional reimbursement ($17.63 per procedure; P = .004) and wRVUs (0.50 per procedure; P = .01). External validation at a second academic institution was performed to assess coding accuracy during period 1. Similar to institution 1, traditional coding revealed an 11% loss in reimbursement ($13,178 vs $14,630; P = .007) and a 12% loss in wRVU (293 vs 329; P = .01) compared with multidisciplinary coding. Physician involvement in the coding of endovascular procedures leads to improved procedural coding accuracy, increased wRVU assignments, and increased physician reimbursement. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Gnjidic, Danijela; Pearson, Sallie-Anne; Hilmer, Sarah N; Basilakis, Jim; Schaffer, Andrea L; Blyth, Fiona M; Banks, Emily
2015-03-30
Increasingly, automated methods are being used to code free-text medication data, but evidence on the validity of these methods is limited. To examine the accuracy of automated coding of previously keyed in free-text medication data compared with manual coding of original handwritten free-text responses (the 'gold standard'). A random sample of 500 participants (475 with and 25 without medication data in the free-text box) enrolled in the 45 and Up Study was selected. Manual coding involved medication experts keying in free-text responses and coding using Anatomical Therapeutic Chemical (ATC) codes (i.e. chemical substance 7-digit level; chemical subgroup 5-digit; pharmacological subgroup 4-digit; therapeutic subgroup 3-digit). Using keyed-in free-text responses entered by non-experts, the automated approach coded entries using the Australian Medicines Terminology database and assigned corresponding ATC codes. Based on manual coding, 1377 free-text entries were recorded and, of these, 1282 medications were coded to ATCs manually. The sensitivity of automated coding compared with manual coding was 79% (n = 1014) for entries coded at the exact ATC level, and 81.6% (n = 1046), 83.0% (n = 1064) and 83.8% (n = 1074) at the 5, 4 and 3-digit ATC levels, respectively. The sensitivity of automated coding for blank responses was 100% compared with manual coding. Sensitivity of automated coding was highest for prescription medications and lowest for vitamins and supplements, compared with the manual approach. Positive predictive values for automated coding were above 95% for 34 of the 38 individual prescription medications examined. Automated coding for free-text prescription medication data shows very high to excellent sensitivity and positive predictive values, indicating that automated methods can potentially be useful for large-scale, medication-related research.
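The level-specific sensitivities reported above correspond to comparing automated and manual ATC codes after truncation to 5, 4, and 3 digits. A short Python sketch of that calculation follows; the example codes are hypothetical, and counting entries the software could not code as misses is an assumption about how the denominators were formed.

def sensitivity_at_level(auto_codes, manual_codes, digits):
    """Fraction of manually coded entries whose automated ATC code matches
    on the first `digits` characters (uncoded entries count as misses)."""
    hits = sum(
        a is not None and a[:digits] == m[:digits]
        for a, m in zip(auto_codes, manual_codes)
    )
    return hits / len(manual_codes)

# Hypothetical entries: manual ATC code vs. automated ATC code (None = uncoded).
manual = ["C09AA02", "A10BA02", "N02BE01", "C07AB07"]
auto   = ["C09AA02", "A10BA03", None,      "C07AB07"]
for d in (7, 5, 4, 3):
    print(d, sensitivity_at_level(auto, manual, d))   # 0.5, 0.75, 0.75, 0.75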
Investigation of Near Shannon Limit Coding Schemes
NASA Technical Reports Server (NTRS)
Kwatra, S. C.; Kim, J.; Mo, Fan
1999-01-01
Turbo codes can deliver performance that is very close to the Shannon limit. This report investigates algorithms for convolutional turbo codes and block turbo codes. Both coding schemes can achieve performance near the Shannon limit. The performance of the schemes is obtained using computer simulations. There are three sections in this report. The first section is the introduction, which discusses fundamental knowledge about coding, block coding, and convolutional coding. In the second section, the basic concepts of convolutional turbo codes are introduced and the performance of turbo codes, especially high rate turbo codes, is provided from the simulation results. After introducing all the parameters that help turbo codes achieve such good performance, it is concluded that the output weight distribution should be the main consideration in designing turbo codes. Based on the output weight distribution, performance bounds for turbo codes are given. Then, the relationships between the output weight distribution and factors such as the generator polynomial, the interleaver, and the puncturing pattern are examined. A criterion for the best selection of system components is provided. The puncturing pattern algorithm is discussed in detail, and different puncturing patterns are compared for each high rate. For most of the high rate codes, the puncturing pattern does not show any significant effect on the code performance if a pseudo-random interleaver is used in the system. For some special rate codes with poor performance, an alternative puncturing algorithm is designed which restores their performance close to the Shannon limit. Finally, in section three, for iterative decoding of block codes, the method of building a trellis for block codes, the structure of the iterative decoding system, and the calculation of extrinsic values are discussed.
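To make the role of a puncturing pattern concrete, the sketch below deletes selected output bits of a rate-1/2 mother code to reach rate 3/4 using a period-3 pattern that keeps 4 of every 6 mother-code bits. The specific pattern and bit stream are illustrative assumptions, not the patterns evaluated in the report.

def puncture(coded_bits, pattern):
    """Delete the mother-code output bits where the puncturing pattern is 0.
    `pattern` has one row per rate-1/2 output stream and is read column-wise."""
    period = len(pattern[0])
    out = []
    for i in range(0, len(coded_bits), 2):      # coded_bits holds interleaved pairs
        col = (i // 2) % period
        if pattern[0][col]:
            out.append(coded_bits[i])
        if pattern[1][col]:
            out.append(coded_bits[i + 1])
    return out

# 6 information bits -> 12 rate-1/2 coded bits (values are placeholders).
mother_out = [0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1]
punctured = puncture(mother_out, [[1, 1, 0], [1, 0, 1]])
print(len(punctured))    # 8 bits survive -> overall rate 6/8 = 3/4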
Audit of Clinical Coding of Major Head and Neck Operations
Mitra, Indu; Malik, Tass; Homer, Jarrod J; Loughran, Sean
2009-01-01
INTRODUCTION Within the NHS, operations are coded using the Office of Population Censuses and Surveys (OPCS) classification system. These codes, together with diagnostic codes, are used to generate Healthcare Resource Group (HRG) codes, which correlate to a payment bracket. The aim of this study was to determine whether allocated procedure codes for major head and neck operations were correct and reflective of the work undertaken. HRG codes generated were assessed to determine accuracy of remuneration. PATIENTS AND METHODS The coding of consecutive major head and neck operations undertaken in a tertiary referral centre over a retrospective 3-month period were assessed. Procedure codes were initially ascribed by professional hospital coders. Operations were then recoded by the surgical trainee in liaison with the head of clinical coding. The initial and revised procedure codes were compared and used to generate HRG codes, to determine whether the payment banding had altered. RESULTS A total of 34 cases were reviewed. The number of procedure codes generated initially by the clinical coders was 99, whereas the revised codes generated 146. Of the original codes, 47 of 99 (47.4%) were incorrect. In 19 of the 34 cases reviewed (55.9%), the HRG code remained unchanged, thus resulting in the correct payment. Six cases were never coded, equating to £15,300 loss of payment. CONCLUSIONS These results highlight the inadequacy of this system to reward hospitals for the work carried out within the NHS in a fair and consistent manner. The current coding system was found to be complicated, ambiguous and inaccurate, resulting in loss of remuneration. PMID:19220944
Reviewing the Challenges and Opportunities Presented by Code Switching and Mixing in Bangla
ERIC Educational Resources Information Center
Hasan, Md. Kamrul; Akhand, Mohd. Moniruzzaman
2015-01-01
This paper investigates the issues related to code-switching/code-mixing in an ESL context. Some preliminary data on Bangla-English code-switching/code-mixing has been analyzed in order to determine which structural pattern of code-switching/code-mixing is predominant in different social strata. This study also explores the relationship of…
Reviewing the Challenges and Opportunities Presented by Code Switching and Mixing in Bangla
ERIC Educational Resources Information Center
Hasan, Md. Kamrul; Akhand, Mohd. Moniruzzaman
2014-01-01
This paper investigates the issues related to code-switching/code-mixing in an ESL context. Some preliminary data on Bangla-English code-switching/code-mixing has been analyzed in order to determine which structural pattern of code-switching/code-mixing is predominant in different social strata. This study also explores the relationship of…
Number of minimum-weight code words in a product code
NASA Technical Reports Server (NTRS)
Miller, R. L.
1978-01-01
Consideration is given to the number of minimum-weight code words in a product code. The code is considered as a tensor product of linear codes over a finite field. Complete theorems and proofs are presented.
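A quick way to see the flavor of this result for a small binary example is to form the tensor (Kronecker) product of two generator matrices and enumerate codewords by brute force: the minimum weight of the product code is the product of the component minimum weights, and the count of minimum-weight words matches the product of the component counts, consistent with the result described above. The Python sketch below checks this for two toy codes; it is an illustration, not the paper's proof.

import itertools
import numpy as np

def codewords(G):
    """All codewords of the binary linear code with generator matrix G."""
    k = G.shape[0]
    return [np.mod(np.array(m) @ G, 2) for m in itertools.product([0, 1], repeat=k)]

def min_weight_count(G):
    """Minimum nonzero weight and the number of codewords attaining it."""
    wts = [int(c.sum()) for c in codewords(G) if c.any()]
    d = min(wts)
    return d, wts.count(d)

G1 = np.array([[1, 0, 1],
               [0, 1, 1]])        # [3,2,2] single-parity-check code
G2 = np.array([[1, 1, 1]])        # [3,1,3] repetition code
Gp = np.kron(G1, G2)              # generator of the product (tensor) code

d1, a1 = min_weight_count(G1)
d2, a2 = min_weight_count(G2)
dp, ap = min_weight_count(Gp)
print(d1 * d2 == dp, a1 * a2 == ap)   # True True for this example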
National Underground Mines Inventory
1983-10-01
system is well designed to minimize water accumulation on the drift levels. In many areas, sufficient water has accumulated to make the use of boots a... four characters designate Field office. 17-18 State Code Pic 99: FIPS code for state in which mine is located. 19-21 County Code Pic 999: FIPS code for... Designates a general product class based on SIC code. 28-29 Mine Type Pic 99: Metal/Nonmetal mine type code. Based on subunit operations code and canvass code
Measuring diagnoses: ICD code accuracy.
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-10-01
To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Main error sources along the "patient trajectory" include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the "paper trail" include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways.
Towers of generalized divisible quantum codes
NASA Astrophysics Data System (ADS)
Haah, Jeongwan
2018-04-01
A divisible binary classical code is one in which every code word has weight divisible by a fixed integer. If the divisor is 2^ν for a positive integer ν, then one can construct a Calderbank-Shor-Steane (CSS) code, where the X-stabilizer space is the divisible classical code, that admits a transversal gate in the ν-th level of the Clifford hierarchy. We consider a generalization of divisibility by allowing a coefficient vector of odd integers with which every code word has zero dot product modulo the divisor. In this generalized sense, we construct a CSS code with divisor 2^(ν+1) and code distance d from any CSS code of code distance d and divisor 2^ν where the transversal X is a nontrivial logical operator. The encoding rate of the new code is approximately d times smaller than that of the old code. In particular, for large d and ν >= 2, our construction yields a CSS code of parameters [[O(d^(ν-1)), Ω(d), d]] admitting a transversal gate at the ν-th level of the Clifford hierarchy. For our construction we introduce a conversion from magic state distillation protocols based on Clifford measurements to those based on codes with transversal T gates. Our tower contains, as a subclass, generalized triply even CSS codes that have appeared in so-called gauge fixing or code switching methods.
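The (generalized) divisibility condition is easy to test directly for a small classical code by enumerating its codewords, as in the hedged Python sketch below. The [8,4] extended Hamming code used as the example is doubly even (divisor 2^2), the kind of classical ingredient the CSS construction above relies on; the generator matrix and the all-ones coefficient vector are standard illustrative choices, not objects taken from this paper.

import itertools
import numpy as np

def is_divisible(G, divisor, coeffs=None):
    """Check generalized divisibility of a binary code: every codeword c must
    satisfy sum_i coeffs[i]*c[i] = 0 (mod divisor). With coeffs = all ones this
    is ordinary weight divisibility."""
    n = G.shape[1]
    coeffs = np.ones(n, dtype=int) if coeffs is None else np.asarray(coeffs)
    for m in itertools.product([0, 1], repeat=G.shape[0]):
        c = np.mod(np.array(m) @ G, 2)
        if int(coeffs @ c) % divisor != 0:
            return False
    return True

# [8,4] extended Hamming code: all codeword weights are 0, 4, or 8.
G = np.array([[1, 0, 0, 0, 0, 1, 1, 1],
              [0, 1, 0, 0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 0, 1],
              [0, 0, 0, 1, 1, 1, 1, 0]])
print(is_divisible(G, 4))   # True: divisible with divisor 2**2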
Multi-wavelength Radio Continuum Emission Studies of Dust-free Red Giants
NASA Technical Reports Server (NTRS)
O'Gorman, Eamon; Harper, Graham M.; Brown, Alexander; Drake, Stephen; Richards, Anita M. S.
2013-01-01
Multi-wavelength centimeter continuum observations of non-dusty, non-pulsating K spectral-type red giants directly sample their chromospheres and wind acceleration zones. Such stars are feeble emitters at these wavelengths, however, and previous observations have provided only a small number of modest signal-to-noise measurements slowly accumulated over three decades. We present multi-wavelength Karl G. Jansky Very Large Array thermal continuum observations of the wind acceleration zones of two dust-free red giants, Arcturus (alpha Boo: K2 III) and Aldebaran (alpha Tau: K5 III). Importantly, most of our observations of each star were carried out over just a few days, so that we obtained a snapshot of the different stellar atmospheric layers sampled at different wavelengths, independent of any long-term variability. We report the first detections at several wavelengths for each star including a detection at 10 cm (3.0 GHz: S band) for both stars and a 20 cm (1.5 GHz: L band) detection for alpha Boo. This is the first time single (non-binary) luminosity class III red giants have been detected at these continuum wavelengths. Our long-wavelength data sample the outer layers of alpha Boo's atmosphere where its wind velocity is approaching (or possibly has reached) its terminal value and the ionization balance is becoming frozen-in. For alpha Tau, however, our long-wavelength data are still sampling its inner atmosphere, where the wind is still accelerating probably due to its lower mass-loss rate. We compare our data with published semi-empirical models based on ultraviolet data, and the marked deviations highlight the need for new atmospheric models to be developed. Spectral indices are used to discuss the possible properties of the stellar atmospheres, and we find evidence for a rapidly cooling wind in the case of alpha Boo. Finally, we develop a simple analytical wind model for alpha Boo based on our new long-wavelength flux measurements.
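The spectral indices discussed above follow from pairs of flux-density measurements via S_nu proportional to nu**alpha. A minimal Python sketch is given below; the flux densities in the example are placeholder values, not the measurements reported for alpha Boo or alpha Tau.

import math

def spectral_index(s1_mjy, nu1_ghz, s2_mjy, nu2_ghz):
    """Radio spectral index alpha, defined through S_nu ∝ nu**alpha."""
    return math.log(s1_mjy / s2_mjy) / math.log(nu1_ghz / nu2_ghz)

# Hypothetical flux densities: 1.1 mJy at 10 GHz (X band) and 0.4 mJy at
# 3.0 GHz (S band); values are illustrative only.
print(f"alpha = {spectral_index(1.1, 10.0, 0.4, 3.0):.2f}")   # ~0.84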
Bauer, Greta R; Braimoh, Jessica; Scheim, Ayden I; Dharma, Christoffer
2017-01-01
Given that an estimated 0.6% of the U.S. population is transgender (trans) and that large health disparities for this population have been documented, government and research organizations are increasingly expanding measures of sex/gender to be trans inclusive. Options suggested for trans community surveys, such as expansive check-all-that-apply gender identity lists and write-in options that offer maximum flexibility, are generally not appropriate for broad population surveys. These require limited questions and a small number of categories for analysis. Limited evaluation has been undertaken of trans-inclusive population survey measures for sex/gender, including those currently in use. Using an internet survey and follow-up of 311 participants, and cognitive interviews from a maximum-diversity sub-sample (n = 79), we conducted a mixed-methods evaluation of two existing measures: a two-step question developed in the United States and a multidimensional measure developed in Canada. We found very low levels of item missingness, and no indicators of confusion on the part of cisgender (non-trans) participants for both measures. However, a majority of interview participants indicated problems with each question item set. Agreement between the two measures in assessment of gender identity was very high (K = 0.9081), but gender identity was a poor proxy for other dimensions of sex or gender among trans participants. Issues to inform measure development or adaptation that emerged from analysis included dimensions of sex/gender measured, whether non-binary identities were trans, Indigenous and cultural identities, proxy reporting, temporality concerns, and the inability of a single item to provide a valid measure of sex/gender. Based on this evaluation, we recommend that population surveys meant for multi-purpose analysis consider a new Multidimensional Sex/Gender Measure for testing that includes three simple items (one asked only of a small sub-group) to assess gender identity and lived gender, with optional additions. We provide considerations for adaptation of this measure to different contexts.
MULTI-WAVELENGTH RADIO CONTINUUM EMISSION STUDIES OF DUST-FREE RED GIANTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Gorman, Eamon; Harper, Graham M.; Brown, Alexander
2013-10-01
Multi-wavelength centimeter continuum observations of non-dusty, non-pulsating K spectral-type red giants directly sample their chromospheres and wind acceleration zones. Such stars are feeble emitters at these wavelengths, however, and previous observations have provided only a small number of modest signal-to-noise measurements slowly accumulated over three decades. We present multi-wavelength Karl G. Jansky Very Large Array thermal continuum observations of the wind acceleration zones of two dust-free red giants, Arcturus (α Boo: K2 III) and Aldebaran (α Tau: K5 III). Importantly, most of our observations of each star were carried out over just a few days, so that we obtained a snapshot of the different stellar atmospheric layers sampled at different wavelengths, independent of any long-term variability. We report the first detections at several wavelengths for each star including a detection at 10 cm (3.0 GHz: S band) for both stars and a 20 cm (1.5 GHz: L band) detection for α Boo. This is the first time single (non-binary) luminosity class III red giants have been detected at these continuum wavelengths. Our long-wavelength data sample the outer layers of α Boo's atmosphere where its wind velocity is approaching (or possibly has reached) its terminal value and the ionization balance is becoming frozen-in. For α Tau, however, our long-wavelength data are still sampling its inner atmosphere, where the wind is still accelerating probably due to its lower mass-loss rate. We compare our data with published semi-empirical models based on ultraviolet data, and the marked deviations highlight the need for new atmospheric models to be developed. Spectral indices are used to discuss the possible properties of the stellar atmospheres, and we find evidence for a rapidly cooling wind in the case of α Boo. Finally, we develop a simple analytical wind model for α Boo based on our new long-wavelength flux measurements.
Geoscience Diversity Experiential Simulations (GeoDES) Workshop Report
NASA Astrophysics Data System (ADS)
Houlton, H. R.; Chen, J.; Brown, B.; Samuels, D.; Brinkworth, C.
2017-12-01
The geosciences have to solve increasingly complex problems relating to earth and society, as resources become limited, natural hazards and changes in climate impact larger communities, and as people interacting with Earth become more interconnected. However, the profession has dismally low representation from geoscientists who are from diverse racial, ethnic, or socioeconomic backgrounds, as well as women in leadership roles. This underrepresentation also includes individuals whose gender identity/expression is non-binary or gender non-conforming, or those who have physical, cognitive, or emotional disabilities. This lack of diversity ultimately impacts our profession's ability to produce our best science and work with the communities that we strive to protect and serve as stewards of the earth. As part of the NSF GOLD solicitation, we developed a project (Geoscience Diversity Experiential Simulations) to train 30 faculty and administrators to be "champions for diversity" and combat the hostile climates in geoscience departments. We hosted a 3-day workshop in November that used virtual simulations to give participants experience in building the skills to react to situations regarding bias, discrimination, microaggressions, or bullying often cited in geoscience culture. Participants interacted with avatars on screen, who responded to participants' actions and choices, given certain scenarios. The scenarios are framed within a geoscience perspective; we integrated qualitative interview data from informants who experienced inequitable judgement, bias, discrimination, or harassment during their geoscience careers. The simulations gave learners a safe environment to practice and build self-efficacy in how to professionally and productively engage peers in difficult conversations. In addition, we obtained pre-workshop survey data about participants' understanding regarding Diversity, Equity, and Inclusion practices, as well as observation data of participants' responses during the simulations. Follow-up activities include monthly online meetings to engage problem solving and strategy-building skills for catalyzing institutional culture change within departments. This talk will specifically focus on workshop observations and preliminary reactions to the training.
Teaching queer theory at a Normal School.
Bacon, Jen
2006-01-01
This article presents a case study of the ongoing struggle to queer West Chester University at the level of the institution, the curriculum, and the classroom. Part of that struggle includes an effort to establish a policy for free speech that accommodates the values of the institution toward diversity. Another part involves attempts to introduce LGBT Studies into the curriculum, and the resulting debates over whether the curriculum should be "gayer" or "queerer." I discuss the personal struggle to destabilize ready-made categories and encourage non-binary thinking, while honoring the identities we live, and perform, in the classroom. In the last four years, WCU has hired half a dozen out gay or lesbian faculty members, some of whom identify as "queer." In many ways, those faculty members have entered a climate open to new ideas for adding LGBT content to the curriculum and to queering the structure and curriculum of the university. But as faculty, staff, and students engage this cause, along with the broader cause of social justice at the University, we have found that our enemies are often closer than we might have guessed. Detailing the tensions that have characterized the landscape at WCU during my three and a half years there, this essay elaborates on the epistemological and pedagogical issues that arise when queer theory meets LGBT Studies in the process of institutional, curricular, and pedagogical reform. I argue that questions about content and method, inclusion and exclusion, and identity and performance can be answered only with a concerted effort and continued attention to the cultural tendency to re-assert binaries while simultaneously learning from them. What is true of West Chester, I argue, is true of the larger social system where the contested terrain of the queer has implications for the choices we make as both stakeholders and deviants in the systems we chronicle and critique.
Bauer, Greta R.; Braimoh, Jessica; Scheim, Ayden I.; Dharma, Christoffer
2017-01-01
Given that an estimated 0.6% of the U.S. population is transgender (trans) and that large health disparities for this population have been documented, government and research organizations are increasingly expanding measures of sex/gender to be trans inclusive. Options suggested for trans community surveys, such as expansive check-all-that-apply gender identity lists and write-in options that offer maximum flexibility, are generally not appropriate for broad population surveys. These require limited questions and a small number of categories for analysis. Limited evaluation has been undertaken of trans-inclusive population survey measures for sex/gender, including those currently in use. Using an internet survey and follow-up of 311 participants, and cognitive interviews from a maximum-diversity sub-sample (n = 79), we conducted a mixed-methods evaluation of two existing measures: a two-step question developed in the United States and a multidimensional measure developed in Canada. We found very low levels of item missingness, and no indicators of confusion on the part of cisgender (non-trans) participants for both measures. However, a majority of interview participants indicated problems with each question item set. Agreement between the two measures in assessment of gender identity was very high (K = 0.9081), but gender identity was a poor proxy for other dimensions of sex or gender among trans participants. Issues to inform measure development or adaptation that emerged from analysis included dimensions of sex/gender measured, whether non-binary identities were trans, Indigenous and cultural identities, proxy reporting, temporality concerns, and the inability of a single item to provide a valid measure of sex/gender. Based on this evaluation, we recommend that population surveys meant for multi-purpose analysis consider a new Multidimensional Sex/Gender Measure for testing that includes three simple items (one asked only of a small sub-group) to assess gender identity and lived gender, with optional additions. We provide considerations for adaptation of this measure to different contexts. PMID:28542498
Winskell, Kate; Sabben, Gaëlle
2016-07-01
There is growing recognition of the health disparities faced by sexual minority populations and the critical role played by sexual stigma in increasing their vulnerability. Experienced, anticipated, and internalized, stigma based on sexual orientation reduces access to HIV/STI prevention and treatment services among African men who have sex with men and has been linked to compromised mental health, risk-taking, and HIV status. It is likely that similar processes undermine the health of sexual minority African women and transgender and non-binary people. There is a need for increased understanding of both the contextual factors and the cultural meanings, or symbolic violence, that inform sexual stigma and harmful stigma management strategies in contexts that are culturally and socio-politically oppressive for sexual and gender minorities. Using thematic data analysis and narrative-based methodologies, we analyzed narratives and essays on same-sex attraction contributed by young people aged 13-24 from ten African countries to a Spring 2013 scriptwriting competition on HIV, sexuality, and related themes. Submitted by 27 male and 29 female authors, the texts were written in response to a prompt inviting participants to "Tell a story about someone who is attracted to people of the same sex". We analyzed the ways in which sexual stigma and its effects are described, enacted, and counteracted in the texts. The data provide insights into the social and symbolic processes that create and sustain sexual stigma in the context of broader transnational discourses. The data shed light on psychosocial challenges faced by sexual minority youth and identify both rhetoric, stereotypes, and discourse that devalue them and representations that counteract this symbolic violence. We share our findings in the hope they may inform education and communication programming as part of multi-level efforts to improve the health and human rights of sexual minority populations in sub-Saharan Africa. Copyright © 2016 Elsevier Ltd. All rights reserved.
Coordinated design of coding and modulation systems
NASA Technical Reports Server (NTRS)
Massey, J. L.; Ancheta, T.; Johannesson, R.; Lauer, G.; Lee, L.
1976-01-01
The joint optimization of the coding and modulation systems employed in telemetry systems was investigated. Emphasis was placed on formulating inner and outer coding standards used by the Goddard Spaceflight Center. Convolutional codes were found that are nearly optimum for use with Viterbi decoding in the inner coding of concatenated coding systems. A convolutional code, the unit-memory code, was discovered and is ideal for inner system usage because of its byte-oriented structure. Simulations of sequential decoding on the deep-space channel were carried out to compare directly various convolutional codes that are proposed for use in deep-space systems.
NASA Technical Reports Server (NTRS)
Dolinar, S.
1988-01-01
Over the past six to eight years, an extensive research effort was conducted to investigate advanced coding techniques which promised to yield more coding gain than is available with current NASA standard codes. The delay in Galileo's launch due to the temporary suspension of the shuttle program provided the Galileo project with an opportunity to evaluate the possibility of including some version of the advanced codes as a mission enhancement option. A study was initiated last summer to determine if substantial coding gain was feasible for Galileo and, if so, to recommend a suitable experimental code for use as a switchable alternative to the current NASA-standard code. The Galileo experimental code study resulted in the selection of a code with constraint length 15 and rate 1/4. The code parameters were chosen to optimize performance within cost and risk constraints consistent with retrofitting the new code into the existing Galileo system design and launch schedule. The particular code was recommended after a very limited search among good codes with the chosen parameters. It will theoretically yield about 1.5 dB enhancement under idealizing assumptions relative to the current NASA-standard code at Galileo's desired bit error rates. This ideal predicted gain includes enough cushion to meet the project's target of at least 1 dB enhancement under real, non-ideal conditions.
ERIC Educational Resources Information Center
Adkins, Megan; Wajciechowski, Misti R.; Scantling, Ed
2013-01-01
Quick response codes, better known as QR codes, are small barcodes scanned to receive information about a specific topic. This article explains QR code technology and the utility of QR codes in the delivery of physical education instruction. Consideration is given to how QR codes can be used to accommodate learners of varying ability levels as…
The purpose of this SOP is to define the coding strategy for coding and coding verification of hand-entered data. It applies to the coding of all physical forms, especially those coded by hand. The strategy was developed for use in the Arizona NHEXAS project and the "Border" st...
Hyperbolic and semi-hyperbolic surface codes for quantum storage
NASA Astrophysics Data System (ADS)
Breuckmann, Nikolas P.; Vuillot, Christophe; Campbell, Earl; Krishna, Anirudh; Terhal, Barbara M.
2017-09-01
We show how a hyperbolic surface code could be used for overhead-efficient quantum storage. We give numerical evidence for a noise threshold of 1.3% for the {4,5}-hyperbolic surface code in a phenomenological noise model (as compared with 2.9% for the toric code). In this code family, parity checks are of weight 4 and 5, while each qubit participates in four different parity checks. We introduce a family of semi-hyperbolic codes that interpolate between the toric code and the {4,5}-hyperbolic surface code in terms of encoding rate and threshold. We show how these hyperbolic codes outperform the toric code in terms of qubit overhead for a target logical error probability. We show how Dehn twists and lattice code surgery can be used to read and write individual qubits to this quantum storage medium.
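For orientation, the encoding-rate trade-off mentioned above can be tied to the tiling parameters: an Euler-characteristic count for a {r,s} tiling of a closed hyperbolic surface gives an asymptotic rate k/n that approaches 1 - 2/r - 2/s, which is 1/10 for the {4,5} code and 0 for the toric ({4,4}) code. The Python sketch below simply evaluates this standard expression; it is a back-of-the-envelope check, not the paper's finite-size analysis.

def asymptotic_rate(r, s):
    """Asymptotic encoding rate k/n of a {r,s} hyperbolic surface code,
    from the Euler characteristic of the tiled closed surface:
    k/n -> 1 - 2/r - 2/s as the number of physical qubits n grows."""
    return 1.0 - 2.0 / r - 2.0 / s

print(asymptotic_rate(4, 5))   # 0.1 for the {4,5} code discussed above
print(asymptotic_rate(4, 4))   # 0.0: the toric code encodes a constant number
                               # of qubits, so its rate vanishes with size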
Convolutional coding combined with continuous phase modulation
NASA Technical Reports Server (NTRS)
Pizzi, S. V.; Wilson, S. G.
1985-01-01
Background theory and specific coding designs for combined coding/modulation schemes utilizing convolutional codes and continuous-phase modulation (CPM) are presented. In this paper the case of r = 1/2 coding onto a 4-ary CPM is emphasized, with short-constraint length codes presented for continuous-phase FSK, double-raised-cosine, and triple-raised-cosine modulation. Coding buys several decibels of coding gain over the Gaussian channel, with an attendant increase of bandwidth. Performance comparisons in the power-bandwidth tradeoff with other approaches are made.
PNPN Latchup in Bipolar LSI Devices.
1982-01-01
Multi-level trellis coded modulation and multi-stage decoding
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Wu, Jiantian; Lin, Shu
1990-01-01
Several constructions for multi-level trellis codes are presented and many codes with better performance than previously known codes are found. These codes provide a flexible trade-off between coding gain, decoding complexity, and decoding delay. New multi-level trellis coded modulation schemes using generalized set partitioning methods are developed for Quadrature Amplitude Modulation (QAM) and Phase Shift Keying (PSK) signal sets. New rotationally invariant multi-level trellis codes which can be combined with differential encoding to resolve phase ambiguity are presented.
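To illustrate the set-partitioning idea that underlies these multi-level constructions, the sketch below partitions a 16-QAM constellation into 2 and then 4 subsets and reports the intra-subset minimum distance, which grows by a factor of sqrt(2) at each level. The lattice-coset labels are the usual Ungerboeck-style choice, included here as an illustrative assumption rather than the paper's particular partitioning.

import itertools
import numpy as np

def min_dist(points):
    """Minimum Euclidean distance within a set of constellation points."""
    return min(np.linalg.norm(np.subtract(a, b))
               for a, b in itertools.combinations(points, 2))

# 16-QAM constellation with coordinates in {-3, -1, 1, 3} (spacing 2).
qam16 = [(x, y) for x in (-3, -1, 1, 3) for y in (-3, -1, 1, 3)]

# Coset labels for successive partition levels.
labels = [
    lambda p: 0,                                                        # level 0: whole set
    lambda p: ((p[0] + p[1]) // 2) % 2,                                 # level 1: 2 subsets
    lambda p: (((p[0] + p[1]) // 2) % 2, ((p[0] - p[1]) // 2) % 2),     # level 2: 4 subsets
]

for lvl, lab in enumerate(labels):
    groups = {}
    for p in qam16:
        groups.setdefault(lab(p), []).append(p)
    d = min(min_dist(g) for g in groups.values())
    print(f"level {lvl}: {len(groups)} subsets, intra-subset d_min = {d:.3f}")
# Prints 2.000, 2.828, 4.000: the minimum distance grows by sqrt(2) per level.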
What to do with a Dead Research Code
NASA Astrophysics Data System (ADS)
Nemiroff, Robert J.
2016-01-01
The project has ended -- should all of the computer codes that enabled the project be deleted? No. Like research papers, research codes typically carry valuable information past project end dates. Several possible end states to the life of research codes are reviewed. Historically, codes are typically left dormant on an increasingly obscure local disk directory until forgotten. These codes will likely become any or all of: lost, impossible to compile and run, difficult to decipher, and deleted when the code's proprietor moves on or dies. It is argued here, though, that it would be better for both code authors and astronomy generally if project codes were archived after use in some way. Archiving is advantageous for code authors because archived codes might increase the author's ADS citable publications, while astronomy as a science gains transparency and reproducibility. Paper-specific codes should be included in the publication of the journal papers they support, just like figures and tables. General codes that support multiple papers, possibly written by multiple authors, including their supporting websites, should be registered with a code registry such as the Astrophysics Source Code Library (ASCL). Codes developed on GitHub can be archived with a third party service such as, currently, BackHub. An important code version might be uploaded to a web archiving service like, currently, Zenodo or Figshare, so that this version receives a Digital Object Identifier (DOI), enabling it to be found at a stable address into the future. Similar archiving services that are not DOI-dependent include perma.cc and the Internet Archive Wayback Machine at archive.org. Perhaps most simply, copies of important codes with lasting value might be kept on a cloud service like, for example, Google Drive, while activating Google's Inactive Account Manager.
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1976-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
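As a toy illustration of the syndrome-source-coding idea, the sketch below compresses a 7-bit source block to its 3-bit syndrome under the [7,4] Hamming parity-check matrix; decompression maps the syndrome back to the minimum-weight (most probable) pattern, exactly as a Hamming decoder maps a syndrome to an error location. Exact recovery here holds only for blocks of weight at most one, which is a simplifying assumption of this example rather than the paper's universal scheme.

import numpy as np

# Parity-check matrix of the [7,4] Hamming code; column j is j in binary (LSB in row 0).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def compress(source_block):
    """Treat the 7-bit source block as an 'error pattern'; its 3-bit syndrome
    is the compressed data."""
    return np.mod(H @ source_block, 2)

def decompress(syndrome):
    """Map the syndrome back to the unique pattern of weight <= 1 producing it."""
    block = np.zeros(7, dtype=int)
    col = int("".join(map(str, syndrome[::-1])), 2)   # syndrome read as a column index
    if col:
        block[col - 1] = 1
    return block

src = np.array([0, 0, 0, 0, 1, 0, 0])     # sparse source block (weight <= 1)
print(compress(src))                       # 3 compressed digits for 7 source digits
print(decompress(compress(src)))           # recovers the block exactly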
NASA Astrophysics Data System (ADS)
He, Lirong; Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting
2015-03-01
Coded exposure photography makes motion deblurring a well-posed problem. In coded exposure, the integration pattern of light is modulated by opening and closing the shutter within the exposure time, changing the traditional shutter frequency spectrum into a wider frequency band in order to preserve more image information in the frequency domain. The method used to search for the optimal code is important for coded exposure. In this paper, an improved criterion for optimal code search is proposed by analyzing the relationship between the code length and the number of ones in the code, and by considering the effect of noise on code selection with an affine noise model. The optimal code is then obtained using a genetic search algorithm based on the proposed selection criterion. Experimental results show that the time required to search for the optimal code decreases with the presented method, and the restored image has better subjective quality and superior objective evaluation values.
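A common way to score a candidate shutter code is by the worst-case attenuation of its discrete Fourier transform, since spectral nulls make deblurring ill-posed. The Python sketch below scores random candidate codes this way and compares the best one against a traditional box (fully open) exposure; the random search stands in for the genetic algorithm described above, and the code length and number of open chops are illustrative assumptions.

import numpy as np

def code_score(code):
    """Score a binary exposure code by the minimum magnitude of its DFT:
    larger is better for a well-conditioned deblurring problem."""
    spectrum = np.abs(np.fft.fft(np.asarray(code, dtype=float)))
    return spectrum.min()

rng = np.random.default_rng(0)
length, ones = 32, 16              # illustrative code length and number of 'open' chops

best, best_score = None, -1.0
for _ in range(2000):              # random search as a stand-in for the genetic search
    code = np.zeros(length, dtype=int)
    code[rng.choice(length, size=ones, replace=False)] = 1
    s = code_score(code)
    if s > best_score:
        best, best_score = code, s

box = np.r_[np.ones(ones), np.zeros(length - ones)]   # traditional open-shutter exposure
print(best_score, code_score(box))   # the searched code avoids the box filter's nulls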
Evaluation of the efficiency and fault density of software generated by code generators
NASA Technical Reports Server (NTRS)
Schreur, Barbara
1993-01-01
Flight computers and flight software are used for GN&C (guidance, navigation, and control), engine controllers, and avionics during missions. The software development requires the generation of a considerable amount of code. The engineers who generate the code make mistakes, and the generation of a large body of code with high reliability requires considerable time. Computer-aided software engineering (CASE) tools are available which generate code automatically from inputs supplied through graphical interfaces. These tools are referred to as code generators. In theory, code generators could write highly reliable code quickly and inexpensively. The various code generators offer different levels of reliability checking. Some check only the finished product while some allow checking of individual modules and combined sets of modules as well. Considering NASA's requirement for reliability, in-house manually generated code is needed. Furthermore, automatically generated code is reputed to be as efficient as the best manually generated code when executed. In-house verification is warranted.
Simple scheme for encoding and decoding a qubit in unknown state for various topological codes
Łodyga, Justyna; Mazurek, Paweł; Grudka, Andrzej; Horodecki, Michał
2015-01-01
We present a scheme for encoding and decoding an unknown state for CSS codes, based on syndrome measurements. We illustrate our method by means of the Kitaev toric code, the defected-lattice code, the topological subsystem code and the 3D Haah code. The protocol is local whenever, in a given code, the crossings between the logical operators consist of nearest-neighbour pairs, which holds for the above codes. For the subsystem code we also present a scheme for the noisy case, where we allow for bit- and phase-flip errors on qubits as well as state preparation and syndrome measurement errors. A similar scheme can be built for the two other codes. We show that the fidelity of the protected qubit in the noisy scenario in the large code size limit is of , where p is the probability of error on a single qubit per time step. Regarding the Haah code we provide a noiseless scheme, leaving the noisy case as an open problem. PMID:25754905
Measuring Diagnoses: ICD Code Accuracy
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-01-01
Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999
Long distance quantum communication with quantum Reed-Solomon codes
NASA Astrophysics Data System (ADS)
Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang; Jianggroup Team
We study the construction of quantum Reed-Solomon codes from classical Reed-Solomon codes and show that they achieve the capacity of the quantum erasure channel for multi-level quantum systems. We extend the application of quantum Reed-Solomon codes to long distance quantum communication, investigate the local resource overhead needed for the functioning of one-way quantum repeaters with these codes, and numerically identify the parameter regime where these codes perform better than the known quantum polynomial codes and quantum parity codes. Finally, we discuss the implementation of these codes into time-bin photonic states of qubits and qudits, respectively, and optimize the performance for one-way quantum repeaters.
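As background, the sketch below shows classical Reed-Solomon encoding by polynomial evaluation, using a small prime field and made-up parameters purely for illustration; the quantum constructions in the abstract build on such classical codes but are not reproduced here.

```python
# Minimal classical Reed-Solomon encoder by polynomial evaluation over a
# prime field GF(p).  Illustrative only: practical RS codes (and the
# quantum codes derived from them) usually work over GF(2^m).
p = 7                       # field size (prime, chosen for simplicity)
n, k = 6, 3                 # code length n <= p - 1, dimension k

def rs_encode(message):
    """Evaluate the degree-(k-1) message polynomial at n distinct points."""
    assert len(message) == k
    points = range(1, n + 1)
    return [sum(m * pow(x, i, p) for i, m in enumerate(message)) % p
            for x in points]

print(rs_encode([2, 5, 1]))   # any k received symbols determine the codeword
```

Because any k evaluations of a degree-(k-1) polynomial determine it, the code corrects up to n - k erasures, which is the property the erasure-channel capacity argument relies on.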
Transversal Clifford gates on folded surface codes
Moussa, Jonathan E.
2016-10-12
Surface and color codes are two forms of topological quantum error correction in two spatial dimensions with complementary properties. Surface codes have lower-depth error detection circuits and well-developed decoders to interpret and correct errors, while color codes have transversal Clifford gates and better code efficiency in the number of physical qubits needed to achieve a given code distance. A formal equivalence exists between color codes and folded surface codes, but it does not guarantee the transferability of any of these favorable properties. However, the equivalence does imply the existence of constant-depth circuit implementations of logical Clifford gates on folded surface codes. We achieve and improve this result by constructing two families of folded surface codes with transversal Clifford gates. This construction is presented generally for qudits of any dimension. Lastly, the specific application of these codes to universal quantum computation based on qubit fusion is also discussed.
Generalized type II hybrid ARQ scheme using punctured convolutional coding
NASA Astrophysics Data System (ADS)
Kallel, Samir; Haccoun, David
1990-11-01
A method is presented to construct rate-compatible convolutional (RCC) codes from known high-rate punctured convolutional codes, obtained from the best rate-1/2 codes. The construction method is rather simple and straightforward, and still yields good codes. Moreover, low-rate codes can be obtained without any limit on the lowest achievable code rate. Based on the RCC codes, a generalized type-II hybrid ARQ scheme, which combines the benefits of the modified type-II hybrid ARQ strategy of Hagenauer (1988) with the code-combining ARQ strategy of Chase (1985), is proposed and analyzed. With the proposed generalized type-II hybrid ARQ strategy, the throughput increases as the starting coding rate increases; as the channel degrades, it tends to merge with the throughput of rate-1/2 type-II hybrid ARQ schemes with code combining, thus allowing the system to be flexible and adaptive to channel conditions, even under wide noise variations and severe degradations.
Comparison of space radiation calculations for deterministic and Monte Carlo transport codes
NASA Astrophysics Data System (ADS)
Lin, Zi-Wei; Adams, James; Barghouty, Abdulnasser; Randeniya, Sharmalee; Tripathi, Ram; Watts, John; Yepes, Pablo
For space radiation protection of astronauts or electronic equipment, it is necessary to develop and use accurate radiation transport codes. Radiation transport codes include deterministic codes, such as HZETRN from NASA and UPROP from the Naval Research Laboratory, and Monte Carlo codes such as FLUKA, the Geant4 toolkit and HETC-HEDS. The deterministic codes and Monte Carlo codes complement each other in that deterministic codes are very fast while Monte Carlo codes are more elaborate. Therefore it is important to investigate how well the results of deterministic codes compare with those of Monte Carlo transport codes and where they differ. In this study we evaluate these different codes in their space radiation applications by comparing their output results in the same given space radiation environments, shielding geometry and material. Typical space radiation environments such as the 1977 solar minimum galactic cosmic ray environment are used as the well-defined input, and simple geometries made of aluminum, water and/or polyethylene are used to represent the shielding material. We then compare various outputs of these codes, such as the dose-depth curves and the flux spectra of different fragments and other secondary particles. These comparisons enable us to learn more about the main differences between these space radiation transport codes. At the same time, they help us to learn the qualitative and quantitative features that these transport codes have in common.
Methodology, status and plans for development and assessment of Cathare code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bestion, D.; Barre, F.; Faydide, B.
1997-07-01
This paper presents the methodology, status and plans for the development, assessment and uncertainty evaluation of the Cathare code. Cathare is a thermalhydraulic code developed by CEA (DRN), IPSN, EDF and FRAMATOME for PWR safety analysis. First, the status of the code development and assessment is presented. The general strategy used for the development and the assessment of the code is presented. Analytical experiments with separate effect tests and component tests are used for the development and the validation of closure laws. Successive Revisions of constitutive laws are implemented in successive Versions of the code and assessed. System tests or integral tests are used to validate the general consistency of the Revision. Each delivery of a code Version + Revision is fully assessed and documented. A methodology is being developed to determine the uncertainty on all constitutive laws of the code using calculations of many analytical tests and applying the Discrete Adjoint Sensitivity Method (DASM). Finally, the plans for future developments of the code are presented. They concern the optimization of code performance through parallel computing (the code will be used for real-time full-scope plant simulators), the coupling with many other codes (neutronic codes, severe accident codes), and the application of the code to containment thermalhydraulics. Also, physical improvements are required in the field of low pressure transients and in the modeling for the 3-D model.
Accuracy of external cause-of-injury coding in VA polytrauma patient discharge records.
Carlson, Kathleen F; Nugent, Sean M; Grill, Joseph; Sayer, Nina A
2010-01-01
Valid and efficient methods of identifying the etiology of treated injuries are critical for characterizing patient populations and developing prevention and rehabilitation strategies. We examined the accuracy of external cause-of-injury codes (E-codes) in Veterans Health Administration (VHA) administrative data for a population of injured patients. Chart notes and E-codes were extracted for 566 patients treated at any one of four VHA Polytrauma Rehabilitation Center sites between 2001 and 2006. Two expert coders, blinded to VHA E-codes, used chart notes to assign "gold standard" E-codes to injured patients. The accuracy of VHA E-coding was examined based on these gold standard E-codes. Only 382 of 517 (74%) injured patients were assigned E-codes in VHA records. Sensitivity of VHA E-codes varied significantly by site (range: 59%-91%, p < 0.001). Sensitivity was highest for combat-related injuries (81%) and lowest for fall-related injuries (60%). Overall specificity of E-codes was high (92%). E-coding accuracy was markedly higher when we restricted analyses to records that had been assigned VHA E-codes. E-codes may not be valid for ascertaining source-of-injury data for all injuries among VHA rehabilitation inpatients at this time. Enhanced training and policies may ensure more widespread, standardized use and accuracy of E-codes for injured veterans treated in the VHA.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-26
... for Residential Construction in High Wind Regions. ICC 700: National Green Building Standard The..., coordinated, and necessary to regulate the built environment. Federal agencies frequently use these codes and... International Codes and Standards consist of the following: ICC Codes International Building Code. International...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-16
... for Residential Construction in High Wind Areas. ICC 700: National Green Building Standard. The... Codes and Standards that are comprehensive, coordinated, and necessary to regulate the built environment... International Codes and Standards consist of the following: ICC Codes International Building Code. International...
ERIC Educational Resources Information Center
New Mexico Univ., Albuquerque. American Indian Law Center.
The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…
Facilitating Internet-Scale Code Retrieval
ERIC Educational Resources Information Center
Bajracharya, Sushil Krishna
2010-01-01
Internet-Scale code retrieval deals with the representation, storage, and access of relevant source code from a large amount of source code available on the Internet. Internet-Scale code retrieval systems support common emerging practices among software developers related to finding and reusing source code. In this dissertation we focus on some…
47 CFR 15.214 - Cordless telephones.
Code of Federal Regulations, 2010 CFR
2010-10-01
... discrete digital codes. Factory-set codes must be continuously varied over at least 256 possible codes as... readily select from among at least 256 possible discrete digital codes. The cordless telephone shall be... fixed code that is continuously varied among at least 256 discrete digital codes as each telephone is...
28 CFR 36.608 - Guidance concerning model codes.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Guidance concerning model codes. 36.608... Codes § 36.608 Guidance concerning model codes. Upon application by an authorized representative of a... relevant model code and issue guidance concerning whether and in what respects the model code is consistent...
Coded mask telescopes for X-ray astronomy
NASA Astrophysics Data System (ADS)
Skinner, G. K.; Ponman, T. J.
1987-04-01
The principles of the coded mask technique are discussed together with methods of image reconstruction. The coded mask telescopes built at the University of Birmingham, including the SL 1501 coded mask X-ray telescope flown on the Skylark rocket and the Coded Mask Imaging Spectrometer (COMIS) projected for the Soviet space station Mir, are described. A diagram of a coded mask telescope and some designs for coded masks are included.
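As a toy illustration of the coded-mask principle and correlation-based image reconstruction, the following one-dimensional sketch uses an arbitrary pseudo-random mask and two hypothetical point sources; real instruments such as those described use two-dimensional mask patterns (e.g. uniformly redundant arrays).

```python
import numpy as np

# Toy 1-D illustration of coded-mask imaging and correlation decoding.
# The pseudo-random pattern and the two point sources are made up.
rng = np.random.default_rng(1)
mask = rng.integers(0, 2, 63).astype(float)   # open (1) / opaque (0) cells
sources = {10: 5.0, 40: 2.0}                  # position -> flux

# Each point source casts a shifted shadow of the mask on the detector.
detector = sum(flux * np.roll(mask, pos) for pos, flux in sources.items())

# Cross-correlating with a balanced decoding array (2*mask - 1) recovers
# peaks at the source positions, up to a roughly flat background.
decoder = 2 * mask - 1
image = np.array([np.dot(detector, np.roll(decoder, s)) for s in range(63)])
print(image.argmax())   # expected near 10, the brighter source
```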
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1987-10-01
Huffman codes, comma-free codes, and block codes with shift indicators are important candidate message-compression codes for improving the efficiency of communications systems. This study was undertaken to determine if these codes could be used to increase the throughput of the fixed very-low-frequency (FVLF) communication system. This application involves the use of compression codes in a channel with errors.
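For reference, a textbook Huffman code construction is sketched below using a handful of hypothetical character frequencies; the study's 58-character set, its comma-free codes and its channel-error analysis are not reproduced.

```python
import heapq

def huffman(freqs):
    """Build a Huffman code book from symbol frequencies (the textbook
    construction; the frequencies below are made up and stand in for the
    study's full character set)."""
    heap = [[w, i, [s, ""]] for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]     # extend codewords in the low branch
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]     # extend codewords in the high branch
        heapq.heappush(heap, [lo[0] + hi[0], count] + lo[2:] + hi[2:])
        count += 1
    return dict(heap[0][2:])

code_book = huffman({"e": 0.31, "t": 0.22, "a": 0.18, "o": 0.16, " ": 0.13})
print(code_book)   # frequent symbols receive the shortest codewords
```

The resulting prefix code minimizes the average number of bits per character for the given frequencies, which is the data-compression figure the study estimates on narrative files.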
Jones, Lyell K; Ney, John P
2016-12-01
Accurate coding is critically important for clinical practice and research. Ongoing changes to diagnostic and billing codes require the clinician to stay abreast of coding updates. Payment for health care services, data sets for health services research, and reporting for medical quality improvement all require accurate administrative coding. This article provides an overview of administrative coding for patients with muscle disease and includes a case-based review of diagnostic and Evaluation and Management (E/M) coding principles in patients with myopathy. Procedural coding for electrodiagnostic studies and neuromuscular ultrasound is also reviewed.
NASA Technical Reports Server (NTRS)
Rajpal, Sandeep; Rhee, DoJun; Lin, Shu
1997-01-01
In this paper, we will use the construction technique proposed in to construct multidimensional trellis coded modulation (TCM) codes for both the additive white Gaussian noise (AWGN) channel and fading channels. Analytical performance bounds and simulation results show that these codes perform very well and achieve significant coding gains over uncoded reference modulation systems. In addition, the proposed technique can be used to construct codes which have a performance/decoding complexity advantage over the codes listed in the literature.
3D neutronic codes coupled with thermal-hydraulic system codes for PWR, BWR and VVER reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langenbuch, S.; Velkov, K.; Lizorkin, M.
1997-07-01
This paper describes the objectives of code development for coupling 3D neutronics codes with thermal-hydraulic system codes. The present status of coupling ATHLET with three 3D neutronics codes for VVER and LWR reactors is presented. After describing the basic features of the 3D neutronic codes BIPR-8 from the Kurchatov Institute, DYN3D from the Research Center Rossendorf and QUABOX/CUBBOX from GRS, first applications of the coupled codes to different transient and accident scenarios are presented. The need for further investigation is discussed.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1992-01-01
Work performed during the reporting period is summarized. The construction of robustly good trellis codes for use with sequential decoding was developed. The robustly good trellis codes provide a much better trade-off between free distance and distance profile. The unequal error protection capabilities of convolutional codes were studied. The problem of finding good large-constraint-length, low-rate convolutional codes for deep space applications was investigated. A formula for computing the free distance of rate-1/n convolutional codes was discovered. Double memory (DM) codes, codes with two memory units per unit bit position, were studied; a search for optimal DM codes is being conducted. An algorithm for constructing convolutional codes from a given quasi-cyclic code was developed. Papers based on the above work are included in the appendix.
Processor-in-memory-and-storage architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeBenedictis, Erik
A method and apparatus for performing reliable general-purpose computing. Each sub-core of a plurality of sub-cores of a processor core processes a same instruction at a same time. A code analyzer receives a plurality of residues that represents a code word corresponding to the same instruction and an indication of whether the code word is a memory address code or a data code from the plurality of sub-cores. The code analyzer determines whether the plurality of residues are consistent or inconsistent. The code analyzer and the plurality of sub-cores perform a set of operations based on whether the code word is a memory address code or a data code and a determination of whether the plurality of residues are consistent or inconsistent.
Multi-level bandwidth efficient block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1989-01-01
The multilevel technique is investigated for combining block coding and modulation. There are four parts. In the first part, a formulation is presented for signal sets on which modulation codes are to be constructed. Distance measures on a signal set are defined and their properties are developed. In the second part, a general formulation is presented for multilevel modulation codes in terms of component codes with appropriate Euclidean distances. The distance properties, Euclidean weight distribution and linear structure of multilevel modulation codes are investigated. In the third part, several specific methods for constructing multilevel block modulation codes with interdependency among component codes are proposed. Given a multilevel block modulation code C with no interdependency among the binary component codes, the proposed methods give a multilevel block modulation code C' which has the same rate as C, a minimum squared Euclidean distance not less than that of C, a trellis diagram with the same number of states as that of C, and a smaller number of nearest-neighbor codewords than that of C. In the last part, the error performance of block modulation codes is analyzed for an AWGN channel based on soft-decision maximum likelihood decoding. Error probabilities of some specific codes are evaluated based on their Euclidean weight distributions and simulation results.
Tate, A Rosemary; Dungey, Sheena; Glew, Simon; Beloff, Natalia; Williams, Rachael; Williams, Tim
2017-01-01
Objective To assess the effect of coding quality on estimates of the incidence of diabetes in the UK between 1995 and 2014. Design A cross-sectional analysis examining diabetes coding from 1995 to 2014 and how the choice of codes (diagnosis codes vs codes which suggest diagnosis) and quality of coding affect estimated incidence. Setting Routine primary care data from 684 practices contributing to the UK Clinical Practice Research Datalink (data contributed from Vision (INPS) practices). Main outcome measure Incidence rates of diabetes and how they are affected by (1) GP coding and (2) excluding ‘poor’ quality practices with at least 10% incident patients inaccurately coded between 2004 and 2014. Results Incidence rates and accuracy of coding varied widely between practices and the trends differed according to selected category of code. If diagnosis codes were used, the incidence of type 2 increased sharply until 2004 (when the UK Quality Outcomes Framework was introduced), and then flattened off, until 2009, after which they decreased. If non-diagnosis codes were included, the numbers continued to increase until 2012. Although coding quality improved over time, 15% of the 666 practices that contributed data between 2004 and 2014 were labelled ‘poor’ quality. When these practices were dropped from the analyses, the downward trend in the incidence of type 2 after 2009 became less marked and incidence rates were higher. Conclusions In contrast to some previous reports, diabetes incidence (based on diagnostic codes) appears not to have increased since 2004 in the UK. Choice of codes can make a significant difference to incidence estimates, as can quality of recording. Codes and data quality should be checked when assessing incidence rates using GP data. PMID:28122831
Practical guide to bar coding for patient medication safety.
Neuenschwander, Mark; Cohen, Michael R; Vaida, Allen J; Patchett, Jeffrey A; Kelly, Jamie; Trohimovich, Barbara
2003-04-15
Bar coding for the medication administration step of the drug-use process is discussed. FDA will propose a rule in 2003 that would require bar-code labels on all human drugs and biologicals. Even with an FDA mandate, manufacturer procrastination and possible shifts in product availability are likely to slow progress. Such delays should not preclude health systems from adopting bar-code-enabled point-of-care (BPOC) systems to achieve gains in patient safety. Bar-code technology is a replacement for traditional keyboard data entry. The elements of bar coding are content, which determines the meaning; data format, which refers to the embedded data; and symbology, which describes the "font" in which the machine-readable code is written. For a BPOC system to deliver an acceptable level of patient protection, the hospital must first establish reliable processes for a patient identification band, caregiver badge, and medication bar coding. Medications can have either drug-specific or patient-specific bar codes. Both varieties result in the desired code that supports the patient's five rights of drug administration. When medications are not available from the manufacturer in immediate-container bar-coded packaging, other means of applying the bar code must be devised, including the use of repackaging equipment, overwrapping, manual bar coding, and outsourcing. Virtually all medications should be bar coded, the bar code on the label should be easily readable, and appropriate policies, procedures, and checks should be in place. Bar coding has the potential not only to be cost-effective but to produce a return on investment. By bar coding patient identification tags, caregiver badges, and immediate-container medications, health systems can substantially increase patient safety during medication administration.
Rate-compatible punctured convolutional codes (RCPC codes) and their applications
NASA Astrophysics Data System (ADS)
Hagenauer, Joachim
1988-04-01
The concept of punctured convolutional codes is extended by puncturing a low-rate 1/N code periodically with period P to obtain a family of codes with rate P/(P + l), where l can be varied between 1 and (N - 1)P. A rate-compatibility restriction on the puncturing tables ensures that all code bits of high-rate codes are used by the lower-rate codes. This allows transmission of incremental redundancy in ARQ/FEC (automatic repeat request/forward error correction) schemes and continuous rate variation to change from low to high error protection within a data frame. Families of RCPC codes with rates between 8/9 and 1/4 are given for memories M from 3 to 6 (8 to 64 trellis states) together with the relevant distance spectra. These codes are almost as good as the best known general convolutional codes of the respective rates. It is shown that the same Viterbi decoder can be used for all RCPC codes of the same M. The application of RCPC codes to hybrid ARQ/FEC schemes is discussed for Gaussian and Rayleigh fading channels using channel-state information to optimize throughput.
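The sketch below illustrates periodic puncturing and the rate-compatibility restriction with made-up puncturing tables for a rate-1/2 mother code and period P = 4; Hagenauer's optimized tables and distance spectra are not reproduced.

```python
import numpy as np

P = 4  # puncturing period

# Illustrative rate-compatible puncturing tables for a rate-1/2 mother
# code (rows = the two output streams, columns = the P period positions).
# Each lower-rate table only adds back bits removed by the higher-rate
# ones, which is the rate-compatibility restriction.  These tables are
# invented for illustration, not Hagenauer's optimized ones.
tables = {
    "4/5": np.array([[1, 1, 1, 1],
                     [1, 0, 0, 0]]),
    "4/6": np.array([[1, 1, 1, 1],
                     [1, 0, 1, 0]]),
    "4/8": np.array([[1, 1, 1, 1],
                     [1, 1, 1, 1]]),   # unpunctured rate 1/2
}

def puncture(coded_streams, table):
    """coded_streams: 2 x L array of mother-code output bits; keep only
    the positions marked 1 in the periodic puncturing table."""
    L = coded_streams.shape[1]
    pattern = np.tile(table, (1, L // P))
    return coded_streams[pattern.astype(bool)]

streams = np.random.default_rng(0).integers(0, 2, (2, 8))
for rate, tab in tables.items():
    sent = puncture(streams, tab)
    print(rate, "transmits", sent.size, "bits for", streams.shape[1], "info bits")
```

Sending first the rate-4/5 bits and later only the bits added by the 4/6 and 4/8 tables is what allows incremental redundancy in the hybrid ARQ/FEC schemes discussed above.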
Spatial transform coding of color images.
NASA Technical Reports Server (NTRS)
Pratt, W. K.
1971-01-01
The application of the transform-coding concept to the coding of color images represented by three primary color planes of data is discussed. The principles of spatial transform coding are reviewed and the merits of various methods of color-image representation are examined. A performance analysis is presented for the color-image transform-coding system. Results of a computer simulation of the coding system are also given. It is shown that, by transform coding, the chrominance content of a color image can be coded with an average of 1.0 bits per element or less without serious degradation. If luminance coding is also employed, the average rate reduces to about 2.0 bits per element or less.
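A minimal sketch of the transform-coding idea on a single 8x8 block is given below, using a 2-D DCT, a made-up quantizer step and random data; the paper's full system, bit allocation and handling of the three color planes are not reproduced.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Toy illustration of spatial transform coding on one 8x8 block of a
# chrominance plane: transform, coarsely quantize, reconstruct.  The
# step size and test block are made up for illustration.
rng = np.random.default_rng(0)
block = rng.normal(128, 20, (8, 8))

coeffs = dctn(block, norm="ortho")
step = 32.0                                  # coarse quantizer step
quantized = np.round(coeffs / step)          # few non-zero symbols to code
reconstructed = idctn(quantized * step, norm="ortho")

nonzero = np.count_nonzero(quantized)
rmse = np.sqrt(np.mean((block - reconstructed) ** 2))
print(f"{nonzero}/64 coefficients kept, RMSE = {rmse:.1f}")
```

Because most transform coefficients quantize to zero, the chrominance data can be represented with a small average number of bits per element, which is the effect the abstract quantifies.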
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bond, J.W.
1988-01-01
Data-compression codes offer the possibility of improving the throughput of existing communication systems in the near term. This study was undertaken to determine if data-compression codes could be utilized to provide message compression in a channel with up to a 0.10 bit error rate. The data-compression capabilities of codes were investigated by estimating the average number of bits per character required to transmit narrative files. The performance of the codes in a channel with errors (a noisy channel) was investigated in terms of the average numbers of characters decoded in error and of characters printed in error per bit error. Results were obtained by encoding four narrative files, which were resident on an IBM-PC and used a 58-character set. The study focused on Huffman codes and suffix/prefix comma-free codes. Other data-compression codes, in particular block codes and some simple variants of block codes, are briefly discussed to place the study results in context. Comma-free codes were found to have the most promising data compression because error propagation due to bit errors is limited to a few characters for these codes. A technique was found to identify a suffix/prefix comma-free code giving nearly the same data compression as a Huffman code with much less error propagation than the Huffman codes. Greater data compression can be achieved through the use of comma-free code word assignments based on conditional probabilities of character occurrence.
Performance Analysis of Hybrid ARQ Protocols in a Slotted Code Division Multiple-Access Network
1989-08-01
A low-rate (r = 0.5), high-constraint-length (e.g., 32) punctured convolutional code is used; code puncturing provides for a variable-rate code. The use of convolutional codes in Type II Hybrid ARQ protocols is investigated, drawing on Hagenauer's rate-compatible punctured convolutional codes.
Interface requirements to couple thermal-hydraulic codes to 3D neutronic codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langenbuch, S.; Austregesilo, H.; Velkov, K.
1997-07-01
The present situation of thermalhydraulics codes and 3D neutronics codes is briefly described and general considerations for coupling of these codes are discussed. Two different basic approaches of coupling are identified and their relative advantages and disadvantages are discussed. The implementation of the coupling for 3D neutronics codes in the system ATHLET is presented. Meanwhile, this interface is used for coupling three different 3D neutronics codes.
2011-05-01
Abbreviations include quality of service (QoS), rate-compatible punctured convolutional (RCPC) codes, signal-to-noise ratio (SNR) and structural similarity (SSIM). The RCPC codes achieve unequal error protection (UEP) by puncturing off different amounts of coded bits of the parent code.
Alarcon, Gene M; Gamble, Rose F; Ryan, Tyler J; Walter, Charles; Jessup, Sarah A; Wood, David W; Capiola, August
2018-07-01
Computer programs are a ubiquitous part of modern society, yet little is known about the psychological processes that underlie reviewing code. We applied the heuristic-systematic model (HSM) to investigate the influence of computer code comments on perceptions of code trustworthiness. The study explored the influence of validity, placement, and style of comments in code on trustworthiness perceptions and time spent on code. Results indicated valid comments led to higher trust assessments and more time spent on the code. Properly placed comments led to lower trust assessments and had a marginal effect on time spent on code; however, the effect was no longer significant after controlling for effects of the source code. Low style comments led to marginally higher trustworthiness assessments, but high style comments led to longer time spent on the code. Several interactions were also found. Our findings suggest the relationship between code comments and perceptions of code trustworthiness is not as straightforward as previously thought. Additionally, the current paper extends the HSM to the programming literature. Copyright © 2018 Elsevier Ltd. All rights reserved.
The application of coded excitation technology in medical ultrasonic Doppler imaging
NASA Astrophysics Data System (ADS)
Li, Weifeng; Chen, Xiaodong; Bao, Jing; Yu, Daoyin
2008-03-01
Medical ultrasonic Doppler imaging is one of the most important domains of modern medical imaging technology. The application of coded excitation technology in a medical ultrasonic Doppler imaging system has the potential for higher SNR and deeper penetration than a conventional pulse-echo imaging system; it also improves image quality and enhances sensitivity to feeble signals, and proper coded excitation is beneficial to the received spectrum of the Doppler signal. First, this paper analyzes the application of coded excitation technology in medical ultrasonic Doppler imaging systems, showing the advantages and promise of coded excitation, and then introduces the principle and theory of coded excitation. Second, we compare several coded sequences (including chirp and fake-chirp signals, Barker codes, Golay complementary sequences, M-sequences, etc.). Considering mainlobe width, range sidelobe level, signal-to-noise ratio and Doppler signal sensitivity, we choose Barker codes as the coded sequence. Finally, we design the coded excitation circuit. The results in B-mode imaging and Doppler flow measurement met our expectations, demonstrating the advantage of applying coded excitation technology in a digital medical ultrasonic Doppler endoscope imaging system.
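As an illustration of why Barker codes are attractive here, the sketch below computes the matched-filter (autocorrelation) output of the Barker-13 sequence; pulse shaping, the transducer response and the paper's excitation circuit are not modeled.

```python
import numpy as np

# Barker-13 sequence and its matched-filter output (autocorrelation).
# Only the mainlobe / sidelobe behaviour that motivates choosing Barker
# codes is shown.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
acf = np.correlate(barker13, barker13, mode="full")

mainlobe = acf.max()                                          # 13
peak_sidelobe = np.abs(np.delete(acf, acf.argmax())).max()    # 1
print(mainlobe, peak_sidelobe)   # 13 vs 1, roughly a 22 dB ratio
```

The compressed pulse concentrates energy into a narrow mainlobe with unit-magnitude range sidelobes, which is what yields the SNR and penetration gains described without sacrificing range resolution.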
McKenzie, Kirsten; Walker, Sue; Tong, Shilu
It remains unclear whether the change from a manual to an automated coding system (ACS) for deaths has significantly affected the consistency of Australian mortality data. The underlying causes of 34,000 deaths registered in 1997 in Australia were dual coded in ICD-9, manually and by using an automated computer coding program. The diseases most affected by the change from manual coding to the ACS were senile/presenile dementia and pneumonia. The ACS code most commonly assigned to deaths whose manually assigned underlying cause was senile dementia was unspecified psychoses (37.2%). Only 12.5% of codes assigned by the ACS as senile dementia were coded the same by manual coders. This study indicates some important differences in mortality rates when comparing mortality data that have been coded manually with those coded using an automated computer coding program. These differences may be related both to the different interpretation of ICD coding rules between manual and automated coding, and to different co-morbidities or co-existing conditions among demographic groups.
Bandwidth efficient CCSDS coding standard proposals
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Perez, Lance C.; Wang, Fu-Quan
1992-01-01
The basic concatenated coding system for the space telemetry channel consists of a Reed-Solomon (RS) outer code, a symbol interleaver/deinterleaver, and a bandwidth-efficient trellis inner code. A block diagram of this configuration is shown. The system may operate with or without the outer code and interleaver. In this recommendation, the outer code remains the (255,223) RS code over GF(2^8) with an error-correcting capability of t = 16 eight-bit symbols. This code's excellent performance and the existence of fast, cost-effective decoders justify its continued use. The purpose of the interleaver/deinterleaver is to distribute burst errors out of the inner decoder over multiple codewords of the outer code. This utilizes the error-correcting capability of the outer code more efficiently and reduces the probability of an RS decoder failure. Since the space telemetry channel is not considered bursty, the required interleaving depth is primarily a function of the inner decoding method. A diagram of an interleaver with depth 4 that is compatible with the (255,223) RS code is shown. Specific interleaver requirements are discussed after the inner code recommendations.
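A generic depth-4 block symbol interleaver compatible with 255-symbol codewords is sketched below to illustrate how a channel burst is spread over several RS codewords; the exact read/write convention fixed by the CCSDS recommendation may differ from this sketch.

```python
import numpy as np

DEPTH, N = 4, 255   # interleaving depth and RS codeword length (symbols)

def interleave(symbols):
    """Write DEPTH codewords row-wise, read column-wise, so that a burst
    of channel errors is spread over DEPTH codewords."""
    return np.asarray(symbols).reshape(DEPTH, N).T.ravel()

def deinterleave(symbols):
    return np.asarray(symbols).reshape(N, DEPTH).T.ravel()

data = np.arange(DEPTH * N)           # symbol value v belongs to codeword v // N
assert np.array_equal(deinterleave(interleave(data)), data)

# An 8-symbol channel burst touches each of the 4 codewords only twice,
# well inside the t = 16 symbol correction capability of the outer code.
burst_positions = np.arange(100, 108)
hit_codewords = interleave(data)[burst_positions] // N
print(np.bincount(hit_codewords, minlength=DEPTH))   # [2 2 2 2]
```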
QR code for medical information uses.
Fontelo, Paul; Liu, Fang; Ducut, Erick G
2008-11-06
We developed QR code online tools, simulated and tested QR code applications for medical information uses including scanning QR code labels, URLs and authentication. Our results show possible applications for QR code in medicine.
Neural representation of objects in space: a dual coding account.
Humphreys, G W
1998-01-01
I present evidence on the nature of object coding in the brain and discuss the implications of this coding for models of visual selective attention. Neuropsychological studies of task-based constraints on: (i) visual neglect; and (ii) reading and counting, reveal the existence of parallel forms of spatial representation for objects: within-object representations, where elements are coded as parts of objects, and between-object representations, where elements are coded as independent objects. Aside from these spatial codes for objects, however, the coding of visual space is limited. We are extremely poor at remembering small spatial displacements across eye movements, indicating (at best) impoverished coding of spatial position per se. Also, effects of element separation on spatial extinction can be eliminated by filling the space with an occluding object, indicating that spatial effects on visual selection are moderated by object coding. Overall, there are separate limits on visual processing reflecting: (i) the competition to code parts within objects; (ii) the small number of independent objects that can be coded in parallel; and (iii) task-based selection of whether within- or between-object codes determine behaviour. Between-object coding may be linked to the dorsal visual system while parallel coding of parts within objects takes place in the ventral system, although there may additionally be some dorsal involvement either when attention must be shifted within objects or when explicit spatial coding of parts is necessary for object identification. PMID:9770227
Haliasos, N; Rezajooi, K; O'neill, K S; Van Dellen, J; Hudovsky, Anita; Nouraei, Sar
2010-04-01
Clinical coding is the translation of documented clinical activities during an admission to a codified language. Healthcare Resource Groupings (HRGs) are derived from coding data and are used to calculate payment to hospitals in England, Wales and Scotland and to conduct national audit and benchmarking exercises. Coding is an error-prone process and an understanding of its accuracy within neurosurgery is critical for financial, organizational and clinical governance purposes. We undertook a multidisciplinary audit of neurosurgical clinical coding accuracy. Neurosurgeons trained in coding assessed the accuracy of 386 patient episodes. Where clinicians felt a coding error was present, the case was discussed with an experienced clinical coder. Concordance between the initial coder-only clinical coding and the final clinician-coder multidisciplinary coding was assessed. At least one coding error occurred in 71/386 patients (18.4%). There were 36 diagnosis and 93 procedure errors, and in 40 cases the initial HRG changed (10.4%). Financially, this translated to £111 of revenue loss per patient episode, projected to £171,452 of annual loss to the department. 85% of all coding errors were due to accumulation of coding changes that occurred only once in the whole data set. Neurosurgical clinical coding is error-prone. This is financially disadvantageous, and with the coding data being the source of comparisons within and between departments, coding inaccuracies paint a distorted picture of departmental activity and subspecialism in audit and benchmarking. Clinical engagement improves accuracy and is encouraged within a clinical governance framework.
Code Verification Results of an LLNL ASC Code on Some Tri-Lab Verification Test Suite Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, S R; Bihari, B L; Salari, K
As scientific codes become more complex and involve larger numbers of developers and algorithms, chances for algorithmic implementation mistakes increase. In this environment, code verification becomes essential to building confidence in the code implementation. This paper will present first results of a new code verification effort within LLNL's B Division. In particular, we will show results of code verification of the LLNL ASC ARES code on the test problems: Su Olson non-equilibrium radiation diffusion, Sod shock tube, Sedov point blast modeled with shock hydrodynamics, and Noh implosion.
Concatenated coding for low data rate space communications.
NASA Technical Reports Server (NTRS)
Chen, C. H.
1972-01-01
In deep space communications with distant planets, the data rate as well as the operating SNR may be very low. To maintain the error rate at a very low level as well, it is necessary to use a sophisticated coding system (a longer code) without excessive decoding complexity. Concatenated coding has been shown to meet such requirements in that the error rate decreases exponentially with the overall length of the code while the decoder complexity increases only algebraically. Three methods of concatenating an inner code with an outer code are considered. A performance comparison of the three concatenated codes is made.
New CPT codes: hospital, consultation, emergency and nursing facility services.
Zuber, T J; Henley, D E
1992-03-01
New evaluation and management codes were created by the Current Procedural Terminology (CPT) Editorial Panel to ensure more accurate and consistent reporting of physician services. The new hospital inpatient codes describe three levels of service for both initial and subsequent care. Critical care services are reported according to the total time spent by a physician providing constant attention to a critically ill patient. Consultation codes are divided into four categories: office/outpatient, initial inpatient, follow-up inpatient and confirmatory. Emergency department services for both new and established patients are limited to five codes. In 1992, nursing facility services are described with either comprehensive-assessment codes or subsequent-care codes. Hospital discharge services may be reported in addition to the comprehensive nursing facility assessment. Since the 1992 CPT book will list only the new codes, and since all insurance carriers will not be using these codes in 1992, physicians are encouraged to keep their 1991 code books and contact their local insurance carriers to determine which codes will be used.
Current Research on Non-Coding Ribonucleic Acid (RNA).
Wang, Jing; Samuels, David C; Zhao, Shilin; Xiang, Yu; Zhao, Ying-Yong; Guo, Yan
2017-12-05
Non-coding ribonucleic acid (RNA) has without a doubt captured the interest of biomedical researchers. The ability to screen the entire human genome with high-throughput sequencing technology has greatly enhanced the identification, annotation and prediction of the functionality of non-coding RNAs. In this review, we discuss the current landscape of non-coding RNA research and quantitative analysis. Non-coding RNA will be categorized into two major groups by size: long non-coding RNAs and small RNAs. In long non-coding RNA, we discuss regular long non-coding RNA, pseudogenes and circular RNA. In small RNA, we discuss miRNA, transfer RNA, piwi-interacting RNA, small nucleolar RNA, small nuclear RNA, Y RNA, signal recognition particle RNA, and 7SK RNA. We elaborate on the origin, detection method, potential association with disease, putative functional mechanisms, and public resources for these non-coding RNAs. We aim to provide readers with a complete overview of non-coding RNAs and incite additional interest in non-coding RNA research.
Quantum computing with Majorana fermion codes
NASA Astrophysics Data System (ADS)
Litinski, Daniel; von Oppen, Felix
2018-05-01
We establish a unified framework for Majorana-based fault-tolerant quantum computation with Majorana surface codes and Majorana color codes. All logical Clifford gates are implemented with zero-time overhead. This is done by introducing a protocol for Pauli product measurements with tetrons and hexons which only requires local 4-Majorana parity measurements. An analogous protocol is used in the fault-tolerant setting, where tetrons and hexons are replaced by Majorana surface code patches, and parity measurements are replaced by lattice surgery, still only requiring local few-Majorana parity measurements. To this end, we discuss twist defects in Majorana fermion surface codes and adapt the technique of twist-based lattice surgery to fermionic codes. Moreover, we propose a family of codes that we refer to as Majorana color codes, which are obtained by concatenating Majorana surface codes with small Majorana fermion codes. Majorana surface and color codes can be used to decrease the space overhead and stabilizer weight compared to their bosonic counterparts.
Coding in Stroke and Other Cerebrovascular Diseases.
Korb, Pearce J; Jones, William
2017-02-01
Accurate coding is critical for clinical practice and research. Ongoing changes to diagnostic and billing codes require the clinician to stay abreast of coding updates. Payment for health care services, data sets for health services research, and reporting for medical quality improvement all require accurate administrative coding. This article provides an overview of coding principles for patients with strokes and other cerebrovascular diseases and includes an illustrative case as a review of coding principles in a patient with acute stroke.
The Evolution and Expression Pattern of Human Overlapping lncRNA and Protein-coding Gene Pairs.
Ning, Qianqian; Li, Yixue; Wang, Zhen; Zhou, Songwen; Sun, Hong; Yu, Guangjun
2017-03-27
A long non-coding RNA overlapping with a protein-coding gene (an lncRNA-coding pair) is a special type of overlapping gene. Protein-coding overlapping genes have been well studied, and increasing attention has been paid to lncRNAs. By studying lncRNA-coding pairs in the human genome, we showed that lncRNA-coding pairs were more likely to be generated by overprinting, and that genes in lncRNA-coding pairs were given higher priority for retention than non-overlapping genes. Besides, the preference for overlapping configurations preserved during evolution was based on the origin of the lncRNA-coding pairs. Further investigation showed that the promotion of splicing of embedded protein-coding partners by lncRNAs was a unilateral interaction, whereas the improvement of gene expression by an overlapping partner was bidirectional, and this effect decreased with the evolutionary age of the genes. Additionally, the expression of lncRNA-coding pairs showed an overall positive correlation, and the expression correlation was associated with their overlapping configurations, local genomic environment and the evolutionary age of the genes. Comparison of the expression correlation of lncRNA-coding pairs between normal and cancer samples found that lineage-specific pairs including old protein-coding genes may play an important role in tumorigenesis. This work presents a systematic and comprehensive understanding of the evolution and expression pattern of human lncRNA-coding pairs.
Circular codes revisited: a statistical approach.
Gonzalez, D L; Giannerini, S; Rosa, R
2011-04-21
In 1996 Arquès and Michel [1996. A complementary circular code in the protein coding genes. J. Theor. Biol. 182, 45-58] discovered the existence of a common circular code in eukaryote and prokaryote genomes. Since then, circular code theory has provoked great interest and underwent a rapid development. In this paper we discuss some theoretical issues related to the synchronization properties of coding sequences and circular codes with particular emphasis on the problem of retrieval and maintenance of the reading frame. Motivated by the theoretical discussion, we adopt a rigorous statistical approach in order to try to answer different questions. First, we investigate the covering capability of the whole class of 216 self-complementary, C(3) maximal codes with respect to a large set of coding sequences. The results indicate that, on average, the code proposed by Arquès and Michel has the best covering capability but, still, there exists a great variability among sequences. Second, we focus on such code and explore the role played by the proportion of the bases by means of a hierarchy of permutation tests. The results show the existence of a sort of optimization mechanism such that coding sequences are tailored as to maximize or minimize the coverage of circular codes on specific reading frames. Such optimization clearly relates the function of circular codes with reading frame synchronization. Copyright © 2011 Elsevier Ltd. All rights reserved.
24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted code... partially accepted, then the properties eligible for HUD benefits in that jurisdiction shall be constructed..., those portions of one of the model codes with which the property must comply. Schedule for Model Code...
24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted code... partially accepted, then the properties eligible for HUD benefits in that jurisdiction shall be constructed..., those portions of one of the model codes with which the property must comply. Schedule for Model Code...
Critical Care Coding for Neurologists.
Nuwer, Marc R; Vespa, Paul M
2015-10-01
Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.
Coding of Neuroinfectious Diseases.
Barkley, Gregory L
2015-12-01
Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.
Diagnostic Coding for Epilepsy.
Williams, Korwyn; Nuwer, Marc R; Buchhalter, Jeffrey R
2016-02-01
Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.
Using Coding Apps to Support Literacy Instruction and Develop Coding Literacy
ERIC Educational Resources Information Center
Hutchison, Amy; Nadolny, Larysa; Estapa, Anne
2016-01-01
In this article the authors present the concept of Coding Literacy and describe the ways in which coding apps can support the development of Coding Literacy and disciplinary and digital literacy skills. Through detailed examples, we describe how coding apps can be integrated into literacy instruction to support learning of the Common Core English…
Understanding Mixed Code and Classroom Code-Switching: Myths and Realities
ERIC Educational Resources Information Center
Li, David C. S.
2008-01-01
Background: Cantonese-English mixed code is ubiquitous in Hong Kong society, and yet using mixed code is widely perceived as improper. This paper presents evidence of mixed code being socially constructed as bad language behavior. In the education domain, an EDB guideline bans mixed code in the classroom. Teachers are encouraged to stick to…
17 CFR 229.406 - (Item 406) Code of ethics.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 17 Commodity and Securities Exchanges 2 2010-04-01 2010-04-01 false (Item 406) Code of ethics. 229... 406) Code of ethics. (a) Disclose whether the registrant has adopted a code of ethics that applies to... code of ethics, explain why it has not done so. (b) For purposes of this Item 406, the term code of...
VeryVote: A Voter Verifiable Code Voting System
NASA Astrophysics Data System (ADS)
Joaquim, Rui; Ribeiro, Carlos; Ferreira, Paulo
Code voting is a technique used to address the secure platform problem of remote voting. A code voting system consists of secretly sending, e.g. by mail, code sheets to voters that map their choices to entry codes on their ballot. While voting, the voter uses the code sheet to know what code to enter in order to vote for a particular candidate. In effect, the voter does the vote encryption and, since no malicious software on the PC has access to the code sheet, it is not able to change the voter’s intention. However, without compromising the voter’s privacy, the vote codes are not enough to prove that the vote is recorded and counted as cast by the election server.
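A minimal sketch of generating one voter's code sheet is shown below, with hypothetical candidate names; a deployed system such as the one described also requires reply/verification codes and server-side cryptographic commitments, which are not modeled here.

```python
import secrets

def make_code_sheet(candidates, code_length=6):
    """Generate one voter's code sheet: a secret random entry code per
    candidate.  (Hypothetical candidates and code length; this sketch
    omits the reply codes the voter would use to verify that the vote
    was recorded and counted as cast.)"""
    codes = set()
    sheet = {}
    for candidate in candidates:
        while True:
            code = "".join(secrets.choice("0123456789") for _ in range(code_length))
            if code not in codes:          # codes must be unique per sheet
                codes.add(code)
                sheet[candidate] = code
                break
    return sheet

sheet = make_code_sheet(["Alice", "Bob", "Carol"])
print(sheet)   # mailed to the voter; the voter's PC never sees this mapping
```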
Locality-preserving logical operators in topological stabilizer codes
NASA Astrophysics Data System (ADS)
Webster, Paul; Bartlett, Stephen D.
2018-01-01
Locality-preserving logical operators in topological codes are naturally fault tolerant, since they preserve the correctability of local errors. Using a correspondence between such operators and gapped domain walls, we describe a procedure for finding all locality-preserving logical operators admitted by a large and important class of topological stabilizer codes. In particular, we focus on those equivalent to a stack of a finite number of surface codes of any spatial dimension, where our procedure fully specifies the group of locality-preserving logical operators. We also present examples of how our procedure applies to codes with different boundary conditions, including color codes and toric codes, as well as more general codes such as Abelian quantum double models and codes with fermionic excitations in more than two dimensions.
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1998-01-01
A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes and maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all of the linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence, that they were not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well known methods for constructing long powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises which include Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination and tail-biting.Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computation complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder. Then it presents a new decoding algorithm for convolutional codes, named Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword. This decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder. 
Decoding algorithms presented are the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA). Finally, the minimization of bit error probability in trellis-based MLD is discussed.
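The Viterbi search over a block-code trellis described above can be illustrated with a small, self-contained sketch (not taken from the report): it builds the syndrome (Wolf) trellis of a (7,4) Hamming code directly from its parity-check matrix and performs soft-decision MLD of a noisy BPSK vector. The parity-check matrix, received samples, and function name are illustrative assumptions.

import numpy as np

def viterbi_block_decode(H, r):
    # Soft-decision ML decoding of a binary linear block code on its
    # syndrome (Wolf) trellis.  H is the (m x n) parity-check matrix; r is
    # the received real-valued vector (BPSK mapping: bit 0 -> +1, bit 1 -> -1).
    m, n = H.shape
    num_states = 1 << m                      # one trellis state per partial syndrome
    INF = float("inf")
    metric = [INF] * num_states
    metric[0] = 0.0                          # start in the zero-syndrome state
    paths = [[] for _ in range(num_states)]
    for i in range(n):
        col = 0
        for j in range(m):                   # i-th column of H packed into an integer
            col = (col << 1) | int(H[j, i])
        new_metric = [INF] * num_states
        new_paths = [None] * num_states
        for s in range(num_states):
            if metric[s] == INF:
                continue
            for bit in (0, 1):
                branch = (r[i] - (1.0 - 2.0 * bit)) ** 2   # squared Euclidean branch metric
                t = s ^ col if bit else s                   # state transition
                if metric[s] + branch < new_metric[t]:
                    new_metric[t] = metric[s] + branch
                    new_paths[t] = paths[s] + [bit]
        metric, paths = new_metric, new_paths
    return paths[0]                          # the survivor ending in syndrome 0 is the ML codeword

# (7,4) Hamming code and a noisy BPSK observation (values chosen for illustration)
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = [0.9, 1.1, -0.8, 1.2, 0.2, -1.0, 1.0]
print(viterbi_block_decode(H, received))

The number of trellis states here is 2^m, which is why such decoders are only practical for codes with few parity checks or with the sectionalized trellises discussed in the later chapters.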
2011-01-01
Background: Electronic patient records are generally coded using extensive sets of codes, but the significance of the utilisation of individual codes may be unclear. Item response theory (IRT) models are used to characterise the psychometric properties of items included in tests and questionnaires. This study asked whether the properties of medical codes in electronic patient records may be characterised through the application of item response theory models. Methods: Data were provided by a cohort of 47,845 participants from 414 family practices in the UK General Practice Research Database (GPRD) with a first stroke between 1997 and 2006. Each eligible stroke code, out of a set of 202 OXMIS and Read codes, was coded as either recorded or not recorded for each participant. A two-parameter IRT model was fitted using marginal maximum likelihood estimation. Estimated parameters from the model were considered to characterise each code with respect to the latent trait of stroke diagnosis. The location parameter is referred to as a calibration parameter, while the slope parameter is referred to as a discrimination parameter. Results: There were 79,874 stroke code occurrences available for analysis. Utilisation of codes varied between family practices, with intraclass correlation coefficients of up to 0.25 for the most frequently used codes. IRT analyses were restricted to 110 Read codes. Calibration and discrimination parameters were estimated for 77 (70%) codes that were endorsed for 1,942 stroke patients. Parameters were not estimated for the remaining more frequently used codes. Discrimination parameter values ranged from 0.67 to 2.78, while calibration parameter values ranged from 4.47 to 11.58. The two-parameter model gave a better fit to the data than either the one- or three-parameter models. However, high chi-square values for about a fifth of the stroke codes were suggestive of poor item fit. Conclusion: The application of item response theory models to coded electronic patient records might potentially contribute to identifying medical codes that offer poor discrimination or low calibration. This might indicate the need for improved coding sets or a requirement for improved clinical coding practice. However, in this study estimates were only obtained for a small proportion of participants and there was some evidence of poor model fit. There was also evidence of variation in the utilisation of codes between family practices, raising the possibility that, in practice, properties of codes may vary for different coders. PMID:22176509
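For orientation, the two-parameter logistic model referred to above is usually written P(recorded) = 1 / (1 + exp(-a(theta - b))), where a is the discrimination (slope) parameter and b the calibration (location) parameter. A minimal sketch follows; the parameter and trait values are hypothetical, not taken from the study.

import numpy as np

def two_pl_probability(theta, a, b):
    # Two-parameter logistic IRT model: probability that a code with
    # discrimination a and calibration (location) b is recorded for a
    # patient whose latent trait value is theta.
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# a strongly discriminating code (a = 2.5) located high on the trait (b = 6.0)
print(two_pl_probability(theta=7.0, a=2.5, b=6.0))   # about 0.92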
Santos, José; Monteagudo, Ángel
2017-03-27
The canonical code, although prevailing in complex genomes, is not universal. The canonical genetic code has been shown to be more robust than random codes, but it is not clearly determined how it evolved towards its current form. The error minimization theory considers the minimization of the adverse effect of point mutations as the main selection factor in the evolution of the code. We have used simulated evolution in a computer to search for optimized codes, which helps to obtain information about the optimization level of the canonical code in its evolution. A genetic algorithm searches for efficient codes in a fitness landscape that corresponds with the adaptability of possible hypothetical genetic codes. The lower the effects of errors or mutations in the codon bases of a hypothetical code, the more efficient or optimal is that code. The inclusion of the fitness sharing technique in the evolutionary algorithm makes it easy to determine the extent to which the canonical genetic code lies in an area corresponding to a deep local minimum, even in the high-dimensional spaces considered. The analyses show that the canonical code is not in a deep local minimum and that the fitness landscape is not a multimodal fitness landscape with deep and separated peaks. Moreover, the canonical code is clearly far away from the areas of higher fitness in the landscape. Given the absence of deep local minima in the landscape, although the code could evolve and different forces could shape its structure, the nature of the fitness landscape considered in the error minimization theory does not explain why the canonical code ended its evolution at a location that is not a localized deep minimum of the huge fitness landscape.
Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators
NASA Astrophysics Data System (ADS)
Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.
2018-03-01
We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N²) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.
Ethical and educational considerations in coding hand surgeries.
Lifchez, Scott D; Leinberry, Charles F; Rivlin, Michael; Blazar, Philip E
2014-07-01
To assess treatment coding knowledge and practices among residents, fellows, and attending hand surgeons. Through the use of 6 hypothetical cases, we developed a coding survey to assess coding knowledge and practices. We e-mailed this survey to residents, fellows, and attending hand surgeons. Additionally, we asked 2 professional coders to code these cases. A total of 71 participants completed the survey out of 134 people to whom the survey was sent (response rate = 53%). We observed marked disparity in codes chosen among surgeons and among professional coders. Results of this study indicate that coding knowledge, not just its ethical application, had a major role in coding procedures accurately. Surgical coding is an essential part of a hand surgeon's practice and is not well learned during residency or fellowship. Whereas ethical issues such as deliberate unbundling and upcoding may have a role in inaccurate coding, lack of knowledge among surgeons and coders has a major role as well. Coding has a critical role in every hand surgery practice. Inconsistencies among those polled in this study reveal that an increase in education on coding during training and improvement in the clarity and consistency of the Current Procedural Terminology coding rules themselves are needed. Copyright © 2014 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
Production code control system for hydrodynamics simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slone, D.M.
1997-08-18
We describe how the Production Code Control System (PCCS), written in Perl, has been used to control and monitor the execution of a large hydrodynamics simulation code in a production environment. We have been able to integrate new, disparate, and often independent, applications into the PCCS framework without the need to modify any of our existing application codes. Both users and code developers see a consistent interface to the simulation code and associated applications regardless of the physical platform, whether an MPP, SMP, server, or desktop workstation. We also describe our use of Perl to develop a configuration management system for the simulation code, as well as a code usage database and report generator. We used Perl to write a backplane that allows us to plug in preprocessors, the hydrocode, postprocessors, visualization tools, persistent storage requests, and other codes. We need only teach PCCS a minimal amount about any new tool or code to essentially plug it in and make it usable to the hydrocode. PCCS has made it easier to link together disparate codes, since using Perl has removed the need to learn the idiosyncrasies of system or RPC programming. The text handling in Perl makes it easy to teach PCCS about new codes, or changes to existing codes.
Rhodes, Gillian; Burton, Nichola; Jeffery, Linda; Read, Ainsley; Taylor, Libby; Ewing, Louise
2018-05-01
Individuals with autism spectrum disorder (ASD) can have difficulty recognizing emotional expressions. Here, we asked whether the underlying perceptual coding of expression is disrupted. Typical individuals code expression relative to a perceptual (average) norm that is continuously updated by experience. This adaptability of face-coding mechanisms has been linked to performance on various face tasks. We used an adaptation aftereffect paradigm to characterize expression coding in children and adolescents with autism. We asked whether face expression coding is less adaptable in autism and whether there is any fundamental disruption of norm-based coding. If expression coding is norm-based, then the face aftereffects should increase with adaptor expression strength (distance from the average expression). We observed this pattern in both autistic and typically developing participants, suggesting that norm-based coding is fundamentally intact in autism. Critically, however, expression aftereffects were reduced in the autism group, indicating that expression-coding mechanisms are less readily tuned by experience. Reduced adaptability has also been reported for coding of face identity and gaze direction. Thus, there appears to be a pervasive lack of adaptability in face-coding mechanisms in autism, which could contribute to face processing and broader social difficulties in the disorder. © 2017 The British Psychological Society.
Summary of papers on current and anticipated uses of thermal-hydraulic codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caruso, R.
1997-07-01
The author reviews a range of recent papers which discuss possible uses and future development needs for thermal/hydraulic codes in the nuclear industry. From this review, eight common recommendations are extracted. They are: improve the user interface so that more people can use the code, so that models are easier and less expensive to prepare and maintain, and so that the results are scrutable; design the code so that it can easily be coupled to other codes, such as core physics, containment, and fission product behaviour during severe accidents; improve the numerical methods to make the code more robust and especially faster running, particularly for low pressure transients; ensure that future code development includes assessment of code uncertainties as an integral part of code verification and validation; provide extensive user guidelines or structure the code so that the 'user effect' is minimized; include the capability to model multiple fluids (gas and liquid phase); design the code in a modular fashion so that new models can be added easily; provide the ability to include detailed or simplified component models; build on work previously done with other codes (RETRAN, RELAP, TRAC, CATHARE) and other code validation efforts (CSAU, CSNI SET and IET matrices).
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Lin, S.
1985-01-01
A concatenated coding scheme for error control in data communications is analyzed. The inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error of the above error control scheme is derived and upper bounded. Two specific examples are analyzed. In the first example, the inner code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^6+X+1) = X^7+X^6+X^2+1 and the outer code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^15+X^14+X^13+X^12+X^4+X^3+X^2+X+1) = X^16+X^12+X^5+1, which is the X.25 standard for packet-switched data networks. This example is proposed for error control on NASA telecommand links. In the second example, the inner code is the same as that in the first example but the outer code is a shortened Reed-Solomon code with symbols from GF(2^8) and generator polynomial (X+1)(X+alpha), where alpha is a primitive element in GF(2^8).
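The error-detection role of the outer code amounts to polynomial long division over GF(2). A minimal sketch (not from the report; the message bits are arbitrary) computes the parity bits for the X^16+X^12+X^5+1 generator quoted above:

def crc_remainder(bits, poly_bits):
    # Long division of the message by the generator polynomial over GF(2).
    # bits and poly_bits are lists of 0/1, highest-degree coefficient first.
    msg = bits + [0] * (len(poly_bits) - 1)      # append degree-many zero bits
    for i in range(len(bits)):
        if msg[i]:
            for j, p in enumerate(poly_bits):
                msg[i + j] ^= p
    return msg[len(bits):]                       # remainder = parity (check) bits

# generator X^16 + X^12 + X^5 + 1, the outer-code polynomial mentioned above
g = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]
message = [1, 0, 1, 1, 0, 0, 1, 0, 1]            # arbitrary example data bits
parity = crc_remainder(message, g)
print(parity)   # transmit message + parity; the receiver re-divides and checks for a zero remainder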
A novel neutron energy spectrum unfolding code using particle swarm optimization
NASA Astrophysics Data System (ADS)
Shahabinejad, H.; Sohrabpour, M.
2017-07-01
A novel neutron Spectrum Deconvolution using Particle Swarm Optimization (SDPSO) code has been developed to unfold the neutron spectrum from a pulse height distribution and a response matrix. Particle Swarm Optimization (PSO) imitates the social behavior of bird flocks to solve complex optimization problems. The results of the SDPSO code have been compared with those of standard spectra and of the recently published Two-steps Genetic Algorithm Spectrum Unfolding (TGASU) code. The TGASU code has previously been compared with other codes such as MAXED, GRAVEL, FERDOR and GAMCD and shown to be more accurate than those codes. The results of the SDPSO code have been demonstrated to match well with those of the TGASU code for both under-determined and over-determined problems. In addition, the SDPSO code has been shown to be nearly two times faster than the TGASU code.
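As a rough illustration of the approach (not the SDPSO code itself; the response matrix, swarm coefficients, and toy spectrum are assumptions), a bare-bones particle-swarm search for a non-negative spectrum that reproduces a measured pulse-height distribution might look like this:

import numpy as np

def pso_unfold(response, pulse_height, n_particles=40, n_iter=200, seed=0):
    # Minimal particle swarm minimising ||response @ phi - pulse_height||^2
    # over non-negative candidate spectra phi.
    rng = np.random.default_rng(seed)
    n_bins = response.shape[1]
    cost = lambda p: float(np.sum((response @ p - pulse_height) ** 2))
    pos = rng.uniform(0.0, 1.0, (n_particles, n_bins))   # candidate spectra
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                             # inertia and acceleration coefficients
    for _ in range(n_iter):
        r1 = rng.random((n_particles, n_bins))
        r2 = rng.random((n_particles, n_bins))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, None)               # a spectrum cannot go negative
        for k in range(n_particles):
            c = cost(pos[k])
            if c < pbest_cost[k]:
                pbest_cost[k], pbest[k] = c, pos[k].copy()
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest

# toy problem: 3 pulse-height channels, 2 energy bins, true spectrum [2, 1]
R = np.array([[1.0, 0.2], [0.5, 1.0], [0.1, 0.8]])
measured = R @ np.array([2.0, 1.0])
print(pso_unfold(R, measured))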
Statistical inference of static analysis rules
NASA Technical Reports Server (NTRS)
Engler, Dawson Richards (Inventor)
2009-01-01
Various apparatus and methods are disclosed for identifying errors in program code. Respective numbers of observances of at least one correctness rule by different code instances that relate to the at least one correctness rule are counted in the program code. Each code instance has an associated counted number of observances of the correctness rule by the code instance. Also counted are respective numbers of violations of the correctness rule by different code instances that relate to the correctness rule. Each code instance has an associated counted number of violations of the correctness rule by the code instance. A respective likelihood of the validity is determined for each code instance as a function of the counted number of observances and counted number of violations. The likelihood of validity indicates a relative likelihood that a related code instance is required to observe the correctness rule. The violations may be output in order of the likelihood of validity of a violated correctness rule.
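A tiny sketch of the counting-and-ranking idea follows. The scoring function (a z-style statistic against an even split) and the example rules are illustrative assumptions, not the patented formula: rules that are almost always observed score high, so their few violations are the most promising error reports.

import math

def rank_rules(counts):
    # counts maps a rule description -> (observances, violations).
    ranked = []
    for rule, (obs, viol) in counts.items():
        n = obs + viol
        score = (obs - 0.5 * n) / math.sqrt(n * 0.25) if n else 0.0
        ranked.append((score, viol, rule))
    return sorted(ranked, reverse=True)   # highest likelihood of validity first

example = {
    "lock(l) must be followed by unlock(l)": (120, 2),
    "check the return value of malloc()": (45, 40),
}
for score, viol, rule in rank_rules(example):
    print(f"{score:6.2f}  violations={viol:3d}  {rule}")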
Coding conventions and principles for a National Land-Change Modeling Framework
Donato, David I.
2017-07-14
This report establishes specific rules for writing computer source code for use with the National Land-Change Modeling Framework (NLCMF). These specific rules consist of conventions and principles for writing code primarily in the C and C++ programming languages. Collectively, these coding conventions and coding principles create an NLCMF programming style. In addition to detailed naming conventions, this report provides general coding conventions and principles intended to facilitate the development of high-performance software implemented with code that is extensible, flexible, and interoperable. Conventions for developing modular code are explained in general terms and also enabled and demonstrated through the appended templates for C++ base source-code and header files. The NLCMF limited-extern approach to module structure, code inclusion, and cross-module access to data is both explained in the text and then illustrated through the module templates. Advice on the use of global variables is provided.
Truncation Depth Rule-of-Thumb for Convolutional Codes
NASA Technical Reports Server (NTRS)
Moision, Bruce
2009-01-01
In this innovation, it is shown that a commonly used rule of thumb (that the truncation depth of a convolutional code should be five times the memory length, m, of the code) is accurate only for rate 1/2 codes. In fact, the truncation depth should be 2.5 m/(1 - r), where r is the code rate. The accuracy of this new rule is demonstrated by tabulating the distance properties of a large set of known codes. This new rule was derived by bounding the losses due to truncation as a function of the code rate. With regard to particular codes, a good indicator of the required truncation depth is the path length at which all paths that diverge from a particular path have accumulated the minimum distance of the code. It is shown that the new rule of thumb provides an accurate prediction of this depth for codes of varying rates.
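The new rule is simple enough to check directly; a minimal sketch (the memory and rate values are just examples):

def truncation_depth(memory, rate):
    # Rule of thumb from the abstract: depth is about 2.5 * m / (1 - r).
    return 2.5 * memory / (1.0 - rate)

# For a rate-1/2 code the formula reduces to the familiar 5*m ...
print(truncation_depth(6, 1/2))   # 30.0
# ... while a rate-3/4 code of the same memory needs twice that depth.
print(truncation_depth(6, 3/4))   # 60.0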
Quantized phase coding and connected region labeling for absolute phase retrieval.
Chen, Xiangcheng; Wang, Yuwei; Wang, Yajun; Ma, Mengchao; Zeng, Chunnian
2016-12-12
This paper proposes an absolute phase retrieval method for complex object measurement based on quantized phase coding and connected region labeling. A specific code sequence is embedded into the quantized phase of three coded fringes. Connected regions of different codes are labeled and assigned 3-digit codes combining the current period and its neighbors. Wrapped phase spanning more than 36 periods can be restored with reference to the code sequence. Experimental results verify the capability of the proposed method to measure multiple isolated objects.
Degenerate quantum codes and the quantum Hamming bound
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarvepalli, Pradeep; Klappenecker, Andreas
2010-03-15
The parameters of a nondegenerate quantum code must obey the Hamming bound. An important open problem in quantum coding theory is whether the parameters of a degenerate quantum code can violate this bound for nondegenerate quantum codes. In this article we show that Calderbank-Shor-Steane (CSS) codes, over a prime power alphabet q{>=}5, cannot beat the quantum Hamming bound. We prove a quantum version of the Griesmer bound for the CSS codes, which allows us to strengthen the Rains' bound that an [[n,k,d
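For orientation (the following is the standard statement of the bound, not quoted from the article), a nondegenerate [[n,k,d]]_q code that corrects t = floor((d-1)/2) errors must satisfy the quantum Hamming bound:

\sum_{j=0}^{t} \binom{n}{j}\,(q^{2}-1)^{j}\, q^{k} \;\le\; q^{n}, \qquad t = \left\lfloor \tfrac{d-1}{2} \right\rfloor .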
Protecting quantum memories using coherent parity check codes
NASA Astrophysics Data System (ADS)
Roffe, Joschka; Headley, David; Chancellor, Nicholas; Horsman, Dominic; Kendon, Viv
2018-07-01
Coherent parity check (CPC) codes are a new framework for the construction of quantum error correction codes that encode multiple qubits per logical block. CPC codes have a canonical structure involving successive rounds of bit and phase parity checks, supplemented by cross-checks to fix the code distance. In this paper, we provide a detailed introduction to CPC codes using conventional quantum circuit notation. We demonstrate the implementation of a CPC code on real hardware, by designing a [[4, 2, 2
System Design Description for the TMAD Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finfrock, S.H.
This document serves as the System Design Description (SDD) for the TMAD Code System, which includes the TMAD code and the LIBMAKR code. The SDD provides a detailed description of the theory behind the code, and the implementation of that theory. It is essential for anyone who is attempting to review or modify the code or who otherwise needs to understand the internal workings of the code. In addition, this document includes, in Appendix A, the System Requirements Specification for the TMAD System.
Simonaitis, Linas; McDonald, Clement J
2009-10-01
The utility of National Drug Codes (NDCs) and drug knowledge bases (DKBs) in the organization of prescription records from multiple sources was studied. The master files of most pharmacy systems include NDCs and local codes to identify the products they dispense. We obtained a large sample of prescription records from seven different sources. These records carried a national product code or a local code that could be translated into a national product code via their formulary master. We obtained mapping tables from five DKBs. We measured the degree to which the DKB mapping tables covered the national product codes carried in or associated with the sample of prescription records. Considering the total prescription volume, DKBs covered 93.0-99.8% of the product codes from three outpatient sources and 77.4-97.0% of the product codes from four inpatient sources. Among the in-patient sources, invented codes explained 36-94% of the noncoverage. Outpatient pharmacy sources rarely invented codes, which comprised only 0.11-0.21% of their total prescription volume, compared with inpatient pharmacy sources for which invented codes comprised 1.7-7.4% of their prescription volume. The distribution of prescribed products was highly skewed, with 1.4-4.4% of codes accounting for 50% of the message volume and 10.7-34.5% accounting for 90% of the message volume. DKBs cover the product codes used by outpatient sources sufficiently well to permit automatic mapping. Changes in policies and standards could increase coverage of product codes used by inpatient sources.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-20
... (Code 324), Field Border (Code 386), Filter Strip (Code 393), Land Smoothing (Code 466), Livestock... the implementation requirement document to the specifications and plans. Filter Strip (Code 393)--The...
Error-Detecting Identification Codes for Algebra Students.
ERIC Educational Resources Information Center
Sutherland, David C.
1990-01-01
Discusses common error-detecting identification codes using linear algebra terminology to provide an interesting application of algebra. Presents examples from the International Standard Book Number, the Universal Product Code, bank identification numbers, and the ZIP code bar code. (YP)
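As a concrete instance of the check-digit arithmetic behind such codes, here is a minimal ISBN-10 sketch (the example number is the commonly cited ISBN 0-306-40615-2; the function name is illustrative):

def isbn10_check_digit(first_nine):
    # Weighted sum of the first nine digits (weights 10 down to 2); the check
    # digit makes the total divisible by 11, with a value of 10 written as 'X'.
    total = sum(w * d for w, d in zip(range(10, 1, -1), first_nine))
    check = (-total) % 11
    return "X" if check == 10 else str(check)

print(isbn10_check_digit([0, 3, 0, 6, 4, 0, 6, 1, 5]))   # -> "2", i.e. ISBN 0-306-40615-2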
ERIC Educational Resources Information Center
Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark
2012-01-01
A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…
25 CFR 18.101 - May a tribe create and adopt its own tribal probate code?
Code of Federal Regulations, 2010 CFR
2010-04-01
Approval of Tribal Probate Codes, § 18.101: May a tribe create and adopt its own tribal probate code? Yes. A tribe may create and adopt a tribal probate code.
Automated Coding Software: Development and Use to Enhance Anti-Fraud Activities*
Garvin, Jennifer H.; Watzlaf, Valerie; Moeini, Sohrab
2006-01-01
This descriptive research project identified characteristics of automated coding systems that have the potential to detect improper coding and to minimize improper or fraudulent coding practices in the setting of automated coding used with the electronic health record (EHR). Recommendations were also developed for software developers and users of coding products to maximize anti-fraud practices. PMID:17238546
ERIC Educational Resources Information Center
Meadows, William C.
2011-01-01
Interest in North American Indian code talkers continues to increase. In addition to numerous works about the Navajo code talkers, several publications on other groups of Native American code talkers--including the Choctaw, Comanche, Hopi, Meskwaki, Canadian Cree--and about code talkers in general have appeared. This article chronicles recent…
50 CFR Table 15 to Part 679 - Gear Codes, Descriptions, and Use
Code of Federal Regulations, 2010 CFR
2010-10-01
Table 15 to Part 679—Gear Codes, Descriptions, and Use (X indicates where this code is used). Name of gear; use alphabetic code to complete the...
Quantum error-correcting codes from algebraic geometry codes of Castle type
NASA Astrophysics Data System (ADS)
Munuera, Carlos; Tenório, Wanderson; Torres, Fernando
2016-10-01
We study algebraic geometry codes producing quantum error-correcting codes by the CSS construction. We pay particular attention to the family of Castle codes. We show that many of the examples known in the literature in fact belong to this family of codes. We systematize these constructions by showing the common theory that underlies all of them.
Visual pattern image sequence coding
NASA Technical Reports Server (NTRS)
Silsbee, Peter; Bovik, Alan C.; Chen, Dapang
1990-01-01
The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compression ratios (30-40:1) that exceed those of all other technologies. These capabilities are associated with unprecedented coding efficiencies; coding and decoding operations are entirely linear with respect to image size and are 1-2 orders of magnitude faster than any previous high-compression technique. The visual pattern image sequence coding considered here exploits all the advantages of static VPIC while also reducing information along an additional, temporal dimension, to achieve unprecedented image sequence coding performance.
Entanglement-assisted quantum convolutional coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilde, Mark M.; Brun, Todd A.
2010-04-15
We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.
Coding for urologic office procedures.
Dowling, Robert A; Painter, Mark
2013-11-01
This article summarizes current best practices for documenting, coding, and billing common office-based urologic procedures. Topics covered include general principles, basic and advanced urologic coding, creation of medical records that support compliant coding practices, bundled codes and unbundling, global periods, modifiers for procedure codes, when to bill for evaluation and management services during the same visit, coding for supplies, and laboratory and radiology procedures pertinent to urology practice. Detailed information is included for the most common urology office procedures, and suggested resources and references are provided. This information is of value to physicians, office managers, and their coding staff. Copyright © 2013 Elsevier Inc. All rights reserved.
Liu, Shuo; Cui, Tie Jun; Zhang, Lei; Xu, Quan; Wang, Qiu; Wan, Xiang; Gu, Jian Qiang; Tang, Wen Xuan; Qing Qi, Mei; Han, Jia Guang; Zhang, Wei Li; Zhou, Xiao Yang; Cheng, Qiang
2016-10-01
The concept of the coding metasurface makes a link between physical metamaterial particles and digital codes, and hence it is possible to perform digital signal processing on the coding metasurface to realize unusual physical phenomena. Here, this study performs Fourier operations on coding metasurfaces and proposes a principle, called scattering-pattern shift, based on the convolution theorem, which allows steering of the scattering pattern to an arbitrarily predesigned direction. Owing to the constant reflection amplitude of the coding particles, the required coding pattern can be simply achieved by the modulus of two coding matrices. This study demonstrates that the scattering patterns directly calculated from the coding pattern using the Fourier transform are in excellent agreement with numerical simulations based on realistic coding structures, providing an efficient method for optimizing coding patterns to achieve predesigned scattering beams. The most important advantage of this approach over previous schemes for producing anomalous single-beam scattering is its flexible and continuous control of arbitrary directions. This work opens a new route to studying metamaterials from a fully digital perspective, predicting the possibility of combining conventional theorems in digital signal processing with the coding metasurface to realize more powerful manipulations of electromagnetic waves.
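A rough numerical sketch of the idea (an illustrative interpretation, not the authors' code: the chessboard and stripe patterns, array size, and mod-2 combination are assumptions) estimates the scattering pattern of a 1-bit coding metasurface as a 2-D FFT and steers it by adding a periodic gradient code modulo 2:

import numpy as np

def scattering_pattern(code_matrix):
    # Far-field pattern of a 1-bit coding metasurface estimated as the 2-D FFT
    # of the element reflection coefficients (code 0 -> phase 0, code 1 -> phase pi).
    reflection = np.exp(1j * np.pi * code_matrix)
    return np.abs(np.fft.fftshift(np.fft.fft2(reflection, s=(256, 256))))

N = 32
chess = np.indices((N, N)).sum(axis=0) % 2              # chessboard coding pattern
stripes = np.tile((np.arange(N) // 2) % 2, (N, 1))      # periodic gradient code 0 0 1 1 0 0 1 1 ...
# "Scattering-pattern shift": adding the gradient code modulo 2 multiplies the
# aperture field by a periodic phase, which shifts (steers) the scattered beams.
steered = (chess + stripes) % 2
p_original, p_steered = scattering_pattern(chess), scattering_pattern(steered)
print(p_original.shape, float(p_original.max()), float(p_steered.max()))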
On fuzzy semantic similarity measure for DNA coding.
Ahmad, Muneer; Jung, Low Tang; Bhuiyan, Md Al-Amin
2016-02-01
A coding measure scheme numerically translates the DNA sequence to a time domain signal for protein coding region identification. A number of coding measure schemes based on numerology, geometry, fixed mapping, statistical characteristics and chemical attributes of nucleotides have been proposed in recent decades. Such coding measure schemes lack the biologically meaningful aspects of nucleotide data and hence do not significantly discriminate coding regions from non-coding regions. This paper presents a novel fuzzy semantic similarity measure (FSSM) coding scheme centering on FSSM codons' clustering and the genetic code context of nucleotides. Certain natural characteristics of nucleotides, i.e. appearance as a unique combination of triplets, preserving special structure and occurrence, and ability to own and share density distributions in codons, have been exploited in FSSM. The nucleotides' fuzzy behaviors, semantic similarities and defuzzification based on the center of gravity of nucleotides revealed a strong correlation between nucleotides in codons. The proposed FSSM coding scheme attains a significant enhancement in coding region identification, i.e. 36-133% as compared to other existing coding measure schemes tested over more than 250 benchmarked and randomly taken DNA datasets of different organisms. Copyright © 2015 Elsevier Ltd. All rights reserved.
Emmorey, Karen; Petrich, Jennifer; Gollan, Tamar H.
2012-01-01
Bilinguals who are fluent in American Sign Language (ASL) and English often produce code-blends - simultaneously articulating a sign and a word while conversing with other ASL-English bilinguals. To investigate the cognitive mechanisms underlying code-blend processing, we compared picture-naming times (Experiment 1) and semantic categorization times (Experiment 2) for code-blends versus ASL signs and English words produced alone. In production, code-blending did not slow lexical retrieval for ASL and actually facilitated access to low-frequency signs. However, code-blending delayed speech production because bimodal bilinguals synchronized English and ASL lexical onsets. In comprehension, code-blending speeded access to both languages. Bimodal bilinguals’ ability to produce code-blends without any cost to ASL implies that the language system either has (or can develop) a mechanism for switching off competition to allow simultaneous production of close competitors. Code-blend facilitation effects during comprehension likely reflect cross-linguistic (and cross-modal) integration at the phonological and/or semantic levels. The absence of any consistent processing costs for code-blending illustrates a surprising limitation on dual-task costs and may explain why bimodal bilinguals code-blend more often than they code-switch. PMID:22773886
An international survey of building energy codes and their implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Meredydd; Roshchanka, Volha; Graham, Peter
Buildings are key to low-carbon development everywhere, and many countries have introduced building energy codes to improve energy efficiency in buildings. Yet, building energy codes can only deliver results when the codes are implemented. For this reason, studies of building energy codes need to consider implementation of building energy codes in a consistent and comprehensive way. This research identifies elements and practices in implementing building energy codes, covering codes in 22 countries that account for 70% of global energy demand from buildings. Access to the benefits of building energy codes depends on comprehensive coverage of buildings by type, age, size, and geographic location; an implementation framework that involves a certified agency to inspect construction at critical stages; and independently tested, rated, and labeled building energy materials. Training and supporting tools are another element of successful code implementation, and their role is growing in importance, given the increasing flexibility and complexity of building energy codes. Some countries have also introduced compliance evaluation and compliance checking protocols to improve implementation. This article provides examples of practices that countries have adopted to assist with implementation of building energy codes.