Protograph LDPC Codes for the Erasure Channel
NASA Technical Reports Server (NTRS)
Pollara, Fabrizio; Dolinar, Samuel J.; Divsalar, Dariush
2006-01-01
This viewgraph presentation reviews the use of protograph Low Density Parity Check (LDPC) codes for erasure channels. A protograph is a Tanner graph with a relatively small number of nodes. A "copy-and-permute" operation can be applied to the protograph to obtain larger derived graphs of various sizes. For very high code rates and short block sizes, a low asymptotic threshold criterion is not the best approach to designing LDPC codes. Simple protographs with much regularity and low maximum node degrees appear to be the best choices. Quantized-rateless protograph LDPC codes can be built by careful design of the protograph such that multiple puncturing patterns will still permit message-passing decoding to proceed.
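To make the copy-and-permute operation concrete, here is a minimal Python sketch (ours, not from the presentation; the toy protograph and lifting factor are assumptions) that lifts a small protograph biadjacency matrix into the biadjacency matrix of a larger derived graph:

```python
import numpy as np

rng = np.random.default_rng(0)

def lift_protograph(B, Z):
    """Copy-and-permute: make Z copies of the protograph described by
    B (checks x variables, entries = number of parallel edges) and
    permute the endpoints of each edge bundle among the copies."""
    m, n = B.shape
    H = np.zeros((m * Z, n * Z), dtype=int)
    for i in range(m):
        for j in range(n):
            for _ in range(B[i, j]):              # one permutation per edge
                perm = rng.permutation(Z)
                H[i * Z + np.arange(Z), j * Z + perm] += 1
    return H

# Assumed toy protograph: 2 check nodes, 4 variable nodes.
B = np.array([[1, 1, 1, 0],
              [0, 1, 1, 2]])    # the entry 2 denotes two parallel edges
H = lift_protograph(B, Z=8)
print(H.shape)                  # (16, 32): the derived graph is 8x larger
```

The derived graph inherits the protograph's local structure (degree profile), which is why thresholds and enumerators can be computed on the small protograph and then transferred to the lifted code.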
Rate-compatible protograph LDPC code families with linear minimum distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J. (Inventor); Jones, Christopher R. (Inventor)
2012-01-01
Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds, and families of such codes of different rates can be decoded efficiently using a common decoding architecture.
Rate-Compatible Protograph LDPC Codes
NASA Technical Reports Server (NTRS)
Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)
2014-01-01
Digital communication coding methods resulting in rate-compatible low-density parity-check (LDPC) codes built from protographs. The described digital coding methods start with a desired code rate and a selection of the numbers of variable nodes and check nodes to be used in the protograph. Constraints are set to satisfy a linear minimum distance growth property for the protograph. All possible edges in the graph are searched for the minimum iterative decoding threshold, and the protograph with the lowest iterative decoding threshold is selected. Protographs designed in this manner are used in decode-and-forward relay channels.
DNA Barcoding through Quaternary LDPC Codes
Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar
2015-01-01
For many parallel applications of Next-Generation Sequencing (NGS) technologies, short barcodes able to accurately multiplex a large number of samples are in demand. To address these competing requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine scale of barcode sizes (BCH) or have intrinsically poor error-correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10−2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate on the order of 10−9, at the expense of a read loss rate on the order of 10−6. PMID:26492348
Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding
NASA Astrophysics Data System (ADS)
Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.
2016-03-01
In this paper, we investigate the joint design of quasi-cyclic low-density parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay are described. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles of both types are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation effectively combines the cooperation gain and the channel coding gain, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than that of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
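As a hedged illustration of the base-matrix/exponent-matrix machinery this abstract relies on (our own toy sketch, not the authors' construction), the following Python code expands an exponent matrix into a QC-LDPC parity-check matrix of circulant permutation blocks and tests the standard algebraic condition for girth-4 cycles:

```python
import numpy as np
from itertools import combinations

def circulant(shift, Z):
    """Z x Z identity matrix with columns cyclically shifted by `shift`."""
    return np.roll(np.eye(Z, dtype=int), shift, axis=1)

def expand(E, Z):
    """Expand exponent matrix E (entries are shifts; -1 marks a zero block)
    into the full QC-LDPC parity-check matrix."""
    return np.block([[np.zeros((Z, Z), dtype=int) if e < 0 else circulant(e, Z)
                      for e in row] for row in E])

def has_girth4(E, Z):
    """A length-4 cycle exists iff, for some row pair (i1, i2) and column
    pair (j1, j2) with all four blocks nonzero,
    E[i1,j1] - E[i1,j2] + E[i2,j2] - E[i2,j1] == 0 (mod Z)."""
    m, n = E.shape
    for i1, i2 in combinations(range(m), 2):
        for j1, j2 in combinations(range(n), 2):
            es = [E[i1, j1], E[i1, j2], E[i2, j2], E[i2, j1]]
            if min(es) >= 0 and (es[0] - es[1] + es[2] - es[3]) % Z == 0:
                return True
    return False

E = np.array([[0, 1, 3],
              [0, 2, 6]])            # an assumed toy exponent matrix
print(has_girth4(E, Z=7))            # False: this choice avoids 4-cycles
H = expand(E, Z=7)                   # 14 x 21 parity-check matrix
```

The joint design described in the abstract additionally has to cancel 4-cycles in the equivalent parity-check matrix combining the source and relay codes, not just in each matrix separately.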
A novel concatenated code based on the improved SCG-LDPC code for optical transmission systems
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Xie, Ya; Wang, Lin; Huang, Sheng; Wang, Yong
2013-01-01
Based on the optimization and improvement of the construction method for the systematically constructed Gallager (SCG) (4, k) code, a novel SCG low density parity check (SCG-LDPC)(3969,3720) code suitable for optical transmission systems is constructed. A novel SCG-LDPC(6561,6240) code with a code rate of 95.1% is then constructed by increasing the length of the SCG-LDPC(3969,3720) code, so that the code rate of LDPC codes can better meet the high requirements of optical transmission systems. A novel concatenated code is then constructed by concatenating the SCG-LDPC(6561,6240) code with the BCH(127,120) code, giving a code rate of 94.5%. The simulation results and analyses show that the net coding gain (NCG) of the BCH(127,120)+SCG-LDPC(6561,6240) concatenated code is 2.28 dB and 0.48 dB more than those of the classic RS(255,239) code and the SCG-LDPC(6561,6240) code, respectively, at a bit error rate (BER) of 10-7.
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-03-01
A novel lower-complexity construction scheme of quasi-cyclic low-density parity-check (QC-LDPC) codes for optical transmission systems is proposed based on the structure of the parity-check matrix for the Richardson-Urbanke (RU) algorithm. Furthermore, a novel irregular QC-LDPC(4 288, 4 020) code with a high code rate of 0.937 is constructed by this scheme. The simulation analyses show that the net coding gain (NCG) of the novel irregular QC-LDPC(4 288, 4 020) code is 2.08 dB, 1.25 dB and 0.29 dB more than those of the classic RS(255, 239) code, the LDPC(32 640, 30 592) code and the irregular QC-LDPC(3 843, 3 603) code, respectively, at a bit error rate (BER) of 10-6. The irregular QC-LDPC(4 288, 4 020) code also has lower encoding/decoding complexity than the LDPC(32 640, 30 592) code and the irregular QC-LDPC(3 843, 3 603) code, making it better suited to the increasing requirements of high-speed optical transmission systems.
Lin, Changyu; Zou, Ding; Liu, Tao; Djordjevic, Ivan B
2016-08-08
A mutual-information-inspired nonbinary coded modulation design with non-uniform shaping is proposed. Instead of traditional power-of-two signal constellation sizes, we design 5-QAM, 7-QAM and 9-QAM constellations, which can be used in adaptive optical networks. The non-uniform shaping and the LDPC code rate are jointly considered in the design, which yields a better-performing scheme at the same SNR values. A matched nonbinary (NB) LDPC code is used for this scheme, which further improves the coding gain and the overall performance. We analyze both coding performance and system SNR performance. We show that the proposed NB LDPC-coded 9-QAM has more than 2 dB gain in symbol SNR compared to traditional LDPC-coded star-8-QAM. Moreover, the proposed NB LDPC-coded 5-QAM and 7-QAM perform even better than LDPC-coded QPSK.
Low-density parity-check codes for volume holographic memory systems.
Pishro-Nik, Hossein; Rahnavard, Nazanin; Ha, Jeongseok; Fekri, Faramarz; Adibi, Ali
2003-02-10
We investigate the application of low-density parity-check (LDPC) codes in volume holographic memory (VHM) systems. We show that a carefully designed irregular LDPC code performs very well in VHM systems. We optimize high-rate LDPC codes for the nonuniform error pattern in holographic memories to reduce the bit error rate substantially. Prior knowledge of the noise distribution is used for designing as well as decoding the LDPC codes. We show that these codes have performance superior to that of Reed-Solomon (RS) codes and of regular LDPC counterparts. Our simulations show that the maximum storage capacity of holographic memories can be increased by more than 50 percent if irregular LDPC codes with soft-decision decoding are used instead of the conventionally employed RS codes with hard-decision decoding. The performance of these LDPC codes is close to the information-theoretic capacity.
Zhang, Yequn; Arabaci, Murat; Djordjevic, Ivan B
2012-04-09
Leveraging the advanced coherent optical communication technologies, this paper explores the feasibility of using four-dimensional (4D) nonbinary LDPC-coded modulation (4D-NB-LDPC-CM) schemes for long-haul transmission in future optical transport networks. In contrast to our previous works on 4D-NB-LDPC-CM which considered amplified spontaneous emission (ASE) noise as the dominant impairment, this paper undertakes transmission in a more realistic optical fiber transmission environment, taking into account impairments due to dispersion effects, nonlinear phase noise, Kerr nonlinearities, and stimulated Raman scattering in addition to ASE noise. We first reveal the advantages of using 4D modulation formats in LDPC-coded modulation instead of conventional two-dimensional (2D) modulation formats used with polarization-division multiplexing (PDM). Then we demonstrate that 4D LDPC-coded modulation schemes with nonbinary LDPC component codes significantly outperform not only their conventional PDM-2D counterparts but also the corresponding 4D bit-interleaved LDPC-coded modulation (4D-BI-LDPC-CM) schemes, which employ binary LDPC codes as component codes. We also show that the transmission reach improvement offered by the 4D-NB-LDPC-CM over 4D-BI-LDPC-CM increases as the underlying constellation size and hence the spectral efficiency of transmission increases. Our results suggest that 4D-NB-LDPC-CM can be an excellent candidate for long-haul transmission in next-generation optical networks.
A good performance watermarking LDPC code used in high-speed optical fiber communication system
NASA Astrophysics Data System (ADS)
Zhang, Wenbo; Li, Chao; Zhang, Xiaoguang; Xi, Lixia; Tang, Xianfeng; He, Wenxue
2015-07-01
A watermarking LDPC code, a strategy designed to improve the performance of the traditional LDPC code, is introduced. By inserting pre-defined watermarking bits into the original LDPC code, a more accurate estimate of the noise level in the fiber channel can be obtained. This estimate is then used to modify the probability distribution function (PDF) used in the initialization of the belief propagation (BP) decoding algorithm. The algorithm was tested in a 128 Gb/s PDM-DQPSK optical communication system, and the results showed that the watermarking LDPC code has better tolerance to polarization mode dispersion (PMD) and nonlinearity than the traditional LDPC code. Moreover, at the cost of about 2.4% additional redundancy for the watermarking bits, the decoding efficiency of the watermarking LDPC code is about twice that of the traditional one.
A rate-compatible family of protograph-based LDPC codes built by expurgation and lengthening
NASA Technical Reports Server (NTRS)
Dolinar, Sam
2005-01-01
We construct a protograph-based rate-compatible family of low-density parity-check codes that cover a very wide range of rates from 1/2 to 16/17, perform within about 0.5 dB of their capacity limits for all rates, and can be decoded conveniently and efficiently with a common hardware implementation.
Ensemble Weight Enumerators for Protograph LDPC Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush
2006-01-01
Recently, LDPC codes with projected-graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear-minimum-distance property is sensitive to the proportion of degree-2 variable nodes. The derived ensemble weight enumerators show that the linear-minimum-distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.
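For background, the criterion referenced here is usually phrased through the normalized asymptotic growth rate of the ensemble-average weight enumerator (our paraphrase, not quoted from the paper; \bar{A}_d denotes the ensemble-average number of codewords of Hamming weight d in a length-n code):

```latex
r(\delta) \;=\; \limsup_{n\to\infty} \frac{1}{n}\,\ln \bar{A}_{\lfloor \delta n \rfloor},
\qquad \delta = d/n .
```

If there is a \delta^* > 0 with r(\delta) < 0 for all \delta in (0, \delta^*), then codewords of weight below \delta^* n are exponentially rare, and a typical code from the ensemble has minimum distance growing linearly with n.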
Design of ACM system based on non-greedy punctured LDPC codes
NASA Astrophysics Data System (ADS)
Lu, Zijun; Jiang, Zihong; Zhou, Lin; He, Yucheng
2017-08-01
In this paper, an adaptive coded modulation (ACM) scheme based on rate-compatible LDPC (RC-LDPC) codes is designed. The RC-LDPC codes are constructed by a non-greedy puncturing method which shows good performance in the high-code-rate region. Moreover, an incremental redundancy scheme for the LDPC-based ACM system over the AWGN channel is proposed. Under this scheme, code rates vary from 2/3 to 5/6 and the complexity of the ACM system is reduced. Simulations show that the proposed ACM system achieves increasingly significant coding gains together with higher throughput.
Zou, Ding; Djordjevic, Ivan B
2016-09-05
In this paper, we propose a rate-adaptive FEC scheme based on LDPC codes, together with its software-reconfigurable unified FPGA architecture. By FPGA emulation, we demonstrate that the proposed class of rate-adaptive LDPC codes based on shortening, with overheads from 25% to 42.9%, provides coding gains ranging from 13.08 dB to 14.28 dB at a post-FEC BER of 10-15 for BPSK transmission. In addition, the proposed rate-adaptive LDPC coding has been demonstrated in combination with higher-order modulations, including QPSK, 8-QAM, 16-QAM, 32-QAM, and 64-QAM, covering a wide range of signal-to-noise ratios. Furthermore, we apply unequal error protection by employing different LDPC codes on different bits in 16-QAM and 64-QAM, which yields an additional 0.5 dB gain compared to conventional LDPC-coded modulation with the same overall code rate.
LDPC coded OFDM over the atmospheric turbulence channel.
Djordjevic, Ivan B; Vasic, Bane; Neifeld, Mark A
2007-05-14
Low-density parity-check (LDPC) coded optical orthogonal frequency division multiplexing (OFDM) is shown to significantly outperform LDPC coded on-off keying (OOK) over the atmospheric turbulence channel in terms of both coding gain and spectral efficiency. In the regime of strong turbulence at a bit-error rate of 10(-5), the coding gain improvement of the LDPC coded single-side band unclipped-OFDM system with 64 sub-carriers is larger than the coding gain of the LDPC coded OOK system by 20.2 dB for quadrature-phase-shift keying (QPSK) and by 23.4 dB for binary-phase-shift keying (BPSK).
A novel QC-LDPC code based on the finite field multiplicative group for optical communications
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Xu, Liang; Tong, Qing-zhen
2013-09-01
A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the finite field multiplicative group, offering simpler construction, more flexible adjustment of code length and code rate, and lower encoding/decoding complexity. Moreover, a regular QC-LDPC(5334,4962) code is constructed. The simulation results show that the constructed QC-LDPC(5334,4962) code achieves good error-correction performance over the additive white Gaussian noise (AWGN) channel with iterative sum-product algorithm (SPA) decoding. At a bit error rate (BER) of 10-6, the net coding gain (NCG) of the constructed QC-LDPC(5334,4962) code is 1.8 dB, 0.9 dB and 0.2 dB more than those of the classic RS(255,239) code in ITU-T G.975, the LDPC(32640,30592) code in ITU-T G.975.1 and the SCG-LDPC(3969,3720) code constructed by the random method, respectively. It is therefore well suited to optical communication systems.
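A classical concrete instance of a multiplicative-group construction (used here only as a stand-in illustration, since the abstract does not give its exact shift rule) is Tanner's (155, 64) QC-LDPC code, whose circulant shifts are products of powers of two elements of the multiplicative group of GF(31):

```python
import numpy as np

def group_exponents(p, a, b, J, L):
    """Exponent matrix E[i][j] = b^i * a^j mod p, built from elements a, b
    of the multiplicative group of GF(p); circulant size is p."""
    return np.array([[(pow(b, i, p) * pow(a, j, p)) % p for j in range(L)]
                     for i in range(J)])

def expand(E, Z):
    """Replace each exponent e by the Z x Z circulant: identity shifted by e."""
    return np.block([[np.roll(np.eye(Z, dtype=int), e, axis=1) for e in row]
                     for row in E])

# Tanner's classic code: p = 31, a = 2 (order 5), b = 5 (order 3).
E = group_exponents(p=31, a=2, b=5, J=3, L=5)
H = expand(E, Z=31)
print(H.shape, set(H.sum(axis=0)))   # (93, 155) {3}: a (3,5)-regular code
```

The group structure provides the regularity and girth properties such constructions exploit; code length and rate are adjusted by choosing p and the orders of a and b.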
Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M
2010-02-01
In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose a rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency due to symbol-level instead of bit-level processing but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. Compared to its prior-art binary counterpart, the proposed NB-LDPC-CM scheme thus better addresses the needs of future OTNs: achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN.
RETRACTED — PMD mitigation through interleaving LDPC codes with polarization scramblers
NASA Astrophysics Data System (ADS)
Han, Dahai; Chen, Haoran; Xi, Lixia
2012-11-01
The combination of forward error correction (FEC) and distributed fast polarization scramblers (D-FPSs) has been proven to be an effective method to mitigate polarization mode dispersion (PMD) in high-speed optical fiber communication systems. Low-density parity-check (LDPC) codes are newly introduced into the PMD mitigation scheme with D-FPSs in this paper as one of the promising FEC codes for achieving better performance. The scrambling speed of the FPS for the LDPC(2040, 1903) coded system is discussed, and a reasonable speed of 10 MHz is obtained from the simulation results. For easy application in a practical large-scale integrated (LSI) circuit, the number of iterations in decoding the LDPC codes is also investigated. The PMD tolerance and cut-off optical signal-to-noise ratio (OSNR) of the LDPC codes are compared with Reed-Solomon (RS) codes under different conditions. In the simulation, interleaving the LDPC codes brings an incremental improvement in error correction, and the PMD tolerance is 10 ps at OSNR = 11.4 dB. The significance of this work is that LDPC codes are a viable substitute for traditional RS codes with D-FPSs, and all of the executable code files are open to researchers who have a practical LSI platform for PMD mitigation.
PMD mitigation through interleaving LDPC codes with polarization scramblers
NASA Astrophysics Data System (ADS)
Han, Dahai; Chen, Haoran; Xi, Lixia
2013-09-01
The combination of forward error correction (FEC) and distributed fast polarization scramblers (D-FPSs) has been proven to be an effective method to mitigate polarization mode dispersion (PMD) in high-speed optical fiber communication systems. Low-density parity-check (LDPC) codes are newly introduced into the PMD mitigation scheme with D-FPSs in this article as one of the promising FEC codes for achieving better performance. The scrambling speed of the FPS for the LDPC(2040, 1903) coded system is discussed, and a reasonable speed of 10 MHz is obtained from the simulation results. For easy application in a practical large-scale integrated (LSI) circuit, the number of iterations in decoding the LDPC codes is also investigated. The PMD tolerance and cut-off optical signal-to-noise ratio (OSNR) of the LDPC codes are compared with Reed-Solomon (RS) codes under different conditions. In the simulation, interleaving the LDPC codes brings an incremental improvement in error correction, and the PMD tolerance is 10 ps at OSNR = 11.4 dB. The significance of this work is that LDPC codes are a viable substitute for traditional RS codes with D-FPSs, and all of the executable code files are open to researchers who have a practical LSI platform for PMD mitigation.
A novel construction method of QC-LDPC codes based on CRT for optical communications
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-05-01
A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the Chinese remainder theorem (CRT). The method can not only increase the code length without reducing the girth, but also greatly enhance the code rate, so it is easy to construct high-rate codes. The simulation results show that at a bit error rate (BER) of 10-7, the net coding gain (NCG) of the regular QC-LDPC(4 851, 4 546) code is 2.06 dB, 1.36 dB, 0.53 dB and 0.31 dB more than those of the classic RS(255, 239) code in ITU-T G.975, the LDPC(32 640, 30 592) code in ITU-T G.975.1, the QC-LDPC(3 664, 3 436) code constructed by the improved combining construction method based on the CRT, and the irregular QC-LDPC(3 843, 3 603) code constructed by the construction method based on the Galois field (GF(q)) multiplicative group, respectively. All five codes have the same code rate of 0.937. Therefore, the regular QC-LDPC(4 851, 4 546) code constructed by the proposed method has excellent error-correction performance and is well suited to optical transmission systems.
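The CRT step itself is compact. Below is a hedged sketch (toy exponent matrices assumed; the paper's improved combining rule may differ) of how two exponent matrices with coprime circulant sizes m1 and m2 combine elementwise into one of size m1*m2, which lengthens the code without shortening the girth:

```python
import numpy as np

def crt_pair(e1, m1, e2, m2):
    """Smallest e >= 0 with e = e1 (mod m1) and e = e2 (mod m2), gcd(m1,m2)=1."""
    inv = pow(m1, -1, m2)                      # modular inverse of m1 mod m2
    return e1 + m1 * (((e2 - e1) * inv) % m2)

def crt_combine(E1, m1, E2, m2):
    """Elementwise CRT combination of two exponent matrices. Here -1 marks a
    zero block; the combined entry is a zero block unless both inputs are
    nonzero (an assumed convention for this sketch)."""
    E1, E2 = np.asarray(E1), np.asarray(E2)
    out = np.full(E1.shape, -1, dtype=int)
    for idx in np.ndindex(*E1.shape):
        if E1[idx] >= 0 and E2[idx] >= 0:
            out[idx] = crt_pair(int(E1[idx]), m1, int(E2[idx]), m2)
    return out

E1 = [[0, 1], [2, 3]]       # assumed toy exponents, circulant size 5
E2 = [[1, 0], [4, 2]]       # assumed toy exponents, circulant size 7
print(crt_combine(E1, 5, E2, 7))   # exponents modulo 35
```

Because reducing the combined exponents modulo m1 or m2 recovers the component matrices, a 4-cycle in the combined code would have to exist in both component codes, which is the intuition behind "increasing the code length without reducing the girth".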
Construction of a new regular LDPC code for optical transmission systems
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Tong, Qing-zhen; Xu, Liang; Huang, Sheng
2013-05-01
A novel construction method of the check matrix for regular low density parity check (LDPC) codes is proposed. A novel regular systematically constructed Gallager (SCG)-LDPC(3969,3720) code with a code rate of 93.7% and a redundancy of 6.69% is constructed. The simulation results show that the net coding gain (NCG) and the distance from the Shannon limit of the novel SCG-LDPC(3969,3720) code can be improved by about 1.93 dB and 0.98 dB, respectively, at a bit error rate (BER) of 10-8, compared with those of the classic RS(255,239) code in the ITU-T G.975 recommendation and the LDPC(32640,30592) code in the ITU-T G.975.1 recommendation with the same code rate of 93.7% and the same redundancy of 6.69%. Therefore, the proposed regular SCG-LDPC(3969,3720) code has excellent performance and is well suited to high-speed long-haul optical transmission systems.
NASA Astrophysics Data System (ADS)
Huang, Sheng; Ao, Xiang; Li, Yuan-yuan; Zhang, Rui
2016-09-01
In order to meet the needs of the rapid development of optical communication systems, a construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes based on the multiplicative group of a finite field is proposed. The Tanner graph of the parity-check matrix of a code constructed by this method has no cycles of length 4, which ensures that the obtained code has good distance properties. Simulation results show that at a bit error rate (BER) of 10-6, in the same simulation environment, the net coding gain (NCG) of the proposed QC-LDPC(3 780, 3 540) code with a code rate of 93.7% is improved by 2.18 dB and 1.6 dB compared with those of the RS(255, 239) code in ITU-T G.975 and the LDPC(32 640, 30 592) code in ITU-T G.975.1, respectively. In addition, the NCG of the proposed QC-LDPC(3 780, 3 540) code is 0.2 dB and 0.4 dB higher than those of the SG-QC-LDPC(3 780, 3 540) code based on two different subgroups of a finite field and the AS-QC-LDPC(3 780, 3 540) code based on two arbitrary sets of a finite field, respectively. Thus, the proposed QC-LDPC(3 780, 3 540) code can be well applied in optical communication systems.
Fast QC-LDPC code for free space optical communication
NASA Astrophysics Data System (ADS)
Wang, Jin; Zhang, Qi; Udeh, Chinonso Paschal; Wu, Rangzhong
2017-02-01
Free-space optical (FSO) communication systems use the atmosphere as the propagation medium, so atmospheric turbulence effects lead to multiplicative noise correlated with the signal intensity. In order to suppress the signal fading induced by multiplicative noise, we propose a fast quasi-cyclic (QC) low-density parity-check (LDPC) code for FSO communication systems. As a linear block code based on a sparse matrix, QC-LDPC performs extremely close to the Shannon limit. Current studies on LDPC codes in FSO communications mainly focus on the Gaussian and Rayleigh channels. In this study, the LDPC code is designed for the atmospheric turbulence channel, which is neither Gaussian nor Rayleigh and is closer to the practical situation. Based on the characteristics of the atmospheric channel, modeled by the log-normal distribution and the K-distribution, we design a special QC-LDPC code and derive the log-likelihood ratio (LLR). An irregular QC-LDPC code for fast coding, with variable rates, is proposed in this paper. The proposed code achieves the excellent performance of LDPC codes and exhibits high efficiency at low rates, stability at high rates and a small number of iterations. Belief propagation (BP) decoding results show that the bit error rate (BER) clearly decreases as the signal-to-noise ratio (SNR) increases. Therefore, LDPC channel coding can effectively improve the performance of FSO. At the same time, the post-decoding BER keeps decreasing as the SNR increases, with no error floor at which the improvement slows down.
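The LLR derivation mentioned above depends on the turbulence model. As a rough sketch under simplifying assumptions (our own example: on-off keying, unit-mean log-normal intensity fading, AWGN, channel state averaged out by Monte Carlo; the paper's exact parameters and its K-distribution case are not reproduced), the bit LLR fed to the BP decoder can be computed as:

```python
import numpy as np

def llr_ook_lognormal(y, sigma_n, sigma_x, n_mc=10_000, seed=1):
    """LLR of an OOK bit over a log-normal turbulence channel with AWGN.
    The intensity is I = exp(2X) with X ~ N(-sigma_x^2, sigma_x^2), so that
    E[I] = 1; the unknown intensity is averaged out by Monte Carlo."""
    rng = np.random.default_rng(seed)
    X = rng.normal(-sigma_x**2, sigma_x, n_mc)
    I = np.exp(2 * X)                            # intensity samples
    # p(y|s=0) = N(0, sigma_n^2);  p(y|s=1) = E_I[ N(I, sigma_n^2) ]
    p1 = np.mean(np.exp(-(y - I) ** 2 / (2 * sigma_n**2)))
    p0 = np.exp(-(y ** 2) / (2 * sigma_n**2))
    return float(np.log(p1 / p0))

print(llr_ook_lognormal(0.9, sigma_n=0.3, sigma_x=0.25))   # large positive
print(llr_ook_lognormal(0.05, sigma_n=0.3, sigma_x=0.25))  # negative
```

With the K-distributed model used for stronger turbulence, only the density of I changes; the same averaging yields the LLR.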
LDPC-coded orbital angular momentum (OAM) modulation for free-space optical communication.
Djordjevic, Ivan B; Arabaci, Murat
2010-11-22
An orbital angular momentum (OAM) based LDPC-coded modulation scheme suitable for use in FSO communication is proposed. We demonstrate that the proposed scheme can operate under strong atmospheric turbulence regime and enable 100 Gb/s optical transmission while employing 10 Gb/s components. Both binary and nonbinary LDPC-coded OAM modulations are studied. In addition to providing better BER performance, the nonbinary LDPC-coded modulation reduces overall decoder complexity and latency. The nonbinary LDPC-coded OAM modulation provides a net coding gain of 9.3 dB at the BER of 10(-8). The maximum-ratio combining scheme outperforms the corresponding equal-gain combining scheme by almost 2.5 dB.
Discussion on LDPC Codes and Uplink Coding
NASA Technical Reports Server (NTRS)
Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio
2007-01-01
This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error-correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts showing the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes, and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP) and the recommended codes. A design for the pseudo-randomizer with LDPC decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.
Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More
NASA Technical Reports Server (NTRS)
Kou, Yu; Lin, Shu; Fossorier, Marc
1999-01-01
Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes had been reported, and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high-rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite-geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit-flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
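The LFSR encoding advantage mentioned here is easy to see in miniature. The following hedged sketch (a generic cyclic-code encoder with an assumed (7,4) example, not an actual finite-geometry code) computes the systematic parity bits through the same shift-register recurrence an LFSR implements in hardware:

```python
def cyclic_encode(msg_bits, g):
    """Systematic cyclic encoding: parity(x) = x^(n-k) * m(x) mod g(x) over
    GF(2), computed with the shift-register recurrence an LFSR implements.
    `g` is the generator polynomial as a bit list, highest degree first."""
    r = [0] * (len(g) - 1)                 # register state = running remainder
    for bit in msg_bits:                   # message bits, highest order first
        feedback = bit ^ r[0]
        r = r[1:] + [0]                    # shift the register
        if feedback:
            r = [a ^ b for a, b in zip(r, g[1:])]   # XOR in g(x)
    return list(msg_bits) + r              # codeword = message followed by parity

# Assumed demo: (7,4) cyclic Hamming code, g(x) = x^3 + x + 1 -> [1, 0, 1, 1]
print(cyclic_encode([1, 0, 0, 1], g=[1, 0, 1, 1]))   # [1, 0, 0, 1, 1, 1, 0]
```

For a cyclic finite-geometry LDPC code the same loop runs with its much longer generator polynomial, so encoding cost stays linear in the block length with trivial hardware.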
FPGA implementation of concatenated non-binary QC-LDPC codes for high-speed optical transport.
Zou, Ding; Djordjevic, Ivan B
2015-06-01
In this paper, we propose a soft-decision-based FEC scheme that is the concatenation of a non-binary LDPC code and a hard-decision FEC code. The proposed NB-LDPC + RS scheme with an overhead of 27.06% provides a superior NCG of 11.9 dB at a post-FEC BER of 10-15. As a result, the proposed NB-LDPC codes represent a strong soft-decision FEC candidate for beyond-100 Gb/s optical transmission systems.
FPGA implementation of high-performance QC-LDPC decoder for optical communications
NASA Astrophysics Data System (ADS)
Zou, Ding; Djordjevic, Ivan B.
2015-01-01
Forward error correction is one of the key technologies enabling next-generation high-speed fiber-optical communications. Quasi-cyclic (QC) low-density parity-check (LDPC) codes have been considered one of the most promising candidates due to their large coding gain and low implementation complexity. In this paper, we present our designed QC-LDPC code with girth 10 and 25% overhead based on pairwise balanced design. By FPGA-based emulation, we demonstrate that the 5-bit soft-decision LDPC decoder can achieve an 11.8 dB net coding gain with no error floor at a BER of 10-15, without using any outer code or post-processing method. We believe that the proposed single QC-LDPC code is a promising solution for 400 Gb/s optical communication systems and beyond.
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Zhou, Guang-xiang; Gao, Wen-chun; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-01-01
According to the requirements of the continuing development of optical transmission systems, a novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes based on a subgroup of the finite field multiplicative group is proposed. This construction method effectively avoids the girth-4 phenomenon and has advantages such as simpler construction, easier implementation, lower encoding/decoding complexity, better girth properties and more flexible adjustment of code length and code rate. The simulation results show that the error-correction performance of the QC-LDPC(3 780, 3 540) code with a code rate of 93.7% constructed by the proposed method is excellent: at a bit error rate (BER) of 10-7, its net coding gain is 0.3 dB, 0.55 dB, 1.4 dB and 1.98 dB higher than those of the QC-LDPC(5 334, 4 962) code constructed by the method based on the inverse-element characteristics of the finite field multiplicative group, the SCG-LDPC(3 969, 3 720) code constructed by the systematically constructed Gallager (SCG) random construction method, the LDPC(32 640, 30 592) code in ITU-T G.975.1, and the classic RS(255, 239) code in ITU-T G.975 which is widely used in optical transmission systems, respectively. Therefore, the constructed QC-LDPC(3 780, 3 540) code is well suited to optical transmission systems.
Efficient Signal, Code, and Receiver Designs for MIMO Communication Systems
2003-06-01
Concatenation of a tilted-QAM inner code with an LDPC outer code with a two-component iterative soft-decision decoder. ... Coding for AWGN channels has long been studied. There are well-known soft-decision codes like the turbo codes and LDPC codes that can approach capacity to ... bits) low-density parity-check (LDPC) code. 2. The coded bits are randomly interleaved so that nearby bits go through different sub-channels, and are
Optical LDPC decoders for beyond 100 Gbits/s optical transmission.
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2009-05-01
We present an optical low-density parity-check (LDPC) decoder suitable for implementation above 100 Gbits/s, which provides large coding gains when based on large-girth LDPC codes. We show that a basic building block, the probabilities-multiplier circuit, can be implemented using a Mach-Zehnder interferometer, and we propose a corresponding probabilistic-domain sum-product algorithm (SPA). We perform simulations of a fully parallel implementation employing girth-10 LDPC codes and the proposed SPA. The girth-10 LDPC(24015,19212) code of rate 0.8 outperforms the BCH(128,113)xBCH(256,239) turbo-product code of rate 0.82 by 0.91 dB (for binary phase-shift keying at 100 Gbits/s and a bit error rate of 10(-9)), and provides a net effective coding gain of 10.09 dB.
Wu, Menglong; Han, Dahai; Zhang, Xiang; Zhang, Feng; Zhang, Min; Yue, Guangxin
2014-03-10
We have implemented a modified Low-Density Parity-Check (LDPC) codec algorithm in an ultraviolet (UV) communication system. Simulations are conducted with measured parameters to evaluate the LDPC-based UV system performance. Moreover, LDPC (960, 480) and RS (18, 10) codes are implemented and tested on a non-line-of-sight (NLOS) UV test bed. The experimental results are in agreement with the simulations and suggest that, at the given power and a 10(-3) bit error rate (BER), the average communication distance increases by 32% with the RS code and by 78% with the LDPC code, compared with an uncoded system.
Throughput Optimization Via Adaptive MIMO Communications
2006-05-30
End-to-end MATLAB packet simulation platform. * Low-density parity-check code (LDPCC). * Field trials with Silvus DSP MIMO testbed. * High mobility ... incorporate advanced LDPC (low-density parity-check) codes. Realizing that the power of LDPC codes comes at the price of decoder complexity, we also ... Channel coding: binary convolutional code or LDPC; packet length: 0 to 2^16-1 bytes; coding rate: 1/2, 2/3, 3/4, 5/6; MIMO channel training length: 0 to 4 symbols.
NASA Astrophysics Data System (ADS)
He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin
2015-09-01
In this paper, a Golay complementary training sequence (TS)-based symbol synchronization scheme is proposed and experimentally demonstrated in a multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system with a variable-rate low-density parity-check (LDPC) code. Meanwhile, the coding gain and spectral efficiency of the variable-rate LDPC-coded MB-OFDM UWBoF system are investigated. By utilizing the non-periodic autocorrelation property of the Golay complementary pair, the start point of the LDPC-coded MB-OFDM UWB signal can be estimated accurately. After 100 km standard single-mode fiber (SSMF) transmission, at a bit error rate of 1×10-3, the experimental results show that short-block-length 64QAM-LDPC coding provides a coding gain of 4.5 dB, 3.8 dB and 2.9 dB for code rates of 62.5%, 75% and 87.5%, respectively.
Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error pb. It is observed that C-MIMO performs more efficiently when the targeted pb is smaller. Also the lower encoding rate for LDPC code offers better error characteristics. PMID:22163732
High-throughput GPU-based LDPC decoding
NASA Astrophysics Data System (ADS)
Chang, Yang-Lang; Chang, Cheng-Chun; Huang, Min-Yu; Huang, Bormin
2010-08-01
Low-density parity-check (LDPC) codes are linear block codes known to approach the Shannon limit via the iterative sum-product algorithm. LDPC codes have been adopted in most current communication systems such as DVB-S2, WiMAX, Wi-Fi and 10GBASE-T. The need for reliable and flexible communication links for a wide variety of communication standards and configurations has inspired demand for high-performance, flexible computing. Accordingly, finding a fast and reconfigurable development platform for designing high-throughput LDPC decoders has become important, especially for rapidly changing communication standards and configurations. In this paper, a new graphics-processing-unit (GPU) LDPC decoding platform with asynchronous data transfer is proposed to realize this practical implementation. Experimental results showed that the proposed GPU-based decoder achieved a 271x speedup compared to its CPU-based counterpart. It can serve as a high-throughput LDPC decoder.
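The throughput in such designs comes from data parallelism. As a rough illustration of the principle (our own sketch using NumPy array operations as a stand-in for GPU kernels; the toy code and batch are assumptions, not the authors' implementation), decoding a whole batch of words with Gallager's hard-decision bit-flipping rule turns every step into one large array operation:

```python
import numpy as np

def circ(shift, Z=7):
    return np.roll(np.eye(Z, dtype=int), shift, axis=1)

# Assumed toy (2,3)-regular QC-LDPC matrix, chosen to be free of 4-cycles.
H = np.block([[circ(0), circ(1), circ(3)],
              [circ(0), circ(2), circ(6)]])          # shape (14, 21)

def bit_flip_decode_batch(H, R, max_iter=20):
    """Hard-decision bit-flipping over a batch: R is (batch, n) of 0/1 words.
    Every step is a single matrix operation over the whole batch, which is
    exactly the data parallelism a GPU decoder exploits."""
    R = R.copy()
    for _ in range(max_iter):
        synd = R @ H.T % 2                           # unsatisfied checks
        if not synd.any():
            break                                    # all words decoded
        votes = synd @ H                             # per-bit complaint counts
        worst = votes.max(axis=1, keepdims=True)
        R ^= ((votes == worst) & (worst > 0)).astype(int)   # flip worst bits
    return R

R = np.zeros((5, 21), dtype=int)
R[np.arange(5), [0, 4, 9, 13, 20]] = 1               # one bit error per word
print(bit_flip_decode_batch(H, R).sum())             # 0: all words corrected
```

A production decoder exchanges soft messages (sum-product or min-sum) rather than flipping hard bits, but the batched array structure is the same.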
Low-Density Parity-Check (LDPC) Codes Constructed from Protographs
NASA Astrophysics Data System (ADS)
Thorpe, J.
2003-08-01
We introduce a new class of low-density parity-check (LDPC) codes constructed from a template called a protograph. The protograph serves as a blueprint for constructing LDPC codes of arbitrary size whose performance can be predicted by analyzing the protograph. We apply standard density evolution techniques to predict the performance of large protograph codes. Finally, we use a randomized search algorithm to find good protographs.
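As a hedged companion to the density-evolution analysis described here (our own toy implementation; the 1x2 base matrix below encodes the (3,6)-regular ensemble and is an assumption, not one of the article's protographs), the following sketch tracks per-edge erasure probabilities on a protograph over the binary erasure channel and bisects for the decoding threshold:

```python
import numpy as np

def protograph_bec_threshold(B, tol=1e-9, max_iter=10_000):
    """Bisect for the BEC erasure threshold of a protograph ensemble.
    B[c][v] counts the parallel edges between check c and variable v."""
    B = np.asarray(B)
    edges = [(c, v, k) for c in range(B.shape[0])
             for v in range(B.shape[1]) for k in range(B[c, v])]

    def converges(eps):
        x = {e: eps for e in edges}        # variable->check erasure probs
        for _ in range(max_iter):
            # Check->variable: erased unless all other inputs are known.
            y = {e: 1.0 - np.prod([1.0 - x[f] for f in edges
                                   if f[0] == e[0] and f != e])
                 for e in edges}
            # Variable->check: erased iff channel and all other inputs erased.
            x_new = {e: eps * np.prod([y[f] for f in edges
                                       if f[1] == e[1] and f != e])
                     for e in edges}
            if max(x_new.values()) < tol:
                return True                # erasures die out
            if max(abs(x_new[e] - x[e]) for e in edges) < tol / 10:
                return False               # stuck at a nonzero fixed point
            x = x_new
        return False

    lo, hi = 0.0, 1.0
    while hi - lo > 1e-4:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if converges(mid) else (lo, mid)
    return (lo + hi) / 2

# (3,6)-regular ensemble as a protograph: one check, two variables, 3 edges each.
print(round(protograph_bec_threshold([[3, 3]]), 3))   # ~0.429 (known ~0.4294)
```

The randomized search the abstract mentions would wrap a routine like this (or its AWGN analogue) in a loop over candidate protographs, keeping the one with the best threshold.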
A Simulation Testbed for Adaptive Modulation and Coding in Airborne Telemetry
2014-05-29
its modulation waveforms and LDPC for the FEC codes. It also uses several sets of published telemetry channel sounding data as its channel models. Within the context ... low-density parity-check (LDPC) codes with tunable code rates, and both static and dynamic telemetry channel models are included. In an effort to maximize the
Spatially coupled low-density parity-check error correction for holographic data storage
NASA Astrophysics Data System (ADS)
Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro
2017-09-01
Spatially coupled low-density parity-check (SC-LDPC) codes were considered for holographic data storage, and the superiority of SC-LDPC was studied by simulation. The simulations show that the performance of SC-LDPC depends on the lifting number; when the lifting number is over 100, SC-LDPC shows better error correctability than irregular LDPC. SC-LDPC is applied to the 5:9 modulation code, which is one of the differential codes. In simulation, the error-free point is near 2.8 dB, and input error rates above 10-1 can be corrected. Based on these simulation results, this error correction code was applied to actual holographic data storage test equipment. The results showed that an error rate of 8 × 10-2 can be corrected; the code works effectively and shows good error correctability.
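For readers unfamiliar with how spatial coupling reshapes an LDPC ensemble, here is a minimal hedged sketch (our own toy example based on the standard edge-spreading construction; the component split and coupling length are assumptions) that builds the base matrix of a coupled chain, which is then expanded by the lifting number discussed above:

```python
import numpy as np

def couple_base_matrix(B_splits, L):
    """Spatial coupling by edge spreading: B_0..B_w are a split of an
    uncoupled base matrix B = sum(B_splits); copies of them are laid out
    along a diagonal band of L coupled positions (w extra rows terminate
    the chain)."""
    w = len(B_splits) - 1                        # coupling width
    m, n = np.asarray(B_splits[0]).shape
    H = np.zeros(((L + w) * m, L * n), dtype=int)
    for pos in range(L):
        for k, Bk in enumerate(B_splits):
            H[(pos + k) * m:(pos + k + 1) * m, pos * n:(pos + 1) * n] = Bk
    return H

# Assumed example: split the (3,6)-regular base matrix [[3, 3]] into three
# single-edge components and couple L = 5 positions.
B_splits = [np.array([[1, 1]])] * 3
H = couple_base_matrix(B_splits, L=5)
print(H.shape)   # (7, 10): termination adds w = 2 extra check rows
```

Each entry of the coupled base matrix is then lifted into a permutation or circulant block; the finding that performance improves with lifting numbers over 100 reflects how larger liftings suppress small graph defects.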
Parallel Subspace Subcodes of Reed-Solomon Codes for Magnetic Recording Channels
ERIC Educational Resources Information Center
Wang, Han
2010-01-01
Read channel architectures based on a single low-density parity-check (LDPC) code are being considered for the next generation of hard disk drives. However, LDPC-only solutions suffer from the error floor problem, which may compromise reliability, if not handled properly. Concatenated architectures using an LDPC code plus a Reed-Solomon (RS) code…
Experimental study of non-binary LDPC coding for long-haul coherent optical QPSK transmissions.
Zhang, Shaoliang; Arabaci, Murat; Yaman, Fatih; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Inada, Yoshihisa; Ogata, Takaaki; Aoki, Yasuhiro
2011-09-26
The performance of a rate-0.8 4-ary LDPC code has been studied in a 50 GHz-spaced 40 Gb/s DWDM system with PDM-QPSK modulation. A net effective coding gain of 10 dB is obtained at a BER of 10(-6). With the aid of time-interleaving polarization multiplexing and MAP detection, 10,560 km transmission over legacy dispersion-managed fiber is achieved without any countable errors. The proposed nonbinary quasi-cyclic LDPC code achieves an uncoded BER threshold at 4×10(-2). Potential issues such as phase ambiguity and coding length are also discussed in the context of implementing LDPC codes in current coherent optical systems.
Evaluation of large girth LDPC codes for PMD compensation by turbo equalization.
Minkov, Lyubomir L; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Kueppers, Franko
2008-08-18
Large-girth quasi-cyclic LDPC codes have been experimentally evaluated for use in PMD compensation by turbo equalization for a 10 Gb/s NRZ optical transmission system, observing one sample per bit. The net effective coding gain improvement of the girth-10, rate-0.906 code of length 11936 over the maximum a posteriori probability (MAP) detector for a differential group delay of 125 ps is 6.25 dB at a BER of 10(-6). The girth-10 LDPC code of rate 0.8 outperforms the girth-10 code of rate 0.906 by 2.75 dB, providing a net effective coding gain improvement of 9 dB at the same BER. It is experimentally determined that girth-10 LDPC codes of length around 15000 approach the channel capacity limit within 1.25 dB.
Advanced GF(32) nonbinary LDPC coded modulation with non-uniform 9-QAM outperforming star 8-QAM.
Liu, Tao; Lin, Changyu; Djordjevic, Ivan B
2016-06-27
In this paper, we first describe a 9-symbol non-uniform signaling scheme based on a Huffman code, in which different symbols are transmitted with different probabilities. By using the Huffman procedure, a prefix code is designed to approach the optimal performance. Then, we introduce an algorithm to determine the optimal signal constellation sets for our proposed non-uniform scheme with the criterion of maximizing the constellation figure of merit (CFM). The proposed non-uniform polarization-multiplexed 9-QAM signaling scheme has the same spectral efficiency as conventional 8-QAM. Additionally, we propose a specially designed GF(32) nonbinary quasi-cyclic LDPC code for the coded modulation system based on the 9-QAM non-uniform scheme. Further, we study the efficiency of our proposed non-uniform 9-QAM combined with nonbinary LDPC coding, and demonstrate by Monte Carlo simulation that the proposed GF(32) nonbinary LDPC-coded 9-QAM scheme outperforms nonbinary LDPC-coded uniform 8-QAM by at least 0.8 dB.
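The Huffman-based shaping works because parsing uniform random bits with a prefix code emits symbol k with probability 2^(-l_k), where l_k is the codeword length. A hedged sketch (our own assumed 9-word prefix code with lengths meeting the Kraft equality; the paper's actual code and probabilities may differ):

```python
import numpy as np

# Assumed prefix code over 9 symbols: seven 3-bit words and two 4-bit words.
# Kraft sum 7/8 + 2/16 = 1, so parsing i.i.d. fair bits emits symbol k with
# probability 2^-len(word_k): the non-uniform transmission probabilities.
CODEBOOK = {'000': 0, '001': 1, '010': 2, '011': 3,
            '100': 4, '101': 5, '110': 6, '1110': 7, '1111': 8}

def bits_to_symbols(bits):
    """Parse a bit string into 9-ary symbols with the prefix code."""
    out, word = [], ''
    for b in bits:
        word += b
        if word in CODEBOOK:
            out.append(CODEBOOK[word])
            word = ''
    return out

rng = np.random.default_rng(0)
bits = ''.join(rng.choice(['0', '1'], size=100_000))
syms = bits_to_symbols(bits)
freq = np.bincount(syms, minlength=9) / len(syms)
print(np.round(freq, 3))   # approx. seven symbols at 0.125, two at 0.0625
```

Assigning the low-probability symbols to the outer, high-energy constellation points is what lowers the average transmit power at the same spectral efficiency as uniform 8-QAM.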
Coded Cooperation for Multiway Relaying in Wireless Sensor Networks †
Si, Zhongwei; Ma, Junyang; Thobaben, Ragnar
2015-01-01
Wireless sensor networks have been considered as an enabling technology for constructing smart cities. One important feature of wireless sensor networks is that the sensor nodes collaborate in some manner for communications. In this manuscript, we focus on the model of multiway relaying with full data exchange where each user wants to transmit and receive data to and from all other users in the network. We derive the capacity region for this specific model and propose a coding strategy through coset encoding. To obtain good performance with practical codes, we choose spatially-coupled LDPC (SC-LDPC) codes for the coded cooperation. In particular, for the message broadcasting from the relay, we construct multi-edge-type (MET) SC-LDPC codes by repeatedly applying coset encoding. Due to the capacity-achieving property of the SC-LDPC codes, we prove that the capacity region can theoretically be achieved by the proposed MET SC-LDPC codes. Numerical results with finite node degrees are provided, which show that the achievable rates approach the boundary of the capacity region in both binary erasure channels and additive white Gaussian channels. PMID:26131675
Batshon, Hussam G; Djordjevic, Ivan; Schmidt, Ted
2010-09-13
We propose a subcarrier-multiplexed four-dimensional LDPC bit-interleaved coded modulation scheme that is capable of achieving beyond 480 Gb/s single-channel transmission rates over optical channels. The subcarrier-multiplexed four-dimensional LDPC coded modulation scheme outperforms the corresponding dual-polarization schemes by up to 4.6 dB in OSNR at a BER of 10(-8).
Self-Configuration and Localization in Ad Hoc Wireless Sensor Networks
2010-08-31
Goddard I. SUMMARY OF CONTRIBUTIONS: We explored the error mechanisms of iterative decoding of low-density parity-check (LDPC) codes. This work has resulted ... important problems in the area of channel coding, as their unpredictable behavior has impeded the deployment of LDPC codes in many real-world applications. We ... tree-based decoders of LDPC codes, including the extrinsic tree decoder, and an investigation into their performance and bounding capabilities [5], [6
FPGA implementation of low complexity LDPC iterative decoder
NASA Astrophysics Data System (ADS)
Verma, Shivani; Sharma, Sanjay
2016-07-01
Low-density parity-check (LDPC) codes, proposed by Gallager, emerged as a class of codes which can yield very good performance on the additive white Gaussian noise channel as well as on the binary symmetric channel. LDPC codes have gained importance due to their capacity-achieving property and excellent performance in noisy channels. The belief propagation (BP) algorithm and its approximations, most notably min-sum, are popular iterative decoding algorithms used for LDPC and turbo codes. The trade-off between hardware complexity and decoding throughput is a critical factor in the implementation of a practical decoder. This article presents an introduction to LDPC codes and their various decoding algorithms, followed by the realisation of an LDPC decoder using a simplified message-passing algorithm and a partially parallel decoder architecture. The simplified message-passing algorithm is proposed to trade off decoding complexity against decoder performance; it greatly reduces the routing and check-node complexity of the decoder. The partially parallel decoder architecture offers high speed and reduced complexity. The improved design of the decoder achieves a maximum symbol throughput of 92.95 Mbps and a maximum of 18 decoding iterations. The article presents the implementation of a 9216-bit, rate-1/2, (3, 6) LDPC decoder on the Xilinx XC3D3400A device from the Spartan-3A DSP family.
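To make the trade-off concrete, here is a compact hedged sketch of the min-sum rule this article builds on (our own reference implementation over an assumed toy code, not the article's simplified algorithm or its hardware architecture):

```python
import numpy as np

def min_sum_decode(H, llr, max_iter=50):
    """Min-sum LDPC decoding: the check-node update replaces the sum-product
    tanh rule by sign and minimum operations, which is what makes the
    hardware cheap. H is a small dense 0/1 parity-check matrix."""
    m, n = H.shape
    M = np.where(H, llr, 0.0)                    # variable->check messages
    for _ in range(max_iter):
        E = np.zeros_like(M)                     # check->variable messages
        for i in range(m):
            idx = np.flatnonzero(H[i])
            msgs = M[i, idx]
            signs = np.sign(msgs) + (msgs == 0)  # treat 0 as +1
            for t, j in enumerate(idx):
                E[i, j] = (np.prod(np.delete(signs, t))
                           * np.min(np.abs(np.delete(msgs, t))))
        total = llr + E.sum(axis=0)              # posterior LLRs
        hard = (total < 0).astype(int)
        if not ((H @ hard) % 2).any():           # all checks satisfied
            return hard
        M = np.where(H, total - E, 0.0)          # exclude own check's message
    return hard

# Assumed toy example: a (7,4) Hamming matrix with one unreliable bit.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([2.1, -0.4, 1.8, 1.2, 1.0, 1.5, 0.9])
print(min_sum_decode(H, llr))   # all zeros: the weak bit is corrected
```

Common simplified variants add a normalization or offset term to the minimum; the article's simplified message-passing algorithm pursues the same complexity/performance trade-off.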
Scalable video transmission over Rayleigh fading channels using LDPC codes
NASA Astrophysics Data System (ADS)
Bansal, Manu; Kondi, Lisimachos P.
2005-03-01
In this paper, we investigate the important problem of efficiently utilizing the available resources for video transmission over wireless channels, while maintaining good decoded video quality and resilience to channel impairments. Our system consists of a video codec based on the 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity-check (LDPC) codes for channel error protection. The first method uses the serial concatenation of a constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes; a cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use a product code structure consisting of a constant-rate LDPC/CRC code across the rows of the blocks of source data and an erasure-correcting systematic Reed-Solomon (RS) code as the column code. In both schemes, we use fixed-length source packets protected with unequal forward error correction coding, ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. A rate-distortion optimization algorithm is developed and carried out for the selection of source and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions, and both proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform more conventional schemes such as those employing RCPC/CRC.
Error floor behavior study of LDPC codes for concatenated codes design
NASA Astrophysics Data System (ADS)
Chen, Weigang; Yin, Liuguo; Lu, Jianhua
2007-11-01
The error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is statistically studied, with experimental results obtained on a hardware evaluation platform. The results present the distribution of the residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small when using the quantized sum-product (SP) algorithm. Therefore, an LDPC code may serve as the inner code in a concatenated coding system with a high-code-rate outer code, and an ultra-low error floor can thus be achieved. This conclusion is also verified by the experimental results.
Non-binary LDPC-coded modulation for high-speed optical metro networks with backpropagation
NASA Astrophysics Data System (ADS)
Arabaci, Murat; Djordjevic, Ivan B.; Saunders, Ross; Marcoccia, Roberto M.
2010-01-01
To simultaneously mitigate the linear and nonlinear channel impairments in high-speed optical communications, we propose the use of non-binary low-density parity-check-coded modulation in combination with a coarse backpropagation method. By employing backpropagation, we reduce the memory in the channel and in return obtain significant reductions in the complexity of the channel equalizer, which is exponentially proportional to the channel memory. We then compensate for the remaining channel distortions using forward error correction based on non-binary LDPC codes. We propose the non-binary LDPC-coded modulation scheme because, compared to the bit-interleaved binary LDPC-coded modulation scheme employing turbo equalization, it lowers the computational complexity and latency of the overall system while providing impressively larger coding gains.
Crosstalk eliminating and low-density parity-check codes for photochromic dual-wavelength storage
NASA Astrophysics Data System (ADS)
Wang, Meicong; Xiong, Jianping; Jian, Jiqi; Jia, Huibo
2005-01-01
Multi-wavelength storage is an approach to increase the memory density with the problem of crosstalk to be deal with. We apply Low Density Parity Check (LDPC) codes as error-correcting codes in photochromic dual-wavelength optical storage based on the investigation of LDPC codes in optical data storage. A proper method is applied to reduce the crosstalk and simulation results show that this operation is useful to improve Bit Error Rate (BER) performance. At the same time we can conclude that LDPC codes outperform RS codes in crosstalk channel.
Entanglement-assisted quantum quasicyclic low-density parity-check codes
NASA Astrophysics Data System (ADS)
Hsieh, Min-Hsiu; Brun, Todd A.; Devetak, Igor
2009-03-01
We investigate the construction of quantum low-density parity-check (LDPC) codes from classical quasicyclic (QC) LDPC codes with girth greater than or equal to 6. We have shown that the classical codes in the generalized Calderbank-Skor-Steane construction do not need to satisfy the dual-containing property as long as preshared entanglement is available to both sender and receiver. We can use this to avoid the many four cycles which typically arise in dual-containing LDPC codes. The advantage of such quantum codes comes from the use of efficient decoding algorithms such as sum-product algorithm (SPA). It is well known that in the SPA, cycles of length 4 make successive decoding iterations highly correlated and hence limit the decoding performance. We show the principle of constructing quantum QC-LDPC codes which require only small amounts of initial shared entanglement.
Iterative decoding of SOVA and LDPC product code for bit-patterned media recoding
NASA Astrophysics Data System (ADS)
Jeong, Seongkwon; Lee, Jaejin
2018-05-01
The demand for high-density storage systems has increased due to the exponential growth of data. Bit-patterned media recording (BPMR) is one of the promising technologies to achieve the density of 1Tbit/in2 and higher. To increase the areal density in BPMR, the spacing between islands needs to be reduced, yet this aggravates inter-symbol interference and inter-track interference and degrades the bit error rate performance. In this paper, we propose a decision feedback scheme using low-density parity check (LDPC) product code for BPMR. This scheme can improve the decoding performance using an iterative approach with extrinsic information and log-likelihood ratio value between iterative soft output Viterbi algorithm and LDPC product code. Simulation results show that the proposed LDPC product code can offer 1.8dB and 2.3dB gains over the one LDPC code at the density of 2.5 and 3 Tb/in2, respectively, when bit error rate is 10-6.
Product code optimization for determinate state LDPC decoding in robust image transmission.
Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G
2006-08-01
We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.
An efficient decoding for low density parity check codes
NASA Astrophysics Data System (ADS)
Zhao, Ling; Zhang, Xiaolin; Zhu, Manjie
2009-12-01
Low density parity check (LDPC) codes are a class of forward-error-correction codes. They are among the best-known codes capable of achieving low bit error rates (BER) approaching Shannon's capacity limit. Recently, LDPC codes have been adopted by the European Digital Video Broadcasting (DVB-S2) standard, and have also been proposed for the emerging IEEE 802.16 fixed and mobile broadband wireless-access standard. The consultative committee for space data system (CCSDS) has also recommended using LDPC codes in the deep space communications and near-earth communications. It is obvious that LDPC codes will be widely used in wired and wireless communication, magnetic recording, optical networking, DVB, and other fields in the near future. Efficient hardware implementation of LDPC codes is of great interest since LDPC codes are being considered for a wide range of applications. This paper presents an efficient partially parallel decoder architecture suited for quasi-cyclic (QC) LDPC codes using Belief propagation algorithm for decoding. Algorithmic transformation and architectural level optimization are incorporated to reduce the critical path. First, analyze the check matrix of LDPC code, to find out the relationship between the row weight and the column weight. And then, the sharing level of the check node updating units (CNU) and the variable node updating units (VNU) are determined according to the relationship. After that, rearrange the CNU and the VNU, and divide them into several smaller parts, with the help of some assistant logic circuit, these smaller parts can be grouped into CNU during the check node update processing and grouped into VNU during the variable node update processing. These smaller parts are called node update kernel units (NKU) and the assistant logic circuit are called node update auxiliary unit (NAU). With NAUs' help, the two steps of iteration operation are completed by NKUs, which brings in great hardware resource reduction. Meanwhile, efficient techniques have been developed to reduce the computation delay of the node processing units and to minimize hardware overhead for parallel processing. This method may be applied not only to regular LDPC codes, but also to the irregular ones. Based on the proposed architectures, a (7493, 6096) irregular QC-LDPC code decoder is described using verilog hardware design language and implemented on Altera field programmable gate array (FPGA) StratixII EP2S130. The implementation results show that over 20% of logic core size can be saved than conventional partially parallel decoder architectures without any performance degradation. If the decoding clock is 100MHz, the proposed decoder can achieve a maximum (source data) decoding throughput of 133 Mb/s at 18 iterations.
NASA Technical Reports Server (NTRS)
Ni, Jianjun David
2011-01-01
This presentation briefly discusses a research effort on mitigation techniques of pulsed radio frequency interference (RFI) on a Low-Density-Parity-Check (LDPC) code. This problem is of considerable interest in the context of providing reliable communications to the space vehicle which might suffer severe degradation due to pulsed RFI sources such as large radars. The LDPC code is one of modern forward-error-correction (FEC) codes which have the decoding performance to approach the Shannon Limit. The LDPC code studied here is the AR4JA (2048, 1024) code recommended by the Consultative Committee for Space Data Systems (CCSDS) and it has been chosen for some spacecraft design. Even though this code is designed as a powerful FEC code in the additive white Gaussian noise channel, simulation data and test results show that the performance of this LDPC decoder is severely degraded when exposed to the pulsed RFI specified in the spacecraft s transponder specifications. An analysis work (through modeling and simulation) has been conducted to evaluate the impact of the pulsed RFI and a few implemental techniques have been investigated to mitigate the pulsed RFI impact by reshuffling the soft-decision-data available at the input of the LDPC decoder. The simulation results show that the LDPC decoding performance of codeword error rate (CWER) under pulsed RFI can be improved up to four orders of magnitude through a simple soft-decision-data reshuffle scheme. This study reveals that an error floor of LDPC decoding performance appears around CWER=1E-4 when the proposed technique is applied to mitigate the pulsed RFI impact. The mechanism causing this error floor remains unknown, further investigation is necessary.
Construction of Protograph LDPC Codes with Linear Minimum Distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
On the reduced-complexity of LDPC decoders for ultra-high-speed optical transmission.
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2010-10-25
We propose two reduced-complexity (RC) LDPC decoders, which can be used in combination with large-girth LDPC codes to enable ultra-high-speed serial optical transmission. We show that optimally attenuated RC min-sum sum algorithm performs only 0.46 dB (at BER of 10(-9)) worse than conventional sum-product algorithm, while having lower storage memory requirements and much lower latency. We further study the use of RC LDPC decoding algorithms in multilevel coded modulation with coherent detection and show that with RC decoding algorithms we can achieve the net coding gain larger than 11 dB at BERs below 10(-9).
The application of LDPC code in MIMO-OFDM system
NASA Astrophysics Data System (ADS)
Liu, Ruian; Zeng, Beibei; Chen, Tingting; Liu, Nan; Yin, Ninghao
2018-03-01
The combination of MIMO and OFDM technology has become one of the key technologies of the fourth generation mobile communication., which can overcome the frequency selective fading of wireless channel, increase the system capacity and improve the frequency utilization. Error correcting coding introduced into the system can further improve its performance. LDPC (low density parity check) code is a kind of error correcting code which can improve system reliability and anti-interference ability, and the decoding is simple and easy to operate. This paper mainly discusses the application of LDPC code in MIMO-OFDM system.
Simultaneous chromatic dispersion and PMD compensation by using coded-OFDM and girth-10 LDPC codes.
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2008-07-07
Low-density parity-check (LDPC)-coded orthogonal frequency division multiplexing (OFDM) is studied as an efficient coded modulation scheme suitable for simultaneous chromatic dispersion and polarization mode dispersion (PMD) compensation. We show that, for aggregate rate of 10 Gb/s, accumulated dispersion over 6500 km of SMF and differential group delay of 100 ps can be simultaneously compensated with penalty within 1.5 dB (with respect to the back-to-back configuration) when training sequence based channel estimation and girth-10 LDPC codes of rate 0.8 are employed.
Protograph LDPC Codes Over Burst Erasure Channels
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
In this paper we design high rate protograph based LDPC codes suitable for binary erasure channels. To simplify the encoder and decoder implementation for high data rate transmission, the structure of codes are based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes with an iterative decoding threshold that approaches capacity of binary erasure channels. The other class is designed for short block sizes based on maximizing minimum stopping set size. For high code rates and short blocks the second class outperforms the first class.
NASA Astrophysics Data System (ADS)
He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin; Su, Jinshu
2015-01-01
To improve the transmission performance of multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband (UWB) over optical fiber, a pre-coding scheme based on low-density parity-check (LDPC) is adopted and experimentally demonstrated in the intensity-modulation and direct-detection MB-OFDM UWB over fiber system. Meanwhile, a symbol synchronization and pilot-aided channel estimation scheme is implemented on the receiver of the MB-OFDM UWB over fiber system. The experimental results show that the LDPC pre-coding scheme can work effectively in the MB-OFDM UWB over fiber system. After 70 km standard single-mode fiber (SSMF) transmission, at the bit error rate of 1 × 10-3, the receiver sensitivities are improved about 4 dB when the LDPC code rate is 75%.
Construction of type-II QC-LDPC codes with fast encoding based on perfect cyclic difference sets
NASA Astrophysics Data System (ADS)
Li, Ling-xiang; Li, Hai-bing; Li, Ji-bi; Jiang, Hua
2017-09-01
In view of the problems that the encoding complexity of quasi-cyclic low-density parity-check (QC-LDPC) codes is high and the minimum distance is not large enough which leads to the degradation of the error-correction performance, the new irregular type-II QC-LDPC codes based on perfect cyclic difference sets (CDSs) are constructed. The parity check matrices of these type-II QC-LDPC codes consist of the zero matrices with weight of 0, the circulant permutation matrices (CPMs) with weight of 1 and the circulant matrices with weight of 2 (W2CMs). The introduction of W2CMs in parity check matrices makes it possible to achieve the larger minimum distance which can improve the error- correction performance of the codes. The Tanner graphs of these codes have no girth-4, thus they have the excellent decoding convergence characteristics. In addition, because the parity check matrices have the quasi-dual diagonal structure, the fast encoding algorithm can reduce the encoding complexity effectively. Simulation results show that the new type-II QC-LDPC codes can achieve a more excellent error-correction performance and have no error floor phenomenon over the additive white Gaussian noise (AWGN) channel with sum-product algorithm (SPA) iterative decoding.
Unitals and ovals of symmetric block designs in LDPC and space-time coding
NASA Astrophysics Data System (ADS)
Andriamanalimanana, Bruno R.
2004-08-01
An approach to the design of LDPC (low density parity check) error-correction and space-time modulation codes involves starting with known mathematical and combinatorial structures, and deriving code properties from structure properties. This paper reports on an investigation of unital and oval configurations within generic symmetric combinatorial designs, not just classical projective planes, as the underlying structure for classes of space-time LDPC outer codes. Of particular interest are the encoding and iterative (sum-product) decoding gains that these codes may provide. Various small-length cases have been numerically implemented in Java and Matlab for a number of channel models.
High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution
NASA Astrophysics Data System (ADS)
Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin
2016-01-01
Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The qua-sicyclic construction method can extend short block length codes and further eliminate the shortest cycle. In this paper, by combining the PEG algorithm and qua-si-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice recon-ciliation based on multilevel coding/multistage decoding with an efficiency of 93.7%.
A Scalable Architecture of a Structured LDPC Decoder
NASA Technical Reports Server (NTRS)
Lee, Jason Kwok-San; Lee, Benjamin; Thorpe, Jeremy; Andrews, Kenneth; Dolinar, Sam; Hamkins, Jon
2004-01-01
We present a scalable decoding architecture for a certain class of structured LDPC codes. The codes are designed using a small (n,r) protograph that is replicated Z times to produce a decoding graph for a (Z x n, Z x r) code. Using this architecture, we have implemented a decoder for a (4096,2048) LDPC code on a Xilinx Virtex-II 2000 FPGA, and achieved decoding speeds of 31 Mbps with 10 fixed iterations. The implemented message-passing algorithm uses an optimized 3-bit non-uniform quantizer that operates with 0.2dB implementation loss relative to a floating point decoder.
NASA Technical Reports Server (NTRS)
Cheng, Michael K.; Lyubarev, Mark; Nakashima, Michael A.; Andrews, Kenneth S.; Lee, Dennis
2008-01-01
Low-density parity-check (LDPC) codes are the state-of-the-art in forward error correction (FEC) technology that exhibits capacity approaching performance. The Jet Propulsion Laboratory (JPL) has designed a family of LDPC codes that are similar in structure and therefore, leads to a single decoder implementation. The Accumulate-Repeat-by-4-Jagged- Accumulate (AR4JA) code design offers a family of codes with rates 1/2, 2/3, 4/5 and lengths 1024, 4096, 16384 information bits. Performance is less than one dB from capacity for all combinations.Integrating a stand-alone LDPC decoder with a commercial-off-the-shelf (COTS) receiver faces additional challenges than building a single receiver-decoder unit from scratch. In this work, we outline the issues and show that these additional challenges can be over-come by simple solutions. To demonstrate that an LDPC decoder can be made to work seamlessly with a COTS receiver, we interface an AR4JA LDPC decoder developed on a field-programmable gate array (FPGA) with a modern high data rate receiver and mea- sure the combined receiver-decoder performance. Through optimizations that include an improved frame synchronizer and different soft-symbol scaling algorithms, we show that a combined implementation loss of less than one dB is possible and therefore, most of the coding gain evidence in theory can also be obtained in practice. Our techniques can benefit any modem that utilizes an advanced FEC code.
Constructing LDPC Codes from Loop-Free Encoding Modules
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher; Thorpe, Jeremy; Andrews, Kenneth
2009-01-01
A method of constructing certain low-density parity-check (LDPC) codes by use of relatively simple loop-free coding modules has been developed. The subclasses of LDPC codes to which the method applies includes accumulate-repeat-accumulate (ARA) codes, accumulate-repeat-check-accumulate codes, and the codes described in Accumulate-Repeat-Accumulate-Accumulate Codes (NPO-41305), NASA Tech Briefs, Vol. 31, No. 9 (September 2007), page 90. All of the affected codes can be characterized as serial/parallel (hybrid) concatenations of such relatively simple modules as accumulators, repetition codes, differentiators, and punctured single-parity check codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. These codes can also be characterized as hybrid turbolike codes that have projected graph or protograph representations (for example see figure); these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The present method comprises two related submethods for constructing LDPC codes from simple loop-free modules with circulant permutations. The first submethod is an iterative encoding method based on the erasure-decoding algorithm. The computations required by this method are well organized because they involve a parity-check matrix having a block-circulant structure. The second submethod involves the use of block-circulant generator matrices. The encoders of this method are very similar to those of recursive convolutional codes. Some encoders according to this second submethod have been implemented in a small field-programmable gate array that operates at a speed of 100 megasymbols per second. By use of density evolution (a computational- simulation technique for analyzing performances of LDPC codes), it has been shown through some examples that as the block size goes to infinity, low iterative decoding thresholds close to channel capacity limits can be achieved for the codes of the type in question having low maximum variable node degrees. The decoding thresholds in these examples are lower than those of the best-known unstructured irregular LDPC codes constrained to have the same maximum node degrees. Furthermore, the present method enables the construction of codes of any desired rate with thresholds that stay uniformly close to their respective channel capacity thresholds.
Protograph based LDPC codes with minimum distance linearly growing with block size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends to not exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have linearly increasing minimum distance in block size, outperform that of regular LDPC codes. Furthermore, a family of low to high rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNFt performance.
LDPC Codes with Minimum Distance Proportional to Block Size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy
2009-01-01
Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1.2. Any size LDPC code can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides Field Programmable Gate Array (FPGA) hardware performance simulations for this code family. In addition, the illustration provides minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) to achieve zero error rates as the code block size goes to infinity for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.
NASA Astrophysics Data System (ADS)
Pan, Xiaolong; Liu, Bo; Zheng, Jianglong; Tian, Qinghua
2016-08-01
We propose and demonstrate a low complexity Reed-Solomon-based low-density parity-check (RS-LDPC) code with adaptive puncturing decoding algorithm for elastic optical transmission system. Partial received codes and the relevant column in parity-check matrix can be punctured to reduce the calculation complexity by adaptive parity-check matrix during decoding process. The results show that the complexity of the proposed decoding algorithm is reduced by 30% compared with the regular RS-LDPC system. The optimized code rate of the RS-LDPC code can be obtained after five times iteration.
Structured Low-Density Parity-Check Codes with Bandwidth Efficient Modulation
NASA Technical Reports Server (NTRS)
Cheng, Michael K.; Divsalar, Dariush; Duy, Stephanie
2009-01-01
In this work, we study the performance of structured Low-Density Parity-Check (LDPC) Codes together with bandwidth efficient modulations. We consider protograph-based LDPC codes that facilitate high-speed hardware implementations and have minimum distances that grow linearly with block sizes. We cover various higher- order modulations such as 8-PSK, 16-APSK, and 16-QAM. During demodulation, a demapper transforms the received in-phase and quadrature samples into reliability information that feeds the binary LDPC decoder. We will compare various low-complexity demappers and provide simulation results for assorted coded-modulation combinations on the additive white Gaussian noise and independent Rayleigh fading channels.
Accumulate repeat accumulate codes
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate codes' (ARA). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, thus belief propagation can be used for iterative decoding of ARA codes on a graph. The structure of encoder for this class can be viewed as precoded Repeat Accumulate (RA) code or as precoded Irregular Repeat Accumulate (IRA) code, where simply an accumulator is chosen as a precoder. Thus ARA codes have simple, and very fast encoder structure when they representing LDPC codes. Based on density evolution for LDPC codes through some examples for ARA codes, we show that for maximum variable node degree 5 a minimum bit SNR as low as 0.08 dB from channel capacity for rate 1/2 can be achieved as the block size goes to infinity. Thus based on fixed low maximum variable node degree, its threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the dame maximum node degree. Furthermore by puncturing the accumulators any desired high rate codes close to code rate 1 can be obtained with thresholds that stay close to the channel capacity thresholds uniformly. Iterative decoding simulation results are provided. The ARA codes also have projected graph or protograph representation that allows for high speed decoder implementation.
PMD compensation in fiber-optic communication systems with direct detection using LDPC-coded OFDM.
Djordjevic, Ivan B
2007-04-02
The possibility of polarization-mode dispersion (PMD) compensation in fiber-optic communication systems with direct detection using a simple channel estimation technique and low-density parity-check (LDPC)-coded orthogonal frequency division multiplexing (OFDM) is demonstrated. It is shown that even for differential group delay (DGD) of 4/BW (BW is the OFDM signal bandwidth), the degradation due to the first-order PMD can be completely compensated for. Two classes of LDPC codes designed based on two different combinatorial objects (difference systems and product of combinatorial designs) suitable for use in PMD compensation are introduced.
NASA Astrophysics Data System (ADS)
Wang, Liming; Qiao, Yaojun; Yu, Qian; Zhang, Wenbo
2016-04-01
We introduce a watermark non-binary low-density parity check code (NB-LDPC) scheme, which can estimate the time-varying noise variance by using prior information of watermark symbols, to improve the performance of NB-LDPC codes. And compared with the prior-art counterpart, the watermark scheme can bring about 0.25 dB improvement in net coding gain (NCG) at bit error rate (BER) of 1e-6 and 36.8-81% reduction of the iteration numbers. Obviously, the proposed scheme shows great potential in terms of error correction performance and decoding efficiency.
NASA Astrophysics Data System (ADS)
Nakamura, Yasuaki; Okamoto, Yoshihiro; Osawa, Hisashi; Aoi, Hajime; Muraoka, Hiroaki
We evaluate the performance of the write-margin for the low-density parity-check (LDPC) coding and iterative decoding system in the bit-patterned media (BPM) R/W channel affected by the write-head field gradient, the media switching field distribution (SFD), the demagnetization field from adjacent islands and the island position deviation. It is clarified that the LDPC coding and iterative decoding system in R/W channel using BPM at 3 Tbit/inch2 has a write-margin of about 20%.
Cooperative optimization and their application in LDPC codes
NASA Astrophysics Data System (ADS)
Chen, Ke; Rong, Jian; Zhong, Xiaochun
2008-10-01
Cooperative optimization is a new way for finding global optima of complicated functions of many variables. The proposed algorithm is a class of message passing algorithms and has solid theory foundations. It can achieve good coding gains over the sum-product algorithm for LDPC codes. For (6561, 4096) LDPC codes, the proposed algorithm can achieve 2.0 dB gains over the sum-product algorithm at BER of 4×10-7. The decoding complexity of the proposed algorithm is lower than the sum-product algorithm can do; furthermore, the former can achieve much lower error floor than the latter can do after the Eb / No is higher than 1.8 dB.
A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications.
Revathy, M; Saravanan, R
2015-01-01
Low-density parity-check (LDPC) codes have been implemented in latest digital video broadcasting, broadband wireless access (WiMax), and fourth generation of wireless standards. In this paper, we have proposed a high efficient low-density parity-check code (LDPC) decoder architecture for low power applications. This study also considers the design and analysis of check node and variable node units and Euclidean orthogonal generator in LDPC decoder architecture. The Euclidean orthogonal generator is used to reduce the error rate of the proposed LDPC architecture, which can be incorporated between check and variable node architecture. This proposed decoder design is synthesized on Xilinx 9.2i platform and simulated using Modelsim, which is targeted to 45 nm devices. Synthesis report proves that the proposed architecture greatly reduces the power consumption and hardware utilizations on comparing with different conventional architectures.
LDPC-PPM Coding Scheme for Optical Communication
NASA Technical Reports Server (NTRS)
Barsoum, Maged; Moision, Bruce; Divsalar, Dariush; Fitz, Michael
2009-01-01
In a proposed coding-and-modulation/demodulation-and-decoding scheme for a free-space optical communication system, an error-correcting code of the low-density parity-check (LDPC) type would be concatenated with a modulation code that consists of a mapping of bits to pulse-position-modulation (PPM) symbols. Hence, the scheme is denoted LDPC-PPM. This scheme could be considered a competitor of a related prior scheme in which an outer convolutional error-correcting code is concatenated with an interleaving operation, a bit-accumulation operation, and a PPM inner code. Both the prior and present schemes can be characterized as serially concatenated pulse-position modulation (SCPPM) coding schemes. Figure 1 represents a free-space optical communication system based on either the present LDPC-PPM scheme or the prior SCPPM scheme. At the transmitting terminal, the original data (u) are processed by an encoder into blocks of bits (a), and the encoded data are mapped to PPM of an optical signal (c). For the purpose of design and analysis, the optical channel in which the PPM signal propagates is modeled as a Poisson point process. At the receiving terminal, the arriving optical signal (y) is demodulated to obtain an estimate (a^) of the coded data, which is then processed by a decoder to obtain an estimate (u^) of the original data.
Djordjevic, Ivan B
2007-08-06
We describe a coded power-efficient transmission scheme based on repetition MIMO principle suitable for communication over the atmospheric turbulence channel, and determine its channel capacity. The proposed scheme employs the Q-ary pulse-position modulation. We further study how to approach the channel capacity limits using low-density parity-check (LDPC) codes. Component LDPC codes are designed using the concept of pairwise-balanced designs. Contrary to the several recent publications, bit-error rates and channel capacities are reported assuming non-ideal photodetection. The atmospheric turbulence channel is modeled using the Gamma-Gamma distribution function due to Al-Habash et al. Excellent bit-error rate performance improvement, over uncoded case, is found.
A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications
Revathy, M.; Saravanan, R.
2015-01-01
Low-density parity-check (LDPC) codes have been implemented in latest digital video broadcasting, broadband wireless access (WiMax), and fourth generation of wireless standards. In this paper, we have proposed a high efficient low-density parity-check code (LDPC) decoder architecture for low power applications. This study also considers the design and analysis of check node and variable node units and Euclidean orthogonal generator in LDPC decoder architecture. The Euclidean orthogonal generator is used to reduce the error rate of the proposed LDPC architecture, which can be incorporated between check and variable node architecture. This proposed decoder design is synthesized on Xilinx 9.2i platform and simulated using Modelsim, which is targeted to 45 nm devices. Synthesis report proves that the proposed architecture greatly reduces the power consumption and hardware utilizations on comparing with different conventional architectures. PMID:26065017
Optimal Codes for the Burst Erasure Channel
NASA Technical Reports Server (NTRS)
Hamkins, Jon
2010-01-01
Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure protection. As can be seen, the simple interleaved RS codes have substantially lower inefficiency over a wide range of transmission lengths.
Bounded-Angle Iterative Decoding of LDPC Codes
NASA Technical Reports Server (NTRS)
Dolinar, Samuel; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush
2009-01-01
Bounded-angle iterative decoding is a modified version of conventional iterative decoding, conceived as a means of reducing undetected-error rates for short low-density parity-check (LDPC) codes. For a given code, bounded-angle iterative decoding can be implemented by means of a simple modification of the decoder algorithm, without redesigning the code. Bounded-angle iterative decoding is based on a representation of received words and code words as vectors in an n-dimensional Euclidean space (where n is an integer).
Short-Block Protograph-Based LDPC Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher
2010-01-01
Short-block low-density parity-check (LDPC) codes of a special type are intended to be especially well suited for potential applications that include transmission of command and control data, cellular telephony, data communications in wireless local area networks, and satellite data communications. [In general, LDPC codes belong to a class of error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels.] The codes of the present special type exhibit low error floors, low bit and frame error rates, and low latency (in comparison with related prior codes). These codes also achieve low maximum rate of undetected errors over all signal-to-noise ratios, without requiring the use of cyclic redundancy checks, which would significantly increase the overhead for short blocks. These codes have protograph representations; this is advantageous in that, for reasons that exceed the scope of this article, the applicability of protograph representations makes it possible to design highspeed iterative decoders that utilize belief- propagation algorithms.
Protograph LDPC Codes with Node Degrees at Least 3
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher
2006-01-01
In this paper we present protograph codes with a small number of degree-3 nodes and one high degree node. The iterative decoding threshold for proposed rate 1/2 codes are lower, by about 0.2 dB, than the best known irregular LDPC codes with degree at least 3. The main motivation is to gain linear minimum distance to achieve low error floor. Also to construct rate-compatible protograph-based LDPC codes for fixed block length that simultaneously achieves low iterative decoding threshold and linear minimum distance. We start with a rate 1/2 protograph LDPC code with degree-3 nodes and one high degree node. Higher rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes. This is equivalent to constraint combining in the protograph. The condition where all constraints are combined corresponds to the highest rate code. This constraint must be connected to nodes of degree at least three for the graph to have linear minimum distance. Thus having node degree at least 3 for rate 1/2 guarantees linear minimum distance property to be preserved for higher rates. Through examples we show that the iterative decoding threshold as low as 0.544 dB can be achieved for small protographs with node degrees at least three. A family of low- to high-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
LDPC Codes--Structural Analysis and Decoding Techniques
ERIC Educational Resources Information Center
Zhang, Xiaojie
2012-01-01
Low-density parity-check (LDPC) codes have been the focus of much research over the past decade thanks to their near Shannon limit performance and to their efficient message-passing (MP) decoding algorithms. However, the error floor phenomenon observed in MP decoding, which manifests itself as an abrupt change in the slope of the error-rate curve,…
Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check(LDPC) Codes
NASA Astrophysics Data System (ADS)
Jing, Lin; Brun, Todd; Quantum Research Team
Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Manabu Hagiwara et al., 2007 presented a method to calculate parity check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check(LDPC) Codes.
Using LDPC Code Constraints to Aid Recovery of Symbol Timing
NASA Technical Reports Server (NTRS)
Jones, Christopher; Villasnor, John; Lee, Dong-U; Vales, Esteban
2008-01-01
A method of utilizing information available in the constraints imposed by a low-density parity-check (LDPC) code has been proposed as a means of aiding the recovery of symbol timing in the reception of a binary-phase-shift-keying (BPSK) signal representing such a code in the presence of noise, timing error, and/or Doppler shift between the transmitter and the receiver. This method and the receiver architecture in which it would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless in that they do not require transmission and reception of pilot signals. Acquisition and tracking of a signal of the type described above have traditionally been performed upstream of, and independently of, decoding and have typically involved utilization of a phase-locked loop (PLL). However, the LDPC decoding process, which is iterative, provides information that can be fed back to the timing-recovery receiver circuits to improve performance significantly over that attainable in the absence of such feedback. Prior methods of coupling LDPC decoding with timing recovery had focused on the use of output code words produced as the iterations progress. In contrast, in the present method, one exploits the information available from the metrics computed for the constraint nodes of an LDPC code during the decoding process. In addition, the method involves the use of a waveform model that captures, better than do the waveform models of the prior methods, distortions introduced by receiver timing errors and transmitter/ receiver motions. An LDPC code is commonly represented by use of a bipartite graph containing two sets of nodes. In the graph corresponding to an (n,k) code, the n variable nodes correspond to the code word symbols and the n-k constraint nodes represent the constraints that the code places on the variable nodes in order for them to form a valid code word. The decoding procedure involves iterative computation of values associated with these nodes. A constraint node represents a parity-check equation using a set of variable nodes as inputs. A valid decoded code word is obtained if all parity-check equations are satisfied. After each iteration, the metrics associated with each constraint node can be evaluated to determine the status of the associated parity check. Heretofore, normally, these metrics would be utilized only within the LDPC decoding process to assess whether or not variable nodes had converged to a codeword. In the present method, it is recognized that these metrics can be used to determine accuracy of the timing estimates used in acquiring the sampled data that constitute the input to the LDPC decoder. In fact, the number of constraints that are satisfied exhibits a peak near the optimal timing estimate. Coarse timing estimation (or first-stage estimation as described below) is found via a parametric search for this peak. The present method calls for a two-stage receiver architecture illustrated in the figure. The first stage would correct large time delays and frequency offsets; the second stage would track random walks and correct residual time and frequency offsets. In the first stage, constraint-node feedback from the LDPC decoder would be employed in a search algorithm in which the searches would be performed in successively narrower windows to find the correct time delay and/or frequency offset. 
The second stage would include a conventional first-order PLL with a decision-aided timing-error detector that would utilize, as its decision aid, decoded symbols from the LDPC decoder. The method has been tested by means of computational simulations in cases involving various timing and frequency errors. The results of the simulations ined in the ideal case of perfect timing in the receiver.
DOE Office of Scientific and Technical Information (OSTI.GOV)
CHERTKOV, MICHAEL; STEPANOV, MIKHAIL
2007-01-10
The authors discuss performance of Low-Density-Parity-Check (LDPC) codes decoded by Linear Programming (LP) decoding at moderate and large Signal-to-Noise-Ratios (SNR). Frame-Error-Rate (FER) dependence on SNR and the noise space landscape of the coding/decoding scheme are analyzed by a combination of the previously introduced instanton/pseudo-codeword-search method and a new 'dendro' trick. To reduce complexity of the LP decoding for a code with high-degree checks, {ge} 5, they introduce its dendro-LDPC counterpart, that is the code performing identifically to the original one under Maximum-A-Posteriori (MAP) decoding but having reduced (down to three) check connectivity degree. Analyzing number of popular LDPC codes andmore » their dendro versions performing over the Additive-White-Gaussian-Noise (AWGN) channel, they observed two qualitatively different regimes: (i) error-floor sets early, at relatively low SNR, and (ii) FER decays with SNR increase faster at moderate SNR than at the largest SNR. They explain these regimes in terms of the pseudo-codeword spectra of the codes.« less
FPGA-based LDPC-coded APSK for optical communication systems.
Zou, Ding; Lin, Changyu; Djordjevic, Ivan B
2017-02-20
In this paper, with the aid of mutual information and generalized mutual information (GMI) capacity analyses, it is shown that the geometrically shaped APSK that mimics an optimal Gaussian distribution with equiprobable signaling together with the corresponding gray-mapping rules can approach the Shannon limit closer than conventional quadrature amplitude modulation (QAM) at certain range of FEC overhead for both 16-APSK and 64-APSK. The field programmable gate array (FPGA) based LDPC-coded APSK emulation is conducted on block interleaver-based and bit interleaver-based systems; the results verify a significant improvement in hardware efficient bit interleaver-based systems. In bit interleaver-based emulation, the LDPC-coded 64-APSK outperforms 64-QAM, in terms of symbol signal-to-noise ratio (SNR), by 0.1 dB, 0.2 dB, and 0.3 dB at spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz, respectively. It is found by emulation that LDPC-coded 64-APSK for spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz is 1.6 dB, 1.7 dB, and 2.2 dB away from the GMI capacity.
Channel coding for underwater acoustic single-carrier CDMA communication system
NASA Astrophysics Data System (ADS)
Liu, Lanjun; Zhang, Yonglei; Zhang, Pengcheng; Zhou, Lin; Niu, Jiong
2017-01-01
CDMA is an effective multiple access protocol for underwater acoustic networks, and channel coding can effectively reduce the bit error rate (BER) of the underwater acoustic communication system. For the requirements of underwater acoustic mobile networks based on CDMA, an underwater acoustic single-carrier CDMA communication system (UWA/SCCDMA) based on the direct-sequence spread spectrum is proposed, and its channel coding scheme is studied based on convolution, RA, Turbo and LDPC coding respectively. The implementation steps of the Viterbi algorithm of convolutional coding, BP and minimum sum algorithms of RA coding, Log-MAP and SOVA algorithms of Turbo coding, and sum-product algorithm of LDPC coding are given. An UWA/SCCDMA simulation system based on Matlab is designed. Simulation results show that the UWA/SCCDMA based on RA, Turbo and LDPC coding have good performance such that the communication BER is all less than 10-6 in the underwater acoustic channel with low signal to noise ratio (SNR) from -12 dB to -10dB, which is about 2 orders of magnitude lower than that of the convolutional coding. The system based on Turbo coding with Log-MAP algorithm has the best performance.
Transmission over UWB channels with OFDM system using LDPC coding
NASA Astrophysics Data System (ADS)
Dziwoki, Grzegorz; Kucharczyk, Marcin; Sulek, Wojciech
2009-06-01
Hostile wireless environment requires use of sophisticated signal processing methods. The paper concerns on Ultra Wideband (UWB) transmission over Personal Area Networks (PAN) including MB-OFDM specification of physical layer. In presented work the transmission system with OFDM modulation was connected with LDPC encoder/decoder. Additionally the frame and bit error rate (FER and BER) of the system was decreased using results from the LDPC decoder in a kind of turbo equalization algorithm for better channel estimation. Computational block using evolutionary strategy, from genetic algorithms family, was also used in presented system. It was placed after SPA (Sum-Product Algorithm) decoder and is conditionally turned on in the decoding process. The result is increased effectiveness of the whole system, especially lower FER. The system was tested with two types of LDPC codes, depending on type of parity check matrices: randomly generated and constructed deterministically, optimized for practical decoder architecture implemented in the FPGA device.
Encoders for block-circulant LDPC codes
NASA Technical Reports Server (NTRS)
Andrews, Kenneth; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
In this paper, we present two encoding methods for block-circulant LDPC codes. The first is an iterative encoding method based on the erasure decoding algorithm, and the computations required are well organized due to the block-circulant structure of the parity check matrix. The second method uses block-circulant generator matrices, and the encoders are very similar to those for recursive convolutional codes. Some encoders of the second type have been implemented in a small Field Programmable Gate Array (FPGA) and operate at 100 Msymbols/second.
Strategic and Tactical Decision-Making Under Uncertainty
2006-01-03
message passing algorithms. In recent work we applied this method to the problem of joint decoding of a low-density parity-check ( LDPC ) code and a partial...Joint Decoding of LDPC Codes and Partial-Response Channels." IEEE Transactions on Communications. Vol. 54, No. 7, 1149-1153, 2006. P. Pakzad and V...Michael I. Jordan PAGES U U U SAPR 20 19b. TELEPHONE NUMBER (Include area code ) 510/642-3806 Standard Form 298 (Rev. 8/98) Prescribed by ANSI Std. Z39.18
NASA Astrophysics Data System (ADS)
Drăghici, S.; Proştean, O.; Răduca, E.; Haţiegan, C.; Hălălae, I.; Pădureanu, I.; Nedeloni, M.; (Barboni Haţiegan, L.
2017-01-01
In this paper a method with which a set of characteristic functions are associated to a LDPC code is shown and also functions that represent the evolution density of messages that go along the edges of a Tanner graph. Graphic representations of the density evolution are shown respectively the study and simulation of likelihood threshold that render asymptotic boundaries between which there are decodable codes were made using MathCad V14 software.
Pilotless Frame Synchronization Using LDPC Code Constraints
NASA Technical Reports Server (NTRS)
Jones, Christopher; Vissasenor, John
2009-01-01
A method of pilotless frame synchronization has been devised for low- density parity-check (LDPC) codes. In pilotless frame synchronization , there are no pilot symbols; instead, the offset is estimated by ex ploiting selected aspects of the structure of the code. The advantag e of pilotless frame synchronization is that the bandwidth of the sig nal is reduced by an amount associated with elimination of the pilot symbols. The disadvantage is an increase in the amount of receiver data processing needed for frame synchronization.
Qin, Heng; Zuo, Yong; Zhang, Dong; Li, Yinghui; Wu, Jian
2017-03-06
Through slight modification on typical photon multiplier tube (PMT) receiver output statistics, a generalized received response model considering both scattered propagation and random detection is presented to investigate the impact of inter-symbol interference (ISI) on link data rate of short-range non-line-of-sight (NLOS) ultraviolet communication. Good agreement with the experimental results by numerical simulation is shown. Based on the received response characteristics, a heuristic check matrix construction algorithm of low-density-parity-check (LDPC) code is further proposed to approach the data rate bound derived in a delayed sampling (DS) binary pulse position modulation (PPM) system. Compared to conventional LDPC coding methods, better bit error ratio (BER) below 1E-05 is achieved for short-range NLOS UVC systems operating at data rate of 2Mbps.
Design and implementation of a channel decoder with LDPC code
NASA Astrophysics Data System (ADS)
Hu, Diqing; Wang, Peng; Wang, Jianzong; Li, Tianquan
2008-12-01
Because Toshiba quit the competition, there is only one standard of blue-ray disc: BLU-RAY DISC, which satisfies the demands of high-density video programs. But almost all the patents are gotten by big companies such as Sony, Philips. As a result we must pay much for these patents when our productions use BD. As our own high-density optical disk storage system, Next-Generation Versatile Disc(NVD) which proposes a new data format and error correction code with independent intellectual property rights and high cost performance owns higher coding efficiency than DVD and 12GB which could meet the demands of playing the high-density video programs. In this paper, we develop Low-Density Parity-Check Codes (LDPC): a new channel encoding process and application scheme using Q-matrix based on LDPC encoding has application in NVD's channel decoder. And combined with the embedded system portable feature of SOPC system, we have completed all the decoding modules by FPGA. In the NVD experiment environment, tests are done. Though there are collisions between LDPC and Run-Length-Limited modulation codes (RLL) which are used in optical storage system frequently, the system is provided as a suitable solution. At the same time, it overcomes the defects of the instability and inextensibility, which occurred in the former decoding system of NVD--it was implemented by hardware.
Wang, Andong; Zhu, Long; Chen, Shi; Du, Cheng; Mo, Qi; Wang, Jian
2016-05-30
Mode-division multiplexing over fibers has attracted increasing attention over the last few years as a potential solution to further increase fiber transmission capacity. In this paper, we demonstrate the viability of orbital angular momentum (OAM) mode transmission over a 50-km few-mode fiber (FMF). By analyzing the properties of the eigenmodes of an FMF, we study the inner-mode-group differential modal delay (DMD), which may limit the capacity of long-distance OAM mode transmission and multiplexing. To mitigate the impact of large inner-mode-group DMD in long-distance fiber-based OAM transmission, we use low-density parity-check (LDPC) codes to increase system reliability. By evaluating the performance of LDPC-coded single-OAM-mode transmission over 50-km fiber, significant coding gains of >4 dB, 8 dB, and 14 dB are demonstrated for 1-Gbaud, 2-Gbaud, and 5-Gbaud quadrature phase-shift keying (QPSK) signals, respectively. Furthermore, to verify and compare the influence of DMD in long-distance fiber transmission, single-OAM-mode transmission over 10-km FMF is also demonstrated experimentally. Finally, we experimentally demonstrate OAM multiplexing and transmission over a 50-km FMF using LDPC-coded 1-Gbaud QPSK signals to compensate for the influence of mode crosstalk and DMD in the 50-km FMF.
On the reduced-complexity of LDPC decoders for beyond 400 Gb/s serial optical transmission
NASA Astrophysics Data System (ADS)
Djordjevic, Ivan B.; Xu, Lei; Wang, Ting
2010-12-01
Two reduced-complexity (RC) LDPC decoders are proposed, which can be used in combination with large-girth LDPC codes to enable beyond-400 Gb/s serial optical transmission. We show that the optimally attenuated RC min-sum algorithm performs only 0.45 dB worse than the conventional sum-product algorithm, while having lower storage memory requirements and much lower latency. We further evaluate the proposed algorithms for beyond-400 Gb/s serial optical transmission in combination with a PolMUX 32-IPQ-based signal constellation and show that low BERs can be achieved at medium optical SNRs, with a net coding gain above 11.4 dB.
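For reference, the check-node update of the attenuated (normalized) min-sum algorithm, the usual reduced-complexity substitute for the sum-product update, can be sketched as follows; the attenuation factor 0.8 is an illustrative placeholder for the optimized value used in the paper.

```python
import numpy as np

def check_update_min_sum(llrs_in, alpha=0.8):
    """Extrinsic check-to-variable LLRs for one check node, using the
    attenuated min-sum rule: sign product times attenuated minimum."""
    llrs_in = np.asarray(llrs_in, dtype=float)
    signs = np.sign(llrs_in)
    signs[signs == 0] = 1.0
    total_sign = np.prod(signs)
    mags = np.abs(llrs_in)
    order = np.argsort(mags)
    m1, m2 = mags[order[0]], mags[order[1]]  # two smallest magnitudes
    out = np.full_like(mags, m1)
    out[order[0]] = m2          # the minimum edge receives the 2nd minimum
    # total_sign * signs[i] equals the product of all signs except edge i.
    return alpha * total_sign * signs * out
```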
Low Density Parity Check Codes: Bandwidth Efficient Channel Coding
NASA Technical Reports Server (NTRS)
Fong, Wai; Lin, Shu; Maki, Gary; Yeh, Pen-Shu
2003-01-01
Low Density Parity Check (LDPC) codes provide near-Shannon-capacity performance for NASA missions. These codes have high coding rates, R = 0.82 and 0.875, with moderate code lengths, n = 4096 and 8176. Their decoders have inherently parallel structures, which allows for high-speed implementation. Two codes based on Euclidean Geometry (EG) were selected for flight ASIC implementation. These codes are cyclic and quasi-cyclic in nature and therefore have a simple encoder structure, which yields power and size benefits. They also have a large minimum distance, as much as d_min = 65, giving them powerful error-correcting capabilities and very low error floors. This paper presents the development of the LDPC flight encoder and decoder, its applications, and its status.
Statistical mechanics of broadcast channels using low-density parity-check codes.
Nakamura, Kazutaka; Kabashima, Yoshiyuki; Morelos-Zaragoza, Robert; Saad, David
2003-03-01
We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time sharing methods when algebraic codes are used. The statistical physics based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC based time sharing codes while the best performance, when received transmissions are optimally decoded, is bounded by the time sharing limit.
Accumulate-Repeat-Accumulate-Accumulate Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Thorpe, Jeremy
2004-01-01
Inspired by recently proposed Accumulate-Repeat-Accumulate (ARA) codes [15], in this paper we propose a channel coding scheme called Accumulate-Repeat-Accumulate-Accumulate (ARAA) codes. These codes can be seen as serial turbo-like codes or as a subclass of Low Density Parity Check (LDPC) codes, and they have a projected graph or protograph representation; this allows for a high-speed iterative decoder implementation using belief propagation. An ARAA code can be viewed as a precoded Repeat-and-Accumulate (RA) code with puncturing in concatenation with another accumulator, where simply an accumulator is chosen as the precoder; thus ARAA codes have a very fast encoder structure. Using density evolution on their associated protographs, we find examples of rate-1/2 ARAA codes with maximum variable node degree 4 for which a minimum bit-SNR as low as 0.21 dB from the channel capacity limit can be achieved as the block size goes to infinity. Such a low threshold cannot be achieved by RA or Irregular RA (IRA) or unstructured irregular LDPC codes with the same constraint on the maximum variable node degree. Furthermore by puncturing the accumulators we can construct families of higher rate ARAA codes with thresholds that stay close to their respective channel capacity thresholds uniformly. Iterative decoding simulation results show comparable performance with the best-known LDPC codes but with very low error floor even at moderate block sizes.
LDPC-based iterative joint source-channel decoding for JPEG2000.
Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane
2007-02-01
A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.
Yang, Qi; Al Amin, Abdullah; Chen, Xi; Ma, Yiran; Chen, Simin; Shieh, William
2010-08-02
High-order modulation formats and advanced error-correcting codes (ECC) are two promising techniques for improving the performance of ultrahigh-speed optical transport networks. In this paper, we present record receiver sensitivity for 107 Gb/s CO-OFDM transmission via constellation expansion to 16-QAM and rate-1/2 LDPC coding. We also show single-channel transmission of a 428-Gb/s CO-OFDM signal over 960 km of standard single-mode fiber (SSMF) without Raman amplification.
LDPC product coding scheme with extrinsic information for bit patterned media recording
NASA Astrophysics Data System (ADS)
Jeong, Seongkwon; Lee, Jaejin
2017-05-01
Since the density limit of the current perpendicular magnetic storage system will soon be reached, bit patterned media recording (BPMR) is a promising candidate for the next-generation storage system to achieve an areal density beyond 1 Tb/in². In BPMR, each recording bit is stored in a fabricated magnetic island, and the space between the magnetic islands is nonmagnetic. To approach recording densities of 1 Tb/in², the spacing of the magnetic islands must be less than 25 nm; consequently, severe inter-symbol interference (ISI) and inter-track interference (ITI) occur, and both degrade the performance of BPMR. In this paper, we propose a low-density parity check (LDPC) product coding scheme that exploits extrinsic information for BPMR. This scheme shows improved bit error rate performance compared with a scheme that uses a single LDPC code.
Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique
NASA Astrophysics Data System (ADS)
Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi
Reducing power dissipation is a major challenge in applying LDPC decoders to practical digital communication systems. In this paper, we propose a low-power LDPC decoder architecture based on an intermediate-message-compression technique with the following features: (i) the intermediate message compression reduces the required memory capacity and the write power dissipation; (ii) a clock-gated, shift-register-based intermediate message memory architecture lets the decoder decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of these two techniques reduces power dissipation while maintaining decoding throughput. Simulation results show that the proposed architecture improves power efficiency by up to 52% and 18% compared with decoders based on the overlapped schedule and the rapid-convergence schedule, respectively, without the proposed techniques.
45 Gb/s low complexity optical front-end for soft-decision LDPC decoders.
Sakib, Meer Nazmus; Moayedi, Monireh; Gross, Warren J; Liboiron-Ladouceur, Odile
2012-07-30
In this paper, a low-complexity and energy-efficient 45 Gb/s soft-decision optical front-end, to be used with soft-decision low-density parity-check (LDPC) decoders, is demonstrated. The results show that the optical front-end exhibits net coding gains of 7.06 dB and 9.62 dB at post-forward-error-correction bit error rates of 10^-7 and 10^-12 for the long-block-length LDPC(32768,26803) code. The gain over a hard-decision front-end is 1.9 dB for this code. It is shown that the soft-decision circuit can also be used as a 2-bit flash-type analog-to-digital converter (ADC) in conjunction with equalization schemes. At a bit rate of 15 Gb/s, using RS(255,239), LDPC(672,336), (672,504), (672,588), and (1440,1344) codes with a 6-tap finite impulse response (FIR) equalizer results in optical power savings of 3, 5, 7, 9.5, and 10.5 dB, respectively. The 2-bit flash ADC consumes only 2.71 W at 32 GSamples/s; at 45 GSamples/s, the power consumption is estimated to be 4.95 W.
High-efficiency reconciliation for continuous variable quantum key distribution
NASA Astrophysics Data System (ADS)
Bai, Zengliang; Yang, Shenshen; Li, Yongmin
2017-04-01
Quantum key distribution (QKD) is the most mature application of quantum information technology. Information reconciliation is a crucial step in QKD and significantly affects the final secret key rates shared between two legitimate parties. We analyze and compare various construction methods of low-density parity-check (LDPC) codes and design high-performance irregular LDPC codes with a block length of 10^6. Starting from these good codes and exploiting the slice reconciliation technique based on multilevel coding and multistage decoding, we realize high-efficiency Gaussian key reconciliation with efficiency higher than 95% for signal-to-noise ratios above 1. Our demonstrated method can be readily applied in continuous variable QKD.
Memory-efficient decoding of LDPC codes
NASA Technical Reports Server (NTRS)
Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon
2005-01-01
We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, simulations show that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer with less than 0.1 dB quantization loss.
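A minimal sketch of the underlying idea follows, under assumed consistent-Gaussian message statistics rather than the authors' exact optimization: pick the step of a symmetric uniform 3-bit LLR quantizer that maximizes the mutual information between a code bit and its quantized message.

```python
import numpy as np
from scipy.stats import norm

def mutual_info(step, mu, bits=3):
    """I(B; Q) for a symmetric uniform quantizer with the given step,
    assuming LLRs ~ N(+/-mu, 2*mu) under the two bit hypotheses."""
    inner = np.arange(1, 2 ** (bits - 1)) * step          # one-sided edges
    edges = np.concatenate([[-np.inf], -inner[::-1], [0.0], inner, [np.inf]])
    sigma = np.sqrt(2.0 * mu)
    p0 = np.diff(norm.cdf(edges, loc=+mu, scale=sigma))   # P(Q | B=0)
    p1 = np.diff(norm.cdf(edges, loc=-mu, scale=sigma))   # P(Q | B=1)
    pq = 0.5 * (p0 + p1)
    return np.nansum(0.5 * p0 * np.log2(p0 / pq) + 0.5 * p1 * np.log2(p1 / pq))

mu = 2.0                                   # assumed message quality
steps = np.linspace(0.05, 3.0, 200)
best = max(steps, key=lambda s: mutual_info(s, mu))
print(f"best step {best:.3f}, I = {mutual_info(best, mu):.4f} bits")
```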
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2008-09-15
We present two PMD compensation schemes suitable for use in multilevel (M >= 2) block-coded modulation schemes with coherent detection. The first scheme is based on a BLAST-type polarization-interference cancellation scheme, and the second is based on iterative polarization cancellation. Both schemes use LDPC codes as channel codes. The proposed PMD compensation schemes are evaluated by employing coded-OFDM and coherent detection. When used in combination with girth-10 LDPC codes, these schemes outperform polarization-time-coding-based OFDM by 1 dB at a BER of 10^-9 and provide twice the spectral efficiency. The proposed schemes perform comparably and are able to compensate even 1200 ps of differential group delay with negligible penalty.
Statistical physics inspired energy-efficient coded-modulation for optical communications.
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2012-04-15
Because Shannon's entropy can be obtained by Stirling's approximation of the thermodynamic entropy, statistical-physics energy-minimization methods are directly applicable to signal constellation design. We demonstrate that statistical-physics-inspired energy-efficient (EE) signal constellation designs, in combination with large-girth low-density parity-check (LDPC) codes, significantly outperform conventional LDPC-coded polarization-division multiplexed quadrature amplitude modulation schemes. We also describe an EE signal constellation design algorithm. Finally, we propose a discrete-time implementation of the D-dimensional transceiver and the corresponding EE polarization-division multiplexed system.
Photonic entanglement-assisted quantum low-density parity-check encoders and decoders.
Djordjevic, Ivan B
2010-05-01
I propose encoder and decoder architectures for entanglement-assisted (EA) quantum low-density parity-check (LDPC) codes suitable for all-optical implementation. I show that the two basic gates needed for EA quantum error correction, namely the controlled-NOT (CNOT) and Hadamard gates, can be implemented based on the Mach-Zehnder interferometer. In addition, I show that EA quantum LDPC codes from balanced incomplete block designs of unitary index require only one entanglement qubit to be shared between source and destination.
On the optimum signal constellation design for high-speed optical transport networks.
Liu, Tao; Djordjevic, Ivan B
2012-08-27
In this paper, we first describe a signal constellation design algorithm that is optimum in the MMSE sense, called MMSE-OSCD, for a channel-capacity-achieving source distribution. Secondly, we introduce a feedback-channel-capacity-inspired optimum signal constellation design (FCC-OSCD) to further improve on MMSE-OSCD, inspired by the fact that feedback channel capacity is higher than that of systems without feedback. The constellations obtained by FCC-OSCD are, however, OSNR dependent. The optimization is performed jointly with regular quasi-cyclic low-density parity-check (LDPC) code design. The resulting coded-modulation scheme, in combination with polarization multiplexing, is suitable as an enabling technology for both 400 Gb/s and multi-Tb/s optical transport. Using a large-girth LDPC code, we demonstrate by Monte Carlo simulations that a 32-ary signal constellation obtained by FCC-OSCD outperforms the previously proposed optimized 32-ary CIPQ signal constellation by 0.8 dB at a BER of 10^-7. On the other hand, the LDPC-coded 16-ary FCC-OSCD outperforms 16-QAM by 1.15 dB at the same BER.
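An MMSE-sense design of this kind can be pictured as a Lloyd-type clustering of samples drawn from a capacity-approaching source; the sketch below is a generic illustration of that idea, with the Gaussian source, initialization, and stopping rule as assumptions rather than the authors' exact MMSE-OSCD procedure.

```python
import numpy as np

def mmse_constellation(M=32, n_samples=50_000, iters=50, seed=1):
    """Lloyd-type iteration: cluster Gaussian-like source samples into
    M points; each update step is the MMSE estimate (centroid)."""
    rng = np.random.default_rng(seed)
    # 2-D Gaussian samples stand in for the capacity-approaching source.
    samples = rng.normal(size=(n_samples, 2))
    points = samples[rng.choice(n_samples, M, replace=False)]
    for _ in range(iters):
        d2 = ((samples[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(axis=1)                   # nearest-point assignment
        for m in range(M):
            members = samples[idx == m]
            if len(members):
                points[m] = members.mean(axis=0)  # centroid update
    # Normalize to unit average symbol energy.
    return points / np.sqrt((points ** 2).sum(axis=1).mean())

print(mmse_constellation(M=8, n_samples=5_000, iters=20).round(2))
```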
Capacity Maximizing Constellations
NASA Technical Reports Server (NTRS)
Barsoum, Maged; Jones, Christopher
2010-01-01
Some non-traditional signal constellations have been proposed for transmission of data over the Additive White Gaussian Noise (AWGN) channel using channel-capacity-approaching codes such as low-density parity-check (LDPC) or turbo codes. Computational simulations have shown performance gains of more than 1 dB over traditional constellations. These gains could be translated to bandwidth-efficient communications, variously, over longer distances, using less power, or using smaller antennas. The proposed constellations have been used in a bit-interleaved coded modulation system employing state-of-the-art LDPC codes. In computational simulations, these constellations were shown to afford performance gains over traditional constellations, as predicted by the gap between the parallel decoding capacity of the constellations and the Gaussian capacity.
Multiple component codes based generalized LDPC codes for high-speed optical transport.
Djordjevic, Ivan B; Wang, Ting
2014-07-14
A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.
Maximum likelihood decoding analysis of Accumulate-Repeat-Accumulate Codes
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
Repeat-Accumulate (RA) codes are the simplest turbo-like codes that achieve good performance; however, they cannot compete with turbo codes or low-density parity-check (LDPC) codes as far as performance is concerned. Accumulate-Repeat-Accumulate (ARA) codes, a subclass of LDPC codes, are obtained by adding a precoder in front of RA codes with puncturing, where an accumulator is chosen as the precoder. These codes not only are very simple but also achieve excellent performance with iterative decoding. In this paper, the performance of these codes with maximum-likelihood (ML) decoding is analyzed and compared with random codes using very tight bounds. The weight distribution of some simple ARA codes is obtained, and through the existing tightest bounds we show that the ML SNR threshold of ARA codes approaches the performance of random codes very closely. We also show that the use of a precoder improves the SNR threshold, while the interleaving gain remains unchanged with respect to the punctured RA code.
NASA Astrophysics Data System (ADS)
Bai, Cheng-lin; Cheng, Zhi-hui
2016-09-01
In order to further improve the carrier synchronization estimation range and accuracy at low signal-to-noise ratio (SNR), this paper proposes a code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check (NB-LDPC) codes to study the performance of the polarization-division-multiplexing coherent optical orthogonal frequency division multiplexing (PDM-CO-OFDM) system with quadrature phase shift keying (QPSK) and 16 quadrature amplitude modulation (16-QAM) formats. The simulation results indicate that this algorithm can enlarge the frequency and phase offset estimation ranges and greatly enhance the accuracy of the system, and the bit error rate (BER) performance of the system is improved effectively compared with that of a system employing the traditional NB-LDPC code-aided carrier synchronization algorithm.
Performance optimization of PM-16QAM transmission system enabled by real-time self-adaptive coding.
Qu, Zhen; Li, Yao; Mo, Weiyang; Yang, Mingwei; Zhu, Shengxiang; Kilper, Daniel C; Djordjevic, Ivan B
2017-10-15
We experimentally demonstrate self-adaptive coded 5×100 Gb/s WDM polarization-multiplexed 16-quadrature-amplitude-modulation transmission over a 100 km fiber link, enabled by a real-time control plane. The real-time optical signal-to-noise ratio (OSNR) is measured using an optical performance monitoring device. The OSNR measurement is processed and fed back, using control-plane logic and messaging, to the transmitter side for code adaptation, where the binary data are adaptively encoded with three types of large-girth low-density parity-check (LDPC) codes with code rates of 0.8, 0.75, and 0.7. The total code-adaptation latency is measured to be 2273 ms. Compared with transmission without adaptation, average net capacity improvements of 102%, 36%, and 7.5% are obtained, respectively, by adaptive LDPC coding.
Performance analysis of LDPC codes on OOK terahertz wireless channels
NASA Astrophysics Data System (ADS)
Chun, Liu; Chang, Wang; Jun-Cheng, Cao
2016-02-01
Atmospheric absorption, scattering, and scintillation are the major causes of degraded transmission quality in terahertz (THz) wireless communications. An error control coding scheme based on low density parity check (LDPC) codes with a soft-decision decoding algorithm is proposed to improve the bit-error-rate (BER) performance of an on-off keying (OOK) modulated THz signal propagating through the atmospheric channel. The THz wave propagation characteristics and a channel model in the atmosphere are set up. Numerical simulations validate the strong performance of LDPC codes against atmospheric fading and demonstrate their large potential for future ultra-high-speed (beyond Gbps) THz communications.
NASA Astrophysics Data System (ADS)
Chang, Chun; Huang, Benxiong; Xu, Zhengguang; Li, Bin; Zhao, Nan
2018-02-01
Three soft-input-soft-output (SISO) detection methods for dual-polarized quadrature duobinary (DP-QDB), including maximum-logarithmic maximum-a-posteriori-probability-algorithm (Max-log-MAP)-based detection, soft-output Viterbi algorithm (SOVA)-based detection, and a proposed SISO detection, all of which can be combined with SISO decoding, are presented. The three detection methods are investigated by simulation at 128 Gb/s in five-channel wavelength-division-multiplexing uncoded and low-density parity-check (LDPC) coded DP-QDB systems. Max-log-MAP-based detection needs the returning-to-initial-states (RTIS) process despite having the best performance. When an LDPC code with a code rate of 0.83 is used, the detecting-and-decoding scheme with the proposed SISO detection does not need RTIS and has better bit error rate (BER) performance than the scheme with SOVA-based detection; the former can reduce the optical signal-to-noise ratio (OSNR) requirement (at BER = 10^-5) by 2.56 dB relative to the latter. Compared with the other two SISO methods, the application of the SISO iterative detection in LDPC-coded DP-QDB systems makes a good trade-off among transmission efficiency, OSNR requirement, and transmission distance.
FPGA implementation of advanced FEC schemes for intelligent aggregation networks
NASA Astrophysics Data System (ADS)
Zou, Ding; Djordjevic, Ivan B.
2016-02-01
In state-of-the-art fiber-optic communication systems, fixed forward error correction (FEC) and a fixed constellation size are employed. While it is important to closely approach the Shannon limit by using turbo product codes (TPC) and low-density parity-check (LDPC) codes with soft-decision decoding (SDD), rate-adaptive techniques, which enable increased information rates over short links and reliable transmission over long links, are likely to become more important with ever-increasing network traffic demands. In this invited paper, we describe a rate-adaptive non-binary LDPC coding technique and demonstrate, by FPGA-based emulation, its flexibility and good performance, exhibiting no error floor at BERs down to 10^-15 over the entire code-rate range, making it a viable solution for next-generation high-speed intelligent aggregation networks.
NASA Technical Reports Server (NTRS)
Simon, Marvin; Valles, Esteban; Jones, Christopher
2008-01-01
This paper addresses the carrier-phase estimation problem under low SNR conditions as are typical of turbo- and LDPC-coded applications. In previous publications by the first author, closed-loop carrier synchronization schemes for error-correction coded BPSK and QPSK modulation were proposed that were based on feeding back hard data decisions at the input of the loop, the purpose being to remove the modulation prior to attempting to track the carrier phase as opposed to the more conventional decision-feedback schemes that incorporate such feedback inside the loop. In this paper, we consider an alternative approach wherein the extrinsic soft information from the iterative decoder of turbo or LDPC codes is instead used as the feedback.
Percolation bounds for decoding thresholds with correlated erasures in quantum LDPC codes
NASA Astrophysics Data System (ADS)
Hamilton, Kathleen; Pryadko, Leonid
Correlations between errors can dramatically affect decoding thresholds, in some cases eliminating the threshold altogether. We analyze the existence of a threshold for quantum low-density parity-check (LDPC) codes in the case of correlated erasures. When erasures are positively correlated, the corresponding multivariate Bernoulli distribution can be modeled in terms of cluster errors, where qubits in clusters of various sizes can be marked all at once. In a code family with distance scaling as a power law of the code length, erasures can always be corrected below percolation on a qubit adjacency graph associated with the code. We bound this correlated percolation transition by weighted (uncorrelated) percolation on a specially constructed cluster connectivity graph, and apply our recent results to construct several bounds for the latter.
Manimegalai, C T; Gauni, Sabitha; Kalimuthu, K
2017-12-04
Wireless body area network (WBAN) is a breakthrough technology in healthcare areas such as hospitals and telemedicine. The human body is a complex mixture of different tissues, and the propagation of electromagnetic signals is expected to differ in each of these tissues; this forms the basis of WBAN, which differs from other environments. In this paper, knowledge of the Ultra Wide Band (UWB) channel is exploited in a WBAN (IEEE 802.15.6) system. Measurements of the channel parameters are taken over the 3.1-10.6 GHz frequency range. The proposed system transmits data at up to 480 Mbps by using LDPC-coded, APSK-modulated, differential space-time-frequency coded MB-OFDM to increase throughput and power efficiency.
An LDPC Decoder Architecture for Wireless Sensor Network Applications
Biroli, Andrea Dario Giancarlo; Martina, Maurizio; Masera, Guido
2012-01-01
The pervasive use of wireless sensors in a growing spectrum of human activities reinforces the need for devices with low energy dissipation. In this work, coded communication between a pair of wireless sensor devices is considered as a method to reduce the dissipated energy per transmitted bit with respect to uncoded communication. Different Low Density Parity Check (LDPC) codes are considered for this purpose, and post-layout results are shown for a low-area, low-energy decoder, which offers energy savings with respect to the uncoded solution in the range of 40%-80%, depending on the considered environment, distance, and bit error rate.
Rate-Compatible LDPC Codes with Linear Minimum Distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel
2009-01-01
A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either fixed input block size or fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first submethod is useful for applications that require rate-compatible codes with fixed input block sizes; these are codes in which only the number of parity bits is allowed to vary. The fixed-output-block-size submethod is useful for applications in which framing constraints are imposed on the physical layers of the affected communication systems. An example of such a system is one that conforms to one of the many new wireless-communication standards that involve the use of orthogonal frequency-division modulation.
Adaptive software-defined coded modulation for ultra-high-speed optical transport
NASA Astrophysics Data System (ADS)
Djordjevic, Ivan B.; Zhang, Yequn
2013-10-01
In optically-routed networks, different wavelength channels carrying traffic to different destinations can have quite different optical signal-to-noise ratios (OSNRs), and the signal is differently impacted by various channel impairments. Regardless of the data destination, an optical transport system (OTS) must provide the target bit-error rate (BER) performance. To provide the target BER regardless of the data destination, we adjust the forward error correction (FEC) strength: depending on the information obtained from the monitoring channels, we select the code rate matched to the OSNR range into which the current channel OSNR falls. To avoid frame synchronization issues, we keep the codeword length fixed, independent of the FEC code being employed. The common denominator is the employment of quasi-cyclic (QC-) LDPC codes in FEC. For high-speed implementation, low-complexity LDPC decoding algorithms are needed, and some of them are described in this invited paper. Instead of conventional QAM-based modulation schemes, we employ signal constellations obtained by the optimum signal constellation design (OSCD) algorithm. To improve the spectral efficiency, we perform simultaneous rate adaptation and signal constellation size selection so that the product of the number of bits per symbol and the code rate is closest to the channel capacity. Further, we describe the advantages of using 4D signaling instead of polarization-division multiplexed (PDM) QAM, by using 4D MAP detection combined with LDPC coding in a turbo equalization fashion. Finally, to address the limited bandwidth of the information infrastructure, high energy consumption, and the heterogeneity of optical networks, we describe an adaptive, energy-efficient hybrid coded-modulation scheme, which, in addition to amplitude, phase, and polarization state, employs spatial modes as additional basis functions for multidimensional coded modulation.
NASA Technical Reports Server (NTRS)
Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio
2007-01-01
This slide presentation reviews the objectives, meeting goals, and overall NASA goals for the NASA Data Standards Working Group. The presentation includes information on the technical progress surrounding the objective of short LDPC codes, and the general results on the Pu-Pw tradeoff.
Qu, Zhen; Djordjevic, Ivan B
2017-08-15
We propose and experimentally demonstrate a two-stage cross-talk mitigation method in an orbital-angular-momentum (OAM)-based free-space optical communication system, which is enabled by combining spatial offset and low-density parity-check (LDPC) coded nonuniform signaling. Different from traditional OAM multiplexing, where the OAM modes are centrally aligned for copropagation, the adjacent OAM modes (OAM states 2 and -6 and OAM states -2 and 6) in our proposed scheme are spatially offset to mitigate the mode cross talk. Different from traditional rectangular modulation formats, which transmit equidistant signal points with uniform probability, the 5-quadrature amplitude modulation (5-QAM) and 9-QAM are introduced to relieve cross-talk-induced performance degradation. The 5-QAM and 9-QAM formats are based on the Huffman coding technique, which can potentially achieve great cross-talk tolerance by combining them with corresponding nonbinary LDPC codes. We demonstrate that cross talk can be reduced by 1.6 dB and 1 dB via spatial offset for OAM states ±2 and ±6, respectively. Compared to quadrature phase shift keying and 8-QAM formats, the LDPC-coded 5-QAM and 9-QAM are able to bring 1.1 dB and 5.4 dB performance improvements in the presence of atmospheric turbulence, respectively.
An Efficient Downlink Scheduling Strategy Using Normal Graphs for Multiuser MIMO Wireless Systems
NASA Astrophysics Data System (ADS)
Chen, Jung-Chieh; Wu, Cheng-Hsuan; Lee, Yao-Nan; Wen, Chao-Kai
Inspired by the success of the low-density parity-check (LDPC) codes in the field of error-control coding, in this paper we propose transforming the downlink multiuser multiple-input multiple-output scheduling problem into an LDPC-like problem using the normal graph. Based on the normal graph framework, soft information, which indicates the probability that each user will be scheduled to transmit packets at the access point through a specified angle-frequency sub-channel, is exchanged among the local processors to iteratively optimize the multiuser transmission schedule. Computer simulations show that the proposed algorithm can efficiently schedule simultaneous multiuser transmission which then increases the overall channel utilization and reduces the average packet delay.
Improving soft FEC performance for higher-order modulations via optimized bit channel mappings.
Häger, Christian; Amat, Alexandre Graell I; Brännström, Fredrik; Alvarado, Alex; Agrell, Erik
2014-06-16
Soft forward error correction with higher-order modulations is often implemented in practice via the pragmatic bit-interleaved coded modulation paradigm, where a single binary code is mapped to a nonbinary modulation. In this paper, we study the optimization of the mapping of the coded bits to the modulation bits for a polarization-multiplexed fiber-optical system without optical inline dispersion compensation. Our focus is on protograph-based low-density parity-check (LDPC) codes which allow for an efficient hardware implementation, suitable for high-speed optical communications. The optimization is applied to the AR4JA protograph family, and further extended to protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full field simulations via the split-step Fourier method are used to verify the analysis. The results show performance gains of up to 0.25 dB, which translate into a possible extension of the transmission reach by roughly up to 8%, without significantly increasing the system complexity.
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Tong, Qing-zhen; Huang, Sheng; Wang, Yong
2013-11-01
An effective hierarchical reliable belief propagation (HRBP) decoding algorithm is proposed according to the structural characteristics of systematically constructed Gallager low-density parity-check (SCG-LDPC) codes. The novel decoding algorithm combines layered iteration with reliability judgment, and can greatly reduce the number of variable nodes involved in the subsequent iteration process and accelerate the convergence rate. Simulation results for the SCG-LDPC(3969,3720) code show that the novel HRBP decoding algorithm can greatly reduce the computational load while maintaining performance compared with the traditional belief propagation (BP) algorithm. The bit error rate (BER) of the HRBP algorithm is comparable at a threshold value of 15, and in the subsequent iteration process the number of variable nodes for the HRBP algorithm can be reduced by about 70% at high signal-to-noise ratio (SNR) compared with the BP algorithm. When the threshold value is further increased, the HRBP algorithm gradually degenerates into the layered-BP algorithm, but at a BER of 10^-7 and a maximum of 30 iterations, the net coding gain (NCG) of the HRBP algorithm is 0.2 dB more than that of the BP algorithm, and the average number of iterations can be reduced by about 40% at high SNR. Therefore, the novel HRBP decoding algorithm is well suited to optical communication systems.
Adaptive transmission based on multi-relay selection and rate-compatible LDPC codes
NASA Astrophysics Data System (ADS)
Su, Hualing; He, Yucheng; Zhou, Lin
2017-08-01
In order to adapt to dynamically changing channel conditions and improve the transmission reliability of the system, a cooperative system combining rate-compatible low density parity check (RC-LDPC) codes with a multi-relay selection protocol is proposed. Traditional relay selection protocols consider only the channel state information (CSI) of the source-relay and relay-destination links; the multi-relay selection protocol proposed in this paper additionally takes the CSI between relays into account in order to obtain more opportunities for collaboration. Additionally, the ideas of hybrid automatic repeat request (HARQ) and rate compatibility are introduced. Simulation results show that the transmission reliability of the system can be significantly improved by the proposed protocol.
Liu, Tao; Djordjevic, Ivan B
2014-12-29
In this paper, we first describe an optimal signal constellation design algorithm suitable for coherent optical channels dominated by linear phase noise. Then, we modify this algorithm to suit channels dominated by nonlinear phase noise. In the optimization procedure, the proposed algorithm uses the cumulative log-likelihood function instead of the Euclidean distance. Further, an LDPC-coded modulation scheme is proposed for use in combination with signal constellations obtained by the proposed algorithm. Monte Carlo simulations indicate that LDPC-coded modulation schemes employing the new constellation sets, obtained by our signal constellation design algorithm, significantly outperform the corresponding QAM constellations in terms of transmission distance and have better nonlinearity tolerance.
NASA Astrophysics Data System (ADS)
Jiang, Xue-Qin; Huang, Peng; Huang, Duan; Lin, Dakai; Zeng, Guihua
2017-02-01
Achieving information-theoretic security with practical complexity is of great interest for continuous-variable quantum key distribution in the postprocessing procedure. In this paper, we propose a reconciliation scheme based on punctured low-density parity-check (LDPC) codes. Compared with the well-known multidimensional reconciliation scheme, the present scheme has lower time complexity. Especially when the chosen punctured LDPC code achieves the Shannon capacity, the proposed reconciliation scheme can remove the information that has been leaked to an eavesdropper in the quantum transmission phase. Therefore, no information is leaked to the eavesdropper after the reconciliation stage, which indicates that the privacy amplification algorithm of the postprocessing procedure is no longer needed after reconciliation. These features lead to a higher secret key rate, optimal performance, and availability for the involved quantum key distribution scheme.
Bilayer Protograph Codes for Half-Duplex Relay Channels
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; VanNguyen, Thuy; Nosratinia, Aria
2013-01-01
Direct-to-Earth return links are limited by the size and power of lander devices. A standard alternative is provided by a two-hop return link: a proximity link (from lander to orbiter relay) and a deep-space link (from orbiter relay to Earth). Using this additional link and a proposed coding for relay channels, one can obtain a more reliable signal. Although significant progress has been made on the relay coding problem, existing codes must be painstakingly optimized to match a single set of channel conditions, many do not offer easy encoding, and most lack a structured design. A high-performing LDPC (low-density parity-check) code for the relay channel addresses two important issues simultaneously: a code structure that allows low encoding complexity, and a flexible rate-compatible code that can be matched to various channel conditions. Most previous high-performance LDPC codes for the relay channel are tightly optimized for a given channel quality and are not easily adapted to other channel conditions without extensive re-optimization. The code presented here combines structured design and easy encoding with rate compatibility, allowing adaptation to the three links involved in the relay channel, and furthermore offers very good performance. The proposed code is constructed by synthesizing a bilayer structure with a protograph. In addition to the contribution to relay encoding, an improved family of protograph codes was produced for the point-to-point AWGN (additive white Gaussian noise) channel, whose high-rate members enjoy thresholds within 0.07 dB of capacity. These LDPC relay codes address three important issues in an integrative manner: low encoding complexity, a modular structure allowing easy design, and rate compatibility so that the code can be matched to a variety of channel conditions without extensive re-optimization. The main problem of half-duplex relay coding can be reduced to the simultaneous design of two codes at two rates and two SNRs (signal-to-noise ratios), such that one is a subset of the other. This problem can be addressed by brute-force optimization, but a cleverer method is the bilayer lengthened (BL) LDPC structure. This method uses a bilayer Tanner graph to construct the two codes, together with a concept of "parity forwarding" with subsequent successive decoding, which removes the need to directly address the issue of uneven SNRs among the symbols of a given codeword. This method is attractive in that it addresses some of the main issues in the design of relay codes, but it does not by itself give rise to highly structured codes with simple encoding, nor does it give rate-compatible codes. The main contribution of this work is to construct a class of codes that simultaneously possess a bilayer parity-forwarding mechanism while also benefiting from the properties of protograph codes: easy encoding, a modular design, and rate compatibility.
Nonlinear Demodulation and Channel Coding in EBPSK Scheme
Chen, Xianqing; Wu, Lenan
2012-01-01
The extended binary phase shift keying (EBPSK) is an efficient modulation technique, and a special impacting filter (SIF) is used in its demodulator to improve the bit error rate (BER) performance. However, the conventional threshold decision cannot achieve the optimum performance, and the SIF brings more difficulty in obtaining the posterior probability for LDPC decoding. In this paper, we concentrate not only on reducing the BER of demodulation, but also on providing accurate posterior probability estimates (PPEs). A new approach for nonlinear demodulation based on the support vector machine (SVM) classifier is introduced. The SVM method, which selects only a few sampling points from the filter output, was used for obtaining PPEs. The simulation results show that an accurate posterior probability can be obtained with this method and that the BER performance can be improved significantly by applying LDPC codes. Moreover, we analyzed the effect of obtaining the posterior probability with different methods and different sampling rates. We show that the SVM method has more advantages under bad conditions and is less sensitive to the sampling rate than other methods. Thus, SVM is an effective method for EBPSK demodulation and for obtaining the posterior probability for LDPC decoding.
Accumulate Repeat Accumulate Coded Modulation
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
In this paper we propose an innovative coded modulation scheme called 'Accumulate Repeat Accumulate Coded Modulation' (ARA coded modulation). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes combined with high-level modulation. Thus, at the decoder, belief propagation can be used for iterative decoding of ARA coded modulation on a graph, provided a demapper transforms the received in-phase and quadrature samples into bit reliabilities.
Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei
2009-03-01
Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.
Joint Schemes for Physical Layer Security and Error Correction
ERIC Educational Resources Information Center
Adamo, Oluwayomi
2011-01-01
The major challenges facing resource constraint wireless devices are error resilience, security and speed. Three joint schemes are presented in this research which could be broadly divided into error correction based and cipher based. The error correction based ciphers take advantage of the properties of LDPC codes and Nordstrom Robinson code. A…
Encoders for block-circulant LDPC codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)
2009-01-01
Methods and apparatus to encode message input symbols in accordance with an accumulate-repeat-accumulate code with repetition three or four are disclosed. Block circulant matrices are used. A first method and apparatus make use of the block-circulant structure of the parity check matrix. A second method and apparatus use block-circulant generator matrices.
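The computational advantage of block-circulant matrices is that each b x b block is fully described by its first row, so encoding reduces to sums of cyclic shifts; the sketch below illustrates this with a made-up generator layout, not one of the patented code designs.

```python
import numpy as np

def circ_mul(msg_block, first_row):
    """Product of a length-b bit block with the b x b circulant whose
    first row is given: a sum (XOR) of cyclic shifts of that row."""
    out = np.zeros(len(first_row), dtype=int)
    for i in range(len(first_row)):
        if msg_block[i]:
            out ^= np.roll(first_row, i)
    return out

def encode(msg, G_rows, b):
    """msg: k = kb*b bits; G_rows[i][j] is the first row (a length-b
    int array) of the circulant block in position (i, j) of G."""
    kb, nb = len(G_rows), len(G_rows[0])
    code = np.zeros(nb * b, dtype=int)
    for i in range(kb):
        blk = msg[i * b:(i + 1) * b]
        for j in range(nb):
            code[j * b:(j + 1) * b] ^= circ_mul(blk, G_rows[i][j])
    return code
```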
NASA Astrophysics Data System (ADS)
Fehenberger, Tobias
2018-02-01
This paper studies probabilistic shaping in a multi-span wavelength-division multiplexing optical fiber system with 64-ary quadrature amplitude modulation (QAM) input. In split-step fiber simulations and via an enhanced Gaussian noise model, three figures of merit are investigated: signal-to-noise ratio (SNR), achievable information rate (AIR) for capacity-achieving forward error correction (FEC) with bit-metric decoding, and the information rate achieved with low-density parity-check (LDPC) FEC. For the considered system parameters and different shaped input distributions, shaping is found to decrease the SNR by 0.3 dB yet simultaneously increase the AIR by up to 0.4 bit per 4D symbol. The information rates of LDPC-coded modulation with shaped 64QAM input are improved by up to 0.74 bit per 4D symbol, which is larger than the shaping gain when considering AIRs. This increase is attributed to the reduced coding gap of the higher-rate code that is used for decoding the nonuniform QAM input.
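Shaped input distributions of this kind are commonly drawn from a Maxwell-Boltzmann family; the sketch below, with an assumed target rate and parameter range, bisects for the distribution over a 64-QAM alphabet whose entropy matches a target number of bits per symbol.

```python
import numpy as np

# Maxwell-Boltzmann shaping: P(x) ~ exp(-nu * |x|^2) over 64-QAM,
# with nu chosen so the per-symbol entropy hits a target rate.
AMPS = np.array([-7, -5, -3, -1, 1, 3, 5, 7], dtype=float)   # 8-PAM rails
X = np.array([a + 1j * b for a in AMPS for b in AMPS])       # 64-QAM points

def mb_dist(nu):
    p = np.exp(-nu * np.abs(X) ** 2)
    return p / p.sum()

def entropy_bits(p):
    return float(-(p * np.log2(p)).sum())

def shape_for_rate(target_bits, lo=0.0, hi=1.0, steps=60):
    """Bisect nu: entropy decreases monotonically as nu grows."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if entropy_bits(mb_dist(mid)) < target_bits else (mid, hi)
    return mb_dist(0.5 * (lo + hi))

p = shape_for_rate(5.0)   # e.g. 5 bits/symbol instead of the uniform 6
print(f"entropy = {entropy_bits(p):.3f} bits/symbol")
```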
High performance reconciliation for continuous-variable quantum key distribution with LDPC code
NASA Astrophysics Data System (ADS)
Lin, Dakai; Huang, Duan; Huang, Peng; Peng, Jinye; Zeng, Guihua
2015-03-01
Reconciliation is a significant procedure in a continuous-variable quantum key distribution (CV-QKD) system. It is employed to extract a secure secret key from the raw string shared through the quantum channel between two users. However, the efficiency and speed of previous reconciliation algorithms are low, which limits the secure communication distance and the secret key rate of CV-QKD systems. In this paper, we propose a high-speed reconciliation algorithm employing a well-structured decoding scheme based on low density parity-check (LDPC) codes; the complexity of the proposed algorithm is appreciably reduced. Using a graphics processing unit (GPU), our method reaches a reconciliation speed of 25 Mb/s for a CV-QKD system, which is currently the highest level and paves the way to high-speed CV-QKD.
Design and performance investigation of LDPC-coded upstream transmission systems in IM/DD OFDM-PONs
NASA Astrophysics Data System (ADS)
Gong, Xiaoxue; Guo, Lei; Wu, Jingjing; Ning, Zhaolong
2016-12-01
In Intensity-Modulation Direct-Detection (IM/DD) Orthogonal Frequency Division Multiplexing Passive Optical Networks (OFDM-PONs), aside from the Subcarrier-to-Subcarrier Intermixing Interference (SSII) induced by square-law detection, the use of the same laser frequency for data sent from all Optical Network Units (ONUs) results in ONU-to-ONU Beating Interference (OOBI) at the receiver. To mitigate these interferences, we design a Low-Density Parity-Check (LDPC)-coded, spectrum-efficient upstream transmission system. A theoretical channel model is also derived in order to analyze the detrimental factors influencing system performance. Simulation results demonstrate that the receiver sensitivity is improved by 3.4 dB and 2.5 dB under QPSK and 8QAM, respectively, after 100 km of Standard Single-Mode Fiber (SSMF) transmission. Furthermore, the spectral efficiency can be improved by about 50%.
MIMO-OFDM System's Performance Using LDPC Codes for a Mobile Robot
NASA Astrophysics Data System (ADS)
Daoud, Omar; Alani, Omar
This work deals with the performance of a Sniffer Mobile Robot (SNFRbot) based on spatially multiplexed wireless Orthogonal Frequency Division Multiplexing (OFDM) transmission technology. The use of Multi-Input Multi-Output (MIMO)-OFDM technology increases the wireless transmission rate without increasing transmission power or bandwidth. A generic multilayer architecture of the SNFRbot is proposed with low power consumption and low cost. Experimental results are presented and show the efficiency of sniffing deadly gases, sensing high temperatures, and sending live video of the monitored situation. Moreover, simulation results show the performance achieved by tackling the Peak-to-Average Power Ratio (PAPR) problem of the used technology with Low Density Parity Check (LDPC) codes, and the effect of combating the PAPR on the bit error rate (BER) and signal-to-noise ratio (SNR) over a Doppler-spread channel.
A new LDPC decoding scheme for PDM-8QAM BICM coherent optical communication system
NASA Astrophysics Data System (ADS)
Liu, Yi; Zhang, Wen-bo; Xi, Li-xia; Tang, Xian-feng; Zhang, Xiao-guang
2015-11-01
A new log-likelihood ratio (LLR) message estimation method is proposed for a polarization-division multiplexing eight-ary quadrature amplitude modulation (PDM-8QAM) bit-interleaved coded modulation (BICM) optical communication system. The formulation of the posterior probability is theoretically analyzed, and a way to reduce the pre-decoding bit error rate (BER) of the low density parity check (LDPC) decoder for PDM-8QAM constellations is presented. Simulation results show that it outperforms the traditional scheme: the new post-decoding BER is reduced to 50% of that of the traditional post-decoding algorithm.
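For context, a generic max-log LLR computation for a BICM receiver is sketched below; the ring-shaped 8-point constellation, the bit labeling, and the noise model are placeholders, not the paper's PDM-8QAM format or its improved posterior-probability estimate.

```python
import numpy as np

def max_log_llrs(y, points, labels, n_bits, noise_var):
    """Max-log LLRs (bit = 0 positive) for one received sample y:
    compare the nearest constellation point with each bit value."""
    d2 = np.abs(y - points) ** 2
    llrs = np.empty(n_bits)
    for b in range(n_bits):
        bit_b = (labels >> b) & 1
        d0 = d2[bit_b == 0].min()
        d1 = d2[bit_b == 1].min()
        llrs[b] = (d1 - d0) / noise_var
    return llrs

# Toy 8-point ring constellation with natural binary labeling.
points = np.exp(1j * 2 * np.pi * np.arange(8) / 8)
labels = np.arange(8)
print(max_log_llrs(0.9 + 0.1j, points, labels, n_bits=3, noise_var=0.1))
```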
2011-01-01
Fixed-point Design of the Lattice-reduction-aided Iterative Detection and Decoding Receiver for Coded MIMO Systems. The report considers error-correcting codes that improve reliability, e.g., Turbo codes [2] and Low Density Parity Check (LDPC) codes [3], discusses the challenge of applying both MIMO and ECC in wireless systems, and illustrates the performance of coded lattice-reduction-aided detectors.
A Low-Complexity and High-Performance 2D Look-Up Table for LDPC Hardware Implementation
NASA Astrophysics Data System (ADS)
Chen, Jung-Chieh; Yang, Po-Hui; Lain, Jenn-Kaie; Chung, Tzu-Wen
In this paper, we propose a low-complexity, high-efficiency two-dimensional look-up table (2D LUT) for carrying out the sum-product algorithm in the decoding of low-density parity-check (LDPC) codes. Instead of employing adders for the core operation when updating check-node messages, the proposed scheme merges the main term and the correction factor of the core operation into a compact 2D LUT. Simulation results indicate that the proposed 2D LUT not only attains close-to-optimal bit error rate performance but also enjoys a low-complexity advantage that makes it suitable for hardware implementation.
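The core operation being tabulated is the pairwise "boxplus" of two LLRs, whose main (min) term and correction factor can indeed be merged into one two-dimensional table; the resolution and clipping range in this sketch are illustrative choices.

```python
import numpy as np

# Sum-product check-node core operation for two LLRs a, b:
#   a [+] b = sign(a)sign(b) * min(|a|,|b|)
#             + log(1 + e^-|a+b|) - log(1 + e^-|a-b|)
# Tabulated once over quantized (|a|, |b|), so the decoder replaces
# the min term and the correction factor with a single 2-D look-up.

STEP, LEVELS = 0.25, 64  # quantize magnitudes to 64 levels of 0.25

def boxplus_mag(a_mag, b_mag):
    """Exact magnitude of the core operation for nonnegative inputs."""
    return (min(a_mag, b_mag)
            + np.log1p(np.exp(-(a_mag + b_mag)))
            - np.log1p(np.exp(-abs(a_mag - b_mag))))

# Precompute the 2-D LUT once, offline.
LUT = np.array([[boxplus_mag(i * STEP, j * STEP)
                 for j in range(LEVELS)] for i in range(LEVELS)])

def boxplus(a, b):
    """LUT-based check-node update for two incoming LLRs."""
    i = min(int(abs(a) / STEP), LEVELS - 1)
    j = min(int(abs(b) / STEP), LEVELS - 1)
    return np.sign(a) * np.sign(b) * LUT[i, j]
```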
Measurement Techniques for Clock Jitter
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin; Schlesinger, Adam
2012-01-01
NASA is in the process of modernizing its communications infrastructure to accompany the development of a Crew Exploration Vehicle (CEV) to replace the shuttle. With this effort comes the opportunity to infuse more advanced coded modulation techniques, including low-density parity-check (LDPC) codes that offer greater coding gains than the current capability. However, in order to take full advantage of these codes, the ground segment receiver synchronization loops must be able to operate at a lower signal-to-noise ratio (SNR) than supported by equipment currently in use.
Landsat Data Continuity Mission (LDCM) - Optimizing X-Band Usage
NASA Technical Reports Server (NTRS)
Garon, H. M.; Gal-Edd, J. S.; Dearth, K. W.; Sank, V. I.
2010-01-01
The NASA version of the low-density parity check (LDPC) 7/8-rate code, shortened to the dimensions of (8160, 7136), has been implemented as the forward error correction (FEC) scheme for the Landsat Data Continuity Mission (LDCM). This is the first flight application of this code. In order to place a 440 Msps link within the 375 MHz-wide X band, we found it necessary to heavily bandpass filter the satellite transmitter output. Despite the significant amplitude and phase distortions that accompanied the spectral truncation, the mission-required BER of < 10^(-12) is maintained with less than 2 dB of implementation loss. We utilized a band-pass filter designed to replicate the link distortions to demonstrate link design viability. The same filter was then used to optimize the adaptive equalizer in the receiver employed at the terminus of the downlink. The excellent results we obtained could be directly attributed to the implementation of the LDPC code and the amplitude and phase compensation provided in the receiver. Similar results were obtained with receivers from several vendors.
Quantum Kronecker sum-product low-density parity-check codes with finite rate
NASA Astrophysics Data System (ADS)
Kovalev, Alexey A.; Pryadko, Leonid P.
2013-07-01
We introduce an ansatz for quantum codes which gives the hypergraph-product (generalized toric) codes of Tillich and Zémor and the generalized bicycle codes of MacKay as limiting cases. The construction allows for both lower and upper bounds on the minimum distance, which scale as the square root of the block length. Many of the codes thus defined have a finite rate and limited-weight stabilizer generators, an analog of classical low-density parity-check (LDPC) codes. Compared to the hypergraph-product codes, these hyperbicycle codes generally have a wider range of parameters; in particular, they can have a higher rate while preserving the estimated error threshold.
Accumulate-Repeat-Accumulate-Accumulate Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Samuel; Thorpe, Jeremy
2007-01-01
Accumulate-repeat-accumulate-accumulate (ARAA) codes have been proposed, inspired by the recently proposed accumulate-repeat-accumulate (ARA) codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. ARAA codes can be regarded as serial turbo-like codes or as a subclass of low-density parity-check (LDPC) codes, and, like ARA codes, they have projected-graph (protograph) representations; these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The objective in proposing ARAA codes as a subclass of ARA codes was to enhance the error-floor performance of ARA codes while maintaining simple encoding structures and a low maximum variable-node degree.
A Simulation Testbed for Adaptive Modulation and Coding in Airborne Telemetry (Brief)
2014-10-01
[Briefing slides; only fragments are recoverable. The testbed simulates an 802.11a-style OFDM waveform with a MATLAB GUI, configurable cyclic prefix lengths and numbers of subcarriers, BPSK/QPSK/16-QAM/64-QAM modulations (plus SOQPSK), and LDPC coding at rates 1/2, 2/3, 3/4, and 4/5. Approved for public release; distribution unlimited.]
500 Gb/s free-space optical transmission over strong atmospheric turbulence channels.
Qu, Zhen; Djordjevic, Ivan B
2016-07-15
We experimentally demonstrate a high-spectral-efficiency, large-capacity free-space optical (FSO) transmission system using low-density parity-check (LDPC)-coded quadrature phase shift keying (QPSK) combined with orbital angular momentum (OAM) multiplexing. The strong atmospheric turbulence channel is emulated by two spatial light modulators on which four randomly generated azimuthal phase patterns yielding the Andrews spectrum are recorded. The validity of this approach is verified by reproducing the intensity distribution and irradiance correlation function (ICF) from the full-scale simulator. Excellent agreement of experimental, numerical, and analytical results is found. To reduce the phase distortion induced by the turbulence emulator, inexpensive wavefront-sensorless adaptive optics (AO) is used. To deal with the remaining channel impairments, a large-girth LDPC code is used. To further improve the aggregate data rate, the OAM multiplexing is combined with WDM, and 500 Gb/s optical transmission over strong atmospheric turbulence channels is demonstrated.
NASA Astrophysics Data System (ADS)
Chen, Jung-Chieh
This paper presents a low-complexity algorithmic framework for finding a broadcasting schedule in a low-altitude satellite system, i.e., the satellite broadcast scheduling (SBS) problem, based on the modeling and computational methodology of factor graphs. Inspired by the success of low density parity check (LDPC) codes in error control coding, we transform the SBS problem into an LDPC-like problem on a factor graph, instead of using conventional neural network approaches. Within this framework, soft information describing the probability that each satellite will broadcast to a terminal in a specific time slot is exchanged among local processing units via the sum-product algorithm to iteratively optimize the broadcasting schedule. Numerical results show that the proposed approach not only obtains optimal solutions but also has a low complexity suitable for integrated-circuit implementation.
Optimum Boundaries of Signal-to-Noise Ratio for Adaptive Code Modulations
2017-11-14
[Final report; only fragments are recoverable. The report cites adaptive-rate nonbinary LDPC coding for frequency-hop communications (Pursley and Royster, IEEE) and notes that very narrowband noise near the center frequency during USRP signal acquisition and generation can cause a high BER. Air Force Research Laboratory, Space Vehicles Directorate. Approved for public release; distribution unlimited.]
Research on Formation of Microsatellite Communication with Genetic Algorithm
Wu, Guoqiang; Bai, Yuguang; Sun, Zhaowei
2013-01-01
For a formation of three microsatellites that fly in the same orbit and perform three-dimensional solid mapping of the Earth, this paper proposes a design method for optimizing the space circular formation order based on an improved genetic algorithm, and provides an intersatellite direct-sequence spread spectrum communication system. The link equation for LEO formation-flying intersatellite links is driven by the special requirements of formation-flying microsatellite intersatellite links, and the transmitter power is confirmed through simulation. The proposed optimization method can keep the formation order steady for a long time under various perturbations. It is found that, at a distance of 1 km and a data rate of 1 Mbps, the input waveform matches the output waveform well, and LDPC coding improves the communication performance: the error-correcting capability of the (512, 256) LDPC code is distinctly better than that of the (2, 1, 7) convolutional code. The designed system satisfies the communication requirements of the microsatellites, so the presented method provides a solid theoretical foundation for formation flying and intersatellite communication. PMID:24078796
LDPC decoder with a limited-precision FPGA-based floating-point multiplication coprocessor
NASA Astrophysics Data System (ADS)
Moberly, Raymond; O'Sullivan, Michael; Waheed, Khurram
2007-09-01
Implementing the sum-product algorithm in an FPGA with an embedded processor invites a tradeoff between computational precision and computational speed. The algorithm, known outside of the signal processing community as Pearl's belief propagation, is used for iterative soft-decision decoding of LDPC codes. We determined the feasibility of a coprocessor that performs the product computations. Our FPGA-based coprocessor design performs arithmetic with significantly less precision than the standard (e.g., integer, floating-point) operations of general-purpose processors. Using synthesis targeting a 3,168-LUT Xilinx FPGA, we show that key components of a decoder are feasible and that a full single-precision decoder could be constructed using a larger part. Soft-decision decoding by the iterative belief propagation algorithm is affected both positively and negatively by a reduction in the precision of the computation: reducing precision reduces the coding gain, but the limited-precision computation can operate faster. The proposed solution offers custom logic to perform computations with less precision, yet uses the floating-point format to interface with software. Simulation results show the achievable coding gain, and synthesis results support estimates of the full capacity and performance of an FPGA-based coprocessor.
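As a rough illustration of the precision/speed tradeoff studied here, the sketch below emulates a product unit whose result mantissa is truncated to a few bits while retaining a floating-point interface; the 8-bit mantissa is an assumed figure, not the paper's design point.

import numpy as np

def truncated_mul(x, y, mant_bits=8):
    # Multiply exactly, then truncate the product's mantissa to
    # mant_bits, mimicking a limited-precision hardware multiplier
    # that still exchanges standard floating-point values.
    m, e = np.frexp(x * y)            # m in [0.5, 1), e an integer exponent
    scale = 1 << mant_bits
    m = np.trunc(m * scale) / scale   # drop low-order mantissa bits
    return np.ldexp(m, e)

# Example: effect on a probability-domain product of the kind used
# in belief propagation.
print(truncated_mul(0.8723191, 0.6641577))   # ~0.578125 vs exact ~0.579357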
Neural network decoder for quantum error correcting codes
NASA Astrophysics Data System (ADS)
Krastanov, Stefan; Jiang, Liang
Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.
NASA Astrophysics Data System (ADS)
He, Jing; Dai, Min; Chen, Qinghui; Deng, Rui; Xiang, Changqing; Chen, Lin
2017-07-01
In this paper, an effective bit-loading scheme combined with an adaptive LDPC code rate (ALCR) algorithm is proposed and investigated in a software-reconfigurable multiband UWB-over-fiber system. To compensate the power fading and chromatic dispersion affecting the high-frequency multiband OFDM UWB signal transmitted over standard single-mode fiber (SSMF), a Mach-Zehnder modulator (MZM) with a negative chirp parameter is utilized. A negative power penalty of -1 dB for the 128-QAM multiband OFDM UWB signal is measured at the hard-decision forward error correction (HD-FEC) limit of 3.8 × 10^(-3) after 50 km of SSMF transmission. The experimental results show that, compared to a fixed coding scheme with a code rate of 75%, the signal-to-noise ratio (SNR) is improved by 2.79 dB for the 128-QAM multiband OFDM UWB system after 100 km of SSMF transmission using the ALCR algorithm. Moreover, by employing bit-loading combined with the ALCR algorithm, the bit error rate (BER) performance of the system can be improved further. The simulation results show that, at the HD-FEC limit, the Q factor is improved by 3.93 dB at an SNR of 19.5 dB over 100 km of SSMF transmission, compared to fixed modulation with an uncoded scheme at the same spectral efficiency (SE).
Finite-connectivity spin-glass phase diagrams and low-density parity check codes.
Migliorini, Gabriele; Saad, David
2006-02-01
We obtain phase diagrams of regular and irregular finite-connectivity spin glasses. Contact is first established between properties of the phase diagram and the performance of low-density parity check (LDPC) codes within the replica symmetric (RS) ansatz. We then study the location of the dynamical and critical transition points of these systems within the one step replica symmetry breaking theory (RSB), extending similar calculations that have been performed in the past for the Bethe spin-glass problem. We observe that the location of the dynamical transition line does change within the RSB theory, in comparison with the results obtained in the RS case. For LDPC decoding of messages transmitted over the binary erasure channel we find, at zero temperature and rate , an RS critical transition point at while the critical RSB transition point is located at , to be compared with the corresponding Shannon bound . For the binary symmetric channel we show that the low temperature reentrant behavior of the dynamical transition line, observed within the RS ansatz, changes its location when the RSB ansatz is employed; the dynamical transition point occurs at higher values of the channel noise. Possible practical implications to improve the performance of the state-of-the-art error correcting codes are discussed.
Advanced error-prediction LDPC with temperature compensation for highly reliable SSDs
NASA Astrophysics Data System (ADS)
Tokutomi, Tsukasa; Tanakamaru, Shuhei; Iwasaki, Tomoko Ogura; Takeuchi, Ken
2015-09-01
To improve the reliability of NAND Flash memory based solid-state drives (SSDs), error-prediction LDPC (EP-LDPC) has been proposed for multi-level-cell (MLC) NAND Flash memory (Tanakamaru et al., 2012, 2013), which is effective for long retention times. However, EP-LDPC is not as effective for triple-level cell (TLC) NAND Flash memory, because TLC NAND Flash has higher error rates and is more sensitive to program-disturb error. Therefore, advanced error-prediction LDPC (AEP-LDPC) has been proposed for TLC NAND Flash memory (Tokutomi et al., 2014). AEP-LDPC can correct errors more accurately by precisely describing the error phenomena. In this paper, the effects of AEP-LDPC are investigated in a 2×nm TLC NAND Flash memory with temperature characterization. Compared with LDPC-with-BER-only, the SSD's data-retention time is increased by 3.4× and 9.5× at room-temperature (RT) and 85 °C, respectively. Similarly, the acceptable BER is increased by 1.8× and 2.3×, respectively. Moreover, AEP-LDPC can correct errors with pre-determined tables made at higher temperatures to shorten the measurement time before shipping. Furthermore, it is found that one table can cover behavior over a range of temperatures in AEP-LDPC. As a result, the total table size can be reduced to 777 kBytes, which makes this approach more practical.
Sparsening Filter Design for Iterative Soft-Input Soft-Output Detectors
2012-02-29
[Final report; only fragments are recoverable. The report concerns sparsening filter design for iterative soft-input soft-output detectors. Since the belief-propagation (BP) detector itself is unaltered from [1], the filter/detector structure can accommodate channel codes such as LDPC encoding, and readily extends to the MIMO case with, for example, space-time coding as in [2,8]. Cost surfaces were minimized with the simplex method of [15], as available in MATLAB via the "fminsearch" function.]
Joint Carrier-Phase Synchronization and LDPC Decoding
NASA Technical Reports Server (NTRS)
Simon, Marvin; Valles, Esteban
2009-01-01
A method has been proposed to increase the degree of synchronization of a radio receiver with the phase of a suppressed carrier signal modulated with a binary-phase-shift-keying (BPSK) or quaternary-phase-shift-keying (QPSK) signal representing a low-density parity-check (LDPC) code. This method is an extended version of the method described in "Using LDPC Code Constraints to Aid Recovery of Symbol Timing" (NPO-43112), NASA Tech Briefs, Vol. 32, No. 10 (October 2008), page 54. Both methods, and the receiver architectures in which they would be implemented, belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless, in that they do not require transmission and reception of pilot signals. The proposed method calls for the use of what is known in the art as soft-decision feedback to remove the modulation from a replica of the incoming signal before feeding this replica to a phase-locked loop (PLL) or other carrier-tracking stage in the receiver. Soft-decision feedback refers to suitably processed versions of intermediate results of the iterative computations involved in the LDPC decoding process. Unlike a related prior method in which hard-decision feedback (the final sequence of decoded symbols) is used to remove the modulation, the proposed method does not require estimation of the decoder error probability. In a basic digital implementation of the proposed method, the incoming signal, having carrier phase theta(sub c) plus noise, would first be converted to in-phase (I) and quadrature (Q) baseband signals by mixing it with I and Q signals at the carrier frequency [omega(sub c)/(2 pi)] generated by a local oscillator. The resulting demodulated signals would be processed through one-symbol-period integrate-and-dump filters, the outputs of which would be sampled and held, then multiplied by a soft-decision version of the baseband modulated signal. The resulting I and Q products consist of terms proportional to the cosine and sine of the carrier phase theta(sub c), as well as correlated noise components. These products would be fed as inputs to a digital PLL that includes a number-controlled oscillator (NCO), which provides an estimate of the carrier phase, theta(sub c).
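A compact sketch of the soft-decision feedback idea for the BPSK case follows. The tanh mapping from decoder LLRs to soft symbol estimates is one standard choice and is assumed here for illustration; it is not taken verbatim from the article.

import numpy as np

def phase_error_estimate(i_samples, q_samples, post_llrs):
    # i_samples, q_samples: sampled-and-held integrate-and-dump outputs.
    # post_llrs: intermediate posterior LLRs from the LDPC decoder.
    soft = np.tanh(post_llrs / 2.0)    # soft-decision symbol estimates
    zi = (i_samples * soft).mean()     # term proportional to cos(theta_c)
    zq = (q_samples * soft).mean()     # term proportional to sin(theta_c)
    return np.arctan2(zq, zi)          # phase error driving the PLL/NCO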
Low-Density Parity-Check Code Design Techniques to Simplify Encoding
NASA Astrophysics Data System (ADS)
Perez, J. M.; Andrews, K.
2007-11-01
This work describes a method for encoding low-density parity-check (LDPC) codes based on the accumulate-repeat-4-jagged-accumulate (AR4JA) scheme, using the low-density parity-check matrix H instead of the dense generator matrix G. Using the H matrix to encode allows a significant reduction in memory consumption and gives the encoder design great flexibility. Also described are new hardware-efficient codes, based on the same kind of protographs, which require less memory storage and area while also reducing the encoding delay.
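To make the H-based encoding concrete, here is a minimal sketch that assumes, purely for illustration, a parity-check matrix whose parity columns form a lower-triangular submatrix over GF(2), so parity bits follow by back-substitution; AR4JA encoders exploit comparable sparse structure rather than a dense G.

import numpy as np

def encode_from_H(H, msg):
    # H = [H_m | H_p] with H_p square, lower-triangular, unit diagonal
    # (an assumption of this sketch). Solve H_m m + H_p p = 0 over GF(2).
    m, n = H.shape
    k = n - m
    Hm, Hp = H[:, :k], H[:, k:]
    p = np.zeros(m, dtype=np.uint8)
    for i in range(m):                # back-substitute row by row
        p[i] = (Hm[i] @ msg + Hp[i, :i] @ p[:i]) & 1
    return np.concatenate([msg, p])

# Tiny toy code (not an AR4JA code): n = 6, k = 3.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 1, 1, 0],
              [1, 0, 1, 0, 1, 1]], dtype=np.uint8)
cw = encode_from_H(H, np.array([1, 0, 1], dtype=np.uint8))
assert not ((H @ cw) & 1).any()       # H c = 0 over GF(2)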
System on a Chip Real-Time Emulation (SOCRE)
2006-09-01
[Final report; only fragments are recoverable. The emulation platform included LDPC decoders and audio/video and radio applications; one key task was porting the BEE flow to emulation platforms and SOC technologies. Once a design has been described within Simulink, the designer runs the BEE design flow within MATLAB using the bee_xps interface.]
NASA Astrophysics Data System (ADS)
Pryadko, Leonid P.; Dumer, Ilya; Kovalev, Alexey A.
2015-03-01
We construct a lower (existence) bound for the threshold of scalable quantum computation which is applicable to all stabilizer codes, including degenerate quantum codes with sublinear distance scaling. The threshold is based on enumerating irreducible operators in the normalizer of the code, i.e., those that cannot be decomposed into a product of two such operators with non-overlapping support. For quantum LDPC codes with logarithmic or power-law distances, we get threshold values which are parametrically better than the existing analytical bound based on percolation. The new bound also gives a finite threshold when applied to other families of degenerate quantum codes, e.g., the concatenated codes. This research was supported in part by the NSF Grant PHY-1416578 and by the ARO Grant W911NF-11-1-0027.
Coded Modulation in C and MATLAB
NASA Technical Reports Server (NTRS)
Hamkins, Jon; Andrews, Kenneth S.
2011-01-01
This software, written separately in C and MATLAB as stand-alone packages with equivalent functionality, implements encoders and decoders for a set of nine error-correcting codes and modulators and demodulators for five modulation types. The software can be used as a single program to simulate the performance of such coded modulation. The error-correcting codes implemented are the nine accumulate repeat-4 jagged accumulate (AR4JA) low-density parity-check (LDPC) codes, which have been approved for international standardization by the Consultative Committee for Space Data Systems, and which are scheduled to fly on a series of NASA missions in the Constellation Program. The software implements the encoder and decoder functions, and contains compressed versions of generator and parity-check matrices used in these operations.
Low-complexity video encoding method for wireless image transmission in capsule endoscope.
Takizawa, Kenichi; Hamaguchi, Kiyoshi
2010-01-01
This paper presents a low-complexity video encoding method applicable to wireless image transmission in capsule endoscopes. The method is based on Wyner-Ziv theory, in which information available at the transmitter is exploited as side information at the receiver. Complex processes in video encoding, such as motion-vector estimation, are therefore moved to the receiver side, which has a larger-capacity battery. As a result, the encoding process reduces to decimating the channel-coded original data. We provide a performance evaluation of a low-density parity check (LDPC) coding method over the AWGN channel.
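One common realization of this transmitter-side decimation is a syndrome former: the capsule sends only the syndrome of its (binarized) frame data under a sparse parity-check matrix, and the receiver decodes using the previous frame as side information. A toy sketch with an assumed, illustrative H:

import numpy as np

H = np.array([[1, 0, 1, 1, 0],        # illustrative sparse matrix,
              [0, 1, 1, 0, 1]],       # not the paper's code
             dtype=np.uint8)

def wz_encode(frame_bits):
    # The entire transmitter-side workload: one sparse matrix-vector
    # product over GF(2), i.e., s = H x (mod 2).
    return (H @ frame_bits) & 1

print(wz_encode(np.array([1, 1, 0, 1, 0], dtype=np.uint8)))  # -> [0 1]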
High performance and cost effective CO-OFDM system aided by polar code.
Liu, Ling; Xiao, Shilin; Fang, Jiafei; Zhang, Lu; Zhang, Yunhao; Bi, Meihua; Hu, Weisheng
2017-02-06
A novel polar-coded coherent optical orthogonal frequency division multiplexing (CO-OFDM) system is proposed and demonstrated experimentally for the first time. The principle of the polar-coded CO-OFDM signal is illustrated theoretically, and a suitable polar decoding method is discussed. Results show that the polar-coded CO-OFDM signal achieves a net coding gain (NCG) of more than 10 dB at a bit error rate (BER) of 10^(-3) over 25-Gb/s 480-km transmission in comparison with conventional CO-OFDM. Compared to a 25-Gb/s low-density parity-check (LDPC)-coded CO-OFDM 160-km system, the polar code provides an NCG of 0.88 dB at a BER of 10^(-3). Moreover, the polar code can greatly relax the laser linewidth requirement, yielding a more cost-effective CO-OFDM system.
Performance of Low-Density Parity-Check Coded Modulation
NASA Technical Reports Server (NTRS)
Hamkins, Jon
2010-01-01
This paper reports the simulated performance of each of the nine accumulate-repeat-4-jagged-accumulate (AR4JA) low-density parity-check (LDPC) codes [3] when used in conjunction with binary phase-shift-keying (BPSK), quadrature PSK (QPSK), 8-PSK, 16-ary amplitude PSK (16-APSK), and 32-APSK. We also report the performance under various mappings of bits to modulation symbols, 16-APSK and 32-APSK ring scalings, log-likelihood ratio (LLR) approximations, and decoder variations. One of the simple and well-performing LLR approximations can be expressed in a general equation that applies to all of the modulation types.
Future capabilities for the Deep Space Network
NASA Technical Reports Server (NTRS)
Berner, J. B.; Bryant, S. H.; Andrews, K. S.
2004-01-01
This paper looks at three new capabilities that are in different stages of development. First, turbo decoding, which provides improved telemetry performance for data rates up to about 1 Mbps, is discussed. Next, pseudo-noise ranging is presented; it has several advantages over the current sequential ranging, namely easier operations, improved performance, and the capability to be used in a regenerative implementation on a spacecraft. Finally, Low Density Parity Check (LDPC) decoding is discussed. LDPC codes can provide performance that matches or slightly exceeds that of turbo codes, but are designed for use in the 10 Mbps range.
16QAM transmission with 5.2 bits/s/Hz spectral efficiency over transoceanic distance.
Zhang, H; Cai, J-X; Batshon, H G; Davidson, C R; Sun, Y; Mazurczyk, M; Foursa, D G; Pilipetskii, A; Mohs, G; Bergano, Neal S
2012-05-21
We transmit 160 x 100G PDM RZ 16QAM channels with 5.2 bits/s/Hz spectral efficiency over 6,860 km. More than 3 billion 16QAM symbols, i.e., 12 billion bits, are processed in total. Using coded modulation and iterative decoding between a MAP decoder and an LDPC-based FEC, all channels are decoded with no remaining errors.
Batshon, Hussam G; Djordjevic, Ivan; Xu, Lei; Wang, Ting
2010-06-21
In this paper, we present a modified coded hybrid subcarrier/amplitude/phase/polarization (H-SAPP) modulation scheme as a technique capable of achieving beyond-400 Gb/s single-channel transmission over optical channels. The modified H-SAPP scheme exploits the available resources in addition to geometry to increase the bandwidth efficiency of the transmission system, and so increases its aggregate rate. In this report we present the modified H-SAPP scheme and focus on an example that carries 11 bits/symbol and can achieve 440 Gb/s transmission using components operating at 50 Gigasymbols/s (GS/s).
Frame Synchronization Without Attached Sync Markers
NASA Technical Reports Server (NTRS)
Hamkins, Jon
2011-01-01
We describe a method to synchronize codeword frames without making use of attached synchronization markers (ASMs). Instead, the synchronizer identifies the code structure present in the received symbols by operating the decoder for a handful of iterations at each possible symbol offset and forming an appropriate metric. This method is computationally more complex and does not perform as well as frame synchronizers that utilize an ASM; nevertheless, the new synchronizer acquires frame synchronization in about two seconds when using a 600 kbps software decoder, and would take about 15 milliseconds on prototype hardware. It also eliminates the need for the ASMs, which is an attractive feature for short uplink codes whose coding gain would be diminished by the overhead of ASM bits. The lack of ASMs also simplifies clock distribution for the AR4JA low-density parity-check (LDPC) codes and adds a small amount to the coding gain as well (up to 0.2 dB).
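A stripped-down sketch of the metric idea: score each candidate offset by how many parity checks the hard decisions in that window satisfy. The actual synchronizer refines this by running a few decoder iterations per offset; the window scoring below is an illustrative simplification.

import numpy as np

def best_offset(H, llr_stream, n, offsets):
    # Return the symbol offset whose length-n window best matches the
    # code structure, measured by the number of satisfied parity checks.
    def score(off):
        hard = (llr_stream[off:off + n] < 0).astype(np.uint8)
        return int((((H @ hard) & 1) == 0).sum())
    return max(offsets, key=score)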
NASA Astrophysics Data System (ADS)
Nakamura, Yusuke; Hoshizawa, Taku
2016-09-01
Two methods for increasing the data capacity of a holographic data storage system (HDSS) were developed. The first method is called “run-length-limited (RLL) high-density recording”. An RLL modulation has the same effect as enlarging the pixel pitch; namely, it optically reduces the hologram size. Accordingly, the method doubles the raw-data recording density. The second method is called “RLL turbo signal processing”. The RLL turbo code consists of RLL(1,∞) trellis modulation and an optimized convolutional code. The remarkable point of the developed turbo code is that it employs the RLL modulator and demodulator as parts of the error-correction process. The turbo code improves the error-correction capability beyond that of a conventional LDPC code, even when interpixel interference is present. These two methods increase the data density 1.78-fold. Moreover, a data density of 2.4 Tbit/in² is confirmed by simulation and experiment.
NASA Astrophysics Data System (ADS)
de Schryver, C.; Weithoffer, S.; Wasenmüller, U.; Wehn, N.
2012-09-01
Channel coding is a standard technique in all wireless communication systems. In addition to the typically employed methods like convolutional coding, turbo coding or low density parity check (LDPC) coding, algebraic codes are used in many cases. For example, outer BCH coding is applied in the DVB-S2 standard for satellite TV broadcasting. A key operation for BCH and the related Reed-Solomon codes are multiplications in finite fields (Galois Fields), where extension fields of prime fields are used. A lot of architectures for multiplications in finite fields have been published over the last decades. This paper examines four different multiplier architectures in detail that offer the potential for very high throughputs. We investigate the implementation performance of these multipliers on FPGA technology in the context of channel coding. We study the efficiency of the multipliers with respect to area, frequency and throughput, as well as configurability and scalability. The implementation data of the fully verified circuits are provided for a Xilinx Virtex-4 device after place and route.
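For reference, the field arithmetic these architectures accelerate reduces, in bit-serial form, to shift-and-XOR with modular reduction by the field polynomial. The sketch below uses GF(2^8) with the polynomial x^8 + x^4 + x^3 + x + 1 as an illustrative field; the multipliers examined in the paper realize the same algebra with high-throughput parallel structures.

def gf256_mul(a: int, b: int, poly: int = 0x11B) -> int:
    # Bit-serial multiplication in GF(2^8): add (XOR) the shifted
    # multiplicand for each set bit of b, reducing after each shift.
    r = 0
    while b:
        if b & 1:
            r ^= a          # carry-free addition in GF(2)
        a <<= 1
        if a & 0x100:
            a ^= poly       # reduce back into 8 bits
        b >>= 1
    return r

assert gf256_mul(0x53, 0xCA) == 0x01  # a known inverse pair in this field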
Potts glass reflection of the decoding threshold for qudit quantum error correcting codes
NASA Astrophysics Data System (ADS)
Jiang, Yi; Kovalev, Alexey A.; Pryadko, Leonid P.
We map the maximum likelihood decoding threshold for qudit quantum error correcting codes to the multicritical point in generalized Potts gauge glass models, extending the map constructed previously for qubit codes. An n-qudit quantum LDPC code, where a qudit can be involved in up to m stabilizer generators, corresponds to a ℤ_d Potts model with n interaction terms which can couple up to m spins each. We analyze general properties of the phase diagram of the constructed model, give several bounds on the location of the transitions, bound the energy density of extended defects (non-local analogs of domain walls), and discuss the correlation functions which can be used to distinguish different phases in the original and the dual models. This research was supported in part by the Grants: NSF PHY-1415600 (AAK), NSF PHY-1416578 (LPP), and ARO W911NF-14-1-0272 (LPP).
Layered Wyner-Ziv video coding.
Xu, Qian; Xiong, Zixiang
2006-12-01
Following recent theoretical works on successive Wyner-Ziv coding (WZC), we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered WZC for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic WZC when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that WZC gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding are more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks.
High-Performance CCSDS AOS Protocol Implementation in FPGA
NASA Technical Reports Server (NTRS)
Clare, Loren P.; Torgerson, Jordan L.; Pang, Jackson
2010-01-01
The Consultative Committee for Space Data Systems (CCSDS) Advanced Orbiting Systems (AOS) space data link protocol provides a framing layer between channel coding such as LDPC (low-density parity-check) and higher-layer link multiplexing protocols such as CCSDS Encapsulation Service, which is described in the following article. Recent advancement in RF modem technology has allowed multi-megabit transmission over space links. With this increase in data rate, the CCSDS AOS protocol implementation needs to be optimized to both reduce energy consumption and operate at a high rate.
SCaN Network Ground Station Receiver Performance for Future Service Support
NASA Technical Reports Server (NTRS)
Estabrook, Polly; Lee, Dennis; Cheng, Michael; Lau, Chi-Wung
2012-01-01
Objectives: Examine the impact of providing the newly standardized CCSDS Low Density Parity Check (LDPC) codes to the SCaN return data service on the SCaN SN and DSN ground station receivers: the SN's current receiver, the Integrated Receiver (IR); the DSN's current receiver, the Downlink Telemetry and Tracking (DTT) receiver; and an early commercial-off-the-shelf (COTS) prototype of the SN User Service Subsystem Component Replacement (USS CR) narrowband receiver. A further objective is to motivate discussion of general issues of ground station hardware design that enable simple and cheap modifications to support future services.
Constellation labeling optimization for bit-interleaved coded APSK
NASA Astrophysics Data System (ADS)
Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe
2016-05-01
This paper investigates the constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in the Digital Video Broadcasting Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its power and spectral efficiency together with its robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A binary switching algorithm and a modified version of it are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulation results validate the proposed constellation labeling optimization scheme, which yields better performance than the conventional 32-APSK constellation defined in the DVB-S2 standard.
Mechanisms of lectin and antibody-dependent polymorphonuclear leukocyte-mediated cytolysis.
Tsunawaki, S; Ikenami, M; Mizuno, D; Yamazaki, M
1983-04-01
The mechanisms of tumor lysis by polymorphonuclear leukocytes (PMNs) were investigated. In antibody-dependent PMN-mediated cytolysis (ADPC), sensitized tumor cells were specifically lysed via Fc receptors on PMNs. On the other hand, lectin-dependent PMN-mediated cytolysis (LDPC) caused nonspecific lysis of several murine tumors after recognition of carbohydrate moieties on the cell membranes of both PMNs and tumor cells. Both ADPC and LDPC depended on glycolysis, and cytotoxicity was mediated by reactive oxygen species; LDPC was dependent on superoxide and ADPC on the myeloperoxidase system. The participation of reactive oxygen species in PMN cytotoxicity was also demonstrated by pharmacological triggering with phorbol myristate acetate. These results indicate that reactive oxygen species have an important role in tumor killing by PMNs and that ADPC and LDPC have partly different cytolytic processes as well as different recognition steps.
Progressive transmission of images over fading channels using rate-compatible LDPC codes.
Pan, Xiang; Banihashemi, Amir H; Cuhadar, Aysegul
2006-12-01
In this paper, we propose a combined source/channel coding scheme for transmission of images over fading channels. The proposed scheme employs rate-compatible low-density parity-check codes along with embedded image coders such as JPEG2000 and set partitioning in hierarchical trees (SPIHT). The assignment of channel coding rates to source packets is performed by a fast trellis-based algorithm. We examine the performance of the proposed scheme over correlated and uncorrelated Rayleigh flat-fading channels with and without side information. Simulation results for the expected peak signal-to-noise ratio of reconstructed images, which are within 1 dB of the capacity upper bound over a wide range of channel signal-to-noise ratios, show considerable improvement compared to existing results under similar conditions. We also study the sensitivity of the proposed scheme in the presence of channel estimation error at the transmitter and demonstrate that under most conditions our scheme is more robust compared to existing schemes.
NASA Tech Briefs, October 2009
NASA Technical Reports Server (NTRS)
2009-01-01
Topics covered include: Light-Driven Polymeric Bimorph Actuators; Guaranteeing Failsafe Operation of Extended-Scene Shack-Hartmann Wavefront Sensor Algorithm; Cloud Water Content Sensor for Sounding Balloons and Small UAVs; Pixelized Device Control Actuators for Large Adaptive Optics; T-Slide Linear Actuators; G4FET Implementations of Some Logic Circuits; Electrically Variable or Programmable Nonvolatile Capacitors; System for Automated Calibration of Vector Modulators; Complementary Paired G4FETs as Voltage-Controlled NDR Device; Three MMIC Amplifiers for the 120-to-200 GHz Frequency Band; Low-Noise MMIC Amplifiers for 120 to 180 GHz; Using Ozone To Clean and Passivate Oxygen-Handling Hardware; Metal Standards for Waveguide Characterization of Materials; Two-Piece Screens for Decontaminating Granular Material; Mercuric Iodide Anticoincidence Shield for Gamma-Ray Spectrometer; Improved Method of Design for Folding Inflatable Shells; Ultra-Large Solar Sail; Cooperative Three-Robot System for Traversing Steep Slopes; Assemblies of Conformal Tanks; Microfluidic Pumps Containing Teflon[Trademark] AF Diaphragms; Transparent Conveyor of Dielectric Liquids or Particles; Multi-Cone Model for Estimating GPS Ionospheric Delays; High-Sensitivity GaN Microchemical Sensors; On the Divergence of the Velocity Vector in Real-Gas Flow; Progress Toward a Compact, Highly Stable Ion Clock; Instruments for Imaging from Far to Near; Reflectors Made from Membranes Stretched Between Beams; Integrated Risk and Knowledge Management Program -- IRKM-P; LDPC Codes with Minimum Distance Proportional to Block Size; Constructing LDPC Codes from Loop-Free Encoding Modules; MMICs with Radial Probe Transitions to Waveguides; Tests of Low-Noise MMIC Amplifier Module at 290 to 340 GHz; and Extending Newtonian Dynamics to Include Stochastic Processes.
Design and Implementation of Secure and Reliable Communication using Optical Wireless Communication
NASA Astrophysics Data System (ADS)
Saadi, Muhammad; Bajpai, Ambar; Zhao, Yan; Sangwongngam, Paramin; Wuttisittikulkij, Lunchakorn
2014-11-01
Wireless networking increases flexibility in home and office environments by connecting to the Internet without wires, but at the cost of risks associated with data theft or the loading of malicious code with the intention of harming the network. In this paper, we propose a novel method of establishing a secure and reliable communication link using optical wireless communication (OWC). Security is provided by spatial-diversity transmission using two optical transmitters, and link reliability is achieved by a newly proposed method for constructing structured parity-check matrices for binary Low Density Parity Check (LDPC) codes. Experimental results show that a secure and reliable link between the transmitter and the receiver can be achieved using the proposed technique.
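The paper's specific matrix construction is not reproduced here, but the general style of structured LDPC parity-check matrices, assembled from shifted-identity (circulant) blocks so that hardware indexing stays simple, can be sketched as follows; the shift table and block size are illustrative assumptions.

import numpy as np

def circulant(p, shift):
    # p x p identity with columns cyclically shifted.
    return np.roll(np.eye(p, dtype=np.uint8), shift, axis=1)

def structured_H(shifts, p):
    # Expand a small table of shifts into a binary H; -1 marks an
    # all-zero block.
    blocks = [[circulant(p, s) if s >= 0 else np.zeros((p, p), np.uint8)
               for s in row] for row in shifts]
    return np.block(blocks)

H = structured_H([[0, 1, 2, -1],
                  [3, -1, 0, 1]], p=5)   # a 10 x 20 toy example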
25 Tb/s transmission over 5,530 km using 16QAM at 5.2 b/s/Hz spectral efficiency.
Cai, J-X; Batshon, H G; Zhang, H; Davidson, C R; Sun, Y; Mazurczyk, M; Foursa, D G; Sinkin, O; Pilipetskii, A; Mohs, G; Bergano, Neal S
2013-01-28
We transmit 250x100G PDM RZ-16QAM channels with 5.2 b/s/Hz spectral efficiency over 5,530 km using single-stage C-band EDFAs equalized to 40 nm. We use single parity check coded modulation and all channels are decoded with no errors after iterative decoding between a MAP decoder and an LDPC based FEC algorithm. We also observe that the optimum power spectral density is nearly independent of SE, signal baud rate or modulation format in a dispersion uncompensated system.
Performance of Low-Density Parity-Check Coded Modulation
NASA Astrophysics Data System (ADS)
Hamkins, J.
2011-02-01
This article presents the simulated performance of a family of nine AR4JA low-density parity-check (LDPC) codes when used with each of five modulations. In each case, the decoder inputs are code-bit log-likelihood ratios computed from the received (noisy) modulation symbols using a general formula which applies to arbitrary modulations. Suboptimal soft-decision and hard-decision demodulators are also explored. Bit-interleaving and various mappings of bits to modulation symbols are considered. A number of subtle decoder algorithm details are shown to affect performance, especially in the error-floor region. Among these are quantization dynamic range and step size, clipping degree-one variable nodes, "Jones clipping" of variable nodes, approximations of the min* function, and partial hard-limiting of messages from check nodes. Using these decoder optimizations, all coded modulations simulated here are free of error floors down to codeword error rates below 10^(-6). The purpose of generating this performance data is to aid system engineers in determining an appropriate code and modulation to use under specific power and bandwidth constraints, and to provide information needed to design a variable/adaptive coded modulation (VCM/ACM) system using the AR4JA codes.
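The general formula referred to above is, for each code bit, the log-ratio of summed symbol likelihoods over the constellation points labeled 0 versus 1 in that bit position. A small sketch for QPSK with an assumed Gray labeling (the article's constellations and labelings are more varied):

import numpy as np

points = np.array([1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j]) / np.sqrt(2)
labels = np.array([0b00, 0b01, 0b10, 0b11])   # assumed Gray map

def bit_llrs(r, sigma2, nbits=2):
    # Exact per-bit LLRs from one received symbol r over AWGN with
    # per-component noise variance sigma2; the same loop applies to
    # any constellation and labeling.
    metric = -np.abs(r - points) ** 2 / (2 * sigma2)
    out = np.empty(nbits)
    for k in range(nbits):
        bit = (labels >> k) & 1
        out[k] = (np.logaddexp.reduce(metric[bit == 0]) -
                  np.logaddexp.reduce(metric[bit == 1]))
    return out

print(bit_llrs(0.9 + 0.8j, sigma2=0.25))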
NASA Astrophysics Data System (ADS)
Doi, Masafumi; Tokutomi, Tsukasa; Hachiya, Shogo; Kobayashi, Atsuro; Tanakamaru, Shuhei; Ning, Sheyang; Ogura Iwasaki, Tomoko; Takeuchi, Ken
2016-08-01
NAND flash memory’s reliability degrades with increasing endurance, retention time and/or temperature. After a comprehensive evaluation of 1X nm triple-level cell (TLC) NAND flash, two highly reliable techniques are proposed. The first proposal, quick low-density parity check (Quick-LDPC), requires only one cell read in order to accurately estimate a bit-error rate (BER) that includes the effects of temperature, write/erase (W/E) cycles and retention time. As a result, an 83% read-latency reduction is achieved compared to conventional AEP-LDPC, and W/E cycling is extended by 100% compared with conventional Bose-Chaudhuri-Hocquenghem (BCH) error-correcting code (ECC). The second proposal, dynamic threshold voltage optimization (DVO), has two parts: adaptive V_Ref shift (AVS) and V_TH space control (VSC). AVS reduces read errors and latency by adaptively optimizing the reference voltage (V_Ref) based on temperature, W/E cycles and retention time; it stores the optimal V_Ref values in a table to enable one cell read. VSC further improves AVS by optimizing the voltage margins between V_TH states. DVO reduces the BER by 80%.
Adaptive channel estimation for soft decision decoding over non-Gaussian optical channel
NASA Astrophysics Data System (ADS)
Xiang, Jing-song; Miao, Tao-tao; Huang, Sheng; Liu, Huan-lin
2016-10-01
An adaptive a priori log-likelihood ratio (LLR) estimation method is proposed for non-Gaussian channels in intensity modulation/direct detection (IM/DD) optical communication systems. Using a nonparametric histogram and weighted least-squares linear fitting in the tail regions, the LLR is estimated and used for soft-decision decoding of low-density parity-check (LDPC) codes. The method adapts well to the three main kinds of IM/DD optical channel, i.e., the chi-square channel, the Webb-Gaussian channel and the additive white Gaussian noise (AWGN) channel, and the performance penalty of the channel estimation is negligible.
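A minimal sketch of the histogram-plus-tail-fit idea, assuming binary intensity modulation with training samples r0 (bit 0 sent) and r1 (bit 1 sent). The bin count and the plain least-squares fit over populated bins stand in for the paper's weighted fit restricted to the tail regions.

import numpy as np

def estimate_llr_table(r0, r1, nbins=64):
    # Build an LLR(y) table from the two empirical conditional densities.
    lo = min(r0.min(), r1.min())
    hi = max(r0.max(), r1.max())
    edges = np.linspace(lo, hi, nbins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0, _ = np.histogram(r0, edges, density=True)
    p1, _ = np.histogram(r1, edges, density=True)
    good = (p0 > 0) & (p1 > 0)            # bins populated on both sides
    llr = np.full(nbins, np.nan)
    llr[good] = np.log(p0[good] / p1[good])
    # Fill the sparse (tail) bins from a linear fit, standing in for
    # the paper's weighted least-squares tail fitting.
    a, b = np.polyfit(centers[good], llr[good], 1)
    llr[~good] = a * centers[~good] + b
    return centers, llr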
Analysis of soft-decision FEC on non-AWGN channels.
Cho, Junho; Xie, Chongjin; Winzer, Peter J
2012-03-26
Soft-decision forward error correction (SD-FEC) schemes are typically designed for additive white Gaussian noise (AWGN) channels. In a fiber-optic communication system, noise may be neither circularly symmetric nor Gaussian, thus violating an important assumption underlying SD-FEC design. This paper quantifies the impact of non-AWGN noise on SD-FEC performance for such optical channels. We use a conditionally bivariate Gaussian noise model (CBGN) to analyze the impact of correlations among the signal's two quadrature components, and assess the effect of CBGN on SD-FEC performance using the density evolution of low-density parity-check (LDPC) codes. On a CBGN channel generating severely elliptic noise clouds, it is shown that more than 3 dB of coding gain are attainable by utilizing correlation information. Our analyses also give insights into potential improvements of the detection performance for fiber-optic transmission systems assisted by SD-FEC.
Co-operation of digital nonlinear equalizers and soft-decision LDPC FEC in nonlinear transmission.
Tanimura, Takahito; Oda, Shoichiro; Hoshida, Takeshi; Aoki, Yasuhiko; Tao, Zhenning; Rasmussen, Jens C
2013-12-30
We experimentally and numerically investigated the characteristics of 128 Gb/s dual polarization - quadrature phase shift keying signals received with two types of nonlinear equalizers (NLEs) followed by soft-decision (SD) low-density parity-check (LDPC) forward error correction (FEC). Successful co-operation among SD-FEC and NLEs over various nonlinear transmissions were demonstrated by optimization of parameters for NLEs.
NASA Astrophysics Data System (ADS)
Matsui, Chihiro; Kinoshita, Reika; Takeuchi, Ken
2018-04-01
A hybrid of storage class memory (SCM) and NAND flash is a promising technology for high-performance storage. Error correction is indispensable for SCM and NAND flash because their bit error rate (BER) increases with write/erase (W/E) cycles, data retention, and program/read disturb, and because scaling and multi-level cell technologies further increase the BER. However, error-correcting code (ECC) degrades storage performance because of extra memory reading and encoding/decoding time. Therefore, the applicable ECC strength of SCM and NAND flash is evaluated independently by fixing the ECC strength of one memory in the hybrid storage. As a result, weak BCH ECC with a small correctable bit count is recommended for hybrid storage with large SCM capacity, because the SCM is accessed frequently. In contrast, strong, long-latency LDPC ECC can be applied to the NAND flash in hybrid storage with large SCM capacity, because the large-capacity SCM improves the storage performance.
Characterization of vibrissa germinative cells: transition of cell types.
Osada, A; Kobayashi, K
2001-12-01
Germinative cells, small cell masses attached to the stalks of dermal papillae that are able to differentiate into the hair shaft and inner root sheath, form follicular bulb-like structures when co-cultured with dermal papilla cells. We studied the growth characteristics of germinative cells to determine the cell types in the vibrissa germinative tissue. Germinative tissues, attaching to dermal papillae, were cultured on 3T3 feeder layers. The cultured keratinocytes were harvested and transferred, equally and for two passages, onto lined dermal papilla cells (LDPC) and/or 3T3 feeder layers. The resulting germinative cells were classified into three types in the present experimental condition. Type 1 cells grow very well on either feeder layer, whereas Type 3 cells scarcely grow on either feeder layer. Type 2 cells are very conspicuous and are reversible. They grow well on 3T3 but growth is suppressed on LDPC feeder layers. The Type 2 cells that grow well on 3T3 feeder layers, however, are suppressed when transferred onto LDPC and the Type 2 cells that are suppressed on LDPC begin to grow again on 3T3. The transition of one cell type to another in vitro and the cell types that these germinative cell types correspond to in vivo is discussed. It was concluded that stem cells or their close progenitors reside in the germinative tissues of the vibrissa bulb except at late anagen-early catagen.
Djordjevic, Ivan B
2010-04-12
The Bell states preparation circuit is a basic circuit required in quantum teleportation. We describe how to implement it in all-fiber technology. The basic building blocks for its implementation are directional couplers and highly nonlinear optical fiber (HNLF). Because the quantum information processing is based on delicate superposition states, it is sensitive to quantum errors. In order to enable fault-tolerant quantum computing the use of quantum error correction is unavoidable. We show how to implement in all-fiber technology encoders and decoders for sparse-graph quantum codes, and provide an illustrative example to demonstrate this implementation. We also show that arbitrary set of universal quantum gates can be implemented based on directional couplers and HNLFs.
A burst-mode photon counting receiver with automatic channel estimation and bit rate detection
NASA Astrophysics Data System (ADS)
Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.
2016-04-01
We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.
Direct-detection Free-space Laser Transceiver Test-bed
NASA Technical Reports Server (NTRS)
Krainak, Michael A.; Chen, Jeffrey R.; Dabney, Philip W.; Ferrara, Jeffrey F.; Fong, Wai H.; Martino, Anthony J.; McGarry, Jan F.; Merkowitz, Stephen M.; Principe, Caleb M.; Sun, Xiaoli;
2008-01-01
NASA Goddard Space Flight Center is developing a direct-detection free-space laser communications transceiver test bed. The laser transmitter is a master-oscillator power amplifier (MOPA) configuration using a 1060 nm wavelength laser-diode with a two-stage multi-watt Ytterbium fiber amplifier. Dual Mach-Zehnder electro-optic modulators provide an extinction ratio greater than 40 dB. The MOPA design delivered 10-W average power with low-duty-cycle PPM waveforms and achieved 1.7 kW peak power. We use pulse-position modulation format with a pseudo-noise code header to assist clock recovery and frame boundary identification. We are examining the use of low-density-parity-check (LDPC) codes for forward error correction. Our receiver uses an InGaAsP 1 mm diameter photocathode hybrid photomultiplier tube (HPMT) cooled with a thermo-electric cooler. The HPMT has 25% single-photon detection efficiency at 1064 nm wavelength with a dark count rate of 60,000/s at -22 degrees Celsius and a single-photon impulse response of 0.9 ns. We report on progress toward demonstrating a combined laser communications and ranging field experiment.
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-15
In order to improve the performance of the hard-decision decoding algorithm for non-binary low-density parity check (LDPC) codes and to reduce decoding complexity, a sum-of-magnitudes hard-decision decoding algorithm based on loop update detection is proposed; this also supports the reliability, stability and high transmission rates required by 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses soft information from the channel to calculate reliabilities, while the sum of the variable nodes' (VN) magnitudes is excluded from the computation of the parity-check reliabilities. At the same time, the reliability information of the variable nodes is considered and a loop update detection algorithm is introduced. Bits of the erroneous code word are flipped repeatedly, searched in order of decreasing error probability, until the correct code word is found. Simulation results show that one of the improved schemes outperforms the weighted symbol flipping (WSF) algorithm under different field sizes by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10^(-5) over an additive white Gaussian noise (AWGN) channel. Furthermore, the average number of decoding iterations is significantly reduced.
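For orientation, the binary weighted bit-flipping baseline that symbol-flipping decoders generalize can be sketched as below; the nonbinary arithmetic and the loop-update refinements proposed in the paper are not reproduced, and the flipping metric shown is one common textbook choice.

import numpy as np

def weighted_bit_flip(H, hard_in, reliab, max_iter=50):
    # H: (m, n) binary parity-check matrix; hard_in: channel hard
    # decisions; reliab: per-bit channel reliabilities (e.g., |LLR|).
    c = hard_in.copy()
    for _ in range(max_iter):
        synd = (H @ c) & 1
        if not synd.any():
            return c                      # all parity checks satisfied
        # Weight each check by its least reliable participating bit;
        # failed checks add suspicion, satisfied checks subtract it.
        w = np.array([reliab[row.astype(bool)].min() for row in H])
        e = ((2.0 * synd - 1.0) * w) @ H
        c[int(np.argmax(e))] ^= 1         # flip the most suspicious bit
    return c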
The serial message-passing schedule for LDPC decoding algorithms
NASA Astrophysics Data System (ADS)
Liu, Mingshan; Liu, Shanshan; Zhou, Yuan; Jiang, Xue
2015-12-01
The conventional message-passing schedule for LDPC decoding algorithms is the so-called flooding schedule. Its disadvantage is that updated messages cannot be used until the next iteration, which reduces the convergence speed. The layered belief propagation (LBP) algorithm, based on a serial message-passing schedule, addresses this. In this paper the decoding principle of the LBP algorithm is briefly introduced, and two improved algorithms are then proposed: the grouped serial decoding algorithm (grouped LBP) and the semi-serial decoding algorithm. They improve the LBP algorithm's decoding speed while maintaining good decoding performance.
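A compact sketch of the serial (layered) schedule that the LBP family builds on: each check row writes its update back into the running posterior LLRs immediately, so later rows in the same sweep already see the fresh messages. Min-sum check updates are used for brevity; H, the stopping rule, and all parameters are illustrative.

import numpy as np

def layered_min_sum(H, llr_in, max_iter=20):
    m, n = H.shape
    post = llr_in.astype(float).copy()    # running posterior LLRs
    msg = np.zeros((m, n))                # check-to-variable messages
    rows = [np.flatnonzero(H[i]) for i in range(m)]
    for _ in range(max_iter):
        for i, cols in enumerate(rows):   # one layer per check row
            t = post[cols] - msg[i, cols]       # variable-to-check inputs
            for idx, j in enumerate(cols):      # extrinsic min-sum update
                rest = np.delete(t, idx)
                msg[i, j] = np.prod(np.sign(rest)) * np.abs(rest).min()
            post[cols] = t + msg[i, cols]       # immediate write-back
        hard = (post < 0).astype(np.uint8)
        if not ((H @ hard) & 1).any():    # stop once all checks pass
            break
    return hard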
Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc
2014-05-01
Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. Since the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance for low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV), and investigate their potential to accurately determine the left ventricular volume at dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Micro-CT data of eight mice, each administered an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan, and the simulated data were processed the same way as the real mouse data sets. Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction provides only small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For the lower dose levels simulated for the real mouse data sets, the HDTV algorithm shows the best performance: at 50 mGy, the deviation from the reference obtained at 500 mGy was less than 4%. The LDPC algorithm also provides reasonable results, with deviations of less than 10% at 50 mGy, while the PCF and MKB reconstructions show larger deviations even at higher dose levels. LDPC and HDTV increase CNR and allow for quantitative evaluations even at dose levels as low as 50 mGy. The left ventricular volumes illustrate that cardiac parameters can be accurately estimated at the lowest dose levels if sophisticated algorithms are used. This allows the dose to be reduced by a factor of 10 compared to today's gold standard and opens new options for longitudinal studies of the heart.
Implementation of continuous-variable quantum key distribution with discrete modulation
NASA Astrophysics Data System (ADS)
Hirano, Takuya; Ichikawa, Tsubasa; Matsubara, Takuto; Ono, Motoharu; Oguri, Yusuke; Namiki, Ryo; Kasai, Kenta; Matsumoto, Ryutaroh; Tsurumaru, Toyohiro
2017-06-01
We have developed a continuous-variable quantum key distribution (CV-QKD) system that employs discrete quadrature-amplitude modulation and homodyne detection of coherent states of light. We experimentally demonstrated automated secure key generation at a rate of 50 kbps when the quantum channel is a 10 km optical fibre. The CV-QKD system utilises a four-state and post-selection protocol and generates a secure key against the entangling cloner attack. We used a pulsed light source of 1550 nm wavelength with a repetition rate of 10 MHz. A commercially available balanced receiver is used to realise shot-noise-limited pulsed homodyne detection. We used a non-binary LDPC code for error correction (reverse reconciliation) and Toeplitz matrix multiplication for privacy amplification. A graphical processing unit card is used to accelerate the software-based post-processing.
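The privacy-amplification step mentioned above has a well-known structure: the reconciled key is multiplied (mod 2) by a binary Toeplitz matrix generated from a short public random seed. A minimal sketch; the key length, output length, and seed are toy assumptions:

    import numpy as np
    from scipy.linalg import toeplitz

    def toeplitz_hash(key_bits, out_len, seed_bits):
        """Privacy amplification: out = T @ key (mod 2), where the binary
        Toeplitz matrix T is defined by out_len + len(key) - 1 seed bits."""
        n = len(key_bits)
        assert len(seed_bits) == out_len + n - 1
        T = toeplitz(seed_bits[:out_len], seed_bits[out_len - 1:])
        return T.dot(key_bits) % 2

    rng = np.random.default_rng(0)
    key = rng.integers(0, 2, 1024)              # reconciled, partially secret key
    seed = rng.integers(0, 2, 256 + 1024 - 1)   # public random seed
    final_key = toeplitz_hash(key, 256, seed)   # shorter, (near-)uniform key
    print(final_key[:16])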
NASA Technical Reports Server (NTRS)
2009-01-01
Topics covered include: Direct-Solve Image-Based Wavefront Sensing; Use of UV Sources for Detection and Identification of Explosives; Using Fluorescent Viruses for Detecting Bacteria in Water; Gradiometer Using Middle Loops as Sensing Elements in a Low-Field SQUID MRI System; Volcano Monitor: Autonomous Triggering of In-Situ Sensors; Wireless Fluid-Level Sensors for Harsh Environments; Interference-Detection Module in a Digital Radar Receiver; Modal Vibration Analysis of Large Castings; Structural/Radiation-Shielding Epoxies; Integrated Multilayer Insulation; Apparatus for Screening Multiple Oxygen-Reduction Catalysts; Determining Aliasing in Isolated Signal Conditioning Modules; Composite Bipolar Plate for Unitized Fuel Cell/Electrolyzer Systems; Spectrum Analyzers Incorporating Tunable WGM Resonators; Quantum-Well Thermophotovoltaic Cells; Bounded-Angle Iterative Decoding of LDPC Codes; Conversion from Tree to Graph Representation of Requirements; Parallel Hybrid Vehicle Optimal Storage System; and Anaerobic Digestion in a Flooded Densified Leachbed.
PPLN-waveguide-based polarization entangled QKD simulator
NASA Astrophysics Data System (ADS)
Gariano, John; Djordjevic, Ivan B.
2017-08-01
We have developed a comprehensive simulator to study a polarization-entangled quantum key distribution (QKD) system that takes various imperfections into account. We assume that a type-II SPDC source using a PPLN-based nonlinear optical waveguide is used to generate entangled photon pairs, and that the system implements the BB84 protocol using two mutually unbiased bases with two orthogonal polarizations in each basis. The entangled photon pairs are simulated to be transmitted to both parties, Alice and Bob, through the optical channel and imperfect optical elements and onto the imperfect detectors. It is assumed that Eve has no control over the detectors and can only gain information from the public channel and the intercept-resend attack. The secure key rate (SKR) is calculated using an upper bound and by using actual code rates of LDPC codes implementable in FPGA hardware. After verifying the simulation results available in the literature for the ideal scenario, such as the pair generation rate and the number of errors due to multiple pairs, we introduce various imperfections. The results are then compared to previously reported experimental results where a BBO nonlinear crystal is used, and the improvements in SKRs are determined when a PPLN waveguide is used instead.
Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding
NASA Technical Reports Server (NTRS)
Mahmoud, Saad; Hi, Jianjun
2012-01-01
The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is the ratio between the signal amplitude and the noise variance. Accurately estimating this ratio has been shown to yield as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a pilot-guided estimation method, a blind estimation method, and a simulation-based look-up table. The pilot-guided estimation method uses the fact that the maximum likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), while the signal variance is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be collected. The blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which is good for faster-changing channels compared to the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulation results to determine the signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft-decision value. The magnitude of the deviation is averaged over a predetermined number of samples, and this value is referenced in a look-up table to determine the combining ratio that prior simulation associated with that average deviation. This method is more complicated than the pilot-guided method due to the gain control circuitry, but does not have the real-time computational complexity of the blind estimation method. Each of these methods can provide an accurate estimate of the combining ratio, and the final selection of the estimation method depends on other design constraints.
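The two analytic estimators described above fit in a few lines. In the sketch below the pilot-guided formulas and the blind fixed-point search are taken from the abstract; the unit-power normalization (so that the noise variance is one minus the squared amplitude), the signal parameters, and the frame lengths are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(1)

    # Pilot-guided: amplitude = mean inner product with the known ASM;
    # variance = mean square of the received ASM minus the squared amplitude.
    asm = rng.choice([-1.0, 1.0], 64)               # known sync-marker symbols
    y_asm = 0.8 * asm + rng.normal(size=64)         # toy channel: A = 0.8, var = 1.0
    a_hat = np.mean(y_asm * asm)
    var_hat = np.mean(y_asm ** 2) - a_hat ** 2
    print("pilot-guided combining ratio:", a_hat / var_hat)   # ~0.8

    # Blind: one frame, no pilots. After normalizing to unit power, binary-search
    # the amplitude a solving a = mean(y * tanh(r * y)) with r = a / (1 - a^2),
    # the combining ratio implied by a^2 + noise variance = 1.
    y = rng.choice([-1.0, 1.0], 4096) + 0.5 * rng.normal(size=4096)
    y /= np.sqrt(np.mean(y ** 2))
    lo, hi = 0.0, 1.0
    for _ in range(50):
        a = 0.5 * (lo + hi)
        r = a / (1.0 - a * a)
        if np.mean(y * np.tanh(r * y)) > a:
            lo = a
        else:
            hi = a
    print("blind estimate:", a, "combining ratio:", a / (1.0 - a * a))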
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-26
... for Residential Construction in High Wind Regions. ICC 700: National Green Building Standard The..., coordinated, and necessary to regulate the built environment. Federal agencies frequently use these codes and... International Codes and Standards consist of the following: ICC Codes International Building Code. International...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-16
... for Residential Construction in High Wind Areas. ICC 700: National Green Building Standard. The... Codes and Standards that are comprehensive, coordinated, and necessary to regulate the built environment... International Codes and Standards consist of the following: ICC Codes International Building Code. International...
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKisson, John
The source code for the Java Data Acquisition suite provides interfaces to the JLab-built USB FPGA ADC across a LAN network. Each jDaq node provides ListMode data from JLab-built detector systems and readouts.
SolTrace Background | Concentrating Solar Power | NREL
codes was written to model a very specific optical geometry, and each one built upon the others in an evolutionary way. Examples of such codes include: OPTDSH, a code written to model circular aperture parabolic
Radiative transfer code SHARM for atmospheric and terrestrial applications
NASA Astrophysics Data System (ADS)
Lyapustin, A. I.
2005-12-01
An overview of the publicly available radiative transfer Spherical Harmonics code (SHARM) is presented. SHARM is a rigorous code, as accurate as the Discrete Ordinate Radiative Transfer (DISORT) code, yet faster. It performs simultaneous calculations for different solar zenith angles, view zenith angles, and view azimuths and allows the user to make multiwavelength calculations in one run. The Δ-M method is implemented for calculations with highly anisotropic phase functions. Rayleigh scattering is automatically included as a function of wavelength, surface elevation, and the selected vertical profile of one of the standard atmospheric models. The current version of the SHARM code does not explicitly include atmospheric gaseous absorption, which should be provided by the user. The SHARM code has several built-in models of the bidirectional reflectance of land and wind-ruffled water surfaces that are most widely used in research and satellite data processing. A modification of the SHARM code with the built-in Mie algorithm designed for calculations with spherical aerosols is also described.
Coded mask telescopes for X-ray astronomy
NASA Astrophysics Data System (ADS)
Skinner, G. K.; Ponman, T. J.
1987-04-01
The principles of the coded mask technique are discussed together with the methods of image reconstruction. The coded mask telescopes built at the University of Birmingham, including the SL 1501 coded mask X-ray telescope flown on the Skylark rocket and the Coded Mask Imaging Spectrometer (COMIS) projected for the Soviet space station Mir, are described. A diagram of a coded mask telescope and some designs for coded masks are included.
Landsat Data Continuity Mission (LDCM) space to ground mission data architecture
Nelson, Jack L.; Ames, J.A.; Williams, J.; Patschke, R.; Mott, C.; Joseph, J.; Garon, H.; Mah, G.
2012-01-01
The Landsat Data Continuity Mission (LDCM) is a scientific endeavor to extend the longest continuous multi-spectral imaging record of Earth's land surface. The observatory consists of a spacecraft bus integrated with two imaging instruments; the Operational Land Imager (OLI), built by Ball Aerospace & Technologies Corporation in Boulder, Colorado, and the Thermal Infrared Sensor (TIRS), an in-house instrument built at the Goddard Space Flight Center (GSFC). Both instruments are integrated aboard a fine-pointing, fully redundant, spacecraft bus built by Orbital Sciences Corporation, Gilbert, Arizona. The mission is scheduled for launch in January 2013. This paper will describe the innovative end-to-end approach for efficiently managing high volumes of simultaneous realtime and playback of image and ancillary data from the instruments to the reception at the United States Geological Survey's (USGS) Landsat Ground Network (LGN) and International Cooperator (IC) ground stations. The core enabling capability lies within the spacecraft Command and Data Handling (C&DH) system and Radio Frequency (RF) communications system implementation. Each of these systems uniquely contributes to the efficient processing of high-speed image data (up to 265 Mbps) from each instrument, and provides virtually error free data delivery to the ground. Onboard methods include a combination of lossless data compression, Consultative Committee for Space Data Systems (CCSDS) data formatting, a file-based/managed Solid State Recorder (SSR), and Low Density Parity Check (LDPC) forward error correction. The 440 Mbps wideband X-Band downlink uses Class 1 CCSDS File Delivery Protocol (CFDP), and an earth coverage antenna to deliver an average of 400 scenes per day to a combination of LGN and IC ground stations. This paper will also describe the integrated capabilities and processes at the LGN ground stations for data reception using adaptive filtering, and the mission operations approach from the LDCM Mission Operations Center (MOC) to perform the CFDP accounting, file retransmissions, and management of the autonomous features of the SSR.
38 CFR 4.27 - Use of diagnostic code numbers.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Use of diagnostic code... FOR RATING DISABILITIES General Policy in Rating § 4.27 Use of diagnostic code numbers. The diagnostic... residual condition is encountered, requiring rating by analogy, the diagnostic code number will be “built...
38 CFR 4.27 - Use of diagnostic code numbers.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Use of diagnostic code... FOR RATING DISABILITIES General Policy in Rating § 4.27 Use of diagnostic code numbers. The diagnostic... residual condition is encountered, requiring rating by analogy, the diagnostic code number will be “built...
28 CFR 36.606 - Effect of certification.
Code of Federal Regulations, 2013 CFR
2013-07-01
... PUBLIC ACCOMMODATIONS AND IN COMMERCIAL FACILITIES Certification of State Laws or Local Building Codes... the question of whether equipment in a building built according to the code satisfies the Act's... buildings and facilities that comply with the certified code. A submitting official may reapply for...
28 CFR 36.606 - Effect of certification.
Code of Federal Regulations, 2011 CFR
2011-07-01
... PUBLIC ACCOMMODATIONS AND IN COMMERCIAL FACILITIES Certification of State Laws or Local Building Codes... the question of whether equipment in a building built according to the code satisfies the Act's... buildings and facilities that comply with the certified code. A submitting official may reapply for...
28 CFR 36.606 - Effect of certification.
Code of Federal Regulations, 2012 CFR
2012-07-01
... PUBLIC ACCOMMODATIONS AND IN COMMERCIAL FACILITIES Certification of State Laws or Local Building Codes... the question of whether equipment in a building built according to the code satisfies the Act's... buildings and facilities that comply with the certified code. A submitting official may reapply for...
28 CFR 36.606 - Effect of certification.
Code of Federal Regulations, 2014 CFR
2014-07-01
... PUBLIC ACCOMMODATIONS AND IN COMMERCIAL FACILITIES Certification of State Laws or Local Building Codes... the question of whether equipment in a building built according to the code satisfies the Act's... buildings and facilities that comply with the certified code. A submitting official may reapply for...
28 CFR 36.607 - Effect of certification.
Code of Federal Regulations, 2010 CFR
2010-07-01
... PUBLIC ACCOMMODATIONS AND IN COMMERCIAL FACILITIES Certification of State Laws or Local Building Codes... the question of whether equipment in a building built according to the code satisfies the Act's... equivalency only with respect to those features or elements that are both covered by the certified code and...
Deep-space and near-Earth optical communications by coded orbital angular momentum (OAM) modulation.
Djordjevic, Ivan B
2011-07-18
In order to achieve the multi-gigabit transmission (projected for 2020) needed for interplanetary communications, a large number of time slots must be used in pulse-position modulation (PPM), the format typically used in deep-space applications, which imposes stringent requirements on system design and implementation. As an alternative that satisfies the high-bandwidth demands of future interplanetary communications while keeping system cost and power consumption reasonably low, in this paper we describe the use of orbital angular momentum (OAM) as an additional degree of freedom. The OAM is associated with the azimuthal phase of the complex electric field. Because OAM eigenstates are orthogonal, they can be used as basis functions for N-dimensional signaling. OAM modulation and multiplexing can therefore be used, in combination with other degrees of freedom, to meet the high-bandwidth requirements of future deep-space and near-Earth optical communications. The main challenge for OAM deep-space communication is the link between a spacecraft probe and the Earth station, because in the presence of atmospheric turbulence the orthogonality between OAM states is no longer preserved. We show that, in combination with LDPC codes, the OAM-based modulation schemes can operate even in the strong atmospheric turbulence regime. In addition, the spectral efficiency of the proposed scheme is N^2/log2(N) times better than that of PPM.
Multilevel built environment features and individual odds of overweight and obesity in Utah
Xu, Yanqing; Wen, Ming; Wang, Fahui
2015-01-01
Based on the data from the Behavioral Risk Factor Surveillance System (BRFSS) in 2007, 2009 and 2011 in Utah, this research uses multilevel modeling (MLM) to examine the associations between neighborhood built environments and individual odds of overweight and obesity after controlling for individual risk factors. The BRFSS data include information on 21,961 individuals geocoded to zip code areas. Individual variables include BMI (body mass index) and socio-demographic attributes such as age, gender, race, marital status, education attainment, employment status, and whether an individual smokes. Neighborhood built environment factors measured at both zip code and county levels include street connectivity, walk score, distance to parks, and food environment. Two additional neighborhood variables, namely the poverty rate and urbanicity, are also included as control variables. MLM results show that at the zip code level, poverty rate and distance to parks are significant and negative covariates of the odds of overweight and obesity; and at the county level, food environment is the sole significant factor with stronger fast food presence linked to higher odds of overweight and obesity. These findings suggest that obesity risk factors lie in multiple neighborhood levels and built environment features need to be defined at a neighborhood size relevant to residents' activity space. PMID:26251559
Automated Diagnosis Coding with Combined Text Representations.
Berndorfer, Stefan; Henriksson, Aron
2017-01-01
Automated diagnosis coding can be provided efficiently by learning predictive models from historical data; however, discriminating between thousands of codes while allowing a variable number of codes to be assigned is extremely difficult. Here, we explore various text representations and classification models for assigning ICD-9 codes to discharge summaries in MIMIC-III. It is shown that the relative effectiveness of the investigated representations depends on the frequency of the diagnosis code under consideration and that the best performance is obtained by combining models built using different representations.
Numerical ‘health check’ for scientific codes: the CADNA approach
NASA Astrophysics Data System (ADS)
Scott, N. S.; Jézéquel, F.; Denis, C.; Chesneaux, J.-M.
2007-04-01
Scientific computation has unavoidable approximations built into its very fabric. One important source of error that is difficult to detect and control is round-off error propagation which originates from the use of finite precision arithmetic. We propose that there is a need to perform regular numerical 'health checks' on scientific codes in order to detect the cancerous effect of round-off error propagation. This is particularly important in scientific codes that are built on legacy software. We advocate the use of the CADNA library as a suitable numerical screening tool. We present a case study to illustrate the practical use of CADNA in scientific codes that are of interest to the Computer Physics Communications readership. In doing so we hope to stimulate a greater awareness of round-off error propagation and present a practical means by which it can be analyzed and managed.
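CADNA itself is a Fortran/C/C++ library built on discrete stochastic arithmetic, so rather than guess at its API, the snippet below illustrates in plain Python the disease it screens for: two algebraically identical expressions whose floating-point results diverge as cancellation sets in (the function and evaluation points are arbitrary toy choices):

    import math

    # (1 - cos x) / x**2  ->  0.5 as x -> 0, but the naive form cancels badly
    def naive(x):
        return (1.0 - math.cos(x)) / (x * x)

    def stable(x):  # algebraically identical, cancellation-free
        s = math.sin(x / 2.0)
        return 2.0 * s * s / (x * x)

    for x in (1e-1, 1e-4, 1e-6, 1e-8):
        print(f"x={x:.0e}  naive={naive(x):.15f}  stable={stable(x):.15f}")
    # by x = 1e-8 the naive form has lost every significant digit, the kind
    # of silent degradation a numerical 'health check' is meant to flag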
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aldrich, Robb; Butterfield, Karla
With funding from the Building America Program, part of the U.S. Department of Energy Building Technologies Office, the Consortium for Advanced Residential Buildings (CARB) worked with BrightBuilt Home (BBH) to evaluate and optimize building systems. CARB’s work focused on a home built by Black Bros. Builders in Lincolnville, Maine (International Energy Conservation Code Climate Zone 6). As with most BBH projects to date, modular boxes were built by Keiser Homes in Oxford, Maine.
Multispectral Image Compression Based on DSC Combined with CCSDS-IDC
Li, Jin; Xing, Fei; Sun, Ting; You, Zheng
2014-01-01
Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually operate on board a satellite, where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (such as the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged, in a deeply coupled way, with a Slepian-Wolf (SW) DSC strategy based on QC-LDPC codes to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS) algorithm has better compression performance than traditional compression approaches. PMID:25110741
Continuous operation of four-state continuous-variable quantum key distribution system
NASA Astrophysics Data System (ADS)
Matsubara, Takuto; Ono, Motoharu; Oguri, Yusuke; Ichikawa, Tsubasa; Hirano, Takuya; Kasai, Kenta; Matsumoto, Ryutaroh; Tsurumaru, Toyohiro
2016-10-01
We report on the development of a continuous-variable quantum key distribution (CV-QKD) system based on discrete quadrature amplitude modulation (QAM) and homodyne detection of coherent states of light. We use a pulsed light source whose wavelength is 1550 nm and whose repetition rate is 10 MHz. The CV-QKD system can continuously generate a secret key that is secure against the entangling cloner attack. The key generation rate is 50 kbps when the quantum channel is a 10 km optical fiber. The CV-QKD system we have developed utilises the four-state and post-selection protocol [T. Hirano, et al., Phys. Rev. A 68, 042331 (2003)]: Alice randomly sends one of four states {|+/-α⟩, |+/-iα⟩}, and Bob randomly performs an x- or p-measurement by homodyne detection. A commercially available balanced receiver is used to realise shot-noise-limited pulsed homodyne detection. GPU cards are used to accelerate the software-based post-processing. We use a non-binary LDPC code for error correction (reverse reconciliation) and Toeplitz matrix multiplication for privacy amplification.
Replacing the CCSDS Telecommand Protocol with the Next Generation Uplink (NGU)
NASA Technical Reports Server (NTRS)
Kazz, Greg J.; Greenberg, Ed; Burleigh, Scott C.
2012-01-01
The current CCSDS Telecommand (TC) Recommendations 1-3 have essentially been in use since the early 1960s. The purpose of this paper is to propose a successor protocol to TC. The current CCSDS recommendations can only accommodate telecommand rates up to approximately 1 Mbit/s. However today's spacecraft are storehouses for software including software for Field Programmable Gate Arrays (FPGA) which are rapidly replacing unique hardware systems. Changes to flight software occasionally require uplinks to deliver very large volumes of data. In the opposite direction, high rate downlink missions that use acknowledged CCSDS File Delivery Protocol (CFDP)4 will increase the uplink data rate requirements. It is calculated that a 5 Mbit/s downlink could saturate a 4 kbit/s uplink with CFDP downlink responses: negative acknowledgements (NAKs), FINISHs, End-of-File (EOF), Acknowledgements (ACKs). Moreover, it is anticipated that uplink rates of 10 to 20 Mbit/s will be required to support manned missions. The current TC recommendations cannot meet these new demands. Specifically, they are very tightly coupled to the Bose-Chaudhuri-Hocquenghem (BCH) code in Ref. 2. This protocol requires that an uncorrectable BCH codeword delimit the TC frame and terminate the randomization process. This method greatly limits telecom performance since only the BCH code can support the protocol. More modern techniques such as the CCSDS Low Density Parity Check (LDPC)5 codes can provide a minimum performance gain of up to 6 times higher command data rates as long as sufficient power is available in the data. This paper will describe the proposed protocol format, trade-offs, and advantages offered, along with a discussion of how reliable communications takes place at higher nominal rates.
High-Throughput Bit-Serial LDPC Decoder LSI Based on Multiple-Valued Asynchronous Interleaving
NASA Astrophysics Data System (ADS)
Onizawa, Naoya; Hanyu, Takahiro; Gaudet, Vincent C.
This paper presents a high-throughput bit-serial low-density parity-check (LDPC) decoder that uses an asynchronous interleaver. Since consecutive log-likelihood message values on the interleaver are similar, node computations are continuously performed by using the most recently arrived messages without significantly affecting bit-error rate (BER) performance. In the asynchronous interleaver, each message's arrival rate is based on the delay due to the wire length, so that the decoding throughput is not restricted by the worst-case latency, which results in a higher average rate of computation. Moreover, the use of a multiple-valued data representation makes it possible to multiplex control signals and data from mutual nodes, thus minimizing the number of handshaking steps in the asynchronous interleaver and eliminating the clock signal entirely. As a result, the decoding throughput becomes 1.3 times faster than that of a bit-serial synchronous decoder under a 90 nm CMOS technology, at a comparable BER.
Jardine, Bartholomew; Raymond, Gary M; Bassingthwaighte, James B
2015-01-01
The Modular Program Constructor (MPC) is an open-source Java-based modeling utility, built upon JSim's Mathematical Modeling Language (MML) (http://www.physiome.org/jsim/), that uses directives embedded in model code to construct larger, more complicated models quickly and with less error than manually combining models. A major obstacle in writing complex models for physiological processes is the large amount of time it takes to model the myriad processes taking place simultaneously in cells, tissues, and organs. MPC replaces this task with code-generating algorithms that take model code from several different existing models and produce model code for a new JSim model. This is particularly useful during multi-scale model development where many variants are to be configured and tested against data. MPC encodes and preserves information about how a model is built from its simpler model modules, allowing the researcher to quickly substitute or update modules for hypothesis testing. MPC is implemented in Java and requires JSim to use its output. MPC source code and documentation are available at http://www.physiome.org/software/MPC/.
NASA Astrophysics Data System (ADS)
Markman, A.; Javidi, B.
2016-06-01
Quick-response (QR) codes are barcodes that can store information such as numeric data and hyperlinks. A QR code can be scanned using a QR code reader, such as those built into smartphone devices, revealing the information stored in the code. Moreover, the QR code is robust to noise, rotation, and illumination when scanning, thanks to the error correction built into the QR code design. Integral imaging is an imaging technique that generates a three-dimensional (3D) scene by combining the information from two-dimensional (2D) elemental images (EIs), each with a different perspective of the scene. Transferring these 2D images in a secure manner can be difficult. In this work, we overview two methods to store and encrypt EIs in multiple QR codes. The first method uses run-length encoding with Huffman coding and double-random-phase encryption (DRPE) to compress and encrypt an EI; this information is then stored in a QR code. An alternative compression scheme performs photon counting on the EI prior to compression. Photon counting is a non-linear transformation of the data that creates redundant information, thus improving image compression. The compressed data is encrypted using the DRPE. Once information is stored in the QR codes, it is scanned using a smartphone device; the scanned information is decompressed and decrypted, and an EI is recovered. Once all EIs have been recovered, a 3D optical reconstruction is generated.
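The DRPE step named above has a standard textbook form: multiply the image by one random phase mask, Fourier transform, multiply by a second mask, and inverse transform; the two masks together act as the key. A minimal sketch in which a random array stands in for one elemental image and the compression stages are omitted:

    import numpy as np

    rng = np.random.default_rng(42)
    img = rng.random((64, 64))                   # stand-in elemental image

    # the two random phase masks constitute the encryption key
    phase1 = np.exp(2j * np.pi * rng.random(img.shape))
    phase2 = np.exp(2j * np.pi * rng.random(img.shape))

    def drpe_encrypt(f):
        return np.fft.ifft2(np.fft.fft2(f * phase1) * phase2)

    def drpe_decrypt(g):
        return np.fft.ifft2(np.fft.fft2(g) * np.conj(phase2)) * np.conj(phase1)

    cipher = drpe_encrypt(img)                   # white-noise-like complex field
    recovered = np.real(drpe_decrypt(cipher))
    print(np.allclose(recovered, img))           # True: unit-modulus masks invert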
Low-dose cardio-respiratory phase-correlated cone-beam micro-CT of small animals.
Sawall, Stefan; Bergner, Frank; Lapp, Robert; Mronz, Markus; Karolczak, Marek; Hess, Andreas; Kachelriess, Marc
2011-03-01
Micro-CT imaging of animal hearts typically requires a double gating procedure because scans during a breath-hold are not possible due to the long scan times and the high respiratory rates. Simultaneous respiratory and cardiac gating can be done either prospectively or retrospectively. True five-dimensional information can be retrieved either with retrospective gating or with prospective gating if several prospective gates are acquired. In any case, the amount of information available to reconstruct one volume for a given respiratory and cardiac phase is orders of magnitude lower than the total amount of information acquired. For example, the reconstruction of a volume from a 10% wide respiratory and a 20% wide cardiac window uses only 2% of the data acquired. Achieving an image quality similar to a non-gated scan would therefore require increasing the amount of data, and thereby the dose to the animal, by up to a factor of 50. To achieve the goal of low-dose phase-correlated (LDPC) imaging, the authors propose a highly efficient combination of slightly modified existing algorithms. In particular, the authors developed a variant of the McKinnon-Bates image reconstruction algorithm and combined it with bilateral filtering in up to five dimensions to significantly reduce image noise without impairing spatial or temporal resolution. The preliminary results indicate that the proposed LDPC reconstruction method typically reduces image noise by a factor of up to 6 (e.g., from 170 to 30 HU), while the dose values lie in a range from 60 to 500 mGy. Compared to other publications that apply 250-1800 mGy for the same task [C. T. Badea et al., "4D micro-CT of the mouse heart," Mol. Imaging 4(2), 110-116 (2005); M. Drangova et al., "Fast retrospectively gated quantitative four-dimensional (4D) cardiac micro computed tomography imaging of free-breathing mice," Invest. Radiol. 42(2), 85-94 (2007); S. H. Bartling et al., "Retrospective motion gating in small animal CT of mice and rats," Invest. Radiol. 42(10), 704-714 (2007)], the authors' LDPC approach therefore achieves a more than tenfold improvement in dose usage. The LDPC reconstruction method improves phase-correlated imaging from highly undersampled data. Artifacts caused by sparse angular sampling are removed and the image noise is decreased, while spatial and temporal resolution are preserved. Thus, the administered dose per animal can be decreased, allowing for long-term studies with reduced metabolic interference.
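The bilateral filtering ingredient is what lets the LDPC method cut noise without costing resolution: each sample is averaged with neighbors weighted both by distance and by similarity in value, so edges contribute little across themselves. A minimal 1D sketch (the paper's five-dimensional variant extends the same weighting across the spatial, cardiac-phase, and respiratory-phase axes; all parameters here are toy choices):

    import numpy as np

    def bilateral_1d(signal, radius=5, sigma_s=2.0, sigma_r=0.2):
        """Bilateral filter: weighted mean over a window, with weights from
        spatial closeness AND closeness in value (edge-preserving)."""
        out = np.empty_like(signal, dtype=float)
        for i in range(len(signal)):
            lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
            window = signal[lo:hi]
            spatial = np.exp(-((np.arange(lo, hi) - i) ** 2) / (2 * sigma_s ** 2))
            similar = np.exp(-((window - signal[i]) ** 2) / (2 * sigma_r ** 2))
            w = spatial * similar
            out[i] = np.sum(w * window) / np.sum(w)
        return out

    edge = np.concatenate([np.zeros(50), np.ones(50)])           # sharp edge
    noisy = edge + 0.1 * np.random.default_rng(3).normal(size=100)
    smooth = bilateral_1d(noisy)
    print((noisy - edge).std(), (smooth - edge).std())   # noise drops, edge stays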
NASA Astrophysics Data System (ADS)
Castrillón, Mario A.; Morero, Damián A.; Agazzi, Oscar E.; Hueda, Mario R.
2015-08-01
The joint iterative detection and decoding (JIDD) technique has been proposed by Barbieri et al. (2007) with the objective of compensating the time-varying phase noise and constant frequency offset experienced in satellite communication systems. The application of JIDD to optical coherent receivers in the presence of laser frequency fluctuations has not been reported in prior literature. Laser frequency fluctuations are caused by mechanical vibrations, power supply noise, and other mechanisms. They significantly degrade the performance of the carrier phase estimator in high-speed intradyne coherent optical receivers. This work investigates the performance of the JIDD algorithm in multi-gigabit optical coherent receivers. We present simulation results of bit error rate (BER) for non-differential polarization division multiplexing (PDM)-16QAM modulation in a 200 Gb/s coherent optical system that includes an LDPC code with 20% overhead and net coding gain of 11.3 dB at BER = 10-15. Our study shows that JIDD with a pilot rate ⩽ 5 % compensates for both laser phase noise and laser frequency fluctuation. Furthermore, since JIDD is used with non-differential modulation formats, we find that gains in excess of 1 dB can be achieved over existing solutions based on an explicit carrier phase estimator with differential modulation. The impact of the fiber nonlinearities in dense wavelength division multiplexing (DWDM) systems is also investigated. Our results demonstrate that JIDD is an excellent candidate for application in next generation high-speed optical coherent receivers.
DVB-S2 Experiment over NASA's Space Network
NASA Technical Reports Server (NTRS)
Downey, Joseph A.; Evans, Michael A.; Tollis, Nicholas S.
2017-01-01
The commercial DVB-S2 standard was successfully demonstrated over NASA's Space Network (SN) and the Tracking and Data Relay Satellite System (TDRSS) during testing conducted September 20-22, 2016. This test was a joint effort between NASA Glenn Research Center (GRC) and Goddard Space Flight Center (GSFC) to evaluate the performance of DVB-S2 as an alternative to traditional NASA SN waveforms. Two distinct sets of tests were conducted: one was sourced from the Space Communication and Navigation (SCaN) Testbed, an external payload on the International Space Station, and the other was sourced from GRC's S-band ground station to emulate a Space Network user through TDRSS. In both cases, a commercial off-the-shelf (COTS) receiver made by Newtec was used to receive the signal at the White Sands Complex. Using the SCaN Testbed, peak data rates of 5.7 Mbps were demonstrated. Peak data rates of 33 Mbps were demonstrated over the GRC S-band ground station through a 10 MHz channel over TDRSS, using 32-amplitude phase shift keying (APSK) and a rate-8/9 low density parity check (LDPC) code. Advanced features of the DVB-S2 standard were evaluated, including variable and adaptive coding and modulation (VCM/ACM), as well as an adaptive digital pre-distortion (DPD) algorithm. These features provided additional data throughput and increased link performance reliability. This testing has shown that commercial standards are a viable, low-cost alternative for future Space Network users.
28. MAP SHOWING LOCATION OF ARVFS FACILITY AS BUILT. SHOWS ...
28. MAP SHOWING LOCATION OF ARVFS FACILITY AS BUILT. SHOWS LINCOLN BOULEVARD, BIG LOST RIVER, AND NAVAL REACTORS FACILITY. F.C. TORKELSON DRAWING NUMBER 842-ARVFS-101-2. DATED OCTOBER 12, 1965. INEL INDEX CODE NUMBER: 075 0101 851 151969. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID
Nguyen, Quynh C; Sajjadi, Mehdi; McCullough, Matt; Pham, Minh; Nguyen, Thu T; Yu, Weijun; Meng, Hsien-Wen; Wen, Ming; Li, Feifei; Smith, Ken R; Brunisholz, Kim; Tasdizen, Tolga
2018-03-01
Neighbourhood quality has been connected with an array of health issues, but neighbourhood research has been limited by the lack of methods to characterise large geographical areas. This study uses innovative computer vision methods and a new big data source of street view images to automatically characterise neighbourhood built environments. A total of 430 000 images were obtained using Google's Street View Image API for Salt Lake City, Chicago and Charleston. Convolutional neural networks were used to create indicators of street greenness, crosswalks and building type. We implemented log Poisson regression models to estimate associations between built environment features and individual prevalence of obesity and diabetes in Salt Lake City, controlling for individual-level and zip code-level predisposing characteristics. Computer vision models had an accuracy of 86%-93% compared with manual annotations. Charleston had the highest percentage of green streets (79%), while Chicago had the highest percentage of crosswalks (23%) and commercial buildings/apartments (59%). Built environment characteristics were categorised into tertiles, with the highest tertile serving as the referent group. Individuals living in zip codes with the most green streets, crosswalks and commercial buildings/apartments had relative obesity prevalences that were 25%-28% lower and relative diabetes prevalences that were 12%-18% lower than individuals living in zip codes with the least abundance of these neighbourhood features. Neighbourhood conditions may influence chronic disease outcomes. Google Street View images represent an underused data resource for the construction of built environment features.
CSlib, a library to couple codes via Client/Server messaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plimpton, Steve
The CSlib is a small, portable library which enables two (or more) independent simulation codes to be coupled by exchanging messages with each other. Both codes link to the library when they are built and can then communicate with each other as they run. The messages contain data or instructions that the two codes send back and forth to each other. The messaging can take place via files, sockets, or MPI; the latter is a standard distributed-memory message-passing library.
Final report on LDRD project : coupling strategies for multi-physics applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hopkins, Matthew Morgan; Moffat, Harry K.; Carnes, Brian
Many current and future modeling applications at Sandia, including ASC milestones, will critically depend on the simultaneous solution of vastly different physical phenomena. Issues due to code coupling are often not addressed, understood, or even recognized. The objectives of the LDRD have been both in theory and in code development. We show that we have provided a fundamental analysis of coupling, i.e., when strong coupling vs. a successive substitution strategy is needed. We have enabled the implementation of tighter coupling strategies through additions to the NOX and Sierra code suites to make coupling strategies available now, leveraging existing functionality to do this. Specifically, we have built into NOX the capability to handle fully coupled simulations from multiple codes, and also the capability to handle Jacobian-free Newton-Krylov simulations that link multiple applications. We show how this capability may be accessed from within the Sierra Framework as well as from outside of Sierra. The critical impact from this LDRD is that we have shown how, and have delivered, strategies for enabling strong Newton-based coupling while respecting the modularity of existing codes. This will facilitate the use of these codes in a coupled manner to solve multi-physics applications.
On the Use of Statistics in Design and the Implications for Deterministic Computer Experiments
NASA Technical Reports Server (NTRS)
Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.
1997-01-01
Perhaps the most prevalent use of statistics in engineering design is through Taguchi's parameter and robust design -- using orthogonal arrays to compute signal-to-noise ratios in a process of design improvement. In our view, however, there is an equally exciting use of statistics in design that could become just as prevalent: it is the concept of metamodeling whereby statistical models are built to approximate detailed computer analysis codes. Although computers continue to get faster, analysis codes always seem to keep pace so that their computational time remains non-trivial. Through metamodeling, approximations of these codes are built that are orders of magnitude cheaper to run. These metamodels can then be linked to optimization routines for fast analysis, or they can serve as a bridge for integrating analysis codes across different domains. In this paper we first review metamodeling techniques that encompass design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We discuss their existing applications in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of metamodeling techniques in given situations and how common pitfalls can be avoided.
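As a concrete instance of the metamodeling idea, the sketch below fits a quadratic response surface to sixteen runs of an "expensive" analysis code (a stand-in function here) and then queries the cheap surrogate; a real application would substitute the actual analysis code and a proper experimental design:

    import numpy as np

    def expensive_code(x1, x2):              # stand-in for a detailed analysis code
        return np.sin(x1) + 0.5 * x2 ** 2 + 0.1 * x1 * x2

    # small design of experiments: a 4 x 4 grid of sample runs
    g = np.linspace(-1.0, 1.0, 4)
    X1, X2 = np.meshgrid(g, g)
    x1, x2 = X1.ravel(), X2.ravel()
    y = expensive_code(x1, x2)

    # quadratic response surface:
    # y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def surrogate(u, v):                     # orders of magnitude cheaper to run
        return coef @ np.array([1.0, u, v, u ** 2, v ** 2, u * v])

    print(expensive_code(0.3, -0.4), surrogate(0.3, -0.4))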
Energy Storage System Safety: Plan Review and Inspection Checklist
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cole, Pam C.; Conover, David R.
Codes, standards, and regulations (CSR) governing the design, construction, installation, commissioning, and operation of the built environment are intended to protect public health, safety, and welfare. While these documents change over time to address new technology and new safety challenges, there is generally some lag between the introduction of a technology into the market and the time it is specifically covered in model codes and standards developed in the voluntary sector. After their development, there is also a timeframe of at least a year or two until the codes and standards are adopted. Until existing model codes and standards are updated, or new ones are developed and then adopted, one seeking to deploy energy storage technologies, or needing to verify the safety of an installation, may be challenged in trying to apply currently implemented CSRs to an energy storage system (ESS). The Energy Storage System Guide for Compliance with Safety Codes and Standards [1] (CG), developed in June 2016, is intended to help address the acceptability of the design and construction of stationary ESSs, their component parts, and the siting, installation, commissioning, operation, maintenance, and repair/renovation of ESSs within the built environment.
Design component method for sensitivity analysis of built-up structures
NASA Technical Reports Server (NTRS)
Choi, Kyung K.; Seong, Hwai G.
1986-01-01
A 'design component method' that provides a unified and systematic organization of design sensitivity analysis for built-up structures is developed and implemented. Both conventional design variables, such as thickness and cross-sectional area, and shape design variables of components of built-up structures are considered. It is shown that design of components of built-up structures can be characterized and system design sensitivity expressions obtained by simply adding contributions from each component. The method leads to a systematic organization of computations for design sensitivity analysis that is similar to the way in which computations are organized within a finite element code.
ASHMET: A computer code for estimating insolation incident on tilted surfaces
NASA Technical Reports Server (NTRS)
Elkin, R. F.; Toelle, R. G.
1980-01-01
A computer code, ASHMET, was developed by MSFC to estimate the amount of solar insolation incident on the surfaces of solar collectors. Both tracking and fixed-position collectors were included. Climatological data for 248 U.S. locations are built into the code. The basic methodology used by ASHMET is the ASHRAE clear-day insolation relationships, modified by a clearness index derived from SOLMET-measured solar radiation data for a horizontal surface.
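For context, the ASHRAE clear-day relationships the abstract refers to have a simple closed form: direct normal irradiance A*exp(-B/sin(beta)) for solar altitude beta, plus a diffuse term C times the direct normal value, with A, B, and C tabulated by month. The sketch below uses rough June coefficient values and a unity clearness index by default; the coefficients, the isotropic diffuse treatment, and the interface are illustrative assumptions, not ASHMET's actual implementation:

    import math

    A, B, C = 1088.0, 0.205, 0.134    # illustrative June coefficients (W/m^2)

    def clear_day_on_tilted(beta_deg, theta_deg, clearness=1.0):
        """Clear-day insolation on a tilted collector surface.
        beta_deg  : solar altitude angle
        theta_deg : incidence angle between sun and collector normal
        clearness : SOLMET-derived clearness index (ASHMET's modification)"""
        beta = math.radians(beta_deg)
        i_dn = clearness * A * math.exp(-B / math.sin(beta))   # direct normal
        direct = i_dn * max(0.0, math.cos(math.radians(theta_deg)))
        diffuse = C * i_dn                  # simple isotropic-sky diffuse term
        return direct + diffuse

    print(clear_day_on_tilted(beta_deg=60.0, theta_deg=25.0))  # ~890 W/m^2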
Some practical universal noiseless coding techniques, part 3, module PSI14,K+
NASA Technical Reports Server (NTRS)
Rice, Robert F.
1991-01-01
The algorithmic definitions, performance characterizations, and application notes for a high-performance adaptive noiseless coding module are provided. Subsets of these algorithms are currently under development in custom very large scale integration (VLSI) at three NASA centers. The generality of coding algorithms recently reported is extended. The module incorporates a powerful adaptive noiseless coder for Standard Data Sources (i.e., sources whose symbols can be represented by uncorrelated non-negative integers, where smaller integers are more likely than the larger ones). Coders can be specified to provide performance close to the data entropy over any desired dynamic range (of entropy) above 0.75 bit/sample. This is accomplished by adaptively choosing the best of many efficient variable-length coding options to use on each short block of data (e.g., 16 samples). All code options used for entropies above 1.5 bits/sample are 'Huffman Equivalent', but they require no table lookups to implement. The coding can be performed directly on data that have been preprocessed to exhibit the characteristics of a standard source. Alternatively, a built-in predictive preprocessor can be used where applicable. This built-in preprocessor includes the familiar 1-D predictor followed by a function that maps the prediction error sequences into the desired standard form. Additionally, an external prediction can be substituted if desired. A broad range of issues dealing with the interface between the coding module and the data systems it might serve are further addressed. These issues include: multidimensional prediction, archival access, sensor noise, rate control, code rate improvements outside the module, and the optimality of certain internal code options.
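The "many efficient variable-length code options" strategy is easiest to see in the Rice/Golomb family: for each short block, every candidate parameter k splits a sample into a unary quotient and k literal bits, and the coder keeps whichever k yields the fewest bits. A minimal sketch; the block size, option range, and data are illustrative, and the flight module's exact option set and mappings differ:

    def rice_encode(block, k):
        """Rice code: value -> unary(value >> k), then the k low-order bits."""
        bits = []
        for v in block:
            bits.append("1" * (v >> k) + "0")      # unary quotient, 0-terminated
            bits.append(format(v & ((1 << k) - 1), f"0{k}b") if k else "")
        return "".join(bits)

    def adaptive_encode(samples, block_size=16, k_options=range(8)):
        """Per block, keep the k minimizing the encoded length (a real coder
        would also emit a small header identifying the chosen option)."""
        out = []
        for i in range(0, len(samples), block_size):
            block = samples[i:i + block_size]
            best_k = min(k_options, key=lambda k: len(rice_encode(block, k)))
            out.append((best_k, rice_encode(block, best_k)))
        return out

    # a "standard source": small integers more likely than large ones
    data = [0, 1, 0, 3, 2, 0, 1, 5, 0, 2, 1, 0, 4, 1, 0, 2]
    for k, bits in adaptive_encode(data):
        print(f"k={k}: {len(bits)} bits")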
Regoui, Chaouki; Durand, Guillaume; Belliveau, Luc; Léger, Serge
2013-01-01
This paper presents a novel hybrid DNA encryption (HyDEn) approach that uses randomized assignments of unique error-correcting DNA Hamming code words for single characters in the extended ASCII set. HyDEn relies on custom-built quaternary codes and a private key used in the randomized assignment of code words and the cyclic permutations applied on the encoded message. Along with its ability to detect and correct errors, HyDEn equals or outperforms existing cryptographic methods and represents a promising in silico DNA steganographic approach. PMID:23984392
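To make the idea of error-correcting DNA code words concrete, the sketch below encodes each character's two nibbles with the classic binary Hamming(7,4) code and maps bit pairs to bases. This is a simplified binary stand-in: HyDEn's actual scheme uses custom quaternary Hamming codes with a key-driven randomized assignment and cyclic permutations, none of which are reproduced here:

    import numpy as np

    G = np.array([[1, 0, 0, 0, 0, 1, 1],    # Hamming(7,4) generator matrix
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 1, 1, 0],
                  [0, 0, 0, 1, 1, 1, 1]])
    BASES = "ACGT"                           # fixed bit-pair -> base map

    def char_to_dna(ch):
        """Encode one extended-ASCII character as 7 DNA bases (14 coded bits)."""
        byte = ord(ch)
        bits = []
        for nibble in (byte >> 4, byte & 0xF):
            msg = np.array([(nibble >> i) & 1 for i in (3, 2, 1, 0)])
            bits.extend(msg @ G % 2)         # one 7-bit codeword per nibble
        return "".join(BASES[2 * a + b] for a, b in zip(bits[0::2], bits[1::2]))

    print(char_to_dna("A"))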
Micrometeoroid and Orbital Debris Threat Assessment: Mars Sample Return Earth Entry Vehicle
NASA Technical Reports Server (NTRS)
Christiansen, Eric L.; Hyde, James L.; Bjorkman, Michael D.; Hoffman, Kevin D.; Lear, Dana M.; Prior, Thomas G.
2011-01-01
This report provides results of a Micrometeoroid and Orbital Debris (MMOD) risk assessment of the Mars Sample Return Earth Entry Vehicle (MSR EEV). The assessment was performed using standard risk assessment methodology illustrated in Figure 1-1. Central to the process is the Bumper risk assessment code (Figure 1-2), which calculates the critical penetration risk based on geometry, shielding configurations and flight parameters. The assessment process begins by building a finite element model (FEM) of the spacecraft, which defines the size and shape of the spacecraft as well as the locations of the various shielding configurations. This model is built using the NX I-deas software package from Siemens PLM Software. The FEM is constructed using triangular and quadrilateral elements that define the outer shell of the spacecraft. Bumper-II uses the model file to determine the geometry of the spacecraft for the analysis. The next step of the process is to identify the ballistic limit characteristics for the various shield types. These ballistic limits define the critical size particle that will penetrate a shield at a given impact angle and impact velocity. When the finite element model is built, each individual element is assigned a property identifier (PID) to act as an index for its shielding properties. Using the ballistic limit equations (BLEs) built into the Bumper-II code, the shield characteristics are defined for each and every PID in the model. The final stage of the analysis is to determine the probability of no penetration (PNP) on the spacecraft. This is done using the micrometeoroid and orbital debris environment definitions that are built into the Bumper-II code. These engineering models take into account orbit inclination, altitude, attitude and analysis date in order to predict an impacting particle flux on the spacecraft. Using the geometry and shielding characteristics previously defined for the spacecraft and combining that information with the environment model calculations, the Bumper-II code calculates a probability of no penetration for the spacecraft.
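The closing PNP computation rests on a standard Poisson failure model: if element i is expected to suffer N_i penetrations (the flux of particles exceeding its ballistic-limit size, times exposed area, times mission time), the probability of no penetration is exp(-sum of N_i). A toy sketch with made-up fluxes and areas; Bumper-II derives the real values from the FEM, the BLEs, and the MMOD environment models:

    import math

    # per shield region: (expected penetrating flux [1/m^2/yr], exposed area [m^2])
    elements = {
        "heatshield": (2.0e-6, 1.8),
        "backshell":  (5.0e-6, 2.4),
        "sidewall":   (1.0e-5, 0.9),
    }
    mission_years = 1.5

    expected_hits = sum(flux * area * mission_years
                        for flux, area in elements.values())
    pnp = math.exp(-expected_hits)       # Poisson probability of zero penetrations
    print(f"expected penetrations: {expected_hits:.2e}   PNP: {pnp:.6f}")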
Formal Verification of Digital Logic
1991-12-01
The INVERT circuit was based upon VHDL code provided in the Zycad Reference Manual [32:Ch 10,73]. The other circuits were based upon VHDL code written... HALFADD.PL implements a simple half-adder that is built from inverters and 2-input NAND gates; it is based upon a Zycad VHDL file written by Capt Dave Banton, with the corresponding Prolog code attached below.
High-Speed Large-Alphabet Quantum Key Distribution Using Photonic Integrated Circuits
2014-01-28
[Figure legend residue] ...: polarizing beam splitter; TDC: time-to-digital converter. Recoverable table fragments: (extra loss: none) photon/bin 0.0031, frame size 64, QSER 14...; Figure 24 (operating point): photon/frame 1.3, frame size 16, QSER 9.5%, secure bpp 2.9, ECC layered LDPC, secure key rate 7.3 Mbps.
78 FR 78819 - National Conference on Weights and Measures 99th Interim Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-27
... 44: General Code Item 310-2 G.S.5.6. Recorded Representations. A variety of commercial weighing and... protections for buyers and sellers alike. This item is a proposal to revise the General Code requirement to... high-precision balances that are typically used by precious metal and gem buyers have built balances...
Wu, F C; Zhang, H; Zhou, Q; Wu, M; Ballard, Z; Tian, Y; Wang, J Y; Niu, Z W; Huang, Y
2014-04-18
A method for site-specific and high-yield modification of tobacco mosaic virus coat protein (TMVCP), utilizing genetic code expansion technology and a copper-free cycloaddition reaction, has been established, and biotin-functionalized virus-like particles were built by the self-assembly of the protein monomers.
Error-correcting codes on scale-free networks
NASA Astrophysics Data System (ADS)
Kim, Jung-Hoon; Ko, Young-Jo
2004-06-01
We investigate the potential of scale-free networks as error-correcting codes. We find that irregular low-density parity-check codes with the highest performance known to date have degree distributions well fitted by a power-law function p(k) ~ k^-γ with γ close to 2, which suggests that codes built on scale-free networks with appropriate power exponents can be good error-correcting codes, with a performance possibly approaching the Shannon limit. We demonstrate for an erasure channel that codes with a power-law degree distribution of the form p(k) = C(k+α)^-γ, with k ≥ 2 and suitable selection of the parameters α and γ, indeed have very good error-correction capabilities.
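A quick way to experiment with this distribution is to normalize p(k) = C(k+α)^-γ over a finite degree range and sample node degrees from it; a minimal sketch with illustrative parameter values:

    import numpy as np

    # Degree distribution p(k) = C (k + alpha)^(-gamma) for k >= 2.
    alpha, gamma, k_max, n_nodes = 1.0, 2.2, 200, 10_000
    k = np.arange(2, k_max + 1)
    p = (k + alpha) ** (-gamma)
    p /= p.sum()                       # fixes the normalization constant C
    degrees = np.random.default_rng(0).choice(k, size=n_nodes, p=p)
    print(degrees.mean(), degrees.max())

A Tanner graph with this variable-node degree sequence can then be assembled and run through erasure-channel message passing to reproduce the kind of experiment the abstract describes.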
Fully-Implicit Navier-Stokes (FIN-S)
NASA Technical Reports Server (NTRS)
Kirk, Benjamin S.
2010-01-01
FIN-S is a SUPG finite element code for flow problems under active development at NASA Lyndon B. Johnson Space Center and within PECOS: a) The code is built on top of the libMesh parallel, adaptive finite element library. b) The initial implementation of the code targeted supersonic/hypersonic laminar calorically perfect gas flows & conjugate heat transfer. c) Initial extension to thermochemical nonequilibrium about 9 months ago. d) The technologies in FIN-S have been enhanced through a strongly collaborative research effort with Sandia National Labs.
Lyngstad, Pål
2016-01-01
Deregulation is on the political agenda in the European countries, and the Norwegian building code related to universal design and accessibility is being challenged. To meet this, the Norwegian Building Authority has chosen to examine established truths and is basing its revised code on scientific research and field tests. But will this knowledge-based deregulation comply with the framework of the anti-discrimination act, and if not: who suffers, and to what extent?
NASA Astrophysics Data System (ADS)
Vacca, G.; Pili, D.; Fiorino, D. R.; Pintus, V.
2017-05-01
The presented work is part of the research project titled "Tecniche murarie tradizionali: conoscenza per la conservazione ed il miglioramento prestazionale" (Traditional building techniques: from knowledge to conservation and performance improvement), whose purpose is to study the building techniques of the 13th-18th centuries in the Sardinia Region (Italy) for their knowledge, conservation, and promotion. The end purpose of the entire study is to improve the performance of the examined structures. In particular, the task of the authors within the research project was to build a WebGIS to manage the data collected during the examination and study phases. This infrastructure was entirely built using Open Source software. The work consisted of designing a database built in PostgreSQL with its spatial extension PostGIS, which allows feature geometries and spatial data to be stored and managed. Data input is performed via a form built in HTML and PHP. The HTML part is based on Bootstrap, an open-source toolkit for websites and web applications. The implementation of this template used both PHP and Javascript code. The PHP code manages the reading and writing of data to the database, using embedded SQL queries. As of today, we have surveyed and archived more than 300 buildings, belonging to three main macro-categories: fortification architectures, religious architectures, and residential architectures. More than 150 masonry samples have been investigated in relation to the construction techniques. The database is published on the Internet as a WebGIS built using the open Leaflet Javascript libraries, which allow map sites to be created with background maps and navigation, input, and query tools. This, too, uses an interaction of HTML, Javascript, PHP, and SQL code.
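As a sketch of the data-input path described above (a form writing to PostGIS via embedded SQL), the equivalent insert can be expressed in Python with psycopg2; the table and column names here are hypothetical, not the project's actual schema.

    import psycopg2

    # Hypothetical schema: a "buildings" table with a PostGIS point geometry.
    conn = psycopg2.connect(dbname="masonry", user="gis")
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            INSERT INTO buildings (name, category, geom)
            VALUES (%s, %s, ST_SetSRID(ST_MakePoint(%s, %s), 4326))
            """,
            ("Torre di San Pancrazio", "fortification", 9.1158, 39.2305),
        )
    conn.close()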
Tomcat, Oracle & XML Web Archive
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cothren, D. C.
2008-01-01
The TOX (Tomcat Oracle & XML) web archive is a foundation for development of HTTP-based applications using Tomcat (or some other servlet container) and an Oracle RDBMS. Use of TOX requires coding primarily in PL/SQL, JavaScript, and XSLT, but also in HTML, CSS and potentially Java. Coded in Java and PL/SQL itself, TOX provides the foundation for more complex applications to be built.
Nonlinear detection for a high rate extended binary phase shift keying system.
Chen, Xian-Qing; Wu, Le-Nan
2013-03-28
The algorithm and results of a nonlinear detector using a machine learning technique called the support vector machine (SVM) on an efficient modulation system with high data rate and low energy consumption are presented in this paper. Simulation results showed that the performance achieved by the SVM detector is comparable to that of a conventional threshold decision (TD) detector. The two detectors detect the received signals together with the special impacting filter (SIF), which can improve energy utilization efficiency. However, unlike the TD detector, the SVM detector concentrates not only on reducing the BER of the detector, but also on providing accurate posterior probability estimates (PPEs), which can be used as soft inputs to the LDPC decoder. The complexity of this detector is considered in this paper by using four features and simplifying the decision function. In addition, a bandwidth-efficient transmission is analyzed with both the SVM and TD detectors. The SVM detector is more robust to sampling rate than the TD detector. We find that the SVM is suitable for extended binary phase shift keying (EBPSK) signal detection and can provide accurate posterior probabilities for LDPC decoding.
Nonlinear Detection for a High Rate Extended Binary Phase Shift Keying System
Chen, Xian-Qing; Wu, Le-Nan
2013-01-01
The algorithm and results of a nonlinear detector using a machine learning technique called the support vector machine (SVM) on an efficient modulation system with high data rate and low energy consumption are presented in this paper. Simulation results showed that the performance achieved by the SVM detector is comparable to that of a conventional threshold decision (TD) detector. The two detectors detect the received signals together with the special impacting filter (SIF), which can improve energy utilization efficiency. However, unlike the TD detector, the SVM detector concentrates not only on reducing the BER of the detector, but also on providing accurate posterior probability estimates (PPEs), which can be used as soft inputs to the LDPC decoder. The complexity of this detector is considered in this paper by using four features and simplifying the decision function. In addition, a bandwidth-efficient transmission is analyzed with both the SVM and TD detectors. The SVM detector is more robust to sampling rate than the TD detector. We find that the SVM is suitable for extended binary phase shift keying (EBPSK) signal detection and can provide accurate posterior probabilities for LDPC decoding. PMID:23539034
NASA Astrophysics Data System (ADS)
Lippman, Thomas; Brockie, Richard; Coker, Jon; Contreras, John; Galbraith, Rick; Garzon, Samir; Hanson, Weldon; Leong, Tom; Marley, Arley; Wood, Roger; Zakai, Rehan; Zolla, Howard; Duquette, Paul; Petrizzi, Joe
2015-05-01
Exponential growth of the areal density has driven the magnetic recording industry for almost sixty years. But now areal density growth is slowing down, suggesting that current technologies are reaching their fundamental limit. The next generation of recording technologies, namely, energy-assisted writing and bit-patterned media, remains just over the horizon. Two-Dimensional Magnetic Recording (TDMR) is a promising new approach, enabling continued areal density growth with only modest changes to the heads and recording electronics. We demonstrate a first generation implementation of TDMR by using a dual-element read sensor to improve the recovery of data encoded by a conventional low-density parity-check (LDPC) channel. The signals are combined with a 2D equalizer into a single modified waveform that is decoded by a standard LDPC channel. Our detection hardware can perform simultaneous measurement of the pre- and post-combined error rate information, allowing one set of measurements to assess the absolute areal density capability of the TDMR system as well as the gain over a conventional shingled magnetic recording system with identical components. We discuss areal density measurements using this hardware and demonstrate gains exceeding five percent based on experimental dual reader components.
A Repository of Codes of Ethics and Technical Standards in Health Informatics
Zaïane, Osmar R.
2014-01-01
We present a searchable repository of codes of ethics and standards in health informatics. It is built using state-of-the-art search algorithms and technologies. The repository will be potentially beneficial for public health practitioners, researchers, and software developers in finding and comparing ethics topics of interest. Public health clinics, clinicians, and researchers can use the repository platform as a one-stop reference for various ethics codes and standards. In addition, the repository interface is built for easy navigation, fast search, and side-by-side comparative reading of documents. Our selection criteria for codes and standards are twofold: first, to maintain intellectual property rights, we index only codes and standards freely available on the internet; second, major international, regional, and national health informatics bodies across the globe are surveyed with the aim of understanding the landscape in this domain. We also look at prevalent technical standards in health informatics from major bodies such as the International Organization for Standardization (ISO) and the U.S. Food and Drug Administration (FDA). Our repository contains codes of ethics from the International Medical Informatics Association (IMIA), the iHealth Coalition (iHC), the American Health Information Management Association (AHIMA), the Australasian College of Health Informatics (ACHI), the British Computer Society (BCS), and the UK Council for Health Informatics Professions (UKCHIP), with room for adding more in the future. Our major contribution is enhancing the findability of codes and standards related to health informatics ethics by compilation and unified access through the health informatics ethics repository. PMID:25422725
Holonomic surface codes for fault-tolerant quantum computation
NASA Astrophysics Data System (ADS)
Zhang, Jiang; Devitt, Simon J.; You, J. Q.; Nori, Franco
2018-02-01
Surface codes can protect quantum information stored in qubits from local errors as long as the per-operation error rate is below a certain threshold. Here we propose holonomic surface codes by harnessing the quantum holonomy of the system. In our scheme, the holonomic gates are built via auxiliary qubits rather than the auxiliary levels in multilevel systems used in conventional holonomic quantum computation. The key advantage of our approach is that the auxiliary qubits are in their ground state before and after each gate operation, so they are not involved in the operation cycles of surface codes. This provides an advantageous way to implement surface codes for fault-tolerant quantum computation.
The analysis of convolutional codes via the extended Smith algorithm
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Onyszchuk, I.
1993-01-01
Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.
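For the restricted (n,1) class, one of these quantities can be written down with no Smith-form machinery at all: for a rate-1/2 code with generator matrix G(D) = [g1(D), g2(D)], the matrix H(D) = [g2(D), g1(D)] is a parity-check matrix, since G(D)H(D)^T = g1g2 + g2g1 = 0 over GF(2). A minimal sketch verifying this for the standard constraint-length-3 code (the general (n,k) case is exactly what requires the extended Smith algorithm of the article):

    import numpy as np

    def polymul_gf2(a, b):
        # Multiply two GF(2) polynomials given as coefficient lists (low order first).
        return np.convolve(a, b) % 2

    # Rate-1/2 code with generators g1(D) = 1 + D + D^2 and g2(D) = 1 + D^2.
    g1, g2 = [1, 1, 1], [1, 0, 1]
    h1, h2 = g2, g1               # H(D) = [g2(D), g1(D)]
    check = (polymul_gf2(g1, h1) + polymul_gf2(g2, h2)) % 2
    print(check)                  # all zeros: G(D) H(D)^T = 0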
Simple scheme for encoding and decoding a qubit in unknown state for various topological codes
Łodyga, Justyna; Mazurek, Paweł; Grudka, Andrzej; Horodecki, Michał
2015-01-01
We present a scheme for encoding and decoding an unknown state for CSS codes, based on syndrome measurements. We illustrate our method by means of the Kitaev toric code, the defected-lattice code, the topological subsystem code, and the 3D Haah code. The protocol is local whenever, in a given code, the crossings between the logical operators consist of next-neighbour pairs, which holds for the above codes. For the subsystem code we also present a scheme for a noisy case, where we allow for bit- and phase-flip errors on qubits as well as state preparation and syndrome measurement errors. A similar scheme can be built for the two other codes. We show that the fidelity of the protected qubit in the noisy scenario in the large code size limit is 1 - O(p), where p is the probability of error on a single qubit per time step. For the Haah code we provide a noiseless scheme, leaving the noisy case as an open problem. PMID:25754905
A site-specific approach for assessing the fire risk to structures at the wildland/urban interface
Jack Cohen
1991-01-01
The essence of the wildland/urban interface fire problem is the loss of homes. The problem is not new, but is becoming increasingly important as more homes with inadequate adherence to safety codes are built at the wildland/urban interface. Current regulatory codes are inflexible. Specifications for building and site characteristics cannot be adjusted to accommodate...
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-05-17
PeleC is an adaptive-mesh compressible hydrodynamics code for reacting flows. It solves the compressible Navier-Stokes equations with multispecies transport in a block-structured framework. The resulting algorithm is well suited for flows with localized resolution requirements and robust to discontinuities. User-controllable refinement criteria have the potential to result in extremely small numerical dissipation and dispersion, making this code appropriate for both research and applied usage. The code is built on the AMReX library, which facilitates hierarchical parallelism and manages distributed-memory parallelism. PeleC algorithms are implemented to express shared-memory parallelism.
Analysis of JSI TRIGA MARK II reactor physical parameters calculated with TRIPOLI and MCNP.
Henry, R; Tiselj, I; Snoj, L
2015-03-01
A new computational model of the JSI TRIGA Mark II research reactor was built for the TRIPOLI computer code and compared with the existing MCNP model. The same modelling assumptions were used in order to check for differences between the mathematical models of the two Monte Carlo codes. Differences between the TRIPOLI and MCNP predictions of keff were up to 100 pcm. Further validation was performed with analyses of the normalized reaction rates and computations of kinetic parameters for various core configurations. Copyright © 2014 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Sternberg, Alex
The contact control code is a generalized force control scheme meant to interface with a robotic arm controlled using the Robot Operating System (ROS). The code allows the user to specify a control scheme for each control dimension in such a way that many different task controllers can be built from the same generalized controller. The inputs to the code include a maximum velocity, maximum force, maximum displacement, and a control law assigned to each direction; the output is a 6-degree-of-freedom velocity command that is sent to the robot controller.
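A minimal sketch of what such a generalized per-axis scheme might look like; the parameter names, the simple proportional force law, and the clamping are assumptions for illustration, not the package's actual API.

    from dataclasses import dataclass

    @dataclass
    class AxisSpec:
        mode: str            # "force" or "velocity" control for this axis
        target: float        # desired force (N/Nm) or velocity (m/s, rad/s)
        max_vel: float       # velocity command limit for this axis
        gain: float = 0.001  # proportional gain for the force law

    def clamp(v, limit):
        return max(-limit, min(limit, v))

    def command(axes, measured_wrench):
        # Map six per-axis control laws onto a 6-DOF velocity command.
        cmd = []
        for spec, f in zip(axes, measured_wrench):
            if spec.mode == "force":
                v = spec.gain * (spec.target - f)  # drive the force error to zero
            else:
                v = spec.target                    # pass a velocity setpoint through
            cmd.append(clamp(v, spec.max_vel))
        return cmd  # [vx, vy, vz, wx, wy, wz] sent to the robot controller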
A high throughput architecture for a low complexity soft-output demapping algorithm
NASA Astrophysics Data System (ADS)
Ali, I.; Wasenmüller, U.; Wehn, N.
2015-11-01
Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance, and therefore they are part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits, with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high-order modulation systems, and therefore low-complexity demapping algorithms are indispensable in low-power receivers. In the presence of multiple wireless communication standards, where each standard defines multiple modulation schemes, there is a need for an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is to achieve a very high throughput in doubly iterative systems, for instance, MIMO and code-aided synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low-complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high-throughput, flexible, and area-efficient architecture. We describe architectures to execute the investigated algorithms and implement these architectures on an FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture, based on the best low-complexity algorithm identified, that delivers a high throughput of 166 Msymbols/second for Gray-mapped 16-QAM modulation on a Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs, and 2 DSP48Es.
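As a reference point for the algorithms being compared, the brute-force max-log soft demapper for Gray-mapped 16-QAM takes only a few lines; the low-complexity architectures evaluated in the paper exist precisely to avoid this exhaustive minimum over constellation points. The LLR sign convention below (positive favoring bit 0) is an assumption.

    import numpy as np

    # Gray-mapped 16-QAM built from two Gray-coded 4-PAM axes.
    pam = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}
    points, labels = [], []
    for bI in pam:
        for bQ in pam:
            points.append(pam[bI] + 1j * pam[bQ])
            labels.append(bI + bQ)          # 4 bits per symbol
    points, labels = np.array(points), np.array(labels)

    def maxlog_llrs(y, noise_var):
        # Max-log LLRs for one received symbol y.
        d2 = np.abs(y - points) ** 2
        llrs = []
        for i in range(4):
            m0 = d2[labels[:, i] == 0].min()
            m1 = d2[labels[:, i] == 1].min()
            llrs.append((m1 - m0) / noise_var)
        return llrs

    print(maxlog_llrs(2.7 - 0.9j, noise_var=0.5))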
NASA Tech Briefs, September 2009
NASA Technical Reports Server (NTRS)
2009-01-01
Topics covered include: Filtering Water by Use of Ultrasonically Vibrated Nanotubes; Computer Code for Nanostructure Simulation; Functionalizing CNTs for Making Epoxy/CNT Composites; Improvements in Production of Single-Walled Carbon Nanotubes; Progress Toward Sequestering Carbon Nanotubes in PmPV; Two-Stage Variable Sample-Rate Conversion System; Estimating Transmitted-Signal Phase Variations for Uplink Array Antennas; Board Saver for Use with Developmental FPGAs; Circuit for Driving Piezoelectric Transducers; Digital Synchronizer without Metastability; Compact, Low-Overhead, MIL-STD-1553B Controller; Parallel-Processing CMOS Circuitry for M-QAM and 8PSK TCM; Differential InP HEMT MMIC Amplifiers Embedded in Waveguides; Improved Aerogel Vacuum Thermal Insulation; Fluoroester Co-Solvents for Low-Temperature Li+ Cells; Using Volcanic Ash to Remove Dissolved Uranium and Lead; High-Efficiency Artificial Photosynthesis Using a Novel Alkaline Membrane Cell; Silicon Wafer-Scale Substrate for Microshutters and Detector Arrays; Micro-Horn Arrays for Ultrasonic Impedance Matching; Improved Controller for a Three-Axis Piezoelectric Stage; Nano-Pervaporation Membrane with Heat Exchanger Generates Medical-Grade Water; Micro-Organ Devices; Nonlinear Thermal Compensators for WGM Resonators; Dynamic Self-Locking of an OEO Containing a VCSEL; Internal Water Vapor Photoacoustic Calibration; Mid-Infrared Reflectance Imaging of Thermal-Barrier Coatings; Improving the Visible and Infrared Contrast Ratio of Microshutter Arrays; Improved Scanners for Microscopic Hyperspectral Imaging; Rate-Compatible LDPC Codes with Linear Minimum Distance; PrimeSupplier Cross-Program Impact Analysis and Supplier Stability Indicator Simulation Model; Integrated Planning for Telepresence With Time Delays; Minimizing Input-to-Output Latency in Virtual Environment; Battery Cell Voltage Sensing and Balancing Using Addressable Transformers; Gaussian and Lognormal Models of Hurricane Gust Factors; Simulation of Attitude and Trajectory Dynamics and Control of Multiple Spacecraft; Integrated Modeling of Spacecraft Touch-and-Go Sampling; Spacecraft Station-Keeping Trajectory and Mission Design Tools; Efficient Model-Based Diagnosis Engine; and DSN Simulator.
Image transmission system using adaptive joint source and channel decoding
NASA Astrophysics Data System (ADS)
Liu, Weiliang; Daut, David G.
2005-03-01
In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source-decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel-decoded bits are then sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder. The log-likelihood ratios (LLRs) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. That is, for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the non-source-controlled decoding method by up to 5 dB in terms of PSNR for various reconstructed images.
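A minimal sketch of the feedback step just described: LLRs at positions the JPEG2000 decoder flags as correct are reinforced, flagged errors are sign-flipped, and the weight grows as the channel worsens. The particular mapping from SNR to weighting factor below is an illustrative assumption, not the paper's tuned function.

    import numpy as np

    def reweight_llrs(llrs, known_correct, known_error, snr_db,
                      w_min=1.2, w_max=3.0):
        # Lower channel SNR -> larger weighting factor (assumed linear ramp).
        w = np.interp(-snr_db, [-10.0, 0.0], [w_min, w_max])
        llrs = np.asarray(llrs, dtype=float).copy()
        llrs[known_correct] *= w       # reinforce bits known to be correct
        llrs[known_error] *= -w        # flip and reinforce bits known to be wrong
        return llrs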
Injecting Errors for Testing Built-In Test Software
NASA Technical Reports Server (NTRS)
Gender, Thomas K.; Chow, James
2010-01-01
Two algorithms have been conceived to enable automated, thorough testing of built-in test (BIT) software. The first algorithm applies to BIT routines that define pass/fail criteria based on values of data read from such hardware devices as memories, input ports, or registers. This algorithm simulates the effects of errors in a device under test by (1) intercepting data from the device and (2) performing AND operations between the data and a data mask specific to the device. This operation yields values not expected by the BIT routine. This algorithm entails very small, permanent instrumentation of the software under test (SUT) for performing the AND operations. The second algorithm applies to BIT programs that provide services to users' application programs via commands or callable interfaces, and it requires a capability for test-driver software to read and write the memory used in execution of the SUT. This algorithm identifies all SUT code execution addresses where errors are to be injected, then temporarily replaces the code at those addresses with small test code sequences to inject latent severe errors, then determines whether, as desired, the SUT detects the errors and recovers.
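A minimal sketch of the first algorithm's AND-mask interception; the device names and masks are hypothetical.

    # Simulate a faulty device read by ANDing the true value with a
    # device-specific mask, yielding a value the BIT routine does not expect.
    DEVICE_MASKS = {"status_reg": 0xFFFE,   # force bit 0 low ("stuck-at-0")
                    "ram_word":   0x00FF}   # upper byte reads back as zero

    def read_with_injection(device, real_read, inject=False):
        value = real_read(device)
        if inject:
            value &= DEVICE_MASKS[device]   # corrupt the data the BIT routine sees
        return value

    # The BIT routine under test should now flag this read as a failure:
    print(hex(read_with_injection("status_reg", lambda d: 0xFFFF, inject=True)))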
Designing to Meet New Requirements of Differing Services.
ERIC Educational Resources Information Center
Mathers, Andrew S.
1982-01-01
Characterizing "older library buildings" as those built prior to 1960, this article discusses special problems and challenges for the librarian and architect renovator, including building codes and new requirements of differing services. (EJS)
Flexible digital modulation and coding synthesis for satellite communications
NASA Technical Reports Server (NTRS)
Vanderaar, Mark; Budinger, James; Hoerig, Craig; Tague, John
1991-01-01
An architecture and a hardware prototype of a flexible trellis modem/codec (FTMC) transmitter are presented. The theory of operation is built upon a pragmatic approach to trellis-coded modulation that emphasizes power and spectral efficiency. The system incorporates programmable modulation formats, variations of trellis-coding, digital baseband pulse-shaping, and digital channel precompensation. The modulation formats examined include (uncoded and coded) binary phase shift keying (BPSK), quaternary phase shift keying (QPSK), octal phase shift keying (8PSK), 16-ary quadrature amplitude modulation (16-QAM), and quadrature quadrature phase shift keying (Q squared PSK) at programmable rates up to 20 megabits per second (Mbps). The FTMC is part of the developing test bed to quantify modulation and coding concepts.
Progressive Fracture of Composite Structures
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Minnetyan, Levon
2008-01-01
A new approach is described for evaluating fracture in composite structures. This approach is independent of classical fracture mechanics parameters like fracture toughness. It relies on computational simulation and is programmed in a stand-alone integrated computer code. It is multiscale and multifunctional because it includes composite mechanics for the composite behavior and finite element analysis for predicting the structural response. It contains seven modules: layered composite mechanics (micro, macro, laminate), finite element analysis, an updating scheme, local fracture, global fracture, stress-based failure modes, and fracture progression. The computer code is called CODSTRAN (Composite Durability Structural ANalysis). It is used in the present paper to evaluate the global fracture of four composite shell problems and one composite built-up structure. Results show that the global fracture of the composite shells and the built-up composite structure is enhanced when internal pressure is combined with shear loads.
Prediction of U-Mo dispersion nuclear fuels with Al-Si alloy using artificial neural network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Susmikanti, Mike, E-mail: mike@batan.go.id; Sulistyo, Jos, E-mail: soj@batan.go.id
2014-09-30
Dispersion nuclear fuels, consisting of U-Mo particles dispersed in an Al-Si matrix, are being developed as fuel for research reactors. The equilibrium relationship for a mixture component can be expressed in the phase diagram, and it is important to analyze whether a mixture component is in the equilibrium phase or in another phase. The purpose of this research is to build a model of the phase diagram, so that it can be determined whether the mixture component is in the stable or melting condition. An artificial neural network (ANN) is a modeling tool for processes involving multivariable nonlinear relationships. The objective of the present work is to develop code, based on artificial neural network models, for the equilibrium relationship of U-Mo in an Al-Si matrix. This model can be used to predict the type of resulting mixture and whether a given point lies on the equilibrium phase boundary or in another phase region. The equilibrium model data for prediction and modeling were generated from experimental data. An artificial neural network with the resilient backpropagation method was chosen to predict the dispersion of the U-Mo nuclear fuels in the Al-Si matrix. The developed code was built with functions in MATLAB; for simulations using the ANN, the Levenberg-Marquardt method was also used for optimization. The resulting artificial neural network is able to predict whether a point is in the equilibrium phase or in another phase region, and the code can be used to analyze the equilibrium relationship of U-Mo in the Al-Si matrix.
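The reported work relied on MATLAB's built-in training functions, but the resilient backpropagation update rule itself is compact enough to sketch on a toy least-squares problem; the growth/shrink factors below are the conventional Rprop defaults, and the problem data are synthetic.

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(-1, 1, 100)
    y = 3.0 * x - 0.5 + rng.normal(0, 0.05, 100)   # synthetic line with noise

    w = np.zeros(2)                 # [slope, intercept]
    step = np.full(2, 0.1)          # per-weight step sizes
    prev_grad = np.zeros(2)
    for _ in range(200):
        err = w[0] * x + w[1] - y
        grad = np.array([(err * x).mean(), err.mean()])
        same = grad * prev_grad
        step[same > 0] = np.minimum(step[same > 0] * 1.2, 1.0)   # accelerate
        step[same < 0] = np.maximum(step[same < 0] * 0.5, 1e-6)  # overshoot: shrink
        grad[same < 0] = 0.0        # iRprop- variant: skip update after a sign flip
        w -= np.sign(grad) * step   # step by size only, ignoring gradient magnitude
        prev_grad = grad
    print(w)                        # approaches [3.0, -0.5]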
Interactive three-dimensional visualization and creation of geometries for Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Theis, C.; Buchegger, K. H.; Brugger, M.; Forkel-Wirth, D.; Roesler, S.; Vincke, H.
2006-06-01
The implementation of three-dimensional geometries for the simulation of radiation transport problems is a very time-consuming task. Each particle transport code supplies its own scripting language and syntax for creating the geometries. All of them are based on the Constructive Solid Geometry scheme requiring textual description. This makes the creation a tedious and error-prone task, which is especially hard to master for novice users. The Monte Carlo code FLUKA comes with built-in support for creating two-dimensional cross-sections through the geometry and FLUKACAD, a custom-built converter to the commercial Computer Aided Design package AutoCAD, exists for 3D visualization. For other codes, like MCNPX, a couple of different tools are available, but they are often specifically tailored to the particle transport code and its approach used for implementing geometries. Complex constructive solid modeling usually requires very fast and expensive special purpose hardware, which is not widely available. In this paper SimpleGeo is presented, which is an implementation of a generic versatile interactive geometry modeler using off-the-shelf hardware. It is running on Windows, with a Linux version currently under preparation. This paper describes its functionality, which allows for rapid interactive visualization as well as generation of three-dimensional geometries, and also discusses critical issues regarding common CAD systems.
NASA Astrophysics Data System (ADS)
Kraljić, K.; Strüngmann, L.; Fimmel, E.; Gumbel, M.
2018-01-01
The genetic code is degenerate, and it is assumed that this redundancy provides error detection and correction mechanisms in the translation process. However, the biological meaning of the code's structure is still under current research. This paper presents the Genetic Code Analysis Toolkit (GCAT), which provides workflows and algorithms for the analysis of the structure of nucleotide sequences. In particular, sets or sequences of codons can be transformed and tested for circularity, comma-freeness, dichotomic partitions, and other properties. GCAT comes with a fertile editor custom-built to work with the genetic code and a batch mode for multi-sequence processing. With the ability to read FASTA files or load sequences from GenBank, the tool can be used for the mathematical and statistical analysis of existing sequence data. GCAT is Java-based and provides a plug-in concept for extensibility. Availability: Open source. Homepage: http://www.gcat.bio/
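GCAT itself is Java-based; purely to illustrate one property it tests, the comma-freeness check (no codon of the set may appear straddling the boundary of two concatenated codons) can be sketched in Python:

    from itertools import product

    def is_comma_free(codons):
        # X is comma-free if no frame-shifted read of any concatenation c1+c2
        # (offsets 1 and 2) lands back inside X.
        X = set(codons)
        for c1, c2 in product(X, repeat=2):
            window = c1 + c2
            if window[1:4] in X or window[2:5] in X:
                return False
        return True

    print(is_comma_free({"AAC", "AAG", "AAT"}))  # True for this small set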
NASA Astrophysics Data System (ADS)
Griffiths, Mike; Fedun, Viktor; Mumford, Stuart; Gent, Frederick
2013-06-01
The Sheffield Advanced Code (SAC) is a fully non-linear MHD code designed for simulations of linear and non-linear wave propagation in gravitationally strongly stratified magnetized plasma. It was developed primarily for the forward modelling of helioseismological processes and for the coupling processes in the solar interior, photosphere, and corona; it is built on the well-known VAC platform that allows robust simulation of the macroscopic processes in gravitationally stratified (non-)magnetized plasmas. The code has no limitations of simulation length in time imposed by complications originating from the upper boundary, nor does it require implementation of special procedures to treat the upper boundaries. SAC inherited its modular structure from VAC, thereby allowing modification to easily add new physics.
The PLUTO code for astrophysical gasdynamics.
NASA Astrophysics Data System (ADS)
Mignone, A.
Present numerical codes appeal to a consolidated theory based on finite difference and Godunov-type schemes. In this context we have developed a versatile numerical code, PLUTO, suitable for the solution of high Mach number flows in 1, 2, and 3 spatial dimensions and different systems of coordinates. Different hydrodynamic modules and algorithms may be independently selected to properly describe Newtonian, relativistic, MHD, or relativistic MHD fluids. The modular structure exploits a general framework for integrating a system of conservation laws, built on modern Godunov-type shock-capturing schemes. The code is freely distributed under the GNU public license and is available for download to the astrophysical community at the URL http://plutocode.to.astro.it.
Trace Replay and Network Simulation Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acun, Bilge; Jain, Nikhil; Bhatele, Abhinav
2015-03-23
TraceR is a trace replay tool built upon the ROSS-based CODES simulation framework. TraceR can be used for predicting network performance and understanding network behavior by simulating messaging in High Performance Computing applications on interconnection networks.
Foundational numerical capacities and the origins of dyscalculia.
Butterworth, Brian
2010-12-01
One important cause of very low attainment in arithmetic (dyscalculia) seems to be a core deficit in an inherited foundational capacity for numbers. According to one set of hypotheses, arithmetic ability is built on an inherited system responsible for representing approximate numerosity. One account holds that this is supported by a system for representing exactly a small number (less than or equal to four) of individual objects. In these approaches, the core deficit in dyscalculia lies in either of these systems. An alternative proposal holds that the deficit lies in an inherited system for sets of objects and operations on them (numerosity coding) on which arithmetic is built. I argue that a deficit in numerosity coding, not in the approximate number system or the small number system, is responsible for dyscalculia. Nevertheless, critical tests should involve both longitudinal studies and intervention, and these have yet to be carried out. Copyright © 2010 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
The country’s first Zero Energy Ready manufactured home that is certified by the U.S. Department of Energy (DOE) is up and running in Russellville, Alabama. The manufactured home was built by a partnership between Southern Energy Homes and the Advanced Residential Integrated Energy Solutions Collaborative (ARIES), which is a DOE Building America team. The effort was part of a three-home study including a standard-code manufactured home and an ENERGY STAR® manufactured home. Cooling-season results showed that the building used half the space-conditioning energy of a manufactured home built to the U.S. Department of Housing and Urban Development’s (HUD’s) Manufactured Home Construction and Safety Standards. These standards are known collectively as the HUD Code, which is the building standard for all U.S. manufactured housing.
Seol, Bo Ram; Kim, Dong Myung; Park, Ki Ho; Jeoung, Jin Wook
2017-11-01
To evaluate the optical coherence tomography (OCT) color probability codes based on a myopic normative database and to investigate whether the implementation of the myopic normative database can improve the OCT diagnostic ability in myopic glaucoma. Comparative validity study. In this study, 305 eyes (154 myopic healthy eyes and 151 myopic glaucoma eyes) were included. A myopic normative database was obtained based on myopic healthy eyes. We evaluated the agreement between OCT color probability codes after applying the built-in and myopic normative databases, respectively. Another 120 eyes (60 myopic healthy eyes and 60 myopic glaucoma eyes) were included and the diagnostic performance of OCT color codes using a myopic normative database was investigated. The mean weighted kappa (Kw) coefficients for quadrant retinal nerve fiber layer (RNFL) thickness, clock-hour RNFL thickness, and ganglion cell-inner plexiform layer (GCIPL) thickness were 0.636, 0.627, and 0.564, respectively. The myopic normative database showed a higher specificity than did the built-in normative database in quadrant RNFL thickness, clock-hour RNFL thickness, and GCIPL thickness (P < .001, P < .001, and P < .001, respectively). The receiver operating characteristic curve values increased when using the myopic normative database in quadrant RNFL thickness, clock-hour RNFL thickness, and GCIPL thickness (P = .011, P = .004, P < .001, respectively). The diagnostic ability of OCT color codes for detection of myopic glaucoma significantly improved after application of the myopic normative database. The implementation of a myopic normative database is needed to allow more precise interpretation of OCT color probability codes when used in myopic eyes. Copyright © 2017 Elsevier Inc. All rights reserved.
The code base for creating versions of the USEEIO model and USEEIO-like models is called the USEEIO Modeling Framework. The framework is built in a combination of the R and Python languages. This demonstration provides a brief overview and introduction to the framework.
Nonequilibrium chemistry boundary layer integral matrix procedure
NASA Technical Reports Server (NTRS)
Tong, H.; Buckingham, A. C.; Morse, H. L.
1973-01-01
The development of an analytic procedure for the calculation of nonequilibrium boundary layer flows over surfaces of arbitrary catalycities is described. An existing equilibrium boundary layer integral matrix code was extended to include nonequilibrium chemistry while retaining all of the general boundary condition features built into the original code. For particular application to the pitch-plane of shuttle type vehicles, an approximate procedure was developed to estimate the nonequilibrium and nonisentropic state at the edge of the boundary layer.
ATTRIBUTES OF FORM IN THE BUILT ENVIRONMENT THAT INFLUENCE PERCEIVED WALKABILITY.
Oreskovic, Nicolas M; Roth, Pablina; Charles, Suzanne Lanyi; Tsigaridi, Dido; Shepherd, Kathrine; Nelson, Kerrie P; Bar, Moshe
2014-01-01
A recent focus of design and building regulations, including form-based codes and the Leadership in Energy and Environmental Design for Neighborhood Development rating system, has been on promoting pedestrian activity. This study assessed perceptions of walkability for residential and commercial streetscapes with different design attributes in order to inform form-based regulations and codes that aim to impact walkability. We scored 424 images on four design attributes purported to influence walkability: variation in building height, variation in building plane, presence of ground-floor windows, and presence of a street focal point. We then presented the images to 45 adults, who were asked to rate the images for walkability. The results showed that perceived walkability varied according to the degree to which a particular design attribute was present, with the presence of ground-floor windows and a street focal point most consistently associated with a space's perceived walkability. Understanding if and which design attributes are most related to walkability could allow planners and developers to focus on the most salient built-environment features influencing physical activity, as well as provide empirical scientific evidence for form-based regulations and zoning codes aimed at impacting walkability.
NASA Technical Reports Server (NTRS)
Amundsen, R. M.; Feldhaus, W. S.; Little, A. D.; Mitchum, M. V.
1995-01-01
Electronic integration of design and analysis processes was achieved and refined at Langley Research Center (LaRC) during the development of an optical bench for a laser-based aerospace experiment. Mechanical design has been integrated with thermal, structural, and optical analyses. Electronic import of the model geometry eliminates the repetitive steps of geometry input when developing each analysis model, leading to faster and more accurate analyses. Guidelines for integrated model development are given. This integrated analysis process has been built around software that was already in use by designers and analysts at LaRC. The process as currently implemented uses Pro/Engineer for design, Pro/Manufacturing for fabrication, PATRAN for solid modeling, NASTRAN for structural analysis, SINDA-85 and P/Thermal for thermal analysis, and Code V for optical analysis. Currently, the only analysis model to be built manually is the Code V model; all others can be imported from the Pro/E geometry. The translator from PATRAN results to Code V optical analysis (PATCOD) was developed and tested at LaRC. Directions for use of the translator and the other models are given.
NASA Astrophysics Data System (ADS)
Sandalski, Stou
Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bdzil, John Bohdan
The full level-set function code, DSD3D, is fully described in LA-14336 (2007) [1]. This ASCI-supported DSD code project was the last such LANL DSD code project that I was involved with before my retirement in 2007. My part in the project was to design and build the core DSD3D solver, which was to include a robust DSD boundary condition treatment. A robust boundary condition treatment was required, since for an important local “customer,” the only description of the explosives’ boundary was through volume fraction data. Given this requirement, the accuracy issues I had encountered with our “fast-tube,” narrowband, DSD2D solver, and the difficulty we had building an efficient MPI-parallel version of the narrowband DSD2D, I decided DSD3D should be built as a full level-set function code, using a totally local DSD boundary condition algorithm for the level-set function, phi, which did not rely on the gradient of the level-set function being one, |grad(phi)| = 1. The narrowband DSD2D solver was built on the assumption that |grad(phi)| could be driven to one, and near the boundaries of the explosive this condition was not being satisfied. Since the narrowband is typically no more than 10*dx wide, narrowband methods are discrete methods with a fixed, non-resolvable error, where the error is related to the thickness of the band: the narrower the band, the larger the errors. Such a solution represents a discrete approximation to the true solution and does not limit to the solution of the underlying PDEs under grid resolution.
A Comprehensive review on the open source hackable text editor-ATOM
NASA Astrophysics Data System (ADS)
Sumangali, K.; Borra, Lokesh; Suraj Mishra, Amol
2017-11-01
This document presents a comprehensive study of “Atom”, one of the best open-source code editors available, with many features built in to support a multitude of programming environments and to provide a more productive toolset for developers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salinger, Andrew; Phipps, Eric; Ostien, Jakob
2016-01-13
The Albany code is a general-purpose finite element code for solving partial differential equations (PDEs). Albany is a research code that demonstrates how a PDE code can be built by interfacing many of the open-source software libraries that are released under Sandia's Trilinos project. Part of the mission of Albany is to be a testbed for new Trilinos libraries, to refine their methods, usability, and interfaces. Albany includes hooks to optimization and uncertainty quantification algorithms, including those in Trilinos as well as those in the Dakota toolkit. Because of this, Albany is a desirable starting point for new code development efforts that wish to make heavy use of Trilinos. Albany is both a framework and the host for specific finite element applications. These applications have project names and can be controlled by configuration options when the code is compiled, but all are developed and released as part of the single Albany code base. These include the LCM, QCAD, FELIX, Aeras, and ATO applications.
Fourier-Bessel Particle-In-Cell (FBPIC) v0.1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehe, Remi; Kirchen, Manuel; Jalas, Soeren
The Fourier-Bessel Particle-In-Cell code is a scientific simulation software for relativistic plasma physics. It is a Particle-In-Cell code whose distinctive feature is to use a spectral decomposition in cylindrical geometry. This decomposition allows it to combine the advantages of spectral 3D Cartesian PIC codes (high accuracy and stability) with those of finite-difference cylindrical PIC codes with azimuthal decomposition (orders-of-magnitude speedup when compared to 3D simulations). The code is built on Python and can run both on CPU and GPU (the GPU runs being typically 1 or 2 orders of magnitude faster than the corresponding CPU runs). The code has the exact same output format as the open-source PIC codes Warp and PIConGPU (openPMD format: openpmd.org) and has a very similar input format to Warp (a Python script with many similarities). There is therefore tight interoperability between Warp and FBPIC, and this interoperability will increase even more in the future.
openPSTD: The open source pseudospectral time-domain method for acoustic propagation
NASA Astrophysics Data System (ADS)
Hornikx, Maarten; Krijnen, Thomas; van Harten, Louis
2016-06-01
An open source implementation of the Fourier pseudospectral time-domain (PSTD) method for computing the propagation of sound is presented, which is geared towards applications in the built environment. Being a wave-based method, PSTD captures phenomena like diffraction, but maintains efficiency in processing time and memory usage, as it allows spatial sampling close to the Nyquist criterion, thus keeping both the required spatial and temporal resolution coarse. In the implementation, the physical geometry is modeled as a composition of rectangular two-dimensional subdomains, hence initially restricting the implementation to orthogonal and two-dimensional situations. The strategy of using subdomains divides the problem domain into local subsets, which enables the simulation software to be built according to Object-Oriented Programming best practices and allows room for further computational parallelization. The software is built using the open source components Blender, Numpy, and Python, and has been published under an open source license itself as well. For accelerating the software, an option has been included to accelerate the calculations through a partial implementation of the code on the Graphical Processing Unit (GPU), which increases the throughput by up to fifteen times. The details of the implementation are reported, as well as the accuracy of the code.
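The core PSTD operation is the Fourier spectral derivative, which is exact to rounding for band-limited fields and is what permits sampling near the Nyquist criterion; a one-dimensional sketch (openPSTD's actual update, with its subdomain bookkeeping, is of course more involved):

    import numpy as np

    n, L = 64, 2 * np.pi
    x = np.arange(n) * L / n
    p = np.sin(3 * x)                              # a band-limited pressure field
    k = 2j * np.pi * np.fft.fftfreq(n, d=L / n)    # spectral derivative operator
    dpdx = np.real(np.fft.ifft(k * np.fft.fft(p)))
    print(np.max(np.abs(dpdx - 3 * np.cos(3 * x))))  # ~1e-13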
Rosenberg, Dori E; Huang, Deborah L; Simonovich, Shannon D; Belza, Basia
2013-04-01
To gain a better understanding of how the built environment impacts neighborhood-based physical activity among midlife and older adults with mobility disabilities, we conducted in-depth interviews with 35 adults over age 50 who used an assistive device and lived in King County, Washington, U.S. In addition, participants wore global positioning system (GPS) devices for 3 days prior to the interview. The GPS maps were used as prompts during the interviews. Open coding of the 35 interviews using latent content analysis resulted in key themes and subthemes that achieved consensus between coders. Two investigators independently coded the text of each interview. Participants were on average 67 years of age (range: 50-86) and predominantly used canes (57%), walkers (57%), or wheelchairs (46%). Key themes pertained to curb ramp availability and condition, sidewalk availability and condition, hills, aesthetics, lighting, ramp availability, weather, presence and features of crosswalks, availability of resting places and shelter on streets, paved or smooth walking paths, safety, and traffic on roads. A variety of built environment barriers and facilitators to neighborhood-based activity exist for midlife and older adults with mobility disabilities. Preparing our neighborhood environments for an aging population that uses assistive devices will be important to foster independence and health.
A Fixed-Point Phase Lock Loop in a Software Defined Radio
2002-09-01
code from a simulation model. This feature will allow easy implementation on an FPGA, as C can be easily converted to VHDL, the language required...this is equivalent to the MATLAB code implementation in Appendix A. The PD takes the input signal and multiplies it by the in-phase and...stop to 60 mph in 3.1 seconds (the fastest production car ever built is the Porsche Carrera twin turbo, which was tested at 0-60 mph in 3.1 seconds
NASA Astrophysics Data System (ADS)
Baran, Á.; Noszály, Cs.; Vertse, T.
2018-07-01
A renewed version of the computer code GAMOW (Vertse et al., 1982) is given, in which the difficulties in calculating broad neutron resonances are remedied. New types of phenomenological neutron potentials with strictly finite range are built in. The landscape of the S-matrix can be generated on a given domain of the complex wave number plane, and the S-matrix poles in the domain are localized. Normalized Gamow wave functions and trajectories of given poles can be calculated optionally.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobromir Panayotov; Andrew Grief; Brad J. Merrill
'Fusion for Energy' (F4E) develops, designs, and implements the European Test Blanket Systems (TBS) in ITER - Helium-Cooled Lithium-Lead (HCLL) and Helium-Cooled Pebble-Bed (HCPB). Safety demonstration is an essential element for the integration of TBS in ITER, and accident analyses are one of its critical segments. A systematic approach to the accident analyses was acquired under the F4E contract on TBS safety analyses. F4E technical requirements and AMEC and INL efforts resulted in the development of a comprehensive methodology for fusion breeding blanket accident analyses. It addresses the specificity of the breeding blanket designs, materials, and phenomena, and at the same time is consistent with the methodology already applied to ITER accident analyses. The methodology consists of several phases. First, the reference scenarios are selected on the basis of FMEA studies. Second, in elaborating the accident analysis specifications, we use phenomena identification and ranking tables to identify the requirements to be met by the code(s) and TBS models. Thus the limitations of the codes are identified, and possible solutions to be built into the models are proposed. These include, among others, the loose coupling of different codes or code versions in order to simulate multi-fluid flows and phenomena. The code selection and the issue of the accident analysis specifications conclude this second step. Next, the breeding blanket and ancillary systems models are built. In this work, challenges met and solutions used in the development of both MELCOR and RELAP5 code models of the HCLL and HCPB TBSs will be shared. The developed models are then qualified by comparison with finite element analyses, by code-to-code comparison, and by sensitivity studies. Finally, the qualified models are used for the execution of the accident analysis of a specific scenario. Where possible, the methodology phases are illustrated in the paper by a limited number of tables and figures. Detailed descriptions of each phase and its results, as well as the methodology's applications to the EU HCLL and HCPB TBSs, will be published in separate papers. The developed methodology is applicable to accident analyses of other TBSs to be tested in ITER, as well as to DEMO breeding blankets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
New, Joshua Ryan; Kumar, Jitendra; Hoffman, Forrest M.
Statement of the Problem: ASHRAE releases updates to 90.1, "Energy Standard for Buildings except Low-Rise Residential Buildings," every three years, resulting in a 3.7%-17.3% increase in energy efficiency for buildings with each release. This standard is adopted by or informs building codes in nations across the globe; it is the national standard for the US, and individual states elect which release year of the standard they will enforce. These codes are built upon Standard 169, "Climatic Data for Building Design Standards," the latest 2017 release of which defines climate zones based on 8,118 weather stations throughout the world and data from the past 8-25 years. These data may not be indicative of the weather that new buildings built today will see during their upcoming 30-120 year lifespan. Methodology & Theoretical Orientation: Using more modern, high-resolution datasets from climate satellites, IPCC climate models (PCM and HadGCM), high-performance computing resources (Titan), and new capabilities for clustering and optimization, the authors briefly analyzed different methods for redefining climate zones, using bottom-up analysis of multiple meteorological variables that subject matter experts selected as being important to energy consumption, rather than the heating/cooling degree days currently used. Findings: We analyzed the accuracy of the redefined climate zones compared to the current climate zones, examined how the climate zones moved under different climate change scenarios, and quantified the accuracy of these methods at the local level, at a national scale, for the US. Conclusion & Significance: Significant annual national energy and cost savings (billions of USD) could likely be realized by adjusting climate zones to take into account anticipated trends or scenarios in regional weather patterns.
NASA Technical Reports Server (NTRS)
Doane, George B., III; Armstrong, W. C.
1990-01-01
Propulsion stability (chugging and acoustic modes) and propellant valve control were investigated. As part of the activation of the new liquid propulsion test facilities, it is necessary to analyze total propulsion system stability. To accomplish this, several codes were built to run on desktop 386 machines. These codes enable one to analyze the stability questions associated with propellant feed systems. In addition, earlier work was adapted to this computing environment and furnished along with the other codes, giving those interested in high-frequency oscillatory combustion behavior (behavior that does not couple to the feed system) a set of codes for the study of proposed liquid rocket engines.
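For illustration only, the Python sketch below shows the kind of feed-system stability check such codes perform; the three-state lumped chug model and every parameter value are assumptions made for this sketch, not the codes described above. A linearized model of line flow, combustion time lag, and chamber pressure is assembled, and instability is flagged if any eigenvalue has a positive real part.

    # Lumped-parameter chug sketch (illustrative values only): line flow q,
    # chamber pressure p, lagged combustion response g. The system is stable
    # if all eigenvalues of the state matrix have negative real parts.
    import numpy as np

    L_line, R_line = 2.0, 0.5   # feedline inertance and resistance (assumed)
    C, Rt = 1.0, 0.8            # chamber capacitance and throat resistance
    tau, n = 0.6, 3.0           # combustion time lag and pressure gain

    A = np.array([
        [-R_line / L_line, -1.0 / L_line, 0.0],   # q' = (-R*q - p)/L
        [0.0, -1.0 / (Rt * C), (1.0 + n) / C],    # p' = ((1+n)*g - p/Rt)/C
        [1.0 / tau, 0.0, -1.0 / tau],             # g' = (q - g)/tau
    ])

    eig = np.linalg.eigvals(A)
    print("eigenvalues:", np.round(eig, 3))
    print("stable" if np.all(eig.real < 0) else "unstable (chug)")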
Convolutional coding at 50 Mbps for the Shuttle Ku-band return link
NASA Technical Reports Server (NTRS)
Batson, B. H.; Huth, G. K.
1976-01-01
Error-correcting coding is required for the 50 Mbps data link from the Shuttle Orbiter through the Tracking and Data Relay Satellite System (TDRSS) to the ground because of severe power limitations. Convolutional coding has been chosen because the decoding algorithms (sequential and Viterbi) provide significant coding gains at the required bit error probability of 10^-6 and can be implemented at 50 Mbps with moderate hardware. While a 50 Mbps sequential decoder has been built, the highest data rate achieved for a Viterbi decoder is 10 Mbps. Thus, five multiplexed 10 Mbps Viterbi decoders must be used to provide a 50 Mbps data rate. This paper discusses the tradeoffs that were considered when selecting the multiplexed Viterbi decoder approach for this application.
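As a concrete sketch of the two ideas in this abstract, the Python fragment below implements a rate-1/2 convolutional encoder and the demultiplexing arrangement that lets five slower decoders serve one fast link. The constraint-length-7 generators 171/133 (octal) are an assumption for illustration; the abstract does not state the exact code used.

    # Rate-1/2 convolutional encoder (assumed K=7, generators 171/133 octal)
    # plus round-robin demultiplexing of the data into five substreams, each
    # of which would get its own 10 Mbps encoder/Viterbi-decoder pair.
    G1, G2 = 0o171, 0o133

    def conv_encode(bits):
        """Emit two coded bits per data bit (rate 1/2)."""
        state, out = 0, []
        for b in bits:
            state = ((state << 1) | b) & 0x7F           # 7-bit shift register
            out.append(bin(state & G1).count("1") % 2)  # parity against G1
            out.append(bin(state & G2).count("1") % 2)  # parity against G2
        return out

    data = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
    substreams = [data[i::5] for i in range(5)]  # 50 Mbps -> 5 x 10 Mbps
    for i, s in enumerate(substreams):
        print(f"decoder {i}: data {s} -> coded {conv_encode(s)}")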
NASA Technical Reports Server (NTRS)
Harris, Charles E.; Starnes, James H., Jr.; Newman, James C., Jr.
1995-01-01
NASA is developing a 'tool box' that includes a number of advanced structural analysis computer codes which, taken together, represent the comprehensive fracture mechanics capability required to predict the onset of widespread fatigue damage. These structural analysis tools have complementary and specialized capabilities ranging from a finite-element-based stress-analysis code for two- and three-dimensional built-up structures with cracks to a fatigue and fracture analysis code that uses stress-intensity factors and material-property data found in 'look-up' tables or from equations. NASA is conducting critical experiments necessary to verify the predictive capabilities of the codes, and these tests represent a first step in the technology-validation and industry-acceptance processes. NASA has established cooperative programs with aircraft manufacturers to facilitate the comprehensive transfer of this technology by making these advanced structural analysis codes available to industry.
5D Tempest simulations of kinetic edge turbulence
NASA Astrophysics Data System (ADS)
Xu, X. Q.; Xiong, Z.; Cohen, B. I.; Cohen, R. H.; Dorr, M. R.; Hittinger, J. A.; Kerbel, G. D.; Nevins, W. M.; Rognlien, T. D.; Umansky, M. V.; Qin, H.
2006-10-01
Results are presented from the development and application of TEMPEST, a nonlinear five-dimensional (3d2v) gyrokinetic continuum code. The simulation results and theoretical analysis include studies of H-mode edge plasma neoclassical transport and turbulence in real divertor geometry and its relationship to plasma flow generation with zero external momentum input, including the important orbit-squeezing effect due to the large electric-field flow shear in the edge. In order to extend the code to 5D, we have formulated a set of fully nonlinear electrostatic gyrokinetic equations and a fully nonlinear gyrokinetic Poisson equation that are valid for both neoclassical and turbulence simulations. Our 5D gyrokinetic code is built on the 4D version of the TEMPEST neoclassical code, extended to a fifth dimension in the binormal direction. The code is able to simulate either a full torus or a toroidal segment. Progress on performing 5D turbulence simulations will be reported.
Verification of unfold error estimates in the unfold operator code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fehl, D.L.; Biggs, F.
Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low-energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.
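The Monte Carlo check described above can be sketched in a few lines. The Python version below is not UFO itself; the response matrix, spectrum shape, and least-squares unfold are illustrative stand-ins. It compares an analytic error-matrix estimate against the spread of 100 noisy unfolds.

    # Simulate 100 data sets with 5% Gaussian noise, unfold each with a
    # least-squares pseudo-inverse, and compare the Monte Carlo spread with
    # the analytic (error-matrix) estimate of the unfold uncertainty.
    import numpy as np

    rng = np.random.default_rng(0)
    E = np.linspace(1.0, 30.0, 40)                      # energy grid, keV
    s_true = E**2 / (np.exp(E / 10.0) - 1.0)            # 10 keV blackbody-like
    R = np.array([np.exp(-0.5 * ((E - c) / 4.0) ** 2)   # overlapping responses
                  for c in np.linspace(3, 27, 12)])
    d0 = R @ s_true                                     # noiseless measurements

    Rp = np.linalg.pinv(R)                              # least-squares unfold
    cov_d = np.diag((0.05 * d0) ** 2)                   # 5% data covariance
    cov_analytic = Rp @ cov_d @ Rp.T                    # "error matrix" estimate

    trials = np.array([Rp @ (d0 + rng.normal(0, 0.05 * d0))
                       for _ in range(100)])
    sigma_mc = trials.std(axis=0)                       # Monte Carlo spread
    sigma_an = np.sqrt(np.diag(cov_analytic))
    print(np.round(sigma_mc / sigma_an, 2))             # ~1 where methods agree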
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbride, Theresa L.
2009-03-30
This is a case study of the Lakeland, Florida, Habitat for Humanity affiliate, which has partnered with DOE's Building America program to build homes that achieve energy savings of 30% or more over the Building America baseline home (a home built to the 1993 Model Energy Code). The article includes a description of the energy-efficiency features used. The Lakeland affiliate built several of its homes with ducts in conditioned space, which minimizes heat losses and gains. They also used high-efficiency SEER 14 air conditioners; radiant barriers in the roof to keep attics cooler; above-code, high-performance, dual-pane, vinyl-framed, low-emissivity windows; a passive fresh air duct to the air handler; and duct blaster and blower door testing of every home to ensure the home's air tightness. This case study was also prepared as a flier titled "High Performance Builder Spotlight: Lakeland Habitat for Humanity, Lakeland, Florida," which was cleared as PNNL-SA-59068 and distributed at the International Builders’ Show, Feb 13-16, 2008, in Orlando, Florida.
Springate, David A; Kontopantelis, Evangelos; Ashcroft, Darren M; Olier, Ivan; Parisi, Rosa; Chamapiwa, Edmore; Reeves, David
2014-01-01
Lists of clinical codes are the foundation for research undertaken using electronic medical records (EMRs). If clinical code lists are not available, reviewers are unable to determine the validity of research, full study replication is impossible, researchers are unable to make effective comparisons between studies, and the construction of new code lists is subject to much duplication of effort. Despite this, the publication of clinical codes is rarely if ever a requirement for obtaining grants, validating protocols, or publishing research. In a representative sample of 450 EMR primary research articles indexed on PubMed, we found that only 19 (5.1%) were accompanied by a full set of published clinical codes and 32 (8.6%) stated that code lists were available on request. To help address these problems, we have built an online repository where researchers using EMRs can upload and download lists of clinical codes. The repository will enable clinical researchers to better validate EMR studies, build on previous code lists and compare disease definitions across studies. It will also assist health informaticians in replicating database studies, tracking changes in disease definitions or clinical coding practice through time and sharing clinical code information across platforms and data sources as research objects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagler, Robert; Moeller, Paul
Sirepo is an open source framework for cloud computing. The graphical user interface (GUI) for Sirepo, also known as the client, executes in any HTML5 compliant web browser on any computing platform, including tablets. The client is built in JavaScript, making use of the following open source libraries: Bootstrap, which is fundamental for cross-platform web applications; AngularJS, which provides a model–view–controller (MVC) architecture and GUI components; and D3.js, which provides interactive plots and data-driven transformations. The Sirepo server is built on the following Python technologies: Flask, which is a lightweight framework for web development; Jinja, which is a secure and widely used templating language; and Werkzeug, a utility library that is compliant with the WSGI standard. We use Nginx as the HTTP server and proxy, which provides a scalable event-driven architecture. The physics codes supported by Sirepo execute inside a Docker container. One of the codes supported by Sirepo is Warp. Warp is a particle-in-cell (PIC) code designed to simulate high-intensity charged particle beams and plasmas in both the electrostatic and electromagnetic regimes, with a wide variety of integrated physics models and diagnostics. At present, Sirepo supports a small subset of Warp's capabilities.
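As a minimal sketch of the server-side pattern named above (a Flask WSGI app behind Nginx), not Sirepo's actual API, with a hypothetical endpoint and payload:

    # A toy Flask route that would accept a simulation request from the GUI
    # and acknowledge it; a real server would queue the job in a Docker
    # container. Endpoint and field names are invented for illustration.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/run-simulation", methods=["POST"])
    def run_simulation():
        params = request.get_json()   # simulation parameters from the client
        return jsonify({"state": "queued", "params": params})

    if __name__ == "__main__":
        app.run(port=8000)  # Nginx would proxy to this app in production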
Climate tools in mainstream Linux distributions
NASA Astrophysics Data System (ADS)
McKinstry, Alastair
2015-04-01
Debian/meteorology is a project to integrate climate tools and analysis software into the mainstream Debian/Ubuntu Linux distributions. This work describes lessons learnt and recommends practices for scientific software to be adopted and maintained in OS distributions. In addition to standard analysis tools (cdo, grads, ferret, metview, ncl, etc.), software used by the Earth System Grid Federation was chosen for integration, to enable ESGF portals to be built on this base; however, exposing scientific codes via web APIs exposes security weaknesses that are normally ignorable. How tools are hardened, and what changes are required to handle security upgrades, are described. Secondly, integrating libraries and components (e.g., Python modules) requires planning by their writers: it is not sufficient to assume users can upgrade their code when incompatible changes are made. Here, practices are recommended to enable upgrades and co-installability of C, C++, Fortran and Python codes. Finally, software packages such as NetCDF and HDF5 can be built in multiple configurations. Tools may then expect incompatible versions of these libraries (e.g., serial and parallel) to be simultaneously available; how this was solved in Debian using "pkg-config" and shared library interfaces is described, and best practices for software writers to enable this are summarised.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mokhov, Nikolai
MARS is a Monte Carlo code for inclusive and exclusive simulation of three-dimensional hadronic and electromagnetic cascades, muon, heavy-ion and low-energy neutron transport in accelerator, detector, spacecraft and shielding components in the energy range from a fraction of an electronvolt up to 100 TeV. Recent developments in the MARS15 physical models of hadron, heavy-ion and lepton interactions with nuclei and atoms include a new nuclear cross section library, a model for soft pion production, the cascade-exciton model, the quark gluon string models, deuteron-nucleus and neutrino-nucleus interaction models, detailed description of negative hadron and muon absorption, and a unified treatment of muon, charged hadron and heavy-ion electromagnetic interactions with matter. New algorithms are implemented into the code and thoroughly benchmarked against experimental data. The code capabilities to simulate cascades and generate a variety of results in complex media have also been enhanced. Other changes in the current version concern the improved photo- and electro-production of hadrons and muons, improved algorithms for the 3-body decays, particle tracking in magnetic fields, synchrotron radiation by electrons and muons, significantly extended histogramming capabilities and material description, and improved computational performance. In addition to direct energy deposition calculations, a new set of fluence-to-dose conversion factors for all particles, including neutrinos, is built into the code. The code includes new modules for calculation of Displacement-per-Atom and nuclide inventory. The powerful ROOT geometry and visualization model implemented in MARS15 provides a large set of geometrical elements with a possibility of producing composite shapes and assemblies and their 3D visualization, along with a possible import/export of geometry descriptions created by other codes (via the GDML format) and CAD systems (via the STEP format). The built-in MARS-MAD Beamline Builder (MMBLB) was redesigned for use with the ROOT geometry package, which allows a very efficient and highly accurate description, modeling and visualization of beam-loss-induced effects in arbitrary beamlines and accelerator lattices. The MARS15 code includes links to the MCNP-family codes for neutron and photon production and transport below 20 MeV, to the ANSYS code for thermal and stress analyses, and to the STRUCT code for multi-turn particle tracking in large synchrotrons and collider rings.
On Frequency Offset Estimation Using the iNET Preamble in Frequency Selective Fading Channels
2014-03-01
ASM fields; (bottom) the relationship between the indexes of the received samples r(n), the signal samples s(n), the preamble samples p(n), and the short…frequency offset estimators for SOQPSK-TG equipped with the iNET preamble and operating in ISI channels. Four of the five estimators examined here are…sync marker (ASM), and data bits (an LDPC codeword). The availability of a preamble introduces the possibility of data-aided synchronization in…
Constructing a Sense of Story: One Block at a Time
ERIC Educational Resources Information Center
Robertson-Eletto, Joanne; Guha, Smita; Marinelli, Marina
2017-01-01
This photo essay focuses upon the literacy practices of two groups of preschoolers as they built, illustrated, and dictated stories in response to their participation in a "Castle Project." Data, including literacy artifacts, photodocumentation, sociodramatic play scenarios, and conversations are qualitatively analyzed, coded, and…
A Mercury Model of Atmospheric Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christensen, Alex B.; Chodash, Perry A.; Procassini, R. J.
Using the particle transport code Mercury, accurate models were built of the two sources used in Operation BREN, a series of radiation experiments performed by the United States during the 1960s. In the future, these models will be used to validate Mercury’s ability to simulate atmospheric transport.
49 CFR 41.117 - Buildings built with Federal assistance.
Code of Federal Regulations, 2011 CFR
2011-10-01
... architect's authenticated verifications of seismic design codes, standards, and practices used in the design... financial assistance, after July 14, 1993 must be designed and constructed in accord with seismic standards... of compliance with the seismic design and construction requirements of this part is required prior to...
This manual was created to help homeowners choose sustainable strategies for restoring and rehabilitating many of the smaller, Victorian-style, wood-framed houses built in Northern California during the late 1800s and early 1900s.
Bandwidth-Efficient Communication through 225 MHz Ka-band Relay Satellite Channel
NASA Technical Reports Server (NTRS)
Downey, Joseph; Downey, James; Reinhart, Richard C.; Evans, Michael Alan; Mortensen, Dale John
2016-01-01
The communications and navigation space infrastructure of the National Aeronautics and Space Administration (NASA) consists of a constellation of relay satellites (called Tracking and Data Relay Satellites (TDRS)) and a global set of ground stations to receive and deliver data to researchers around the world from mission spacecraft throughout the solar system. Planning is underway to enhance and transform the infrastructure over the coming decade. Key to the upgrade will be the simultaneous and efficient use of relay transponders to minimize cost and operations while supporting science and exploration spacecraft. Efficient use of transponders necessitates bandwidth-efficient communications to best use and maximize data throughput within the allocated spectrum. Experiments conducted with NASA's Space Communication and Navigation (SCaN) Testbed on the International Space Station provide a unique opportunity to evaluate advanced communication techniques, such as bandwidth-efficient modulations, in an operational flight system. Demonstrations of these new techniques in realistic flight conditions provide critical experience and reduce the risk of using these techniques in future missions. Efficient use of spectrum is enabled by using high-order modulations coupled with efficient forward error correction codes. This paper presents a high-rate, bandwidth-efficient waveform operating over the 225 MHz Ka-band service of the TDRS System (TDRSS). The testing explores the application of Gaussian Minimum Shift Keying (GMSK), 2/4/8-phase shift keying (PSK) and 16/32-amplitude PSK (APSK), providing over three bits-per-second-per-Hertz (3 b/s/Hz) of modulation efficiency, combined with various LDPC encoding rates to maximize throughput. With a symbol rate of 200 Mbaud, coded data rates of 1000 Mbps were tested in the laboratory and up to 800 Mbps over the TDRS 225 MHz channel. This paper presents the high-rate waveform design, channel characteristics, performance results, compensation techniques for filtering and equalization, and architecture considerations going forward for efficient use of NASA's infrastructure.
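The quoted rates follow directly from the symbol rate, the bits per symbol, and the code rate; the short Python sketch below checks this arithmetic. The particular set of LDPC code rates is assumed for illustration.

    # Information rate = symbol rate x bits/symbol x LDPC code rate.
    SYMBOL_RATE = 200e6   # 200 Mbaud, as stated in the abstract

    mods = {"GMSK": 1, "QPSK": 2, "8-PSK": 3, "16-APSK": 4, "32-APSK": 5}
    for name, bits in mods.items():
        for code_rate in (1/2, 2/3, 4/5, 9/10):      # assumed LDPC rates
            rate_mbps = SYMBOL_RATE * bits * code_rate / 1e6
            print(f"{name:7s} LDPC {code_rate:.2f}: {rate_mbps:6.0f} Mbps")
    # 32-APSK fills the channel at 200e6 x 5 = 1000 Mbps of coded bits;
    # a rate-4/5 LDPC code then carries 800 Mbps of information, which is
    # consistent with the over-the-air result quoted above.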
Rosenberg, Dori E.
2013-01-01
Purpose: To gain a better understanding of how the built environment impacts neighborhood-based physical activity among midlife and older adults with mobility disabilities. Design and methods: We conducted in-depth interviews with 35 adults over age 50 who used an assistive device and lived in King County, Washington, U.S. In addition, participants wore Global Positioning System (GPS) devices for 3 days prior to the interview. The GPS maps were used as prompts during the interviews. Open coding of the 35 interviews using latent content analysis resulted in key themes and subthemes that achieved consensus between coders. Two investigators independently coded the text of each interview. Results: Participants were on average 67 years of age (range: 50–86) and predominantly used canes (57%), walkers (57%), or wheelchairs (46%). Key themes pertained to curb ramp availability and condition, sidewalk availability and condition, hills, aesthetics, lighting, ramp availability, weather, presence and features of crosswalks, availability of resting places and shelter on streets, paved or smooth walking paths, safety, and traffic on roads. Implications: A variety of built environment barriers and facilitators to neighborhood-based activity exist for midlife and older adults with mobility disabilities. Preparing our neighborhood environments for an aging population that uses assistive devices will be important to foster independence and health. PMID:23010096
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bull, Jeffrey S.
This presentation describes how to build MCNP 6.2. MCNP®* 6.2 can be compiled on Macs, PCs, and most Linux systems. It can also be built for parallel execution using both OpenMP and Message Passing Interface (MPI) methods. MCNP6 requires Fortran, C, and C++ compilers to build the code.
ERIC Educational Resources Information Center
Hollenbeck, Michelle D.
1997-01-01
For the past five years, Andover, Kansas middle-schoolers in an amateur radio club and class have sent and received Morse code messages, assembled and soldered circuit boards, designed and built antenna systems, and used computer programs to analyze radio communications problems. A successful bond issue financed a ham shack enabling students to…
Extended Colour--Some Methods and Applications.
ERIC Educational Resources Information Center
Dean, P. J.; Murkett, A. J.
1985-01-01
Describes how color graphics are built up on microcomputer displays and how a range of colors can be produced. Discusses the logic of color formation, noting that adding/subtracting color can be conveniently demonstrated. Color generating techniques in physics (resistor color coding and continuous spectrum production) are given with program…
The Need for Vendor Source Code at NAS. Revised
NASA Technical Reports Server (NTRS)
Carter, Russell; Acheson, Steve; Blaylock, Bruce; Brock, David; Cardo, Nick; Ciotti, Bob; Poston, Alan; Wong, Parkson; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
The Numerical Aerodynamic Simulation (NAS) Facility has a long-standing practice of maintaining buildable source code for installed hardware. There are two reasons for this: NAS's designated pathfinding role, and the need to maintain a smoothly running operational capacity given the widely diversified nature of the vendor installations. NAS needs to maintain support capabilities when vendors are not able; to diagnose and remedy hardware or software problems where applicable; and to support ongoing system software development activities whether or not the relevant vendors feel support is justified. This note provides an informal history of these activities at NAS and brings together the general principles that drive the requirement that systems integrated into the NAS environment run binaries built from source code, onsite.
Stress analysis and evaluation of a rectangular pressure vessel
NASA Astrophysics Data System (ADS)
Rezvani, M. A.; Ziada, H. H.; Shurrab, M. S.
1992-10-01
This study addresses the structural analysis and evaluation of a nonstandard rectangular pressure vessel designed to house equipment for drilling and collecting samples from Hanford radioactive waste storage tanks. It had to be qualified according to the ASME Boiler and Pressure Vessel Code, Section VIII; however, it had the cover plate bolted along the long face, a configuration not addressed by the code. The finite element method was used to calculate the stresses resulting from internal pressure; these stresses were then used to evaluate and qualify the vessel. Fatigue is not a concern; thus, the vessel can be built according to Section VIII, Division 1 instead of Division 2. The stress analysis was checked against the code. A stayed plate was added to stiffen the long side of the vessel.
Global magnetosphere simulations using constrained-transport Hall-MHD with CWENO reconstruction
NASA Astrophysics Data System (ADS)
Lin, L.; Germaschewski, K.; Maynard, K. M.; Abbott, S.; Bhattacharjee, A.; Raeder, J.
2013-12-01
We present a new CWENO (Centrally-Weighted Essentially Non-Oscillatory) reconstruction based MHD solver for the OpenGGCM global magnetosphere code. The solver was built using libMRC, a library for creating efficient parallel PDE solvers on structured grids. The use of libMRC gives us access to its core functionality: an automated code generation framework that takes a user-provided PDE right-hand side in symbolic form and generates efficient, computer-architecture-specific parallel code. libMRC also supports block-structured adaptive mesh refinement and implicit time stepping through integration with the PETSc library. We validate the new CWENO Hall-MHD solver against existing solvers, both in standard test problems and in global magnetosphere simulations.
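The symbolic-RHS-to-code idea can be illustrated with SymPy; this is a sketch of the concept, not libMRC's actual interface, and the momentum-flux expression and its closure are illustrative. A flux term is written symbolically and compilable C is emitted for it.

    # Write a 1-D momentum flux F = rho*u^2 + p symbolically in conserved
    # variables, then emit C code for it (a stand-in for the kind of
    # generated inner-loop code a framework like libMRC produces).
    import sympy as sp

    rho, rho_u, gamma, e = sp.symbols("rho rho_u gamma e")
    u = rho_u / rho                   # velocity from conserved variables
    p = (gamma - 1) * e               # illustrative pressure closure
    flux = rho_u * u + p              # momentum flux

    print(sp.ccode(sp.simplify(flux), assign_to="F_momentum"))
    # -> F_momentum = e*(gamma - 1) + pow(rho_u, 2)/rho;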
SIGNUM: A Matlab, TIN-based landscape evolution model
NASA Astrophysics Data System (ADS)
Refice, A.; Giachetta, E.; Capolongo, D.
2012-08-01
Several numerical landscape evolution models (LEMs) have been developed to date, and many are available as open source codes. Most are written in efficient programming languages such as Fortran or C, but often require additional code efforts to plug in to more user-friendly data analysis and/or visualization tools to ease interpretation and scientific insight. In this paper, we present an effort to port a common core of accepted physical principles governing landscape evolution directly into a high-level language and data analysis environment such as Matlab. SIGNUM (an acronym for Simple Integrated Geomorphological Numerical Model) is an independent and self-contained Matlab, TIN-based landscape evolution model, built to simulate topography development at various space and time scales. SIGNUM is presently capable of simulating hillslope processes such as linear and nonlinear diffusion, fluvial incision into bedrock, spatially varying surface uplift (which can be used to simulate changes in base level), thrusting and faulting, as well as the effects of climate changes. Although based on accepted and well-known processes and algorithms in its present version, it is built with a modular structure that allows the simulated physical processes to be easily modified and upgraded to suit virtually any user's needs. The code is conceived as an open-source project and is thus an ideal tool for both research and didactic purposes, thanks to the high-level nature of the Matlab environment and its popularity among the scientific community. In this paper, the simulation code is presented together with some simple examples of surface evolution, and guidelines for the development of new modules and algorithms are proposed.
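As a toy version of one SIGNUM process, linear hillslope diffusion dz/dt = kappa * del^2(z), the sketch below uses Python and a regular grid rather than Matlab and a TIN, purely to illustrate the physics; all parameter values are arbitrary.

    # Explicit (FTCS) linear diffusion of topography z on a periodic grid.
    import numpy as np

    def diffuse(z, kappa=0.01, dx=10.0, dt=50.0, steps=100):
        """One FTCS run; stable while dt <= dx**2 / (4*kappa)."""
        for _ in range(steps):
            lap = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
                   np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z) / dx**2
            z = z + dt * kappa * lap
        return z

    z0 = np.zeros((50, 50))
    z0[25, 25] = 100.0            # an initial bump of relief
    print(diffuse(z0).max())      # relief decays as the bump diffuses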
Progressive Fracture of Fiber Composite Build-Up Structures
NASA Technical Reports Server (NTRS)
Gotsis, Pascal K.; Chamis, C. C.; Minnetyan, Levon
1997-01-01
Damage progression and fracture of built-up composite structures is evaluated by using computational simulation. The objective is to examine the behavior and response of a stiffened composite (0/±45/90)s6 laminate panel by simulating the damage initiation, growth, accumulation, progression and propagation to structural collapse. An integrated computer code, CODSTRAN, was augmented for the simulation of the progressive damage and fracture of built-up composite structures under mechanical loading. Results show that damage initiation and progression have a significant effect on the structural response. The influence of the type of loading on the damage initiation, propagation and final fracture of the built-up composite panel is investigated.
Progressive Fracture of Fiber Composite Build-Up Structures
NASA Technical Reports Server (NTRS)
Minnetyan, Levon; Gotsis, Pascal K.; Chamis, C. C.
1997-01-01
Damage progression and fracture of built-up composite structures is evaluated by using computational simulation. The objective is to examine the behavior and response of a stiffened composite (0/±45/90)s6 laminate panel by simulating the damage initiation, growth, accumulation, progression and propagation to structural collapse. An integrated computer code, CODSTRAN, was augmented for the simulation of the progressive damage and fracture of built-up composite structures under mechanical loading. Results show that damage initiation and progression have a significant effect on the structural response. The influence of the type of loading on the damage initiation, propagation and final fracture of the built-up composite panel is investigated.
Spriggs, M J; Sumner, R L; McMillan, R L; Moran, R J; Kirk, I J; Muthukumaraswamy, S D
2018-04-30
The Roving Mismatch Negativity (MMN), and Visual LTP paradigms are widely used as independent measures of sensory plasticity. However, the paradigms are built upon fundamentally different (and seemingly opposing) models of perceptual learning; namely, Predictive Coding (MMN) and Hebbian plasticity (LTP). The aim of the current study was to compare the generative mechanisms of the MMN and visual LTP, therefore assessing whether Predictive Coding and Hebbian mechanisms co-occur in the brain. Forty participants were presented with both paradigms during EEG recording. Consistent with Predictive Coding and Hebbian predictions, Dynamic Causal Modelling revealed that the generation of the MMN modulates forward and backward connections in the underlying network, while visual LTP only modulates forward connections. These results suggest that both Predictive Coding and Hebbian mechanisms are utilized by the brain under different task demands. This therefore indicates that both tasks provide unique insight into plasticity mechanisms, which has important implications for future studies of aberrant plasticity in clinical populations. Copyright © 2018 Elsevier Inc. All rights reserved.
An Overview of the XGAM Code and Related Software for Gamma-ray Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Younes, W.
2014-11-13
The XGAM spectrum-fitting code and associated software were developed specifically to analyze the complex gamma-ray spectra that can result from neutron-induced reactions. The XGAM code is designed to fit a spectrum over the entire available gamma-ray energy range as a single entity, in contrast to the more traditional piecewise approaches. This global-fit philosophy enforces background continuity as well as consistency between local and global behavior throughout the spectrum, and in a natural way. This report presents XGAM and the suite of programs built around it, with an emphasis on how they fit into an overall analysis methodology for complex gamma-ray data. An application to the analysis of time-dependent delayed gamma-ray yields from 235U fission is shown in order to showcase the codes and how they interact.
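The global-fit philosophy can be sketched as follows (Python; this is a schematic of the idea, not the XGAM algorithm itself, and the peak count, peak shapes, and linear continuum are illustrative): all peaks and the background are fit over the full range as one least-squares problem, so the background stays continuous across peak regions.

    # Fit a linear continuum plus N Gaussian peaks to a whole spectrum in a
    # single curve_fit call, rather than fitting peak regions piecewise.
    import numpy as np
    from scipy.optimize import curve_fit

    def model(E, a0, a1, *peaks):
        y = a0 + a1 * E                          # continuous background
        for i in range(0, len(peaks), 3):
            amp, mu, sig = peaks[i:i + 3]
            y = y + amp * np.exp(-0.5 * ((E - mu) / sig) ** 2)
        return y

    E = np.linspace(100, 1500, 700)              # energy grid, keV
    rng = np.random.default_rng(1)
    truth = model(E, 50, -0.02, 400, 511, 8, 250, 1173, 10)
    data = rng.poisson(truth)                    # counting statistics

    p0 = [40, 0, 300, 500, 10, 200, 1180, 10]    # rough initial guesses
    popt, pcov = curve_fit(model, E, data, p0=p0)
    print(np.round(popt, 2))                     # recovered parameters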
NASA Astrophysics Data System (ADS)
Palma, V.; Carli, M.; Neri, A.
2011-02-01
In this paper a multi-view Distributed Video Coding scheme for mobile applications is presented. Specifically, a new fusion technique between temporal and spatial side information in the Zernike moments domain is proposed. Distributed video coding introduces a flexible architecture that enables the design of very low-complexity video encoders compared to their traditional counterparts. The main goal of our work is to generate at the decoder the side information that optimally blends temporal and inter-view data. Multi-view distributed coding performance strongly depends on the quality of the side information built at the decoder. To improve that quality, spatial view compensation/prediction in the Zernike moments domain is applied. Spatial and temporal motion activity are fused together to obtain the overall side information. The proposed method has been evaluated in terms of rate-distortion performance for different inter-view and temporal estimation quality conditions.
Mason, Craig A.; Lombard, Joanna L.; Martinez, Frank; Plater-Zyberk, Elizabeth; Spokane, Arnold R.; Newman, Frederick L.; Pantin, Hilda; Szapocznik, José
2009-01-01
Background Research on contextual and neighborhood effects increasingly includes the built (physical) environment's influences on health and social well-being. A population-based study examined whether architectural features of the built environment theorized to promote observations and social interactions (e.g., porches, windows) predict Hispanic elders’ psychological distress. Methods Coding of built environment features of all 3,857 lots across 403 blocks in East Little Havana, Florida, and enumeration of elders in 16,000 households was followed by assessments of perceived social support and psychological distress in a representative sample of 273 low socioeconomic status (SES) Hispanic elders. Structural-equation modeling was used to assess relationships between block-level built environment features, elders’ perceived social support, and psychological distress. Results Architectural features of the front entrance such as porches that promote visibility from a building's exterior were positively associated with perceived social support. In contrast, architectural features such as window areas that promote visibility from a building's interior were negatively associated with perceived social support. Perceived social support in turn was associated with reduced psychological distress after controlling for demographics. Additionally, perceived social support mediated the relationship of built environment variables to psychological distress. Conclusions Architectural features that facilitate direct, in-person interactions may be beneficial for Hispanic elders’ mental health. PMID:19196696
LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code
NASA Technical Reports Server (NTRS)
Radhakrishnan, K.
2000-01-01
A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).
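In the same spirit, a minimal stiff-kinetics example follows (Python, using SciPy's LSODA method, a descendant of the LSODE family named above; the two-step mechanism and rate constants are toy values).

    # A -> B (fast) -> C (slow): widely separated rates make the ODE system
    # stiff, which is exactly the regime implicit integrators handle well.
    from scipy.integrate import solve_ivp

    k1, k2 = 1.0e4, 1.0e-2

    def rhs(t, y):
        a, b, c = y
        return [-k1 * a,           # A consumed quickly
                k1 * a - k2 * b,   # B produced quickly, consumed slowly
                k2 * b]            # C accumulates

    sol = solve_ivp(rhs, (0.0, 100.0), [1.0, 0.0, 0.0], method="LSODA",
                    rtol=1e-8, atol=1e-12)
    print(sol.y[:, -1])   # ~[0, e**-1, 1 - e**-1] at t = 100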
Ultra Safe And Secure Blasting System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, M M
2009-07-27
The Ultra is a blasting system designed for special applications where the risk and consequences of unauthorized demolition or blasting are so great that the use of an extraordinarily safe and secure blasting system is justified. Such a blasting system is connected and logically welded together through digital code-linking as part of the blasting system set-up and initialization process. The Ultra's security is so robust that it will defeat attempts at unauthorized detonation even by the people who designed and built the components. Anyone attempting to gain unauthorized control of the system by substituting components or tapping into communications lines will be thwarted by their inability to provide encrypted authentication. Authentication occurs through the use of codes that are generated by the system during initialization code-linking; the codes remain unknown to anyone, including the authorized operator. Once code-linked, a closed system has been created. The system requires all components to be connected as they were during initialization, as well as a unique code entered by the operator for function and blasting.
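A conceptual sketch of code-linking follows (Python; the Ultra's actual protocol is not published here, so the keyed-MAC scheme below is only an analogy): components derive a shared secret at initialization, and every later command must carry an authentication tag keyed by that secret, so substituted components or tapped lines cannot authenticate.

    # Shared secret generated at set-up; never shown to any operator or
    # designer. Commands without a valid tag are rejected.
    import hmac, hashlib, secrets

    link_key = secrets.token_bytes(32)

    def sign(command: bytes) -> bytes:
        return hmac.new(link_key, command, hashlib.sha256).digest()

    def verify(command: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(tag, sign(command))

    cmd = b"ARM:channel=3"
    tag = sign(cmd)
    print(verify(cmd, tag))                    # True: linked component
    print(verify(b"DETONATE:channel=3", tag))  # False: forged command fails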
Regolith X-Ray Imaging Spectrometer (REXIS) Aboard the OSIRIS-REx Asteroid Sample Return Mission
NASA Astrophysics Data System (ADS)
Masterson, R. A.; Chodas, M.; Bayley, L.; Allen, B.; Hong, J.; Biswas, P.; McMenamin, C.; Stout, K.; Bokhour, E.; Bralower, H.; Carte, D.; Chen, S.; Jones, M.; Kissel, S.; Schmidt, F.; Smith, M.; Sondecker, G.; Lim, L. F.; Lauretta, D. S.; Grindlay, J. E.; Binzel, R. P.
2018-02-01
The Regolith X-ray Imaging Spectrometer (REXIS) is the student collaboration experiment proposed and built by an MIT-Harvard team, launched aboard NASA's OSIRIS-REx asteroid sample return mission. REXIS complements the scientific investigations of other OSIRIS-REx instruments by determining the relative abundances of key elements present on the asteroid's surface by measuring the X-ray fluorescence spectrum (stimulated by the natural solar X-ray flux) over the range of energies 0.5 to 7 keV. REXIS consists of two components: a main imaging spectrometer with a coded aperture mask and a separate solar X-ray monitor to account for the Sun's variability. In addition to element abundance ratios (relative to Si) pinpointing the asteroid's most likely meteorite association, REXIS also maps elemental abundance variability across the asteroid's surface using the asteroid's rotation as well as the spacecraft's orbital motion. Image reconstruction at the highest resolution is facilitated by the coded aperture mask. Through this operation, REXIS will be the first application of X-ray coded aperture imaging to planetary surface mapping, making this student-built instrument a pathfinder toward future planetary exploration. To date, 60 students at the undergraduate and graduate levels have been involved with the REXIS project, with the hands-on experience translating to a dozen Master's and Ph.D. theses and other student publications.
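Coded aperture imaging itself can be demonstrated in a few lines (Python; the mask pattern, array sizes, and simple correlation decoder below are illustrative, not the REXIS design): the detector records the scene convolved with the mask, and correlating with a rebalanced copy of the mask recovers the sources.

    # Encode a two-source scene through a random binary mask, then decode by
    # correlating the shadowgram with the rebalanced (+1/-1) mask.
    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(2)
    mask = rng.integers(0, 2, (33, 33)).astype(float)   # open/closed cells
    scene = np.zeros((64, 64))
    scene[20, 30] = 1.0
    scene[40, 15] = 0.5

    detector = fftconvolve(scene, mask, mode="same")    # shadowgram
    decoder = 2 * mask - 1                              # +1 open, -1 closed
    image = fftconvolve(detector, decoder[::-1, ::-1], mode="same")

    peak = np.unravel_index(np.argmax(image), image.shape)
    print(peak)   # reconstruction peaks near the brightest source (20, 30)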
75 FR 26841 - Petition for Waiver of Compliance
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-12
... definition of "antiquated" is being built prior to the end of World War II, even though this equipment is... equipment in steam and diesel locomotive powered excursion service on the entire trackage of the Great Lakes... accordance with part 211 of title 49, Code of Federal Regulations (CFR), notice is hereby given that the...
Borescope Device Takes Impressions In Ducts
NASA Technical Reports Server (NTRS)
Walter, Richard F.; Turner, Laura J.
1990-01-01
Maneuverable device built around borescope equipped to make impression molds of welded joints on interior surfaces of ducts. Molds then examined to determine degrees of mismatch in welds. Device inserted in duct, and color-coded handles on ends of cables used to articulate head to maneuver around corners. Use of device fairly easy and requires little training.
Easy-to-Implement Project Integrates Basic Electronics and Computer Programming
ERIC Educational Resources Information Center
Johnson, Richard; Shackelford, Ray
2008-01-01
The activities described in this article give students excellent experience with both computer programming and basic electronics. During the activities, students will work in small groups, using a BASIC Stamp development board to fabricate digital circuits and PBASIC to write program code that will control the circuits they have built. The…
Towards a Consistent and Scientifically Accurate Drug Ontology.
Hogan, William R; Hanna, Josh; Joseph, Eric; Brochhausen, Mathias
2013-01-01
Our use case for comparative effectiveness research requires an ontology of drugs that enables querying National Drug Codes (NDCs) by active ingredient, mechanism of action, physiological effect, and therapeutic class of the drug products they represent. We conducted an ontological analysis of drugs from the realist perspective, and evaluated existing drug terminology, ontology, and database artifacts from (1) the technical perspective, (2) the perspective of pharmacology and medical science, (3) the perspective of description logic semantics (if they were available in Web Ontology Language or OWL), and (4) the perspective of our realism-based analysis of the domain. No existing resource was sufficient. Therefore, we built the Drug Ontology (DrOn) in OWL, which we populated with NDCs and other classes from RxNorm using only content created by the National Library of Medicine. We also built an application that uses DrOn to query for NDCs as outlined above, available at: http://ingarden.uams.edu/ingredients. The application uses an OWL-based description logic reasoner to execute end-user queries. DrOn is available at http://code.google.com/p/dr-on.
Code JEF Facilities Engineering Home Page for the Internet
NASA Technical Reports Server (NTRS)
Mahaffey, Valerie A.; Harrison, Marla J. (Technical Monitor)
1995-01-01
There are always many activities going on in JEF. We work on and manage the Construction of Facilities (C of F) projects at NASA-Ames. We are constantly designing or analyzing a new facility or project, or a modification to an existing facility. Every day we answer numerous questions about engineering policy, codes and standards, we attend design reviews, we count dollars and we make sure that everything at the Center is designed and built according to good engineering judgment. In addition, we study literature and attend conferences to make sure that we keep current on new legislation and standards.
An Expert System for the Development of Efficient Parallel Code
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Chun, Robert; Jin, Hao-Qiang; Labarta, Jesus; Gimenez, Judit
2004-01-01
We have built the prototype of an expert system to assist the user in the development of efficient parallel code. The system was integrated into the parallel programming environment that is currently being developed at NASA Ames. The expert system interfaces to tools for automatic parallelization and performance analysis. It uses static program structure information and performance data in order to automatically determine causes of poor performance and to make suggestions for improvements. In this paper we give an overview of our programming environment, describe the prototype implementation of our expert system, and demonstrate its usefulness with several case studies.
Hybrid services efficient provisioning over the network coding-enabled elastic optical networks
NASA Astrophysics Data System (ADS)
Wang, Xin; Gu, Rentao; Ji, Yuefeng; Kavehrad, Mohsen
2017-03-01
As a variety of services have emerged, hybrid services have become more common in real optical networks. Although elastic spectrum resource optimization over elastic optical networks (EONs) has been widely investigated, little research has been carried out on routing and spectrum allocation (RSA) for hybrid services, especially over the network coding-enabled EON. We investigated the RSA for a unicast service and a network coding-based multicast service over the network coding-enabled EON under constraints on time delay and transmission distance. To address this issue, a mathematical model was built to minimize the total spectrum consumption for the hybrid services over the network coding-enabled EON under the constraints of time delay and transmission distance. The model guarantees different routing constraints for different types of services. The intermediate nodes over the network coding-enabled EON are assumed to be capable of encoding the flows for different kinds of information. We proposed an efficient heuristic, the network coding-based adaptive routing and layered graph-based spectrum allocation algorithm (NCAR-LGSA). From the simulation results, NCAR-LGSA shows highly efficient performance in terms of spectrum resource utilization under different network scenarios compared with the benchmark algorithms.
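The in-network coding capability assumed for the intermediate nodes can be illustrated with the classic butterfly-style XOR example below (Python; the RSA optimization itself is not reproduced): one coded transmission on a shared bottleneck link serves both multicast receivers.

    # An intermediate node XORs two flows; each receiver already holds one
    # flow from its direct branch and recovers the other by XORing again.
    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    flow1 = b"\x01\x02\x03\x04"        # packet from source 1
    flow2 = b"\x0a\x0b\x0c\x0d"        # packet from source 2

    coded = xor(flow1, flow2)          # sent once on the shared link

    print(xor(coded, flow1) == flow2)  # True: receiver A recovers flow2
    print(xor(coded, flow2) == flow1)  # True: receiver B recovers flow1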
NASA Technical Reports Server (NTRS)
Barry, Matthew R.; Osborne, Richard N.
2005-01-01
The RoseDoclet computer program extends the capability of Java doclet software to automatically synthesize Unified Modeling Language (UML) content from Java language source code. [Doclets are Java-language programs that use the doclet application programming interface (API) to specify the content and format of the output of Javadoc. Javadoc is a program, originally designed to generate API documentation from Java source code, now also useful as an extensible engine for processing Java source code.] RoseDoclet takes advantage of Javadoc comments and tags already in the source code to produce a UML model of that code. RoseDoclet applies the doclet API to create a doclet passed to Javadoc. The Javadoc engine applies the doclet to the source code, emitting the output format specified by the doclet. RoseDoclet emits a Rose model file and populates it with fully documented packages, classes, methods, variables, and class diagrams identified in the source code. The way in which UML models are generated can be controlled by use of new Javadoc comment tags that RoseDoclet provides. The advantage of using RoseDoclet is that Javadoc documentation becomes leveraged for two purposes: documenting the as-built API and keeping the design documentation up to date.
Activation assessment of the soil around the ESS accelerator tunnel
NASA Astrophysics Data System (ADS)
Rakhno, I. L.; Mokhov, N. V.; Tropin, I. S.; Ene, D.
2018-06-01
Activation of the soil surrounding the ESS accelerator tunnel, calculated by the MARS15 code, is presented. A detailed composition of the soil, comprising about 30 chemical elements, is considered. Spatial distributions of the produced activity are provided in both the transverse and longitudinal directions. A realistic irradiation profile for the entire planned lifetime of the facility is used. The nuclear transmutation and decay of the produced radionuclides are calculated with the DeTra code, which is a built-in tool of the MARS15 code. Radionuclide production by low-energy neutrons is calculated using the ENDF/B-VII evaluated nuclear data library. In order to estimate the quality of this activation assessment, a comparison between calculated and measured activation of various foils in a similar radiation environment is presented.
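For scale, the single-nuclide activation bookkeeping underlying such assessments can be sketched as below (Python; DeTra tracks full transmutation chains, and every number here is illustrative): activity builds toward saturation during irradiation and decays during cooling.

    # Activity A(t) = phi*sigma*N*(1 - exp(-lambda*t_irr))*exp(-lambda*t_cool)
    import math

    phi = 1.0e10        # neutron flux, n/cm^2/s (assumed)
    sigma = 1.0e-24     # activation cross section, cm^2 (1 barn, assumed)
    N = 1.0e22          # target atoms
    half_life = 5.27 * 3.156e7          # seconds (Co-60-like, for scale)
    lam = math.log(2) / half_life

    def activity(t_irr, t_cool):
        """Activity in Bq after t_irr s of irradiation, t_cool s cooling."""
        saturation = phi * sigma * N    # production rate, atoms/s
        return saturation * (1 - math.exp(-lam * t_irr)) * math.exp(-lam * t_cool)

    print(activity(10 * 3.156e7, 0))        # end of a 10-year run
    print(activity(10 * 3.156e7, 3.156e7))  # after one year of cooling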
Code-division multiple-access multiuser demodulator by using quantum fluctuations.
Otsubo, Yosuke; Inoue, Jun-Ichi; Nagata, Kenji; Okada, Masato
2014-07-01
We examine the average-case performance of a code-division multiple-access (CDMA) multiuser demodulator in which quantum fluctuations are utilized to demodulate the original message within the context of Bayesian inference. The quantum fluctuations are built into the system as a transverse field in the infinite-range Ising spin glass model. We evaluate the performance measures by using statistical mechanics. We confirm that the CDMA multiuser demodulator using quantum fluctuations achieves, on average, roughly the same performance as the conventional CDMA multiuser demodulator based on thermal fluctuations. We also find that the relationship between the quality of the original information retrieval and the amplitude of the transverse field is somehow a "universal feature" in typical probabilistic information processing, viz., in image restoration, error-correcting codes, and CDMA multiuser demodulation.
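A toy version of the thermal-fluctuation baseline can be written directly (Python; the sizes, noise level, and single-spin-flip Metropolis sampler are illustrative, and the quantum transverse-field case is not attempted here): the posterior over the users' bits is sampled at finite temperature.

    # K users spread +/-1 bits with random codes; a Metropolis sampler at
    # inverse temperature beta searches for low-energy (high-posterior) bits.
    import numpy as np

    rng = np.random.default_rng(3)
    K, N = 8, 64                                       # users, chips
    S = rng.choice([-1.0, 1.0], (N, K)) / np.sqrt(N)   # spreading codes
    b_true = rng.choice([-1, 1], K)
    y = S @ b_true + 0.3 * rng.normal(size=N)          # received signal

    def energy(b):                                     # negative log-posterior
        r = y - S @ b
        return 0.5 * r @ r

    b = rng.choice([-1, 1], K)
    beta = 4.0
    for _ in range(20000):
        k = rng.integers(K)
        b2 = b.copy()
        b2[k] *= -1                                    # propose one bit flip
        if rng.random() < np.exp(-beta * (energy(b2) - energy(b))):
            b = b2

    print("bit errors:", int(np.sum(b != b_true)))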
Eisenberg, Yochai; Vanderbom, Kerri A; Vasudevan, Vijay
2017-02-01
The relationship between the built environment and physical activity has been well documented. However, little is known about how the built environment affects physical activity among people with disabilities, who have disproportionately higher rates of physical inactivity and obesity. This study is the first systematic review to examine the role of the built environment as a moderator of the relationship between having a disability (physical, sensory or cognitive) and lower levels of physical activity. After conducting an extensive search of the literature published between 1990 and 2015, 2039 articles were screened, 126 were evaluated by abstract and 66 by full text for eligibility in the review. Data were abstracted using a predefined coding guide and synthesized from both qualitative and quantitative studies to examine evidence of moderation. Nine quantitative and six qualitative articles met the inclusion criteria. Results showed that most research to date has been on older adults with physical disabilities. People with disabilities described how aspects of the built environment affect neighborhood walking, suggesting a positive moderating role of features related to safety and aesthetic qualities, such as benches, lighting and stop light timing. There were mixed results among studies that examined the relationship quantitatively. Most of the studies were not designed to appropriately examine moderation. Future research should utilize valid and reliable built environment measures that are more specific to disability and should include people with and without disabilities to allow for testing of moderation of the built environment. Copyright © 2016 Elsevier Inc. All rights reserved.
Heinemann, Allen W; Miskovic, Ana; Semik, Patrick; Wong, Alex; Dashner, Jessica; Baum, Carolyn; Magasi, Susan; Hammel, Joy; Tulsky, David S; Garcia, Sofia F; Jerousek, Sara; Lai, Jin-Shei; Carlozzi, Noelle E; Gray, David B
2016-12-01
To describe the unique and overlapping content of the newly developed Environmental Factors Item Banks (EFIB) and 7 legacy environmental factor instruments, and to evaluate the EFIB's construct validity by examining associations with legacy instruments. Cross-sectional, observational cohort. Community. A sample of community-dwelling adults with stroke, spinal cord injury, and traumatic brain injury (N=568). None. EFIB covering domains of the built and natural environment; systems, services, and policies; social environment; and access to information and technology; the Craig Hospital Inventory of Environmental Factors (CHIEF) short form; the Facilitators and Barriers Survey/Mobility (FABS/M) short form; the Home and Community Environment Instrument (HACE); the Measure of the Quality of the Environment (MQE) short form; and 3 of the Patient Reported Outcomes Measurement Information System's (PROMIS) Quality of Social Support measures. The EFIB and legacy instruments assess most of the International Classification of Functioning, Disability and Health (ICF) environmental factors chapters, including chapter 1 (products and technology; 75 items corresponding to 11 codes), chapter 2 (natural environment and human-made changes; 31 items corresponding to 7 codes), chapter 3 (support and relationships; 74 items corresponding to 7 codes), chapter 4 (attitudes; 83 items corresponding to 8 codes), and chapter 5 (services, systems, and policies; 72 items corresponding to 16 codes). Construct validity is provided by moderate correlations between EFIB measures and the CHIEF, MQE barriers, HACE technology mobility, FABS/M community built features, and PROMIS item banks and by small correlations with other legacy instruments. Only 5 of the 66 legacy instrument correlation coefficients are moderate, suggesting they measure unique aspects of the environment, whereas all intra-EFIB correlations were at least moderate. The EFIB measures provide a brief and focused assessment of ICF environmental factor chapters. The pattern of correlations with legacy instruments provides initial evidence of construct validity. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Lawson, G. M.
2010-01-01
Professional discourse in education has been the focus of research conducted mostly with teachers and professional practitioners, but the work of students in the built environment has largely been ignored. This article presents an analysis of students' visual discourse in the final professional year of a landscape architecture course in Brisbane,…
ERIC Educational Resources Information Center
Huang, Hui-Wen; Wu, Chih-Wei; Chen, Nian-Shing
2012-01-01
The purpose of this study was to evaluate the effectiveness of using procedural scaffoldings in fostering students' group discourse levels and learning outcomes in a paper-plus-smartphone collaborative learning context. All participants used built-in camera smartphones to learn new knowledge by scanning Quick Response (QR) codes, a type of…
Recovering Knowledge for Science Education Research: Exploring the "Icarus Effect" in Student Work
ERIC Educational Resources Information Center
Georgiou, Helen; Maton, Karl; Sharma, Manjula
2014-01-01
Science education research has built a strong body of work on students' understandings but largely overlooked the nature of science knowledge itself. Legitimation Code Theory (LCT), a rapidly growing approach to education, offers a way of analyzing the organizing principles of knowledge practices and their effects on science education. This…
Fac-Back-OPAC: An Open Source Interface to Your Library System
ERIC Educational Resources Information Center
Beccaria, Mike; Scott, Dan
2007-01-01
The new Fac-Back-OPAC (a faceted backup OPAC) is built on code that was originally developed by Casey Durfee in February 2007. It represents the convergence of two prominent trends in library tools: the decoupling of discovery tools from the traditional integrated library system (ILS) and the use of readily available open source components to…
Parallelized direct execution simulation of message-passing parallel programs
NASA Technical Reports Server (NTRS)
Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.
1994-01-01
As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution in which the application code is executed directly, while a discrete-event simulator models details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.
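As a concrete illustration of the direct-execution idea, the sketch below replays application-level send events through a tiny discrete-event core that adds a modeled network latency. The latency and bandwidth figures and the event format are assumptions for illustration, not LAPSE's actual interface.

```python
import heapq

# Illustrative discrete-event core (not LAPSE itself): application "send"
# events are fed in with their locally measured timestamps, and the
# simulator adds a modeled network cost to predict arrival times.
LATENCY = 50e-6      # assumed per-message network latency (s)
BANDWIDTH = 100e6    # assumed link bandwidth (bytes/s)

def simulate(sends):
    """sends: list of (send_time, src, dst, nbytes) from direct execution."""
    events = list(sends)
    heapq.heapify(events)                     # process events in time order
    arrival = {}
    while events:
        t, src, dst, nbytes = heapq.heappop(events)
        # Model: arrival = send time + latency + size / bandwidth
        arrival[(src, dst, t)] = t + LATENCY + nbytes / BANDWIDTH
    return arrival

print(simulate([(0.0, 0, 1, 8192), (1e-4, 1, 0, 1024)]))
```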
Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Rost, Martin Christopher
1988-01-01
Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable block-size transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it is more fully developed than the algorithms currently used in the literature and achieves more accurate bit assignments. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
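For context, the classical high-rate bit-allocation rule that such algorithms refine assigns each coefficient bits in proportion to the log of its variance relative to the geometric mean: b_i = B/N + (1/2) log2(sigma_i^2 / GM(sigma^2)). A minimal sketch of that textbook rule (not the dissertation's improved algorithm) follows.

```python
import numpy as np

# Classical high-rate bit allocation for transform coefficients:
# b_i = B/N + 0.5 * log2(variance_i / geometric_mean(variances)).
def allocate_bits(variances, total_bits):
    variances = np.asarray(variances, dtype=float)
    n = variances.size
    geo_mean = np.exp(np.mean(np.log(variances)))
    bits = total_bits / n + 0.5 * np.log2(variances / geo_mean)
    # Negative allocations are clamped to zero (refined algorithms
    # redistribute the freed bits; this sketch does not).
    return np.clip(bits, 0.0, None)

print(allocate_bits([16.0, 4.0, 1.0, 0.25], total_bits=8))  # -> [3.5 2.5 1.5 0.5]
```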
NASA Astrophysics Data System (ADS)
Russell, John L.; Campbell, John L.; Boyd, Nicholas I.; Dias, Johnny F.
2018-02-01
The newly developed GUMAP software creates element maps from OMDAQ list mode files, displays these maps individually or collectively, and facilitates on-screen definitions of specified regions from which a PIXE spectrum can be built. These include a free-hand region defined by moving the cursor. The regional charge is entered automatically into the spectrum file in a new GUPIXWIN-compatible format, enabling a GUPIXWIN analysis of the spectrum. The code defaults to the OMDAQ dead time treatment but also facilitates two other methods for dead time correction in sample regions with count rates different from the average.
A broad band X-ray imaging spectrophotometer for astrophysical studies
NASA Technical Reports Server (NTRS)
Lum, Kenneth S. K.; Lee, Dong Hwan; Ku, William H.-M.
1988-01-01
A broadband X-ray imaging spectrophotometer (BBXRIS) has been built for astrophysical studies. The BBXRIS is based on a large-imaging gas scintillation proportional counter (LIGSPC), a combination of a gas scintillation proportional counter and a multiwire proportional counter, which achieves 8 percent (FWHM) energy resolution and 1.5-mm (FWHM) spatial resolution at 5.9 keV. The LIGSPC can be integrated with a grazing incidence mirror and a coded aperture mask to provide imaging over a broad range of X-ray energies. The results of tests involving the LIGSPC and a coded aperture mask are presented, and possible applications of the BBXRIS are discussed.
The diagnosis related groups enhanced electronic medical record.
Müller, Marcel Lucas; Bürkle, Thomas; Irps, Sebastian; Roeder, Norbert; Prokosch, Hans-Ulrich
2003-07-01
The introduction of Diagnosis Related Groups (DRGs) as the basis for hospital payment in Germany heralded fundamental changes in hospital reimbursement practice. A hospital's economic survival will depend vitally on the accuracy and completeness of the documentation of DRG-relevant data such as diagnosis and procedure codes. To enhance physicians' coding compliance, an easy-to-use interface integrating coding tasks seamlessly into clinical routine had to be developed. A generic approach should access coding and clinical guidelines from different information sources. Within the Electronic Medical Record (EMR), a user interface ('DRG Control Center') for all DRG-relevant clinical and administrative data has been built. A comprehensive DRG-related web site gives online access to DRG grouping software and an electronic coding expert. Both components are linked together using an application supporting bi-directional communication. Other web-based services, such as a guideline search engine, can be integrated as well. With the proposed method, the clinician gains quick access to context-sensitive clinical guidelines for appropriate treatment of his/her patient and administrative guidelines for the adequate coding of diagnoses and procedures. This paper describes the design and current implementation and discusses our experiences.
NASA Astrophysics Data System (ADS)
Malard, J. J.; Baig, A. I.; Hassanzadeh, E.; Adamowski, J. F.; Tuy, H.; Melgar-Quiñonez, H.
2016-12-01
Model coupling is a crucial step to constructing many environmental models, as it allows for the integration of independently-built models representing different system sub-components to simulate the entire system. Model coupling has been of particular interest in combining socioeconomic System Dynamics (SD) models, whose visual interface facilitates their direct use by stakeholders, with more complex physically-based models of the environmental system. However, model coupling processes are often cumbersome and inflexible and require extensive programming knowledge, limiting their potential for continued use by stakeholders in policy design and analysis after the end of the project. Here, we present Tinamit, a flexible Python-based model-coupling software tool whose easy-to-use API and graphical user interface make the coupling of stakeholder-built SD models with physically-based models rapid, flexible and simple for users with limited to no coding knowledge. The flexibility of the system allows end users to modify the SD model as well as the linking variables between the two models themselves with no need for recoding. We use Tinamit to couple a stakeholder-built socioeconomic model of soil salinization in Pakistan with the physically-based soil salinity model SAHYSMOD. As climate extremes increase in the region, policies to slow or reverse soil salinity buildup are increasing in urgency and must take both socioeconomic and biophysical spheres into account. We use the Tinamit-coupled model to test the impact of integrated policy options (economic and regulatory incentives to farmers) on soil salinity in the region in the face of future climate change scenarios. Use of the Tinamit model allowed for rapid and flexible coupling of the two models, allowing the end user to continue making model structure and policy changes. In addition, the clear interface (in contrast to most model coupling code) makes the final coupled model easily accessible to stakeholders with limited technical background.
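A schematic of the kind of coupling loop Tinamit automates is sketched below; the model classes, method names, and linking variables are hypothetical stand-ins for illustration, not Tinamit's actual API.

```python
# Schematic two-model coupling loop (stand-in names, not Tinamit's API).
class StubModel:
    """Minimal stand-in for an SD or physical model exposing named variables."""
    def __init__(self, **variables): self.v = dict(variables)
    def step(self): pass                      # advance one simulated timestep
    def get(self, name): return self.v[name]
    def set(self, name, value): self.v[name] = value

def run_coupled(sd, phys, n_steps, down_links, up_links):
    """down_links/up_links: (source_var, target_var) pairs chosen by the user."""
    for _ in range(n_steps):
        sd.step()                             # socioeconomic SD model
        for src, dst in down_links:           # e.g., irrigation policy -> recharge
            phys.set(dst, sd.get(src))
        phys.step()                           # physical model (e.g., SAHYSMOD)
        for src, dst in up_links:             # e.g., soil salinity -> crop yield
            sd.set(dst, phys.get(src))

sd = StubModel(irrigation=0.5, salinity_seen=0.0)
phys = StubModel(recharge=0.0, salinity=0.1)
run_coupled(sd, phys, 10, [("irrigation", "recharge")], [("salinity", "salinity_seen")])
```

Because the linking variables are plain data passed through `get`/`set`, the end user can change which variables are exchanged without touching either model's internals, which is the flexibility the abstract emphasizes.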
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brier, Soren; Joslyn, Cliff A.
2013-04-01
This paper presents a critical analysis of code-semiotics, which we see as the latest attempt to create a paradigmatic foundation for solving the question of the emergence of life and consciousness. We view code-semiotics as an attempt to revise the empirical scientific Darwinian paradigm, and to go beyond the complex systems, emergence, self-organization, and informational paradigms, as well as the selfish-gene theory of Dawkins and the Peircean pragmaticist semiotic theory built on simultaneous types of evolution. As such it is a new and bold attempt to use semiotics to solve the problems created by the evolutionary paradigm's commitment to producing a theory of how to connect the two sides of the Cartesian dualistic view of physical reality and consciousness in a consistent way.
Activation Assessment of the Soil Around the ESS Accelerator Tunnel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rakhno, I. L.; Mokhov, N. V.; Tropin, I. S.
Activation of the soil surrounding the ESS accelerator tunnel, calculated with the MARS15 code, is presented. A detailed soil composition comprising about 30 different chemical elements is considered. Spatial distributions of the produced activity are provided in both the transverse and longitudinal directions. A realistic irradiation profile for the entire planned lifetime of the facility is used. The nuclear transmutation and decay of the produced radionuclides are calculated with the DeTra code, a built-in tool of MARS15. Radionuclide production by low-energy neutrons is calculated using the ENDF/B-VII evaluated nuclear data library. In order to estimate the quality of this activation assessment, a comparison between calculated and measured activation of various foils in a similar radiation environment is presented.
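For orientation, the single-nuclide textbook activation estimate underlying such assessments (a drastic simplification of the full transmutation chains handled by DeTra, with made-up numbers) has the familiar buildup-and-decay form A = N * sigma * phi * (1 - exp(-lambda*t_irr)) * exp(-lambda*t_cool).

```python
import numpy as np

# Textbook single-nuclide activation: constant flux for t_irr, then cooling.
def activity(n_atoms, sigma_cm2, flux, half_life_s, t_irr_s, t_cool_s):
    lam = np.log(2.0) / half_life_s
    production_rate = n_atoms * sigma_cm2 * flux          # activations per second
    return production_rate * (1.0 - np.exp(-lam * t_irr_s)) * np.exp(-lam * t_cool_s)

# Example (invented values): 1e22 target atoms, 1 mb cross section,
# 1e5 n/cm^2/s flux, 5.3 y half-life, 10 y irradiation, 1 y cooling.
YEAR = 3.15e7
print(activity(1e22, 1e-27, 1e5, 5.3 * YEAR, 10 * YEAR, 1 * YEAR), "Bq")
```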
Developments in REDES: The rocket engine design expert system
NASA Technical Reports Server (NTRS)
Davidian, Kenneth O.
1990-01-01
The Rocket Engine Design Expert System (REDES) is being developed at the NASA-Lewis to collect, automate, and perpetuate the existing expertise of performing a comprehensive rocket engine analysis and design. Currently, REDES uses the rigorous JANNAF methodology to analyze the performance of the thrust chamber and perform computational studies of liquid rocket engine problems. The following computer codes were included in REDES: a gas properties program named GASP, a nozzle design program named RAO, a regenerative cooling channel performance evaluation code named RTE, and the JANNAF standard liquid rocket engine performance prediction code TDK (including performance evaluation modules ODE, ODK, TDE, TDK, and BLM). Computational analyses are being conducted by REDES to provide solutions to liquid rocket engine thrust chamber problems. REDES is built in the Knowledge Engineering Environment (KEE) expert system shell and runs on a Sun 4/110 computer.
Developments in REDES: The Rocket Engine Design Expert System
NASA Technical Reports Server (NTRS)
Davidian, Kenneth O.
1990-01-01
The Rocket Engine Design Expert System (REDES) was developed at NASA-Lewis to collect, automate, and perpetuate the existing expertise of performing a comprehensive rocket engine analysis and design. Currently, REDES uses the rigorous JANNAF methodology to analyze the performance of the thrust chamber and perform computational studies of liquid rocket engine problems. The following computer codes were included in REDES: a gas properties program named GASP; a nozzle design program named RAO; a regenerative cooling channel performance evaluation code named RTE; and the JANNAF standard liquid rocket engine performance prediction code TDK (including performance evaluation modules ODE, ODK, TDE, TDK, and BLM). Computational analyses are being conducted by REDES to provide solutions to liquid rocket engine thrust chamber problems. REDES was built in the Knowledge Engineering Environment (KEE) expert system shell and runs on a Sun 4/110 computer.
Edwards, N
2008-10-01
The international introduction of performance-based building codes calls for a re-examination of indicators used to monitor their implementation. Indicators used in the building sector have a business orientation, target the life cycle of buildings, and guide asset management. In contrast, indicators used in the health sector focus on injury prevention, have a behavioural orientation, lack specificity with respect to features of the built environment, and do not take into account patterns of building use or building longevity. Suggestions for metrics that bridge the building and health sectors are discussed. The need for integrated surveillance systems in health and building sectors is outlined. It is time to reconsider commonly used epidemiological indicators in the field of injury prevention and determine their utility to address the accountability requirements of performance-based codes.
Implementation of context independent code on a new array processor: The Super-65
NASA Technical Reports Server (NTRS)
Colbert, R. O.; Bowhill, S. A.
1981-01-01
The feasibility of rewriting standard uniprocessor programs into code which contains no context-dependent branches is explored. Context independent code (CIC) would contain no branches that might require different processing elements to branch different ways. In order to investigate the possibilities and restrictions of CIC, several programs were recoded into CIC and a four-element array processor was built. This processor (the Super-65) consisted of three 6502 microprocessors and the Apple II microcomputer. The results obtained were somewhat dependent upon the specific architecture of the Super-65 but within bounds, the throughput of the array processor was found to increase linearly with the number of processing elements (PEs). The slope of throughput versus PEs is highly dependent on the program and varied from 0.33 to 1.00 for the sample programs.
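The core recoding idea can be illustrated by replacing a data-dependent branch with mask arithmetic, so that every processing element executes the same instruction stream. This Python fragment illustrates the concept only; it is not the Super-65 code.

```python
# Context-dependent version: processing elements would diverge at the branch.
def branchy(x):
    if x > 0:
        return x * 2
    return -x

# Context-independent version: every PE evaluates both arms and selects by mask.
def branchless(x):
    mask = int(x > 0)                       # 1 or 0, computed identically on all PEs
    return mask * (x * 2) + (1 - mask) * (-x)

assert all(branchy(v) == branchless(v) for v in (-3, 0, 5))
```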
NASA Astrophysics Data System (ADS)
Perez, Tracie Renea Conn
Over the past 15 years, there has been a growing interest in femtosatellites, a class of tiny satellites having mass less than 100 grams. Research groups from Peru, Spain, England, Canada, and the United States have proposed femtosat designs and novel mission concepts for them. In fact, Peru made history in 2013 by releasing the first - and still only - femtosat tracked from LEO. However, femtosatellite applications in interplanetary missions have yet to be explored in detail. An interesting operations concept would be for a space probe to release numerous femtosatellites into orbit around a planetary object of interest, thereby augmenting the overall data collection capability of the mission. A planetary probe releasing hundreds of femtosats could complete an in-situ, simultaneous 3D mapping of a physical property of interest, achieving scientific investigations not possible for one probe operating alone. To study the technical challenges associated with such a mission, a conceptual mission design is proposed where femtosats are deployed from a host satellite orbiting Titan. The conceptual mission objective is presented: to study Titan's dynamic atmosphere. Then, the design challenges are addressed in turn. First, any science payload measurements that the femtosats provide are only useful if their corresponding locations can be determined. Specifically, what is required is a method of position determination for femtosatellites operating beyond Medium Earth Orbit and therefore beyond the help of GPS. A technique is presented which applies Kalman filter techniques to Doppler shift measurements, allowing for orbit determination of the femtosats. Several case studies are presented demonstrating the usefulness of this approach. Second, due to the inherent power and computational limitations of a femtosatellite design, establishing a radio link between each chipsat and the mothersat will be difficult. To provide coding gain, a particular form of forward error correction (FEC), low-density parity-check (LDPC) codes, is recommended. A specific low-complexity encoder, and an accompanying decoder, have been implemented in the open-source software radio library, GNU Radio. Simulation results demonstrating bit error rate (BER) improvement are presented. Hardware for implementing the LDPC methods in a benchtop test is described, and future work on this topic is suggested. Third, the power and spatial constraints of femtosatellite designs likely restrict the payload to one or two sensors. Therefore, it is desired to extract as much useful scientific data as possible from secondary sources, such as radiometric data. Estimating the atmospheric density model from different measurement sources is simulated; results are presented. The overall goal for this effort is to advance the field of miniature spacecraft-based technology and to highlight the advantages of using femtosatellites in future planetary exploration missions. By addressing several subsystem design challenges in this context, such a femtosat mission concept is one step closer to being feasible.
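To make the orbit-determination idea concrete, the toy sketch below converts Doppler shifts to range rates and smooths them with a scalar Kalman filter. The carrier frequency, noise variances, and one-dimensional random-walk state are assumptions for illustration, far simpler than the dissertation's filter.

```python
import numpy as np

# Toy 1-D illustration (not the flight algorithm): Doppler shift gives
# range rate, v = -c * df / f0, which a scalar Kalman filter smooths.
C = 299792458.0        # speed of light (m/s)
F0 = 437e6             # assumed UHF carrier frequency (Hz)

def doppler_to_range_rate(df_hz):
    return -C * df_hz / F0

def kalman_velocity(range_rates, meas_var=4.0, proc_var=0.01):
    v, p = 0.0, 1e3                 # initial estimate and variance (assumed)
    estimates = []
    for z in range_rates:
        p += proc_var               # predict: random-walk velocity model
        k = p / (p + meas_var)      # Kalman gain
        v += k * (z - v)            # update with new range-rate measurement
        p *= (1 - k)
        estimates.append(v)
    return estimates

shifts = np.array([-1500.0, -1490.0, -1485.0])      # measured Doppler shifts (Hz)
print(kalman_velocity([doppler_to_range_rate(s) for s in shifts]))
```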
New Web Server - the Java Version of Tempest - Produced
NASA Technical Reports Server (NTRS)
York, David W.; Ponyik, Joseph G.
2000-01-01
A new software design and development effort has produced a Java (Sun Microsystems, Inc.) version of the award-winning Tempest software (refs. 1 and 2). In 1999, the Embedded Web Technology (EWT) team received a prestigious R&D 100 Award for Tempest, Java Version. In this article, "Tempest" will refer to the Java version of Tempest, a World Wide Web server for desktop or embedded systems. Tempest was designed at the NASA Glenn Research Center at Lewis Field to run on any platform for which a Java Virtual Machine (JVM, Sun Microsystems, Inc.) exists. The JVM acts as a translator between the native code of the platform and the byte code of Tempest, which is compiled in Java. These byte code files are Java executables with a ".class" extension. Multiple byte code files can be zipped together as a "*.jar" file for more efficient transmission over the Internet. Today's popular browsers, such as Netscape (Netscape Communications Corporation) and Internet Explorer (Microsoft Corporation) have built-in Virtual Machines to display Java applets.
Performance analysis of parallel gravitational N-body codes on large GPU clusters
NASA Astrophysics Data System (ADS)
Huang, Si-Yi; Spurzem, Rainer; Berczik, Peter
2016-01-01
We compare the performance of two very different parallel gravitational N-body codes for astrophysical simulations on large Graphics Processing Unit (GPU) clusters, both of which are pioneers in their own fields as well as on certain mutual scales - NBODY6++ and Bonsai. We carry out benchmarks of the two codes by analyzing their performance, accuracy, and efficiency through the modeling of structure decomposition and timing measurements. We find that both codes are heavily optimized to leverage the computational potential of GPUs, as their performance has approached half of the maximum single-precision performance of the underlying GPU cards. With such performance we predict that a speed-up of 200-300 can be achieved when up to 1k processors and GPUs are employed simultaneously. We discuss quantitative comparisons of the two codes, finding that in the same cases Bonsai adopts larger time steps as well as larger relative energy errors than NBODY6++, typically 10-50 times larger, depending on the chosen parameters of the codes. Although the two codes are built for different astrophysical applications, in specified conditions they may overlap in performance at certain physical scales, thus allowing the user to choose either one by fine-tuning parameters accordingly.
Extremely accurate sequential verification of RELAP5-3D
Mesina, George L.; Aumiller, David L.; Buschman, Francis X.
2015-11-19
Large computer programs like RELAP5-3D solve complex systems of governing, closure, and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations, and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also tests that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, running multiple cases in a single code execution, and modes of coupled/uncoupled operation. In conclusion, mathematical analyses of the adequacy of the checks used in the comparisons are provided.
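The consecutive-version comparison at the heart of sequential verification can be sketched in a few lines; the variable names and tolerance below are illustrative, not RELAP5-3D's actual tooling.

```python
# Minimal sketch of sequential verification: outputs from consecutive code
# versions are compared variable by variable; any drift beyond a tight
# relative tolerance flags an unintended change.
def compare_runs(baseline, candidate, rel_tol=1e-12):
    diffs = {}
    for key, old in baseline.items():
        new = candidate[key]
        denom = max(abs(old), 1e-300)        # guard against division by zero
        if abs(new - old) / denom > rel_tol:
            diffs[key] = (old, new)
    return diffs                             # empty dict means the versions agree

old_run = {"pressure": 15.500001e6, "void_fraction": 0.125}
new_run = {"pressure": 15.500001e6, "void_fraction": 0.125}
assert compare_runs(old_run, new_run) == {}
```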
Bracken, M B; Belanger, K; Hellenbrand, K; Addesso, K; Patel, S; Triche, E; Leaderer, B P
1998-09-01
The home wiring code is the most widely used metric in studies of residential electromagnetic field (EMF) exposure and health effects. Despite the fact that the wiring code often shows stronger correlations with disease outcome than more direct EMF home assessments, little is known about potential confounders of the wiring code association. In a study carried out in southern Connecticut in 1988-1991, the authors used strict and widely used criteria to assess the wiring codes of 3,259 homes in which respondents lived. They also collected other home characteristics from the tax assessor's office, estimated traffic density around the home from state data, and interviewed each subject (2,967 mothers of reproductive age) for personal characteristics. Women who lived in homes with very high current configuration wiring codes were more likely to be in manual jobs, and their homes were older (built before 1949, odds ratio (OR) = 73.24, 95% confidence interval (CI) 29.53-181.65), had lower assessed value, and had higher traffic densities (highest density quartile, OR = 3.99, 95% CI 1.17-13.62). Because some of these variables have themselves been associated with health outcomes, the possibility of confounding of the wiring code associations must be rigorously evaluated in future EMF research.
NASA Astrophysics Data System (ADS)
Konnik, Mikhail V.; Welsh, James
2012-09-01
Numerical simulators for adaptive optics systems have become an essential tool for the research and development of future advanced astronomical instruments. However, as the software code of a numerical simulator grows, it becomes increasingly difficult to maintain. Adequate documentation of astronomical software for adaptive optics simulators can complicate development, since the documentation must contain up-to-date schemes and the mathematical descriptions implemented in the code. Although most modern programming environments like MATLAB or Octave have built-in documentation facilities, these are often insufficient for describing a typical adaptive optics simulator code. This paper describes a general cross-platform framework for the documentation of scientific software using open-source tools such as LaTeX, Mercurial, Doxygen, and Perl. Using a Perl script that translates MATLAB M-file comments into C-like ones, Doxygen can generate and update the documentation for the scientific source code. The documentation generated by this framework contains the current code description with mathematical formulas, images, and bibliographical references. A detailed description of the framework components is presented, as well as guidelines for deploying the framework. Examples of the code documentation for the scripts and functions of a MATLAB-based adaptive optics simulator are provided.
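The comment-translation step can be illustrated as follows. The paper's filter is written in Perl, so this Python rendering (with a hypothetical function name) only shows the idea of rewriting MATLAB '%' comments into C-style lines that Doxygen can parse via an input filter.

```python
# Illustrative M-file filter: "%" comment lines become "///" Doxygen lines,
# function declarations become C-like prototypes, and executable MATLAB is
# hidden behind "//" so Doxygen ignores it.
def matlab_to_doxygen(m_source: str) -> str:
    out = []
    for line in m_source.splitlines():
        stripped = line.lstrip()
        if stripped.startswith("%"):
            out.append("/// " + stripped.lstrip("% "))
        elif stripped.startswith("function"):
            out.append(stripped + ";")       # give Doxygen a prototype to attach docs to
        else:
            out.append("// " + line)
    return "\n".join(out)

print(matlab_to_doxygen("% Computes wavefront error\nfunction e = wfe(x)\ne = x;"))
```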
Runtime Detection of C-Style Errors in UPC Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pirkelbauer, P; Liao, C; Panas, T
2011-09-29
Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions for each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite, and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.
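Conceptually, the instrumentation replaces raw memory accesses with checked ones that consult a runtime monitor tracking shadow state. The sketch below illustrates that idea with a bounds- and initialization-checked array; it is a toy analogue of the approach, not ROSE-CIRM's implementation.

```python
# Toy runtime monitor: every load/store goes through a check against
# shadow state, the way instrumented UPC accesses consult the monitor.
class MonitoredArray:
    def __init__(self, size):
        self.data = [0] * size
        self.initialized = [False] * size    # shadow state kept by the monitor

    def store(self, i, value):
        if not 0 <= i < len(self.data):
            raise RuntimeError(f"out-of-bounds write at index {i}")
        self.data[i] = value
        self.initialized[i] = True

    def load(self, i):
        if not 0 <= i < len(self.data):
            raise RuntimeError(f"out-of-bounds read at index {i}")
        if not self.initialized[i]:
            raise RuntimeError(f"read of uninitialized element {i}")
        return self.data[i]

a = MonitoredArray(4)
a.store(2, 7)
assert a.load(2) == 7
```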
Simulations of linear and Hamming codes using SageMath
NASA Astrophysics Data System (ADS)
Timur, Tahta D.; Adzkiya, Dieky; Soleha
2018-03-01
Digital data transmission over a noisy channel can distort the message being transmitted. The goal of coding theory is to ensure data integrity, that is, to find out whether and where noise has distorted the message and what the original message was. Data transmission consists of three stages: encoding, transmission, and decoding. Linear and Hamming codes are the codes discussed in this work; the encoding algorithms use the parity-check and generator matrices, and the decoding algorithms are nearest-neighbor and syndrome decoding. We aim to show that these processes can be simulated using the SageMath software, which has built-in classes for coding theory in general and linear codes in particular. First we consider the message as a binary vector of size k. This message is then encoded into a vector of size n using the given algorithms. A noisy channel with a particular error probability is then created, over which the transmission takes place. The last task is decoding, which corrects and reverts the received message back to the original message whenever possible, that is, whenever the number of errors that occurred is at most the correcting radius of the code. In this paper we use two types of data for the simulations, namely vector and text data.
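A plain-Python analogue of the simulation, using an explicit Hamming(7,4) generator and parity-check matrix rather than SageMath's built-in classes, looks like this:

```python
import numpy as np

# Systematic Hamming(7,4): G = [I4 | P], H = [P^T | I3], with G @ H^T = 0 (mod 2).
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(msg4):
    return (np.array(msg4) @ G) % 2          # generator-matrix encoding

def decode(recv7):
    s = (H @ recv7) % 2                      # syndrome
    if s.any():
        # The syndrome equals the column of H at the error position.
        err = next(j for j in range(7) if np.array_equal(H[:, j], s))
        recv7 = recv7.copy()
        recv7[err] ^= 1                      # correct the single-bit error
    return recv7[:4]                         # systematic code: message = first 4 bits

msg = [1, 0, 1, 1]
code = encode(msg)
code[5] ^= 1                                 # noisy channel flips one bit
assert decode(code).tolist() == msg
```

As in the paper, any single flipped bit (correcting radius 1) is repaired; two or more errors exceed the code's correcting radius and decoding can fail.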
Mazzei, Francesco; Gillan, Roslyn; Cloutier, Denise
2014-06-01
Limited research explores the experience of individuals with dementia in acute care geriatric psychiatry units. This observational case study examines the influence of the physical environment on behavior (wandering, pacing, door testing, congregation, and seclusions) among residents of a traditional geriatric psychiatry unit who were then relocated to a purpose-built acute care unit. Purpose-built environments should be well suited to the needs of residents with dementia. Observed trends revealed differences in spatial behaviors between the pre- and post-relocation environments attributable to the physical environment. Person-centred modifications to the current environment, including concerted efforts to know residents, are meaningful in fostering quality of life. Color-coded environments (e.g., distinguishing rooms from dining areas) to improve wayfinding, and opportunities to personalize rooms that enhance the 'hominess' of the setting, also have potential. Future research could also seek the opinions of staff about the impact of the environment on them as well as on residents. © The Author(s) 2013.
Flexible configuration-interaction shell-model many-body solver
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Calvin W.; Ormand, W. Erich; McElvain, Kenneth S.
BIGSTICK is a flexible, open-source configuration-interaction shell-model code for the many-fermion problem in a shell-model (occupation representation) framework. BIGSTICK can generate energy spectra, static and transition one-body densities, and expectation values of scalar operators. Using the built-in Lanczos algorithm, one can compute transition probability distributions and decompose wave functions into components defined by group theory.
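The built-in Lanczos idea can be sketched on a dense toy Hamiltonian: a short Krylov recursion builds a small tridiagonal matrix whose lowest eigenvalue approximates the ground-state energy. This is illustrative only; BIGSTICK applies the same recursion to enormous sparse many-body bases.

```python
import numpy as np

# Minimal Lanczos iteration: tridiagonalize H in a Krylov basis and take the
# lowest eigenvalue of the small tridiagonal matrix T as the energy estimate.
def lanczos_ground_state(H, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    v = rng.standard_normal(n); v /= np.linalg.norm(v)   # random start vector
    v_prev = np.zeros(n)
    alphas, betas, beta = [], [], 0.0
    for _ in range(n_iter):
        w = H @ v - beta * v_prev
        alpha = v @ w
        w -= alpha * v
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        if beta < 1e-12:                                  # Krylov space exhausted
            break
        betas.append(beta)
        v_prev, v = v, w / beta
    m = len(alphas)
    T = np.diag(alphas) + np.diag(betas[:m-1], 1) + np.diag(betas[:m-1], -1)
    return np.linalg.eigvalsh(T)[0]

H = np.diag(np.arange(1.0, 101.0)) + 0.01 * np.ones((100, 100))  # toy Hamiltonian
print(lanczos_ground_state(H))   # converges toward the true lowest eigenvalue
```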
167. ARA-III Plot plan as of 1986. Shows most of ...
167. ARA-III Plot plan as of 1986. Shows most of the original Army buildings in addition to locations for buildings ARA-621 and ARA-630, which were built in 1969 after the Army program had been canceled. Date: March 1986. INEEL index code no. 063-0100-00-220-421241. - Idaho National Engineering Laboratory, Army Reactors Experimental Area, Scoville, Butte County, ID
Sirepo for Synchrotron Radiation Workshop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagler, Robert; Moeller, Paul; Rakitin, Maksim
Sirepo is an open source framework for cloud computing. The graphical user interface (GUI) for Sirepo, also known as the client, executes in any HTML5 compliant web browser on any computing platform, including tablets. The client is built in JavaScript, making use of the following open source libraries: Bootstrap, which is fundamental for cross-platform web applications; AngularJS, which provides a model–view–controller (MVC) architecture and GUI components; and D3.js, which provides interactive plots and data-driven transformations. The Sirepo server is built on the following Python technologies: Flask, which is a lightweight framework for web development; Jinja, which is a secure and widely used templating language; and Werkzeug, a utility library that is compliant with the WSGI standard. We use Nginx as the HTTP server and proxy, which provides a scalable event-driven architecture. The physics codes supported by Sirepo execute inside a Docker container. One of the codes supported by Sirepo is the Synchrotron Radiation Workshop (SRW). SRW computes synchrotron radiation from relativistic electrons in arbitrary magnetic fields and propagates the radiation wavefronts through optical beamlines. SRW is open source and is primarily supported by Dr. Oleg Chubar of NSLS-II at Brookhaven National Laboratory.
Expert systems built by the Expert: An evaluation of OPS5
NASA Technical Reports Server (NTRS)
Jackson, Robert
1987-01-01
Two expert systems were written in OPS5 by the expert, a Ph.D. astronomer with no prior experience in artificial intelligence or expert systems, without the use of a knowledge engineer. The first system was built from scratch and uses 146 rules to check for duplication of scientific information within a pool of prospective observations. The second system was grafted onto another expert system and uses 149 additional rules to estimate the spacecraft and ground resources consumed by a set of prospective observations. The small vocabulary, the "IF this occurs THEN do that" logical structure of OPS5, and the ability to follow program execution allowed the expert to design and implement these systems with only the data structures and rules of another OPS5 system as an example. The modularity of the rules in OPS5 allowed the second system to modify the rulebase of the system onto which it was grafted without changing the code or the operation of that system. These experiences show that experts are able to develop their own expert systems due to the ease of programming and code reusability in OPS5.
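The flavor of OPS5's rule firing can be conveyed by a toy forward-chaining loop. The rule contents below are invented, and real OPS5 matches structured working-memory elements via the Rete algorithm rather than simple set inclusion.

```python
# Toy forward chaining: fire any rule whose conditions are all present in
# working memory, adding its conclusion, until nothing new fires.
def forward_chain(facts, rules):
    facts = set(facts)
    fired = True
    while fired:
        fired = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                fired = True
    return facts

rules = [({"target-observed", "same-instrument"}, "duplicate-science"),
         ({"duplicate-science"}, "flag-for-review")]
print(forward_chain({"target-observed", "same-instrument"}, rules))
```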
Vector-matrix-quaternion, array and arithmetic packages: All HAL/S functions implemented in Ada
NASA Technical Reports Server (NTRS)
Klumpp, Allan R.; Kwong, David D.
1986-01-01
The HAL/S avionics programmers have enjoyed a variety of tools built into a language tailored to their special requirements. Ada is designed for a broader group of applications. Rather than providing built-in tools, Ada provides the elements with which users can build their own. Standard avionic packages remain to be developed. These must enable programmers to code in Ada as they have coded in HAL/S. The packages under development at JPL will provide all of the vector-matrix, array, and arithmetic functions described in the HAL/S manuals. In addition, the linear algebra package will provide all of the quaternion functions used in Shuttle steering and Galileo attitude control. Furthermore, using Ada's extensibility, many quaternion functions are being implemented as infix operations; equivalent capabilities were never implemented in HAL/S because doing so would entail modifying the compiler and expanding the language. With these packages, many HAL/S expressions will compile and execute in Ada, unchanged. Others can be converted simply by replacing the implicit HAL/S multiply operator with the Ada *. Errors will be trapped and identified. Input/output will be convenient and readable.
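The infix style described above can be mirrored in a few lines. This sketch overloads Python's `*` for the Hamilton product purely as an analogue of the Ada infix quaternion operators, not as the JPL package itself.

```python
from dataclasses import dataclass

# Quaternion with an overloaded "*" implementing the Hamilton product,
# analogous to exposing quaternion multiplication as an infix operator.
@dataclass
class Quat:
    w: float; x: float; y: float; z: float
    def __mul__(self, q):
        return Quat(
            self.w*q.w - self.x*q.x - self.y*q.y - self.z*q.z,
            self.w*q.x + self.x*q.w + self.y*q.z - self.z*q.y,
            self.w*q.y - self.x*q.z + self.y*q.w + self.z*q.x,
            self.w*q.z + self.x*q.y - self.y*q.x + self.z*q.w,
        )

i, j = Quat(0, 1, 0, 0), Quat(0, 0, 1, 0)
print(i * j)   # Hamilton product: i * j = k, i.e. Quat(0, 0, 0, 1)
```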
Maestro and Castro: Simulation Codes for Astrophysical Flows
NASA Astrophysics Data System (ADS)
Zingale, Michael; Almgren, Ann; Beckner, Vince; Bell, John; Friesen, Brian; Jacobs, Adam; Katz, Maximilian P.; Malone, Christopher; Nonaka, Andrew; Zhang, Weiqun
2017-01-01
Stellar explosions are multiphysics problems—modeling them requires the coordinated input of gravity solvers, reaction networks, radiation transport, and hydrodynamics together with microphysics recipes to describe the physics of matter under extreme conditions. Furthermore, these models involve following a wide range of spatial and temporal scales, which puts tough demands on simulation codes. We developed the codes Maestro and Castro to meet the computational challenges of these problems. Maestro uses a low Mach number formulation of the hydrodynamics to efficiently model convection. Castro solves the fully compressible radiation hydrodynamics equations to capture the explosive phases of stellar phenomena. Both codes are built upon the BoxLib adaptive mesh refinement library, which prepares them for next-generation exascale computers. Common microphysics shared between the codes allows us to transfer a problem from the low Mach number regime in Maestro to the explosive regime in Castro. Importantly, both codes are freely available (https://github.com/BoxLib-Codes). We will describe the design of the codes and some of their science applications, as well as future development directions. Support for development was provided by NSF award AST-1211563 and DOE/Office of Nuclear Physics grant DE-FG02-87ER40317 to Stony Brook and by the Applied Mathematics Program of the DOE Office of Advanced Scientific Computing Research under US DOE contract DE-AC02-05CH11231 to LBNL.
Pulse Code Modulation (PCM) data storage and analysis using a microcomputer
NASA Technical Reports Server (NTRS)
Massey, D. E.
1986-01-01
A PCM storage device/data analyzer is described. This instrument is a peripheral plug-in board especially built to enable a personal computer to store and analyze data from a PCM source. This board and custom-written software turn a computer into a snapshot PCM decommutator. The instrument takes in and stores many hundreds or thousands of PCM telemetry data frames, then sifts through them repeatedly. The data can be converted to any number base and displayed, examined for bit dropouts or changes in particular words or frames, graphically plotted, or statistically analyzed. The device was designed and built for use on the NASA Sounding Rocket Program for PCM encoder configuration and testing.
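The snapshot-decommutation step can be illustrated by scanning a captured bit stream for a frame sync word and slicing fixed-length frames; the sync pattern and frame length below are made up for the example.

```python
# Illustrative frame synchronization for a snapshot decommutator:
# find the sync word, then slice fixed-length minor frames.
SYNC = "11111010111100110010000"   # hypothetical sync pattern
FRAME_BITS = 256                   # hypothetical minor-frame length (bits)

def decommutate(bitstream: str):
    frames = []
    i = bitstream.find(SYNC)
    while i != -1 and i + FRAME_BITS <= len(bitstream):
        frames.append(bitstream[i:i + FRAME_BITS])
        i = bitstream.find(SYNC, i + FRAME_BITS)   # re-sync at next frame
    return frames

pad = "0" * (FRAME_BITS - len(SYNC))
stream = "0101" + SYNC + pad + SYNC + pad
print(len(decommutate(stream)), "frames recovered")
```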
An approach for coupled-code multiphysics core simulations from a common input
Schmidt, Rodney; Belcourt, Kenneth; Hooper, Russell; ...
2014-12-10
This study describes an approach for coupled-code multiphysics reactor core simulations that is being developed by the Virtual Environment for Reactor Applications (VERA) project in the Consortium for Advanced Simulation of Light-Water Reactors (CASL). In this approach a user creates a single problem description, called the “VERAIn” common input file, to define and set up the desired coupled-code reactor core simulation. A preprocessing step accepts the VERAIn file and generates a set of fully consistent input files for the different physics codes being coupled. The problem is then solved using a single-executable coupled-code simulation tool applicable to the problem, which is built using VERA infrastructure software tools and the set of physics codes required for the problem of interest. The approach is demonstrated by performing an eigenvalue and power distribution calculation of a typical three-dimensional 17 × 17 assembly with thermal–hydraulic and fuel temperature feedback. All neutronics aspects of the problem (cross-section calculation, neutron transport, power release) are solved using the Insilico code suite and are fully coupled to a thermal–hydraulic analysis calculated by the Cobra-TF (CTF) code. The single-executable coupled-code (Insilico-CTF) simulation tool is created using several VERA tools, including LIME (Lightweight Integrating Multiphysics Environment for coupling codes), DTK (Data Transfer Kit), Trilinos, and TriBITS. Parallel calculations are performed on the Titan supercomputer at Oak Ridge National Laboratory using 1156 cores, and a synopsis of the solution results and code performance is presented. Finally, ongoing development of this approach is briefly described.
Ethical pharmaceutical promotion and communications worldwide: codes and regulations
2014-01-01
The international pharmaceutical industry has made significant efforts towards ensuring compliant and ethical communication and interaction with physicians and patients. This article presents the current status of the worldwide governance of communication practices by pharmaceutical companies, concentrating on prescription-only medicines. It analyzes legislative, regulatory, and code-based compliance control mechanisms and highlights significant developments, including the 2006 and 2012 revisions of the International Federation of Pharmaceutical Manufacturers and Associations (IFPMA) Code of Practice. Developments in international controls, largely built upon long-established rules relating to the quality of advertising material, have contributed to clarifying the scope of acceptable company interactions with healthcare professionals. This article aims to provide policy makers, particularly in developing countries, with an overview of the evolution of mechanisms governing the communication practices, such as the distribution of promotional or scientific material and interactions with healthcare stakeholders, relating to prescription-only medicines. PMID:24679064
Ethical pharmaceutical promotion and communications worldwide: codes and regulations.
Francer, Jeffrey; Izquierdo, Jose Zamarriego; Music, Tamara; Narsai, Kirti; Nikidis, Chrisoula; Simmonds, Heather; Woods, Paul
2014-03-29
The international pharmaceutical industry has made significant efforts towards ensuring compliant and ethical communication and interaction with physicians and patients. This article presents the current status of the worldwide governance of communication practices by pharmaceutical companies, concentrating on prescription-only medicines. It analyzes legislative, regulatory, and code-based compliance control mechanisms and highlights significant developments, including the 2006 and 2012 revisions of the International Federation of Pharmaceutical Manufacturers and Associations (IFPMA) Code of Practice. Developments in international controls, largely built upon long-established rules relating to the quality of advertising material, have contributed to clarifying the scope of acceptable company interactions with healthcare professionals. This article aims to provide policy makers, particularly in developing countries, with an overview of the evolution of mechanisms governing the communication practices, such as the distribution of promotional or scientific material and interactions with healthcare stakeholders, relating to prescription-only medicines.
Design Aspects of the Rayleigh Convection Code
NASA Astrophysics Data System (ADS)
Featherstone, N. A.
2017-12-01
Understanding the long-term generation of planetary or stellar magnetic field requires complementary knowledge of the large-scale fluid dynamics pervading large fractions of the object's interior. Such large-scale motions are sensitive to the system's geometry which, in planets and stars, is spherical to a good approximation. As a result, computational models designed to study such systems often solve the MHD equations in spherical geometry, frequently employing a spectral approach involving spherical harmonics. We present computational and user-interface design aspects of one such modeling tool, the Rayleigh convection code, which is suitable for deployment on desktop and petascale HPC architectures alike. In this poster, we will present an overview of this code's parallel design and its built-in diagnostics-output package. Rayleigh has been developed with NSF support through the Computational Infrastructure for Geodynamics and is expected to be released as open-source software in winter 2017/2018.
Sierra/Solid Mechanics 4.48 User's Guide.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merewether, Mark Thomas; Crane, Nathan K; de Frias, Gabriel Jose
Sierra/SolidMechanics (Sierra/SM) is a Lagrangian, three-dimensional code for finite element analysis of solids and structures. It provides capabilities for explicit dynamic, implicit quasistatic and dynamic analyses. The explicit dynamics capabilities allow for the efficient and robust solution of models with extensive contact subjected to large, suddenly applied loads. For implicit problems, Sierra/SM uses a multi-level iterative solver, which enables it to effectively solve problems with large deformations, nonlinear material behavior, and contact. Sierra/SM has a versatile library of continuum and structural elements, and a large library of material models. The code is written for parallel computing environments enabling scalable solutions of extremely large problems for both implicit and explicit analyses. It is built on the SIERRA Framework, which facilitates coupling with other SIERRA mechanics codes. This document describes the functionality and input syntax for Sierra/SM.
Construction and Utilization of a Beowulf Computing Cluster: A User's Perspective
NASA Technical Reports Server (NTRS)
Woods, Judy L.; West, Jeff S.; Sulyma, Peter R.
2000-01-01
Lockheed Martin Space Operations - Stennis Programs (LMSO) at the John C Stennis Space Center (NASA/SSC) has designed and built a Beowulf computer cluster which is owned by NASA/SSC and operated by LMSO. The design and construction of the cluster are detailed in this paper. The cluster is currently used for Computational Fluid Dynamics (CFD) simulations. The CFD codes in use and their applications are discussed. Examples of some of the work are also presented. Performance benchmark studies have been conducted for the CFD codes being run on the cluster. The results of two of the studies are presented and discussed. The cluster is not currently being utilized to its full potential; therefore, plans are underway to add more capabilities. These include the addition of structural, thermal, fluid, and acoustic Finite Element Analysis codes as well as real-time data acquisition and processing during test operations at NASA/SSC. These plans are discussed as well.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Malley, Daniel; Vesselinov, Velimir V.
MADSpython (Model analysis and decision support tools in Python) is a code in Python that streamlines the process of using data and models for analysis and decision support using the code MADS. MADS is open-source code developed at LANL and written in C/C++ (MADS; http://mads.lanl.gov; LA-CC-11-035). MADS can work with external models of arbitrary complexity as well as built-in models of flow and transport in porous media. The Python scripts in MADSpython facilitate the generation of input and output files needed by MADS as well as by the external simulators, which include FEHM and PFLOTRAN. MADSpython enables a number of data- and model-based analyses including model calibration, sensitivity analysis, uncertainty quantification, and decision analysis. MADSpython will be released under a GPL V3 license. MADSpython will be distributed as a Git repo at gitlab.com and github.com. The MADSpython manual and documentation will be posted at http://madspy.lanl.gov.
Can Disability Code Activation Promote Sustainable Development in Egypt... After the Arab Spring?
Mahmoud Issa Abdou, Safaa
2015-01-01
In January 2011, Egypt followed Tunisia in its uprising against the ruling oppressive regimes in search of democracy, freedom, and better living conditions. The movement, later known as the Arab Spring, had implications for the country's economic and political systems, and hence created the need to adopt sustainable development strategies in order to ensure the well-being of all people and the implementation of their human rights. This can only be realized when the built environment becomes accessible to vulnerable people as well as to persons with disabilities, enabling them to participate and be included in various living activities. This paper reviews the impact of the Egyptian disability code, published in 2003, and how its activation could help provide an environment that supports persons with disabilities and allows their integration. Key Words: Disability Code; Sustainable Development; Arab Spring; Accessible Enabling Environment; People with Disabilities Integration.
1987-08-18
Subject terms: synthetic enzymes; chymotrypsin; molecular modeling; peptide synthesis. Abstract (fragment): "The object of this ... for AChE. Additionally, synthetic models of α-chymotrypsin built using cyclodextrins show catalytic activity over a limited pH range [2]. Using L..."
ERIC Educational Resources Information Center
Karatas, F. Ö.; Bodner, G. M.; Unal, Suat
2016-01-01
A study was conducted on the views of the nature of engineering held by 114 first-year engineering majors; the study built on prior work on views of the nature of science held by students, their instructors, and the general public. Open-coding analysis of responses to a 12-item questionnaire suggested that the participants held tacit beliefs that…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, Andrew; Haves, Philip; Jegi, Subhash
This paper describes a software system for automatically generating a reference (baseline) building energy model from the proposed (as-designed) building energy model. This system is built using the OpenStudio Software Development Kit (SDK) and is designed to operate on building energy models in the OpenStudio file format.
User Manual for the PROTEUS Mesh Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Micheal A.; Shemon, Emily R
2016-09-19
PROTEUS is built around a finite element representation of the geometry for visualization. In addition, the PROTEUS-SN solver was built to solve the even-parity transport equation on a finite element mesh provided as input. Similarly, PROTEUS-MOC and PROTEUS-NEMO were built to apply the method of characteristics on unstructured finite element meshes. Given the complexity of real-world problems, experience has shown that using a commercial mesh generator to create rather simple input geometries is overly complex and slow. As a consequence, significant effort has been put into creating multiple codes that assist in mesh generation and manipulation. There are three input means of creating a mesh in PROTEUS: UFMESH, GRID, and NEMESH. At present, UFMESH is a simple way to generate two-dimensional Cartesian and hexagonal fuel assembly geometries. The UFMESH input allows for simple assembly mesh generation, while the GRID input allows the generation of Cartesian, hexagonal, and regular triangular structured grid geometry options. NEMESH is a way for users to create their own mesh or convert another mesh file format into a PROTEUS input format. Given an input mesh format acceptable to PROTEUS, we have constructed several tools which allow further mesh and geometry construction (i.e., mesh extrusion and merging). This report describes the various mesh tools that are provided with the PROTEUS code, giving descriptions of both the input and output. In many cases the examples are provided with a regression test of the mesh tools. The most important mesh tools for any user to consider using are the MT_MeshToMesh.x and the MT_RadialLattice.x codes. The former allows conversion between most mesh types handled by PROTEUS, while the latter allows the merging of multiple (assembly) meshes into a radial structured grid. Note that the mesh generation process is recursive in nature and that each input specific to a given mesh tool (such as .axial or .merge) can be used as "mesh" input for any of the mesh tools discussed in this manual.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vesselinov, Velimir; O'Malley, Daniel; Lin, Youzuo
2016-07-01
Mads.jl (Model analysis and decision support in Julia) is a code that streamlines the process of using data and models for analysis and decision support. It is based on another open-source code developed at LANL and written in C/C++ (MADS; http://mads.lanl.gov; LA-CC-11-035). Mads.jl can work with external models of arbitrary complexity as well as built-in models of flow and transport in porous media. It enables a number of data- and model-based analyses including model calibration, sensitivity analysis, uncertainty quantification, and decision analysis. The code also can use a series of alternative adaptive computational techniques for Bayesian sampling, Monte Carlo, and Bayesian Information-Gap Decision Theory. The code is implemented in the Julia programming language, and has high-performance (parallel) and memory management capabilities. The code uses a series of third-party modules developed by others. The code development will also include contributions to the existing third-party modules written in Julia; these contributions will be important for the efficient implementation of the algorithms used by Mads.jl. The code also uses a series of LANL-developed modules written by Dan O'Malley; these modules will also be part of the Mads.jl release. Mads.jl will be released under a GPL V3 license. The code will be distributed as a Git repo at gitlab.com and github.com. The Mads.jl manual and documentation will be posted at madsjulia.lanl.gov.
NASA Technical Reports Server (NTRS)
Weston, R. P.; Green, L. L.; Salas, A. O.; Samareh, J. A.; Townsend, J. C.; Walsh, J. L.
1999-01-01
An objective of the HPCC Program at NASA Langley has been to promote the use of advanced computing techniques to more rapidly solve the problem of multidisciplinary optimization of a supersonic transport configuration. As a result, a software system has been designed and is being implemented to integrate a set of existing discipline analysis codes, some of them CPU-intensive, into a distributed computational framework for the design of a High Speed Civil Transport (HSCT) configuration. The proposed paper will describe the engineering aspects of integrating these analysis codes and additional interface codes into an automated design system. The objective of the design problem is to optimize the aircraft weight for given mission conditions, range, and payload requirements, subject to aerodynamic, structural, and performance constraints. The design variables include both thicknesses of structural elements and geometric parameters that define the external aircraft shape. An optimization model has been adopted that uses the multidisciplinary analysis results and the derivatives of the solution with respect to the design variables to formulate a linearized model that provides input to the CONMIN optimization code, which outputs new values for the design variables. The analysis process begins by deriving the updated geometries and grids from the baseline geometries and grids using the new values for the design variables. This free-form deformation approach provides internal FEM (finite element method) grids that are consistent with aerodynamic surface grids. The next step involves using the derived FEM and section properties in a weights process to calculate detailed weights and the center of gravity location for specified flight conditions. The weights process computes the as-built weight, weight distribution, and weight sensitivities for given aircraft configurations at various mass cases. Currently, two mass cases are considered: cruise and gross take-off weight (GTOW). Weights information is obtained from correlations of data from three sources: 1) as-built initial structural and non-structural weights from an existing database, 2) theoretical FEM structural weights and sensitivities from Genesis, and 3) empirical as-built weight increments, non-structural weights, and weight sensitivities from FLOPS. For the aeroelastic analysis, a variable-fidelity aerodynamic analysis has been adopted. This approach uses infrequent CPU-intensive non-linear CFD to calculate a non-linear correction relative to a linear aero calculation for the same aerodynamic surface at an angle of attack that results in the same configuration lift. For efficiency, this nonlinear correction is applied after each subsequent linear aero solution during the iterations between the aerodynamic and structural analyses. Convergence is achieved when the vehicle shape being used for the aerodynamic calculations is consistent with the structural deformations caused by the aerodynamic loads. To make the structural analyses more efficient, a linearized structural deformation model has been adopted, in which a single stiffness matrix can be used to solve for the deformations under all the load conditions. Using the converged aerodynamic loads, a final set of structural analyses are performed to determine the stress distributions and the buckling conditions for constraint calculation. 
Performance constraints are obtained by running FLOPS using drag polars that are computed using results from non-linear corrections to the linear aero code plus several codes to provide drag increments due to skin friction, wave drag, and other miscellaneous drag contributions. The status of the integration effort will be presented in the proposed paper, and results will be provided that illustrate the degree of accuracy in the linearizations that have been employed.
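The payoff of the linearized structural model described above, one stiffness factorization reused across all load conditions, can be sketched as follows; the matrix and load vectors are toy stand-ins for the FEM system.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import factorized

# Factor the stiffness matrix K once, then back-substitute cheaply for
# every load condition (e.g., cruise and GTOW cases).
K = csc_matrix(np.array([[ 4.0, -1.0,  0.0],
                         [-1.0,  4.0, -1.0],
                         [ 0.0, -1.0,  4.0]]))   # toy stand-in for FEM stiffness
solve = factorized(K)                            # one LU factorization

loads = [np.array([1.0, 0.0, 0.0]),             # load case 1 (illustrative)
         np.array([0.0, 2.0, 0.5])]             # load case 2 (illustrative)
deformations = [solve(f) for f in loads]         # per-case back-solves only
print(deformations)
```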
GISMO: A MATLAB toolbox for seismic research, monitoring, & education
NASA Astrophysics Data System (ADS)
Thompson, G.; Reyes, C. G.; Kempler, L. A.
2017-12-01
GISMO is an open-source MATLAB toolbox which provides an object-oriented framework to build workflows and applications that read, process, visualize and write seismic waveform, catalog and instrument response data. GISMO can retrieve data from a variety of sources (e.g. FDSN web services, Earthworm/Winston servers) and data formats (SAC, Seisan, etc.). It can handle waveform data that crosses file boundaries. All this alleviates one of the most time-consuming tasks for scientists developing their own codes. GISMO simplifies seismic data analysis by providing a common interface for your data, regardless of its source. Several common plots are built into GISMO, such as record section plots, spectrograms, depth-time sections, event count per unit time, energy release per unit time, etc. Other visualizations include map views and cross-sections of hypocentral data. Several common processing methods are also included, such as an extensive set of tools for correlation analysis. Support is being added to interface GISMO with ObsPy. GISMO encourages community development of an integrated set of codes and accompanying documentation, eliminating the need for seismologists to "reinvent the wheel". By sharing code, the consistency and repeatability of results can be enhanced. GISMO is hosted on GitHub with documentation both within the source code and in the project wiki. GISMO has been used at the University of South Florida and the University of Alaska Fairbanks in graduate-level courses including Seismic Data Analysis, Time Series Analysis and Computational Seismology. GISMO has also been tailored to interface with the common seismic monitoring software and data formats used by volcano observatories in the US and elsewhere. As an example, toolbox training was delivered to researchers at INETER (Nicaragua). Applications built on GISMO include IceWeb (e.g. web-based spectrograms), which has been used by the Alaska Volcano Observatory since 1998 and became the prototype for the USGS Pensive system.
Landlab: an Open-Source Python Library for Modeling Earth Surface Dynamics
NASA Astrophysics Data System (ADS)
Gasparini, N. M.; Adams, J. M.; Hobley, D. E. J.; Hutton, E.; Nudurupati, S. S.; Istanbulluoglu, E.; Tucker, G. E.
2016-12-01
Landlab is an open-source Python modeling library that enables users to easily build unique models to explore earth surface dynamics. The Landlab library provides a number of tools and functionalities that are common to many earth surface models, thus eliminating the need for a user to recode fundamental model elements each time she explores a new problem. For example, Landlab provides a gridding engine so that a user can build a uniform or nonuniform grid in one line of code. The library has tools for setting boundary conditions, adding data to a grid, and performing basic operations on the data, such as calculating gradients and curvature. The library also includes a number of process components, which are numerical implementations of physical processes. To create a model, a user creates a grid and couples together process components that act on grid variables. The current library has components for modeling a diverse range of processes, from overland flow generation to bedrock river incision, from soil wetting and drying to vegetation growth, succession and death. The code is freely available for download (https://github.com/landlab/landlab) or can be installed as a Python package. Landlab models can also be built and run on Hydroshare (www.hydroshare.org), an online collaborative environment for sharing hydrologic data, models, and code. Tutorials illustrating a wide range of Landlab capabilities such as building a grid, setting boundary conditions, reading in data, plotting, using components and building models are also available (https://github.com/landlab/tutorials). The code is also comprehensively documented both online and natively in Python. In this presentation, we illustrate the diverse capabilities of Landlab. We highlight existing functionality by illustrating outcomes from a range of models built with Landlab - including applications that explore landscape evolution and ecohydrology. Finally, we describe the range of resources available for new users.
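As a concrete illustration of the one-line gridding and the gradient operations described above, here is a minimal sketch using Landlab's public API (exact signatures may vary between Landlab versions):

```python
from landlab import RasterModelGrid

grid = RasterModelGrid((4, 5), xy_spacing=10.0)          # uniform grid in one line
z = grid.add_zeros("topographic__elevation", at="node")  # attach data to the grid
z += grid.x_of_node * 0.01                               # impose a gentle eastward slope
grad = grid.calc_grad_at_link(z)                         # gradients of the node field at links
print(grad.max())
```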
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly; Reid, Max B.
1993-01-01
A higher-order neural network (HONN) can be designed to be invariant to changes in scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Consequently, fewer training passes and a smaller training set are required to learn to distinguish between objects. The size of the input field is limited, however, because of the memory required for the large number of interconnections in a fully connected HONN. By coarse coding the input image, the input field size can be increased to allow the larger input scenes required for practical object recognition problems. We describe a coarse coding technique and present simulation results illustrating its usefulness and its limitations. Our simulations show that a third-order neural network can be trained to distinguish between two objects in a 4096 x 4096 pixel input field independent of transformations in translation, in-plane rotation, and scale in less than ten passes through the training set. Furthermore, we empirically determine the limits of the coarse coding technique in the object recognition domain.
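The coarse coding idea can be sketched in a few lines: the large input field is replaced by several small, mutually offset coarse grids, and a fine position is recoverable from the intersection of the coarse pixels that cover it. This is a generic illustration of the technique, not the paper's exact scheme:

```python
import numpy as np

def coarse_code(img, g):
    """Encode a binary image as g diagonally offset coarse grids; each coarse
    pixel ORs (here, maxes) a g x g block of fine pixels."""
    n = img.shape[0]                      # assume a square image, n divisible by g
    fields = []
    for s in range(g):                    # one coarse grid per diagonal offset
        padded = np.zeros((n + g, n + g), dtype=img.dtype)
        padded[s:s + n, s:s + n] = img
        m = (n + g) // g
        fields.append(padded.reshape(m, g, m, g).max(axis=(1, 3)))
    return fields                         # g small fields instead of one huge one

img = np.zeros((16, 16), dtype=np.uint8)
img[5, 9] = 1                             # a single 'on' fine pixel
print([tuple(np.argwhere(f)[0]) for f in coarse_code(img, 4)])
```

Each coarse field is roughly a factor of g squared smaller than the input, which is what makes a 4096 x 4096 field tractable for a fully connected third-order network.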
Tempest simulations of kinetic GAM mode and neoclassical turbulence
NASA Astrophysics Data System (ADS)
Xu, X. Q.; Dimits, A. M.
2007-11-01
TEMPEST is a nonlinear five-dimensional (3d2v) gyrokinetic continuum code for studies of H-mode edge plasma neoclassical transport and turbulence in real divertor geometry. The 4D TEMPEST code correctly reproduces the frequency and collisionless damping of GAMs and zonal flow with fully nonlinear Boltzmann electrons in homogeneous plasmas. For large q = 4 to 9, the TEMPEST simulations show that a series of resonances at the higher harmonics v|| = ωG q R0/n with n = 4 becomes effective. The TEMPEST simulation also shows that the GAM exists in the edge plasma pedestal for steep density and temperature gradients, and that an initial GAM relaxes to the standard neoclassical residual with neoclassical transport, rather than the Rosenbluth-Hinton residual, due to the presence of ion-ion collisions. The enhanced GAM damping explains experimental BES measurements on the edge q scaling of the GAM amplitude. Our 5D gyrokinetic code is built on the 4D TEMPEST neoclassical code, extended to a fifth dimension in the toroidal direction and with 3D domain decompositions. Progress on performing 5D neoclassical turbulence simulations will be reported.
Dubé, Anne Sophie; Beausoleil, Maude; Gosselin, Céline; Beaulme, Ginette; Paquin, Sophie; Pelletier, Anne; Goudreau, Sophie; Poirier, Marie-Hélène; Drouin, Louis; Gauvin, Lise
2014-07-09
1) To describe grassroots projects aimed at the built environment and associated with active transportation on the Island of Montreal; and 2) to examine associations between the number of projects and indicators of neighbourhood material and social deprivation and the built environment. We identified funding agencies and community groups conducting projects on built environments throughout the Island of Montreal. Through website consultation and a snowballing procedure, we inventoried projects that aimed at transforming built environments and that were carried out by community organizations between January 1, 2006, and November 1, 2010. We coded and validated information about project activities and created an interactive map using Geoclip software. Correlational analyses quantified associations between number of projects, neighbourhood characteristics and deprivation. A total of 134 community organizations were identified, and 183 grassroots projects were inventoried. A large number of projects were aimed at increasing awareness of/improving active or public transportation (n=95), improving road safety (n=84) and enhancing neighbourhood beautification and greening (n=69). The correlation between the presence of projects and the extent of neighbourhood material deprivation was small (Kendall's τ=0.26, p<0.001), but in areas with greater social deprivation there were more projects (Kendall's τ=0.38, p<0.001). Larger numbers of projects were also associated with the presence of more extensive land-use mix (Kendall's τ=0.23, p<0.001) and a greater proportion of road intersections with injured pedestrians, cyclists and motor vehicle users (Kendall's τ=0.43, p<0.001). There is significant community mobilization around built environments and active transportation. Investigations of the implementation processes and impacts are warranted.
A Data Warehouse to Support Condition Based Maintenance (CBM)
2005-05-01
Application (VBA) code sequence to import the original MAST-generated CSV and then create a single output table in DBASE IV format. The DBASE IV format... database architecture (Oracle, Sybase, MS-SQL, etc.). This design includes table definitions, comments, specification of table attributes, primary and foreign... built queries and applications. Needs the application developers to construct data views. No SQL programming experience. b. Power Database User - knows
Wavelength Coded Image Transmission and Holographic Optical Elements.
1984-08-20
A system has been designed and built for transmitting images of diffusely reflecting objects through optical fibers and displaying those images at a... passive components at the end of a fiber-optic imaging system as described in this paper... designed to transmit high-resolution images of diffusely reflecting objects... In designing a system for viewing diffusely reflecting objects, one must consider that a... (The authors are with the University of Minnesota, Electrical Engineering.)
Detection and Classification of Objects in Synthetic Aperture Radar Imagery
2006-02-01
a higher False Alarm Rate (FAR). Currently, a standard edge detector is the Canny algorithm, which is available with the mathematics package MATLAB... the algorithm used to calculate the Radon transform. The MATLAB implementation uses the built-in Radon transform procedure, which is extremely... MATLAB code for a faster forward-backwards selection process has also been provided. In both cases, the feature selection was accomplished by using
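The MATLAB built-ins mentioned in the excerpt (Canny edge detection and the Radon transform) have close open-source equivalents; here is a minimal scikit-image sketch on a synthetic target, with the parameters chosen as illustrative assumptions:

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import radon

img = np.zeros((128, 128))
img[48:80, 48:80] = 1.0                              # synthetic bright target

edges = canny(img, sigma=2.0)                        # Canny edge map
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(edges.astype(float), theta=theta, circle=False)  # Radon transform
print(int(edges.sum()), sinogram.shape)
```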
Dynamic analysis of flexible mechanical systems using LATDYN
NASA Technical Reports Server (NTRS)
Wu, Shih-Chin; Chang, Che-Wei; Housner, Jerrold M.
1989-01-01
A 3-D, finite element based simulation tool for flexible multibody systems is presented. Hinge degrees of freedom are built into the equations of motion to reduce geometric constraints. The finite element approach avoids the difficulty of selecting deformation modes for flexible components that arises with the assumed-mode method. The tool is applied to simulate a practical space structure deployment problem. Results of the examples demonstrate the capability of the code and approach.
NASA Astrophysics Data System (ADS)
Manfreda, G.; Bellina, F.
2016-12-01
The paper describes the new lumped thermal model recently implemented in the THELMA code for the coupled electromagnetic-thermal analysis of superconducting cables. A new geometrical model is also presented, which describes the Rutherford cables used for accelerator magnets. A first validation of these models has been given by the analysis of the quench longitudinal propagation velocity in the Nb3Sn prototype coil SMC3, built and tested in the framework of the EUCARD project for the development of high-field magnets for the LHC machine. This paper presents the models in detail, while their application to the quench propagation analysis is presented in a companion paper.
Rep. Burgess, Michael C. [R-TX-26]
2014-06-10
House - 06/11/2014: On agreeing to the resolution, agreed to by recorded vote: 227 - 189 (Roll no. 299).
Introducing a New Software for Geodetic Analysis
NASA Astrophysics Data System (ADS)
Hjelle, Geir Arne; Dähnn, Michael; Fausk, Ingrid; Kirkvik, Ann-Silje; Mysen, Eirik
2017-04-01
At the Norwegian Mapping Authority, we are currently developing Where, a new software for geodetic analysis. Where is built on our experiences with the Geosat software, and will be able to analyse and combine data from VLBI, SLR, GNSS and DORIS. The software is mainly written in Python, which has proved very fruitful. The code is quick to write and the architecture is easily extendable and maintainable, while at the same time taking advantage of well-tested code like the SOFA and IERS libraries. This presentation will show some of the current capabilities of Where, including benchmarks against other software packages, and outline our plans for further progress. In addition, we will report on some experiments with alternative weighting strategies for VLBI.
NASA Technical Reports Server (NTRS)
Sharma, Naveen
1992-01-01
In this paper we briefly describe a combined symbolic and numeric approach for solving mathematical models on parallel computers. An experimental software system, PIER, is being developed in Common Lisp to synthesize the computationally intensive and domain-formulation-dependent phases of finite element analysis (FEA) solution methods. Quantities for the domain formulation, like shape functions, element stiffness matrices, etc., are automatically derived using symbolic mathematical computations. The problem-specific information and derived formulae are then used to generate (parallel) numerical code for the FEA solution steps. A constructive approach is taken to specifying a numerical program design. The code generator compiles application-oriented input specifications into (parallel) FORTRAN77 routines with the help of built-in knowledge of the particular problem, numerical solution methods and the target computer.
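The symbolic-derivation-to-code step that PIER automates can be mimicked with today's tools; the sketch below uses SymPy rather than the paper's Common Lisp system, deriving a 1-D bar element stiffness matrix symbolically and emitting Fortran (F95 here, where the paper targeted FORTRAN77) for one entry:

```python
import sympy as sp
from sympy.utilities.codegen import codegen

x, L, E, A = sp.symbols("x L E A", positive=True)
N = sp.Matrix([1 - x / L, x / L])                # linear shape functions on [0, L]
B = N.diff(x)                                    # strain-displacement vector
# Element stiffness matrix: integral of E*A*B*B^T over the element
K = (E * A * B * B.T).applyfunc(lambda e: sp.integrate(e, (x, 0, L)))

results = codegen(("k11", K[0, 0]), "F95")       # emit Fortran for one entry
print(results[0][1])                             # generated source text
```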
A Comprehensive High Performance Predictive Tool for Fusion Liquid Metal Hydromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Peter; Chhabra, Rupanshi; Munipalli, Ramakanth
In the Phase I SBIR project, HyPerComp and Texcel initiated the development of two induction-based MHD codes as a predictive tool for fusion hydro-magnetics. The newly-developed codes overcome the deficiency of other MHD codes based on the quasi-static approximation by defining a more general mathematical model that utilizes the induced magnetic field rather than the electric potential as the main electromagnetic variable. The UCLA code is a finite-difference staggered-mesh code that serves as a supplementary tool to the massively-parallel finite-volume code developed by HyPerComp. As there is no suitable experimental data under blanket-relevant conditions for code validation, code-to-code comparisons and comparisons against analytical solutions were successfully performed for three selected test cases: (1) lid-driven MHD flow, (2) flow in a rectangular duct in a transverse magnetic field, and (3) unsteady finite magnetic Reynolds number flow in a rectangular enclosure. The performed tests suggest that the developed codes are accurate and robust. Further work will focus on enhancing the code capabilities towards higher flow parameters and faster computations. At the conclusion of the current Phase-II Project we have completed the preliminary validation efforts in performing unsteady mixed-convection MHD flows (against the limited data that is currently available in the literature), and demonstrated flow behavior in large 3D channels including important geometrical features. Code enhancements such as periodic boundary conditions and unmatched mesh structures are also ready. As proposed, we have built upon these strengths and explored a much increased range of Grashof numbers and Hartmann numbers under various flow conditions, ranging from flows in a rectangular duct to prototypic blanket modules and liquid metal PFC. Parametric studies, numerical and physical model improvements to expand the scope of simulations, code demonstration, and continued validation activities have also been completed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vassilevska, Tanya
This is the first code, designed to run on a desktop, which models intracellular replication and cell-to-cell infection and demonstrates virus evolution at the molecular level. This code simulates the infection of a population of "idealized biological cells" (represented as objects that do not divide or have metabolism) with "virus" (represented by its genetic sequence), and the replication and simultaneous mutation of the virus, which leads to evolution of a population of genetically diverse viruses. The code is built to simulate single-stranded RNA viruses. The inputs to the code are: 1. the number of biological cells in the culture, 2. the initial composition of the virus population, 3. the reference genome of the RNA virus, 4. the coordinates of the genome regions and their significance, and 5. parameters determining the dynamics of virus replication, such as the mutation rate. The simulation ends when all cells have been infected or when no more infections occur after a given number of attempts. The code has the ability to simulate the evolution of the virus in serial passage of cell "cultures", i.e. after the end of a simulation, a new one is immediately scheduled with a new culture of infected cells. The code outputs characteristics of the resulting virus population dynamics and the genetic composition of the virus population, such as the top dominant genomes and the percentage of a genome with specific characteristics.
The historical, ethical, and legal background of human-subjects research.
Rice, Todd W
2008-10-01
The current system of human-subject-research oversight and protections has developed over the last 5 decades. The principles of conducting human research were first codified in the Nuremberg Code, which emerged from the trials of Nazi war criminals. The 3 basic elements of the Nuremberg Code (voluntary informed consent, favorable risk/benefit analysis, and right to withdraw without repercussions) became the foundation for subsequent ethical codes and research regulations. In 1964 the World Medical Association released the Declaration of Helsinki, which built on the principles of the Nuremberg Code. Numerous research improprieties between 1950 and 1974 in the United States prompted Congressional deliberations about human-subject-research oversight. Congress's first legislation to protect the rights and welfare of human subjects was the National Research Act of 1974, which created the National Commission for Protection of Human Subjects of Biomedical and Behavioral Research, which issued the Belmont Report. The Belmont Report stated 3 fundamental principles for conducting human-subjects research: respect for persons, beneficence, and justice. The Office of Human Research Protections oversees Title 45, Part 46 of the Code of Federal Regulations, which pertains to human-subjects research. That office indirectly oversees human-subjects research through local institutional review boards (IRBs). Since their inception, the principles of conducting human research, IRBs, and the Code of Federal Regulations have all advanced substantially. This paper describes the history and current status of human-subjects-research regulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sienicki, J.J.
A fast running and simple computer code has been developed to calculate pressure loadings inside light water reactor containments/confinements under loss-of-coolant accident conditions. PACER was originally developed to calculate containment/confinement pressure and temperature time histories for loss-of-coolant accidents in Soviet-designed VVER reactors and is relevant to the activities of the US International Nuclear Safety Center. The code employs a multicompartment representation of the containment volume and is focused upon application to early time containment phenomena during and immediately following blowdown. PACER has been developed for FORTRAN 77 and earlier versions of FORTRAN. The code has been successfully compiled and executed on SUN SPARC and Hewlett-Packard HP-735 workstations provided that appropriate compiler options are specified. The code incorporates both capabilities built around a hardwired default generic VVER-440 Model V230 design as well as fairly general user-defined input. However, array dimensions are hardwired and must be changed by modifying the source code if the number of compartments/cells differs from the default number of nine. Detailed input instructions are provided as well as a description of outputs. Input files and selected output are presented for two sample problems run on both HP-735 and SUN SPARC workstations.
Optimizing ATLAS code with different profilers
NASA Astrophysics Data System (ADS)
Kama, S.; Seuster, R.; Stewart, G. A.; Vitillo, R. A.
2014-06-01
After the current maintenance period, the LHC will provide higher energy collisions with increased luminosity. In order to keep up with these higher rates, ATLAS software needs to speed up substantially. However, ATLAS code is composed of approximately 6M lines, written by many different programmers with different backgrounds, which makes code optimisation a challenge. To help with this effort different profiling tools and techniques are being used. These include well known tools, such as the Valgrind suite and Intel Amplifier; less common tools like Pin, PAPI, and GOoDA; as well as techniques such as library interposing. In this paper we will mainly focus on Pin tools and GOoDA. Pin is a dynamic binary instrumentation tool which can obtain statistics such as call counts and instruction counts, and interrogate functions' arguments. It has been used to obtain CLHEP Matrix profiles, operations and vector sizes for linear algebra calculations, which has provided the insight necessary to achieve significant performance improvements. Complementing this, GOoDA, an in-house performance tool built in collaboration with Google, which is based on hardware performance monitoring unit events, is used to identify hot-spots in the code for different types of hardware limitations, such as CPU resources, caches, or memory bandwidth. GOoDA has been used to improve the performance of the new magnetic field code and to identify potential vectorization targets in several places, such as the Runge-Kutta propagation code.
Migration of legacy mumps applications to relational database servers.
O'Kane, K C
2001-07-01
An extended implementation of the Mumps language is described that facilitates vendor-neutral migration of legacy Mumps applications to SQL-based relational database servers. Implemented as a compiler, this system translates Mumps programs to operating-system-independent, standard C code for subsequent compilation to fully stand-alone, binary executables. Added built-in functions and support modules extend the native hierarchical Mumps database with access to industry-standard, networked, relational database management servers (RDBMS), thus freeing Mumps applications from dependence upon vendor-specific, proprietary, unstandardized database models. Unlike Mumps systems that have added captive, proprietary RDBMS access, the programs generated by this development environment can be used with any RDBMS system that supports common network access protocols. Additional features include a built-in web server interface and the ability to interoperate directly with programs and functions written in other languages.
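One common way such migrations map hierarchical Mumps globals onto a relational store is a table keyed by global name plus flattened subscripts; the sketch below illustrates that general idea only (it is not the paper's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE globals
                (name TEXT, subscripts TEXT, value TEXT,
                 PRIMARY KEY (name, subscripts))""")

def set_node(name, subs, value):   # Mumps: SET ^name(sub1,sub2,...)=value
    conn.execute("INSERT OR REPLACE INTO globals VALUES (?, ?, ?)",
                 (name, ",".join(map(str, subs)), str(value)))

def get_node(name, subs):          # Mumps: $GET(^name(sub1,sub2,...))
    row = conn.execute("SELECT value FROM globals WHERE name=? AND subscripts=?",
                       (name, ",".join(map(str, subs)))).fetchone()
    return row[0] if row else ""

set_node("PATIENT", (42, "NAME"), "SMITH,JOHN")
print(get_node("PATIENT", (42, "NAME")))   # -> SMITH,JOHN
```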
Research Support Facility (RSF): Leadership in Building Performance (Brochure)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This brochure/poster provides information on the features of the Research Support Facility including a detailed illustration of the facility with call-outs of energy efficiency and renewable energy technologies. Imagine an office building so energy efficient that its occupants consume only the amount of energy generated by renewable power on the building site. The building, the Research Support Facility (RSF) occupied by the U.S. Department of Energy's National Renewable Energy Laboratory (NREL) employees, uses 50% less energy than if it were built to current commercial code and achieves the U.S. Green Building Council's Leadership in Energy and Environmental Design (LEED®) Platinum rating. With 19% of the primary energy in the U.S. consumed by commercial buildings, the RSF is changing the way commercial office buildings are designed and built.
NASA Astrophysics Data System (ADS)
Cheremkhin, Pavel A.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.
2016-11-01
Applications of optical methods for encryption purposes have been attracting the interest of researchers for decades. The most popular are coherent techniques such as double random phase encoding. Its main advantage is high security due to the transformation of the spectrum of the image to be encrypted into a white spectrum via the first random phase mask. Downsides are the necessity of using a holographic registration scheme and the speckle noise occurring due to coherent illumination. Elimination of these disadvantages is possible via the use of incoherent illumination. In this case, phase registration no longer matters, which means that there is no need for a holographic setup, and speckle noise is gone. Recently, encryption of digital information in the form of binary images has become quite popular. Advantages of using a quick response (QR) code as a data container for optical encryption include: 1) any data represented as a QR code will have a close to white (excluding zero spatial frequency) Fourier spectrum, which has good overlap with the encryption key spectrum; 2) a built-in algorithm for image scale and orientation correction simplifies decoding of decrypted QR codes; 3) the embedded error correction code allows for successful decryption of information even in the case of partial corruption of the decrypted image. Optical encryption of digital data in the form of QR codes using spatially incoherent illumination was experimentally implemented. Two liquid crystal spatial light modulators were used in the experimental setup for QR code and encrypting kinoform imaging, respectively. Decryption was conducted digitally. Successful decryption of encrypted QR codes is demonstrated.
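The claim that QR-coded data has a near-white spectrum (excluding zero spatial frequency) is easy to check numerically; here is a small illustrative experiment, with a random binary pattern standing in for a QR code:

```python
import numpy as np

rng = np.random.default_rng(0)
qr_like = rng.integers(0, 2, (128, 128)).astype(float)  # QR-like binary pattern
smooth = np.outer(np.hanning(128), np.hanning(128))     # smooth, low-frequency image

def spectral_flatness(img):
    mag = np.abs(np.fft.fft2(img))
    mag[0, 0] = 0.0                    # exclude zero spatial frequency, as in the text
    mag = mag[mag > 0]
    return np.exp(np.log(mag).mean()) / mag.mean()  # 1.0 would be perfectly white

print(spectral_flatness(qr_like), spectral_flatness(smooth))
# The binary pattern scores far closer to 1 (white) than the smooth image.
```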
NASA Astrophysics Data System (ADS)
Bognot, J. R.; Candido, C. G.; Blanco, A. C.; Montelibano, J. R. Y.
2018-05-01
Monitoring the progress of a building's construction is critical in construction management. However, measuring construction progress is still manual, time-consuming and error-prone, and it imposes a tedious process of analysis leading to delays, additional costs and effort. The main goal of this research is to develop a methodology for building construction progress monitoring based on a 3D as-built model of the building from unmanned aerial system (UAS) images, a 4D as-planned model (with the construction schedule integrated), and GIS analysis. Monitoring was done by capturing videos of the building with a camera-equipped UAS. Still images were extracted, filtered, bundle-adjusted, and a 3D as-built model was generated using open-source photogrammetric software. The as-planned model was generated from digitized CAD drawings using GIS. The 3D as-built model was aligned with the 4D as-planned model of the building formed by extrusion of building elements and integration of the construction's planned schedule. The construction progress is visualized via color-coding the building elements in the 3D model. The developed methodology was applied to data obtained from an actual construction site. Accuracy in detecting 'built' or 'not built' building elements ranges from 82-84%, with precision of 50-72%. Quantified progress in terms of the number of building elements is 21.31% (November 2016), 26.84% (January 2017) and 44.19% (March 2017). The results can be used as an input for monitoring the progress of construction projects and improving the related decision-making process.
Experimental benchmarking of a Monte Carlo dose simulation code for pediatric CT
NASA Astrophysics Data System (ADS)
Li, Xiang; Samei, Ehsan; Yoshizumi, Terry; Colsher, James G.; Jones, Robert P.; Frush, Donald P.
2007-03-01
In recent years, there has been a desire to reduce CT radiation dose to children because of their susceptibility and prolonged risk for cancer induction. Concerns arise, however, as to the impact of dose reduction on image quality and thus potentially on diagnostic accuracy. To study the dose and image quality relationship, we are developing a simulation code to calculate organ dose in pediatric CT patients. To benchmark this code, a cylindrical phantom was built to represent a pediatric torso, which allows measurements of dose distributions from its center to its periphery. Dose distributions for axial CT scans were measured on a 64-slice multidetector CT (MDCT) scanner (GE Healthcare, Chalfont St. Giles, UK). The same measurements were simulated using a Monte Carlo code (PENELOPE, Universitat de Barcelona) with the applicable CT geometry including bowtie filter. The deviations between simulated and measured dose values were generally within 5%. To our knowledge, this work is one of the first attempts to compare measured radial dose distributions on a cylindrical phantom with Monte Carlo simulated results. It provides a simple and effective method for benchmarking organ dose simulation codes and demonstrates the potential of Monte Carlo simulation for investigating the relationship between dose and image quality for pediatric CT patients.
The Julia programming language: the future of scientific computing
NASA Astrophysics Data System (ADS)
Gibson, John
2017-11-01
Julia is an innovative new open-source programming language for high-level, high-performance numerical computing. Julia combines the general-purpose breadth and extensibility of Python, the ease-of-use and numeric focus of Matlab, the speed of C and Fortran, and the metaprogramming power of Lisp. Julia uses type inference and just-in-time compilation to compile high-level user code to machine code on the fly. A rich set of numeric types and extensive numerical libraries are built-in. As a result, Julia is competitive with Matlab for interactive graphical exploration and with C and Fortran for high-performance computing. This talk interactively demonstrates Julia's numerical features and benchmarks Julia against C, C++, Fortran, Matlab, and Python on a spectral time-stepping algorithm for a 1d nonlinear partial differential equation. The Julia code is nearly as compact as Matlab and nearly as fast as Fortran. This material is based upon work supported by the National Science Foundation under Grant No. 1554149.
Evolutionary Construction of Block-Based Neural Networks in Consideration of Failure
NASA Astrophysics Data System (ADS)
Takamori, Masahito; Koakutsu, Seiichi; Hamagami, Tomoki; Hirata, Hironori
In this paper we propose a modified gene coding scheme and a failure-aware evolutionary construction method for Block-Based Neural Networks (BBNNs). In the modified gene coding, the weight genes are arranged on a chromosome according to the positional relation between the weight and structure genes, which increases the efficiency of the search performed by crossover and is expected to improve the convergence rate of construction and shorten construction time. In the failure-aware evolutionary construction, a structure adapted to a failure is built in the state where the failure has occurred, so that a BBNN can be reconstructed in a short time when a failure arises. To evaluate the proposed method, we apply it to pattern classification and autonomous mobile robot control problems. The computational experiments indicate that the proposed method can improve the convergence rate of construction and shorten construction and reconstruction times.
Astrometry with A-Track Using Gaia DR1 Catalogue
NASA Astrophysics Data System (ADS)
Kılıç, Yücel; Erece, Orhan; Kaplan, Murat
2018-04-01
In this work, we built all-sky index files from the Gaia DR1 catalogue for high-precision astrometric field solutions and precise WCS coordinates of moving objects. For this, we used the build-astrometry-index program, part of the astrometry.net code suite. Additionally, we added astrometry.net's WCS solution tool to A-Track, our previously developed software, which is a fast and robust pipeline for detecting moving objects such as asteroids and comets in sequential FITS images. Moreover, an MPC module was added to A-Track. This module is linked to an asteroid database to name the found objects and prepare the MPC file to report the results. After these innovations, we tested the new version of the A-Track code on photometric data taken by the SI-1100 CCD with the 1-meter telescope at the TÜBİTAK National Observatory, Antalya. The pipeline can be used to analyse large data archives or daily sequential data. The code is hosted on GitHub under the GNU GPL v3 license.
Graphical User Interface for the NASA FLOPS Aircraft Performance and Sizing Code
NASA Technical Reports Server (NTRS)
Lavelle, Thomas M.; Curlett, Brian P.
1994-01-01
XFLOPS is an X-Windows/Motif graphical user interface for the aircraft performance and sizing code FLOPS. This new interface simplifies entering data and analyzing results, thereby reducing analysis time and errors. Data entry is simpler because input windows are used for each of the FLOPS namelists. These windows contain fields to input the variable's values along with help information describing the variable's function. Analyzing results is simpler because output data are displayed rapidly. This is accomplished in two ways. First, because the output file has been indexed, users can view particular sections with the click of a mouse button. Second, because menu picks have been created, users can plot engine and aircraft performance data. In addition, XFLOPS has a built-in help system and complete on-line documentation for FLOPS.
Pulse Code Modulation (PCM) data storage and analysis using a microcomputer
NASA Technical Reports Server (NTRS)
Massey, D. E.
1986-01-01
The current widespread use of microcomputers has led to the creation of some very low-cost instrumentation. A Pulse Code Modulation (PCM) storage device/data analyzer -- a peripheral plug-in board especially constructed to enable a personal computer to store and analyze data from a PCM source -- was designed and built for use on the NASA Sounding Rocket Program for PCM encoder configuration and testing. This board and custom-written software turns a computer into a snapshot PCM decommutator which will accept and store many hundreds or thousands of PCM telemetry data frames, then sift through them repeatedly. These data can be converted to any number base and displayed, examined for any bit dropouts or changes (in particular words or frames), graphically plotted, or statistically analyzed.
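The snapshot-decommutator behavior described above boils down to finding a frame-sync pattern in the captured stream and slicing fixed-length frames for repeated inspection. A minimal sketch follows; the sync word and frame length are illustrative assumptions, not the board's actual format:

```python
SYNC = bytes.fromhex("FE6B2840")   # IRIG-style 32-bit frame sync pattern (assumed)
FRAME_LEN = 64                     # bytes per minor frame, including sync (assumed)

def decommutate(stream: bytes):
    """Collect every sync-aligned minor frame found in a captured byte stream."""
    frames, i = [], stream.find(SYNC)
    while i != -1 and i + FRAME_LEN <= len(stream):
        frames.append(stream[i:i + FRAME_LEN])
        i = stream.find(SYNC, i + FRAME_LEN)
    return frames

# Captured frames can then be sifted repeatedly, e.g. displayed in binary:
frames = decommutate(SYNC + bytes(60) + SYNC + bytes(60))
print(len(frames), [f"{w:08b}" for w in frames[0][4:8]])
```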
KilBride, A L; Mason, S A; Honeyman, P C; Pritchard, D G; Hepple, S; Green, L E
2012-02-11
Animal Health (AH) defines the outcome of its inspections of livestock holdings as full compliance with the legislation and welfare code (A), compliance with the legislation but not the code (B), non-compliance with legislation but no pain, distress or suffering obvious in the animals (C), or evidence of unnecessary pain or unnecessary distress (D). The aim of the present study was to investigate whether membership of farm assurance or organic certification schemes was associated with compliance with animal welfare legislation as inspected by AH. Participating schemes provided details of their members, past and present, and these records were matched against inspection data from AH. Multivariable multilevel logistic binomial models were built to investigate the association between compliance with legislation and membership of a farm assurance/organic scheme. The percentage of inspections coded A, B, C or D was 37.1, 35.6, 20.2 and 7.1 per cent, respectively. Once adjusted for year, country, enterprise, herd size and reason for inspection, there was a pattern of significantly reduced risk of codes C and D, compared with A and B, in certified enterprises compared with enterprises that were not known to be certified, in all species.
A method for radiological characterization based on fluence conversion coefficients
NASA Astrophysics Data System (ADS)
Froeschl, Robert
2018-06-01
Radiological characterization of components in accelerator environments is often required to ensure adequate radiation protection during maintenance, transport and handling, as well as for the selection of the proper disposal pathway. The relevant quantities are typically weighted sums of specific activities with radionuclide-specific weighting coefficients. Traditional Monte Carlo methods are either based on radionuclide creation events, or the particle fluences in the regions of interest are scored and then weighted off-line with radionuclide production cross sections. The presented method bases the radiological characterization on a set of fluence conversion coefficients. For a given irradiation profile and cool-down time, radionuclide production cross sections, material composition and radionuclide-specific weighting coefficients, a set of particle-type- and energy-dependent fluence conversion coefficients is computed. These fluence conversion coefficients can then be used in a Monte Carlo transport code to perform on-line weighting and directly obtain the desired radiological characterization, either by using built-in multiplier features, such as in the PHITS code, or by writing a dedicated user routine, such as for the FLUKA code. The presented method has been validated against the standard event-based methods directly available in Monte Carlo transport codes.
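The on-line weighting idea reduces to a single folding step; here is a minimal numpy sketch under stated assumptions (all array names, shapes, and numbers are illustrative, not from the paper):

```python
import numpy as np

def fluence_conversion_coefficients(sigma, n_atoms, build_decay, w):
    """sigma[k, j]: production cross section of nuclide k in energy bin j;
    n_atoms: target atoms per unit mass; build_decay[k]: build-up/decay factor
    for the given irradiation profile and cool-down time; w[k]: radionuclide-
    specific weighting coefficient. Returns c[j] such that sum_j c[j] * phi[j]
    is the desired weighted sum of specific activities."""
    return n_atoms * (w * build_decay) @ sigma

sigma = np.array([[0.10, 0.40],            # toy: 2 nuclides x 2 energy bins
                  [0.20, 0.05]])
c = fluence_conversion_coefficients(sigma, 1.0e21,
                                    np.array([0.5, 0.9]),
                                    np.array([1.0, 10.0]))
print(c)  # energy-dependent multipliers for on-line scoring of the fluence
```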
Kamel Boulos, Maged N; Roudsari, Abdul V; Carson, Ewart R
2002-12-01
HealthCyberMap (HCM-http://healthcybermap.semanticweb.org) is a web-based service for healthcare professionals and librarians, patients and the public in general that aims at mapping parts of the health information resources in cyberspace in novel ways to improve their retrieval and navigation. HCM adopts a clinical metadata framework built upon a clinical coding ontology for the semantic indexing, classification and browsing of Internet health information resources. A resource metadata base holds information about selected resources. HCM then uses GIS (Geographic Information Systems) spatialization methods to generate interactive navigational cybermaps from the metadata base. These visual cybermaps are based on familiar medical metaphors. HCM cybermaps can be considered as semantically spatialized, ontology-based browsing views of the underlying resource metadata base. Using a clinical coding scheme as a metric for spatialization ('semantic distance') is unique to HCM and is very much suited for the semantic categorization and navigation of Internet health information resources. Clinical codes ensure reliable and unambiguous topical indexing of these resources. HCM also introduces a useful form of cyberspatial analysis for the detection of topical coverage gaps in the resource metadata base using choropleth (shaded) maps of human body systems.
Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder.
Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang
2018-07-01
Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed self-supervised video hashing (SSVH), which is able to capture the temporal nature of videos in an end-to-end learning to hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary auto-encoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with less computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world data sets show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the current best performance on the task of unsupervised video retrieval.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candel, A.; Kabel, A.; Lee, L.
Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the PIC (particle-in-cell) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.
Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder
NASA Astrophysics Data System (ADS)
Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang
2018-07-01
Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed Self-Supervised Video Hashing (SSVH), which is able to capture the temporal nature of videos in an end-to-end learning-to-hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos; and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary autoencoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with less computation than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world datasets (FCVID and YFCC) show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the current best performance on the task of unsupervised video retrieval.
Git as an Encrypted Distributed Version Control System
2015-03-01
options. The algorithm uses AES-256 counter mode with an IV derived from a SHA-1-HMAC hash (this is nearly identical to the GCM mode discussed earlier... built into the internal structure of Git. Every file in a Git repository is checksummed with a SHA-1 hash, a one-way function with arbitrarily long... implementation. Git-encrypt calls OpenSSL cryptography library command-line functions. The default cipher used is AES-256 Electronic Code Book (ECB), which is
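The per-file checksumming mentioned in the excerpt is straightforward to reproduce: Git hashes each file as a 'blob' object, prefixing the content with a type-and-length header before applying SHA-1. A short sketch:

```python
import hashlib

def git_blob_hash(data: bytes) -> str:
    """Compute the SHA-1 object id Git assigns to file content."""
    header = b"blob %d\x00" % len(data)    # type, size, NUL byte, then the content
    return hashlib.sha1(header + data).hexdigest()

print(git_blob_hash(b"hello\n"))  # matches `git hash-object` on the same bytes
```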
Direct Digital Control of HVAC (Heating, Ventilating, and Air Conditioning Equipment (User’s Guide)
1985-01-01
reset, load shedding, chiller optimization, VAV fan synchronization, and optimum start/stop. The prospective buyer of a DDC system should investigate... current and accurate drawings for a conventional, built-up control system such as that illustrated in Figure 4. Data on setpoints, reset schedules, and... are always available in the form of the computer program code (Figure 7). In addition to the control logic, setpoint and other data are readily
2007-03-31
iterating to the end-time step. 1.3 Code Verification. 1.3.1 Statement of the Problem: A square aluminum alloy plate (thickness = 1.02 mm, width and... plate. The electro-mechanical properties of the piezoelectric materials (APC850) are available from American Piezoceramics, Inc.... The piezoceramic... structural usage and provide an early indication of physical damage. Piezoelectric (PZT) based SHM systems are among the most widely used for active and
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, David; Klise, Katherine A.
The PyEPANET package is a set of commands for the Python programming language built to wrap the EPANET toolkit library commands without requiring the end user to program using the ctypes package. This package does not contain the EPANET code, nor does it implement the functions within the EPANET software; it requires the separately downloaded or compiled EPANET2 toolkit dynamic library (epanet.dll, libepanet.so, or epanet.dylib) and/or the EPANET-MSX dynamic library in order to function.
Four Frames Suffice. A Provisionary Model of Vision and Space
1982-09-01
1. Introduction. This paper is an attempt to specify a computationally and scientifically plausible model of how... abstract neural computing unit and a variety of constructions built of these units and their properties. All of this is part of the connectionist... chosen are intended to elucidate the major scientific problems in intermediate-level vision and would not be the best choice for a practical computer
Implementation of a Portable Personal EKG Signal Monitoring System
NASA Astrophysics Data System (ADS)
Tan, Tan-Hsu; Chang, Ching-Su; Chen, Yung-Fu; Lee, Cheng
This research develops a portable personal EKG signal monitoring system to help patients monitor their EKG signals instantly to avoid the occurrence of tragedies. The system is built with two main units: a signal processing unit and a monitoring and evaluation unit. The first unit consists of an EKG signal sensor, signal amplifier, digitization circuit, and related control circuits. The second unit is a software tool developed on an embedded Linux platform (called CSA). Experimental results indicate that the proposed system has practical potential for users in health monitoring. It is demonstrated to be more convenient and more portable than conventional PC-based EKG signal monitoring systems. Furthermore, all the application units embedded in the system are built with open source code, so no license fee is required for operating systems and authorized applications. Thus, the building cost is much lower than that of traditional systems.
Performance Characterization of an xy-Stage Applied to Micrometric Laser Direct Writing Lithography.
Jaramillo, Juan; Zarzycki, Artur; Galeano, July; Sandoz, Patrick
2017-01-31
This article concerns the characterization of the stability and performance of a motorized stage used in laser direct writing lithography. The system was built from commercial components and commanded by G-code. Measurements use a pseudo-periodic-pattern (PPP) observed by a camera, and image processing is based on Fourier transform and phase measurement methods. The results show that the built system has a stability against vibrations characterized by peak-valley deviations of 65 nm and 26 nm in the x and y directions, respectively, with a standard deviation of 10 nm in both directions. When the xy-stage is in movement, it works with a resolution of 0.36 μm, which is an acceptable value for most research and development (R and D) microtechnology developments in which the typical feature size used is in the micrometer range.
Turbine Blade and Endwall Heat Transfer Measured in NASA Glenn's Transonic Turbine Blade Cascade
NASA Technical Reports Server (NTRS)
Giel, Paul W.
2000-01-01
Higher operating temperatures increase the efficiency of aircraft gas turbine engines, but can also degrade internal components. High-pressure turbine blades just downstream of the combustor are particularly susceptible to overheating. Computational fluid dynamics (CFD) computer programs can predict the flow around the blades so that potential hot spots can be identified and appropriate cooling schemes can be designed. Various blade and cooling schemes can be examined computationally before any hardware is built, thus saving time and effort. Often though, the accuracy of these programs has been found to be inadequate for predicting heat transfer. Code and model developers need highly detailed aerodynamic and heat transfer data to validate and improve their analyses. The Transonic Turbine Blade Cascade was built at the NASA Glenn Research Center at Lewis Field to help satisfy the need for this type of data.
Performance Characterization of an xy-Stage Applied to Micrometric Laser Direct Writing Lithography
Jaramillo, Juan; Zarzycki, Artur; Galeano, July; Sandoz, Patrick
2017-01-01
This article concerns the characterization of the stability and performance of a motorized stage used in laser direct writing lithography. The system was built from commercial components and commanded by G-code. Measurements use a pseudo-periodic-pattern (PPP) observed by a camera, and image processing is based on Fourier transform and phase measurement methods. The results show that the built system has a stability against vibrations characterized by peak-valley deviations of 65 nm and 26 nm in the x and y directions, respectively, with a standard deviation of 10 nm in both directions. When the xy-stage is in movement, it works with a resolution of 0.36 µm, which is an acceptable value for most research and development (R and D) microtechnology developments in which the typical feature size used is in the micrometer range. PMID:28146126
DTS: Building custom, intelligent schedulers
NASA Technical Reports Server (NTRS)
Hansson, Othar; Mayer, Andrew
1994-01-01
DTS is a decision-theoretic scheduler, built on top of a flexible toolkit -- this paper focuses on how the toolkit might be reused in future NASA mission schedulers. The toolkit includes a user-customizable scheduling interface, and a 'Just-For-You' optimization engine. The customizable interface is built on two metaphors: objects and dynamic graphs. Objects help to structure problem specifications and related data, while dynamic graphs simplify the specification of graphical schedule editors (such as Gantt charts). The interface can be used with any 'back-end' scheduler, through dynamically-loaded code, interprocess communication, or a shared database. The 'Just-For-You' optimization engine includes user-specific utility functions, automatically compiled heuristic evaluations, and a postprocessing facility for enforcing scheduling policies. The optimization engine is based on BPS, the Bayesian Problem-Solver (1,2), which introduced a similar approach to solving single-agent and adversarial graph search problems.
ELEFANT: a user-friendly multipurpose geodynamics code
NASA Astrophysics Data System (ADS)
Thieulot, C.
2014-07-01
A new finite element code for the solution of the Stokes and heat transport equations is presented. It has purposely been designed to address geological flow problems in two and three dimensions at crustal and lithospheric scales. The code relies on the Marker-in-Cell technique, and Lagrangian markers are used to track materials in the simulation domain, which allows recording of the integrated history of deformation; their (number) density is variable and dynamically adapted. A variety of rheologies has been implemented, including nonlinear thermally activated dislocation and diffusion creep and brittle (or plastic) frictional models. The code is built on the Arbitrary Lagrangian Eulerian kinematic description: the computational grid deforms vertically and allows for a true free surface while the computational domain remains of constant width in the horizontal direction. The solution to the large system of algebraic equations resulting from the finite element discretisation and linearisation of the set of coupled partial differential equations to be solved is obtained by means of the efficient parallel direct solver MUMPS, whose performance is thoroughly tested, or by means of the WISMP and AGMG iterative solvers. The code accuracy is assessed by means of many geodynamically relevant benchmark experiments which highlight specific features or algorithms, e.g., the implementation of the free surface stabilisation algorithm, the (visco-)plastic rheology implementation, the temperature advection, and the capacity of the code to handle large viscosity contrasts. A two-dimensional application to salt tectonics, presented as a case study, illustrates the potential of the code to model large-scale, high-resolution, thermo-mechanically coupled free surface flows.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, T; Lin, H; Xu, X
Purpose: To develop a nuclear medicine dosimetry module for the GPU-based Monte Carlo code ARCHER. Methods: We have developed a nuclear medicine dosimetry module for the fast Monte Carlo code ARCHER. The coupled electron-photon Monte Carlo transport kernel included in ARCHER is built upon the Dose Planning Method code (DPM). The developed module manages the radioactive decay simulation by consecutively tracking several types of radiation on a per-disintegration basis using the statistical sampling method. Optimization techniques such as persistent threads and prefetching are studied and implemented. The developed module is verified against the VIDA code, which is based on the Geant4 toolkit and has previously been verified against OLINDA/EXM. A voxelized geometry is used in the preliminary test: a sphere made of ICRP soft tissue is surrounded by a box filled with water. A uniform activity distribution of I-131 is assumed in the sphere. Results: The self-absorption dose factors (mGy/MBqs) of the sphere with varying diameters are calculated by ARCHER and VIDA, respectively. ARCHER's results are in agreement with VIDA's, which were obtained from a previous publication. VIDA takes hours of CPU time to finish the computation, while ARCHER takes 4.31 seconds for the 12.4-cm uniform activity sphere case. For a fairer CPU-GPU comparison, more effort will be made to eliminate the algorithmic differences. Conclusion: The coupled electron-photon Monte Carlo code ARCHER has been extended to radioactive decay simulation for nuclear medicine dosimetry. The developed code exhibits good performance in our preliminary test. The GPU-based Monte Carlo code is developed with grant support from the National Institute of Biomedical Imaging and Bioengineering through an R01 grant (R01EB015478).
NASA Astrophysics Data System (ADS)
Olson, Richard F.
2013-05-01
Rendering of point scatterer based radar scenes for millimeter wave (mmW) seeker tests in real-time hardware-in-the-loop (HWIL) scene generation requires efficient algorithms and vector-friendly computer architectures for complex signal synthesis. New processor technology from Intel implements an extended 256-bit vector SIMD instruction set (AVX, AVX2) in a multi-core CPU design providing peak execution rates of hundreds of GigaFLOPS (GFLOPS) on one chip. Real world mmW scene generation code can approach peak SIMD execution rates only after careful algorithm and source code design. An effective software design will maintain high computing intensity emphasizing register-to-register SIMD arithmetic operations over data movement between CPU caches or off-chip memories. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) applied two basic parallel coding methods to assess new 256-bit SIMD multi-core architectures for mmW scene generation in HWIL. These include use of POSIX threads built on vector library functions and more portable, high-level parallel code based on compiler technology (e.g. OpenMP pragmas and SIMD autovectorization). Since CPU technology is rapidly advancing toward high processor core counts and TeraFLOPS peak SIMD execution rates, it is imperative that coding methods be identified which produce efficient and maintainable parallel code. This paper describes the algorithms used in point scatterer target model rendering, the parallelization of those algorithms, and the execution performance achieved on an AVX multi-core machine using the two basic parallel coding methods. The paper concludes with estimates for scale-up performance on upcoming multi-core technology.
NASA Astrophysics Data System (ADS)
Smith, J. A.; Peter, D. B.; Tromp, J.; Komatitsch, D.; Lefebvre, M. P.
2015-12-01
We present the SPECFEM3D_Cartesian and SPECFEM3D_GLOBE open-source codes, high-performance numerical wave solvers simulating seismic wave propagation for local-, regional-, and global-scale applications. These codes are suitable for both forward propagation in complex media and tomographic imaging. Both solvers compute highly accurate seismic wave fields using the continuous Galerkin spectral-element method on unstructured meshes. Lateral variations in compressional- and shear-wave speeds and density, as well as 3D attenuation (Q) models, topography and fluid-solid coupling, are all readily included in both codes. For global simulations, effects due to rotation, ellipticity, the oceans, 3D crustal models, and self-gravitation are additionally included. Both packages provide forward and adjoint functionality suitable for adjoint tomography on high-performance computing architectures. We highlight the most recent release of the global version, which includes improved performance, simultaneous MPI runs, OpenCL and CUDA support via an automatic source-to-source transformation library (BOAST), parallel I/O readers and writers for databases using ADIOS, and seismograms using the recently developed Adaptable Seismic Data Format (ASDF) with built-in provenance. This makes our spectral-element solvers current state-of-the-art, open-source community codes for high-performance seismic wave propagation on arbitrarily complex 3D models. Together with these solvers, we provide full-waveform inversion tools to image the Earth's interior at unprecedented resolution.
Preserving privacy of online digital physiological signals using blind and reversible steganography.
Shiu, Hung-Jr; Lin, Bor-Sing; Huang, Chien-Hung; Chiang, Pei-Ying; Lei, Chin-Laung
2017-11-01
Physiological signals such as electrocardiograms (ECG) and electromyograms (EMG) are widely used to diagnose diseases. Presently, the Internet offers numerous cloud storage services which enable digital physiological signals to be uploaded for convenient access and use. Numerous online databases of medical signals have been built, and the data in them must be processed in a manner that preserves patients' confidentiality. This work adopts a reversible error-correcting-coding strategy to transform digital physiological signals into a new bit-stream, using a Hamming-code matrix to embed secret messages or private information. The shared keys are the matrix and the version of the Hamming code. An online open database, the MIT-BIH arrhythmia database, was used to test the proposed algorithms. The time complexity, capacity and robustness are evaluated, and comparisons with related work are presented across several evaluation criteria. This work proposes a reversible, low-payload steganographic scheme for preserving the privacy of physiological signals. An (n, m)-Hamming code is used to insert (n - m) secret bits into n bits of a cover signal. The number of embedded bits per modification is higher than in comparable methods, the computation is efficient, and the scheme is secure. Unlike other Hamming-code based schemes, the proposed scheme is both reversible and blind.
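The (n, m)-Hamming embedding described is classical matrix embedding: the n - m secret bits become the syndrome of the n stego bits, so at most one cover bit is flipped per block. The Python sketch below shows the (7, 4) case; it is only the embedding core, and the reversibility bookkeeping and key handling of the paper are omitted.

    import numpy as np

    # Parity-check matrix of the (7,4) Hamming code; column j (1-based)
    # is the binary representation of j.
    H = np.array([[(j >> b) & 1 for j in range(1, 8)] for b in range(3)])

    def embed(cover_bits, secret_bits):
        """Embed 3 secret bits into 7 cover bits, flipping at most one bit."""
        cover = np.array(cover_bits) % 2
        e = (H.dot(cover) % 2) ^ np.array(secret_bits)
        pos = e[0] + 2 * e[1] + 4 * e[2]      # 0 means no change is needed
        stego = cover.copy()
        if pos:
            stego[pos - 1] ^= 1               # flip the bit whose H column equals e
        return stego

    def extract(stego_bits):
        """Recover the 3 secret bits: they are simply the syndrome."""
        return H.dot(np.array(stego_bits)) % 2

    cover, secret = [1, 0, 1, 1, 0, 0, 1], [1, 0, 1]
    assert list(extract(embed(cover, secret))) == secret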
A&M. TAN607 floor plan for first floor. Shows stepped door ...
A&M. TAN-607 floor plan for first floor. Shows stepped door plug design from hot shop into special services cubicle, cubicle windows, and other details. This drawing was re-drawn to show as-built conditions in 1985. Ralph M. Parsons 902-3-ANP-607-A 99. Date of original: January 1955. Approved by INEEL Classification Office for public release. INEEL index code no. 034-0607-00-693-106751 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
Final Technical Report - Center for Technology for Advanced Scientific Component Software (TASCS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sussman, Alan
2014-10-21
This is a final technical report for the University of Maryland work in the SciDAC Center for Technology for Advanced Scientific Component Software (TASCS). The Maryland work focused on software tools for coupling parallel software components built using the Common Component Architecture (CCA) APIs. Those tools are based on the Maryland InterComm software framework that has been used in multiple computational science applications to build large-scale simulations of complex physical systems that employ multiple separately developed codes.
Electric power from vertical-axis wind turbines
NASA Astrophysics Data System (ADS)
Touryan, K. J.; Strickland, J. H.; Berg, D. E.
1987-12-01
Significant advancements have occurred in vertical axis wind turbine (VAWT) technology for electrical power generation over the last decade; in particular, well-proven aerodynamic and structural analysis codes have been developed for Darrieus-principle wind turbines. Machines of this type have been built by at least three companies, and about 550 units of various designs are currently in service in California wind farms. Attention is presently given to the aerodynamic characteristics, structural dynamics, systems engineering, and energy market-penetration aspects of VAWTs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wojahn, Christopher K.
2015-10-20
This HDL code (hereafter referred to as "software") implements circuitry in Xilinx Virtex-5QV Field Programmable Gate Array (FPGA) hardware. This software allows the device to self-check the consistency of its own configuration memory for radiation-induced errors. The software then provides the capability to correct any single-bit errors detected in the memory using the device's inherent circuitry, or reload corrupted memory frames when larger errors occur that cannot be corrected with the device's built-in error correction and detection scheme.
RLL-1: A Representation Language Language. Supplement. Details of RLL-1
1980-10-01
should know are listed first, organized by topic. These top-level functions place essentially no restriction on the nature of the knowledge base on which...variables. The various functions and variables which comprise CORLL (see [SmithD]) may be used as well. (Recall RLL-1 is built on this unit-management system)...rather than the compiled code; and asking where to store this new function. (This is a simple database management facility.) I told it to store this
Urzhumtsev, Alexandre; Afonine, Pavel V.; Van Benschoten, Andrew H.; ...
2016-08-31
Researcher feedback has indicated that in Urzhumtsev et al. [(2015), Acta Cryst. D71, 1668-1683] clarification is required of key parts of the algorithm for interpreting TLS matrices in terms of elemental atomic motions and corresponding ensembles of atomic models. It has also been brought to the attention of the authors that an incorrect PDB code was reported for one of the test models. These issues are addressed in this article.
Shear Banding in a Partially Molten Mantle
NASA Astrophysics Data System (ADS)
Alisic, L.; Rudge, J. F.; Wells, G.; Katz, R. F.; Rhebergen, S.
2013-12-01
We investigate the nonlinear behaviour of partially molten mantle material under shear. Numerical models of compaction and advection-diffusion of a porous matrix with a spherical inclusion are built using the automated code generation package FEniCS. The time evolution of melt distribution with increasing shear in these models is compared to laboratory experiments that show high-porosity shear banding in the medium and pressure shadows around the inclusion. We focus on understanding the interaction between these shear bands and pressure shadows as a function of rheological parameters.
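For readers unfamiliar with FEniCS, the package generates the low-level finite-element code from a near-mathematical statement of the weak form. The toy problem below (a Poisson solve written against the legacy dolfin interface) is meant only to show that style of specification; the authors' coupled compaction and advection-diffusion equations are considerably more involved.

    from dolfin import *  # legacy FEniCS (dolfin) interface, an assumption here

    # Toy stand-in: -div(grad(u)) = 1 on the unit square, u = 0 on the boundary.
    mesh = UnitSquareMesh(64, 64)
    V = FunctionSpace(mesh, "P", 1)
    u, v = TrialFunction(V), TestFunction(V)
    a = dot(grad(u), grad(v)) * dx     # bilinear form, written as in the weak form
    L = Constant(1.0) * v * dx         # linear form
    bc = DirichletBC(V, Constant(0.0), "on_boundary")
    uh = Function(V)
    solve(a == L, uh, bc)              # FEniCS generates and runs the assembly code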
DOE Office of Scientific and Technical Information (OSTI.GOV)
none,
This builder was honored with an Affordable Builder award in the 2014 Housing Innovation Awards for the first retrofit home certified to the DOE Zero Energy Ready Home requirements. The 60-year-old, three-bedroom ranch home is expected to save its homeowner more than $1,000 a year in utility bills compared to a home built to the current 2009 International Energy Conservation Code.
Frapid: achieving full automation of FRAP for chemical probe validation
Yapp, Clarence; Rogers, Catherine; Savitsky, Pavel; Philpott, Martin; Müller, Susanne
2016-01-01
Fluorescence Recovery After Photobleaching (FRAP) is an established method for validating chemical probes against the chromatin-reading bromodomains, but so far it has required constant human supervision. Here, we present Frapid, an automated open-source implementation of FRAP that fully handles cell identification through fuzzy-logic analysis, drug dispensing with a custom-built fluid handler, image acquisition and analysis, and reporting. We successfully tested Frapid on 3 bromodomains, as well as, for the first time, on spindlin1 (SPIN1), a methyl-lysine binder. PMID:26977352
DOE Office of Scientific and Technical Information (OSTI.GOV)
This software is an iOS (Apple) Augmented Reality (AR) application that runs on the iPhone and iPad. It is designed to scan a photograph or graphic and "play" an associated video. This release, SNLSimMagic, was built using the Wikitude Augmented Reality (AR) software development kit (SDK) integrated into an Apple iOS SDK application, together with the Cordova libraries. These codes enable the generation of runtime targets using cloud recognition and developer-defined target features, which are then accessed by means of a custom application.
Application of Advanced Multi-Core Processor Technologies to Oceanographic Research
2014-09-30
Jordan Stanway are taking on the work of analyzing their code, and we are working on the Robot Operating System (ROS) and MOOS-DB systems to evaluate...Linux/GNU operating system that should reduce the time required to build the kernel and userspace significantly. This part of the work is vital to...the platform to be used not only as a service, but also as a private deployable package. As much as possible, this system was built using operating
JSC Wireless Sensor Network Update
NASA Technical Reports Server (NTRS)
Wagner, Robert
2010-01-01
Sensor nodes are composed of three basic components:
- radio module: a COTS radio module implementing a standardized WSN protocol; treated as a WSN modem by the main board
- main board: contains the application processor (TI MSP430 microcontroller), memory and power supply; responsible for sensor data acquisition, pre-processing and task scheduling; re-used in every application with a growing library of embedded C code
- sensor card: contains application-specific sensors, data-conditioning hardware and any advanced hardware not built into the main board (DSPs, faster A/D, etc.); requires (re-)development for each application.
Introducing a New Software for Geodetic Analysis
NASA Astrophysics Data System (ADS)
Hjelle, G. A.; Dähnn, M.; Fausk, I.; Kirkvik, A. S.; Mysen, E.
2016-12-01
At the Norwegian Mapping Authority, we are currently developing Where, a new software for geodetic analysis. Where is built on our experiences with the Geosat software, and will be able to analyse and combine data from VLBI, SLR, GNSS and DORIS. The software is mainly written in Python, which has proved very fruitful. The code is quick to write and the architecture is easily extendable and maintainable. The Python community provides a rich eco-system of tools for doing data analysis, including effective data storage and powerful visualization. Python interfaces well with other languages, so that we can easily reuse existing, well-tested code like the SOFA and IERS libraries. This presentation will show some of the current capabilities of Where, including benchmarks against other software packages. In addition, we will report on some simple investigations we have done using the software, and outline our plans for further progress.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slaughter, D.
1985-03-01
A computer code is described which estimates the energy spectrum, or "line shape", for the charged particles and γ-rays produced by the fusion of low-Z ions in a hot plasma. The simulation has several built-in ion velocity distributions characteristic of heated plasmas, and it also accepts arbitrary speed and angular distributions, although they must all be symmetric about the z-axis. An energy spectrum of one of the reaction products (ion, neutron, or γ-ray) is calculated at one angle with respect to the symmetry axis. The results are shown in tabular form and plotted graphically, and the moments of the spectrum to order ten are calculated both with respect to the origin and with respect to the mean.
Design sensitivity analysis with Applicon IFAD using the adjoint variable method
NASA Technical Reports Server (NTRS)
Frederick, Marjorie C.; Choi, Kyung K.
1984-01-01
A numerical method is presented to implement structural design sensitivity analysis using the versatility and convenience of an existing finite element structural analysis program and the theoretical foundation of structural design sensitivity analysis. Conventional design variables, such as thickness and cross-sectional areas, are considered. Structural performance functionals considered include compliance, displacement, and stress. It is shown that calculations can be carried out outside existing finite element codes, using postprocessing data only. That is, design sensitivity analysis software does not have to be embedded in an existing finite element code. The finite element structural analysis program used in the implementation presented is IFAD. Feasibility of the method is shown through analysis of several problems, including built-up structures. Accurate design sensitivity results are obtained without the uncertainty of numerical accuracy associated with selection of a finite difference perturbation.
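For a linear static model, the adjoint variable method referred to above is the textbook calculation; stated in our notation (not the paper's), for a stiffness system K(b) u = f(b) and a performance functional ψ(u, b):

    \[ K^{\mathsf{T}} \lambda = \left( \frac{\partial \psi}{\partial u} \right)^{\mathsf{T}} \qquad \text{(one adjoint solve per functional)} \]
    \[ \frac{d\psi}{db} = \frac{\partial \psi}{\partial b} + \lambda^{\mathsf{T}} \left( \frac{\partial f}{\partial b} - \frac{\partial K}{\partial b}\, u \right) \]

Because the derivative terms can be assembled from element-level quantities, this is consistent with the abstract's point that the sensitivity calculation can live entirely outside the finite element code, operating on postprocessing data.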
Force-free electrodynamics in dynamical curved spacetimes
NASA Astrophysics Data System (ADS)
McWilliams, Sean
2015-04-01
We present results from our study of force-free electrodynamics in curved spacetimes. Specifically, we present several improvements to what has become the established set of evolution equations, and we apply these to study the nonlinear stability of analytically known force-free solutions for the first time. We implement our method in a new pseudo-spectral code built on top of the SpEC code for evolving dynamic spacetimes. We then revisit these known solutions and attempt to clarify some interesting properties that render them analytically tractable. Finally, we preview some new work that similarly revisits the established approach to solving another problem in numerical relativity: the post-merger recoil from asymmetric gravitational-wave emission. These new results may have significant implications for the parameter dependence of recoils, and consequently for the statistical expectations for recoil velocities of merged systems.
Implementing Nepal's national building code—A case study in patience and persistence
Arendt, Lucy; Hortacsu, Ayse; Jaiswal, Kishor; Bevington, John; Shrestha, Surya; Lanning, Forrest; Mentor-William, Garmalia; Naeem, Ghazala; Thibert, Kate
2017-01-01
The April 2015 Gorkha Nepal earthquake revealed the relative effectiveness of the Nepal Standard, or national building code (NBC), and irregular compliance with it in different parts of Nepal. Much of the damage to more than half a million of Nepal's residential structures may be attributed to the prevalence of owner-built or owner-supervised construction and the lack of owner and builder responsiveness to seismic risk and training in the appropriate means of complying with the NBC. To explain these circumstances, we review the protracted implementation of the NBC and the role played by one organization, the National Society for Earthquake Technology-Nepal (NSET), in the NBC's implementation. We also share observations on building code compliance made by individuals in Nepal participating in workshops led by the Earthquake Engineering Research Institute's 2014 class of Housner Fellows.
From chemical metabolism to life: the origin of the genetic coding process
2017-01-01
Looking for origins is so much rooted in ideology that most studies reflect opinions that fail to explore the first realistic scenarios. To be sure, trying to understand the origins of life should be based on what we know of current chemistry in the solar system and beyond. There, amino acids and very small compounds such as carbon dioxide, dihydrogen or dinitrogen and their immediate derivatives are ubiquitous. Surface-based chemical metabolism using these basic chemicals is the most likely beginning, in which amino acids, coenzymes and phosphate-based small carbon molecules were built up. Nucleotides, and of course RNAs, must have come into being much later. As a consequence, the key question in accounting for life is to understand how a chemical metabolism that began with amino acids progressively shaped itself into a coding process involving RNAs. Here I explore the role of building up complementarity rules as the first information-based process that allowed the genetic code to emerge, after RNAs were substituted for surfaces to carry over the basic metabolic pathways that drive the pursuit of life. PMID:28684991
Schuliar, Yves; Chapenoire, Stéphane; Miras, Alain; Contrand, Benjamin; Lagarde, Emmanuel
2014-09-01
For investigation of air disasters, crash reconstruction is obtained using data from flight recorders, physical evidence from the site, and the injury patterns of the victims. This article describes a new software package, the Crash Injury Pattern Assessment Tool (CIPAT), to code and analyze injuries. The coding system was derived from the Abbreviated Injury Score (AIS). Scores were created corresponding to the amount of energy required to cause the trauma (ER), and the software was developed to compute summary variables related to the position (assigned seat) of victims. A dataset was built from the postmortem examination of 154/228 victims of the Air France disaster (June 2009), recovered from the Atlantic Ocean after a complex and difficult task at a depth of 12,790 ft. The use of CIPAT made it possible to specify the causes and circumstances of death, and confirmed the major dynamic parameters of the crash event established by the French Civil Aviation Safety Investigation Authority.
Next-generation acceleration and code optimization for light transport in turbid media using GPUs
Alerstam, Erik; Lo, William Chun Yip; Han, Tianyi David; Rose, Jonathan; Andersson-Engels, Stefan; Lilge, Lothar
2010-01-01
A highly optimized Monte Carlo (MC) code package for simulating light transport is developed on the latest graphics processing unit (GPU) built for general-purpose computing from NVIDIA: the Fermi GPU. In biomedical optics, the MC method is the gold-standard approach for simulating light transport in biological tissue, both for its accuracy and for its flexibility in modelling realistic, heterogeneous tissue geometry in 3-D. However, the widespread use of MC simulations in inverse problems, such as treatment planning for photodynamic therapy (PDT), is limited by their long computation time. Despite its parallel nature, optimizing MC code on the GPU has been shown to be a challenge, particularly when the sharing of simulation result matrices among many parallel threads demands the frequent use of atomic instructions to access the slow GPU global memory. This paper proposes an optimization scheme that utilizes the fast shared memory to resolve the performance bottleneck caused by atomic access, and discusses numerous other optimization techniques needed to harness the full potential of the GPU. Using these techniques, a widely accepted MC code package in biophotonics, called MCML, was successfully accelerated on a Fermi GPU by approximately 600x compared to a state-of-the-art Intel Core i7 CPU. A skin model consisting of 7 layers was used as the standard simulation geometry. To demonstrate the possibility of GPU cluster computing, the same GPU code was executed on four GPUs, showing a linear improvement in performance with an increasing number of GPUs. The GPU-based MCML code package, named GPU-MCML, is compatible with a wide range of graphics cards and is released as open-source software in two versions: an optimized version tuned for high performance and a simplified version for beginners (http://code.google.com/p/gpumcml). PMID:21258498
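The performance bottleneck the paper addresses is the scatter-add of photon weight into a shared result matrix. The numpy sketch below shows only the algorithmic skeleton (isotropic scattering, a radial tally, invented optical coefficients, and a fixed step count instead of MCML's roulette termination); np.add.at plays the role that atomic adds play on the GPU, and the paper's contribution is to stage those adds through fast shared memory.

    import numpy as np

    rng = np.random.default_rng(0)
    mu_a, mu_s = 0.1, 10.0          # absorption/scattering coefficients (1/mm), invented
    mu_t = mu_a + mu_s
    n_photons, n_bins, dr = 10_000, 200, 0.05
    A = np.zeros(n_bins)            # shared radial absorption tally (the result matrix)

    pos = np.zeros((n_photons, 3))
    dirn = np.tile([0.0, 0.0, 1.0], (n_photons, 1))
    w = np.ones(n_photons)

    for _ in range(30):             # fixed step count for brevity
        step = -np.log(rng.random(n_photons)) / mu_t   # sampled free path length
        pos += dirn * step[:, None]
        dw = w * (mu_a / mu_t)      # weight deposited at each interaction site
        r = np.hypot(pos[:, 0], pos[:, 1])
        idx = np.minimum((r / dr).astype(int), n_bins - 1)
        np.add.at(A, idx, dw)       # scatter-add; on the GPU this is the atomic add
        w -= dw
        # isotropic rescattering (MCML proper uses the Henyey-Greenstein phase function)
        cos_t = 2.0 * rng.random(n_photons) - 1.0
        phi = 2.0 * np.pi * rng.random(n_photons)
        sin_t = np.sqrt(1.0 - cos_t**2)
        dirn = np.column_stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])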
Perez, Claudio I; Chansangpetch, Sunee; Thai, Andy; Nguyen, Anh-Hien; Nguyen, Anwell; Mora, Marta; Nguyen, Ngoc; Lin, Shan C
2018-06-05
To evaluate the distribution and the color probability codes of the peripapillary retinal nerve fiber layer (RNFL) and macular ganglion cell-inner plexiform layer (GCIPL) thickness in a healthy Vietnamese population, and to compare them with the original color codes provided by the Cirrus spectral-domain OCT. Cross-sectional study. We recruited non-glaucomatous Vietnamese subjects and constructed a normative database for peripapillary RNFL and macular GCIPL thickness. The probability color codes for each decade of age were calculated. We evaluated the agreement, using the kappa coefficient (κ), between OCT color probability codes based on the Cirrus built-in original normative database and on the Vietnamese normative database. 149 eyes of 149 subjects were included. The mean age of enrollees was 60.77 (±11.09) years, with a mean spherical equivalent of +0.65 (±1.58) D and mean axial length of 23.4 (±0.87) mm. Average RNFL thickness was 97.86 (±9.19) microns and average macular GCIPL was 82.49 (±6.09) microns. Agreement between the original and adjusted normative databases for RNFL was fair for the average and inferior quadrant (κ=0.25 and 0.2, respectively) and good for the other quadrants (range: κ=0.63-0.73). For macular GCIPL, κ agreement ranged between 0.39 and 0.69. After adjusting with the normative Vietnamese database, the percentage of yellow and red color codes increased significantly for peripapillary RNFL thickness. The Vietnamese population has a thicker RNFL in comparison with the Cirrus normative database, which leads to poor color-code agreement in the average and inferior quadrants between the original and adjusted databases. These findings should encourage the creation of a peripapillary RNFL normative database for each ethnicity.
Advanced Pellet-Cladding Interaction Modeling using the US DOE CASL Fuel Performance Code: Peregrine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montgomery, Robert O.; Capps, Nathan A.; Sunderland, Dion J.
The US DOE's Consortium for Advanced Simulation of LWRs (CASL) program has undertaken an effort to enhance and develop modeling and simulation tools for a virtual reactor application, including high-fidelity neutronics, fluid flow/thermal hydraulics, and fuel and material behavior. The fuel performance analysis efforts aim to provide 3-dimensional capabilities for single and multiple rods to assess safety margins and the impact of plant operation and fuel rod design on the fuel thermo-mechanical-chemical behavior, including Pellet-Cladding Interaction (PCI) failures and CRUD-Induced Localized Corrosion (CILC) failures in PWRs. [1-3] The CASL fuel performance code, Peregrine, is an engineering-scale code that is built upon the MOOSE/ELK/FOX computational FEM framework, which is also common to the fuel modeling framework BISON [4,5]. Peregrine uses both 2-D and 3-D geometric fuel rod representations and contains a material-properties and fuel-behavior model library for the UO2 and Zircaloy system common to PWR fuel, derived from both open literature sources and the FALCON code [6]. The primary purpose of Peregrine is to accurately calculate the thermal, mechanical, and chemical processes active throughout a single fuel rod during operation in a reactor, for both steady-state and off-normal conditions.
NCAD, a database integrating the intrinsic conformational preferences of non-coded amino acids
Revilla-López, Guillem; Torras, Juan; Curcó, David; Casanovas, Jordi; Calaza, M. Isabel; Zanuy, David; Jiménez, Ana I.; Cativiela, Carlos; Nussinov, Ruth; Grodzinski, Piotr; Alemán, Carlos
2010-01-01
Peptides and proteins find an ever-increasing number of applications in the biomedical and materials engineering fields. The use of non-proteinogenic amino acids endowed with diverse physicochemical and structural features opens the possibility to design proteins and peptides with novel properties and functions. Moreover, non-proteinogenic residues are particularly useful to control the three-dimensional arrangement of peptidic chains, which is a crucial issue for most applications. However, information regarding such amino acids (also called non-coded, non-canonical or non-standard) is usually scattered among publications specialized in quite diverse fields, as well as in patents. Making all these data useful to the scientific community requires new tools and a framework for their assembly and coherent organization. We have successfully compiled, organized and built a database (NCAD, Non-Coded Amino acids Database) containing information about the intrinsic conformational preferences of non-proteinogenic residues determined by quantum mechanical calculations, as well as bibliographic information about their synthesis, physical and spectroscopic characterization, experimentally established conformational propensities, and applications. The architecture of the database is presented in this work, together with the first family of non-coded residues included, namely α-tetrasubstituted α-amino acids. Furthermore, the usefulness of NCAD is demonstrated through a test-case application example. PMID:20455555
Fortran graphics routines for the Macintosh
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shore, B.W.
1992-06-01
The Language Systems MPW Fortran is a popular Fortran compiler for the Macintosh. Unfortunately, it does not have any built-in calls to graphics routines (such as are available with Graflib on the NLTSS), so there is no simple way to make x-y plots from calls within Fortran. Instead, a file of data must be created and a commercial plotting routine (such as IGOR or KALEIDAGRAPH) or a spreadsheet with graphics (such as WINGZ) must be applied to post-process the data. The Macintosh does have many built-in calls (to the Macintosh Toolbox) that allow drawing shapes and lines with QuickDraw, but these are not designed for plotting functions and are difficult to learn to use. This work outlines some Fortran routines that can be called from LS Fortran to make the necessary calls to the Macintosh Toolbox to create simple two-dimensional plots or contour plots. The source code DEMOGRAF.F shows how these routines may be used. DEMOGRAF.F simply demonstrates some Fortran subroutines that can be called with Language Systems MPW Fortran on the Macintosh to plot arrays of numbers. The subroutines essentially mimic the functionality that has been available at LTSS and NLTSS and UNICOS at LLNL. The graphics primitives are kept in four separate files, each containing several subroutines. The subroutines are compiled and stored in a library file, LIBgraf.o. A Makefile is used to link this library to the source code.
Physics and engineering design of the accelerator and electron dump for SPIDER
NASA Astrophysics Data System (ADS)
Agostinetti, P.; Antoni, V.; Cavenago, M.; Chitarin, G.; Marconato, N.; Marcuzzi, D.; Pilan, N.; Serianni, G.; Sonato, P.; Veltri, P.; Zaccaria, P.
2011-06-01
The ITER Neutral Beam Test Facility (PRIMA) is planned to be built at Consorzio RFX (Padova, Italy). PRIMA includes two experimental devices: a full-size ion source with low-voltage extraction, called SPIDER, and a full-size neutral beam injector at full beam power, called MITICA. SPIDER is the first experimental device to be built and operated, aiming at testing the extraction of a negative ion beam (made of H- and, at a later stage, D- ions) from an ITER-size ion source. The main requirements of this experiment are an H-/D- extracted current density larger than 355/285 A m^-2, an energy of 100 keV and a pulse duration of up to 3600 s. Several analytical and numerical codes have been used in the design optimization process; some are commercial codes, while others were developed ad hoc. The codes are used to simulate the electric fields (SLACCAD, BYPO, OPERA), the magnetic fields (OPERA, ANSYS, COMSOL, PERMAG), the beam aiming (OPERA, IRES), the pressure inside the accelerator (CONDUCT, STRIP), the stripping reactions and transmitted/dumped power (EAMCC), the operating temperature, stress and deformations (ALIGN, ANSYS), and the heat loads on the electron dump (ED) (EDAC, BACKSCAT). An integrated approach, taking physics and engineering aspects into consideration at the same time, has been adopted all along the design process. Particular care has been taken in investigating the many interactions between the physics and engineering aspects of the experiment. According to the 'robust design' philosophy, a comprehensive set of sensitivity analyses was performed in order to investigate the influence of the design choices on the most relevant operating parameters. The design of the SPIDER accelerator, described here, has been developed to satisfy with reasonable margin all the requirements given by ITER, from both the physics and engineering points of view. In particular, a new approach to the compensation of unwanted beam deflections inside the accelerator and a new concept for the ED have been introduced.
Mobile Monitoring and Embedded Control System for Factory Environment
Lian, Kuang-Yow; Hsiao, Sung-Jung; Sung, Wen-Tsai
2013-01-01
This paper proposes a real-time method to carry out the monitoring of factory-zone temperatures, humidity and air quality using smart phones. At the same time, the system detects possible flames, and analyzes and monitors the electrical load. The monitoring also includes detecting the vibrations of operating machinery in the factory area. The research proposes integrating a ZigBee and Wi-Fi protocol intelligent monitoring system within the entire plant framework. The sensors on the factory site deliver messages and real-time sensing data to an integrated embedded system via the ZigBee protocol. The integrated embedded system is built on the open-source 32-bit ARM (Advanced RISC Machine) core Arduino Due module, where the network control codes are built in for the ARM chipset integrated controller. The intelligent integrated controller is able to instantly provide numerical analysis results according to the data received from the ZigBee sensors. An Android app and a web-based platform are used to show measurement results. The complete system transfers these results to a specified cloud device using the TCP/IP protocol. Finally, the Fast Fourier Transform (FFT) approach is used to analyze the power loads in the factory zones. Moreover, Near Field Communication (NFC) technology is used to carry out the actual electricity-load experiments using smart phones. PMID:24351642
Mobile monitoring and embedded control system for factory environment.
Lian, Kuang-Yow; Hsiao, Sung-Jung; Sung, Wen-Tsai
2013-12-17
This paper proposes a real-time method to carry out the monitoring of factory-zone temperatures, humidity and air quality using smart phones. At the same time, the system detects possible flames, and analyzes and monitors the electrical load. The monitoring also includes detecting the vibrations of operating machinery in the factory area. The research proposes integrating a ZigBee and Wi-Fi protocol intelligent monitoring system within the entire plant framework. The sensors on the factory site deliver messages and real-time sensing data to an integrated embedded system via the ZigBee protocol. The integrated embedded system is built on the open-source 32-bit ARM (Advanced RISC Machine) core Arduino Due module, where the network control codes are built in for the ARM chipset integrated controller. The intelligent integrated controller is able to instantly provide numerical analysis results according to the data received from the ZigBee sensors. An Android app and a web-based platform are used to show measurement results. The complete system transfers these results to a specified cloud device using the TCP/IP protocol. Finally, the Fast Fourier Transform (FFT) approach is used to analyze the power loads in the factory zones. Moreover, Near Field Communication (NFC) technology is used to carry out the actual electricity-load experiments using smart phones.
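As a concrete illustration of the FFT-based load analysis, the numpy sketch below extracts harmonic amplitudes from a sampled current waveform; the waveform, sampling rate and harmonic content are invented for the example.

    import numpy as np

    fs = 1000.0                              # sampling rate (Hz), illustrative
    t = np.arange(0.0, 1.0, 1.0 / fs)
    # Synthetic load current: 50 Hz fundamental plus a 3rd-harmonic distortion.
    i_t = 10.0 * np.sin(2 * np.pi * 50 * t) + 1.5 * np.sin(2 * np.pi * 150 * t)

    spec = np.fft.rfft(i_t)
    freqs = np.fft.rfftfreq(len(i_t), 1.0 / fs)
    amps = 2.0 * np.abs(spec) / len(i_t)     # single-sided amplitude spectrum

    for f in (50.0, 150.0):                  # report the expected harmonics
        print(f, amps[np.argmin(np.abs(freqs - f))])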
Annual Performance Evaluation of a Pair of Energy Efficient Houses (WC3 and WC4) in Oak Ridge, TN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biswas, Kaushik; Christian, Jeffrey E; Gehl, Anthony C
2012-04-01
Beginning in 2008, two pairs of energy-saver houses were built at Wolf Creek in Oak Ridge, TN. These houses were designed to maximize energy efficiency using new ultra-high-efficiency components emerging from ORNL's Cooperative Research and Development Agreement (CRADA) partners and others. The first two houses contained 3713 square feet of conditioned area and were designated WC1 and WC2; the second pair consisted of 2721 square feet of conditioned area with a crawlspace foundation, and they are called WC3 and WC4. This report is focused on the annual energy performance of WC3 and WC4, and how they compare against a previously benchmarked maximum-energy-efficiency house of a similar footprint. WC3 and WC4 are both about 55-60% more efficient than traditional new construction. Each house showcases a different envelope system: WC3 is built with advanced framing and features cellulose insulation partially mixed with phase-change materials (PCM), and WC4 has cladding composed of an exterior insulation and finish system (EIFS). The previously benchmarked house was one of three built at the Campbell Creek subdivision in Knoxville, TN. This house (CC3) was designed as a transformation of a builder house (CC1) with the most advanced energy-efficiency features, including solar electricity and hot water, which market conditions are likely to permit within the 2012-2015 period. The builder house itself was representative of a standard, IECC 2006 code-certified, all-electric house built by the builder to sell around 2005-2008.
Ulmer, Jared M; Chapman, James E; Kershaw, Suzanne E; Campbell, Monica; Frank, Lawrence D
2014-07-11
To create and apply an empirically based health and greenhouse gas (GHG) impact assessment tool linking detailed measures of walkability and regional accessibility with travel, physical activity, health indicators and GHG emissions. Parcel land use and transportation system characteristics were calculated within a kilometre network buffer around each Toronto postal code. Built environment measures were linked with health and demographic characteristics from the Canadian Community Health Survey and travel behaviour from the Transportation Tomorrow Survey. Results were incorporated into an existing software tool and used to predict health-related indicators and GHG emissions for the Toronto West Don Lands Redevelopment. Walkability, regional accessibility, sidewalks, bike facilities and recreation facility access were positively associated with physical activity and negatively related to body weight, high blood pressure and transportation impacts. When applied to the West Don Lands, the software tool predicted a substantial shift from automobile use to walking, biking and transit. Walking and biking trips more than doubled, and transit trips increased by one third. Per capita automobile trips decreased by half, and vehicle kilometres travelled and GHG emissions decreased by 15% and 29%, respectively. The results presented are novel and among the first to link health outcomes with detailed built environment features in Canada. The resulting tool is the first of its kind in Canada. This tool can help policy-makers, land use and transportation planners, and health practitioners to evaluate built environment influences on health-related indicators and GHG emissions resulting from contrasting land use and transportation policies and actions.
Code subspaces for LLM geometries
NASA Astrophysics Data System (ADS)
Berenstein, David; Miller, Alexandra
2018-03-01
We consider effective field theory around classical background geometries with a gauge theory dual, specifically those in the class of LLM geometries. These are dual to half-BPS states of N = 4 SYM. We find that the language of code subspaces is natural for discussing the set of nearby states, which are built by acting with effective fields on these backgrounds. This work extends our previous work by going beyond the strict infinite-N limit. We further discuss how one can extract the topology of the state beyond N → ∞ and find that, as before, uncertainty and entanglement entropy calculations provide a useful tool to do so. Finally, we discuss obstructions to writing down a globally defined metric operator. We find that the answer depends on the choice of reference state that one starts with. Therefore, within this setup, there is ambiguity in trying to write an operator that describes the metric globally.
Deformation, Failure, and Fatigue Life of SiC/Ti-15-3 Laminates Accurately Predicted by MAC/GMC
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2002-01-01
NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) (ref. 1) has been extended to enable fully coupled macro-micro deformation, failure, and fatigue life predictions for advanced metal matrix, ceramic matrix, and polymer matrix composites. Because of the multiaxial nature of the code's underlying micromechanics model, GMC (which allows the incorporation of complex local inelastic constitutive models), MAC/GMC finds its most important application in metal matrix composites, like the SiC/Ti-15-3 composite examined here. Furthermore, since GMC predicts the microscale fields within each constituent of the composite material, submodels for local effects such as fiber breakage, interfacial debonding, and matrix fatigue damage can be, and have been, built into MAC/GMC. The present application of MAC/GMC highlights the combination of these features, which has enabled the accurate modeling of the deformation, failure, and life of titanium matrix composites.
Increasing productivity through Total Reuse Management (TRM)
NASA Technical Reports Server (NTRS)
Schuler, M. P.
1991-01-01
Total Reuse Management (TRM) is a new concept currently being promoted by the NASA Langley Software Engineering and Ada Lab (SEAL). It uses concepts similar to those promoted in Total Quality Management (TQM). Both technical and management personnel are continually encouraged to think in terms of reuse. Reuse is not something that is aimed for after a product is completed, but rather it is built into the product from inception through development. Lowering software development costs, reducing risk, and increasing code reliability are the more prominent goals of TRM. Procedures and methods used to adopt and apply TRM are described. Reuse is frequently thought of as only being applicable to code. However, reuse can apply to all products and all phases of the software life cycle. These products include management and quality assurance plans, designs, and testing procedures. Specific examples of successfully reused products are given and future goals are discussed.
SAPNEW: Parallel finite element code for thin shell structures on the Alliant FX/80
NASA Astrophysics Data System (ADS)
Kamat, Manohar P.; Watson, Brian C.
1992-02-01
The results of a research activity aimed at providing a finite element capability for analyzing turbo-machinery bladed-disk assemblies in a vector/parallel processing environment are summarized. Analysis of aircraft turbofan engines is very computationally intensive. The performance limit of modern-day computers with a single processing unit was estimated at 3 billion floating-point operations per second (3 gigaflops). In view of this limit of a sequential unit, performance rates higher than 3 gigaflops can be achieved only through vectorization and/or parallelization, as on the Alliant FX/80. Accordingly, the efforts of this critically needed research were geared towards developing and evaluating parallel finite element methods for static and vibration analysis. A special-purpose code, named with the acronym SAPNEW, performs static and eigen analysis of multi-degree-of-freedom blade models built up from flat thin-shell elements.
Real-space processing of helical filaments in SPARX
Behrmann, Elmar; Tao, Guozhi; Stokes, David L.; Egelman, Edward H.; Raunser, Stefan; Penczek, Pawel A.
2012-01-01
We present a major revision of the iterative helical real-space refinement (IHRSR) procedure and its implementation in the SPARX single-particle image processing environment. We built on over a decade of experience with IHRSR helical structure determination, and we took advantage of the flexible SPARX infrastructure to arrive at an implementation that offers ease of use, flexibility in designing a helical structure determination strategy, and high computational efficiency. We introduced 3D projection-matching code that is now able to work with non-cubic volumes, a geometry better suited for long helical filaments; we enhanced procedures for establishing helical symmetry parameters; and we parallelized the code using the distributed-memory paradigm. An additional feature is a graphical user interface that facilitates entering and editing the parameters controlling the structure determination strategy of the program. In addition, we present a novel approach to detect and evaluate structural heterogeneity due to conformer mixtures that takes advantage of helical structure redundancy. PMID:22248449
Helical vortices: viscous dynamics and instability
NASA Astrophysics Data System (ADS)
Rossi, Maurice; Selcuk, Can; Delbende, Ivan; Ijlra-Upmc Team; Limsi-Cnrs Team
2014-11-01
Understanding the dynamical properties of helical vortices is of great importance for numerous applications such as wind turbines, helicopter rotors, ship propellers. Locally these flows often display a helical symmetry: fields are invariant through combined axial translation of distance Δz and rotation of angle θ = Δz / L around the same z-axis, where 2 πL denotes the helix pitch. A DNS code with built-in helical symmetry has been developed in order to compute viscous quasi-steady basic states with one or multiple vortices. These states will be characterized (core structure, ellipticity, ...) as a function of the pitch, without or with an axial flow component. The instability modes growing in the above base flows and their growth rates are investigated by a linearized version of the DNS code coupled to an Arnoldi procedure. This analysis is complemented by a helical thin-cored vortex filaments model. ANR HELIX.
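In symbols, the helical symmetry invoked above means a field f is invariant under the combined translation-rotation, so it depends on the angle and the axial coordinate only through a single helical variable (our notation):

    \[ f(r, \varphi, z) = f\!\left(r, \, \varphi + \frac{\Delta z}{L}, \, z + \Delta z\right) \ \ \forall \Delta z \quad \Longrightarrow \quad f = f(r, \xi), \quad \xi = \varphi - \frac{z}{L} . \]

This reduction is what lets a DNS code with built-in helical symmetry resolve a fully three-dimensional vortex system on what is effectively a two-dimensional set of unknowns.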
Multiscale Multifunctional Progressive Fracture of Composite Structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Minnetyan, L.
2012-01-01
A new approach is described for evaluating fracture in composite structures. This approach is independent of classical fracture mechanics parameters like fracture toughness. It relies on computational simulation and is programmed in a stand-alone integrated computer code. It is multiscale and multifunctional because it includes composite mechanics for the composite behavior and finite element analysis for predicting the structural response. It contains seven modules: layered composite mechanics (micro, macro, laminate), finite element, updating scheme, local fracture, global fracture, stress-based failure modes, and fracture progression. The computer code is called CODSTRAN (Composite Durability Structural Analysis). It is used in the present paper to evaluate the global fracture of four composite shell problems and one composite built-up structure. Results for the composite shells show that global fracture is enhanced when internal pressure is combined with shear loads. A note on the original reference indicates that nothing has been added to this comprehensive report since then.
SAPNEW: Parallel finite element code for thin shell structures on the Alliant FX/80
NASA Technical Reports Server (NTRS)
Kamat, Manohar P.; Watson, Brian C.
1992-01-01
The results of a research activity aimed at providing a finite element capability for analyzing turbo-machinery bladed-disk assemblies in a vector/parallel processing environment are summarized. Analysis of aircraft turbofan engines is very computationally intensive. The performance limit of modern-day computers with a single processing unit was estimated at 3 billion floating-point operations per second (3 gigaflops). In view of this limit of a sequential unit, performance rates higher than 3 gigaflops can be achieved only through vectorization and/or parallelization, as on the Alliant FX/80. Accordingly, the efforts of this critically needed research were geared towards developing and evaluating parallel finite element methods for static and vibration analysis. A special-purpose code, named with the acronym SAPNEW, performs static and eigen analysis of multi-degree-of-freedom blade models built up from flat thin-shell elements.
The development and evaluation of a new coding system for medical records.
Papazissis, Elias
2014-01-01
The present study aims to develop a simple, reliable and easy tool enabling clinicians to codify the major part of individualized medical details (patient history and findings of physical examination) quickly and easily in routine medical practice, by entering data into a purpose-built software application using structured data elements and detailed medical illustrations. We studied the medical records of 9,320 patients and extracted individualized medical details. We recorded the majority of symptoms and the majority of findings of physical examination into the system, which was named IMPACT® (Intelligent Medical Patient Record and Coding Tool). Subsequently, the system was evaluated by clinicians, based on the examination of 1,206 patients. The evaluation results showed that IMPACT® is an efficient tool, easy to use even under time-pressured conditions. IMPACT® seems to be a promising tool for illustration-guided, structured data entry of medical narrative in electronic patient records.
A Design for Composing and Extending Vehicle Models
NASA Technical Reports Server (NTRS)
Madden, Michael M.; Neuhaus, Jason R.
2003-01-01
The Systems Development Branch (SDB) at NASA Langley Research Center (LaRC) creates simulation software products for research. Each product consists of an aircraft model with experiment extensions. SDB treats its aircraft models as reusable components, upon which experiments can be built. SDB has evolved its aircraft model design with two goals: 1. avoid polluting the aircraft model with experiment code; 2. discourage the copy-and-tailor method of reuse. The current evolution of that architecture accomplishes these goals by reducing experiment creation to "extend and compose". The architecture mechanizes the operational concerns of the model's subsystems and encapsulates them in an interface inherited by all subsystems. Generic operational code exercises the subsystems through the shared interface. An experiment is thus defined by the collection of subsystems that it creates ("compose"). Teams can modify the aircraft subsystems for the experiment using inheritance and polymorphism to create variants ("extend").
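A minimal Python sketch of the "extend and compose" pattern described above, with invented class and method names (the SDB products are not public, so this illustrates the pattern, not their code): every subsystem inherits one operational interface, generic code drives them through it, and an experiment is defined by the subsystems it instantiates.

    from abc import ABC, abstractmethod

    class Subsystem(ABC):
        """Operational interface shared by all aircraft-model subsystems."""
        @abstractmethod
        def update(self, dt: float) -> None: ...

    class Propulsion(Subsystem):           # part of the reusable aircraft model
        def update(self, dt: float) -> None:
            pass                           # integrate engine state here

    class ExperimentDisplay(Subsystem):    # experiment extension, not model code
        def update(self, dt: float) -> None:
            pass                           # drive research displays here

    class QuietPropulsion(Propulsion):     # "extend": a variant via inheritance
        def update(self, dt: float) -> None:
            super().update(dt)             # then modify behaviour for the study

    def run(subsystems: list, dt: float, steps: int) -> None:
        """Generic operational code: exercises subsystems via the shared interface."""
        for _ in range(steps):
            for s in subsystems:
                s.update(dt)

    # "Compose": the experiment is defined by the subsystems it creates.
    run([QuietPropulsion(), ExperimentDisplay()], dt=0.02, steps=10)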
Security Hardened Cyber Components for Nuclear Power Plants: Phase I SBIR Final Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franusich, Michael D.
SpiralGen, Inc. built a proof-of-concept toolkit for enhancing the cyber security of nuclear power plants and other critical infrastructure with high-assurance instrumentation and control code. The toolkit is based on technology from the DARPA High-Assurance Cyber Military Systems (HACMS) program, which has focused on applying the science of formal methods to the formidable set of problems involved in securing cyber-physical systems. The primary challenges beyond HACMS in developing this toolkit were to make the new technology usable by control system engineers and compatible with the regulatory and commercial constraints of the nuclear power industry. The toolkit, packaged as a Simulink add-on, allows a system designer to assemble a high-assurance component from formally specified and proven blocks, and to generate provably correct control and monitor code for that subsystem.
Robot Task Commander with Extensible Programming Environment
NASA Technical Reports Server (NTRS)
Hart, Stephen W (Inventor); Wightman, Brian J (Inventor); Dinh, Duy Paul (Inventor); Yamokoski, John D. (Inventor); Gooding, Dustin R (Inventor)
2014-01-01
A system for developing distributed robot application-level software includes a robot having an associated control module which controls motion of the robot in response to a commanded task, and a robot task commander (RTC) in networked communication with the control module over a network transport layer (NTL). The RTC includes a script engine(s) and a GUI, with a processor and a centralized library of library blocks constructed from an interpretive computer programming code and having input and output connections. The GUI provides access to a Visual Programming Language (VPL) environment and a text editor. In executing a method, the VPL is opened, a task for the robot is built from the code library blocks, and data is assigned to input and output connections identifying input and output data for each block. A task sequence(s) is sent to the control module(s) over the NTL to command execution of the task.
A VLSI chip set for real time vector quantization of image sequences
NASA Technical Reports Server (NTRS)
Baker, Richard L.
1989-01-01
The architecture and implementation of a VLSI chip set that vector quantizes (VQ) image sequences in real time is described. The chip set forms a programmable Single-Instruction, Multiple-Data (SIMD) machine which can implement various vector quantization encoding structures. Its VQ codebook may contain an unlimited number of codevectors, N, of dimension up to K = 64. Under a weighted least-squares error criterion, the engine locates, at video rates, the best codevector in full-searched or large tree-searched VQ codebooks. The ability to manipulate tree-structured codebooks, coupled with parallelism and pipelining, permits searches in as few as O(log N) cycles. A full codebook search results in O(N) performance, compared to O(KN) for a Single-Instruction, Single-Data (SISD) machine. With this VLSI chip set, an entire video coder can be built on a single board, permitting real-time experimentation with very large codebooks.
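The two search modes the chip set supports can be sketched in numpy. A full search visits all N codevectors (the O(KN) multiply-accumulate cost on a sequential machine), while a balanced binary tree search makes O(log N) node comparisons; the codebook layout, weighting and tree organization below are invented for illustration.

    import numpy as np

    def full_search(x, codebook, w):
        """Weighted least-squares full search over all N codevectors."""
        d = codebook - x                       # shape (N, K), broadcast difference
        return int(np.argmin((w * d * d).sum(axis=1)))

    def tree_search(x, nodes, w, depth):
        """Binary tree search: nodes is an implicit heap of test vectors;
        the leaves of the depth-level tree index the codebook."""
        i = 0
        for _ in range(depth):
            l, r = 2 * i + 1, 2 * i + 2
            dl = (w * (nodes[l] - x) ** 2).sum()
            dr = (w * (nodes[r] - x) ** 2).sum()
            i = l if dl <= dr else r
        return i - (2 ** depth - 1)            # leaf index within the codebook

    K, N = 16, 256
    rng = np.random.default_rng(1)
    codebook = rng.standard_normal((N, K))
    w = np.ones(K)                             # uniform weighting for the demo
    x = rng.standard_normal(K)
    print(full_search(x, codebook, w))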
LBE water interaction in sub-critical reactors: First experimental and modelling results
NASA Astrophysics Data System (ADS)
Ciampichetti, A.; Agostini, P.; Benamati, G.; Bandini, G.; Pellini, D.; Forgione, N.; Oriolo, F.; Ambrosini, W.
2008-06-01
This paper concerns the study of the phenomena involved in the interaction between LBE and pressurised water which could occur in some hypothetical accidents in accelerator-driven-system (ADS) type reactors. The LIFUS 5 facility was designed and built at ENEA-Brasimone to reproduce this kind of interaction over a wide range of conditions. The first test of the experimental program was carried out by injecting water at 70 bar and 235 °C into a reaction vessel containing LBE at 1 bar and 350 °C. A pressurisation up to 80 bar was observed in the test section during the considered transient. The SIMMER III code was used to simulate the performed test. The calculated data agree satisfactorily with the experimental results, giving confidence that this code can be used for safety analyses of heavy liquid metal cooled reactors.
Opendf - An Implementation of the Dual Fermion Method for Strongly Correlated Systems
NASA Astrophysics Data System (ADS)
Antipov, Andrey E.; LeBlanc, James P. F.; Gull, Emanuel
The dual fermion method is a multiscale approach for solving lattice problems of interacting strongly correlated systems. In this paper, we present the opendf code, an open-source implementation of the dual fermion method applicable to fermionic single-orbital lattice models in dimensions D = 1, 2, 3 and 4. The method is built on a dynamical mean-field starting point, which neglects all local correlations, and perturbatively adds spatial correlations. Our code is distributed as an open-source package under the GNU General Public License, version 2.