Hamming and Accumulator Codes Concatenated with MPSK or QAM
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Samuel
2009-01-01
In a proposed coding-and-modulation scheme, a high-rate binary data stream would be processed as follows: 1. The input bit stream would be demultiplexed into multiple bit streams. 2. The multiple bit streams would be processed simultaneously into a high-rate outer Hamming code that would comprise multiple short constituent Hamming codes - a distinct constituent Hamming code for each stream. 3. The streams would be interleaved. The interleaver would have a block structure that would facilitate parallelization for high-speed decoding. 4. The interleaved streams would be further processed simultaneously into an inner two-state, rate-1 accumulator code that would comprise multiple constituent accumulator codes - a distinct accumulator code for each stream. 5. The resulting bit streams would be mapped into symbols to be transmitted by use of a higher-order modulation - for example, M-ary phase-shift keying (MPSK) or quadrature amplitude modulation (QAM). The novelty of the scheme lies in the concatenation of the multiple-constituent Hamming and accumulator codes and the corresponding parallel architectures of the encoder and decoder circuitry (see figure) needed to process the multiple bit streams simultaneously. As in the cases of other parallel-processing schemes, one advantage of this scheme is that the overall data rate could be much greater than the data rate of each encoder and decoder stream and, hence, the encoder and decoder could handle data at an overall rate beyond the capability of the individual encoder and decoder circuits.
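A minimal single-stream sketch of the encoding chain described above (outer Hamming code, interleaver, rate-1 two-state accumulator, mapping to MPSK). The (7,4) Hamming generator, the fixed pseudo-random interleaver, and the Gray-labeled 8-PSK mapping are illustrative assumptions, not the parameters of the proposed scheme:

```python
import numpy as np

# (7,4) Hamming code in systematic form: codeword = [data, data @ P mod 2] (illustrative choice)
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])

def hamming74_encode(data_bits):
    """Encode a bit stream (length a multiple of 4) with the (7,4) Hamming code."""
    d = np.reshape(data_bits, (-1, 4))
    parity = d @ P % 2
    return np.hstack([d, parity]).ravel()

def accumulate(bits):
    """Rate-1, two-state accumulator: y[k] = x[k] XOR y[k-1] (a running XOR)."""
    return np.cumsum(bits) % 2

def map_8psk(bits):
    """Map groups of 3 bits to Gray-labeled 8-PSK symbols."""
    gray = [0, 1, 3, 2, 6, 7, 5, 4]              # label placed at each phase position
    b = np.reshape(bits, (-1, 3))
    idx = b[:, 0] * 4 + b[:, 1] * 2 + b[:, 2]
    phase = np.array([gray.index(i) for i in idx])
    return np.exp(1j * 2 * np.pi * phase / 8)

rng = np.random.default_rng(0)
info = rng.integers(0, 2, 4 * 6)                 # 24 information bits
coded = hamming74_encode(info)                   # 42 coded bits
pi = rng.permutation(coded.size)                 # fixed pseudo-random interleaver (assumption)
interleaved = coded[pi]
inner = accumulate(interleaved)                  # inner accumulator code
symbols = map_8psk(inner)                        # 14 8-PSK symbols
print(symbols[:4])
```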
The use of interleaving for reducing radio loss in convolutionally coded systems
NASA Technical Reports Server (NTRS)
Divsalar, D.; Simon, M. K.; Yuen, J. H.
1989-01-01
The use of interleaving after convolutional coding and deinterleaving before Viterbi decoding is proposed. This effectively reduces radio loss at low loop Signal-to-Noise Ratios (SNRs) by several decibels and at high loop SNRs by a few tenths of a decibel. Performance of the coded system can be further enhanced if the modulation index is optimized for this system. This corresponds to a reduction of the required bit SNR at a given bit error rate for the overall system. The introduction of interleaving/deinterleaving into communication systems designed for future deep space missions does not substantially complicate their hardware design or increase their system cost.
Fok, Mable P; Prucnal, Paul R
2009-05-01
All-optical encryption for optical code-division multiple-access systems with interleaved waveband-switching modulation is experimentally demonstrated. The scheme explores dual-pump four-wave mixing in a 35 cm highly nonlinear bismuth oxide fiber to achieve XOR operation of the plaintext and the encryption key. Bit 0 and bit 1 of the encrypted data are represented by two different wavebands. Unlike on-off keying encryption methods, the encrypted data in this approach has the same intensity for both bit 0 and bit 1. Thus no plaintext or ciphertext signatures are observed.
FPGA-based LDPC-coded APSK for optical communication systems.
Zou, Ding; Lin, Changyu; Djordjevic, Ivan B
2017-02-20
In this paper, with the aid of mutual information and generalized mutual information (GMI) capacity analyses, it is shown that geometrically shaped APSK that mimics an optimal Gaussian distribution with equiprobable signaling, together with the corresponding Gray mapping rules, can approach the Shannon limit more closely than conventional quadrature amplitude modulation (QAM) over a certain range of FEC overhead for both 16-APSK and 64-APSK. The field programmable gate array (FPGA) based LDPC-coded APSK emulation is conducted on block interleaver-based and bit interleaver-based systems; the results verify a significant improvement in hardware-efficient bit interleaver-based systems. In bit interleaver-based emulation, the LDPC-coded 64-APSK outperforms 64-QAM, in terms of symbol signal-to-noise ratio (SNR), by 0.1 dB, 0.2 dB, and 0.3 dB at spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz, respectively. It is found by emulation that LDPC-coded 64-APSK for spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz is 1.6 dB, 1.7 dB, and 2.2 dB away from the GMI capacity.
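The kind of mutual-information comparison described above can be sketched with a short Monte Carlo estimate of the symbol-wise mutual information of an equiprobable constellation over an AWGN channel. The square 16-QAM constellation and the SNR points below are illustrative assumptions, not the APSK designs of the paper:

```python
import numpy as np

def mutual_information_awgn(constellation, snr_db, n_samples=100_000, seed=0):
    """Monte Carlo estimate of I(X;Y) in bits/symbol for an equiprobable
    constellation over a complex AWGN channel at the given Es/N0 (dB)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(constellation, dtype=complex)
    x = x / np.sqrt(np.mean(np.abs(x) ** 2))        # normalize to unit average symbol energy
    M = x.size
    n0 = 10 ** (-snr_db / 10)                       # Es = 1, so N0 = 1/SNR
    tx = rng.choice(x, size=n_samples)
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(n_samples)
                               + 1j * rng.standard_normal(n_samples))
    y = tx + noise
    # I(X;Y) = log2(M) - E[ log2 sum_j exp( (|y - x_tx|^2 - |y - x_j|^2) / N0 ) ]
    expo = (np.abs(y - tx)[:, None] ** 2 - np.abs(y[:, None] - x[None, :]) ** 2) / n0
    emax = expo.max(axis=1, keepdims=True)          # max-trick for numerical stability
    log2_sum = (emax[:, 0] + np.log(np.exp(expo - emax).sum(axis=1))) / np.log(2)
    return np.log2(M) - np.mean(log2_sum)

# Example: square 16-QAM as an illustrative stand-in for a 16-point APSK design
levels = np.array([-3.0, -1.0, 1.0, 3.0])
qam16 = np.array([a + 1j * b for a in levels for b in levels])
for snr in (6, 10, 14):
    print(snr, "dB :", round(mutual_information_awgn(qam16, snr), 3), "bits/symbol")
```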
Precoded spatial multiplexing MIMO system with spatial component interleaver.
Gao, Xiang; Wu, Zhanji
In this paper, the performance of a precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with a spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, the optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the proposed limited-feedback precoded scheme with a linear zero-forcing (ZF) receiver, in order to minimize a bound on the average probability of a symbol vector error, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed. Based on the average mutual information (AMI)-maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with spatial component interleaver achieves significant performance advantages over the conventional spatial multiplexing MIMO system.
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1994-01-01
When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
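As a small illustration of the innermost (n, n-16) CRC layer mentioned above, the sketch below appends 16 parity bits computed bit-by-bit with the CCITT generator polynomial x^16 + x^12 + x^5 + 1 and an all-ones preset; the generator choice follows common CCSDS practice, but the message and preset here are illustrative assumptions, not the simulator's configuration:

```python
def crc16_ccitt(bits, preset=0xFFFF):
    """Bitwise CRC-16 with generator x^16 + x^12 + x^5 + 1 (0x1021), MSB first.
    Returns the 16 check bits as a list."""
    reg = preset
    for bit in bits:
        feedback = ((reg >> 15) & 1) ^ bit
        reg = (reg << 1) & 0xFFFF
        if feedback:
            reg ^= 0x1021
    return [(reg >> i) & 1 for i in range(15, -1, -1)]

def crc_encode(info_bits):
    """Form the (n, n-16) codeword: information bits followed by 16 CRC parity bits."""
    return list(info_bits) + crc16_ccitt(info_bits)

msg = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1]   # illustrative information block
codeword = crc_encode(msg)
# Receiver check: running the same CRC over the whole codeword leaves an all-zero register.
assert crc16_ccitt(codeword) == [0] * 16
print(codeword)
```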
Batshon, Hussam G; Djordjevic, Ivan; Schmidt, Ted
2010-09-13
We propose a subcarrier-multiplexed four-dimensional LDPC bit-interleaved coded modulation scheme that is capable of achieving beyond 480 Gb/s single-channel transmission rate over optical channels. The subcarrier-multiplexed four-dimensional LDPC coded modulation scheme outperforms the corresponding dual-polarization schemes by up to 4.6 dB in OSNR at a BER of 10^(-8).
High Order Modulation Protograph Codes
NASA Technical Reports Server (NTRS)
Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)
2014-01-01
Digital communication coding methods are presented for designing protograph-based bit-interleaved coded modulation that is general and applies to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.
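A minimal sketch of the circulant (second-stage) lifting step described above: each nonzero entry of a small base matrix is replaced by a cyclically shifted Z-by-Z identity matrix, and each zero entry by the Z-by-Z all-zero matrix, yielding a quasi-cyclic parity-check matrix. The 2x4 base matrix, lifting factor, and shift values below are illustrative assumptions, not the codes of the invention:

```python
import numpy as np

def circulant_lift(base, shifts, Z):
    """Lift a protograph base matrix to a full parity-check matrix.
    base[i][j] = 1 marks an edge; shifts[i][j] gives the circulant shift."""
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=np.uint8)
    I = np.eye(Z, dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            if base[i, j]:
                # cyclically shifted identity (circulant permutation matrix)
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, shifts[i, j], axis=1)
    return H

# Illustrative 2x4 base matrix and shift assignment, lifted by Z = 8
base = np.array([[1, 1, 1, 0],
                 [0, 1, 1, 1]], dtype=np.uint8)
shifts = np.array([[0, 3, 5, 0],
                   [0, 1, 6, 2]])
H = circulant_lift(base, shifts, Z=8)
print(H.shape)          # (16, 32): a nominally rate-1/2 quasi-cyclic parity-check matrix
```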
Improving soft FEC performance for higher-order modulations via optimized bit channel mappings.
Häger, Christian; Amat, Alexandre Graell I; Brännström, Fredrik; Alvarado, Alex; Agrell, Erik
2014-06-16
Soft forward error correction with higher-order modulations is often implemented in practice via the pragmatic bit-interleaved coded modulation paradigm, where a single binary code is mapped to a nonbinary modulation. In this paper, we study the optimization of the mapping of the coded bits to the modulation bits for a polarization-multiplexed fiber-optical system without optical inline dispersion compensation. Our focus is on protograph-based low-density parity-check (LDPC) codes which allow for an efficient hardware implementation, suitable for high-speed optical communications. The optimization is applied to the AR4JA protograph family, and further extended to protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full field simulations via the split-step Fourier method are used to verify the analysis. The results show performance gains of up to 0.25 dB, which translate into a possible extension of the transmission reach by roughly up to 8%, without significantly increasing the system complexity.
Constellation labeling optimization for bit-interleaved coded APSK
NASA Astrophysics Data System (ADS)
Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe
2016-05-01
This paper investigates the constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in the Digital Video Broadcasting Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its merits of power and spectral efficiency together with robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A binary switching algorithm and its modified version are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining it with DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulation results validate the proposed constellation labeling optimization scheme, which yields better performance than the conventional 32-APSK constellation defined in the DVB-S2 standard.
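A rough sketch of the label-switching idea referenced above: starting from an initial labeling, repeatedly try swapping the bit labels of two constellation points and keep the swap when it lowers a distance-based cost. The tiny 8-PSK constellation, the greedy pair-swap loop, and the pairwise exp(-d^2) cost used here are illustrative assumptions; the paper combines Euclidean-distance and mutual-information cost terms for 32-APSK:

```python
import numpy as np
from itertools import combinations

def labeling_cost(points, labels):
    """Penalize placing labels with large Hamming distance on nearby points."""
    cost = 0.0
    for i, j in combinations(range(len(points)), 2):
        dham = bin(labels[i] ^ labels[j]).count("1")
        d2 = abs(points[i] - points[j]) ** 2
        cost += dham * np.exp(-d2)           # illustrative cost term
    return cost

def binary_switching(points, labels, max_passes=20):
    """Greedy label swapping: accept any pairwise swap that reduces the cost."""
    labels = list(labels)
    best = labeling_cost(points, labels)
    for _ in range(max_passes):
        improved = False
        for i, j in combinations(range(len(points)), 2):
            labels[i], labels[j] = labels[j], labels[i]
            c = labeling_cost(points, labels)
            if c < best:
                best, improved = c, True                     # keep the swap
            else:
                labels[i], labels[j] = labels[j], labels[i]  # undo
        if not improved:
            break
    return labels, best

# Illustrative 8-PSK constellation with a random starting labeling
pts = np.exp(1j * 2 * np.pi * np.arange(8) / 8)
start = list(np.random.default_rng(1).permutation(8))
opt_labels, opt_cost = binary_switching(pts, start)
print(start, "->", opt_labels, "cost", round(opt_cost, 3))
```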
Performance Bounds on Two Concatenated, Interleaved Codes
NASA Technical Reports Server (NTRS)
Moision, Bruce; Dolinar, Samuel
2010-01-01
A method has been developed of computing bounds on the performance of a code comprised of two linear binary codes generated by two encoders serially concatenated through an interleaver. Originally intended for use in evaluating the performances of some codes proposed for deep-space communication links, the method can also be used in evaluating the performances of short-block-length codes in other applications. The method applies, more specifically, to a communication system in which the following processes take place: At the transmitter, the original binary information that one seeks to transmit is first processed by an encoder into an outer code (Co) characterized by, among other things, a pair of numbers (n,k), where n (n > k) is the total number of code bits associated with k information bits and n - k bits are used for correcting or at least detecting errors. Next, the outer code is processed through either a block or a convolutional interleaver. In the block interleaver, the words of the outer code are processed in blocks of I words. In the convolutional interleaver, the interleaving operation is performed bit-wise in N rows with delays that are multiples of B bits. The output of the interleaver is processed through a second encoder to obtain an inner code (Ci) characterized by (ni,ki). The output of the inner code is transmitted over an additive-white-Gaussian-noise channel characterized by a symbol signal-to-noise ratio (SNR) Es/No and a bit SNR Eb/No. At the receiver, an inner decoder generates estimates of bits. Depending on whether a block or a convolutional interleaver is used at the transmitter, the sequence of estimated bits is processed through a block or a convolutional de-interleaver, respectively, to obtain estimates of code words. Then the estimates of the code words are processed through an outer decoder, which generates estimates of the original information along with flags indicating which estimates are presumed to be correct and which are found to be erroneous. From the perspective of the present method, the topic of major interest is the performance of the communication system as quantified in the word-error rate and the undetected-error rate as functions of the SNRs and the total latency of the interleaver and inner code. The method is embodied in equations that describe bounds on these functions. Throughout the derivation of the equations that embody the method, it is assumed that the decoder for the outer code corrects any error pattern of t or fewer errors, detects any error pattern of s or fewer errors, may detect some error patterns of more than s errors, and does not correct any patterns of more than t errors. Because a mathematically complete description of the equations that embody the method and of the derivation of the equations would greatly exceed the space available for this article, it must suffice to summarize by reporting that the derivation includes consideration of several complex issues, including relationships between latency and memory requirements for block and convolutional codes, burst error statistics, enumeration of error-event intersections, and effects of different interleaving depths. In a demonstration, the method was used to calculate bounds on the performances of several communication systems, each based on serial concatenation of a (63,56) expurgated Hamming code with a convolutional inner code through a convolutional interleaver.
The bounds calculated by use of the method were compared with results of numerical simulations of performances of the systems to show the regions where the bounds are tight (see figure).
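One ingredient of such bounds can be illustrated directly: for an outer bounded-distance decoder that corrects any pattern of t or fewer errors, a word is decoded correctly whenever at most t of its n bits are flipped, so (assuming independent bit errors after ideal interleaving, an assumption made here purely for illustration) the word-error rate is bounded by the binomial tail below. The (63,56) length is taken from the example in the abstract; t = 1 and the listed channel error rates are illustrative assumptions:

```python
from math import comb

def word_error_bound(n, t, p):
    """P(word error) <= P(more than t of n independent bit errors)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

# (63,56) expurgated Hamming outer code, assumed single-error-correcting (t = 1)
for p in (1e-2, 1e-3, 1e-4):
    print(f"p = {p:.0e}:  P_word <= {word_error_bound(63, 1, p):.3e}")
```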
Turbo Trellis Coded Modulation With Iterative Decoding for Mobile Satellite Communications
NASA Technical Reports Server (NTRS)
Divsalar, D.; Pollara, F.
1997-01-01
In this paper, analytical bounds on the performance of parallel concatenation of two codes, known as turbo codes, and serial concatenation of two codes over fading channels are obtained. Based on this analysis, design criteria for the selection of component trellis codes for MPSK modulation, and a suitable bit-by-bit iterative decoding structure are proposed. Examples are given for throughput of 2 bits/sec/Hz with 8PSK modulation. The parallel concatenation example uses two rate 4/5 8-state convolutional codes with two interleavers. The convolutional codes' outputs are then mapped to two 8PSK modulations. The serial concatenated code example uses an 8-state outer code with rate 4/5 and a 4-state inner trellis code with 5 inputs and 2 x 8PSK outputs per trellis branch. Based on the above-mentioned design criteria for fading channels, a method to obtain the structure of the trellis code with maximum diversity is proposed. Simulation results are given for AWGN and an independent Rayleigh fading channel with perfect Channel State Information (CSI).
NASA Astrophysics Data System (ADS)
Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao
2018-02-01
A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on a fixed code length, together with a corresponding decoding scheme, is proposed. The RA-MLC scheme combines multilevel coding and modulation with a binary linear block code at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out by a preset rule, and the signal is then transmitted through a standard single-mode fiber span of 100 km. The receiver improves the accuracy of decoding by passing soft information through the different layers, which enhances the performance. Simulations are carried out in an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduces the number of decoders by 72% and realizes 22 rate adaptations without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER=1E-3.
Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M
2010-02-01
In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose using a rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency due to symbol- instead of bit-level processing but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper presents, compared to its prior-art binary counterpart, the proposed NB-LDPC-CM scheme better addresses the needs of future OTNs, namely achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN.
LDPC-PPM Coding Scheme for Optical Communication
NASA Technical Reports Server (NTRS)
Barsoum, Maged; Moision, Bruce; Divsalar, Dariush; Fitz, Michael
2009-01-01
In a proposed coding-and-modulation/demodulation-and-decoding scheme for a free-space optical communication system, an error-correcting code of the low-density parity-check (LDPC) type would be concatenated with a modulation code that consists of a mapping of bits to pulse-position-modulation (PPM) symbols. Hence, the scheme is denoted LDPC-PPM. This scheme could be considered a competitor of a related prior scheme in which an outer convolutional error-correcting code is concatenated with an interleaving operation, a bit-accumulation operation, and a PPM inner code. Both the prior and present schemes can be characterized as serially concatenated pulse-position modulation (SCPPM) coding schemes. Figure 1 represents a free-space optical communication system based on either the present LDPC-PPM scheme or the prior SCPPM scheme. At the transmitting terminal, the original data (u) are processed by an encoder into blocks of bits (a), and the encoded data are mapped to PPM of an optical signal (c). For the purpose of design and analysis, the optical channel in which the PPM signal propagates is modeled as a Poisson point process. At the receiving terminal, the arriving optical signal (y) is demodulated to obtain an estimate (a^) of the coded data, which is then processed by a decoder to obtain an estimate (u^) of the original data.
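The inner modulation step described above, a mapping of bits to PPM symbols, can be sketched in a few lines: each group of log2(M) bits selects one of M time slots per symbol period. The natural binary slot mapping and M = 16 below are illustrative assumptions:

```python
import numpy as np

def bits_to_ppm(bits, M=16):
    """Map groups of log2(M) bits to M-ary PPM symbols (one pulsed slot per symbol)."""
    k = int(np.log2(M))
    groups = np.reshape(bits, (-1, k))
    slots = groups @ (1 << np.arange(k - 1, -1, -1))   # natural binary slot index
    frames = np.zeros((len(slots), M), dtype=np.uint8)
    frames[np.arange(len(slots)), slots] = 1           # pulse in the selected slot
    return frames

coded_bits = np.array([1, 0, 1, 1,  0, 0, 1, 0,  1, 1, 1, 1])
print(bits_to_ppm(coded_bits, M=16))                   # three 16-slot PPM frames
```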
Performance of Low-Density Parity-Check Coded Modulation
NASA Astrophysics Data System (ADS)
Hamkins, J.
2011-02-01
This article presents the simulated performance of a family of nine AR4JA low-density parity-check (LDPC) codes when used with each of five modulations. In each case, the decoder inputs are codebit log-likelihood ratios computed from the received (noisy) modulation symbols using a general formula which applies to arbitrary modulations. Suboptimal soft-decision and hard-decision demodulators are also explored. Bit-interleaving and various mappings of bits to modulation symbols are considered. A number of subtle decoder algorithm details are shown to affect performance, especially in the error floor region. Among these are quantization dynamic range and step size, clipping degree-one variable nodes, "Jones clipping" of variable nodes, approximations of the min* function, and partial hard-limiting messages from check nodes. Using these decoder optimizations, all coded modulations simulated here are free of error floors down to codeword error rates below 10^{-6}. The purpose of generating this performance data is to aid system engineers in determining an appropriate code and modulation to use under specific power and bandwidth constraints, and to provide information needed to design a variable/adaptive coded modulation (VCM/ACM) system using the AR4JA codes.
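The decoder-input computation referred to above (code-bit log-likelihood ratios from noisy modulation symbols, for an arbitrary constellation) can be sketched as below; the 8-PSK constellation, natural labeling, and complex AWGN model are illustrative assumptions, and a suboptimal max-log demodulator would simply replace the exact log-sum-exp:

```python
import numpy as np

def bit_llrs(y, constellation, labels, nbits, n0):
    """Exact bit LLRs for received symbols y over complex AWGN with noise
    variance n0 per symbol: LLR = log P(bit=0 | y) / P(bit=1 | y)."""
    metrics = -np.abs(y[:, None] - constellation[None, :]) ** 2 / n0   # log p(y|x) + const
    llrs = np.zeros((len(y), nbits))
    for b in range(nbits):
        bit_of = (labels >> (nbits - 1 - b)) & 1
        m0 = metrics[:, bit_of == 0]
        m1 = metrics[:, bit_of == 1]
        # log-sum-exp over the symbols consistent with bit value 0 and with bit value 1
        lse0 = m0.max(axis=1) + np.log(np.exp(m0 - m0.max(axis=1, keepdims=True)).sum(axis=1))
        lse1 = m1.max(axis=1) + np.log(np.exp(m1 - m1.max(axis=1, keepdims=True)).sum(axis=1))
        llrs[:, b] = lse0 - lse1
    return llrs

# Illustrative 8-PSK with natural labeling
M, nbits, n0 = 8, 3, 0.2
const = np.exp(1j * 2 * np.pi * np.arange(M) / M)
labels = np.arange(M)
rng = np.random.default_rng(0)
tx = rng.integers(0, M, 5)
y = const[tx] + np.sqrt(n0 / 2) * (rng.standard_normal(5) + 1j * rng.standard_normal(5))
print(np.round(bit_llrs(y, const, labels, nbits, n0), 2))
```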
A new LDPC decoding scheme for PDM-8QAM BICM coherent optical communication system
NASA Astrophysics Data System (ADS)
Liu, Yi; Zhang, Wen-bo; Xi, Li-xia; Tang, Xian-feng; Zhang, Xiao-guang
2015-11-01
A new log-likelihood ratio (LLR) message estimation method is proposed for a polarization-division multiplexing eight quadrature amplitude modulation (PDM-8QAM) bit-interleaved coded modulation (BICM) optical communication system. The formulation of the posterior probability is theoretically analyzed, and a way to reduce the pre-decoding bit error rate (BER) of the low-density parity-check (LDPC) decoder for PDM-8QAM constellations is presented. Simulation results show that it outperforms the traditional scheme; the new post-decoding BER is reduced to 50% of that of the traditional post-decoding algorithm.
Hardware Implementation of Serially Concatenated PPM Decoder
NASA Technical Reports Server (NTRS)
Moision, Bruce; Hamkins, Jon; Barsoum, Maged; Cheng, Michael; Nakashima, Michael
2009-01-01
A prototype decoder for a serially concatenated pulse position modulation (SCPPM) code has been implemented in a field-programmable gate array (FPGA). At the time of this reporting, this is the first known hardware SCPPM decoder. The SCPPM coding scheme, conceived for free-space optical communications with both deep-space and terrestrial applications in mind, is an improvement of several dB over the conventional Reed-Solomon PPM scheme. The design of the FPGA SCPPM decoder is based on a turbo decoding algorithm that requires relatively low computational complexity while delivering error-rate performance within approximately 1 dB of channel capacity. The SCPPM encoder consists of an outer convolutional encoder, an interleaver, an accumulator, and an inner modulation encoder (more precisely, a mapping of bits to PPM symbols). Each code is describable by a trellis (a finite directed graph). The SCPPM decoder consists of an inner soft-in-soft-out (SISO) module, a de-interleaver, an outer SISO module, and an interleaver connected in a loop (see figure). Each SISO module applies the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm to compute a-posteriori bit log-likelihood ratios (LLRs) from a-priori LLRs by traversing the code trellis in forward and backward directions. The SISO modules iteratively refine the LLRs by passing the estimates between one another much like the working of a turbine engine. Extrinsic information (the difference between the a-posteriori and a-priori LLRs) is exchanged rather than the a-posteriori LLRs to minimize undesired feedback. All computations are performed in the logarithmic domain, wherein multiplications are translated into additions, thereby reducing complexity and sensitivity to fixed-point implementation roundoff errors. To lower the required memory for storing channel likelihood data and the amounts of data transfer between the decoder and the receiver, one can discard the majority of channel likelihoods, using only the remainder in operation of the decoder. This is accomplished in the receiver by transmitting only a subset consisting of the likelihoods that correspond to time slots containing the largest numbers of observed photons during each PPM symbol period. The assumed number of observed photons in the remaining time slots is set to the mean of a noise slot. In low background noise, the selection of a small subset in this manner results in only negligible loss. Other features of the decoder design to reduce complexity and increase speed include (1) quantization of metrics in an efficient procedure chosen to incur no more than a small performance loss and (2) the use of the max-star function that allows a sum of exponentials to be computed by simple operations that involve only an addition, a subtraction, and a table lookup. Another prominent feature of the design is a provision for access to interleaver and de-interleaver memory in a single clock cycle, eliminating the multiple clock-cycle latency characteristic of prior interleaver and de-interleaver designs.
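The max-star operation mentioned above has a compact form: max*(a, b) = ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a-b|), where the correction term is what a hardware decoder typically reads out of a small lookup table. A sketch with an assumed 8-entry table (table size and step are illustrative assumptions, not the FPGA design's parameters):

```python
import math

def max_star_exact(a, b):
    """Jacobian logarithm: max*(a, b) = ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a-b|)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

# Hardware-style variant: the correction term comes from a small lookup table
# indexed by |a - b|; the 8-entry table and 0.5 step size are illustrative assumptions.
TABLE_STEP = 0.5
CORRECTION = [math.log1p(math.exp(-i * TABLE_STEP)) for i in range(8)]

def max_star_lut(a, b):
    d = abs(a - b)
    idx = int(d / TABLE_STEP)
    corr = CORRECTION[idx] if idx < len(CORRECTION) else 0.0   # beyond the table, correction ~ 0
    return max(a, b) + corr

for a, b in [(0.0, 0.0), (1.2, -0.3), (4.0, -4.0)]:
    print(a, b, round(max_star_exact(a, b), 4), round(max_star_lut(a, b), 4))
```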
Non-binary LDPC-coded modulation for high-speed optical metro networks with backpropagation
NASA Astrophysics Data System (ADS)
Arabaci, Murat; Djordjevic, Ivan B.; Saunders, Ross; Marcoccia, Roberto M.
2010-01-01
To simultaneously mitigate the linear and nonlinear channel impairments in high-speed optical communications, we propose the use of non-binary low-density-parity-check-coded modulation in combination with a coarse backpropagation method. By employing backpropagation, we reduce the memory in the channel and in return obtain significant reductions in the complexity of the channel equalizer, which grows exponentially with the channel memory. We then compensate for the remaining channel distortions using forward error correction based on non-binary LDPC codes. We propose the non-binary-LDPC-coded modulation scheme because, compared to a bit-interleaved binary-LDPC-coded modulation scheme employing turbo equalization, the proposed scheme lowers the computational complexity and latency of the overall system while providing impressively larger coding gains.
Performance analysis of a cascaded coding scheme with interleaved outer code
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A cascaded coding scheme for a random error channel with a given bit-error rate is analyzed. In this scheme, the inner code C1 is an (n1, m1*l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved with a degree m1. A procedure for computing the probability of a correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.
NASA Astrophysics Data System (ADS)
Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi
2017-12-01
We analyze the achievable information rates (AIRs) for coded modulation schemes with QAM constellations with both bit-wise and symbol-wise decoders, corresponding to the case where a binary code is used in combination with a higher-order modulation using the bit-interleaved coded modulation (BICM) paradigm and to the case where a nonbinary code over a field matched to the constellation size is used, respectively. In particular, we consider hard decision decoding, which is the preferable option for fiber-optic communication systems where decoding complexity is a concern. Recently, Liga et al. analyzed the AIRs for bit-wise and symbol-wise decoders considering what the authors called a hard decision decoder which, however, exploits soft information of the transition probabilities of the discrete-input discrete-output channel resulting from the hard detection. As such, the complexity of the decoder is essentially the same as the complexity of a soft decision decoder. In this paper, we analyze instead the AIRs for the standard hard decision decoder, commonly used in practice, where the decoding is based on the Hamming distance metric. We show that if standard hard decision decoding is used, bit-wise decoders yield significantly higher AIRs than symbol-wise decoders. As a result, contrary to the conclusion by Liga et al., binary decoders together with the BICM paradigm are preferable for spectrally-efficient fiber-optic systems. We also design binary and nonbinary staircase codes and show that, in agreement with the AIRs, binary codes yield better performance.
Efficient Signal, Code, and Receiver Designs for MIMO Communication Systems
2003-06-01
Fragmentary excerpt (figure captions and surviving text): concatenation of a tilted-QAM inner code with an LDPC outer code with a two-component iterative soft-decision decoder; soft-decision codes such as turbo and LDPC codes can approach capacity on AWGN channels; the coded bits of the low-density parity-check (LDPC) code are randomly interleaved so that nearby bits go through different sub-channels.
Bandwidth efficient CCSDS coding standard proposals
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Perez, Lance C.; Wang, Fu-Quan
1992-01-01
The basic concatenated coding system for the space telemetry channel consists of a Reed-Solomon (RS) outer code, a symbol interleaver/deinterleaver, and a bandwidth efficient trellis inner code. A block diagram of this configuration is shown. The system may operate with or without the outer code and interleaver. In this recommendation, the outer code remains the (255,223) RS code over GF(2 exp 8) with an error correcting capability of t = 16 eight-bit symbols. This code's excellent performance and the existence of fast, cost-effective decoders justify its continued use. The purpose of the interleaver/deinterleaver is to distribute burst errors out of the inner decoder over multiple codewords of the outer code. This utilizes the error correcting capability of the outer code more efficiently and reduces the probability of an RS decoder failure. Since the space telemetry channel is not considered bursty, the required interleaving depth is primarily a function of the inner decoding method. A diagram of an interleaver with depth 4 that is compatible with the (255,223) RS code is shown. Specific interleaver requirements are discussed after the inner code recommendations.
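A small sketch of the depth-4 symbol interleaver discussed above: outer-code symbols are written into a matrix by rows (one RS codeword per row) and read out by columns, so a burst of consecutive symbol errors out of the inner decoder is spread across several RS codewords. The toy codeword length is an illustrative assumption (the CCSDS RS codewords are 255 symbols long):

```python
import numpy as np

def block_interleave(symbols, depth):
    """Write 'depth' codewords row by row, read the matrix out column by column."""
    return np.reshape(symbols, (depth, -1)).T.ravel()

def block_deinterleave(symbols, depth):
    return np.reshape(symbols, (-1, depth)).T.ravel()

depth, n = 4, 8                       # depth-4 interleaver, toy codeword length of 8 symbols
codewords = np.arange(depth * n)      # symbol i of codeword j is j*n + i
tx = block_interleave(codewords, depth)

# A burst of 4 consecutive channel symbol errors...
rx = tx.copy()
rx[10:14] = -1
# ...lands in 4 different codewords after de-interleaving:
deint = block_deinterleave(rx, depth)
hit_codewords = {int(pos) // n for pos in np.flatnonzero(deint == -1)}
print(sorted(hit_codewords))          # one error in each of the 4 codewords
```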
Simultaneous storage of medical images in the spatial and frequency domain: a comparative study.
Nayak, Jagadish; Bhat, P Subbanna; Acharya U, Rajendra; Uc, Niranjan
2004-06-05
Digital watermarking is a technique of hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images, to reduce storage and transmission overheads. The patient information is encrypted before interleaving with images to ensure greater security. The bio-signals are compressed and subsequently interleaved with the image. This interleaving is carried out in the spatial domain and the frequency domain. The performance of interleaving in the spatial, Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) coefficients is studied. Differential pulse code modulation (DPCM) is employed for data compression as well as encryption, and results are tabulated for a specific example. It can be seen from the results that the process does not affect the picture quality. This is attributed to the fact that a change in the LSB of a pixel changes its brightness by 1 part in 256. Spatial and DFT domain interleaving gave much lower %NRMSE than the DCT and DWT domains. The results show that for spatial-domain interleaving, the %NRMSE was less than 0.25% for 8-bit encoded pixel intensity. Among the frequency domain interleaving methods, DFT was found to be very efficient.
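The spatial-domain interleaving described above amounts to replacing the least significant bit of selected pixels with bits of the (encrypted) patient data, which is why the brightness of an 8-bit pixel changes by at most 1 part in 256. A minimal sketch, with a toy XOR cipher standing in for the DPCM-based compression/encryption of the paper (an assumption made purely for illustration):

```python
import numpy as np

def embed_lsb(image, data_bits):
    """Interleave data bits into the least significant bits of the first pixels."""
    flat = image.ravel().copy()
    flat[:len(data_bits)] = (flat[:len(data_bits)] & 0xFE) | data_bits
    return flat.reshape(image.shape)

def extract_lsb(image, n_bits):
    return image.ravel()[:n_bits] & 1

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)        # toy 8-bit image

patient_info = np.frombuffer(b"ID:123", dtype=np.uint8)        # hypothetical patient record
key = rng.integers(0, 256, size=patient_info.size, dtype=np.uint8)
cipher = patient_info ^ key                                     # toy XOR encryption
bits = np.unpackbits(cipher)

stego = embed_lsb(img, bits)
assert np.max(np.abs(stego.astype(int) - img.astype(int))) <= 1   # at most 1/256 brightness change
recovered = np.packbits(extract_lsb(stego, bits.size)) ^ key
print(recovered.tobytes())                                      # b'ID:123'
```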
NASA Technical Reports Server (NTRS)
Divsalar, D.; Pollara, F.
1995-01-01
In this article, we design new turbo codes that can achieve near-Shannon-limit performance. The design criterion for random interleavers is based on maximizing the effective free distance of the turbo code, i.e., the minimum output weight of codewords due to weight-2 input sequences. An upper bound on the effective free distance of a turbo code is derived. This upper bound can be achieved if the feedback connection of the convolutional codes uses primitive polynomials. We review multiple turbo codes (parallel concatenation of q convolutional codes), which increase the so-called 'interleaving gain' as q and the interleaver size increase, and a suitable decoder structure derived from an approximation to the maximum a posteriori probability decision rule. We develop new rate 1/3, 2/3, 3/4, and 4/5 constituent codes to be used in the turbo encoder structure. These codes, with 2 to 32 states, are designed by using primitive polynomials. The resulting turbo codes have rates b/n (b = 1, 2, 3, 4 and n = 2, 3, 4, 5, 6), and include random interleavers for better asymptotic performance. These codes are suitable for deep-space communications with low throughput and for near-Earth communications where high throughput is desirable. The performance of these codes is within 1 dB of the Shannon limit at a bit-error rate of 10(exp -6) for throughputs from 1/15 up to 4 bits/s/Hz.
Serial turbo trellis coded modulation using a serially concatenated coder
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Dolinar, Samuel J. (Inventor); Pollara, Fabrizio (Inventor)
2010-01-01
Serial concatenated trellis coded modulation (SCTCM) includes an outer coder, an interleaver, a recursive inner coder and a mapping element. The outer coder receives data to be coded and produces outer coded data. The interleaver permutes the outer coded data to produce interleaved data. The recursive inner coder codes the interleaved data to produce inner coded data. The mapping element maps the inner coded data to a symbol. The recursive inner coder has a structure which facilitates iterative decoding of the symbols at a decoder system. The recursive inner coder and the mapping element are selected to maximize the effective free Euclidean distance of a trellis coded modulator formed from the recursive inner coder and the mapping element. The decoder system includes a demodulation unit, an inner SISO (soft-input soft-output) decoder, a deinterleaver, an outer SISO decoder, and an interleaver.
Superdense coding interleaved with forward error correction
Humble, Travis S.; Sadlier, Ronald J.
2016-05-12
Superdense coding promises increased classical capacity and communication security but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate against quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated against by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve the performance in near-term demonstrations of superdense coding.
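The burst-mitigation effect described above can be illustrated with a toy experiment: a single-error-correcting code fails when a burst puts several errors in one codeword, but if the codewords are interleaved before transmission the same burst is spread so that each codeword sees at most one error. The 7-bit framing, depth-4 interleaver, and burst length below are illustrative assumptions, not the FEC codes or channel of the study:

```python
import numpy as np

def interleave(bits, depth):
    return np.reshape(bits, (depth, -1)).T.ravel()

def deinterleave(bits, depth):
    return np.reshape(bits, (-1, depth)).T.ravel()

n, depth = 7, 4                      # four 7-bit codewords of a 1-error-correcting code
codewords = np.zeros(n * depth, dtype=int)   # all-zero codewords; 1s below mark channel errors

def errors_per_codeword(rx):
    return [int(rx[i*n:(i+1)*n].sum()) for i in range(depth)]

burst = slice(9, 13)                 # a 4-bit burst error on the channel

# Without interleaving: the burst lands inside a single codeword
rx_plain = codewords.copy(); rx_plain[burst] = 1
print("no interleaving :", errors_per_codeword(rx_plain))       # [0, 4, 0, 0]

# With interleaving: after de-interleaving each codeword sees at most one error
tx = interleave(codewords, depth)
tx[burst] = 1
rx_int = deinterleave(tx, depth)
print("with interleaving:", errors_per_codeword(rx_int))        # [1, 1, 1, 1]
```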
Simultaneous storage of medical images in the spatial and frequency domain: A comparative study
Nayak, Jagadish; Bhat, P Subbanna; Acharya U, Rajendra; UC, Niranjan
2004-01-01
Background Digital watermarking is a technique of hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images, to reduce storage and transmission overheads. Methods The patient information is encrypted before interleaving with images to ensure greater security. The bio-signals are compressed and subsequently interleaved with the image. This interleaving is carried out in the spatial domain and the frequency domain. The performance of interleaving in the spatial, Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) coefficients is studied. Differential pulse code modulation (DPCM) is employed for data compression as well as encryption, and results are tabulated for a specific example. Results It can be seen from the results that the process does not affect the picture quality. This is attributed to the fact that a change in the LSB of a pixel changes its brightness by 1 part in 256. Spatial and DFT domain interleaving gave much lower %NRMSE than the DCT and DWT domains. Conclusion The results show that for spatial-domain interleaving, the %NRMSE was less than 0.25% for 8-bit encoded pixel intensity. Among the frequency domain interleaving methods, DFT was found to be very efficient. PMID:15180899
Performance of DPSK with convolutional encoding on time-varying fading channels
NASA Technical Reports Server (NTRS)
Mui, S. Y.; Modestino, J. W.
1977-01-01
The bit error probability performance of a differentially-coherent phase-shift keyed (DPSK) modem with convolutional encoding and Viterbi decoding on time-varying fading channels is examined. Both the Rician and the lognormal channels are considered. Bit error probability upper bounds on fully-interleaved (zero-memory) fading channels are derived and substantiated by computer simulation. It is shown that the resulting coded system performance is a relatively insensitive function of the choice of channel model provided that the channel parameters are related according to the correspondence developed as part of this paper. Finally, a comparison of DPSK with a number of other modulation strategies is provided.
Enhanced decoding for the Galileo S-band mission
NASA Technical Reports Server (NTRS)
Dolinar, S.; Belongie, M.
1993-01-01
A coding system under consideration for the Galileo S-band low-gain antenna mission is a concatenated system using a variable redundancy Reed-Solomon outer code and a (14,1/4) convolutional inner code. The 8-bit Reed-Solomon symbols are interleaved to depth 8, and the eight 255-symbol codewords in each interleaved block have redundancies 64, 20, 20, 20, 64, 20, 20, and 20, respectively (or equivalently, the codewords have 191, 235, 235, 235, 191, 235, 235, and 235 8-bit information symbols, respectively). This concatenated code is to be decoded by an enhanced decoder that utilizes a maximum likelihood (Viterbi) convolutional decoder; a Reed-Solomon decoder capable of processing erasures; an algorithm for declaring erasures in undecoded codewords based on known erroneous symbols in neighboring decodable words; a second Viterbi decoding operation (redecoding) constrained to follow only paths consistent with the known symbols from previously decodable Reed-Solomon codewords; and a second Reed-Solomon decoding operation using the output from the Viterbi redecoder and additional erasure declarations to the extent possible. It is estimated that this code and decoder can achieve a decoded bit error rate of 1 x 10(exp -7) at a concatenated code signal-to-noise ratio of 0.76 dB. By comparison, a threshold of 1.17 dB is required for a baseline coding system consisting of the same (14,1/4) convolutional code, a (255,223) Reed-Solomon code with constant redundancy 32 also interleaved to depth 8, a one-pass Viterbi decoder, and a Reed-Solomon decoder incapable of declaring or utilizing erasures. The relative gain of the enhanced system is thus 0.41 dB. It is predicted from analysis based on an assumption of infinite interleaving that the coding gain could be further improved by approximately 0.2 dB if four stages of Viterbi decoding and four levels of Reed-Solomon redundancy are permitted. Confirmation of this effect and specification of the optimum four-level redundancy profile for depth-8 interleaving is currently being done.
NASA Astrophysics Data System (ADS)
Deng, Rui; Yu, Jianjun; He, Jing; Wei, Yiran
2018-05-01
In this paper, we experimentally demonstrate a complete real-time 4-level pulse amplitude modulation (PAM-4) Q-band radio-over-fiber (RoF) system with optical heterodyning and envelope detector (ED) down-conversion. Meanwhile, a cost-efficient real-time implementation scheme of cascaded multi-modulus algorithm (CMMA) equalization is proposed in this paper. Using the proposed scheme, CMMA equalization is applied in the system for signal recovery. In addition, to improve the transmission performance of the system, an interleaved Reed-Solomon (RS) code is applied in the real-time system. Although there is serious power impulse noise in the system, the system can still achieve a bit error rate (BER) below 1 × 10^-7 after 25 km standard single mode fiber (SSMF) transmission and 1-m wireless transmission.
Capacity Maximizing Constellations
NASA Technical Reports Server (NTRS)
Barsoum, Maged; Jones, Christopher
2010-01-01
Some non-traditional signal constellations have been proposed for transmission of data over the Additive White Gaussian Noise (AWGN) channel using such channel-capacity-approaching codes as low-density parity-check (LDPC) or turbo codes. Computational simulations have shown performance gains of more than 1 dB over traditional constellations. These gains could be translated to bandwidth-efficient communications, variously, over longer distances, using less power, or using smaller antennas. The proposed constellations have been used in a bit-interleaved coded modulation system employing state-of-the-art LDPC codes. In computational simulations, these constellations were shown to afford performance gains over traditional constellations as predicted by the gap between the parallel decoding capacity of the constellations and the Gaussian capacity.
The Composite Analytic and Simulation Package or RFI (CASPR) on a coded channel
NASA Technical Reports Server (NTRS)
Freedman, Jeff; Berman, Ted
1993-01-01
CASPR is an analysis package which determines the performance of a coded signal in the presence of Radio Frequency Interference (RFI) and Additive White Gaussian Noise (AWGN). It can analyze a system with convolutional coding, Reed-Solomon (RS) coding, or a concatenation of the two. The signals can either be interleaved or non-interleaved. The model measures the system performance in terms of either the Eb/N0 required to achieve a given Bit Error Rate (BER) or the BER needed for a constant Eb/N0.
A digital audio/video interleaving system. [for Shuttle Orbiter
NASA Technical Reports Server (NTRS)
Richards, R. W.
1978-01-01
A method of interleaving an audio signal with its associated video signal for simultaneous transmission or recording, and the subsequent separation of the two signals, is described. Comparisons are made between the new audio signal interleaving system and the Skylab PAM audio/video interleaving system, pointing out improvements gained by using the digital audio/video interleaving system. It was found that the digital technique is the simplest, most effective and most reliable method for interleaving audio and/or other types of data into the video signal for the Shuttle Orbiter application. Details of the design of a multiplexer capable of accommodating two basic data channels, each consisting of a single 31.5-kb/s digital bit stream, are given. An adaptive slope delta modulation system is introduced to digitize audio signals, producing a high immunity of word intelligibility to channel errors, primarily due to the robust nature of the delta-modulation algorithm.
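A small sketch of the delta-modulation idea behind the audio digitizer described above: each output bit says whether the reconstructed signal should step up or down, and an adaptive-slope variant grows the step size during runs of identical bits. The specific adaptation rule (grow on a repeated bit, shrink on a change) and the step bounds are illustrative assumptions, not the Orbiter design:

```python
import math

def adm_encode(samples, step0=0.05, grow=2.0, shrink=0.5, step_min=0.01, step_max=1.0):
    """Adaptive-slope delta modulation: emit one bit per input sample."""
    bits, approx, step, last_bit = [], 0.0, step0, None
    for s in samples:
        bit = 1 if s >= approx else 0
        step = min(step * grow, step_max) if bit == last_bit else max(step * shrink, step_min)
        approx += step if bit else -step
        bits.append(bit)
        last_bit = bit
    return bits

def adm_decode(bits, step0=0.05, grow=2.0, shrink=0.5, step_min=0.01, step_max=1.0):
    """Mirror of the encoder; each bit only nudges the running estimate."""
    out, approx, step, last_bit = [], 0.0, step0, None
    for bit in bits:
        step = min(step * grow, step_max) if bit == last_bit else max(step * shrink, step_min)
        approx += step if bit else -step
        out.append(approx)
        last_bit = bit
    return out

audio = [0.6 * math.sin(2 * math.pi * 0.02 * k) for k in range(200)]
rec = adm_decode(adm_encode(audio))
print(max(abs(a - b) for a, b in zip(audio, rec)))   # reconstruction error of this sketch
```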
Layered video transmission over multirate DS-CDMA wireless systems
NASA Astrophysics Data System (ADS)
Kondi, Lisimachos P.; Srinivasan, Deepika; Pados, Dimitris A.; Batalama, Stella N.
2003-05-01
In this paper, we consider the transmission of video over wireless direct-sequence code-division multiple access (DS-CDMA) channels. A layered (scalable) video source codec is used and each layer is transmitted over a different CDMA channel. Spreading codes with different lengths are allowed for each CDMA channel (multirate CDMA). Thus, a different number of chips per bit can be used for the transmission of each scalable layer. For a given fixed energy value per chip and chip rate, the selection of a spreading code length affects the transmitted energy per bit and bit rate for each scalable layer. An MPEG-4 source encoder is used to provide a two-layer SNR scalable bitstream. Each of the two layers is channel-coded using Rate-Compatible Punctured Convolutional (RCPC) codes. Then, the data are interleaved, spread, carrier-modulated and transmitted over the wireless channel. A multipath Rayleigh fading channel is assumed. At the other end, we assume the presence of an antenna array receiver. After carrier demodulation, multiple-access-interference suppressing despreading is performed using space-time auxiliary vector (AV) filtering. The choice of the AV receiver is dictated by realistic channel fading rates that limit the data record available for receiver adaptation and redesign. Indeed, AV filter short-data-record estimators have been shown to exhibit superior bit-error-rate performance in comparison with LMS, RLS, SMI, or 'multistage nested Wiener' adaptive filter implementations. Our experimental results demonstrate the effectiveness of multirate DS-CDMA systems for wireless video transmission.
NASA Astrophysics Data System (ADS)
El-Shafai, W.; El-Bakary, E. M.; El-Rabaie, S.; Zahran, O.; El-Halawany, M.; Abd El-Samie, F. E.
2017-06-01
Three-Dimensional Multi-View Video (3D-MVV) transmission over wireless networks suffers from Macro-Blocks losses due to either packet dropping or fading-motivated bit errors. Thus, the robust performance of 3D-MVV transmission schemes over wireless channels becomes a recent considerable hot research issue due to the restricted resources and the presence of severe channel errors. The 3D-MVV is composed of multiple video streams shot by several cameras around a single object, simultaneously. Therefore, it is an urgent task to achieve high compression ratios to meet future bandwidth constraints. Unfortunately, the highly-compressed 3D-MVV data becomes more sensitive and vulnerable to packet losses, especially in the case of heavy channel faults. Thus, in this paper, we suggest the application of a chaotic Baker interleaving approach with equalization and convolution coding for efficient Singular Value Decomposition (SVD) watermarked 3D-MVV transmission over an Orthogonal Frequency Division Multiplexing wireless system. Rayleigh fading and Additive White Gaussian Noise are considered in the real scenario of 3D-MVV transmission. The SVD watermarked 3D-MVV frames are primarily converted to their luminance and chrominance components, which are then converted to binary data format. After that, chaotic interleaving is applied prior to the modulation process. It is used to reduce the channel effects on the transmitted bit streams and it also adds a degree of encryption to the transmitted 3D-MVV frames. To test the performance of the proposed framework; several simulation experiments on different SVD watermarked 3D-MVV frames have been executed. The experimental results show that the received SVD watermarked 3D-MVV frames still have high Peak Signal-to-Noise Ratios and watermark extraction is possible in the proposed framework.
Betz, J.W.; Blanco, M.A.; Cahn, C.R.; Dafesh, P.A.; Hegarty, C.J.; Hudnut, K.W.; Kasemsri, V.; Keegan, R.; Kovach, K.; Lenahan, L.S.; Ma, H.H.; Rushanan, J.J.; Sklar, D.; Stansell, T.A.; Wang, C.C.; Yi, S.K.
2006-01-01
Detailed design of the modernized L1 civil signal (L1C) has been completed, and the resulting draft Interface Specification IS-GPS-800 was released in Spring 2006. The novel characteristics of the optimized L1C signal design provide advanced capabilities while offering to receiver designers considerable flexibility in how to use these capabilities. L1C provides a number of advanced features, including: 75% of power in a pilot component for enhanced signal tracking, advanced Weil-based spreading codes, an overlay code on the pilot that provides data message synchronization, support for improved reading of clock and ephemeris by combining message symbols across messages, advanced forward error control coding, and data symbol interleaving to combat fading. The resulting design offers receiver designers the opportunity to obtain unmatched performance in many ways. This paper describes the design of L1C. A summary of L1C's background and history is provided. The signal description then proceeds with the overall signal structure consisting of a pilot component and a carrier component. The new L1C spreading code family is described, along with the logic used for generating these spreading codes. Overlay codes on the pilot channel are also described, as is the logic used for generating the overlay codes. Spreading modulation characteristics are summarized. The data message structure is also presented, showing the format for providing time, ephemeris, and system data to users, along with features that enable receivers to perform code combining. Encoding of rapidly changing time bits is described, as are the Low Density Parity Check codes used for forward error control of slowly changing time bits, clock, ephemeris, and system data. The structure of the interleaver is also presented. A summary of L1C's unique features and their benefits is provided, along with a discussion of the plan for L1C implementation.
Zhang, Yequn; Arabaci, Murat; Djordjevic, Ivan B
2012-04-09
Leveraging the advanced coherent optical communication technologies, this paper explores the feasibility of using four-dimensional (4D) nonbinary LDPC-coded modulation (4D-NB-LDPC-CM) schemes for long-haul transmission in future optical transport networks. In contrast to our previous works on 4D-NB-LDPC-CM which considered amplified spontaneous emission (ASE) noise as the dominant impairment, this paper undertakes transmission in a more realistic optical fiber transmission environment, taking into account impairments due to dispersion effects, nonlinear phase noise, Kerr nonlinearities, and stimulated Raman scattering in addition to ASE noise. We first reveal the advantages of using 4D modulation formats in LDPC-coded modulation instead of conventional two-dimensional (2D) modulation formats used with polarization-division multiplexing (PDM). Then we demonstrate that 4D LDPC-coded modulation schemes with nonbinary LDPC component codes significantly outperform not only their conventional PDM-2D counterparts but also the corresponding 4D bit-interleaved LDPC-coded modulation (4D-BI-LDPC-CM) schemes, which employ binary LDPC codes as component codes. We also show that the transmission reach improvement offered by the 4D-NB-LDPC-CM over 4D-BI-LDPC-CM increases as the underlying constellation size and hence the spectral efficiency of transmission increases. Our results suggest that 4D-NB-LDPC-CM can be an excellent candidate for long-haul transmission in next-generation optical networks.
A 9-Bit 50 MSPS Quadrature Parallel Pipeline ADC for Communication Receiver Application
NASA Astrophysics Data System (ADS)
Roy, Sounak; Banerjee, Swapna
2018-03-01
This paper presents the design and implementation of a pipeline Analog-to-Digital Converter (ADC) for superheterodyne receiver application. Several enhancement techniques have been applied in implementing the ADC, in order to relax the target specifications of its building blocks. The concepts of time interleaving and double sampling have been used simultaneously to enhance the sampling speed and to reduce the number of amplifiers used in the ADC. Removal of a front-end sample-and-hold amplifier is possible by employing dynamic comparators with switched-capacitor based comparison of the input signal and reference voltage. Each module of the ADC comprises two 2.5-bit stages followed by two 1.5-bit stages and a 3-bit flash stage. Four such pipeline ADC modules are time interleaved using two pairs of non-overlapping clock signals. These two pairs of clock signals are in phase quadrature with each other. Hence the term quadrature parallel pipeline ADC has been used. These configurations ensure that the entire ADC contains only eight operational-transconductance amplifiers. The ADC is implemented in a 0.18-μm CMOS process with a supply voltage of 1.8 V. The prototype is tested at sampling frequencies of 50 and 75 MSPS, producing an Effective Number of Bits (ENOB) of 6.86 and 6.11 bits, respectively. At peak sampling speed, the core ADC consumes only 65 mW of power.
Compact storage of medical images with patient information.
Acharya, R; Anand, D; Bhat, S; Niranjan, U C
2001-12-01
Digital watermarking is a technique of hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images to reduce storage and transmission overheads. The text data are encrypted before interleaving with images to ensure greater security. The graphical signals are compressed and subsequently interleaved with the image. Differential pulse-code-modulation and adaptive-delta-modulation techniques are employed for data compression and encryption, and results are tabulated for a specific example.
High-Throughput Bit-Serial LDPC Decoder LSI Based on Multiple-Valued Asynchronous Interleaving
NASA Astrophysics Data System (ADS)
Onizawa, Naoya; Hanyu, Takahiro; Gaudet, Vincent C.
This paper presents a high-throughput bit-serial low-density parity-check (LDPC) decoder that uses an asynchronous interleaver. Since consecutive log-likelihood message values on the interleaver are similar, node computations are continuously performed by using the most recently arrived messages without significantly affecting bit-error rate (BER) performance. In the asynchronous interleaver, each message's arrival rate is based on the delay due to the wire length, so that the decoding throughput is not restricted by the worst-case latency, which results in a higher average rate of computation. Moreover, the use of a multiple-valued data representation makes it possible to multiplex control signals and data from mutual nodes, thus minimizing the number of handshaking steps in the asynchronous interleaver and eliminating the clock signal entirely. As a result, the decoding throughput becomes 1.3 times faster than that of a bit-serial synchronous decoder under a 90nm CMOS technology, at a comparable BER.
The use of interleaving for reducing radio loss in trellis-coded modulation systems
NASA Technical Reports Server (NTRS)
Divsalar, D.; Simon, M. K.
1989-01-01
It is demonstrated how the use of interleaving/deinterleaving in trellis-coded modulation (TCM) systems can reduce the signal-to-noise ratio loss due to imperfect carrier demodulation references. Both the discrete carrier (phase-locked loop) and suppressed carrier (Costas loop) cases are considered and the differences between the two are clearly demonstrated by numerical results. These results are of great importance for future communication links to the Deep Space Network (DSN), especially from high Earth orbiters, which may be bandwidth limited.
Performance of convolutional codes on fading channels typical of planetary entry missions
NASA Technical Reports Server (NTRS)
Modestino, J. W.; Mui, S. Y.; Reale, T. J.
1974-01-01
The performance of convolutional codes in fading channels typical of the planetary entry channel is examined in detail. The signal fading is due primarily to turbulent atmospheric scattering of the RF signal transmitted from an entry probe through a planetary atmosphere. Short constraint length convolutional codes are considered in conjunction with binary phase-shift keyed modulation and Viterbi maximum likelihood decoding, and for longer constraint length codes sequential decoding utilizing both the Fano and Zigangirov-Jelinek (ZJ) algorithms is considered. Careful consideration is given to the modeling of the channel in terms of a few meaningful parameters which can be correlated closely with theoretical propagation studies. For short constraint length codes the bit error probability performance is investigated as a function of E_b/N_0, parameterized by the fading channel parameters. For longer constraint length codes the effect of the fading channel parameters on the computational requirements of both the Fano and ZJ algorithms is examined. The effects of simple block interleaving in combating the memory of the channel are explored using both an analytic approach and digital computer simulation.
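As a rough illustration of the block interleaving evaluated here, the following Python sketch (dimensions illustrative, not from the report) shows how a row-column interleaver disperses a burst of channel errors before decoding:

import numpy as np

def block_interleave(bits: np.ndarray, rows: int, cols: int) -> np.ndarray:
    # Write row-wise, read column-wise: consecutive channel symbols come from
    # code-stream positions 'cols' apart, so a burst of up to 'rows' channel
    # errors lands on code symbols separated by 'cols' positions.
    assert bits.size == rows * cols
    return bits.reshape(rows, cols).T.flatten()

def block_deinterleave(bits: np.ndarray, rows: int, cols: int) -> np.ndarray:
    return bits.reshape(cols, rows).T.flatten()

coded = np.arange(24)               # stand-in for convolutionally coded symbols
tx = block_interleave(coded, 4, 6)
rx = tx.copy()
rx[8:12] = -1                       # a burst of 4 consecutive channel errors ...
# ... is dispersed after deinterleaving, so the Viterbi decoder sees isolated errors
print(block_deinterleave(rx, 4, 6))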
Trellis coded modulation for 4800-9600 bps transmission over a fading mobile satellite channel
NASA Technical Reports Server (NTRS)
Divsalar, D.; Simon, M. K.
1986-01-01
The combination of trellis coding and multiple phase-shift-keyed (MPSK) signalling with the addition of asymmetry to the signal set is discussed with regard to its suitability as a modulation/coding scheme for the fading mobile satellite channel. For MPSK, introducing nonuniformity (asymmetry) into the spacing between signal points in the constellation buys a further improvement in performance over that achievable with trellis coded symmetric MPSK, all this without increasing average or peak power, or changing the bandwidth constraints imposed on the system. Whereas previous contributions have considered the performance of trellis coded modulation transmitted over an additive white Gaussian noise (AWGN) channel, the emphasis in the paper is on the performance of trellis coded MPSK in the fading environment. The results will be obtained by using a combination of analysis and simulation. It will be assumed that the effect of the fading on the phase of the received signal is fully compensated for either by tracking it with some form of phase-locked loop or with pilot tone calibration techniques. Thus, results will reflect only the degradation due to the effect of the fading on the amplitude of the received signal. Also, we shall consider only the case where interleaving/deinterleaving is employed to further combat the fading. This allows for considerable simplification of the analysis and is of great practical interest. Finally, the impact of the availability of channel state information on average bit error probability performance is assessed.
Analysis of error-correction constraints in an optical disk.
Roberts, J D; Ryley, A; Jones, D M; Burke, D
1996-07-10
The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
Analysis of error-correction constraints in an optical disk
NASA Astrophysics Data System (ADS)
Roberts, Jonathan D.; Ryley, Alan; Jones, David M.; Burke, David
1996-07-01
The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Takeshita, Oscar Y.; Cabral, Hermano A.; He, Jiali; White, Gregory S.
1997-01-01
Turbo coding using iterative SOVA decoding and M-ary differentially coherent or non-coherent modulation can provide an effective coding modulation solution: (1) Energy efficient with relatively simple SOVA decoding and small packet lengths, depending on BEP required; (2) Low number of decoding iterations required; and (3) Robustness in fading with channel interleaving.
NASA Astrophysics Data System (ADS)
Athron, Peter; Balázs, Csaba; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Gonzalo, Tomás E.; Kvellestad, Anders; McKay, James; Putze, Antje; Rogan, Chris; Scott, Pat; Weniger, Christoph; White, Martin
2018-01-01
We present the GAMBIT modules SpecBit, DecayBit and PrecisionBit. Together they provide a new framework for linking publicly available spectrum generators, decay codes and other precision observable calculations in a physically and statistically consistent manner. This allows users to automatically run various combinations of existing codes as if they are a single package. The modular design allows software packages fulfilling the same role to be exchanged freely at runtime, with the results presented in a common format that can easily be passed to downstream dark matter, collider and flavour codes. These modules constitute an essential part of the broader GAMBIT framework, a major new software package for performing global fits. In this paper we present the observable calculations, data, and likelihood functions implemented in the three modules, as well as the conventions and assumptions used in interfacing them with external codes. We also present 3-BIT-HIT, a command-line utility for computing mass spectra, couplings, decays and precision observables in the MSSM, which shows how the three modules can easily be used independently of GAMBIT.
The 40 Gbps cascaded bit-interleaving PON
NASA Astrophysics Data System (ADS)
Vyncke, A.; Torfs, G.; Van Praet, C.; Verbeke, M.; Duque, A.; Suvakovic, D.; Chow, H. K.; Yin, X.
2015-12-01
In this paper, a 40 Gbps cascaded bit-interleaving passive optical network (CBI-PON) is proposed to achieve power reduction in the network. Because of the massive number of devices in the access network, reducing the power consumption of this part of the network has a major impact on the total network power consumption. Starting from the proven BiPON technology, an extension to this concept is proposed to introduce multiple levels of bit-interleaving. The paper discusses the CBI protocol in detail, as well as an ASIC implementation of the required custom CBI Repeater and End-ONT. From the measurements of this first 40 Gbps ASIC prototype, power consumption reduction estimates are presented.
Least Reliable Bits Coding (LRBC) for high data rate satellite communications
NASA Technical Reports Server (NTRS)
Vanderaar, Mark; Wagner, Paul; Budinger, James
1992-01-01
An analysis and discussion of a bandwidth efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) vs. channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 = log2(8) × (8/9) information bits/s/Hz. Bit by bit coded and uncoded error probabilities with soft-decision information are determined. These are traded off against code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high speed commercial Very Large Scale Integration (VLSI) for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.
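The quoted 2.67 bits/s/Hz follows directly from the modulation order and the ensemble code rate; a quick Python check of the arithmetic (values taken from the abstract):

import math

bits_per_symbol = math.log2(8)     # 8PSK carries 3 coded bits per symbol
ensemble_rate = 8 / 9              # LRB Reed-Solomon ensemble code rate
spectral_efficiency = bits_per_symbol * ensemble_rate
print(f"{spectral_efficiency:.2f} information bits/s/Hz")   # 2.67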
Development and characterisation of FPGA modems using forward error correction for FSOC
NASA Astrophysics Data System (ADS)
Mudge, Kerry A.; Grant, Kenneth J.; Clare, Bradley A.; Biggs, Colin L.; Cowley, William G.; Manning, Sean; Lechner, Gottfried
2016-05-01
In this paper we report on the performance of a free-space optical communications (FSOC) modem implemented in FPGA, with data rate variable up to 60 Mbps. To combat the effects of atmospheric scintillation, a 7/8 rate low density parity check (LDPC) forward error correction is implemented along with custom bit and frame synchronisation and a variable length interleaver. We report on the systematic performance evaluation of an optical communications link employing the FPGA modems using a laboratory test-bed to simulate the effects of atmospheric turbulence. Log-normal fading is imposed onto the transmitted free-space beam using a custom LabVIEW program and an acoustic-optic modulator. The scintillation index, transmitted optical power and the scintillation bandwidth can all be independently varied allowing testing over a wide range of optical channel conditions. In particular, bit-error-ratio (BER) performance for different interleaver lengths is investigated as a function of the scintillation bandwidth. The laboratory results are compared to field measurements over 1.5km.
Code division multiple access signaling for modulated reflector technology
Briles, Scott D [Los Alamos, NM
2012-05-01
A method and apparatus for utilizing code division multiple access in modulated reflectance transmissions comprises the steps of generating a phase-modulated reflectance data bit stream; modifying the modulated reflectance data bit stream; providing the modified modulated reflectance data bit stream to a switch that connects an antenna to an infinite impedance in the event a "+1" is to be sent, or connects the antenna to ground in the event a "0" or a "-1" is to be sent.
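A minimal Python sketch of the chip-to-switch mapping the claims describe, with an illustrative spreading code that is not taken from the patent:

def antenna_switch_state(chip: int) -> str:
    # Map a spread (phase-modulated) chip value to the reflector switch state
    # described in the abstract: +1 -> antenna open (infinite impedance),
    # 0 or -1 -> antenna connected to ground.
    if chip == +1:
        return "OPEN"
    if chip in (0, -1):
        return "GROUND"
    raise ValueError("chip must be +1, 0, or -1")

code = [+1, -1, +1, +1, -1]          # illustrative short CDMA spreading code
data_bit = +1
chips = [data_bit * c for c in code]
print([antenna_switch_state(c) for c in chips])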
Hybrid concatenated codes and iterative decoding
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Pollara, Fabrizio (Inventor)
2000-01-01
Several improved turbo code apparatuses and methods. The invention encompasses several classes: (1) A data source is applied to two or more encoders with an interleaver between the source and each of the second and subsequent encoders. Each encoder outputs a code element which may be transmitted or stored. A parallel decoder provides the ability to decode the code elements to derive the original source information d without use of a received data signal corresponding to d. The output may be coupled to a multilevel trellis-coded modulator (TCM). (2) A data source d is applied to two or more encoders with an interleaver between the source and each of the second and subsequent encoders. Each of the encoders outputs a code element. In addition, the original data source d is output from the encoder. All of the output elements are coupled to a TCM. (3) At least two data sources are applied to two or more encoders with an interleaver between each source and each of the second and subsequent encoders. The output may be coupled to a TCM. (4) At least two data sources are applied to two or more encoders with at least two interleavers between each source and each of the second and subsequent encoders. (5) At least one data source is applied to one or more serially linked encoders through at least one interleaver. The output may be coupled to a TCM. The invention includes a novel way of terminating a turbo coder.
Performance of convolutionally encoded noncoherent MFSK modem in fading channels
NASA Technical Reports Server (NTRS)
Modestino, J. W.; Mui, S. Y.
1976-01-01
The performance of a convolutionally encoded noncoherent multiple-frequency shift-keyed (MFSK) modem utilizing Viterbi maximum-likelihood decoding and operating on a fading channel is described. Both the lognormal and classical Rician fading channels are considered for both slow and time-varying channel conditions. Primary interest is in the resulting bit error rate as a function of the ratio between the energy per transmitted information bit and noise spectral density, parameterized by both the fading channel and code parameters. Fairly general upper bounds on bit error probability are provided and compared with simulation results in the two extremes of zero and infinite channel memory. The efficacy of simple block interleaving in combating channel memory effects is thoroughly explored. Both quantized and unquantized receiver outputs are considered.
The performance of trellis coded multilevel DPSK on a fading mobile satellite channel
NASA Technical Reports Server (NTRS)
Simon, Marvin K.; Divsalar, Dariush
1987-01-01
The performance of trellis coded multilevel differential phase-shift-keying (MDPSK) over Rician and Rayleigh fading channels is discussed. For operation at L-Band, this signalling technique leads to a more robust system than the coherent system with dual pilot tone calibration previously proposed for UHF. The results are obtained using a combination of analysis and simulation. The analysis shows that the design criterion for trellis codes to be operated on fading channels with interleaving/deinterleaving is no longer free Euclidean distance. The correct design criterion for optimizing bit error probability of trellis coded MDPSK over fading channels will be presented along with examples illustrating its application.
NASA Astrophysics Data System (ADS)
Bhooplapur, Sharad; Akbulut, Mehmetkan; Quinlan, Franklyn; Delfyett, Peter J.
2010-04-01
A novel scheme for recognition of electronic bit-sequences is demonstrated. Two electronic bit-sequences that are to be compared are each mapped to a unique code from a set of Walsh-Hadamard codes. The codes are then encoded in parallel on the spectral phase of the frequency comb lines from a frequency-stabilized mode-locked semiconductor laser. Phase encoding is achieved by using two independent spatial light modulators based on liquid crystal arrays. Encoded pulses are compared using interferometric pulse detection and differential balanced photodetection. Orthogonal codes eight bits long are compared, and matched codes are successfully distinguished from mismatched codes with very low error rates of around 10^-18. This technique has potential for high-speed, high-accuracy recognition of bit-sequences, with applications in keyword searches and internet protocol packet routing.
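The matched/mismatched decision rests on the orthogonality of Walsh-Hadamard codes; a Python sketch of the mapping and a correlation test (code length and indices are illustrative):

import numpy as np
from scipy.linalg import hadamard

H = hadamard(8)                      # rows are the 8-chip Walsh-Hadamard codes

def code_for(sequence_id: int) -> np.ndarray:
    # Each bit-sequence to be compared is assigned one row of H.
    return H[sequence_id]

a, b = code_for(3), code_for(3)      # the same sequence maps to the same code
c = code_for(5)                      # a different sequence maps to an orthogonal code
print(int(a @ b))   # 8  -> matched (full correlation)
print(int(a @ c))   # 0  -> mismatched (orthogonal codes)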
2001-09-01
"Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE ... In this dissertation, the bit error rates for serially concatenated convolutional codes (SCCC) for both BPSK and DPSK modulation with ...
Coded throughput performance simulations for the time-varying satellite channel. M.S. Thesis
NASA Technical Reports Server (NTRS)
Han, LI
1995-01-01
The design of a reliable satellite communication link involving the data transfer from a small, low-orbit satellite to a ground station, but through a geostationary satellite, was examined. In such a scenario, the received signal power to noise density ratio increases as the transmitting low-orbit satellite comes into view, and then decreases as it departs, resulting in a short-duration, time-varying communication link. The optimal values of the small satellite antenna beamwidth, signaling rate, modulation scheme and the theoretical link throughput (in bits per day) have been determined. The goal of this thesis is to choose a practical coding scheme which maximizes the daily link throughput while satisfying a prescribed probability of error requirement. We examine the throughput of both fixed rate and variable rate concatenated forward error correction (FEC) coding schemes for the additive white Gaussian noise (AWGN) channel, and then examine the effect of radio frequency interference (RFI) on the best coding scheme among them. Interleaving is used to mitigate degradation due to RFI. It was found that the variable rate concatenated coding scheme could achieve 74 percent of the theoretical throughput, equivalent to 1.11 Gbits/day based on the cutoff rate R_0. For comparison, 87 percent is achievable for the AWGN-only case.
NASA Astrophysics Data System (ADS)
Weng, Yi; Wang, Junyi; He, Xuan; Pan, Zhongqi
2018-02-01
The Nyquist spectral shaping techniques facilitate a promising solution to enhance spectral efficiency (SE) and further reduce the cost-per-bit in high-speed wavelength-division multiplexing (WDM) transmission systems. Hypothetically, any Nyquist WDM signals with arbitrary shapes can be generated by the use of digital signal processing (DSP) based electrical filters (E-filters). Nonetheless, in actual 100G/200G coherent systems, the performance as well as the DSP complexity are increasingly restricted by cost and power consumption. Hence it is indispensable to optimize the DSP to accomplish the preferred performance at the least complexity. In this paper, we systematically investigated the minimum requirements and challenges of Nyquist WDM signal generation, particularly for higher-order modulation formats, including 16 quadrature amplitude modulation (QAM) or 64QAM. A variety of interrelated parameters, such as channel spacing and roll-off factor, have been evaluated to optimize the requirements of the digital-to-analog converter (DAC) resolution and transmitter E-filter bandwidth. The impact of spectral pre-emphasis has been predominantly enhanced via the proposed interleaved DAC architecture by at least 4%, hence reducing the required optical signal to noise ratio (OSNR) at a bit error rate (BER) of 10^-3 by over 0.45 dB at a channel spacing of 1.05 symbol rate and an optimized roll-off factor of 0.1. Furthermore, the requirements of sampling rate for different types of super-Gaussian E-filters are discussed for 64QAM Nyquist WDM transmission systems. Finally, the impact of the non-50% duty cycle error between sub-DACs upon the quality of the generated signals for the interleaved DAC structure has been analyzed.
Least reliable bits coding (LRBC) for high data rate satellite communications
NASA Technical Reports Server (NTRS)
Vanderaar, Mark; Budinger, James; Wagner, Paul
1992-01-01
LRBC, a bandwidth efficient multilevel/multistage block-coded modulation technique, is analyzed. LRBC uses simple multilevel component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Soft-decision multistage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Analytical expressions and tight performance bounds are used to show that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of BPSK. The relative simplicity of Galois field algebra vs the Viterbi algorithm and the availability of high-speed commercial VLSI for block codes indicates that LRBC using block codes is a desirable method for high data rate implementations.
Convolutional code performance in planetary entry channels
NASA Technical Reports Server (NTRS)
Modestino, J. W.
1974-01-01
The planetary entry channel is modeled for communication purposes representing turbulent atmospheric scattering effects. The performance of short and long constraint length convolutional codes is investigated in conjunction with coherent BPSK modulation and Viterbi maximum likelihood decoding. Algorithms for sequential decoding are studied in terms of computation and/or storage requirements as a function of the fading channel parameters. The performance of the coded coherent BPSK system is compared with the coded incoherent MFSK system. Results indicate that: some degree of interleaving is required to combat time-correlated fading of the channel; only modest amounts of interleaving are required to approach the performance of the memoryless channel; additional propagation results are required on the phase perturbation process; and the incoherent MFSK system is superior when phase tracking errors are considered.
NASA Technical Reports Server (NTRS)
Hinds, Erold W. (Principal Investigator)
1996-01-01
This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
QCA Gray Code Converter Circuits Using LTEx Methodology
NASA Astrophysics Data System (ADS)
Mukherjee, Chiradeep; Panda, Saradindu; Mukhopadhyay, Asish Kumar; Maji, Bansibadan
2018-07-01
The Quantum-dot Cellular Automata (QCA) is a prominent nanotechnology paradigm considered to continue computation into the deep sub-micron regime. QCA realizations of several multilevel circuits of the arithmetic logic unit have been introduced in recent years. However, although high fan-in Binary to Gray (B2G) and Gray to Binary (G2B) Converters exist in processor-based architectures, no attention has been paid to the QCA instantiation of the Gray Code Converters, which are anticipated to be used in 8-bit, 16-bit, 32-bit or even wider addressable machines with Gray Code Addressing schemes. In this work the two-input Layered T module is presented to exploit the operation of an Exclusive-OR Gate (namely the LTEx module) as an elemental block. The defect-tolerant analysis of the two-input LTEx module has been carried out to establish the scalability and reproducibility of the LTEx module in complex circuits. Novel formulations exploiting the operability of the LTEx module are proposed to instantiate area-delay efficient B2G and G2B Converters which can be used exclusively in Gray Code Addressing schemes. Moreover, this work formulates QCA design metrics such as O-Cost, Effective area, Delay and Cost α for the n-bit converter layouts.
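The converters themselves implement the standard binary/Gray mappings; a compact Python reference for the n-bit conversions the layouts realize:

def binary_to_gray(b: int) -> int:
    # MSB passes through; each lower Gray bit is the XOR of adjacent binary bits.
    return b ^ (b >> 1)

def gray_to_binary(g: int) -> int:
    # Each binary bit is the running XOR of all higher Gray bits.
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# Sanity check over all 8-bit values
assert all(gray_to_binary(binary_to_gray(n)) == n for n in range(256))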
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1991-01-01
Shannon's capacity bound shows that coding can achieve large reductions in the required signal-to-noise ratio per information bit (E_b/N_0, where E_b is the energy per bit and N_0/2 is the double-sided noise density) in comparison to uncoded schemes. For bandwidth efficiencies of 2 bit/sym or greater, these improvements were obtained through the use of Trellis Coded Modulation and Block Coded Modulation. A method of obtaining these high efficiencies using multidimensional Multiple Phase Shift Keying (MPSK) and Quadrature Amplitude Modulation (QAM) signal sets with trellis coding is described. These schemes have advantages in decoding speed, phase transparency, and coding gain in comparison to other trellis coding schemes. Finally, a general parity check equation for rotationally invariant trellis codes is introduced from which non-linear codes for two dimensional MPSK and QAM signal sets are found. These codes are fully transparent to all rotations of the signal set.
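For reference, the Shannon bound mentioned here sets a minimum E_b/N_0 that grows with bandwidth efficiency; a quick Python evaluation (assuming Nyquist signaling, so bit/sym corresponds to bit/s/Hz):

import math

def min_ebno_db(eta: float) -> float:
    # AWGN Shannon bound: Eb/N0 >= (2**eta - 1) / eta, eta in bits/s/Hz
    return 10 * math.log10((2 ** eta - 1) / eta)

for eta in (1, 2, 3):
    print(f"eta = {eta} bit/sym: Eb/N0 >= {min_ebno_db(eta):.2f} dB")
# eta = 2 bit/sym requires at least ~1.76 dB; uncoded QPSK needs far more,
# which is the gap that trellis- and block-coded modulation close.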
Outage probability of a relay strategy allowing intra-link errors utilizing Slepian-Wolf theorem
NASA Astrophysics Data System (ADS)
Cheng, Meng; Anwar, Khoirul; Matsumoto, Tad
2013-12-01
In conventional decode-and-forward (DF) one-way relay systems, a data block received at the relay node is discarded, if the information part is found to have errors after decoding. Such errors are referred to as intra-link errors in this article. However, in a setup where the relay forwards data blocks despite possible intra-link errors, the two data blocks, one from the source node and the other from the relay node, are highly correlated because they were transmitted from the same source. In this article, we focus on the outage probability analysis of such a relay transmission system, where source-destination and relay-destination links, Link 1 and Link 2, respectively, are assumed to suffer from the correlated fading variation due to block Rayleigh fading. The intra-link is assumed to be represented by a simple bit-flipping model, where some of the information bits recovered at the relay node are the flipped version of their corresponding original information bits at the source. The correlated bit streams are encoded separately by the source and relay nodes, and transmitted block-by-block to a common destination using different time slots, where the information sequence transmitted over Link 2 may be a noise-corrupted interleaved version of the original sequence. The joint decoding takes place at the destination by exploiting the correlation knowledge of the intra-link (source-relay link). It is shown that the outage probability of the proposed transmission technique can be expressed by a set of double integrals over the admissible rate range, given by the Slepian-Wolf theorem, with respect to the probability density function ( pdf) of the instantaneous signal-to-noise power ratios (SNR) of Link 1 and Link 2. It is found that, with the Slepian-Wolf relay technique, so far as the correlation ρ of the complex fading variation is | ρ|<1, the 2nd order diversity can be achieved only if the two bit streams are fully correlated. This indicates that the diversity order exhibited in the outage curve converges to 1 when the bit streams are not fully correlated. Moreover, the Slepian-Wolf outage probability is proved to be smaller than that of the 2nd order maximum ratio combining (MRC) diversity, if the average SNRs of the two independent links are the same. Exact as well as asymptotic expressions of the outage probability are theoretically derived in the article. In addition, the theoretical outage results are compared with the frame-error-rate (FER) curves, obtained by a series of simulations for the Slepian-Wolf relay system based on bit-interleaved coded modulation with iterative detection (BICM-ID). It is shown that the FER curves exhibit the same tendency as the theoretical results.
NASA Technical Reports Server (NTRS)
1981-01-01
A hardware integrated convolutional coding/symbol interleaving and integrated symbol deinterleaving/Viterbi decoding simulation system is described. Validation on the system of the performance of the TDRSS S-band return link with BPSK modulation, operating in a pulsed RFI environment is included. The system consists of three components, the Fast Linkabit Error Rate Tester (FLERT), the Transition Probability Generator (TPG), and a modified LV7017B which includes rate 1/3 capability as well as a periodic interleaver/deinterleaver. Operating and maintenance manuals for each of these units are included.
Multiple trellis coded modulation
NASA Technical Reports Server (NTRS)
Simon, Marvin K. (Inventor); Divsalar, Dariush (Inventor)
1990-01-01
A technique for designing trellis codes to minimize bit error performance for a fading channel. The invention provides a criterion which may be used in the design of such codes which is significantly different from that used for additive white Gaussian noise channels. The method of multiple trellis coded modulation of the present invention comprises the steps of: (a) coding b bits of input data into s intermediate outputs; (b) grouping said s intermediate outputs into k groups of s_i intermediate outputs each, where the sum of all s_i is equal to s and k is equal to at least 2; (c) mapping each of said k groups of intermediate outputs into one of a plurality of symbols in accordance with a plurality of modulation schemes, one for each group, such that the first group is mapped in accordance with a first modulation scheme and the second group is mapped in accordance with a second modulation scheme; and (d) outputting each of said symbols to provide k output symbols for each b bits of input data.
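A rough Python sketch of steps (b)-(d), with placeholder encoder outputs and placeholder constellations (the patent does not specify these; the group sizes and modulation schemes below are purely illustrative):

import numpy as np

def mtcm_map(intermediate: list, group_sizes: list, constellations: list) -> list:
    # Steps (b)-(d): split the s encoder outputs into k groups of sizes s_i
    # (sum of s_i equals s) and map each group to a symbol from that group's
    # own modulation scheme.
    assert sum(group_sizes) == len(intermediate)
    symbols, start = [], 0
    for size, constellation in zip(group_sizes, constellations):
        group = intermediate[start:start + size]
        index = int("".join(map(str, group)), 2)     # group bits -> constellation index
        symbols.append(constellation[index])
        start += size
    return symbols

qpsk = np.exp(1j * np.pi / 2 * np.arange(4))         # placeholder scheme for group 1
psk8 = np.exp(1j * np.pi / 4 * np.arange(8))         # placeholder scheme for group 2
# s = 5 intermediate outputs from the (unspecified) encoder, k = 2 groups
print(mtcm_map([1, 0, 1, 1, 0], [2, 3], [qpsk, psk8]))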
Hybrid spread spectrum radio system
Smith, Stephen F.; Dress, William B.
2010-02-02
Systems and methods are described for hybrid spread spectrum radio systems. A method includes modulating a signal by utilizing a subset of bits from a pseudo-random code generator to control an amplification circuit that provides a gain to the signal. Another method includes: modulating a signal by utilizing a subset of bits from a pseudo-random code generator to control a fast hopping frequency synthesizer; and fast frequency hopping the signal with the fast hopping frequency synthesizer, wherein multiple frequency hops occur within a single data-bit time.
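A minimal Python sketch of the two claimed operations, using a seeded random generator as a stand-in for the pseudo-random code generator; the gain steps and hop frequencies are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(seed=1)                  # stand-in for the PN code generator
gain_levels = np.array([0.5, 1.0, 2.0, 4.0])         # illustrative amplitude steps
hop_table = np.array([902.0, 904.0, 906.0, 908.0])   # illustrative hop frequencies, MHz

for _ in range(4):                                   # one iteration per hop interval
    pn_bits = rng.integers(0, 2, size=4)             # 4 PN bits drawn per interval
    gain = gain_levels[2 * pn_bits[0] + pn_bits[1]]  # one subset of bits sets the signal gain
    freq = hop_table[2 * pn_bits[2] + pn_bits[3]]    # another subset drives the hopping synthesizer
    print(f"gain x{gain}, hop to {freq} MHz")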
Modulation and coding for fast fading mobile satellite communication channels
NASA Technical Reports Server (NTRS)
Mclane, P. J.; Wittke, P. H.; Smith, W. S.; Lee, A.; Ho, P. K. M.; Loo, C.
1988-01-01
The performance of Gaussian baseband filtered minimum shift keying (GMSK) using differential detection in fast Rician fading is discussed, with a novel treatment of the inherent intersymbol interference (ISI) leading to an exact solution. Trellis-coded differential phase-shift keying (DPSK) with a convolutional interleaver is also considered. The channel is the Rician channel with the line-of-sight component subject to a lognormal transformation.
A bandwidth efficient coding scheme for the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
Pietrobon, Steven S.; Costello, Daniel J., Jr.
1991-01-01
As a demonstration of the performance capabilities of trellis codes using multidimensional signal sets, a Viterbi decoder was designed. The choice of code was based on two factors. The first factor was its application as a possible replacement for the coding scheme currently used on the Hubble Space Telescope (HST). The HST at present uses a rate 1/3, ν = 6 (with 2^ν = 64 states) convolutional code with Binary Phase Shift Keying (BPSK) modulation. With the modulator restricted to 3 Msym/s, this implies a data rate of only 1 Mbit/s, since the bandwidth efficiency K = 1/3 bit/sym. This is a very bandwidth inefficient scheme, although the system has the advantage of simplicity and large coding gain. The basic requirement from NASA was for a scheme that has as large a K as possible. Since a satellite channel was being used, 8PSK modulation was selected. This allows a K of between 2 and 3 bit/sym. The next influencing factor was INTELSAT's intention of transmitting the SONET 155.52 Mbit/s standard data rate over the 72 MHz transponders on its satellites. This requires a bandwidth efficiency of around 2.5 bit/sym. A Reed-Solomon block code is used as an outer code to give very low bit error rates (BER). A 16-state, rate 5/6, 2.5 bit/sym, 4D-8PSK trellis code was selected. This code has reasonable complexity and has a coding gain of 4.8 dB compared to uncoded 8PSK (2). This trellis code also has the advantage that it is 45 deg rotationally invariant. This means that the decoder needs only to synchronize to one of the two naturally mapped 8PSK signals in the signal set.
The detection and extraction of interleaved code segments
NASA Technical Reports Server (NTRS)
Rugaber, Spencer; Stirewalt, Kurt; Wills, Linda M.
1995-01-01
This project is concerned with a specific difficulty that arises when trying to understand and modify computer programs. In particular, it is concerned with the phenomenon of 'interleaving,' in which one section of a program accomplishes several purposes, and disentangling the code responsible for each purpose is difficult. Unraveling interleaved code involves discovering the purpose of each strand of computation, as well as understanding why the programmer decided to interleave the strands. Increased understanding improves the productivity and quality of software maintenance, enhancement, and documentation activities. The goal of the project is to characterize the phenomenon of interleaving as a prerequisite for building tools to detect and extract interleaved code fragments.
Accumulate Repeat Accumulate Coded Modulation
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
In this paper we propose an innovative coded modulation scheme called 'Accumulate Repeat Accumulate Coded Modulation' (ARA coded modulation). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes that are combined with high level modulation. Thus at the decoder belief propagation can be used for iterative decoding of ARA coded modulation on a graph, provided a demapper transforms the received in-phase and quadrature samples to reliability of the bits.
Signal Detection and Frame Synchronization of Multiple Wireless Networking Waveforms
2007-09-01
punctured to obtain coding rates of 2/3 and 3/4. Convolutional forward error correction coding is used to detect and correct bit ... likely to be isolated and be correctable by the convolutional decoder. ... [table fragment: 16-QAM/64-QAM; convolutional code K = 7 (64 states); coding rates 1/2, 2/3, 3/4; channel spacing (MHz) 20 / 10] ... binary convolutional code. A shortened Reed-Solomon technique is employed first. The code is shortened depending upon the data
Han, Yaoqiang; Dang, Anhong; Ren, Yongxiong; Tang, Junxiong; Guo, Hong
2010-12-20
In free space optical communication (FSOC) systems, channel fading caused by atmospheric turbulence seriously degrades system performance. However, channel coding combined with diversity techniques can be exploited to mitigate channel fading. In this paper, based on an experimental study of the channel fading effects, we propose to use turbo product code (TPC) as the channel coding scheme, which features good resistance to burst errors and no error floor. However, channel coding alone cannot cope with burst errors caused by channel fading, so interleaving is also used. We investigate the efficiency of interleaving for different interleaving depths, and then determine the optimum interleaving depth for TPC. Finally, an experimental study of TPC with interleaving is demonstrated, and we show that TPC with interleaving can significantly mitigate channel fading in FSOC systems.
NASA Astrophysics Data System (ADS)
Wu, Tonggen; Ma, Jianxin
2017-12-01
This paper proposes an original scheme to generate the photonic dual-tone optical millimeter wave (MMW) carrying the 16-star quadrature-amplitude-modulation (QAM) signal via an optical phase modulator (PM) and an interleaver with adaptive photonic frequency-nonupling without phase precoding. To enable the generated optical vector MMW signal to resist the power fading effect caused by the fiber chromatic dispersion, the modulated -5th- and +4th-order sidebands are selected from the output of the PM, which is driven by the precoding 16-star QAM signal. The modulation index of the PM is optimized to gain the maximum opto-electrical conversion efficiency. A radio over fiber link is built by simulation, and the simulated constellations and the bit error rate graph demonstrate that the frequency-nonupling 16-star QAM MMW signal has good transmission performance. The simulation results agree well with our theoretical results.
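The relative strength of the two retained sidebands at the PM output is governed by Bessel functions of the modulation index; a small Python scan of the kind used to pick the drive level (the optimization criterion and the scan range are illustrative assumptions, not taken from the paper):

import numpy as np
from scipy.special import jv

# For a phase modulator driven at frequency f_m with modulation index m, the
# n-th optical sideband has relative amplitude J_n(m).  Keeping the -5th and
# +4th sidebands gives a beat note at 9 * f_m (frequency nonupling).
m = np.linspace(0.1, 10, 1000)
joint = np.abs(jv(-5, m) * jv(4, m))     # joint strength of the two kept sidebands
m_opt = m[np.argmax(joint)]
print(f"illustrative optimum modulation index ~ {m_opt:.2f}")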
NASA Astrophysics Data System (ADS)
Andreotti, Riccardo; Del Fiorentino, Paolo; Giannetti, Filippo; Lottici, Vincenzo
2016-12-01
This work proposes a distributed resource allocation (RA) algorithm for packet bit-interleaved coded OFDM transmissions in the uplink of heterogeneous networks (HetNets), characterized by small cells deployed over a macrocell area and sharing the same band. Every user allocates its transmission resources, i.e., bits per active subcarrier, coding rate, and power per subcarrier, to minimize the power consumption while both guaranteeing a target quality of service (QoS) and accounting for the interference inflicted by other users transmitting over the same band. The QoS consists of the number of information bits delivered in error-free packets per unit of time, or goodput (GP), estimated at the transmitter by resorting to an efficient effective SNR mapping technique. First, the RA problem is solved in the point-to-point case, thus deriving an approximate yet accurate closed-form expression for the power allocation (PA). Then, the interference-limited HetNet case is examined, where the RA problem is described as a non-cooperative game, providing a solution in terms of generalized Nash equilibrium. Thanks to the closed-form of the PA, the solution analysis is based on the best response concept. Hence, sufficient conditions for existence and uniqueness of the solution are analytically derived, along with a distributed algorithm capable of reaching the game equilibrium.
Flexible video conference system based on ASICs and DSPs
NASA Astrophysics Data System (ADS)
Hu, Qiang; Yu, Songyu
1995-02-01
In this paper, a video conference system we developed recently is presented. In this system the video codec is compatible with CCITT H.261, the audio codec is compatible with G.711 and G.722, and the channel interface circuit is designed according to CCITT H.221. Emphasis is given to the video codec, which is both flexible and robust. The video codec is based on LSI LOGIC Corporation's L64700 series video compression chipset. The main function blocks of H.261, such as DCT, motion estimation, VLC, and VLD, are performed by this chipset, but it is a bare chipset: no peripheral functions, such as a memory interface, are integrated into it, which makes system implementation considerably more difficult. To implement the frame buffer controller, a DSP (TMS320C25) and a group of GALs are used; SRAM is used as the current and previous frame buffers. The DSP is not only the controller of the frame buffer but also the controller of the whole video codec. Because of the use of the DSP, the architecture of the video codec is very flexible, and many system parameters can be reconfigured for different applications. The architecture of the whole video codec is a pipeline structure. In H.261, BCH(511,493) coding is recommended to work against random errors in transmission, but if a burst error occurs it causes serious degradation. To solve this problem, an interleaving method is used: the BCH code is interleaved before transmission and deinterleaved at the receiver so that the bit stream returns to its original order; the error bits of a burst are thereby distributed over several BCH words, which the BCH decoder is able to correct. Considering that extreme conditions may occur, a function block somewhat like a watchdog is implemented; it assures that the receiver can recover from errors no matter what serious error occurs in transmission. In developing the video conference system, a new synchronization problem had to be solved: the monitor at the receiver cannot easily be synchronized with the camera at the other side. A new method that solves this problem successfully is described in detail.
Modulation/demodulation techniques for satellite communications. Part 1: Background
NASA Technical Reports Server (NTRS)
Omura, J. K.; Simon, M. K.
1981-01-01
Basic characteristics of digital data transmission systems described include the physical communication links, the notion of bandwidth, FCC regulations, and performance measurements such as bit rates, bit error probabilities, throughputs, and delays. The error probability performance and spectral characteristics of various modulation/demodulation techniques commonly used or proposed for use in radio and satellite communication links are summarized. Forward error correction with block or convolutional codes is also discussed along with the important coding parameter, channel cutoff rate.
NAVAIR Portable Source Initiative (NPSI) Standard for Reusable Source Dataset Metadata (RSDM) V2.4
2012-09-26
defining a raster file format: <RasterFileFormat><FormatName>TIFF</FormatName><Order>BIP</Order><DataType>8-BIT_UNSIGNED</DataType> ... interleaved by line (BIL); Band interleaved by pixel (BIP). element RasterFileFormatType/DataType: type restriction of xsd:string
NASA Astrophysics Data System (ADS)
Fehenberger, Tobias
2018-02-01
This paper studies probabilistic shaping in a multi-span wavelength-division multiplexing optical fiber system with 64-ary quadrature amplitude modulation (QAM) input. In split-step fiber simulations and via an enhanced Gaussian noise model, three figures of merit are investigated: signal-to-noise ratio (SNR), achievable information rate (AIR) for capacity-achieving forward error correction (FEC) with bit-metric decoding, and the information rate achieved with low-density parity-check (LDPC) FEC. For the considered system parameters and different shaped input distributions, shaping is found to decrease the SNR by 0.3 dB yet simultaneously increase the AIR by up to 0.4 bit per 4D-symbol. The information rates of LDPC-coded modulation with shaped 64QAM input are improved by up to 0.74 bit per 4D-symbol, which is larger than the shaping gain when considering AIRs. This increase is attributed to the reduced coding gap of the higher-rate code that is used for decoding the nonuniform QAM input.
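Shaped QAM inputs of this kind are commonly drawn from a Maxwell-Boltzmann family (an assumption here; the abstract only says "shaped input distributions"); a Python sketch showing how the shaping parameter trades source entropy against average symbol energy for 64QAM:

import numpy as np

levels = np.arange(-7, 8, 2)                       # 8 amplitude levels per quadrature
X = np.array([a + 1j * b for a in levels for b in levels])   # unnormalized 64QAM

def mb_distribution(lam: float) -> np.ndarray:
    # Maxwell-Boltzmann shaping: P(x) proportional to exp(-lambda * |x|^2)
    p = np.exp(-lam * np.abs(X) ** 2)
    return p / p.sum()

for lam in (0.0, 0.02, 0.05):
    p = mb_distribution(lam)
    entropy = -np.sum(p * np.log2(p))              # source entropy, bits per 2D symbol
    power = np.sum(p * np.abs(X) ** 2)             # average symbol energy
    print(f"lambda={lam:.2f}: H={entropy:.2f} bit/2D, E={power:.1f}")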
NASA Astrophysics Data System (ADS)
Kong, Gyuyeol; Choi, Sooyong
2017-09-01
An enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding is proposed for four-level holographic data storage systems. While previous four-ary modulation codes focus on preventing maximum two-dimensional intersymbol interference patterns, the proposed four-ary modulation code aims at maximizing the coding gain for better bit error rate performance. To achieve significant coding gains from the four-ary modulation codes, we design a new 2/3 four-ary modulation code that enlarges the free distance on the trellis, found through extensive simulation. The free distance of the proposed four-ary modulation code is extended from 1.21 to 2.04 compared with that of the conventional four-ary modulation code. The simulation result shows that the proposed four-ary modulation code has more than 1 dB gain compared with the conventional four-ary modulation code.
Djordjevic, Ivan B
2007-08-06
We describe a coded power-efficient transmission scheme based on the repetition MIMO principle suitable for communication over the atmospheric turbulence channel, and determine its channel capacity. The proposed scheme employs Q-ary pulse-position modulation. We further study how to approach the channel capacity limits using low-density parity-check (LDPC) codes. Component LDPC codes are designed using the concept of pairwise-balanced designs. Contrary to several recent publications, bit-error rates and channel capacities are reported assuming non-ideal photodetection. The atmospheric turbulence channel is modeled using the Gamma-Gamma distribution function due to Al-Habash et al. Excellent bit-error rate performance improvement over the uncoded case is found.
Transmission and storage of medical images with patient information.
Acharya U, Rajendra; Subbanna Bhat, P; Kumar, Sathish; Min, Lim Choo
2003-07-01
Digital watermarking is a technique of hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images, to reduce storage and transmission overheads. The text data are encrypted before interleaving with images to ensure greater security. The graphical signals are interleaved with the image. Two types of error control coding techniques are proposed to enhance the reliability of transmission and storage of medical images interleaved with patient information. Transmission and storage scenarios are simulated with and without error control coding, and a qualitative as well as quantitative interpretation of the reliability enhancement resulting from the use of commonly used error control codes, such as repetition codes and the (7,4) Hamming code, is provided.
Multi-stage decoding of multi-level modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Costello, Daniel J., Jr.
1991-01-01
Various types of multi-stage decoding for multi-level modulation codes are investigated. It is shown that if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, it is shown that the difference in performance between suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and single-stage optimum soft-decision decoding of the code is very small: only a fraction of a dB loss in signal-to-noise ratio at a bit error rate (BER) of 10^-6.
NASA Technical Reports Server (NTRS)
Ingels, F.; Schoggen, W. O.
1981-01-01
Several methods for increasing bit transition densities in a data stream are summarized, discussed in detail, and compared against constraints imposed by the 2 MHz data link of the space shuttle high rate multiplexer unit. These methods include the use of alternate pulse code modulation waveforms, data stream modification by insertion, alternate bit inversion, differential encoding, error encoding, and the use of bit scramblers. The pseudo-random cover sequence generator was chosen for application to the 2 MHz data link of the space shuttle high rate multiplexer unit. This method is fully analyzed and a design implementation proposed.
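A minimal Python sketch of the cover-sequence idea: XORing the data with a pseudo-random sequence removes long transition-free runs, and the receiver recovers the data by applying the same sequence (the LFSR polynomial below is illustrative, not the one specified for the shuttle link):

import numpy as np

def pn_sequence(n: int, state: int = 0b1111111, taps=(7, 6)) -> np.ndarray:
    # Illustrative 7-stage LFSR producing the pseudo-random cover sequence.
    out = []
    for _ in range(n):
        out.append(state & 1)
        fb = ((state >> (taps[0] - 1)) ^ (state >> (taps[1] - 1))) & 1
        state = (state >> 1) | (fb << 6)
    return np.array(out, dtype=int)

data = np.zeros(32, dtype=int)        # worst case: a long run with no transitions
cover = pn_sequence(data.size)
scrambled = data ^ cover              # transmitted stream now transitions frequently
descrambled = scrambled ^ cover       # receiver XORs the same sequence to recover data
assert np.array_equal(descrambled, data)
print("transitions before:", int(np.sum(np.abs(np.diff(data)))),
      "after:", int(np.sum(np.abs(np.diff(scrambled)))))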
A high-speed BCI based on code modulation VEP
NASA Astrophysics Data System (ADS)
Bin, Guangyu; Gao, Xiaorong; Wang, Yijun; Li, Yun; Hong, Bo; Gao, Shangkai
2011-04-01
Recently, electroencephalogram-based brain-computer interfaces (BCIs) have attracted much attention in the fields of neural engineering and rehabilitation due to their noninvasiveness. However, the low communication speed of current BCI systems greatly limits their practical application. In this paper, we present a high-speed BCI based on code modulation of visual evoked potentials (c-VEP). Thirty-two target stimuli were modulated by a time-shifted binary pseudorandom sequence. A multichannel identification method based on canonical correlation analysis (CCA) was used for target identification. The online system achieved an average information transfer rate (ITR) of 108 ± 12 bits/min on five subjects, with a maximum ITR of 123 bits/min for a single subject.
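For context, ITRs of this kind follow from the standard Wolpaw formula for an N-target selection task; a Python check with illustrative accuracy and trial-duration values (these specific numbers are assumptions, not taken from the paper):

import math

def itr_bits_per_min(n_targets: int, accuracy: float, trial_time_s: float) -> float:
    # Wolpaw information transfer rate for an N-target selection task.
    p, n = accuracy, n_targets
    bits = math.log2(n) + p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_time_s

# Illustrative operating point for a 32-target c-VEP speller
print(f"{itr_bits_per_min(32, 0.95, 2.1):.0f} bits/min")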
Multifunction audio digitizer for communications systems
NASA Technical Reports Server (NTRS)
Monford, L. G., Jr.
1971-01-01
The digitizer accomplishes both N-bit pulse code modulation (PCM) and delta modulation, and provides modulation indicating variable signal gain and variable sidetone. Other features include low package count, variable clock rate to optimize bandwidth, and easily expanded PCM output.
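A rough Python sketch of the delta-modulation half of such a digitizer (step size, clock rate, and test signal are illustrative):

import numpy as np

def delta_modulate(x: np.ndarray, step: float) -> np.ndarray:
    # One-bit quantizer: transmit 1 if the input is above the tracking estimate,
    # else 0; the estimate moves up or down by a fixed step each clock.
    bits, estimate = [], 0.0
    for sample in x:
        bit = 1 if sample > estimate else 0
        estimate += step if bit else -step
        bits.append(bit)
    return np.array(bits)

def delta_demodulate(bits: np.ndarray, step: float) -> np.ndarray:
    # The receiver rebuilds the staircase estimate from the bit stream.
    return np.cumsum(np.where(bits == 1, step, -step))

t = np.linspace(0, 1, 400)
audio = np.sin(2 * np.pi * 3 * t)            # illustrative low-frequency test tone
bits = delta_modulate(audio, step=0.05)
recovered = delta_demodulate(bits, step=0.05)
print(f"reconstruction RMS error: {np.sqrt(np.mean((audio - recovered) ** 2)):.3f}")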
Abu-Almaalie, Zina; Ghassemlooy, Zabih; Bhatnagar, Manav R; Le-Minh, Hoa; Aslam, Nauman; Liaw, Shien-Kuei; Lee, It Ee
2016-11-20
Physical layer network coding (PNC) improves the throughput in wireless networks by enabling two nodes to exchange information using a minimum number of time slots. The PNC technique is proposed for two-way relay channel free space optical (TWR-FSO) communications with the aim of maximizing the utilization of network resources. The multipair TWR-FSO is considered in this paper, where a single antenna on each pair seeks to communicate via a common receiver aperture at the relay. Therefore, chip interleaving is adopted as a technique to separate the different transmitted signals at the relay node to perform PNC mapping. Accordingly, this scheme relies on the iterative multiuser technique for detection of users at the receiver. The bit error rate (BER) performance of the proposed system is examined under the combined influences of atmospheric loss, turbulence-induced channel fading, and pointing errors (PEs). By adopting the joint PNC mapping with interleaving and multiuser detection techniques, the BER results show that the proposed scheme can achieve a significant performance improvement against the degrading effects of turbulence and PEs. It is also demonstrated that a larger number of simultaneous users can be supported with this new scheme in establishing a communication link between multiple pairs of nodes in two time slots, thereby improving the channel capacity.
Aeronautical audio broadcasting via satellite
NASA Technical Reports Server (NTRS)
Tzeng, Forrest F.
1993-01-01
A system design for aeronautical audio broadcasting, with C-band uplink and L-band downlink, via Inmarsat space segments is presented. Near-transparent-quality compression of 5-kHz bandwidth audio at 20.5 kbit/s is achieved based on a hybrid technique employing linear predictive modeling and transform-domain residual quantization. Concatenated Reed-Solomon/convolutional codes with quadrature phase shift keying are selected for bandwidth and power efficiency. An RF bandwidth of 25 kHz per channel and a decoded bit error rate of 10^-6 at E_b/N_0 of 3.75 dB are obtained. An interleaver, scrambler, modem synchronization, and frame format were designed, and frequency-division multiple access was selected over code-division multiple access. A link budget computation based on a worst-case scenario indicates sufficient system power margins. Transponder occupancy analysis for 72 audio channels demonstrates ample remaining capacity to accommodate emerging aeronautical services.
2006-08-25
interleaving schemes defined in the 802.11a standard, although only the 6 Mbps data rate with BPSK and 1/2 convolutional coding and puncturing is used in our ... [table fragment: 16-QAM/64-QAM; convolutional code K = 7 (64 states); coding rates 1/2, 2/3, 3/4; channel spacing (MHz) 20 / 10] ... Since 3G systems need to be backward compatible with 2G systems, they are a combination of existing and evolved equipment with data rates up to 2 Mbps
Optimized iterative decoding method for TPC coded CPM
NASA Astrophysics Data System (ADS)
Ma, Yanmin; Lai, Penghui; Wang, Shilian; Xie, Shunqin; Zhang, Wei
2018-05-01
Turbo Product Code (TPC) coded Continuous Phase Modulation (CPM) systems (TPC-CPM) have been widely used in aeronautical telemetry and satellite communication. This paper mainly investigates improvement and optimization of the TPC-CPM system. We first add an interleaver and deinterleaver to the TPC-CPM system and then establish an iterative decoding structure. However, the improved system has poor convergence. To overcome this issue, we use Extrinsic Information Transfer (EXIT) analysis to find the optimal factors for the system. Experiments show that our method effectively improves the convergence performance.
Experimental study of non-binary LDPC coding for long-haul coherent optical QPSK transmissions.
Zhang, Shaoliang; Arabaci, Murat; Yaman, Fatih; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Inada, Yoshihisa; Ogata, Takaaki; Aoki, Yasuhiro
2011-09-26
The performance of a rate-0.8 4-ary LDPC code has been studied in a 50 GHz-spaced 40 Gb/s DWDM system with PDM-QPSK modulation. A net effective coding gain of 10 dB is obtained at a BER of 10^-6. With the aid of time-interleaving polarization multiplexing and MAP detection, 10,560 km transmission over legacy dispersion-managed fiber is achieved without any countable errors. The proposed nonbinary quasi-cyclic LDPC code achieves an uncoded BER threshold at 4×10^-2. Potential issues such as phase ambiguity and coding length are also discussed in the context of implementing LDPC in current coherent optical systems.
NASA Technical Reports Server (NTRS)
Couvillon, L. A., Jr.; Carl, C.; Goldstein, R. M.; Posner, E. C.; Green, R. R. (Inventor)
1973-01-01
A method and apparatus are described for synchronizing a received PCM communications signal without requiring a separate synchronizing channel. The technique provides digital correlation of the received signal with a reference signal, first with its unmodulated subcarrier and then with a bit sync code modulated subcarrier, where the code sequence length is equal in duration to each data bit.
Han, Zhifeng; Liu, Jianye; Li, Rongbing; Zeng, Qinghua; Wang, Yi
2017-07-04
BeiDou system navigation messages are modulated with a secondary NH (Neumann-Hoffman) code at 1 kbps, whose frequent bit transitions limit the coherent integration time to 1 millisecond. Therefore, a bit synchronization algorithm is necessary to obtain the bit edges and NH code phases. In order to realize bit synchronization for BeiDou weak signals with large frequency deviation, a bit synchronization algorithm based on differential coherence and maximum likelihood is proposed. Firstly, a differential coherent approach is used to remove the effect of frequency deviation, with the differential delay time set to a multiple of the bit period to remove the influence of the NH code. Secondly, maximum likelihood detection is used to improve the detection probability of weak signals. Finally, Monte Carlo simulations are conducted to analyze the detection performance of the proposed algorithm compared with a traditional algorithm at C/N0 values of 20-40 dB-Hz and different frequency deviations. The results show that the proposed algorithm outperforms the traditional method at a frequency deviation of 50 Hz. The algorithm removes the effect of the BeiDou NH code effectively and mitigates the influence of frequency deviation. To confirm the feasibility of the proposed algorithm, real data tests are conducted. The proposed algorithm is suitable for BeiDou weak signal bit synchronization with large frequency deviation.
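A minimal numerical sketch of the differential-coherent bit-edge search described above, assuming 1 ms prompt correlator outputs as input and 20 coherent sums per data bit (as for BeiDou D1); the function name, the metric form, and the parameter choices are illustrative rather than the paper's exact formulation:

```python
import numpy as np

def estimate_bit_edge(prompt_1ms, bit_len=20, delay_bits=1):
    """Rough sketch of a differential-coherent bit-edge search.

    prompt_1ms : complex 1-ms prompt correlator outputs (hypothetical input)
    bit_len    : coherent sums per data bit (20 for BeiDou D1 with NH code)
    delay_bits : differential delay in whole bit periods, so the NH code
                 cancels in the product z[k] * conj(z[k - delay]).
    """
    z = np.asarray(prompt_1ms, dtype=complex)
    d = delay_bits * bit_len
    diff = z[d:] * np.conj(z[:-d])            # frequency error largely cancels here
    metrics = np.zeros(bit_len)
    for edge in range(bit_len):               # candidate bit-edge offsets
        usable = (len(diff) - edge) // bit_len * bit_len
        blocks = diff[edge:edge + usable].reshape(-1, bit_len)
        # likelihood-style metric: coherent sum within each bit, then power
        metrics[edge] = np.sum(np.abs(blocks.sum(axis=1)) ** 2)
    return int(np.argmax(metrics)), metrics
```

With a correct edge hypothesis, all 20 differential products inside a block share the same data-bit product and add coherently, which is why the maximum of the metric points at the bit edge.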
Dynamical beam manipulation based on 2-bit digitally-controlled coding metasurface.
Huang, Cheng; Sun, Bo; Pan, Wenbo; Cui, Jianhua; Wu, Xiaoyu; Luo, Xiangang
2017-02-08
Recently, a concept of digital metamaterials has been proposed to manipulate field distribution through proper spatial mixtures of digital metamaterial bits. Here, we present a design of 2-bit digitally-controlled coding metasurface that can effectively modulate the scattered electromagnetic wave and realize different far-field beams. Each meta-atom of this metasurface integrates two pin diodes, and by tuning their operating states, the metasurface has four phase responses of 0, π/2, π, and 3π/2, corresponding to four basic digital elements "00", "01", "10", and "11", respectively. By designing the coding sequence of the above digital element array, the reflected beam can be arbitrarily controlled. The proposed 2-bit digital metasurface has been demonstrated to possess capability of achieving beam deflection, multi-beam and beam diffusion, and the dynamical switching of these different scattering patterns is completed by a programmable electric source.
Adaptive variable-length coding for efficient compression of spacecraft television data.
NASA Technical Reports Server (NTRS)
Rice, R. F.; Plaunt, J. R.
1971-01-01
An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
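As a rough illustration of the per-block adaptive selection idea (not the exact code options or 21-pixel block structure of the 1971 system), the following sketch picks the shortest of a few split-sample variable-length options for each block of mapped prediction residuals:

```python
def rice_block_encode(block, k_options=(0, 1, 2)):
    """Toy adaptive selection among split-sample options (k = 0, 1, 2).

    block: non-negative integers from a preprocessor (e.g. mapped
    prediction errors).  Returns (chosen_k, bitstring).  The option set and
    block size are illustrative, not the flight design.
    """
    def length(k):
        return sum((n >> k) + 1 + k for n in block)   # unary MSB part + k LSBs
    best_k = min(k_options, key=length)
    bits = []
    for n in block:
        bits.append('0' * (n >> best_k) + '1')        # unary part, '1' terminates
        bits.append(format(n & ((1 << best_k) - 1), f'0{best_k}b') if best_k else '')
    return best_k, ''.join(bits)

# example: a fairly "busy" block prefers a larger split parameter k
print(rice_block_encode([3, 5, 2, 7, 4, 6]))
```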
The NASA Spacecraft Transponding Modem
NASA Technical Reports Server (NTRS)
Berner, Jeff B.; Kayalar, Selahattin; Perret, Jonathan D.
2000-01-01
A new deep space transponder is being developed by the Jet Propulsion Laboratory for NASA. The Spacecraft Transponding Modem (STM) implements the standard transponder functions and the channel service functions that have previously resided in spacecraft Command/Data Subsystems. The STM uses custom ASICs, MMICs, and MCMs to reduce the active device parts count to 70, mass to 1 kg, and volume to 524 cc. The first STMs will be flown on missions launching in the 2003 time frame. The STM tracks an X-band uplink signal and provides both X-band and Ka-band downlinks, either coherent or non-coherent with the uplink. A NASA standard Command Detector Unit is integrated into the STM, along with a codeblock processor and a hardware command decoder. The decoded command codeblocks are output to the spacecraft command/data subsystem. Virtual Channel 0 (VC-0) (hardware) commands are processed and output as critical controller (CRC) commands. Downlink telemetry is received from the spacecraft data subsystem as telemetry frames. The STM provides the following downlink coding options: the standard CCSDS (7,1/2) convolutional coding, Reed-Solomon coding with interleave depths one and five, (15,1/6) convolutional coding, and Turbo coding with rates 1/3 and 1/6. The downlink symbol rates can be linearly ramped to match the G/T curve of the receiving station, providing up to a 1 dB increase in data return. Data rates range from 5 bits per second (bps) to 24 Mbps, with three modulation modes provided: modulated subcarrier (3 different frequencies provided), biphase-L modulated direct on carrier, and Offset QPSK. Also, the capability to generate one of four non-harmonically related telemetry beacon tones is provided, to allow for a simple spacecraft status monitoring scheme for cruise phases of missions. Three ranging modes are provided: standard turn-around ranging, regenerative pseudo-noise (PN) ranging, and Differential One-way Ranging (DOR) tones. The regenerative ranging provides the capability of increasing the ground received ranging SNR by up to 30 dB. Two different avionics interfaces to the command/data subsystem's data bus are provided: a MIL-STD-1553B bus or an industry standard PCI interface. Digital interfaces provide the capability to control antenna selection (e.g., switching between high gain and low gain antennas) and antenna pointing (for future steered Ka-band antennas).
NASA Astrophysics Data System (ADS)
He, Jing; Dai, Min; Chen, Qinghui; Deng, Rui; Xiang, Changqing; Chen, Lin
2017-07-01
In this paper, an effective bit-loading algorithm combined with an adaptive LDPC code rate (ALCR) algorithm is proposed and investigated in a software-reconfigurable multiband UWB over fiber system. To compensate the power fading and chromatic dispersion for the high-frequency portion of the multiband OFDM UWB signal transmitted over standard single mode fiber (SSMF), a Mach-Zehnder modulator (MZM) with negative chirp parameter is utilized. In addition, a negative power penalty of -1 dB for the 128-QAM multiband OFDM UWB signal is measured at the hard-decision forward error correction (HD-FEC) limit of 3.8 × 10^-3 after 50 km SSMF transmission. The experimental results show that, compared to the fixed coding scheme with a code rate of 75%, the signal-to-noise ratio (SNR) is improved by 2.79 dB for the 128-QAM multiband OFDM UWB system after 100 km SSMF transmission using the ALCR algorithm. Moreover, by employing bit-loading combined with the ALCR algorithm, the bit error rate (BER) performance of the system can be further improved. The simulation results show that, at the HD-FEC limit, the Q factor is improved by 3.93 dB at an SNR of 19.5 dB over 100 km SSMF transmission, compared to fixed modulation with an uncoded scheme at the same spectral efficiency (SE).
16QAM transmission with 5.2 bits/s/Hz spectral efficiency over transoceanic distance.
Zhang, H; Cai, J-X; Batshon, H G; Davidson, C R; Sun, Y; Mazurczyk, M; Foursa, D G; Pilipetskii, A; Mohs, G; Bergano, Neal S
2012-05-21
We transmit 160 x 100 G PDM RZ 16 QAM channels with 5.2 bits/s/Hz spectral efficiency over 6,860 km. More than 3 billion 16 QAM symbols, i.e., 12 billion bits, are processed in total. Using coded modulation and iterative decoding between a MAP decoder and an LDPC-based FEC, all channels are decoded with no remaining errors.
NASA Technical Reports Server (NTRS)
Wang, C.-W.; Stark, W.
2005-01-01
This article considers a quaternary direct-sequence code-division multiple-access (DS-CDMA) communication system with asymmetric quadrature phase-shift-keying (AQPSK) modulation for unequal error protection (UEP) capability. Both time synchronous and asynchronous cases are investigated. An expression for the probability distribution of the multiple-access interference is derived. The exact bit-error performance and the approximate performance using a Gaussian approximation and random signature sequences are evaluated by extending the techniques used for uniform quadrature phase-shift-keying (QPSK) and binary phase-shift-keying (BPSK) DS-CDMA systems. Finally, a general system model with unequal user power and the near-far problem is considered and analyzed. The results show that, for a system with UEP capability, the less protected data bits are more sensitive to the near-far effect that occurs in a multiple-access environment than are the more protected bits.
VLSI single-chip (255,223) Reed-Solomon encoder with interleaver
NASA Technical Reports Server (NTRS)
Hsu, In-Shek (Inventor); Deutsch, Leslie J. (Inventor); Truong, Trieu-Kie (Inventor); Reed, Irving S. (Inventor)
1990-01-01
The invention relates to a concatenated Reed-Solomon/convolutional encoding system consisting of a Reed-Solomon outer code and a convolutional inner code for downlink telemetry in space missions, and more particularly to a Reed-Solomon encoder with programmable interleaving of the information symbols and code correction symbols to combat error bursts in the Viterbi decoder.
Some practical universal noiseless coding techniques, part 3, module PSI14,K+
NASA Technical Reports Server (NTRS)
Rice, Robert F.
1991-01-01
The algorithmic definitions, performance characterizations, and application notes for a high-performance adaptive noiseless coding module are provided. Subsets of these algorithms are currently under development in custom very large scale integration (VLSI) at three NASA centers. The generality of coding algorithms recently reported is extended. The module incorporates a powerful adaptive noiseless coder for Standard Data Sources (i.e., sources whose symbols can be represented by uncorrelated non-negative integers, where smaller integers are more likely than larger ones). Coders can be specified to provide performance close to the data entropy over any desired dynamic range (of entropy) above 0.75 bit/sample. This is accomplished by adaptively choosing the best of many efficient variable-length coding options to use on each short block of data (e.g., 16 samples). All code options used for entropies above 1.5 bits/sample are 'Huffman Equivalent', but they require no table lookups to implement. The coding can be performed directly on data that have been preprocessed to exhibit the characteristics of a standard source. Alternatively, a built-in predictive preprocessor can be used where applicable. This built-in preprocessor includes the familiar 1-D predictor followed by a function that maps the prediction error sequences into the desired standard form. Additionally, an external prediction can be substituted if desired. A broad range of issues dealing with the interface between the coding module and the data systems it might serve are further addressed. These issues include: multidimensional prediction, archival access, sensor noise, rate control, code rate improvements outside the module, and the optimality of certain internal code options.
Capacity of a direct detection optical communication channel
NASA Technical Reports Server (NTRS)
Tan, H. H.
1980-01-01
The capacity of a free space optical channel using a direct detection receiver is derived under both peak and average signal power constraints and without a signal bandwidth constraint. The addition of instantaneous noiseless feedback from the receiver to the transmitter does not increase the channel capacity. In the absence of received background noise, an optimally coded PPM system is shown to achieve capacity in the limit as signal bandwidth approaches infinity. In the case of large peak to average signal power ratios, an interleaved coding scheme with PPM modulation is shown to have a computational cutoff rate far greater than ordinary coding schemes.
Confidence Intervals for Error Rates Observed in Coded Communications Systems
NASA Astrophysics Data System (ADS)
Hamkins, J.
2015-05-01
We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
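A small sketch of the two computations described, under the assumption that an exact Clopper-Pearson interval is an acceptable stand-in for the CWER case and that a normal approximation built from the first and second sample moments of per-codeword bit-error counts captures the BER idea; the function names and interfaces are hypothetical:

```python
import numpy as np
from scipy import stats

def cwer_interval(errors, trials, conf=0.95):
    """Exact (Clopper-Pearson) confidence interval for the codeword error rate."""
    a = 1.0 - conf
    lo = 0.0 if errors == 0 else stats.beta.ppf(a / 2, errors, trials - errors + 1)
    hi = 1.0 if errors == trials else stats.beta.ppf(1 - a / 2, errors + 1, trials - errors)
    return lo, hi

def ber_interval(bit_errors_per_codeword, bits_per_codeword, conf=0.95):
    """Normal-approximation BER interval built from per-codeword error counts,
    so correlation of bit errors within a codeword is not ignored (a sketch of
    the idea, not the paper's exact derivation)."""
    e = np.asarray(bit_errors_per_codeword, dtype=float)
    n = len(e)
    ber = e.mean() / bits_per_codeword
    se = e.std(ddof=1) / (bits_per_codeword * np.sqrt(n))   # second sample moment enters here
    z = stats.norm.ppf(0.5 + conf / 2)
    return max(ber - z * se, 0.0), ber + z * se
```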
New Bandwidth Efficient Parallel Concatenated Coding Schemes
NASA Technical Reports Server (NTRS)
Benedetto, S.; Divsalar, D.; Montorsi, G.; Pollara, F.
1996-01-01
We propose a new solution to parallel concatenation of trellis codes with multilevel amplitude/phase modulations and a suitable iterative decoding structure. Examples are given for a throughput of 2 bits/sec/Hz with 8PSK and 16QAM signal constellations.
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1993-01-01
There are various elements, such as radio frequency interference (RFI), which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable, and it becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and to evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a given desired bit error rate. The use of concatenated coding, e.g., an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
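For illustration, a bit-serial CRC-16 of the CCITT form (generator x^16 + x^12 + x^5 + 1), which is the polynomial usually cited for the CCSDS 16-bit frame check; the initial value shown and the toy frame are assumptions for this sketch:

```python
def crc16_ccitt(data: bytes, poly=0x1021, init=0xFFFF) -> int:
    """Bit-serial CRC-16 of the CCITT form (x^16 + x^12 + x^5 + 1)."""
    crc = init
    for byte in data:
        crc ^= byte << 8                     # fold the next byte into the register
        for _ in range(8):                   # shift out one bit at a time
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

frame = b"telemetry frame payload"
print(hex(crc16_ccitt(frame)))   # append to the frame; the receiver recomputes it to detect errors
```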
Optimal Codes for the Burst Erasure Channel
NASA Technical Reports Server (NTRS)
Hamkins, Jon
2010-01-01
Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure protection. As can be seen, the simple interleaved RS codes have substantially lower inefficiency over a wide range of transmission lengths.
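A toy demonstration of the block-interleaved MDS idea using the simplest MDS code, a single parity check (SPC) code: codewords are written as rows and transmitted column by column, so a burst erasure no longer than the interleaving depth erases at most one symbol per codeword and is recoverable from parity. The array sizes and helper names are illustrative only:

```python
import numpy as np

def spc_interleave_recover(data_rows, burst_start, burst_len):
    """Toy burst-erasure recovery with a block-interleaved single parity
    check (SPC) code over GF(2).  Each row is one SPC codeword (data + even
    parity); transmission order is column by column, so a burst of up to
    'depth' erased symbols hits each codeword at most once."""
    code = np.hstack([data_rows, data_rows.sum(axis=1, keepdims=True) % 2])
    depth, n = code.shape
    tx = code.T.flatten()                       # column-wise (interleaved) order
    erased = np.zeros_like(tx, dtype=bool)
    erased[burst_start:burst_start + burst_len] = True
    rx = np.where(erased, -1, tx)               # -1 marks an erased symbol

    deint = rx.reshape(n, depth).T              # back to codeword rows
    lost = deint == -1
    for r in range(depth):                      # one erasure per row is solvable
        if lost[r].sum() == 1:
            j = np.argmax(lost[r])
            deint[r, j] = deint[r][~lost[r]].sum() % 2
    return deint[:, :-1]

rows = np.array([[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]])
assert (spc_interleave_recover(rows, burst_start=4, burst_len=3) == rows).all()
```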
New decoding methods of interleaved burst error-correcting codes
NASA Astrophysics Data System (ADS)
Nakano, Y.; Kasahara, M.; Namekawa, T.
1983-04-01
A probabilistic method of single burst error correction, using the syndrome correlation of the subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high capability of burst error correction with less decoding delay. By generalizing this method, a probabilistic method of multiple (m-fold) burst error correction is obtained. After estimating the burst error positions using the syndrome correlation of subcodes which are interleaved m-fold burst-error-detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
Compact disk error measurements
NASA Technical Reports Server (NTRS)
Howe, D.; Harriman, K.; Tehranchi, B.
1993-01-01
The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high-resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard-decision (i.e., 1-bit error flags) and soft-decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
BeiDou Signal Acquisition with Neumann–Hoffman Code Modulation in a Degraded Channel
Zhao, Lin; Liu, Aimeng; Ding, Jicheng; Wang, Jing
2017-01-01
With the modernization of global navigation satellite systems (GNSS), secondary codes, also known as the Neumann–Hoffman (NH) codes, are modulated on the satellite signal to obtain a better positioning performance. However, this leads to an attenuation of the acquisition sensitivity of classic integration algorithms because of the frequent bit transitions that refer to the NH codes. Taking weak BeiDou navigation satellite system (BDS) signals as objects, the present study analyzes the side effect of NH codes on acquisition in detail and derives a straightforward formula, which indicates that bit transitions decrease the frequency accuracy. To meet the requirement of carrier-tracking loop initialization, a frequency recalculation algorithm is proposed based on verified fast Fourier transform (FFT) to mitigate the effect, meanwhile, the starting point of NH codes is found. Then, a differential correction is utilized to improve the acquisition accuracy of code phase. Monte Carlo simulations and real BDS data tests demonstrate that the new structure is superior to the conventional algorithms both in detection probability and frequency accuracy in a degraded channel. PMID:28208776
Li, Yun Bo; Li, Lian Lin; Xu, Bai Bing; Wu, Wei; Wu, Rui Yuan; Wan, Xiang; Cheng, Qiang; Cui, Tie Jun
2016-01-01
The programmable and digital metamaterials or metasurfaces presented recently have huge potentials in designing real-time-controlled electromagnetic devices. Here, we propose the first transmission-type 2-bit programmable coding metasurface for single-sensor and single- frequency imaging in the microwave frequency. Compared with the existing single-sensor imagers composed of active spatial modulators with their units controlled independently, we introduce randomly programmable metasurface to transform the masks of modulators, in which their rows and columns are controlled simultaneously so that the complexity and cost of the imaging system can be reduced drastically. Different from the single-sensor approach using the frequency agility, the proposed imaging system makes use of variable modulators under single frequency, which can avoid the object dispersion. In order to realize the transmission-type 2-bit programmable metasurface, we propose a two-layer binary coding unit, which is convenient for changing the voltages in rows and columns to switch the diodes in the top and bottom layers, respectively. In our imaging measurements, we generate the random codes by computer to achieve different transmission patterns, which can support enough multiple modes to solve the inverse-scattering problem in the single-sensor imaging. Simple experimental results are presented in the microwave frequency, validating our new single-sensor and single-frequency imaging system. PMID:27025907
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1989-01-01
Two aspects of the work for NASA are examined: the construction of multi-dimensional phase modulation trellis codes and a performance analysis of these codes. A complete list of all the best trellis codes for use with phase modulation is given. LxMPSK signal constellations are included for M = 4, 8, and 16 and L = 1, 2, 3, and 4. Spectral efficiencies range from 1 bit/channel symbol (equivalent to rate 1/2 coded QPSK) to 3.75 bits/channel symbol (equivalent to 15/16 coded 16-PSK). The parity check polynomials, rotational invariance properties, free distance, path multiplicities, and coding gains are given for all codes. These codes are considered to be the best candidates for implementation of a high-speed decoder for satellite transmission. The design of a hardware decoder for one of these codes, viz., the 16-state 3x8-PSK code with free distance 4.0 and coding gain 3.75 dB, is discussed. An exhaustive simulation study of the multi-dimensional phase modulation trellis codes is presented. This study was motivated by the fact that the coding gains quoted for almost all codes found in the literature are in fact only asymptotic coding gains, i.e., the coding gain at very high signal-to-noise ratios (SNRs) or very low BER. These asymptotic coding gains can be obtained directly from a knowledge of the free distance of the code. On the other hand, real coding gains at BERs in the range of 10(exp -2) to 10(exp -6), where these codes are most likely to operate in a concatenated system, must be determined by simulation.
Survey Probe Infrared Celestial Experiment (SPICE).
1985-01-01
Telemetry outputs include pulse amplitude modulation (PAM), a word clock, and a bit clock; per requirement 3.1.2.3.2, each signal shall be buffered, short-circuit proofed, and capable of delivering a signal. The table of contents lists Appendix A (SPICE II Electronic Test Report), Appendix B (VC409-0001 Pulse Code Modulator Specification - SPICE I; VC409-0001-21 Pulse Code Modulator Specification - SPICE II), and Appendix C specifications (AA0209-103 Evacuation, AA0209-104 Cryogen Filling, AA0209-105 Leak Rate, AA0209-106).
NASA Astrophysics Data System (ADS)
Wang, Cheng; Wang, Hongxiang; Ji, Yuefeng
2018-01-01
In this paper, a multi-bit wavelength coding phase-shift-keying (PSK) optical steganography method is proposed based on amplified spontaneous emission noise and a wavelength selective switch. In this scheme, the assignment codes and the delay length differences provide a large two-dimensional key space. A 2-bit wavelength coding PSK system is simulated to show the efficiency of the proposed method. The simulation results demonstrate that the stealth signal, after being encoded and modulated, is well hidden in both the time and spectral domains under the public channel and the noise existing in the system. Moreover, even if the principle of this scheme and the existence of the stealth channel are known to the eavesdropper, the probability of recovering the stealth data is less than 0.02 if the key is unknown, so the security of the stealth channel is protected more effectively. Furthermore, the stealth channel results in a 0.48 dB power penalty to the public channel at a 1 × 10^-9 bit error rate, and the public channel has no influence on the reception of the stealth channel.
Use of biphase-coded pulses for wideband data storage in time-domain optical memories.
Shen, X A; Kachru, R
1993-06-10
We demonstrate that temporally long laser pulses with appropriate phase modulation can replace either temporally brief or frequency-chirped pulses in a time-domain optical memory to store and retrieve information. A 1.65-µs-long write pulse was biphase modulated according to the 13-bit Barker code for storing multiple bits of optical data into a Pr(3+):YAlO(3) crystal, and the stored information was later recalled faithfully by using a read pulse that was identical to the write pulse. Our results further show that the stored data cannot be retrieved faithfully if mismatched write and read pulses are used. This finding opens up the possibility of designing encrypted optical memories for secure data storage.
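As a purely numerical analogy to the matched/mismatched read-pulse behavior (the actual experiment relies on photon-echo physics, not digital correlation), the sketch below shows how the 13-bit Barker sequence correlates sharply against itself but not against a wrong-key reference:

```python
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

# autocorrelation: a peak of 13 with sidelobe magnitudes <= 1, which is why a
# long biphase-coded pulse can be "compressed" only by a matched reference
auto = np.correlate(barker13, barker13, mode='full')
peak = len(auto) // 2
print(auto[peak], np.abs(auto[np.arange(len(auto)) != peak]).max())   # 13, 1

# a mismatched (wrong-key) reference produces no comparable peak
wrong = np.array([1, -1, 1, 1, -1, 1, -1, -1, 1, 1, 1, -1, 1])
print(np.abs(np.correlate(barker13, wrong, mode='full')).max())
```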
Universal Decoder for PPM of any Order
NASA Technical Reports Server (NTRS)
Moision, Bruce E.
2010-01-01
A recently developed algorithm for demodulation and decoding of a pulse-position-modulation (PPM) signal is suitable as a basis for designing a single hardware decoding apparatus to be capable of handling any PPM order. Hence, this algorithm offers advantages of greater flexibility and lower cost, in comparison with prior such algorithms, which necessitate the use of a distinct hardware implementation for each PPM order. In addition, in comparison with the prior algorithms, the present algorithm entails less complexity in decoding at large orders. An unavoidably lengthy presentation of background information, including definitions of terms, is prerequisite to a meaningful summary of this development. As an aid to understanding, the figure illustrates the relevant processes of coding, modulation, propagation, demodulation, and decoding. An M-ary PPM signal has M time slots per symbol period. A pulse (signifying 1) is transmitted during one of the time slots; no pulse (signifying 0) is transmitted during the other time slots. The information intended to be conveyed from the transmitting end to the receiving end of a radio or optical communication channel is a K-bit vector u. This vector is encoded by an (N,K) binary error-correcting code, producing an N-bit vector a. In turn, the vector a is subdivided into blocks of m = log2(M) bits and each such block is mapped to an M-ary PPM symbol. The resultant coding/modulation scheme can be regarded as equivalent to a nonlinear binary code. The binary vector of PPM symbols, x, is transmitted over a Poisson channel, such that there is obtained, at the receiver, a Poisson-distributed photon count characterized by a mean background count nb during no-pulse time slots and a mean signal-plus-background count of ns+nb during a pulse time slot. In the receiver, demodulation of the signal is effected in an iterative soft decoding process that involves consideration of relationships among photon counts and conditional likelihoods of m-bit vectors of coded bits. Inasmuch as the likelihoods of all the m-bit vectors of coded bits mapping to the same PPM symbol are correlated, the best performance is obtained when the joint m-bit conditional likelihoods are utilized. Unfortunately, the complexity of decoding, measured in the number of operations per bit, grows exponentially with m, and can thus become prohibitively expensive for large PPM orders. For a system required to handle multiple PPM orders, the cost is even higher because it is necessary to have separate decoding hardware for each order. This concludes the prerequisite background information. In the present algorithm, the decoding process as described above is modified by, among other things, introduction of an l-bit marginalizer sub-algorithm. The term "l-bit marginalizer" signifies that instead of m-bit conditional likelihoods, the decoder computes l-bit conditional likelihoods, where l is fixed. Fixing l, regardless of the value of m, makes it possible to use a single hardware implementation for any PPM order. One could minimize the decoding complexity and obtain an especially simple design by fixing l at 1, but this would entail some loss of performance. An intermediate solution is to fix l at some value, greater than 1, that may be less than or greater than m. This solution makes it possible to obtain the desired flexibility to handle any PPM order while compromising between complexity and loss of performance.
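A small sketch of the Poisson-channel likelihood step described above: per-slot counts are turned into symbol log-likelihoods (proportional to the count times log(1 + ns/nb)) and then marginalized into bit LLRs. The natural-binary bit labeling and the exact (rather than reduced-complexity l-bit) marginalization are simplifying assumptions for illustration:

```python
import numpy as np

def ppm_bit_llrs(counts, ns, nb):
    """Exact per-bit LLRs for one M-ary PPM symbol on a Poisson channel.

    counts : photon counts in the M slots of the symbol
    ns, nb : mean signal and background counts (assumed known)
    Symbol j carries the natural binary label of j (an illustrative mapping).
    """
    counts = np.asarray(counts, dtype=float)
    M = len(counts)
    m = int(np.log2(M))
    sym_ll = counts * np.log(1.0 + ns / nb)        # symbol log-likelihoods up to a constant
    llrs = np.zeros(m)
    for b in range(m):
        bit = (np.arange(M) >> (m - 1 - b)) & 1    # value of bit b in each symbol label
        num = np.logaddexp.reduce(sym_ll[bit == 0])
        den = np.logaddexp.reduce(sym_ll[bit == 1])
        llrs[b] = num - den                         # log P(bit=0)/P(bit=1), equal priors
    return llrs

# 16-PPM example: a strong count in slot 5 pulls the bits toward the label 0101
print(np.round(ppm_bit_llrs([0, 1, 0, 0, 0, 9, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0], ns=8, nb=1), 2))
```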
Progressive low-bitrate digital color/monochrome image coding by neuro-fuzzy clustering
NASA Astrophysics Data System (ADS)
Mitra, Sunanda; Meadows, Steven
1997-10-01
Color image coding at low bit rates is an area of research that is just being addressed in recent literature, since the problems of storage and transmission of color images are becoming more prominent in many applications. Current trends in image coding exploit the advantage of subband/wavelet decompositions in reducing the complexity of optimal scalar/vector quantizer (SQ/VQ) design. Compression ratios (CRs) of the order of 10:1 to 20:1 with high visual quality have been achieved by using vector quantization of subband-decomposed color images in perceptually weighted color spaces. We report the performance of a recently developed adaptive vector quantizer, namely AFLC-VQ, for effective reduction in bit rates while maintaining high visual quality of reconstructed color as well as monochrome images. For 24-bit color images, excellent visual quality is maintained up to a bit rate reduction to approximately 0.48 bpp (for each color plane or monochrome, 0.16 bpp, CR 50:1) by using the RGB color space. Further tuning of the AFLC-VQ and addition of an entropy coder module after the VQ stage result in extremely low bit rates (CR 80:1) for good-quality reconstructed images. Our recent study also reveals that for similar visual quality, the RGB color space requires fewer bits/pixel than either the YIQ or HIS color space for storing the same information when entropy coding is applied. AFLC-VQ outperforms other standard VQ and adaptive SQ techniques in retaining visual fidelity at similar bit rate reduction.
NASA Astrophysics Data System (ADS)
Villa, Carlos; Kumavor, Patrick; Donkor, Eric
2008-04-01
Photonic analog-to-digital converters (ADCs) utilize a train of optical pulses to sample an electrical input waveform applied to an electro-optic modulator or a reverse-biased photodiode. In the former, the resulting train of amplitude-modulated optical pulses is detected (converted to electrical form) and quantized using a conventional electronic ADC, as at present there are no practical, cost-effective optical quantizers with performance rivaling electronic quantizers. In the latter, the electrical samples are directly quantized by the electronic ADC. In both cases, however, the sampling rate is limited by the speed with which the electronic ADC can quantize the electrical samples. One way to increase the sampling rate by a factor of N is the time-interleaved technique, which consists of a parallel array of N electrical ADCs having the same sampling rate but different sampling phases, each operating at a quantization rate of fs/N, where fs is the aggregate sampling rate. In a system with no real-time operation, the N channels' digital outputs are stored in memory and then aggregated (multiplexed) to obtain the digital representation of the analog input waveform. Alternatively, for real-time systems, reducing the storage time in the multiplexing process is desired to improve the time response of the ADC. Complete elimination of memories comes at the expense of concurrent timing and synchronization in the aggregation of the digital signal, which becomes critical for a good digital representation of the analog waveform. In this paper we propose and demonstrate a novel optically synchronized encoder and multiplexer scheme for interleaved photonic ADCs that utilizes the N optical signals used to sample different phases of an analog input signal to synchronize the multiplexing of the resulting N digital output channels into a single digital output port. As a proof of concept, four 320 Msample/s, 12-bit digital signals were multiplexed to form an aggregate 1.28 Gsample/s single digital output signal.
Secure Communication via a Recycling of Attenuated Classical Signals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, IV, Amos M.
We describe a simple method of interleaving a classical and quantum signal in a secure communication system at a single wavelength. The system transmits data encrypted via a one-time pad on a classical signal and produces a single-photon reflection of the encrypted signal. This attenuated signal can be used to observe eavesdroppers and produce fresh secret bits. The system can be secured against eavesdroppers, detect simple tampering or classical bit errors, produces more secret bits than it consumes, and does not require any entanglement or complex wavelength division multiplexing, thus, making continuous secure two-way communication via one-time pads practical.
FPGA based digital phase-coding quantum key distribution system
NASA Astrophysics Data System (ADS)
Lu, XiaoMing; Zhang, LiJun; Wang, YongGang; Chen, Wei; Huang, DaJun; Li, Deng; Wang, Shuang; He, DeYong; Yin, ZhenQiang; Zhou, Yu; Hui, Cong; Han, ZhengFu
2015-12-01
Quantum key distribution (QKD) is a technology with the potential capability to achieve information-theoretic security. Phase coding is an important approach to developing practical QKD systems in fiber channels. In order to improve the phase-coding modulation rate, we propose a new digital modulation method in this paper and construct a compact and robust prototype QKD system using currently available components in our lab to demonstrate the effectiveness of the method. The system was deployed in a laboratory environment over a 50 km fiber and operated continuously for 87 h without manual interaction. The quantum bit error rate (QBER) of the system was stable with an average value of 3.22%, and the secure key generation rate was 8.91 kbps. Although the modulation rate of the photons in the demo system was only 200 MHz, which was limited by the Faraday-Michelson interferometer (FMI) structure, the proposed method and the field programmable gate array (FPGA) based electronics scheme have great potential for high-speed QKD systems with gigabit-per-second modulation rates.
Performance of MIMO-OFDM using convolution codes with QAM modulation
NASA Astrophysics Data System (ADS)
Astawa, I. Gede Puja; Moegiharto, Yoedy; Zainudin, Ahmad; Salim, Imam Dui Agus; Anggraeni, Nur Annisa
2014-04-01
The performance of an Orthogonal Frequency Division Multiplexing (OFDM) system can be improved by adding channel coding (an error correction code) to detect and correct errors that occur during data transmission; one option is the convolutional code. This paper presents the performance of OFDM using the Space Time Block Code (STBC) diversity technique with QAM modulation and code rate 1/2. The evaluation is done by analyzing the Bit Error Rate (BER) versus the energy-per-bit to noise power spectral density ratio (Eb/No). The scheme uses 256 subcarriers transmitted over a Rayleigh multipath fading channel in an OFDM system. Achieving a BER of 10^-3 requires an SNR of 10 dB in the SISO-OFDM scheme. The 2×2 MIMO-OFDM scheme requires 10 dB to achieve a BER of 10^-3. The 4×4 MIMO-OFDM scheme requires 5 dB, while adding convolutional coding to the 4×4 MIMO-OFDM scheme improves performance down to 0 dB for the same BER. This corresponds to a power saving of 3 dB relative to the uncoded 4×4 MIMO-OFDM system, 7 dB relative to 2×2 MIMO-OFDM, and significant power savings relative to the SISO-OFDM system.
Study of the OCDMA Transmission Characteristics in FSO-FTTH at Various Distances, Outdoor
NASA Astrophysics Data System (ADS)
Aldouri, Muthana Y.; Aljunid, S. A.; Fadhil, Hilal A.
2013-06-01
Field Programmable Gate Array (FPGA) and optical switch technology are applied as the encoder and decoder in a Spectral Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA) Free Space Optics Fiber-to-the-Home (FSO-FTTH) transmitter and receiver design. The encoder and decoder module uses the FPGA as a code generator and the optical switch to encode and decode the optical source. The module was tested using the Modified Double Weight (MDW) code, which is selected as an excellent candidate because it has shown superior performance whereby the total noise is reduced. It is also easy to construct and can reduce the number of filters required at the receiver through a newly proposed detection scheme known as the AND subtraction technique. The MDW code is presented here to support a Fiber-To-The-Home (FTTH) access network in a Point-To-Multi-Point (P2MP) application. Wavelength conversion uses a Mach-Zehnder interferometer (MZI) wavelength converter. The performance is characterized through BER, bit rate (BR), and received power at a variety of bit rates.
An 8-PSK TDMA uplink modulation and coding system
NASA Technical Reports Server (NTRS)
Ames, S. A.
1992-01-01
The combination of 8-phase shift keying (8PSK) modulation and a spectral efficiency greater than 2 bits/sec/Hz drove the design of the Nyquist filter to a specified rolloff factor of 0.2. This filter, when built and tested, was found to produce too much intersymbol interference and was abandoned for a design with a rolloff factor of 0.4. The preamble is limited to 100 bit periods of the uncoded bit period of 5 ns, for a maximum preamble length of 500 ns, or 40 8PSK symbol times at 12.5 ns per symbol. For 8PSK modulation, the required maximum degradation of 1 dB in -20 dB cochannel interference (CCI) drove the requirement for forward error correction coding. In this contract, the funding was not sufficient to develop the proposed codec, so the codec was limited to a paper design during the preliminary design phase. The mechanization of the demodulator is digital, starting from the output of the analog-to-digital converters that quantize the outputs of the quadrature phase detectors. This approach is amenable to an application-specific integrated circuit (ASIC) replacement in the next phase of development.
Time delay and distance measurement
NASA Technical Reports Server (NTRS)
Abshire, James B. (Inventor); Sun, Xiaoli (Inventor)
2011-01-01
A method for measuring time delay and distance may include providing an electromagnetic radiation carrier frequency and modulating one or more of amplitude, phase, frequency, polarization, and pointing angle of the carrier frequency with a return to zero (RZ) pseudo random noise (PN) code. The RZ PN code may have a constant bit period and a pulse duration that is less than the bit period. A receiver may detect the electromagnetic radiation and calculate the scattering profile versus time (or range) by computing a cross correlation function between the recorded received signal and a three-state RZ PN code kernel in the receiver. The method also may be used for pulse delay time (i.e., PPM) communications.
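A rough sketch of the correlation step, under one plausible reading of the three-state kernel (+1/-1 during the pulse window according to the code bit, 0 during the return-to-zero portion); the code length, duty cycle, noise level, and circular-shift search are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
code = rng.integers(0, 2, 127)            # stand-in PN sequence (not a specific LFSR)
sps, duty = 8, 2                          # samples per bit, pulse occupies 2 of 8 samples

# RZ waveform: a short pulse at the start of each '1' bit, zero elsewhere
tx = np.zeros(len(code) * sps)
for i, b in enumerate(code):
    tx[i * sps : i * sps + duty] = b

# three-state kernel: +1 / -1 during the pulse window depending on the code bit,
# 0 during the return-to-zero portion, so the "off" time does not dilute the peak
kern = np.zeros_like(tx)
for i, b in enumerate(code):
    kern[i * sps : i * sps + duty] = 1.0 if b else -1.0

delay = 37                                # unknown round-trip delay in samples
rx = np.roll(tx, delay) + 0.3 * rng.standard_normal(len(tx))
xcorr = np.array([np.dot(np.roll(kern, d), rx) for d in range(len(tx))])
print(int(np.argmax(xcorr)))              # peak location estimates the delay (37 here)
```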
A forward error correction technique using a high-speed, high-rate single chip codec
NASA Astrophysics Data System (ADS)
Boyd, R. W.; Hartman, W. F.; Jones, Robert E.
The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible utilizing applique hardware in conjunction with the hard-decision decoder. The expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at a 10^-5 bit-error rate with phase-shift-keying modulation and additive white Gaussian noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32 n data bits followed by 32 overhead bits.
NASA Astrophysics Data System (ADS)
Tanakamaru, Shuhei; Fukuda, Mayumi; Higuchi, Kazuhide; Esumi, Atsushi; Ito, Mitsuyoshi; Li, Kai; Takeuchi, Ken
2011-04-01
A dynamic codeword transition ECC scheme is proposed for highly reliable solid-state drives, SSDs. By monitoring the error number or the write/erase cycles, the ECC codeword dynamically increases from 512 Byte (+parity) to 1 KByte, 2 KByte, 4 KByte…32 KByte. The proposed ECC with a larger codeword decreases the failure rate after ECC. As a result, the acceptable raw bit error rate, BER, before ECC is enhanced. Assuming a NAND Flash memory which requires 8-bit correction in a 512 Byte codeword ECC, a 17-times higher acceptable raw BER than the conventional fixed 512 Byte codeword ECC is realized for the mobile phone application without interleaving. For the MP3 player, digital still camera, and high-speed memory card applications with dual-channel interleaving, a 15-times higher acceptable raw BER is achieved. Finally, for the SSD application with 8-channel interleaving, a 13-times higher acceptable raw BER is realized. Because the ratio of the user data to the parity bits is the same in each ECC codeword, no additional memory area is required. Note that the reliability of the SSD is improved after manufacturing without cost penalty. Compared with the conventional ECC with the fixed large 32 KByte codeword, the proposed scheme achieves lower power consumption by introducing a "best-effort" type operation. In the proposed scheme, during most of the lifetime of the SSD, a weak ECC with a shorter codeword such as 512 Byte (+parity), 1 KByte, and 2 KByte is used, and 98% lower power consumption is realized. At the life-end of the SSD, a strong ECC with a 32 KByte codeword is used and highly reliable operation is achieved. The random read performance is also discussed. The random read performance is estimated by the latency, which is below 1.5 ms for ECC codewords up to 32 KByte. This latency is below the 2 ms average latency of a 15,000 rpm HDD.
Performance of the ICAO standard core service modulation and coding techniques
NASA Technical Reports Server (NTRS)
Lodge, John; Moher, Michael
1988-01-01
Aviation binary phase shift keying (A-BPSK) is described, and simulated performance results are given that demonstrate robust performance in the presence of hard-limiting amplifiers. The performance of coherently detected A-BPSK with rate 1/2 convolutional coding is given. The performance loss due to Rician fading was shown to be less than 1 dB over the simulated range. A partially coherent detection scheme that does not require carrier phase recovery is described. This scheme exhibits similar performance to coherent detection at high bit error rates, while it is superior at lower bit error rates.
Interleaved concatenated codes: new perspectives on approaching the Shannon limit.
Viterbi, A J; Viterbi, A M; Sindhushayana, N T
1997-09-02
The last few years have witnessed a significant decrease in the gap between the Shannon channel capacity limit and what is practically achievable. Progress has resulted from novel extensions of previously known coding techniques involving interleaved concatenated codes. A considerable body of simulation results is now available, supported by an important but limited theoretical basis. This paper presents a computational technique which further ties simulation results to the known theory and reveals a considerable reduction in the complexity required to approach the Shannon limit.
Application of a Design Space Exploration Tool to Enhance Interleaver Generation
2009-06-24
...[2], originally dedicated to channel coding, are currently being reused in a large set of digital communication systems (e.g., equalization...). A design space exploration tool that originally targeted interface synthesis is shown to be also suited to interleaver design space exploration. Our design flow can take as input... Referenced work includes slice turbo codes (Proc. 3rd Int. Symp. Turbo Codes and Related Topics, Brest, 2003, pp. 343-346), [11] IEEE 802.15.3a, WPAN High Rate Alternative, [12...
Adaptive software-defined coded modulation for ultra-high-speed optical transport
NASA Astrophysics Data System (ADS)
Djordjevic, Ivan B.; Zhang, Yequn
2013-10-01
In optically-routed networks, different wavelength channels carrying the traffic to different destinations can have quite different optical signal-to-noise ratios (OSNRs), and the signal is differently impacted by various channel impairments. Regardless of the data destination, an optical transport system (OTS) must provide the target bit-error rate (BER) performance. To provide the target BER regardless of the data destination, we adjust the forward error correction (FEC) strength. Depending on the information obtained from the monitoring channels, we select the appropriate code rate matching the OSNR range that the current channel OSNR falls into. To avoid frame synchronization issues, we keep the codeword length fixed, independent of the FEC code being employed. The common denominator is the employment of quasi-cyclic (QC-) LDPC codes in FEC. For high-speed implementation, low-complexity LDPC decoding algorithms are needed, and some of them are described in this invited paper. Instead of conventional QAM-based modulation schemes, we employ the signal constellations obtained by the optimum signal constellation design (OSCD) algorithm. To improve the spectral efficiency, we perform simultaneous rate adaptation and signal constellation size selection so that the product of the number of bits per symbol and the code rate is closest to the channel capacity. Further, we describe the advantages of using 4D signaling instead of polarization-division multiplexed (PDM) QAM, by using 4D MAP detection combined with LDPC coding in a turbo equalization fashion. Finally, to address the problems related to the limited bandwidth of the information infrastructure, high energy consumption, and the heterogeneity of optical networks, we describe an adaptive energy-efficient hybrid coded-modulation scheme, which in addition to amplitude, phase, and polarization state employs the spatial modes as additional basis functions for multidimensional coded modulation.
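As a sketch of the rate-adaptation rule described (bits per symbol times code rate pushed as close as possible to capacity), the snippet below uses the AWGN Shannon capacity as a stand-in for the monitored channel quality; the candidate rate and constellation sets are invented for illustration:

```python
import numpy as np
from itertools import product

def select_mode(snr_db, bits_per_symbol=(2, 3, 4, 5, 6), rates=(0.6, 0.7, 0.75, 0.8, 0.9)):
    """Pick a (bits per symbol, code rate) pair whose spectral efficiency
    comes closest to (without exceeding) an AWGN capacity estimate for the
    monitored SNR.  The candidate sets and the capacity proxy are
    illustrative, not the thresholds of the referenced system."""
    capacity = np.log2(1.0 + 10 ** (snr_db / 10))
    feasible = [(m, r) for m, r in product(bits_per_symbol, rates) if m * r <= capacity]
    if not feasible:
        return None                          # channel too poor for any candidate mode
    return max(feasible, key=lambda mr: mr[0] * mr[1])

for snr in (6, 10, 14, 18):
    print(snr, "dB ->", select_mode(snr))
```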
Experimental Evaluation of Adaptive Modulation and Coding in MIMO WiMAX with Limited Feedback
NASA Astrophysics Data System (ADS)
Mehlführer, Christian; Caban, Sebastian; Rupp, Markus
2007-12-01
We evaluate the throughput performance of an OFDM WiMAX (IEEE 802.16-2004, Section 8.3) transmission system with adaptive modulation and coding (AMC) by outdoor measurements. The standard-compliant AMC utilizes a 3-bit feedback for SISO and Alamouti-coded MIMO transmissions. By applying a 6-bit feedback and spatial multiplexing with individual AMC on the two transmit antennas, the data throughput can be increased significantly at large SNR values. Our measurements show that at small SNR values, a single-antenna transmission often outperforms an Alamouti transmission. We found that this effect is caused by the asymmetric behavior of the wireless channel and by poor channel knowledge in the two-transmit-antenna case. Our performance evaluation is based on a measurement campaign employing the Vienna MIMO testbed. The measurement scenarios include typical outdoor-to-indoor NLOS, outdoor-to-outdoor NLOS, as well as outdoor-to-indoor LOS connections. We found that in all these scenarios, the measured throughput is far from its achievable maximum; the loss is mainly caused by the overly simple convolutional coding.
DarkBit: a GAMBIT module for computing dark matter observables and likelihoods
NASA Astrophysics Data System (ADS)
Bringmann, Torsten; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Kahlhoefer, Felix; Kvellestad, Anders; Putze, Antje; Savage, Christopher; Scott, Pat; Weniger, Christoph; White, Martin; Wild, Sebastian
2017-12-01
We introduce DarkBit, an advanced software code for computing dark matter constraints on various extensions to the Standard Model of particle physics, comprising both new native code and interfaces to external packages. This release includes a dedicated signal yield calculator for gamma-ray observations, which significantly extends current tools by implementing a cascade-decay Monte Carlo, as well as a dedicated likelihood calculator for current and future experiments ( gamLike). This provides a general solution for studying complex particle physics models that predict dark matter annihilation to a multitude of final states. We also supply a direct detection package that models a large range of direct detection experiments ( DDCalc), and that provides the corresponding likelihoods for arbitrary combinations of spin-independent and spin-dependent scattering processes. Finally, we provide custom relic density routines along with interfaces to DarkSUSY, micrOMEGAs, and the neutrino telescope likelihood package nulike. DarkBit is written in the framework of the Global And Modular Beyond the Standard Model Inference Tool ( GAMBIT), providing seamless integration into a comprehensive statistical fitting framework that allows users to explore new models with both particle and astrophysics constraints, and a consistent treatment of systematic uncertainties. In this paper we describe its main functionality, provide a guide to getting started quickly, and show illustrative examples for results obtained with DarkBit (both as a stand-alone tool and as a GAMBIT module). This includes a quantitative comparison between two of the main dark matter codes ( DarkSUSY and micrOMEGAs), and application of DarkBit 's advanced direct and indirect detection routines to a simple effective dark matter model.
Coding for Communication Channels with Dead-Time Constraints
NASA Technical Reports Server (NTRS)
Moision, Bruce; Hamkins, Jon
2004-01-01
Coding schemes have been designed and investigated specifically for optical and electronic data-communication channels in which information is conveyed via pulse-position modulation (PPM) subject to dead-time constraints. These schemes involve the use of error-correcting codes concatenated with codes denoted constrained codes. These codes are decoded using an iterative method. In pulse-position modulation, time is partitioned into frames of M slots of equal duration. Each frame contains one pulsed slot (all others are non-pulsed). For a given channel, the dead-time constraints are defined as a maximum and a minimum on the allowable time between pulses. For example, if a Q-switched laser is used to transmit the pulses, then the minimum allowable dead time is the time needed to recharge the laser for the next pulse. In the case of bits recorded on a magnetic medium, the minimum allowable time between pulses depends on the recording/playback speed and the minimum distance between pulses needed to prevent interference between adjacent bits during readout. The maximum allowable dead time for a given channel is the maximum time for which it is possible to satisfy the requirement to synchronize slots. In mathematical shorthand, the dead-time constraints for a given channel are represented by the pair of integers (d,k), where d is the minimum allowable number of zeroes between ones and k is the maximum allowable number of zeroes between ones. A system of the type to which the present schemes apply is represented by a binary-input, real-valued-output channel model illustrated in the figure. At the transmitting end, information bits are first encoded by use of an error-correcting code, then further encoded by use of a constrained code. Several constrained codes for channels subject to constraints of (d,infinity) have been investigated theoretically and computationally. The baseline codes chosen for purposes of comparison were simple PPM codes characterized by M-slot PPM frames separated by d-slot dead times.
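A minimal sketch of the (d,k) bookkeeping described above, together with the baseline mapping of PPM frames separated by d-slot dead times; the helper names and example parameters are illustrative only:

```python
def satisfies_dk(bits, d, k):
    """Check that the number of zeros between consecutive ones lies in [d, k].
    Use k = float('inf') for a (d, infinity) constraint."""
    gaps, run = [], None
    for b in bits:
        if b:
            if run is not None:
                gaps.append(run)
            run = 0
        elif run is not None:
            run += 1
    return all(d <= g <= k for g in gaps)

# baseline scheme from the abstract: M-slot PPM frames separated by d dead slots
def ppm_with_deadtime(symbols, M, d):
    out = []
    for s in symbols:
        frame = [0] * M
        frame[s] = 1                      # one pulsed slot per frame
        out.extend(frame + [0] * d)       # followed by d dead slots
    return out

seq = ppm_with_deadtime([2, 0, 3], M=4, d=2)
print(seq, satisfies_dk(seq, d=2, k=float('inf')))
```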
NASA Technical Reports Server (NTRS)
Psiaki, Mark L. (Inventor); Kintner, Jr., Paul M. (Inventor); Ledvina, Brent M. (Inventor); Powell, Steven P. (Inventor)
2007-01-01
A real-time software receiver that executes on a general purpose processor. The software receiver includes data acquisition and correlator modules that perform, in place of hardware correlation, baseband mixing and PRN code correlation using bit-wise parallelism.
NASA Technical Reports Server (NTRS)
Psiaki, Mark L. (Inventor); Ledvina, Brent M. (Inventor); Powell, Steven P. (Inventor); Kintner, Jr., Paul M. (Inventor)
2006-01-01
A real-time software receiver that executes on a general purpose processor. The software receiver includes data acquisition and correlator modules that perform, in place of hardware correlation, baseband mixing and PRN code correlation using bit-wise parallelism.
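The patent abstracts above only state that PRN correlation is performed with bit-wise parallelism. As a hedged sketch of the general idea (not the patented design), the snippet below packs one-bit-quantized samples and chips into Python integers so that a single XOR plus a popcount replaces a sample-by-sample multiply-accumulate; pack_bits and correlate are illustrative names.

```python
def pack_bits(bits):
    # Pack a list of 0/1 values into one integer, MSB first.
    word = 0
    for b in bits:
        word = (word << 1) | (b & 1)
    return word

def correlate(signal_bits, prn_bits):
    # One-bit correlation: agreements minus disagreements via XOR + popcount.
    n = len(signal_bits)
    x = pack_bits(signal_bits) ^ pack_bits(prn_bits)   # 1 where signs disagree
    disagreements = bin(x).count("1")
    return n - 2 * disagreements

prn = [1, 0, 1, 1, 0, 0, 1, 0]
print(correlate(prn, prn))                 # 8: perfect alignment
print(correlate(prn, prn[1:] + prn[:1]))   # low correlation for a shifted replica
```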
Pulse Code Modulation (PCM) encoder handbook for Aydin Vector MMP-900 series system
NASA Technical Reports Server (NTRS)
Raphael, David
1995-01-01
This handbook explicates the hardware and software properties of a time division multiplex system. This system is used to sample analog and digital data. The data is then merged with frame synchronization information to produce a serial pulse coded modulation (PCM) bit stream. Information in this handbook is required by users to design compatible interfaces and assure effective utilization of this encoder system. Aydin Vector provides all of the components for these systems to Goddard Space Flight Center/Wallops Flight Facility.
Interleaved concatenated codes: New perspectives on approaching the Shannon limit
Viterbi, A. J.; Viterbi, A. M.; Sindhushayana, N. T.
1997-01-01
The last few years have witnessed a significant decrease in the gap between the Shannon channel capacity limit and what is practically achievable. Progress has resulted from novel extensions of previously known coding techniques involving interleaved concatenated codes. A considerable body of simulation results is now available, supported by an important but limited theoretical basis. This paper presents a computational technique which further ties simulation results to the known theory and reveals a considerable reduction in the complexity required to approach the Shannon limit. PMID:11038568
Performance of Low-Density Parity-Check Coded Modulation
NASA Technical Reports Server (NTRS)
Hamkins, Jon
2010-01-01
This paper reports the simulated performance of each of the nine accumulate-repeat-4-jagged-accumulate (AR4JA) low-density parity-check (LDPC) codes [3] when used in conjunction with binary phase-shift-keying (BPSK), quadrature PSK (QPSK), 8-PSK, 16-ary amplitude PSK (16-APSK), and 32-APSK. We also report the performance under various mappings of bits to modulation symbols, 16-APSK and 32-APSK ring scalings, log-likelihood ratio (LLR) approximations, and decoder variations. One of the simple and well-performing LLR approximations can be expressed in a general equation that applies to all of the modulation types.
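The abstract does not reproduce the LLR equation itself; as a hedged illustration of the kind of max-log LLR approximation commonly used with coded modulation, the sketch below computes per-bit LLRs for a Gray-mapped QPSK constellation under complex AWGN. The constellation, the noise-variance convention (sigma2 per real dimension), and the sign convention are assumptions, not the paper's general equation.

```python
import numpy as np

CONST = {                      # bits (b1, b0) -> symbol, Gray mapping (assumed)
    (0, 0):  1 + 1j, (0, 1):  1 - 1j,
    (1, 1): -1 - 1j, (1, 0): -1 + 1j,
}

def maxlog_llrs(r, sigma2):
    # Max-log approximation: LLR_i ~ (min_{bit=1} |r-s|^2 - min_{bit=0} |r-s|^2) / (2*sigma2)
    llrs = []
    for i in range(2):                       # one LLR per bit position
        d0 = min(abs(r - s) ** 2 for b, s in CONST.items() if b[i] == 0)
        d1 = min(abs(r - s) ** 2 for b, s in CONST.items() if b[i] == 1)
        llrs.append((d1 - d0) / (2 * sigma2))  # > 0 favours bit = 0
    return llrs

print(np.round(maxlog_llrs(0.9 + 1.1j, sigma2=0.5), 2))  # both LLRs positive
```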
In-pixel conversion with a 10 bit SAR ADC for next generation X-ray FELs
NASA Astrophysics Data System (ADS)
Lodola, L.; Batignani, G.; Benkechkache, M. A.; Bettarini, S.; Casarosa, G.; Comotti, D.; Dalla Betta, G. F.; Fabris, L.; Forti, F.; Grassi, M.; Latreche, S.; Malcovati, P.; Manghisoni, M.; Mendicino, R.; Morsani, F.; Paladino, A.; Pancheri, L.; Paoloni, E.; Ratti, L.; Re, V.; Rizzo, G.; Traversi, G.; Vacchi, C.; Verzellesi, G.; Xu, H.
2016-07-01
This work presents the design of an interleaved Successive Approximation Register (SAR) ADC, part of the readout channel for the PixFEL detector. The PixFEL project aims at substantially advancing the state-of-the-art in the field of 2D X-ray imaging for applications at the next generation Free Electron Laser (FEL) facilities. For this purpose, the collaboration is developing the fundamental microelectronic building blocks for the readout channel. This work focuses on the design of the ADC carried out in a 65 nm CMOS technology. To obtain a good tradeoff between power consumption, conversion speed and area occupation, an interleaved SAR ADC architecture was adopted.
Trellis coding with Continuous Phase Modulation (CPM) for satellite-based land-mobile communications
NASA Technical Reports Server (NTRS)
1989-01-01
This volume of the final report summarizes the results of our studies on the satellite-based mobile communications project. It includes: a detailed analysis, design, and simulations of trellis coded, full/partial response CPM signals with/without interleaving over various Rician fading channels; analysis and simulation of computational cutoff rates for coherent, noncoherent, and differential detection of CPM signals; optimization of the complete transmission system; analysis and simulation of power spectrum of the CPM signals; design and development of a class of Doppler frequency shift estimators; design and development of a symbol timing recovery circuit; and breadboard implementation of the transmission system. Studies prove the suitability of the CPM system for mobile communications.
Further Developments in the Communication Link and Error Analysis (CLEAN) Simulator
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.
1995-01-01
During the period 1 July 1993 - 30 June 1994, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed. Many of these were reported in the Semi-Annual report dated December 1993 which has been included in this report in Appendix A. Since December 1993, a number of additional modules have been added involving Unit-Memory Convolutional codes (UMC). These are: (1) Unit-Memory Convolutional Encoder module (UMCEncd); (2) Hard decision Unit-Memory Convolutional Decoder using the Viterbi decoding algorithm (VitUMC); and (3) a number of utility modules designed to investigate the performance of UMC's, such as the UMC column distance function (UMCdc), UMC free distance function (UMCdfree), UMC row distance function (UMCdr), and UMC Transformation (UMCTrans). The study of UMC's was driven, in part, by the desire to investigate high-rate convolutional codes which are better suited as inner codes for a concatenated coding scheme. A number of high-rate UMC's were found which are good candidates for inner codes. Besides the further developments of the simulation, a study was performed to construct a table of the best known Unit-Memory Convolutional codes. Finally, a preliminary study of the usefulness of the Periodic Convolutional Interleaver (PCI) was completed and documented in a Technical note dated March 17, 1994. This technical note has also been included in this final report.
NASA Technical Reports Server (NTRS)
Boyd, R. W.; Hartman, W. F.
1992-01-01
The project's objective is to develop an advanced high speed coding technology that provides substantial coding gains with limited bandwidth expansion for several common modulation types. The resulting technique is applicable to several continuous and burst communication environments. Decoding provides a significant gain with hard decisions alone and can utilize soft decision information when available from the demodulator to increase the coding gain. The hard decision codec will be implemented using a single application specific integrated circuit (ASIC) chip. It will be capable of coding and decoding as well as some formatting and synchronization functions at data rates up to 300 megabits per second (Mb/s). Code rate is a function of the block length and can vary from 7/8 to 15/16. Length of coded bursts can be any multiple of 32 that is greater than or equal to 256 bits. Coding may be switched in or out on a burst-by-burst basis with no change in the throughput delay. Reliability information in the form of 3-bit (8-level) soft decisions can be exploited using applique circuitry around the hard decision codec. This applique circuitry will be discrete logic in the present contract. However, ease of transition to LSI is one of the design guidelines. Discussed here is the selected coding technique. Its application to some communication systems is described. Performance with 4, 8, and 16-ary Phase Shift Keying (PSK) modulation is also presented.
NASA Astrophysics Data System (ADS)
Passas, Georgios; Freear, Steven; Fawcett, Darren
2010-08-01
Orthogonal frequency division multiplexing (OFDM)-based feed-forward space-time trellis code (FFSTTC) encoders can be synthesised as very high speed integrated circuit hardware description language (VHDL) designs. Evaluation of their FPGA implementation can lead to conclusions that help a designer to decide the optimum implementation, given the encoder structural parameters. VLSI architectures based on 1-bit multipliers and look-up tables (LUTs) are compared in terms of FPGA slices and block RAMs (area), as well as in terms of minimum clock period (speed). Area and speed graphs versus encoder memory order are provided for quadrature phase shift keying (QPSK) and 8 phase shift keying (8-PSK) modulation and two transmit antennas, revealing best implementation under these conditions. The effect of number of modulation bits and transmit antennas on the encoder implementation complexity is also investigated.
Zou, Ding; Djordjevic, Ivan B
2016-09-05
In this paper, we propose a rate-adaptive FEC scheme based on LDPC codes together with its software reconfigurable unified FPGA architecture. By FPGA emulation, we demonstrate that the proposed class of rate-adaptive LDPC codes based on shortening with an overhead from 25% to 42.9% provides a coding gain ranging from 13.08 dB to 14.28 dB at a post-FEC BER of 10^-15 for BPSK transmission. In addition, the proposed rate-adaptive LDPC coding combined with higher-order modulations has been demonstrated, including QPSK, 8-QAM, 16-QAM, 32-QAM, and 64-QAM, which covers a wide range of signal-to-noise ratios. Furthermore, we apply unequal error protection by employing different LDPC codes on different bits in 16-QAM and 64-QAM, which results in an additional 0.5 dB gain compared to conventional LDPC coded modulation with the same code rate of the corresponding LDPC code.
A full-duplex CATV/wireless-over-fiber lightwave transmission system.
Li, Chung-Yi; Lu, Hai-Han; Ying, Cheng-Ling; Cheng, Chun-Jen; Lin, Che-Yu; Wan, Zhi-Wei; Chen, Jian-Hua
2015-04-06
A full-duplex CATV/wireless-over-fiber lightwave transmission system consisting of one broadband light source (BLS), two optical interleavers (ILs), one intensity modulator, and one phase modulator is proposed and experimentally demonstrated. The downstream light is optically promoted from a 10 Gbps/25 GHz microwave (MW) data signal to 10 Gbps/100 GHz and 10 Gbps/50 GHz millimeter-wave (MMW) data signals in fiber-wireless convergence, and intensity-modulated with a 50-550 MHz CATV signal. For up-link transmission, the downstream light is phase-remodulated with a 10 Gbps/25 GHz MW data signal in fiber-wireless convergence. Over a 40-km single-mode fiber (SMF) and a 10-m radio frequency (RF) wireless transport, the bit error rate (BER), carrier-to-noise ratio (CNR), composite second-order (CSO), and composite triple-beat (CTB) performance are observed to be good in such full-duplex CATV/wireless-over-fiber lightwave transmission systems. This full-duplex 100-GHz/50-GHz/25-GHz/550-MHz lightwave transmission system is an attractive alternative: it not only advances the integration of fiber backbone and CATV/wireless feeder networks, but also provides a communication channel for higher data rates and bandwidth.
Algorithms for a very high speed universal noiseless coding module
NASA Technical Reports Server (NTRS)
Rice, Robert F.; Yeh, Pen-Shu
1991-01-01
The algorithmic definitions and performance characterizations are presented for a high performance adaptive coding module. Operation of at least one of these (single chip) implementations is expected to exceed 500 Mbits/s under laboratory conditions. A companion decoding module should operate at up to half the coder's rate. The module incorporates a powerful noiseless coder for Standard Form Data Sources (i.e., sources whose symbols can be represented by uncorrelated non-negative integers where the smaller integers are more likely than the larger ones). Performance close to data entropies can be expected over a dynamic range of from 1.5 to 12-14 bits/sample (depending on the implementation).
Concatenated Coding Using Trellis-Coded Modulation
NASA Technical Reports Server (NTRS)
Thompson, Michael W.
1997-01-01
In the late seventies and early eighties a technique known as Trellis Coded Modulation (TCM) was developed for providing spectrally efficient error correction coding. Instead of adding redundant information in the form of parity bits, redundancy is added at the modulation stage, thereby increasing bandwidth efficiency. A digital communications system can be designed to use bandwidth-efficient multilevel/phase modulation such as Amplitude Shift Keying (ASK), Phase Shift Keying (PSK), Differential Phase Shift Keying (DPSK) or Quadrature Amplitude Modulation (QAM). Performance gain can be achieved by increasing the number of signals over the corresponding uncoded system to compensate for the redundancy introduced by the code. A considerable amount of research and development has been devoted toward developing good TCM codes for severely bandlimited applications. More recently, the use of TCM for satellite and deep space communications applications has received increased attention. This report describes the general approach of using a concatenated coding scheme that features TCM and RS coding. Results have indicated that substantial (6-10 dB) performance gains can be achieved with this approach with comparatively little bandwidth expansion. Since all of the bandwidth expansion is due to the RS code, we see that TCM-based concatenated coding results in roughly 10-50% bandwidth expansion, compared to 70-150% expansion for a similar concatenated scheme that uses a convolutional code. We stress that combined coding and modulation optimization is important for achieving performance gains while maintaining spectral efficiency.
Parallel Processing of Big Point Clouds Using Z-Order Partitioning
NASA Astrophysics Data System (ADS)
Alis, C.; Boehm, J.; Liu, K.
2016-06-01
As laser scanning technology improves and costs are coming down, the amount of point cloud data being generated can be prohibitively difficult and expensive to process on a single machine. This data explosion is not only limited to point cloud data. Voluminous amounts of high-dimensionality and quickly accumulating data, collectively known as Big Data, such as those generated by social media, Internet of Things devices and commercial transactions, are becoming more prevalent as well. New computing paradigms and frameworks are being developed to efficiently handle the processing of Big Data, many of which utilize a compute cluster composed of several commodity grade machines to process chunks of data in parallel. A central concept in many of these frameworks is data locality. By its nature, Big Data is large enough that the entire dataset would not fit on the memory and hard drives of a single node, hence replicating the entire dataset to each worker node is impractical. The data must then be partitioned across worker nodes in a manner that minimises data transfer across the network. This is a challenge for point cloud data because there exist different ways to partition data and they may require data transfer. We propose a partitioning based on Z-order, which is a form of locality-sensitive hashing. The Z-order or Morton code is computed by dividing each dimension to form a grid then interleaving the binary representation of each dimension. For example, the Z-order code for the grid square with coordinates (x = 1 = 01_2, y = 3 = 11_2) is 1011_2 = 11. The number of points in each partition is controlled by the number of bits per dimension: the more bits, the fewer the points. The number of bits per dimension also controls the level of detail with more bits yielding finer partitioning. We present this partitioning method by implementing it on Apache Spark and investigating how different parameters affect the accuracy and running time of the k nearest neighbour algorithm for a hemispherical and a triangular wave point cloud.
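A small sketch of the Morton/Z-order computation described above, reproducing the worked example (x = 1, y = 3 yields 11); the function name and bit width are illustrative.

```python
def morton_code(x, y, bits_per_dim):
    """Interleave the bits of grid coordinates x and y (x in the low position),
    e.g. x = 0b01, y = 0b11 -> 0b1011 = 11, matching the example in the text."""
    code = 0
    for i in range(bits_per_dim):
        code |= ((x >> i) & 1) << (2 * i)        # x bit i -> even position
        code |= ((y >> i) & 1) << (2 * i + 1)    # y bit i -> odd position
    return code

print(morton_code(1, 3, bits_per_dim=2))   # 11
```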
Correlation estimation and performance optimization for distributed image compression
NASA Astrophysics Data System (ADS)
He, Zhihai; Cao, Lei; Cheng, Hui
2006-01-01
Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered as a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit plane extraction. In this work, we find that Gray code, which has been extensively used in digital modulation, is able to significantly improve the correlation between the source data and its side information. Theoretically, we analyze the behavior of Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over the existing methods for distributed image compression.
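For reference, reflected binary (Gray) code conversion is a simple XOR operation in each direction; the sketch below is a generic illustration, not the paper's bit-plane pipeline.

```python
def binary_to_gray(n):
    # Adjacent integers differ in exactly one Gray-coded bit, which is what
    # improves the bit-plane correlation exploited in the paper.
    return n ^ (n >> 1)

def gray_to_binary(g):
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

for v in range(8):
    print(v, format(binary_to_gray(v), "03b"), gray_to_binary(binary_to_gray(v)))
```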
Pulse Code Modulation (PCM) encoder handbook for Aydin Vector MMP-600 series system
NASA Technical Reports Server (NTRS)
Currier, S. F.; Powell, W. R.
1986-01-01
The hardware and software characteristics of a time division multiplex system are described. The system is used to sample analog and digital data. The data is merged with synchronization information to produce a serial pulse coded modulation (PCM) bit stream. Information presented herein is required by users to design compatible interfaces and assure effective utilization of this encoder system. GSFC/Wallops Flight Facility has flown approximately 50 of these systems through 1984 on sounding rockets with no inflight failures. Aydin Vector manufactures all of the components for these systems.
An investigation of error characteristics and coding performance
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.
1993-01-01
The first year's effort on NASA Grant NAG5-2006 was an investigation to characterize typical errors resulting from the EOS downlink. The analysis methods developed for this effort were used on test data from a March 1992 White Sands Terminal Test. The effectiveness of a concatenated coding scheme of a Reed Solomon outer code and a convolutional inner code versus a Reed Solomon only code scheme has been investigated, as well as the effectiveness of a Periodic Convolutional Interleaver in dispersing errors of certain types. The work effort consisted of development of software that allows simulation studies with the appropriate coding schemes plus either simulated data with errors or actual data with errors. The software program is entitled Communication Link Error Analysis (CLEAN) and models downlink errors, forward error correcting schemes, and interleavers.
Next generation PET data acquisition architectures
NASA Astrophysics Data System (ADS)
Jones, W. F.; Reed, J. H.; Everman, J. L.; Young, J. W.; Seese, R. D.
1997-06-01
New architectures for higher performance data acquisition in PET are proposed. Improvements are demanded primarily by three areas of advancing PET state of the art. First, larger detector arrays such as the Hammersmith ECAT(R) EXACT HR++ exceed the addressing capacity of 32 bit coincidence event words. Second, better scintillators (LSO) make depth-of-interaction (DOI) and time-of-flight (TOF) operation more practical. Third, fully optimized single photon attenuation correction requires higher rates of data collection. New technologies which enable the proposed third generation Real Time Sorter (RTS III) include: (1) 80 Mbyte/sec Fibre Channel RAID disk systems, (2) PowerPC on both VMEbus and PCI Local bus, and (3) quadruple interleaved DRAM controller designs. Data acquisition flexibility is enhanced through a wider 64 bit coincidence event word. PET methodology support includes DOI (6 bits), TOF (6 bits), multiple energy windows (6 bits), 512×512 sinogram indexes (18 bits), and 256 crystal rings (16 bits). Throughput of 10 M events/sec is expected for list-mode data collection as well as both on-line and replay histogramming. Fully efficient list-mode storage for each PET application is provided by real-time bit packing of only the active event word bits. Real-time circuits provide DOI rebinning.
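As a hedged sketch of the real-time bit-packing idea for the 64-bit coincidence event word, the code below packs and unpacks the fields named in the abstract; the field ordering and the helper names are hypothetical, not the RTS III layout.

```python
# Field widths follow the abstract (DOI 6, TOF 6, energy windows 6,
# sinogram indexes 18, crystal rings 16 bits); the ordering is assumed.
FIELDS = [("doi", 6), ("tof", 6), ("energy", 6), ("sinogram", 18), ("rings", 16)]

def pack_event(**values):
    word, offset = 0, 0
    for name, width in FIELDS:
        v = values.get(name, 0)
        assert 0 <= v < (1 << width), f"{name} out of range"
        word |= v << offset
        offset += width
    return word

def unpack_event(word):
    out, offset = {}, 0
    for name, width in FIELDS:
        out[name] = (word >> offset) & ((1 << width) - 1)
        offset += width
    return out

w = pack_event(doi=5, tof=33, energy=2, sinogram=131071, rings=200)
print(hex(w), unpack_event(w))
```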
NASA Astrophysics Data System (ADS)
Miki, Nobuhiko; Kishiyama, Yoshihisa; Higuchi, Kenichi; Sawahashi, Mamoru; Nakagawa, Masao
In the Evolved UTRA (UMTS Terrestrial Radio Access) downlink, Orthogonal Frequency Division Multiplexing (OFDM) based radio access was adopted because of its inherent immunity to multipath interference and flexible accommodation of different spectrum arrangements. This paper presents the optimum adaptive modulation and channel coding (AMC) scheme when multiple resource blocks (RBs) are simultaneously assigned to the same user, assuming frequency and time domain channel-dependent scheduling in the downlink OFDMA radio access with single-antenna transmission. We start by presenting selection methods for the modulation and coding scheme (MCS) employing mutual information, both for RB-common and RB-dependent modulation schemes. Simulation results show that, irrespective of the application of power adaptation to RB-dependent modulation, the improvement in the achievable throughput of the RB-dependent modulation scheme compared to that for the RB-common modulation scheme is slight, i.e., 4 to 5%. In addition, the number of required control signaling bits in the RB-dependent modulation scheme becomes greater than that for the RB-common modulation scheme. Therefore, we conclude that the RB-common modulation and channel coding rate scheme is preferred when multiple RBs of the same coded stream are assigned to one user in the case of single-antenna transmission.
Navigation Using Orthogonal Frequency Division Multiplexed Signals of Opportunity
2007-09-01
transmits a 32,767 bit pseudo-random "short" code that repeats 37.5 times per second. Since the pseudo-random bit pattern and modulation scheme are... correlation process takes two "sample windows," both of which are ν = 16 samples wide and are spaced N = 64 samples apart, and compares them. When the...technique in (3.4) is a necessary step in order to get a more accurate estimate of the sample shift from the symbol boundary correlator in (3.1). Figure
A 10 GS/s time-interleaved ADC in 0.25 micrometer CMOS technology
NASA Astrophysics Data System (ADS)
Aytar, Oktay; Tangel, Ali; Afacan, Engin
2017-11-01
This paper presents the design and simulation of a 4-bit 10 GS/s time-interleaved ADC in 0.25 micrometer CMOS technology. The designed TI-ADC has 4 channels, each including a 4-bit flash ADC, with area and power efficiency as the targets. Therefore, basic standard cell logic gates are preferred. Meanwhile, the aspect ratios in the gate designs are kept as small as possible considering the speed performance. In the literature, design details of the timing control circuits have not been provided, whereas the proposed timing control process is comprehensively explained and its design details are clearly presented in this study. The proposed circuits producing consecutive pulses for timing control of the input S/H switches (i.e., the analog demultiplexer front-end circuitry) and the very fast digital multiplexer unit at the output are the main contributions of this study. The simulation results include +0.26/-0.22 LSB of DNL and +0.01/-0.44 LSB of INL, a layout area of 0.27 mm2, and a power consumption of 270 mW. The reported power consumption, DNL, and INL measures are observed at a 100 MHz input with a 10 GS/s sampling rate.
Code-modulated interferometric imaging system using phased arrays
NASA Astrophysics Data System (ADS)
Chauhan, Vikas; Greene, Kevin; Floyd, Brian
2016-05-01
Millimeter-wave (mm-wave) imaging provides compelling capabilities for security screening, navigation, and biomedical applications. Traditional scanned or focal-plane mm-wave imagers are bulky and costly. In contrast, phased-array hardware developed for mass-market wireless communications and automotive radar promise to be extremely low cost. In this work, we present techniques which can allow low-cost phased-array receivers to be reconfigured or re-purposed as interferometric imagers, removing the need for custom hardware and thereby reducing cost. Since traditional phased arrays power combine incoming signals prior to digitization, orthogonal code-modulation is applied to each incoming signal using phase shifters within each front-end and two-bit codes. These code-modulated signals can then be combined and processed coherently through a shared hardware path. Once digitized, visibility functions can be recovered through squaring and code-demultiplexing operations. Provided that codes are selected such that the product of two orthogonal codes is a third unique and orthogonal code, it is possible to demultiplex complex visibility functions directly. As such, the proposed system modulates incoming signals but demodulates desired correlations. In this work, we present the operation of the system, a validation of its operation using behavioral models of a traditional phased array, and a benchmarking of the code-modulated interferometer against traditional interferometer and focal-plane arrays.
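A toy numerical sketch of the code-modulation and demultiplexing principle stated above (the product of two orthogonal codes is a third orthogonal code); the Walsh codes, amplitudes, and real-valued signal model are simplifications, not the authors' front-end hardware.

```python
import numpy as np

H = np.array([[1, 1, 1, 1],
              [1, -1, 1, -1],
              [1, 1, -1, -1],
              [1, -1, -1, 1]])          # rows = Walsh codes of length 4

c1, c2 = H[1], H[2]
s1, s2 = 0.8, 0.5                        # toy antenna signal amplitudes
combined = s1 * c1 + s2 * c2             # code-modulated and power-combined
squared = combined ** 2                  # square-law detection

# The cross-term (the visibility) is recovered by demodulating with the
# product code c1*c2, which is itself another orthogonal Walsh code.
visibility = np.mean(squared * (c1 * c2)) / 2
print(visibility, s1 * s2)               # both print 0.4
```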
NASA Astrophysics Data System (ADS)
Pang, Yongqiang; Li, Yongfeng; Zhang, Jieqiu; Chen, Hongya; Xu, Zhuo; Qu, Shaobo
2018-06-01
Anisotropic transmissive coding metamaterials (CMMs) have been designed and demonstrated in this work. High-efficiency transmission with amplitudes close to unity is achieved by ultrathin metallic tapered blade structures, on which incident waves can be highly coupled into spoof surface plasmon polaritons (SSPPs). The transmission phase can therefore be manipulated with considerable freedom by designing the dispersion of the SSPPs. These tapered blade structures are designed as the anisotropic unit cells of the CMMs. Two 1-bit anisotropic CMMs with different coding sequences were first designed and simulated, and then a 2-bit anisotropic CMM was designed and measured experimentally. The measured results agree well with the simulations. It is expected that this work provides an alternative method for designing transmissive CMMs, and may find potential applications in the beam forming technique.
Long-distance continuous-variable quantum key distribution with a Gaussian modulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jouguet, Paul; SeQureNet, 23 avenue d'Italie, F-75013 Paris; Kunz-Jacques, Sebastien
2011-12-15
We designed high-efficiency error correcting codes allowing us to extract an errorless secret key in a continuous-variable quantum key distribution (CVQKD) protocol using a Gaussian modulation of coherent states and a homodyne detection. These codes are available for a wide range of signal-to-noise ratios on an additive white Gaussian noise channel with a binary modulation and can be combined with a multidimensional reconciliation method proven secure against arbitrary collective attacks. This improved reconciliation procedure considerably extends the secure range of a CVQKD with a Gaussian modulation, giving a secret key rate of about 10^-3 bit per pulse at a distance of 120 km for reasonable physical parameters.
NASA Astrophysics Data System (ADS)
Sabir, Zeeshan; Babar, M. Inayatullah; Shah, Syed Waqar
2012-12-01
Mobile ad hoc network (MANET) refers to an arrangement of wireless mobile nodes that have the tendency of dynamically and freely self-organizing into temporary and arbitrary network topologies. Orthogonal frequency division multiplexing (OFDM) is the foremost choice for MANET system designers at the physical layer due to its inherent property of high data rate transmission, which corresponds to its high spectrum efficiency. The downside of OFDM includes its sensitivity to synchronization errors (frequency offsets and symbol time). Most present-day techniques employing OFDM for data transmission support mobility as one of the primary features. This mobility causes small frequency offsets due to the production of Doppler frequencies, resulting in intercarrier interference (ICI), which degrades signal quality through crosstalk between the subcarriers of the OFDM symbol. An efficient frequency-domain block-type pilot-assisted ICI mitigation scheme is proposed in this article which nullifies the effect of channel frequency offsets from the received OFDM symbols. The second problem addressed in this article is the noise induced into the received symbol by different sources, increasing its bit error rate and making it unsuitable for many applications. Forward-error-correcting turbo codes are employed in the proposed model, adding redundant bits that are later used for error detection and correction. At the receiver end, the maximum a posteriori (MAP) decoding algorithm is implemented using two component MAP decoders. These decoders exchange interleaved extrinsic soft information with each other in the form of log-likelihood ratios, improving the previous estimate of each decoded bit in every iteration.
Bit selection using field drilling data and mathematical investigation
NASA Astrophysics Data System (ADS)
Momeni, M. S.; Ridha, S.; Hosseini, S. J.; Meyghani, B.; Emamian, S. S.
2018-03-01
A drilling process cannot be completed without a drill bit, so bit selection is an important task in the drilling optimization process. Selecting a bit is an important issue in planning and designing a well, simply because the drill bit accounts for a large share of the total drilling cost. To perform this task, a back-propagation ANN model is developed. The model is trained using data from several wells, drawing on drilling bit records from offset wells. In this project, two ANN models are developed: one to find the predicted IADC bit code and one to find the predicted ROP. Stage 1 finds the IADC bit code using all the given field data, with the targeted IADC bit code as the output. Stage 2 finds the predicted ROP values using the IADC bit code obtained in Stage 1. In Stage 3, the predicted ROP value is fed back into the data set to obtain the predicted IADC bit code, which is the final output. Thus, in the end, there are two models that give the predicted ROP values and the predicted IADC bit code values.
An adaptable binary entropy coder
NASA Technical Reports Server (NTRS)
Kiely, A.; Klimesh, M.
2001-01-01
We present a novel entropy coding technique which is based on recursive interleaving of variable-to-variable length binary source codes. We discuss code design and performance estimation methods, as well as practical encoding and decoding algorithms.
Soft-information flipping approach in multi-head multi-track BPMR systems
NASA Astrophysics Data System (ADS)
Warisarn, C.; Busyatras, W.; Myint, L. M. M.
2018-05-01
Inter-track interference is one of the most severe impairments in bit-patterned media recording systems. This impairment can be effectively handled by a modulation code and a multi-head array jointly processing multiple tracks; however, such a modulation constraint has never been utilized to improve the soft-information. Therefore, this paper proposes the utilization of modulation codes with an encoded constraint defined by the criteria for soft-information flipping during a three-track data detection process. Moreover, we also investigate the optimal offset position of the readheads to provide the most improvement in system performance. The simulation results indicate that the proposed systems, with and without position jitter, are significantly superior to uncoded systems.
Optimizations of a Hardware Decoder for Deep-Space Optical Communications
NASA Technical Reports Server (NTRS)
Cheng, Michael K.; Nakashima, Michael A.; Moision, Bruce E.; Hamkins, Jon
2007-01-01
The National Aeronautics and Space Administration has developed a capacity-approaching modulation and coding scheme that comprises a serial concatenation of an inner accumulate pulse-position modulation (PPM) and an outer convolutional code [or serially concatenated PPM (SCPPM)] for deep-space optical communications. Decoding of this code uses the turbo principle. However, due to the nonbinary property of SCPPM, a straightforward application of classical turbo decoding is very inefficient. Here, we present various optimizations applicable in hardware implementation of the SCPPM decoder. More specifically, we feature a Super Gamma computation to efficiently handle parallel trellis edges, a pipeline-friendly 'maxstar top-2' circuit that reduces the max-only approximation penalty, a low-latency cyclic redundancy check circuit for window-based decoders, and a high-speed algorithmic polynomial interleaver that leads to memory savings. Using the featured optimizations, we implement a 6.72 megabits-per-second (Mbps) SCPPM decoder on a single field-programmable gate array (FPGA). Compared to the current data rate of 256 kilobits per second from Mars, the SCPPM coded scheme represents a throughput increase of more than twenty-six-fold. Extension to a 50-Mbps decoder on a board with multiple FPGAs follows naturally. We show through hardware simulations that the SCPPM coded system can operate within 1 dB of the Shannon capacity at nominal operating conditions.
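For context, the 'max-only' approximation and the exact max-star (Jacobian logarithm) it approximates are shown below in a short sketch; the numbers are illustrative and the 'top-2' pipelining trick itself is not reproduced.

```python
import math

def maxstar(a, b):
    # Exact Jacobian logarithm: ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a-b|)
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_only(a, b):
    # The cheaper approximation whose penalty the 'maxstar top-2' circuit reduces
    return max(a, b)

a, b = 1.2, 0.9
print(maxstar(a, b))    # ~1.754
print(max_only(a, b))   # 1.2 (approximation error ~0.554)
```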
Analysis of Optical CDMA Signal Transmission: Capacity Limits and Simulation Results
NASA Astrophysics Data System (ADS)
Garba, Aminata A.; Yim, Raymond M. H.; Bajcsy, Jan; Chen, Lawrence R.
2005-12-01
We present performance limits of the optical code-division multiple-access (OCDMA) networks. In particular, we evaluate the information-theoretical capacity of the OCDMA transmission when single-user detection (SUD) is used by the receiver. First, we model the OCDMA transmission as a discrete memoryless channel, evaluate its capacity when binary modulation is used in the interference-limited (noiseless) case, and extend this analysis to the case when additive white Gaussian noise (AWGN) is corrupting the received signals. Next, we analyze the benefits of using nonbinary signaling for increasing the throughput of optical CDMA transmission. It turns out that up to a fourfold increase in the network throughput can be achieved with practical numbers of modulation levels in comparison to the traditionally considered binary case. Finally, we present BER simulation results for channel coded binary and M-ary OCDMA transmission systems. In particular, we apply turbo codes concatenated with Reed-Solomon codes so that up to several hundred concurrent optical CDMA users can be supported at low target bit error rates. We observe that unlike conventional OCDMA systems, turbo-empowered OCDMA can allow overloading (more active users than the length of the spreading sequences) with good bit error rate system performance.
Wittevrongel, Benjamin; Van Wolputte, Elia; Van Hulle, Marc M
2017-11-08
When encoding visual targets using various lagged versions of a pseudorandom binary sequence of luminance changes, the EEG signal recorded over the viewer's occipital pole exhibits so-called code-modulated visual evoked potentials (cVEPs), the phase lags of which can be tied to these targets. The cVEP paradigm has enjoyed interest in the brain-computer interfacing (BCI) community for the reported high information transfer rates (ITR, in bits/min). In this study, we introduce a novel decoding algorithm based on spatiotemporal beamforming, and show that this algorithm is able to accurately identify the gazed target. Especially for a small number of repetitions of the coding sequence, our beamforming approach significantly outperforms an optimised support vector machine (SVM)-based classifier, which is considered state-of-the-art in cVEP-based BCI. In addition to the traditional 60 Hz stimulus presentation rate for the coding sequence, we also explore the 120 Hz rate, and show that the latter enables faster communication, with a maximal median ITR of 172.87 bits/min. Finally, we also report on a transition effect in the EEG signal following the onset of the stimulus sequence, and recommend to exclude the first 150 ms of the trials from decoding when relying on a single presentation of the stimulus sequence.
Soft-decision decoding techniques for linear block codes and their error performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu
1996-01-01
The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC). The bound evaluates the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability of maximum likelihood decoding of binary linear codes. The fourth and final paper included in this report concerns the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.
Wei, Qingguo; Liu, Yonghui; Gao, Xiaorong; Wang, Yijun; Yang, Chen; Lu, Zongwu; Gong, Huayuan
2018-06-01
In an existing brain-computer interface (BCI) based on code modulated visual evoked potentials (c-VEP), a method with which to increase the number of targets without increasing code length has not yet been established. In this paper, a novel c-VEP BCI paradigm, namely, grouping modulation with different codes that have good autocorrelation and cross-correlation properties, is presented to increase the number of targets and the information transfer rate (ITR). All stimulus targets are divided into several groups and each group of targets is modulated by a distinct pseudorandom binary code and its circularly shifted codes. Canonical correlation analysis is applied to each group to yield a spatial filter, and templates for all targets in a group are constructed from the spatially filtered signals. Template matching is applied to each group and the attended target is recognized by finding the maximal correlation coefficient over all groups. Based on this paradigm, a BCI with a total of 48 targets divided into three groups was implemented; 12 and 10 subjects participated in an offline and a simulated online experiment, respectively. Data analysis of the offline experiment showed that the paradigm can greatly increase the number of targets from 16 to 48 at the cost of a slight compromise in accuracy (95.49% vs. 92.85%). Results of the simulated online experiment suggested that although the averaged accuracy across subjects for all three groups of targets was lower than that for a single group of targets (91.67% vs. 94.9%), the average ITR of the former was substantially higher than that of the latter (181 bits/min vs. 135.6 bits/min) due to the large increase in the number of targets. The proposed paradigm significantly improves the performance of the c-VEP BCI, and thereby facilitates its practical applications such as high-speed spelling.
Improved Iris Recognition through Fusion of Hamming Distance and Fragile Bit Distance.
Hollingsworth, Karen P; Bowyer, Kevin W; Flynn, Patrick J
2011-12-01
The most common iris biometric algorithm represents the texture of an iris using a binary iris code. Not all bits in an iris code are equally consistent. A bit is deemed fragile if its value changes across iris codes created from different images of the same iris. Previous research has shown that iris recognition performance can be improved by masking these fragile bits. Rather than ignoring fragile bits completely, we consider what beneficial information can be obtained from the fragile bits. We find that the locations of fragile bits tend to be consistent across different iris codes of the same eye. We present a metric, called the fragile bit distance, which quantitatively measures the coincidence of the fragile bit patterns in two iris codes. We find that score fusion of fragile bit distance and Hamming distance works better for recognition than Hamming distance alone. To our knowledge, this is the first and only work to use the coincidence of fragile bit locations to improve the accuracy of matches.
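As a hedged sketch of how such a score fusion might look (the paper's exact definitions of the two distances and the fusion weights may differ), the code below computes a fractional Hamming distance over the consistent bits, a simple fragile-bit-distance, and a weighted sum; alpha and the mask model are assumptions.

```python
import numpy as np

def hamming_distance(code_a, code_b, frag_a, frag_b):
    usable = ~frag_a & ~frag_b                       # compare consistent bits only
    return np.count_nonzero(code_a[usable] != code_b[usable]) / usable.sum()

def fragile_bit_distance(frag_a, frag_b):
    # Fraction of positions where the two fragile-bit masks disagree
    return np.count_nonzero(frag_a != frag_b) / frag_a.size

def fused_score(code_a, code_b, frag_a, frag_b, alpha=0.8):
    hd = hamming_distance(code_a, code_b, frag_a, frag_b)
    fbd = fragile_bit_distance(frag_a, frag_b)
    return alpha * hd + (1 - alpha) * fbd            # lower score = better match

rng = np.random.default_rng(0)
code = rng.integers(0, 2, 2048, dtype=np.uint8).astype(bool)
frag = rng.random(2048) < 0.2
print(fused_score(code, code, frag, frag))           # 0.0 for identical codes
```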
Decentralized Interleaving of Paralleled Dc-Dc Buck Converters: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brian B; Rodriguez, Miguel; Sinha, Mohit
We present a decentralized control strategy that yields switch interleaving among parallel connected dc-dc buck converters without communication. The proposed method is based on the digital implementation of the dynamics of a nonlinear oscillator circuit as the controller. Each controller is fully decentralized, i.e., it only requires the locally measured output current to synthesize the pulse width modulation (PWM) carrier waveform. By virtue of the intrinsic electrical coupling between converters, the nonlinear oscillator-based controllers converge to an interleaved state with uniform phase-spacing across PWM carriers. To the knowledge of the authors, this work represents the first fully decentralized strategy for switch interleaving of paralleled dc-dc buck converters.
NASA Astrophysics Data System (ADS)
Lin, Wen-Piao; Wu, He-Long
2005-08-01
We propose a fiber-Bragg-grating (FBG)-based optical code-division multiple access passive optical network (OCDMA-PON) using a dual-baseband modulation scheme. A mathematical model is developed to study the performance of this scheme. According to the analyzed results, this scheme can tolerate a spectral power distortion (SPD) ratio of 25% with a bit error rate (BER) of 10^-9 when the modified pseudorandom noise (PN) code length is 16. Moreover, we set up a simulated system to evaluate the baseband and radio frequency (RF) band transmission characteristics. The simulation results demonstrate that our proposed OCDMA-PON can provide a cost-effective and scalable fiber-to-the-home solution.
NASA Astrophysics Data System (ADS)
Yeh, Tse-Liang; Dmitriev, Alexei; Chu, Yen-Hsyang; Jiang, Shyh-Biau; Chen, Li-Wu
The NCU-SWIP (Space Weather Instrumentation Payload) is developed for simultaneous in-situ and remote measurement of space weather parameters for cross verification. The measurements include in-situ electron density, electron temperature, magnetic field, the deceleration of the satellite due to neutral wind, and, remotely, the linear cumulative intensities of oxygen ion airglows at 135.6 nm and 630.0 nm along the flight path in forward, nadir, and backward directions for tomographic reconstruction of the electron density distribution underneath. This instrument package is suitable for a micro satellite constellation to establish nominal space weather profiles and, thus, to detect abnormal variations as signs of ionospheric disturbances induced by severe atmospheric weather, or earthquake and mantle movement, through the Lithosphere-Atmosphere-Ionosphere Coupling Mechanism. NCU-SWIP is constructed with intelligent sensor modules connected by a common bus, with their functionalities managed by an efficient distributed real-time system, LUTOS. The same hierarchy can be applied at the level of a satellite constellation. For example, with SWIPs in a constellation operating in coordination with the GNSS Occultation Experiment TriG planned for the Formosa-7 constellation, data can be cross-correlated for verification and refinement, giving real-time, stable and reliable measurements. A SWIP will be contributed to the construction of a MAI Micro Satellite for verification. The SWIP consists of two separate modules: the SWIP main control module and the SWIP-PMTomo sensor module. They are, respectively, a 1.5 kg box of W120 x L120 x H100 (mm) with a forward-facing 120 mm diameter circular disk probe on a boom top edged at 470 mm height, and a 7.2 kg slab of W126 x L590 x H372 (mm) containing 3 legs looking downwards along the flight path, consuming a maximum of 10 W and 12 W respectively. The sensors are: 1) ETPEDP, measuring 16-bit floating potentials for an electron temperature range of 1000 K to 3000 K and 24-bit electron current for an electron density range of 10^10 to 10^12 /cm^3; 2) PMTomo, counting airglow photons up to 24 bits after a prescaler of 10, with a dwell time of 30 ns, an effective maximum capacity of 3.3e7 counts/s, and an efficiency of 3e5 counts/s/pW; 3) 3D magnetometers, an internal MRM and an external MRMx behind the ETPEDP probe, measuring +/-2 Gauss at 16-bit and 24-bit resolution respectively; and 4) G_G, measuring 3D acceleration of +/-2 g and gyroscopic rate of +/-2000 deg/s in 16 bits. In the nominal mode, SWIP interleaves the sensor module data sampling triggers for minimum mutual interference at a 1 Hz cycle of two 0.5 s phases; sampled once in each phase are MRM, MRMx, and ETPEDP (in ETP and EDP mode alternately). PMTomo and G_G are also sampled once per 1 Hz cycle. The resulting batch data throughput is 1.24 MB/hr. In addition, a magnetometer burst mode of 40 Hz or 100 Hz can be turned on by command, with an additional throughput of 2.6 MB/hr or 6.3 MB/hr respectively, for special event profiling.
Zhou, Xian; Zhong, Kangping; Gao, Yuliang; Sui, Qi; Dong, Zhenghua; Yuan, Jinhui; Wang, Liang; Long, Keping; Lau, Alan Pak Tao; Lu, Chao
2015-04-06
Discrete multi-tone (DMT) modulation is an attractive modulation format for short-reach applications to make the best use of the available channel bandwidth and signal-to-noise ratio (SNR). In order to realize polarization-multiplexed DMT modulation with direct detection, we derive an analytical transmission model for dual polarizations with intensity modulation and direct detection (IM-DD) in this paper. Based on the model, we propose a novel polarization-interleave-multiplexed DMT modulation with direct detection (PIM-DMT-DD) transmission system, where polarization de-multiplexing can be achieved using a simple multiple-input-multiple-output (MIMO) equalizer and the transmission performance is optimized over two distinct received polarization states to eliminate the singularity issue of MIMO demultiplexing algorithms. The feasibility and effectiveness of the proposed PIM-DMT-DD system are investigated via theoretical analyses and simulation studies.
Batshon, Hussam G; Djordjevic, Ivan; Xu, Lei; Wang, Ting
2010-06-21
In this paper, we present a modified coded hybrid subcarrier/amplitude/phase/polarization (H-SAPP) modulation scheme as a technique capable of achieving beyond 400 Gb/s single-channel transmission over optical channels. The modified H-SAPP scheme profits from the available resources in addition to geometry to increase the bandwidth efficiency of the transmission system, and so increases the aggregate rate of the system. In this report we present the modified H-SAPP scheme and focus on an example that allows 11 bits/symbol and can achieve 440 Gb/s transmission using components of 50 Giga Symbol/s (GS/s).
Decentralized Interleaving of Paralleled Dc-Dc Buck Converters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brian B; Rodriguez, Miguel; Sinha, Mohit
We present a decentralized control strategy that yields switch interleaving among parallel-connected dc-dc buck converters. The proposed method is based on the digital implementation of the dynamics of a nonlinear oscillator circuit as the controller. Each controller is fully decentralized, i.e., it only requires the locally measured output current to synthesize the pulse width modulation (PWM) carrier waveform and no communication between different controllers is needed. By virtue of the intrinsic electrical coupling between converters, the nonlinear oscillator-based controllers converge to an interleaved state with uniform phase-spacing across PWM carriers. To the knowledge of the authors, this work presents the first fully decentralized strategy for switch interleaving in paralleled dc-dc buck converters.
Waveguide-type optical circuits for recognition of optical 8QAM-coded label
NASA Astrophysics Data System (ADS)
Surenkhorol, Tumendemberel; Kishikawa, Hiroki; Goto, Nobuo; Gonchigsumlaa, Khishigjargal
2017-10-01
Optical signal processing is expected to be applied in network nodes. In photonic routers, label recognition is one of the important functions. We have studied different kinds of label recognition methods so far for on-off keying, binary phase-shift keying, quadrature phase-shift keying, and 16 quadrature amplitude modulation-coded labels. We propose a method based on waveguide circuits to recognize an optical eight quadrature amplitude modulation (8QAM)-coded label by simple passive optical signal processing. The recognition performance of the proposed method is theoretically analyzed and numerically simulated by the finite difference beam propagation method. The noise tolerance is discussed, and the bit-error rate against optical signal-to-noise ratio is evaluated. The scalability of the proposed method is also discussed theoretically for two-symbol length 8QAM-coded labels.
NASA Technical Reports Server (NTRS)
Rodriguez, R. M.
1975-01-01
The Balloon-Borne Ultraviolet Stellar Spectrometer (BUSS) Science Data Decommutation Program (BAPS48) is a pulse code modulation decommutation program that formats the BUSS science data contained on a one-inch PCM tracking tape into a formatted serial bit stream on a seven-track digital tape.
Design the RS(255,239) encoder and interleaving in the space laser communication system
NASA Astrophysics Data System (ADS)
Lang, Yue; Tong, Shou-feng
2013-08-01
Space laser communication is being researched by more and more countries, since laser links between satellites, or between a satellite and the ground, offer higher transmission speed and better transmission quality. In a space laser communication system, however, reliability is influenced by many factors, including the atmosphere, detector noise, and optical platform jitter. Because the signal is attenuated over the long transmission distance, a higher signal intensity would otherwise be required to achieve a low bit error rate (BER). Channel coding technology can enhance the anti-interference ability of the system: redundancy is added to the information bits so that the receiver can use the check polynomial to correct errors at the sink port, giving the system a low BER and a coding gain. The Reed-Solomon (RS) code is one such channel code; it is a nonbinary BCH code that can correct both burst errors and random errors and is widely used in error-control schemes. A new FPGA-based implementation of the RS encoder and interleaver is proposed, aiming to satisfy the needs of the widely used error-control technology in the space laser communication field. An improved Galois-field multiplier for the encoder is proposed that is well suited to FPGA implementation; compared with the original design, the number of XOR gates is reduced by more than 40%. A new FPGA interleaving structure is also given. By controlling the encoder's input and output data streams, asynchronous processing of the whole frame is accomplished, while a multi-level pipeline achieves real-time data transfer. By controlling the read and write addresses of the block RAM, interleaving of arbitrary depth is implemented synchronously. Compared with the usual method, this reduces the complexity of the channel encoder and the hardware requirements effectively.
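As a minimal illustration of the row-column (block) interleaving principle that pairs naturally with an RS(255,239) outer code, the sketch below writes codewords as rows and reads them out by columns; the depth and toy codewords are illustrative, and the paper's FPGA address-control logic is not modeled.

```python
def interleave(codewords):
    """Write codewords as rows, read the block out column by column."""
    depth, n = len(codewords), len(codewords[0])
    return [codewords[r][c] for c in range(n) for r in range(depth)]

def deinterleave(stream, depth):
    n = len(stream) // depth
    rows = [[None] * n for _ in range(depth)]
    for idx, byte in enumerate(stream):
        rows[idx % depth][idx // depth] = byte
    return rows

cws = [[r * 10 + c for c in range(5)] for r in range(3)]   # 3 toy "codewords"
tx = interleave(cws)
print(tx)                            # burst errors in tx are spread over codewords
print(deinterleave(tx, 3) == cws)    # True
```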
Algorithms for high-speed universal noiseless coding
NASA Technical Reports Server (NTRS)
Rice, Robert F.; Yeh, Pen-Shu; Miller, Warner
1993-01-01
This paper provides the basic algorithmic definitions and performance characterizations for a high-performance adaptive noiseless (lossless) 'coding module' which is currently under separate development as single-chip microelectronic circuits at two NASA centers. Laboratory tests of one of these implementations recently demonstrated coding rates of up to 900 Mbits/s. A companion 'decoding module' can operate at up to half the coder's rate. The functionality provided by these modules should be applicable to most of NASA's science data. The hardware modules incorporate a powerful adaptive noiseless coder for 'standard form' data sources (i.e., sources whose symbols can be represented by uncorrelated nonnegative integers where the smaller integers are more likely than the larger ones). Performance close to data entropies can be expected over a 'dynamic range' of from 1.5 to 12-15 bits/sample (depending on the implementation). This is accomplished by adaptively choosing the best of many Huffman equivalent codes to use on each block of 1-16 samples. Because of the extreme simplicity of these codes no table lookups are actually required in an implementation, thus leading to the expected very high data rate capabilities already noted.
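A hedged sketch of the adaptive selection step: for each small block of non-negative samples, pick the Rice/Golomb parameter k that minimizes the encoded length. The code-length formula, the block size, and the function names are generic illustrations, not the flight chip's exact option set.

```python
def rice_length(value, k):
    # Unary quotient + stop bit + k remainder bits
    return (value >> k) + 1 + k

def best_k(block, k_max=14):
    costs = {k: sum(rice_length(v, k) for v in block) for k in range(k_max + 1)}
    return min(costs, key=costs.get)

def encode_value(v, k):
    quotient, remainder = v >> k, v & ((1 << k) - 1)
    rem_bits = format(remainder, f"0{k}b") if k > 0 else ""
    return "1" * quotient + "0" + rem_bits

def encode_block(block):
    k = best_k(block)
    return k, "".join(encode_value(v, k) for v in block)

block = [3, 0, 5, 2, 1, 7, 4, 2]
k, bits = encode_block(block)
print(k, bits, len(bits))              # chosen k and the 26-bit encoding
```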
A Pulse Code Modulated Fiber Optic Link Design for Quinault Under-Water Tracking Range.
1980-09-01
invented and patented a light-wave communications device, the Photophone. The light beam was acoustically modulated, transmitted through the atmosphere and...a load resistor or feedback resistor. This voltage can be calculated by multiplying the received power, the responsivity and the effective load...frequency is not real critical since the clock, in effect, is synchronized after every eight bits by the timing pulse. The more interesting part of the
Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel
NASA Technical Reports Server (NTRS)
Shalkhauser, Mary JO; Whyte, Wayne A.
1991-01-01
Advances in very large scale integration and recent work in the field of bandwidth efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost-competitive for broadcast quality television transmission. A hardware implementation was developed for a DPCM (differential pulse code modulation)-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the codec are described, and performance results are provided.
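As a toy illustration of the DPCM principle underlying the codec (previous-sample predictor plus uniform quantizer), the sketch below encodes and decodes a short scan line; the step size and predictor are simplifications, not the NASA algorithm.

```python
def dpcm_encode(samples, step):
    pred, codes = 0, []
    for x in samples:
        q = int(round((x - pred) / step))   # quantized prediction error
        codes.append(q)
        pred = pred + q * step              # decoder-matched reconstruction
    return codes

def dpcm_decode(codes, step):
    pred, out = 0, []
    for q in codes:
        pred = pred + q * step
        out.append(pred)
    return out

line = [100, 104, 109, 111, 110, 108]       # toy scan-line luminance values
codes = dpcm_encode(line, step=2)
print(codes, dpcm_decode(codes, step=2))    # reconstruction within +/- 1
```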
Demonstration of the CDMA-mode CAOS smart camera.
Riza, Nabeel A; Mazhar, Mohsin A
2017-12-11
Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled factor-of-200 optical attenuation of the scene irradiance to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, this CMOS sensor image data is used to acquire a more robust, un-attenuated true target image of a focused zone using the time-modulated CDMA-mode of the CAOS camera. Using four different bright light test target scenes, a proof-of-concept visible band CAOS smart camera is successfully demonstrated operating in the CDMA-mode using up to 4096-bit-length Walsh-design CAOS pixel codes with a maximum 10 KHz code bit rate, giving a 0.4096 second CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one square micro-mirror pixel of 13.68 μm side. The CDMA-mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for un-attenuated, un-spoiled, bright, spectrally diverse targets.
RETRACTED — PMD mitigation through interleaving LDPC codes with polarization scramblers
NASA Astrophysics Data System (ADS)
Han, Dahai; Chen, Haoran; Xi, Lixia
2012-11-01
The combination of forward error correction (FEC) and distributed fast polarization scramblers (D-FPSs) has been shown to be an effective method to mitigate polarization mode dispersion (PMD) in high-speed optical fiber communication systems. Low-density parity-check (LDPC) codes are newly introduced into the PMD mitigation scheme with D-FPSs in this paper as one of the promising FEC codes to achieve better performance. The scrambling speed of the FPS for an LDPC (2040, 1903)-coded system is discussed, and a reasonable speed of 10 MHz is obtained from the simulation results. For easy application in practical large-scale integrated (LSI) circuits, the number of iterations in decoding the LDPC codes is also investigated. The PMD tolerance and cut-off optical signal-to-noise ratio (OSNR) of the LDPC codes are compared with Reed-Solomon (RS) codes under different conditions. In the simulation, interleaving the LDPC codes brings an incremental improvement in error correction, and the PMD tolerance is 10 ps at OSNR = 11.4 dB. The results show that LDPC codes are a viable substitute for traditional RS codes with D-FPSs, and all of the executable code files are open to researchers who have a practical LSI platform for PMD mitigation.
PMD mitigation through interleaving LDPC codes with polarization scramblers
NASA Astrophysics Data System (ADS)
Han, Dahai; Chen, Haoran; Xi, Lixia
2013-09-01
The combination of forward error correction (FEC) and distributed fast polarization scramblers (D-FPSs) has been shown to be an effective method to mitigate polarization mode dispersion (PMD) in high-speed optical fiber communication systems. Low-density parity-check (LDPC) codes are newly introduced into the PMD mitigation scheme with D-FPSs in this article as one of the promising FEC codes to achieve better performance. The scrambling speed of the FPS for an LDPC (2040, 1903)-coded system is discussed, and a reasonable speed of 10 MHz is obtained from the simulation results. For easy application in practical large-scale integrated (LSI) circuits, the number of iterations in decoding the LDPC codes is also investigated. The PMD tolerance and cut-off optical signal-to-noise ratio (OSNR) of the LDPC codes are compared with Reed-Solomon (RS) codes under different conditions. In the simulation, interleaving the LDPC codes brings an incremental improvement in error correction, and the PMD tolerance is 10 ps at OSNR = 11.4 dB. The results show that LDPC codes are a viable substitute for traditional RS codes with D-FPSs, and all of the executable code files are open to researchers who have a practical LSI platform for PMD mitigation.
Zhang, Fangzheng; Ge, Xiaozhong; Gao, Bindong; Pan, Shilong
2015-08-24
A novel scheme for photonic generation of a phase-coded microwave signal is proposed, and its application in one-dimensional distance measurement is demonstrated. The proposed signal generator has a simple and compact structure based on a single dual-polarization modulator. In addition, the generated phase-coded signal is stable and free from DC and low-frequency backgrounds. An experiment is carried out. A 2 Gb/s phase-coded signal at 20 GHz is successfully generated, and the recovered phase information agrees well with the input 13-bit Barker code. To further investigate the performance of the proposed signal generator, its application in one-dimensional distance measurement is demonstrated. The measurement error is less than 1.7 cm over a measurement range of ~2 m. The experimental results verify the feasibility of the proposed phase-coded microwave signal generator and provide strong evidence for its practical applications.
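The range-finding step rests on correlating the received waveform against the known 13-bit Barker phase code. The following sketch shows that matched-filter idea at baseband with a made-up sample rate, delay, and noise level; it is not a model of the photonic generator itself.

```python
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])  # +/-1 = 0/pi phase

chips_per_bit = 4                                   # oversampling factor (assumed)
tx = np.repeat(barker13, chips_per_bit)             # transmitted phase sequence
true_delay = 17                                     # round-trip delay in samples (made up)
rx = np.concatenate([np.zeros(true_delay), tx, np.zeros(30)])
rx += 0.2 * np.random.randn(rx.size)                # additive receiver noise

corr = np.correlate(rx, tx, mode="valid")           # matched filter against the known code
est_delay = int(np.argmax(corr))                    # correlation peak locates the echo
print(est_delay, true_delay)
```

The sharp autocorrelation of the Barker code (peak-to-sidelobe ratio of 13) is what allows centimeter-scale timing resolution once the chip rate is high enough.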
JPEG 2000 Encoding with Perceptual Distortion Control
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Liu, Zhen; Karam, Lina J.
2008-01-01
An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is prerequisite to a meaningful summary of JPEG 2000 encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion-optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes (except for the MSB plane, which is coded using only one "clean-up" coding pass). For M bit planes, this subprocess involves a total of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in the bit rate associated with each coding pass is collected. This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.
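A small numeric check of the tier-1 pass count quoted above (3M - 2 passes for M bit planes) can be written directly; the 4x4 block below is made-up data used only to show the bit-plane decomposition.

```python
import numpy as np

block = np.array([[ 5,  9,  0,  3],
                  [12,  7,  1,  8],
                  [ 2, 14,  6, 11],
                  [ 4, 10, 13, 15]], dtype=np.uint8)   # hypothetical coefficient block

M = int(block.max()).bit_length()                       # number of significant bit planes
planes = [(block >> b) & 1 for b in range(M - 1, -1, -1)]  # MSB plane first
print("bit planes:", M, "tier-1 coding passes:", 3 * M - 2)
print("MSB plane:\n", planes[0])
```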
High-dimensional free-space optical communications based on orbital angular momentum coding
NASA Astrophysics Data System (ADS)
Zou, Li; Gu, Xiaofan; Wang, Le
2018-03-01
In this paper, we propose a high-dimensional free-space optical communication scheme using orbital angular momentum (OAM) coding. In the scheme, the transmitter encodes N bits of information by using a spatial light modulator to convert a Gaussian beam into a superposition of N OAM modes and a Gaussian mode; the receiver decodes the information through an OAM mode analyser which consists of an MZ interferometer with a rotating Dove prism, a photoelectric detector and a computer carrying out the fast Fourier transform. The scheme realizes high-dimensional free-space optical communication and decodes the information quickly and accurately. We have verified the feasibility of the scheme by exploiting 8 (4) OAM modes and a Gaussian mode to implement 256-ary (16-ary) coded free-space optical communication and transmit a 256-gray-scale (16-gray-scale) picture. The results show that zero-bit-error-rate performance is achieved.
NASA Astrophysics Data System (ADS)
Nekuchaev, A. O.; Shuteev, S. A.
2014-04-01
A new method of data transmission in DWDM systems along existing long-distance fiber-optic communication lines is proposed. The existing method, for example, uses 32 wavelengths in the NRZ code with an average power of 16 conventional units (16 ones and 16 zeros on average) and a transmission rate of 32 bits/cycle. In the new method, at every 1/16 of a cycle one of 124 wavelengths is transmitted, each with a duration of one cycle (so that at any time instant no more than 16 distinct wavelengths are present) and a capacity of 4 bits, giving an average power of 15 conventional units and a rate of 64 bits/cycle. Cross-modulation and double Rayleigh scattering are significantly decreased owing to the uniform distribution of power over time at different wavelengths. The time redundancy (forward error correction (FEC)) is about 7% and allows a coding gain of about 6 dB to be achieved by detecting and removing erasures and errors simultaneously.
Kozak, M; Karaman, M
2001-07-01
Digital beamforming based on oversampled delta-sigma (ΔΣ) analog-to-digital (A/D) conversion can reduce the overall cost, size, and power consumption of phased array front-end processing. The signal resampling involved in dynamic ΔΣ beamforming, however, disrupts synchronization between the modulators and demodulator, causing significant degradation in the signal-to-noise ratio. As a solution to this, we have explored a new digital beamforming approach based on non-uniform oversampling ΔΣ A/D conversion. Using this approach, the echo signals received by the transducer array are sampled at time instants determined by the beamforming timing and then digitized by single-bit ΔΣ A/D conversion prior to the coherent beam summation. The timing information involves a non-uniform sampling scheme employing different clocks at each array channel. The ΔΣ-coded beamsums obtained by adding the delayed 1-bit coded RF echo signals are then processed through a decimation filter to produce final beamforming outputs. The performance and validity of the proposed beamforming approach are assessed by means of emulations using experimental raw RF data.
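The single-bit coding stage can be illustrated with a first-order ΔΣ modulator followed by a simple decimation (low-pass) filter. The sketch below is a generic textbook modulator with a made-up oversampling ratio and test signal, not the authors' non-uniform-sampling beamformer.

```python
import numpy as np

def delta_sigma_1bit(x):
    """First-order single-bit modulator: returns a +/-1 stream tracking x (|x| <= 1)."""
    v, y_prev = 0.0, 0.0
    bits = np.empty_like(x)
    for n, sample in enumerate(x):
        v += sample - y_prev              # integrate the quantization error
        y_prev = 1.0 if v >= 0 else -1.0  # 1-bit quantizer with feedback
        bits[n] = y_prev
    return bits

osr = 64                                            # oversampling ratio (assumed)
t = np.arange(osr * 256)
x = 0.6 * np.sin(2 * np.pi * t / (osr * 16))        # slow test tone (made up)
bits = delta_sigma_1bit(x)
recon = np.convolve(bits, np.ones(osr) / osr, mode="same")  # crude decimation filter
print(float(np.mean((recon - x) ** 2)))             # rough reconstruction error stays small
```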
Novel space-time trellis codes for free-space optical communications using transmit laser selection.
García-Zambrana, Antonio; Boluda-Ruiz, Rubén; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz
2015-09-21
In this paper, the deployment of novel space-time trellis codes (STTCs) with transmit laser selection (TLS) for free-space optical (FSO) communication systems using intensity modulation and direct detection (IM/DD) over atmospheric turbulence and misalignment fading channels is presented. Combining TLS and STTC with rate 1 bit/(s · Hz), a new code design criterion based on the use of the largest order statistics is proposed for multiple-input/single-output (MISO) FSO systems in order to improve the diversity order gain by properly choosing the transmit lasers out of the available L lasers. Based on a pairwise error probability (PEP) analysis, closed-form asymptotic bit error-rate (BER) expressions in the range from low to high signal-to-noise ratio (SNR) are derived when the irradiance of the transmitted optical beam is susceptible to moderate-to-strong turbulence conditions, following a gamma-gamma (GG) distribution, and pointing error effects, following a misalignment fading model where the effect of beam width, detector size and jitter variance is considered. The obtained results show diversity orders of 2L and 3L when simple two-state and four-state STTCs are considered, respectively. Simulation results further confirm the analytical results.
Kumaravel, Rasadurai; Narayanaswamy, Kumaratharan
2015-01-01
The multi-carrier code division multiple access (MC-CDMA) system is a promising multi-carrier modulation (MCM) technique for high-data-rate wireless communication over frequency-selective fading channels. The MC-CDMA system is a combination of code division multiple access (CDMA) and orthogonal frequency division multiplexing (OFDM). The OFDM part reduces multipath fading and inter-symbol interference (ISI), and the CDMA part increases spectrum utilization. Advantages of this technique are its robustness to multipath propagation and improved security with minimized ISI. Nevertheless, due to the loss of orthogonality at the receiver in a mobile environment, multiple access interference (MAI) appears. MAI is one of the factors that degrade the bit error rate (BER) performance of the MC-CDMA system. Multiuser detection (MUD) and turbo coding are the two dominant techniques for enhancing the performance of MC-CDMA systems in terms of BER, as solutions to overcome MAI effects. In this paper, a low-complexity iterative soft sensitive-bits algorithm (SBA) aided logarithmic maximum a posteriori (Log-MAP) turbo MUD is proposed. Simulation results show that the proposed method provides better BER performance with low-complexity decoding by mitigating the detrimental effects of MAI. PMID:25714917
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)
2008-01-01
An apparatus and method for encoding low-density parity check codes. Together with a repeater, an interleaver and an accumulator, the apparatus comprises a precoder, thus forming accumulate-repeat-accumulate (ARA) codes. Protographs representing various types of ARA codes, including AR3A, AR4A and ARJA codes, are described. High performance is obtained when compared to the performance of current repeat-accumulate (RA) or irregular repeat-accumulate (IRA) codes.
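The repeat / interleave / accumulate chain that underlies RA-type codes can be sketched compactly. The fragment below is schematic only: the patented ARA constructions (AR3A, AR4A, ARJA) add a precoder and specific protograph structure and puncturing that are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def accumulate(bits):
    """Rate-1 two-state accumulator: running XOR (mod-2 prefix sum)."""
    return np.cumsum(bits) % 2

def ra_encode(info, repeat=3):
    """Generic repeat-accumulate chain (illustrative, not the ARA protograph)."""
    repeated = np.repeat(info, repeat)                       # outer repetition code
    perm = rng.permutation(repeated.size)                    # interleaver (a real code fixes one)
    return accumulate(repeated[perm])                        # inner accumulator

info = rng.integers(0, 2, size=8)                            # made-up information bits
codeword = ra_encode(info)
print(info)
print(codeword)
```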
Adaptive image coding based on cubic-spline interpolation
NASA Astrophysics Data System (ADS)
Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien
2014-09-01
It has been shown that, at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate at which the sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The proposed algorithm adaptively selects between a CSI-based modified JPEG and standard JPEG under a given target bit rate, utilizing the so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.
Iterative channel decoding of FEC-based multiple-description codes.
Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B
2012-03-01
Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.
NASA Technical Reports Server (NTRS)
Lin, Shu (Principal Investigator); Uehara, Gregory T.; Nakamura, Eric; Chu, Cecilia W. P.
1996-01-01
The (64, 40, 8) subcode of the third-order Reed-Muller (RM) code for high-speed satellite communications is proposed. The RM subcode can be used either alone or as an inner code of a concatenated coding system with the NASA standard (255, 223, 33) Reed-Solomon (RS) code as the outer code to achieve high performance (or low bit-error rate) with reduced decoding complexity. It can also be used as a component code in a multilevel bandwidth-efficient coded modulation system to achieve reliable bandwidth-efficient data transmission. The progress made toward achieving the goal of implementing a decoder system based upon this code is summarized. The development of the integrated circuit prototype sub-trellis IC, particularly focusing on the design methodology, is addressed.
Link performance optimization for digital satellite broadcasting systems
NASA Astrophysics Data System (ADS)
de Gaudenzi, R.; Elia, C.; Viola, R.
The authors introduce the concept of digital direct satellite broadcasting (D-DBS), which allows unprecedented flexibility by providing a large number of audiovisual services. The concept assumes an information rate of 40 Mb/s, which is compatible with practically all present-day transponders. After discussion of the general system concept, the results of transmission system optimization are presented. Channel and interference effects are taken into account. Numerical results show that the scheme with the best performance is trellis-coded 8-PSK (phase shift keying) modulation concatenated with a Reed-Solomon block code. For a net data rate of 40 Mb/s, a bit error rate of 10(exp -10) can be achieved with an equivalent bit energy-to-noise density ratio of 9.5 dB, including channel, interference, and demodulator impairments. A link budget analysis shows how a medium-power direct-to-home TV satellite can provide multimedia services to users equipped with small (60-cm) dish antennas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rey, D.; Ryan, W.; Ross, M.
A method for more efficiently utilizing the frequency bandwidth allocated for data transmission is presented. Current space and range communication systems use modulation and coding schemes that transmit 0.5 to 1.0 bits per second per Hertz of radio frequency bandwidth. The goal in this LDRD project is to increase the bandwidth utilization by employing advanced digital communications techniques. This is done with little or no increase in the transmit power, which is usually very limited on airborne systems. Teaming with New Mexico State University, an implementation of trellis coded modulation (TCM), a coding and modulation scheme pioneered by Ungerboeck, was developed for this application and simulated on a computer. TCM provides a means for reliably transmitting data while simultaneously increasing bandwidth efficiency. The penalty is increased receiver complexity. In particular, the trellis decoder requires high-speed, application-specific digital signal processing (DSP) chips. A system solution based on the QualComm Viterbi decoder and the Graychip DSP receiver chips is presented.
A novel bit-wise adaptable entropy coding technique
NASA Technical Reports Server (NTRS)
Kiely, A.; Klimesh, M.
2001-01-01
We present a novel entropy coding technique which is adaptable in that each bit to be encoded may have an associated probability estimate which depends on previously encoded bits. The technique may have advantages over arithmetic coding. The technique can achieve arbitrarily small redundancy and admits a simple and fast decoder.
Lossless compression of AVIRIS data: Comparison of methods and instrument constraints
NASA Technical Reports Server (NTRS)
Roger, R. E.; Arnold, J. F.; Cavenor, M. C.; Richards, J. A.
1992-01-01
A family of lossless compression methods, allowing exact image reconstruction, are evaluated for compressing Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) image data. The methods are based on Differential Pulse Code Modulation (DPCM). The compressed data have an entropy of order 6 bits/pixel. A theoretical model indicates that significantly better lossless compression is unlikely to be achieved because of limits caused by the noise in the AVIRIS channels. AVIRIS data differ from data produced by other visible/near-infrared sensors, such as LANDSAT-TM or SPOT, in several ways. Firstly, the data are recorded at a greater resolution (12 bits, though packed into 16-bit words). Secondly, the spectral channels are relatively narrow and provide continuous coverage of the spectrum, so that the data in adjacent channels are generally highly correlated. Thirdly, the noise characteristics of the AVIRIS are defined by the channels' Noise Equivalent Radiances (NERs), and these NERs show that, at some wavelengths, the least significant 5 or 6 bits of data are essentially noise.
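A quick way to see why DPCM helps with such highly correlated channels is to compare the first-order entropy of a smooth signal with that of its previous-sample prediction residuals. The synthetic "spectrum" below is made up and only illustrates the effect; it is not AVIRIS data.

```python
import numpy as np

def entropy_bits(values):
    """First-order (empirical) entropy in bits per sample."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
spectrum = np.cumsum(rng.integers(-40, 41, size=4096)) + 2048   # smooth, correlated samples
spectrum = np.clip(spectrum, 0, 4095)                           # keep within a 12-bit range

residuals = np.diff(spectrum, prepend=spectrum[0])              # previous-sample DPCM predictor
print(entropy_bits(spectrum), entropy_bits(residuals))          # residual entropy is far lower
```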
Note: optical receiver system for 152-channel magnetoencephalography.
Kim, Jin-Mok; Kwon, Hyukchan; Yu, Kwon-kyu; Lee, Yong-Ho; Kim, Kiwoong
2014-11-01
An optical receiver system comprising 13 serial-data restore/synchronizer modules and a single module combiner converted optical 32-bit serial data into 32-bit synchronous parallel data for a computer to acquire 152-channel magnetoencephalography (MEG) signals. Each serial-data restore/synchronizer module identified the 32 channel-voltage bits within the 48-bit streaming serial data and then consecutively reproduced the 32-bit serial data 13 times, operating on a synchronous clock. After selecting a single one of the 13 reproduced data streams in each module, the module combiner converted it into 32-bit parallel data, which were carried to a 32-port digital input board in a computer. When the receiver system, together with the optical transmitters, was applied to 152-channel superconducting quantum interference device sensors, the MEG system maintained a field noise level of 3 fT/√Hz @ 100 Hz at a sample rate of 1 kSample/s per channel.
Iterative Demodulation and Decoding of Non-Square QAM
NASA Technical Reports Server (NTRS)
Li, Lifang; Divsalar, Dariush; Dolinar, Samuel
2004-01-01
It has been shown that a non-square (NS) 2(sup 2n+1)-ary (where n is a positive integer) quadrature amplitude modulation [(NS)2(sup 2n+1)-QAM] has inherent memory that can be exploited to obtain coding gains. Moreover, it should not be necessary to build new hardware to realize these gains. The present scheme is a product of theoretical calculations directed toward reducing the computational complexity of decoding coded 2(sup 2n+1)-QAM. In the general case of 2(sup 2n+1)-QAM, the signal constellation is not square and it is impossible to have independent in-phase (I) and quadrature-phase (Q) mapping and demapping. However, independent I and Q mapping and demapping are desirable for reducing the complexity of computing the log likelihood ratio (LLR) between a bit and a received symbol (such computations are essential operations in iterative decoding). This is because in modulation schemes that include independent I and Q mapping and demapping, each bit of a signal point is involved in only one-dimensional mapping and demapping. As a result, the computation of the LLR is equivalent to that of a one-dimensional pulse amplitude modulation (PAM) system. Therefore, it is desirable to find a signal constellation that enables independent I and Q mapping and demapping for 2(sup 2n+1)-QAM.
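With independent I and Q mapping, each received I (or Q) sample yields bit LLRs from a one-dimensional PAM computation, as the abstract notes. The sketch below shows that per-dimension LLR calculation for a Gray-mapped 4-PAM alphabet; the labels, received value, and noise variance are illustrative assumptions, not the constellation proposed in the paper.

```python
import numpy as np

levels = np.array([-3.0, -1.0, 1.0, 3.0])             # 4-PAM amplitudes (one dimension)
labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])   # Gray mapping, 2 bits per level

def pam_bit_llrs(y, sigma2):
    """LLR(bit) = log P(bit=0 | y) - log P(bit=1 | y) for one received PAM sample."""
    metrics = np.exp(-(y - levels) ** 2 / (2 * sigma2))   # unnormalized Gaussian likelihoods
    llrs = []
    for b in range(labels.shape[1]):
        p0 = metrics[labels[:, b] == 0].sum()
        p1 = metrics[labels[:, b] == 1].sum()
        llrs.append(np.log(p0) - np.log(p1))
    return np.array(llrs)

print(pam_bit_llrs(y=0.8, sigma2=0.5))   # soft bit metrics fed to an iterative decoder
```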
A source-channel coding approach to digital image protection and self-recovery.
Sarreshtedari, Saeed; Akhaee, Mohammad Ali
2015-07-01
Watermarking algorithms have been widely applied to the field of image forensics recently. One such forensic application is the protection of images against tampering. For this purpose, we need to design a watermarking algorithm fulfilling two purposes in case of image tampering: 1) detecting the tampered area of the received image and 2) recovering the lost information in the tampered zones. State-of-the-art techniques accomplish these tasks using watermarks consisting of check bits and reference bits. Check bits are used for tampering detection, whereas reference bits carry information about the whole image. The problem of recovering the lost reference bits still stands. This paper shows that, with the tampering location known, image tampering can be modeled and dealt with as an erasure error. Therefore, an appropriate design of channel code can protect the reference bits against tampering. In the proposed method, the total watermark bit-budget is dedicated to three groups: 1) source encoder output bits; 2) channel code parity bits; and 3) check bits. In the watermark embedding phase, the original image is source coded and the output bit stream is protected using an appropriate channel encoder. For image recovery, erasure locations detected by the check bits help the channel erasure decoder to retrieve the original source-encoded image. Experimental results show that our proposed scheme significantly outperforms recent techniques in terms of image quality for both the watermarked and the recovered images. The watermarked image quality gain is achieved by spending less bit-budget on the watermark, while the image recovery quality is considerably improved as a consequence of the consistent performance of the designed source and channel codes.
A comparison of fitness-case sampling methods for genetic programming
NASA Astrophysics Data System (ADS)
Martínez, Yuliana; Naredo, Enrique; Trujillo, Leonardo; Legrand, Pierrick; López, Uriel
2017-11-01
Genetic programming (GP) is an evolutionary computation paradigm for automatic program induction. GP has produced impressive results but it still needs to overcome some practical limitations, particularly its high computational cost, overfitting and excessive code growth. Recently, many researchers have proposed fitness-case sampling methods to overcome some of these problems, with mixed results in several limited tests. This paper presents an extensive comparative study of four fitness-case sampling methods, namely: Interleaved Sampling, Random Interleaved Sampling, Lexicase Selection and Keep-Worst Interleaved Sampling. The algorithms are compared on 11 symbolic regression problems and 11 supervised classification problems, using 10 synthetic benchmarks and 12 real-world data-sets. They are evaluated based on test performance, overfitting and average program size, comparing them with a standard GP search. Comparisons are carried out using non-parametric multigroup tests and post hoc pairwise statistical tests. The experimental results suggest that fitness-case sampling methods are particularly useful for difficult real-world symbolic regression problems, improving performance, reducing overfitting and limiting code growth. On the other hand, it seems that fitness-case sampling cannot improve upon GP performance when considering supervised binary classification.
The best bits in an iris code.
Hollingsworth, Karen P; Bowyer, Kevin W; Flynn, Patrick J
2009-06-01
Iris biometric systems apply filters to iris images to extract information about iris texture. Daugman's approach maps the filter output to a binary iris code. The fractional Hamming distance between two iris codes is computed and decisions about the identity of a person are based on the computed distance. The fractional Hamming distance weights all bits in an iris code equally. However, not all the bits in an iris code are equally useful. Our research is the first to present experiments documenting that some bits are more consistent than others. Different regions of the iris are compared to evaluate their relative consistency, and contrary to some previous research, we find that the middle bands of the iris are more consistent than the inner bands. The inconsistent-bit phenomenon is evident across genders and different filter types. Possible causes of inconsistencies, such as segmentation, alignment issues, and different filters are investigated. The inconsistencies are largely due to the coarse quantization of the phase response. Masking iris code bits corresponding to complex filter responses near the axes of the complex plane improves the separation between the match and nonmatch Hamming distance distributions.
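The masked fractional Hamming distance underlying these comparisons is simple to state in code. The sketch below uses made-up codes and masks; real systems also search over rotational alignments and use operational iris-code lengths, both omitted here.

```python
import numpy as np

def fractional_hamming(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits, counted only where both masks mark bits as valid."""
    valid = mask_a & mask_b
    disagree = (code_a ^ code_b) & valid
    return disagree.sum() / valid.sum()

rng = np.random.default_rng(2)
code_a = rng.integers(0, 2, 2048, dtype=np.uint8)       # hypothetical iris code
code_b = code_a.copy()
code_b[rng.choice(2048, 200, replace=False)] ^= 1       # flip ~10% of the bits
mask = np.ones(2048, dtype=np.uint8)
mask[:256] = 0                                          # e.g. mask out an inconsistent band
print(fractional_hamming(code_a, code_b, mask, mask))   # distance computed on valid bits only
```

Masking bits known to be inconsistent, as the abstract suggests for phase responses near the complex-plane axes, simply shrinks the `valid` set before the distance is computed.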
Navier-Stokes Simulation of Homogeneous Turbulence on the CYBER 205
NASA Technical Reports Server (NTRS)
Wu, C. T.; Ferziger, J. H.; Chapman, D. R.; Rogallo, R. S.
1984-01-01
A computer code which solves the Navier-Stokes equations for three-dimensional, time-dependent, homogeneous turbulence has been written for the CYBER 205. The code has options for both 64-bit and 32-bit arithmetic. With 32-bit computation, mesh sizes up to 64(exp 3) are contained within the core of a 2-million-word (64-bit) memory. Computer speed timing runs were made for various vector lengths up to 6144. With this code, speeds a little over 100 Mflops have been achieved on a 2-pipe CYBER 205. Several problems encountered in the coding are discussed.
Bit-Wise Arithmetic Coding For Compression Of Data
NASA Technical Reports Server (NTRS)
Kiely, Aaron
1996-01-01
Bit-wise arithmetic coding is data-compression scheme intended especially for use with uniformly quantized data from source with Gaussian, Laplacian, or similar probability distribution function. Code words of fixed length, and bits treated as being independent. Scheme serves as means of progressive transmission or of overcoming buffer-overflow or rate constraint limitations sometimes arising when data compression used.
New coding advances for deep space communications
NASA Technical Reports Server (NTRS)
Yuen, Joseph H.
1987-01-01
Advances made in error-correction coding for deep space communications are described. The code believed to be the best is a (15, 1/6) convolutional code with maximum likelihood decoding; when it is concatenated with a 10-bit Reed-Solomon code, it achieves a bit error rate of 10 to the -6th at a bit SNR of 0.42 dB. This code outperforms the Voyager code by 2.11 dB. The use of source statistics in decoding convolutionally encoded Voyager images from the Uranus encounter is investigated, and it is found that a 2 dB decoding gain can be achieved.
A Concurrent Smalltalk Compiler for the Message-Driven Processor
1988-05-01
[Source-code excerpt: a Lisp macro brange that builds a bit mask with bits from low-bit (inclusive) to high-bit (exclusive) set, with low-bit defaulting to zero; the surrounding fragments concern live-variable bit maps used during instruction scheduling in the compiler.]
Probabilistic Amplitude Shaping With Hard Decision Decoding and Staircase Codes
NASA Astrophysics Data System (ADS)
Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi; Steiner, Fabian
2018-05-01
We consider probabilistic amplitude shaping (PAS) as a means of increasing the spectral efficiency of fiber-optic communication systems. In contrast to previous works in the literature, we consider probabilistic shaping with hard decision decoding (HDD). In particular, we apply the PAS recently introduced by Böcherer et al. to a coded modulation (CM) scheme with bit-wise HDD that uses a staircase code as the forward error correction code. We show that the CM scheme with PAS and staircase codes yields significant gains in spectral efficiency with respect to the baseline scheme using a staircase code and a standard constellation with uniformly distributed signal points. Using a single staircase code, the proposed scheme achieves performance within 0.57-1.44 dB of the corresponding achievable information rate for a wide range of spectral efficiencies.
A Study of a Standard BIT Circuit.
1977-02-01
[Table-of-contents excerpt: recommended built-in-test (BIT) approaches for QED modules and application of the analytic measures, including built-in-test for memory-class modules (random access memory), QED module test equipment requirements, microprocessor BIT techniques, and process-class BIT by partial methods.]
Compression performance of HEVC and its format range and screen content coding extensions
NASA Astrophysics Data System (ADS)
Li, Bin; Xu, Jizheng; Sullivan, Gary J.
2015-09-01
This paper presents a comparison-based test of the objective compression performance of the High Efficiency Video Coding (HEVC) standard, its format range extensions (RExt), and its draft screen content coding extensions (SCC). The current dominant standard, H.264/MPEG-4 AVC, is used as an anchor reference in the comparison. The conditions used for the comparison tests were designed to reflect relevant application scenarios and to enable a fair comparison to the maximum extent feasible - i.e., using comparable quantization settings, reference frame buffering, intra refresh periods, rate-distortion optimization decision processing, etc. It is noted that such PSNR-based objective comparisons generally provide more conservative estimates of HEVC benefit than are found in subjective studies. The experimental results show that, when compared with H.264/MPEG-4 AVC, HEVC version 1 provides a bit rate savings for equal PSNR of about 23% for all-intra coding, 34% for random access coding, and 38% for low-delay coding. This is consistent with prior studies and the general characterization that HEVC can provide a bit rate savings of about 50% for equal subjective quality for most applications. The HEVC format range extensions provide a similar bit rate savings of about 13-25% for all-intra coding, 28-33% for random access coding, and 32-38% for low-delay coding at different bit rate ranges. For lossy coding of screen content, the HEVC screen content coding extensions achieve a bit rate savings of about 66%, 63%, and 61% for all-intra coding, random access coding, and low-delay coding, respectively. For lossless coding, the corresponding bit rate savings are about 40%, 33%, and 32%, respectively.
Antman, Yair; Yaron, Lior; Langer, Tomi; Tur, Moshe; Levanon, Nadav; Zadok, Avi
2013-11-15
Dynamic Brillouin gratings (DBGs), inscribed by comodulating two writing pump waves with a perfect Golomb code, are demonstrated and characterized experimentally. Compared with pseudo-random bit sequence (PRBS) modulation of the pump waves, the Golomb code provides lower off-peak reflectivity due to the unique properties of its cyclic autocorrelation function. Golomb-coded DBGs allow the long variable delay of one-time probe waveforms with higher signal-to-noise ratios, and without averaging. As an example, the variable delay of return-to-zero, on-off keyed data at a 1 Gbit/s rate, by as much as 10 ns, is demonstrated successfully. The eye diagram of the reflected waveform remains open, whereas PRBS modulation of the pump waves results in a closed eye. The variable delay of data at 2.5 Gbit/s is reported as well, with a marginally open eye diagram. The experimental results are in good agreement with simulations.
Advanced coding and modulation schemes for TDRSS
NASA Technical Reports Server (NTRS)
Harrell, Linda; Kaplan, Ted; Berman, Ted; Chang, Susan
1993-01-01
This paper describes the performance of the Ungerboeck and pragmatic 8-Phase Shift Key (PSK) Trellis Code Modulation (TCM) coding techniques with and without a (255,223) Reed-Solomon outer code as they are used for Tracking Data and Relay Satellite System (TDRSS) S-Band and Ku-Band return services. The performance of these codes at high data rates is compared to uncoded Quadrature PSK (QPSK) and rate 1/2 convolutionally coded QPSK in the presence of Radio Frequency Interference (RFI), self-interference, and hardware distortions. This paper shows that the outer Reed-Solomon code is necessary to achieve a 10(exp -5) Bit Error Rate (BER) with an acceptable level of degradation in the presence of RFI. This paper also shows that the TCM codes with or without the Reed-Solomon outer code do not perform well in the presence of self-interference. In fact, the uncoded QPSK signal performs better than the TCM coded signal in the self-interference situation considered in this analysis. Finally, this paper shows that the E(sub b)/N(sub 0) degradation due to TDRSS hardware distortions is approximately 1.3 dB with a TCM coded signal or a rate 1/2 convolutionally coded QPSK signal and is 3.2 dB with an uncoded QPSK signal.
Zhang, Lu; Pang, Xiaodan; Ozolins, Oskars; Udalcovs, Aleksejs; Popov, Sergei; Xiao, Shilin; Hu, Weisheng; Chen, Jiajia
2018-04-01
We propose a spectrally efficient digitized radio-over-fiber (D-RoF) system by grouping highly correlated neighboring samples of the analog signals into multidimensional vectors, where the k-means clustering algorithm is adopted for adaptive quantization. A 30 Gbit/s D-RoF system is experimentally demonstrated to validate the proposed scheme, reporting a carrier aggregation of up to 40 100-MHz orthogonal frequency division multiplexing (OFDM) channels with a quadrature amplitude modulation (QAM) order of 4 and an aggregation of 10 100-MHz OFDM channels with a QAM order of 16384. Equivalent common public radio interface rates from 37 to 150 Gbit/s are supported. In addition, an error vector magnitude (EVM) of 8% is achieved with 4 quantization bits, and the EVM can be further reduced to 1% by increasing the number of quantization bits to 7. Compared with conventional pulse-code modulation-based D-RoF systems, the proposed D-RoF system improves the signal-to-noise ratio by up to ∼9 dB and greatly reduces the EVM, given the same number of quantization bits.
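The core quantization idea, grouping correlated neighboring samples into short vectors and replacing scalar PCM levels with k-means centroids, can be sketched as follows. The waveform, vector length, and codebook size are illustrative choices and not the paper's parameters.

```python
import numpy as np

def kmeans(vectors, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: centroids become the vector-quantizer codebook."""
    rng = np.random.default_rng(seed)
    centroids = vectors[rng.choice(len(vectors), k, replace=False)]
    for _ in range(iters):
        idx = np.argmin(((vectors[:, None, :] - centroids) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(idx == j):
                centroids[j] = vectors[idx == j].mean(axis=0)
    return centroids, idx

t = np.arange(4096)
signal = np.sin(2 * np.pi * t / 64) + 0.05 * np.random.default_rng(3).standard_normal(t.size)
vectors = signal.reshape(-1, 4)                 # 4 correlated neighboring samples per vector
codebook, idx = kmeans(vectors, k=16)           # 16 centroids -> 4 bits per 4-sample vector
reconstructed = codebook[idx].reshape(-1)
print(float(np.mean((reconstructed - signal) ** 2)))   # vector-quantization MSE
```

Because each 4-sample vector is represented by a single codebook index, the bit cost per sample drops relative to scalar PCM at the same distortion when the samples are strongly correlated.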
Mode-dependent templates and scan order for H.264/AVC-based intra lossless coding.
Gu, Zhouye; Lin, Weisi; Lee, Bu-Sung; Lau, Chiew Tong; Sun, Ming-Ting
2012-09-01
In H.264/advanced video coding (AVC), lossless coding and lossy coding share the same entropy coding module. However, the entropy coders in the H.264/AVC standard were originally designed for lossy video coding and do not yield adequate performance for lossless video coding. In this paper, we analyze the problem with the current lossless coding scheme and propose a mode-dependent template (MD-template) based method for intra lossless coding. By exploring the statistical redundancy of the prediction residual in the H.264/AVC intra prediction modes, more zero coefficients are generated. By designing a new scan order for each MD-template, the scanned coefficient sequence fits the H.264/AVC entropy coders better. A fast implementation algorithm is also designed. With little computation increase, experimental results confirm that the proposed fast algorithm achieves about 7.2% bit saving compared with the current H.264/AVC fidelity range extensions high profile.
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P(sub b) for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P(sub b) is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation, P(sub b) approximately equal to (d(sub H)/N)P(sub s), where P(sub s) represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P(sub b) when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.
Spatial domain entertainment audio decompression/compression
NASA Astrophysics Data System (ADS)
Chan, Y. K.; Tam, Ka Him K.
2014-02-01
The ARM7 NEON processor with a 128-bit SIMD hardware accelerator requires a peak performance of 13.99 megacycles per second for MP3 stereo entertainment-quality decoding. For a similar compression bit rate, OGG and AAC are preferred over MP3. The Patent Cooperation Treaty (PCT) application dated 28 August 2012 describes an audio decompression scheme producing a sequence of interleaving "min to Max" and "Max to min" rising and falling segments. The number of interior audio samples bounded by a "min to Max" or "Max to min" segment can be {0|1|…|N} audio samples. The magnitudes of the samples, including the bounding min and Max, are distributed as constants normalized between 0 and 1 relative to the bounding magnitudes. The decompressed audio is then a "sequence of static segments" on a frame-by-frame basis. Some of these frames need to be post-processed to boost high frequencies. The post-processing is compression-efficiency neutral, and the additional decoding complexity is only a small fraction of the overall decoding complexity, without the need for extra hardware. Compression efficiency can be expected to be very high, since the source audio has been decimated and converted to a set of data with only "segment length and corresponding segment magnitude" attributes. The PCT application describes how these two attributes are efficiently coded by its innovative coding scheme. Decoding efficiency is very high and decoding latency is essentially zero. Both the hardware requirement and the run time are at least an order of magnitude better than MP3 variants. A side benefit is ultra-low power consumption on mobile devices. The acid test of whether such a simplistic waveform representation can reproduce authentic decompressed quality is benchmarked against OGG (aoTuv Beta 6.03) using three pairs of stereo audio frames and one broadcast-like voice audio frame, each frame consisting of 2,028 samples at a 44,100 Hz sampling frequency.
NASA Astrophysics Data System (ADS)
Bai, Cheng-lin; Cheng, Zhi-hui
2016-09-01
In order to further improve the carrier synchronization estimation range and accuracy at low signal-to-noise ratio (SNR), this paper proposes a code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check (NB-LDPC) codes to study the polarization-division-multiplexing coherent optical orthogonal frequency division multiplexing (PDM-CO-OFDM) system performance for quadrature phase shift keying (QPSK) and 16 quadrature amplitude modulation (16-QAM) modes. The simulation results indicate that this algorithm can enlarge the frequency and phase offset estimation ranges and greatly enhance the accuracy of the system, and the bit error rate (BER) performance of the system is improved effectively compared with that of the system employing the traditional NB-LDPC code-aided carrier synchronization algorithm.
Investigating the structure preserving encryption of high efficiency video coding (HEVC)
NASA Astrophysics Data System (ADS)
Shahid, Zafar; Puech, William
2013-02-01
This paper presents a novel method for the real-time protection of the new emerging High Efficiency Video Coding (HEVC) standard. Structure-preserving selective encryption is performed in the CABAC entropy coding module of HEVC, which is significantly different from CABAC entropy coding of H.264/AVC. In CABAC of HEVC, exponential Golomb coding is replaced by truncated Rice (TR) coding up to a specific value for binarization of transform coefficients. Selective encryption is performed using the AES cipher in cipher-feedback mode on a plaintext of bin strings in a context-aware manner. The encrypted bitstream has exactly the same bit rate and is format compliant. Experimental evaluation and security analysis of the proposed algorithm are performed on several benchmark video sequences containing different combinations of motion, texture and objects.
Terahertz wave manipulation based on multi-bit coding artificial electromagnetic surfaces
NASA Astrophysics Data System (ADS)
Li, Jiu-Sheng; Zhao, Ze-Jiang; Yao, Jian-Quan
2018-05-01
A polarization-insensitive multi-bit coding artificial electromagnetic surface is proposed for terahertz wave manipulation. The coding artificial electromagnetic surfaces, composed of four-arrow-shaped particles arranged in certain coding sequences, can realize multi-bit coding at terahertz frequencies and steer the reflected terahertz waves into numerous directions by using different coding distributions. Furthermore, we demonstrate that our coding artificial electromagnetic surfaces have a strong, polarization-insensitive ability to reduce the radar cross section for TE and TM incident terahertz waves as well as linearly and circularly polarized terahertz waves. This work offers an effective strategy for more powerful manipulation of terahertz waves.
Pulse Code Modulation (PCM) data storage and analysis using a microcomputer
NASA Technical Reports Server (NTRS)
Massey, D. E.
1986-01-01
The current widespread use of microcomputers has led to the creation of some very low-cost instrumentation. A Pulse Code Modulation (PCM) storage device/data analyzer -- a peripheral plug-in board especially constructed to enable a personal computer to store and analyze data from a PCM source -- was designed and built for use on the NASA Sounding Rocket Program for PCM encoder configuration and testing. This board and custom-written software turn a computer into a snapshot PCM decommutator which will accept and store many hundreds or thousands of PCM telemetry data frames, then sift through them repeatedly. These data can be converted to any number base and displayed, examined for any bit dropouts or changes (in particular words or frames), graphically plotted, or statistically analyzed.
A zero-voltage-switched three-phase interleaved buck converter
NASA Astrophysics Data System (ADS)
Hsieh, Yao-Ching; Huang, Bing-Siang; Lin, Jing-Yuan; Pham, Phu Hieu; Chen, Po-Hao; Chiu, Huang-Jen
2018-04-01
This paper proposes a three-phase interleaved buck converter which is composed of three identical paralleled buck converters. The proposed topology has three shunt inductors connected between the three basic buck conversion units. With the help of the shunt inductors, the MOSFET parasitic capacitances resonate to achieve zero-voltage switching. Furthermore, the decreasing rate of the current through the free-wheeling diodes is limited, and therefore their reverse-recovery losses can be minimised. The active power switches are controlled by interleaved pulse-width modulation signals to reduce the input and output current ripples. Therefore, the filtering capacitances on the input and output sides can be reduced. The power efficiency is measured to be as high as 98% in experiments with a prototype circuit.
A joint source-channel distortion model for JPEG compressed images.
Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C
2006-06-01
The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-code modulation, and run-length coding is included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.
A Comparison of LBG and ADPCM Speech Compression Techniques
NASA Astrophysics Data System (ADS)
Bachu, Rajesh G.; Patel, Jignasa; Barkana, Buket D.
Speech compression is the technology of converting human speech into an efficiently encoded representation that can later be decoded to produce a close approximation of the original signal. In all speech there is a degree of predictability, and speech coding techniques exploit this to reduce bit rates while still maintaining a suitable level of quality. This paper is a study and implementation of the Linde-Buzo-Gray (LBG) and Adaptive Differential Pulse Code Modulation (ADPCM) algorithms for compressing speech signals. The methods were implemented using MATLAB 7.0. The methods used in this study gave good compression results, and listening tests showed that efficient, high-quality coding is achieved.
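A minimal ADPCM-style coder, a previous-sample predictor plus a quantizer whose step size adapts to the residual, conveys the flavor of one of the two methods compared above. The sketch below is didactic Python, not the MATLAB implementation used in the paper, and its adaptation rule and signal are assumptions.

```python
import numpy as np

def adpcm_encode(x, bits=4, step=0.02):
    """Previous-sample prediction, mid-tread quantizer, simple step-size adaptation."""
    pred, codes = 0.0, []
    for sample in x:
        resid = sample - pred
        q = int(np.clip(np.round(resid / step), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1))
        codes.append(q)
        pred += q * step                                      # decoder-matched reconstruction
        step *= 1.1 if abs(q) > 2 ** (bits - 2) else 0.95     # grow on large residuals, shrink on small
        step = float(np.clip(step, 1e-4, 0.5))
    return codes

t = np.arange(2000)
speechlike = 0.5 * np.sin(2 * np.pi * 200 * t / 8000) * np.hanning(t.size)  # toy 200 Hz tone at 8 kHz
codes = adpcm_encode(speechlike)
print(len(codes), max(codes), min(codes))    # one 4-bit code per input sample
```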
Optical domain analog to digital conversion methods and apparatus
Vawter, Gregory A
2014-05-13
Methods and apparatus for optical analog to digital conversion are disclosed. An optical signal is converted by mapping the optical analog signal onto a wavelength modulated optical beam, passing the mapped beam through interferometers to generate analog bit representation signals, and converting the analog bit representation signals into an optical digital signal. A photodiode receives an optical analog signal, a wavelength modulated laser coupled to the photodiode maps the optical analog signal to a wavelength modulated optical beam, interferometers produce an analog bit representation signal from the mapped wavelength modulated optical beam, and sample and threshold circuits corresponding to the interferometers produce a digital bit signal from the analog bit representation signal.
Bidirectional ultradense WDM for metro networks adopting the beat-frequency-locking method
NASA Astrophysics Data System (ADS)
Kim, Sang-Yuep; Lee, Jae-Hoon; Lee, Jae-Seung
2003-10-01
We present a technique to increase the spectral efficiencies of metro networks by using channel-interleaved bidirectional ultradense wavelength-division multiplexing (WDM) within each customer's optical band. As a demonstration, we transmit 12.5-GHz-spaced 8×10 Gbit/s channels achieving spectral efficiency as high as 0.8 bit/s/Hz with a 25-GHz WDM demultiplexer. The beat-frequency-locking method is used to stabilize the channel frequencies within +/-200 MHz, which is far more accurate than with conventional wavelength lockers.
Learning Short Binary Codes for Large-scale Image Retrieval.
Liu, Li; Yu, Mengyang; Shao, Ling
2017-03-01
Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms have proved to be effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., code lengths usually shorter than 100 bits) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of data, MCR can generate a one-bit binary code for each dimension and simultaneously rank the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve performance comparable to the state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.
Pneumatically Modulated Liquid Delivery System for Nebulizers
2011-12-02
The Arduino Uno is a well-established hobbyist microcontroller board, focused on ease of use and on teaching non-programmers about embedded circuits. It uses an ATmega328 microcontroller with thirteen digital TTL control lines and six 10-bit-resolution 0-5 V analog inputs. Source code for the Arduino Uno microcontroller is provided in an appendix.
Kang, Zhe; Yuan, Jinhui; Zhang, Xianting; Wu, Qiang; Sang, Xinzhu; Farrell, Gerald; Yu, Chongxiu; Li, Feng; Tam, Hwa Yaw; Wai, P. K. A.
2014-01-01
All-optical analog-to-digital converters based on third-order nonlinear effects in silicon waveguides are a promising candidate to overcome the limitations of electronic devices and are suitable for photonic integration. In this paper, a 2-bit optical spectral quantization scheme for on-chip all-optical analog-to-digital conversion is proposed. The proposed scheme is realized by filtering the broadened and split spectrum induced by the self-phase modulation effect in a silicon horizontal slot waveguide filled with silicon nanocrystals. A nonlinear coefficient as high as 8708 W(-1)/m is obtained because of the tight mode confinement of the horizontal slot waveguide and the high nonlinear refractive index of the silicon nanocrystals, which provides the enhanced nonlinear interaction and accordingly a low power threshold. The results show that a required input peak power level of less than 0.4 W can be achieved, along with an effective number of bits of 1.98 and Gray-code output. The proposed scheme can find important applications in on-chip all-optical digital signal processing systems. PMID:25417847
Link Performance Analysis and monitoring - A unified approach to divergent requirements
NASA Astrophysics Data System (ADS)
Thom, G. A.
Link Performance Analysis and real-time monitoring are generally covered by a wide range of equipment. Bit Error Rate testers provide digital link performance measurements but are not useful during real-time data flows. Real-time performance monitors utilize the fixed overhead content but vary widely from format to format. Link quality information is also present from signal reconstruction equipment in the form of receiver AGC, bit synchronizer AGC, and bit synchronizer soft decision level outputs, but no general approach to utilizing this information exists. This paper presents an approach to link tests, real-time data quality monitoring, and results presentation that utilizes a set of general purpose modules in a flexible architectural environment. The system operates over a wide range of bit rates (up to 150 Mb/s) and employs several measurement techniques, including P/N code errors or fixed PCM format errors, derived real-time BER from frame sync errors, and Data Quality Analysis derived by counting significant sync status changes. The architecture performs with a minimum of elements in place to permit a phased update of the user's unit in accordance with his needs.
NASA Astrophysics Data System (ADS)
Walker, Ernest L.
1994-05-01
This paper presents results of a theoretical investigation to evaluate the performance of code division multiple access communications over multimode optical fiber channels in an asynchronous, multiuser communication network environment. The system is evaluated using Gold sequences for spectral spreading of the baseband signal from each user employing direct-sequence biphase shift keying and intensity modulation techniques. The transmission channel model employed is a lossless linear system approximation of the field transfer function for the alpha-profile multimode optical fiber. Due to channel model complexity, a correlation receiver model employing a suboptimal receive filter was used in calculating the peak output signal at the ith receiver. In Part 1, the performance measures for the system, i.e., signal-to-noise ratio and bit error probability for the ith receiver, are derived as functions of channel characteristics, spectral spreading, number of active users, and the bit energy to noise (white) spectral density ratio. In Part 2, the overall system performance is evaluated.
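As a small illustration of the spreading step described above, the following sketch generates one length-31 Gold sequence from a pair of maximal-length LFSR sequences; the particular feedback taps, the shift index, and the chip mapping are textbook-style assumptions, not necessarily the parameters used in the study.

```python
import numpy as np

# Gold-sequence sketch: length-31 m-sequences from two primitive feedback tap
# sets ([5, 2] and [5, 4, 3, 2]), combined with a relative shift.
def lfsr(taps, length, seed=None):
    """Fibonacci LFSR output bits for the given feedback tap positions."""
    state = seed or [1] * max(taps)
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(out)

N = 31
m1 = lfsr([5, 2], N)
m2 = lfsr([5, 4, 3, 2], N)
gold = (m1 + np.roll(m2, 3)) % 2      # one Gold sequence (illustrative shift of 3)
chips = 1 - 2 * gold                   # map {0, 1} -> {+1, -1} spreading chips
print(chips)
```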
NASA Astrophysics Data System (ADS)
Rablau, Corneliu; Bredthauer, Lance
2007-10-01
Aside from the more traditional data, voice and e-mail communications, new bandwidth intensive applications in the larger consumer markets, such as music, digital pictures and movies, have led to an explosive increase in the demand for transmission capacity for optical communications networks. This has resulted in a widespread deployment of Dense Wavelength Division Multiplexing (DWDM) as a means of increasing the communications capacity by multiplexing and transmitting signals of different wavelengths (establishing multiple communication channels) through a single strand of fiber. We report on the design, assembly and characterization of a 50-GHz, 80-channel Mux-Demux module for DWDM systems. The module has been assembled from two commercially available 100 GHz, 40-channel Array Waveguide Grating (AWG) modules and a 50-GHz to 100-GHz interleaver. Relevant performance parameters such as insertion loss, channel uniformity, next-channel isolation (crosstalk) and integrated cross-talk are presented and discussed in contrast with the performance of other competing technologies such as Thin-Film-Filter-based Mux-Demux devices.
2008-12-01
The effective two-way tactical data rate is 3,060 bits per second. Note that there is no parity check or forward error correction (FEC) coding used in... of 1800 bits per second. With the use of FEC coding, the channel data rate is 2250 bits per second; however, the information data rate is still the... Link-11. If the parity bits are included, the channel data rate is 28,800 bps. If FEC coding is considered, the channel data rate is 59,520 bps
Shaping electromagnetic waves using software-automatically-designed metasurfaces.
Zhang, Qian; Wan, Xiang; Liu, Shuo; Yuan Yin, Jia; Zhang, Lei; Jun Cui, Tie
2017-06-15
We present a fully digital procedure for designing reflective coding metasurfaces to shape reflected electromagnetic waves. The design procedure is completely automatic, controlled by a personal computer. In detail, the macro coding units of the metasurface are automatically divided into several types (e.g., two types for 1-bit coding, four types for 2-bit coding, etc.), and each type of macro coding unit is formed by a discretely random arrangement of micro coding units. By combining an optimization algorithm and commercial electromagnetic software, the digital patterns of the macro coding units are optimized to possess constant phase difference for the reflected waves. The apertures of the designed reflective metasurfaces are formed by arranging the macro coding units in a certain coding sequence. To experimentally verify the performance, a coding metasurface is fabricated by automatically designing two digital 1-bit unit cells, which are arranged in an array to constitute a periodic coding metasurface that generates the required four-beam radiation with specific directions. Two complicated functional metasurfaces with circularly and elliptically shaped radiation beams are realized by automatically designing 4-bit macro coding units, showing the excellent performance of the automatic design by software. The proposed method provides a smart tool to realize various functional devices and systems automatically.
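For readers who want a feel for how a coding sequence shapes the scattered field, the sketch below evaluates the standard far-field array factor of a 1-bit coding aperture; the stripe coding sequence, unit spacing, and frequency are illustrative assumptions and are unrelated to the software-automated optimization described in the abstract.

```python
import numpy as np

# Far-field array factor of a 1-bit coding aperture: each macro unit reflects
# with phase 0 or pi according to its digit.  Stripe coding "00001111..." along
# one axis is an illustrative choice; spacing and frequency are assumptions.
c = 3e8
f = 10e9                                  # 10 GHz, illustrative
k = 2 * np.pi * f / c
d = 0.5 * c / f                           # half-wavelength unit spacing
N = 16                                    # N x N macro units

m, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
coding = (m // 4) % 2                     # stripes with period 8 units (4 wavelengths)
phase = np.pi * coding                    # 1-bit units: 0 or pi

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
phi = 0.0                                 # cut through the phi = 0 plane
af = np.zeros_like(theta, dtype=complex)
for i, th in enumerate(theta):
    spatial = k * d * np.sin(th) * (m * np.cos(phi) + n * np.sin(phi))
    af[i] = np.sum(np.exp(1j * (phase + spatial)))

peak = theta[np.argmax(np.abs(af))]
# For this stripe period the dominant scattered beams are expected near +/-14.5 deg.
print(f"strongest scattered beam found at {np.degrees(peak):.1f} deg in this cut")
```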
2016-08-01
codes contain control functions (CFs) that are reserved for encoding various controls, identification, and other special-purpose functions. Time...set of CF bits for the encoding of various control, identification, and other special-purpose functions. The control bits may be programmed in any... recycles yearly. • There are 18 CFs between position identifiers P6 and P8. Any CF bit or combination of bits can be programmed to read a
NASA Glenn Steady-State Heat Pipe Code GLENHP: Compilation for 64- and 32-Bit Windows Platforms
NASA Technical Reports Server (NTRS)
Tower, Leonard K.; Geng, Steven M.
2016-01-01
A new version of the NASA Glenn Steady State Heat Pipe Code, designated "GLENHP," is introduced here. This represents an update to the disk operating system (DOS) version LERCHP reported in NASA/TM-2000-209807. The new code operates on 32- and 64-bit Windows-based platforms from within the 32-bit command prompt window. An additional evaporator boundary condition and other features are provided.
NASA Astrophysics Data System (ADS)
He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin; Su, Jinshu
2015-01-01
To improve the transmission performance of multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband (UWB) over optical fiber, a pre-coding scheme based on low-density parity-check (LDPC) codes is adopted and experimentally demonstrated in an intensity-modulation and direct-detection MB-OFDM UWB over fiber system. Meanwhile, a symbol synchronization and pilot-aided channel estimation scheme is implemented at the receiver of the MB-OFDM UWB over fiber system. The experimental results show that the LDPC pre-coding scheme can work effectively in the MB-OFDM UWB over fiber system. After 70 km standard single-mode fiber (SSMF) transmission, at a bit error rate of 1 × 10⁻³, the receiver sensitivities are improved by about 4 dB when the LDPC code rate is 75%.
Bit-wise arithmetic coding for data compression
NASA Technical Reports Server (NTRS)
Kiely, A. B.
1994-01-01
This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
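A minimal floating-point sketch of arithmetic coding for independent, identically distributed bits is given below to make the idea concrete; a practical implementation (including the one analyzed in the article) would use integer arithmetic with renormalization, so this version is only suitable for short sequences and its parameters are illustrative assumptions.

```python
# Arithmetic coding of i.i.d. bits with P(bit = 1) = p, using float intervals.
def ac_encode(bits, p):
    low, high = 0.0, 1.0
    for b in bits:
        mid = low + (high - low) * (1.0 - p)      # [low, mid) encodes a 0
        low, high = (mid, high) if b else (low, mid)
    return (low + high) / 2.0                     # any value inside the final interval

def ac_decode(value, n, p):
    low, high, out = 0.0, 1.0, []
    for _ in range(n):
        mid = low + (high - low) * (1.0 - p)
        b = 1 if value >= mid else 0
        out.append(b)
        low, high = (mid, high) if b else (low, mid)
    return out

bits = [1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1]
code_point = ac_encode(bits, p=0.4)
print(code_point, ac_decode(code_point, len(bits), p=0.4) == bits)   # -> True
```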
Automated training site selection for large-area remote-sensing image analysis
NASA Astrophysics Data System (ADS)
McCaffrey, Thomas M.; Franklin, Steven E.
1993-11-01
A computer program is presented to select training sites automatically from remotely sensed digital imagery. The basic ideas are to guide the image analyst through the process of selecting typical and representative areas for large-area image classifications by minimizing bias, and to provide an initial list of potential classes for which training sites are required to develop a classification scheme or to verify classification accuracy. Reducing subjectivity in training site selection is achieved by using a purely statistical selection of homogeneous sites, which can then be compared to field knowledge, aerial photography, or other remote-sensing imagery and ancillary data to arrive at a final selection of sites to be used to train the classification decision rules. The selection of the homogeneous sites uses simple tests based on the coefficient of variation, the F-statistic, and the Student's t-statistic. Comparisons of site means are conducted with a linearly growing list of previously located homogeneous pixels. The program supports a common pixel-interleaved digital image format and has been tested on aerial and satellite optical imagery. The program is coded efficiently in the C programming language and was developed under AIX-Unix on an IBM RISC 6000 24-bit color workstation.
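The sketch below illustrates the style of statistical screening described above, accepting a candidate window only if its coefficient of variation, an F-test on variances, and a t-test on means all indicate homogeneity with the running class; the thresholds, significance level, and synthetic data are illustrative assumptions rather than the program's actual decision rules.

```python
import numpy as np
from scipy import stats

def is_homogeneous(window, class_pixels, cv_max=0.05, alpha=0.05):
    """Accept the window only if CV, F-test, and t-test all indicate homogeneity."""
    window = np.asarray(window, float)
    cv = window.std(ddof=1) / window.mean()
    if cv > cv_max:
        return False
    f = window.var(ddof=1) / np.var(class_pixels, ddof=1)
    df1, df2 = window.size - 1, len(class_pixels) - 1
    p_f = 2 * min(stats.f.cdf(f, df1, df2), stats.f.sf(f, df1, df2))   # two-sided F-test
    t, p_t = stats.ttest_ind(window, class_pixels, equal_var=False)    # Welch t-test
    return p_f > alpha and p_t > alpha

rng = np.random.default_rng(0)
class_pixels = rng.normal(100, 3, 200)            # previously accepted class pixels
print(is_homogeneous(rng.normal(101, 3, 25), class_pixels))    # likely accepted
print(is_homogeneous(rng.normal(140, 20, 25), class_pixels))   # rejected (CV too high)
```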
NASA Astrophysics Data System (ADS)
Aikawa, Satoru; Nakamura, Yasuhisa; Takanashi, Hitoshi
1994-02-01
This paper describes the performance of an outage-free SDH (Synchronous Digital Hierarchy) interface 256 QAM modem. An outage-free DMR (Digital Microwave Radio) is achieved by a high-coding-gain trellis-coded SPORT QAM and Super Multicarrier modem. A new frame format and its associated circuits connect the outage-free modem to the SDH interface. The newly designed VLSIs are key devices for developing the modem. As an overall modem performance, BER (bit error rate) characteristics and equipment signatures are presented. A coding gain of 4.7 dB (at a BER of 10⁻⁴) is obtained using SPORT 256 QAM and Viterbi decoding. This coding gain is realized by trellis coding as well as by an increase of the transmission rate. The roll-off factor is decreased to maintain the same frequency occupation and modulation level as an ordinary SDH 256 QAM modem.
An 802.11n wireless local area network transmission scheme for wireless telemedicine applications.
Lin, C F; Hung, S I; Chiang, I H
2010-10-01
In this paper, an 802.11n transmission scheme is proposed for wireless telemedicine applications. IEEE 802.11n standards, a power assignment strategy, space-time block coding (STBC), and an object composition Petri net (OCPN) model are adopted. With the proposed wireless system, G.729 audio bit streams, Joint Photographic Experts Group 2000 (JPEG 2000) clinical images, and Moving Picture Experts Group 4 (MPEG-4) video bit streams achieve transmission bit error rates (BER) of 10⁻⁷, 10⁻⁴, and 10⁻³ simultaneously. The proposed system meets the requirements prescribed for wireless telemedicine applications. An essential feature of this proposed transmission scheme is that clinical information that requires a high quality of service (QoS) is transmitted at a high-power transmission rate with significant error protection. For maximizing resource utilization and minimizing the total transmission power, STBC and adaptive modulation techniques are used in the proposed 802.11n wireless telemedicine system. Further, low power, direct mapping (DM), a low-error-protection scheme, and high-level modulation are adopted for messages that can tolerate a high BER. With the proposed transmission scheme, the required reliability of communication can be achieved. Our simulation results have shown that the proposed 802.11n transmission scheme can be used for developing effective wireless telemedicine systems.
LSB-based Steganography Using Reflected Gray Code for Color Quantum Images
NASA Astrophysics Data System (ADS)
Li, Panchi; Lu, Aiping
2018-02-01
At present, the classical least-significant-bit (LSB) based image steganography has been extended to quantum image processing. For the existing LSB-based quantum image steganography schemes, the embedding capacity is no more than 3 bits per pixel. Therefore, it is meaningful to study how to improve the embedding capacity of quantum image steganography. This work presents a novel LSB-based steganography using reflected Gray code for colored quantum images, and the embedding capacity of this scheme is up to 4 bits per pixel. In the proposed scheme, the secret qubit sequence is considered as a sequence of 4-bit segments. For the four bits in each segment, the first bit is embedded in the second LSB of the B channel of the cover image, and the remaining three bits are embedded in the LSBs of the RGB channels of each color pixel simultaneously, using the reflected Gray code to determine the embedded bit from the secret information. Following the transforming rule, the LSBs of the stego-image are not always the same as the secret bits, and the differences are up to almost 50%. Experimental results confirm that the proposed scheme shows good performance and outperforms the previous ones currently found in the literature in terms of embedding capacity.
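For reference, the reflected (binary-reflected) Gray code used by the scheme can be generated and inverted with the usual bit manipulations; the snippet below shows only this mapping and does not reproduce the paper's quantum-circuit embedding rule.

```python
# Binary-reflected Gray code conversion, both directions.
def binary_to_gray(b: int) -> int:
    return b ^ (b >> 1)

def gray_to_binary(g: int) -> int:
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# 3-bit reflected Gray sequence: 000 001 011 010 110 111 101 100
print([format(binary_to_gray(i), "03b") for i in range(8)])
print(all(gray_to_binary(binary_to_gray(i)) == i for i in range(256)))   # round trip
```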
NASA Astrophysics Data System (ADS)
Torres, Jhon James Granada; Soto, Ana María Cárdenas; González, Neil Guerrero
2016-10-01
In the context of gridless optical multicarrier systems, we propose a method for intercarrier interference (ICI) mitigation which allows bit error correction in scenarios of nonspectral flatness between the subcarriers composing the multicarrier system and sub-Nyquist carrier spacing. We propose a hybrid ICI mitigation technique which exploits the advantages of signal equalization at both levels: the physical level for any digital and analog pulse shaping, and the bit-data level and its ability to incorporate advanced correcting codes. The concatenation of these two complementary techniques consists of a nondata-aided equalizer applied to each optical subcarrier, and a hard-decision forward error correction applied to the sequence of bits distributed along the optical subcarriers regardless of prior subchannel quality assessment as performed in orthogonal frequency-division multiplexing modulations for the implementation of the bit-loading technique. The impact of the ICI is systematically evaluated in terms of bit-error-rate as a function of the carrier frequency spacing and the roll-off factor of the digital pulse-shaping filter for a simulated 3×32-Gbaud single-polarization quadrature phase shift keying Nyquist-wavelength division multiplexing system. After the ICI mitigation, a back-to-back error-free decoding was obtained for sub-Nyquist carrier spacings of 28.5 and 30 GHz and roll-off values of 0.1 and 0.4, respectively.
Performance of coded MFSK in a Rician fading channel. [Multiple Frequency Shift Keyed modulation
NASA Technical Reports Server (NTRS)
Modestino, J. W.; Mui, S. Y.
1975-01-01
The performance of convolutional codes in conjunction with noncoherent multiple frequency shift-keyed (MFSK) modulation and Viterbi maximum likelihood decoding on a Rician fading channel is examined in detail. While the primary motivation underlying this work has been concerned with system performance on the planetary entry channel, it is expected that the results are of considerably wider interest. Particular attention is given to modeling the channel in terms of a few meaningful parameters which can be correlated closely with the results of theoretical propagation studies. Fairly general upper bounds on bit error probability performance in the presence of fading are derived and compared with simulation results using both unquantized and quantized receiver outputs. The effects of receiver quantization and channel memory are investigated and it is concluded that the coded noncoherent MFSK system offers an attractive alternative to coherent BPSK in providing reliable low data rate communications in fading channels typical of planetary entry missions.
NASA Technical Reports Server (NTRS)
2010-01-01
Topics covered include: Software Tool Integrating Data Flow Diagrams and Petri Nets; Adaptive Nulling for Interferometric Detection of Planets; Reducing the Volume of NASA Earth-Science Data; Reception of Multiple Telemetry Signals via One Dish Antenna; Space-Qualified Traveling-Wave Tube; Smart Power Supply for Battery-Powered Systems; Parallel Processing of Broad-Band PPM Signals; Inexpensive Implementation of Many Strain Gauges; Constant-Differential-Pressure Two-Fluid Accumulator; Inflatable Tubular Structures Rigidized with Foams; Power Generator with Thermo-Differential Modules; Mechanical Extraction of Power From Ocean Currents and Tides; Nitrous Oxide/Paraffin Hybrid Rocket Engines; Optimized Li-Ion Electrolytes Containing Fluorinated Ester Co-Solvents; Probabilistic Multi-Factor Interaction Model for Complex Material Behavior; Foldable Instrumented Bits for Ultrasonic/Sonic Penetrators; Compact Rare Earth Emitter Hollow Cathode; High-Precision Shape Control of In-Space Deployable Large Membrane/Thin-Shell Reflectors; Rapid Active Sampling Package; Miniature Lightweight Ion Pump; Cryogenic Transport of High-Pressure-System Recharge Gas; Water-Vapor Raman Lidar System Reaches Higher Altitude; Compact Ku-Band T/R Module for High-Resolution Radar Imaging of Cold Land Processes; Wide-Field-of-View, High-Resolution, Stereoscopic Imager; Electrical Capacitance Volume Tomography with High-Contrast Dielectrics; Wavefront Control and Image Restoration with Less Computing; Polarization Imaging Apparatus; Stereoscopic Machine-Vision System Using Projected Circles; Metal Vapor Arcing Risk Assessment Tool; Performance Bounds on Two Concatenated, Interleaved Codes; Parameterizing Coefficients of a POD-Based Dynamical System; Confidence-Based Feature Acquisition; Algorithm for Lossless Compression of Calibrated Hyperspectral Imagery; Universal Decoder for PPM of any Order; Algorithm for Stabilizing a POD-Based Dynamical System; Mission Reliability Estimation for Repairable Robot Teams; Processing AIRS Scientific Data Through Level 3; Web-Based Requesting and Scheduling Use of Facilities; AutoGen Version 5.0; Time-Tag Generation Script; PPM Receiver Implemented in Software; Tropospheric Emission Spectrometer Product File Readers; Reporting Differences Between Spacecraft Sequence Files; Coordinating "Execute" Data for ISS and Space Shuttle; Database for Safety-Oriented Tracking of Chemicals; Apparatus for Cold, Pressurized Biogeochemical Experiments; Growing B Lymphocytes in a Three-Dimensional Culture System; Tissue-like 3D Assemblies of Human Broncho-Epithelial Cells; Isolation of Resistance-Bearing Microorganisms; Oscillating Cell Culture Bioreactor; and Liquid Cooling/Warming Garment.
Investigation of Near Shannon Limit Coding Schemes
NASA Technical Reports Server (NTRS)
Kwatra, S. C.; Kim, J.; Mo, Fan
1999-01-01
Turbo codes can deliver performance that is very close to the Shannon limit. This report investigates algorithms for convolutional turbo codes and block turbo codes. Both coding schemes can achieve performance near the Shannon limit. The performance of the schemes is obtained using computer simulations. There are three sections in this report. The first section is the introduction. The fundamental knowledge about coding, block coding, and convolutional coding is discussed. In the second section, the basic concepts of convolutional turbo codes are introduced and the performance of turbo codes, especially high-rate turbo codes, is provided from the simulation results. After introducing all the parameters that help turbo codes achieve such good performance, it is concluded that the output weight distribution should be the main consideration in designing turbo codes. Based on the output weight distribution, the performance bounds for turbo codes are given. Then, the relationships between the output weight distribution and factors like the generator polynomial, the interleaver, and the puncturing pattern are examined. The criterion for the best selection of system components is provided. The puncturing pattern algorithm is discussed in detail. Different puncturing patterns are compared for each high rate. For most of the high-rate codes, the puncturing pattern does not show any significant effect on the code performance if a pseudo-random interleaver is used in the system. For some special-rate codes with poor performance, an alternative puncturing algorithm is designed which restores their performance close to the Shannon limit. Finally, in section three, for iterative decoding of block codes, the method of building a trellis for block codes, the structure of the iterative decoding system, and the calculation of extrinsic values are discussed.
Microinverters for employment in connection with photovoltaic modules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lentine, Anthony L.; Nielson, Gregory N.; Okandan, Murat
2015-09-22
Microinverters useable in association with photovoltaic modules are described. A three-phase microinverter receives the direct current output generated by a microsystems-enabled photovoltaic cell and converts such direct current output into three-phase alternating current output. The three-phase microinverter is interleaved with other three-phase microinverters, wherein such microinverters are integrated in a photovoltaic module with the microsystems-enabled photovoltaic cell.
Zhou, Wen; Li, Xinying; Yu, Jianjun
2017-10-30
We propose QPSK millimeter-wave (mm-wave) vector signal generation for the D-band based on balanced precoding-assisted photonic frequency-quadrupling technology employing a single intensity modulator without an optical filter. The intensity MZM is driven by a balanced pre-coded 37-GHz QPSK RF signal. The modulated optical subcarriers are directly sent into a single-ended photodiode to generate a 148-GHz QPSK vector signal. We experimentally demonstrate 1-Gbaud 148-GHz QPSK mm-wave vector signal generation, and investigate the bit-error-rate (BER) performance of the vector signals at 148 GHz. The experimental results show that a BER value as low as 1.448 × 10⁻³ can be achieved when the optical power into the photodiode is 8.8 dBm. To the best of our knowledge, this is the first realization of frequency-quadrupling vector mm-wave signal generation at the D-band based on only one MZM without an optical filter.
Multi-Modulator for Bandwidth-Efficient Communication
NASA Technical Reports Server (NTRS)
Gray, Andrew; Lee, Dennis; Lay, Norman; Cheetham, Craig; Fong, Wai; Yeh, Pen-Shu; King, Robin; Ghuman, Parminder; Hoy, Scott; Fisher, Dave
2009-01-01
A modulator circuit board has recently been developed to be used in conjunction with a vector modulator to generate any of a large number of modulations for bandwidth-efficient radio transmission of digital data signals at rates that can exceed 100 Mb/s. The modulations include quadrature phase-shift keying (QPSK), offset quadrature phase-shift keying (OQPSK), Gaussian minimum-shift keying (GMSK), and octonary phase-shift keying (8PSK) with square-root raised-cosine pulse shaping. The figure is a greatly simplified block diagram showing the relationship between the modulator board and the rest of the transmitter. The role of the modulator board is to encode the incoming data stream and to shape the resulting pulses, which are fed as inputs to the vector modulator. The combination of encoding and pulse shaping in a given application is chosen to maximize the bandwidth efficiency. The modulator board includes gallium arsenide serial-to-parallel converters at its input end. A complementary metal oxide/semiconductor (CMOS) field-programmable gate array (FPGA) performs the coding and modulation computations and utilizes parallel processing in doing so. The results of the parallel computation are combined and converted to pulse waveforms by use of gallium arsenide parallel-to-serial converters integrated with digital-to-analog converters. Without changing the hardware, one can configure the modulator to produce any of the designed combinations of coding and modulation by loading the appropriate bit configuration file into the FPGA.
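As a small illustration of one of the mappings listed above, the sketch below performs Gray-coded QPSK symbol mapping, in which adjacent constellation points differ in a single bit; the pulse shaping and the FPGA parallelization of the board itself are not reproduced, and the particular bit-to-symbol assignment is an assumption for illustration.

```python
import numpy as np

# Gray-coded QPSK mapping: neighbouring constellation points differ in one bit,
# so the most likely symbol errors cause only a single bit error.
GRAY_QPSK = {                      # (b1, b0) -> unit-energy complex symbol
    (0, 0): (+1 + 1j) / np.sqrt(2),
    (0, 1): (-1 + 1j) / np.sqrt(2),
    (1, 1): (-1 - 1j) / np.sqrt(2),
    (1, 0): (+1 - 1j) / np.sqrt(2),
}

def map_qpsk(bits):
    pairs = zip(bits[0::2], bits[1::2])
    return np.array([GRAY_QPSK[p] for p in pairs])

bits = [0, 0, 1, 0, 1, 1, 0, 1]
print(map_qpsk(bits))
```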
Optimum Cyclic Redundancy Codes for Noisy Channels
NASA Technical Reports Server (NTRS)
Posner, E. C.; Merkey, P.
1986-01-01
Capabilities and limitations of cyclic redundancy codes (CRCs) for detecting transmission errors in data sent over relatively noisy channels (e.g., voice-grade telephone lines or very-high-density storage media) are discussed in a 16-page report. Because of the prevalent use of bytes in multiples of 8 bits in data transmission, the report is primarily concerned with cases in which both the block length and the number of redundant bits (check bits for use in error detection) included in each block are multiples of 8 bits.
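As a concrete illustration of computing check bits for a block, the sketch below implements a bitwise CRC using the common CRC-16/CCITT polynomial; the polynomial and block size are illustrative choices, not the specific codes evaluated in the report.

```python
# Bitwise CRC sketch with the CRC-16/CCITT polynomial 0x1021 (illustrative choice).
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):                       # process one bit at a time
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

block = bytes(range(32))                         # a 32-byte (256-bit) data block
print(f"CRC-16 check bits: 0x{crc16_ccitt(block):04X}")
```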
NASA Astrophysics Data System (ADS)
Isaac, Aboagye Adjaye; Yongsheng, Cao; Fushen, Chen
2018-05-01
We present and compare the outcome of implicit and explicit labels using intensity modulation (IM), differential quadrature phase shift keying (DQPSK), and polarization-division-multiplexed DQPSK (PDM-DQPSK). A payload bit rate of 1, 2, and 5 Gb/s is considered for IM implicit labels, while payloads of 40, 80, and 112 Gb/s are considered in DQPSK and PDM-DQPSK explicit labels by simulating a 4-code 156-Mb/s SAC label. The generated label and payloads are observed by assessing the eye diagram, received optical power (ROP), and optical signal-to-noise ratio (OSNR).
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo
1986-01-01
A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
The 1.06 micrometer wideband laser modulator: Fabrication and life testing
NASA Technical Reports Server (NTRS)
Teague, J. R.
1975-01-01
The design, fabrication, testing and delivery of an optical modulator which will operate with a mode-locked Nd:YAG laser at 1.06 micrometers were performed. The system transfers data at a nominal rate of 400 Mbps. This wideband laser modulator can transmit either Pulse Gated Binary Modulation (PGBM) or Pulse Polarization Binary Modulation (PPBM) formats. The laser beam enters the modulator and passes through both crystals; approximately 1% of the transmitted beam is split from the main beam and analyzed for the AEC signal; the remaining part of the beam exits the modulator. The delivered modulator when initially aligned and integrated with laser and electronics performed very well. The optical transmission was 69.5%. The static extinction ratio was 69:1. A 1000 hour life test was conducted with the delivered modulator. A 63 bit pseudorandom code signal was used as a driver input. At the conclusion of the life test the modulator optical transmission was 71.5% and the static extinction ratio 65:1.
Space Qualified High Speed Reed Solomon Encoder
NASA Technical Reports Server (NTRS)
Gambles, Jody W.; Winkert, Tom
1993-01-01
This paper reports a Class S, CCSDS-recommendation Reed-Solomon encoder circuit baselined for several NASA programs. The chip is fabricated using United Technologies Microelectronics Center's UTE-R radiation-hardened gate array family, contains 64,000 p-n transistor pairs, and operates at a sustained output data rate of 200 Mbit/s. The chip features a pin-selectable message interleave depth of 1 to 8 and supports output block lengths of 33 to 255 bytes. The UTE-R process is reported to produce parts that are radiation hardened to 16 rads (Si) total dose and 1.0 × 10⁻¹⁰ errors/bit-day.
Experimental realization of the analogy of quantum dense coding in classical optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Zhenwei; Sun, Yifan; Li, Pengyun
2016-06-15
We report on the experimental realization of the analogy of quantum dense coding in classical optical communication using classical optical correlations. Compared to quantum dense coding that uses pairs of photons entangled in polarization, we find that the proposed design exhibits many advantages. Considering that it is convenient to realize in optical communication, the attainable channel capacity in the experiment for dense coding can reach 2 bits, which is higher than the usual quantum coding capacity (1.585 bits). This increased channel capacity has been proven experimentally by transmitting ASCII characters in 12 quaternary digits instead of the usual 24 bits.
Progress on China nuclear data processing code system
NASA Astrophysics Data System (ADS)
Liu, Ping; Wu, Xiaofei; Ge, Zhigang; Li, Songyang; Wu, Haicheng; Wen, Lili; Wang, Wenming; Zhang, Huanyu
2017-09-01
China is developing the nuclear data processing code Ruler, which can be used for producing multi-group cross sections and related quantities from evaluated nuclear data in the ENDF format [1]. Ruler includes modules for reconstructing cross sections over the full energy range, generating Doppler-broadened cross sections for a given temperature, producing effective self-shielded cross sections in the unresolved energy range, calculating scattering cross sections in the thermal energy range, generating group cross sections and matrices, and preparing WIMS-D format data files for the reactor physics code WIMS-D [2]. The programming language of Ruler is Fortran-90. Ruler has been tested on 32-bit computers with Windows-XP and Linux operating systems. The verification of Ruler has been performed by comparison with calculation results obtained by the NJOY99 [3] processing code. The validation of Ruler has been performed by using the WIMSD5B code.
Context dependent prediction and category encoding for DPCM image compression
NASA Technical Reports Server (NTRS)
Beaudet, Paul R.
1989-01-01
Efficient compression of image data requires an understanding of the noise characteristics of sensors as well as the redundancy expected in imagery. Herein, the techniques of Differential Pulse Code Modulation (DPCM) are reviewed and modified for information-preserving data compression. The modifications include: mapping from intensity to an equal-variance space; context-dependent one- and two-dimensional predictors; a rationale for nonlinear DPCM encoding based upon an image quality model; context-dependent variable-length encoding of 2x2 data blocks; and feedback control for constant-output-rate systems. Examples are presented at compression rates between 1.3 and 2.8 bits per pixel. The need for larger block sizes, 2D context-dependent predictors, and the hope for sub-bit-per-pixel compression which maintains spatial resolution (information preserving) are discussed.
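A minimal one-dimensional DPCM loop, sketched below, shows the basic predict-quantize-reconstruct structure that the context-dependent modifications above build on; the previous-pixel predictor and the uniform step size are simplifying assumptions, not the report's predictors or quantizers.

```python
# One-dimensional DPCM sketch: previous-pixel predictor, uniform quantizer,
# and in-loop reconstruction so the encoder tracks what the decoder will see.
def dpcm_encode(row, step=8):
    recon_prev, codes = 0, []
    for x in row:
        err = int(x) - recon_prev
        q = int(round(err / step))            # quantizer index (to be entropy coded)
        codes.append(q)
        recon_prev = recon_prev + q * step    # decoder-side reconstruction
    return codes

def dpcm_decode(codes, step=8):
    recon_prev, out = 0, []
    for q in codes:
        recon_prev = recon_prev + q * step
        out.append(recon_prev)
    return out

row = [100, 102, 101, 120, 119, 118, 90, 92]
codes = dpcm_encode(row)
print(codes, dpcm_decode(codes))
```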
VHF command system study. [spectral analysis of GSFC VHF-PSK and VHF-FSK Command Systems
NASA Technical Reports Server (NTRS)
Gee, T. H.; Geist, J. M.
1973-01-01
Solutions are provided to specific problems arising in the GSFC VHF-PSK and VHF-FSK Command Systems in support of establishment and maintenance of Data Systems Standards. Signal structures which incorporate transmission on the uplink of a clock along with the PSK or FSK data are considered. Strategies are developed for allocating power between the clock and data, and spectral analyses are performed. Bit error probability and other probabilities pertinent to correct transmission of command messages are calculated. Biphase PCM/PM and PCM/FM are considered as candidate modulation techniques on the telemetry downlink, with application to command verification. Comparative performance of PCM/PM and PSK systems is given special attention, including implementation considerations. Gain in bit error performance due to coding is also considered.
NASA Technical Reports Server (NTRS)
Hackett, Timothy M.; Bilen, Sven G.; Ferreira, Paulo Victor R.; Wyglinski, Alexander M.; Reinhart, Richard C.
2016-01-01
In a communications channel, the space environment between a spacecraft and an Earth ground station can potentially cause the loss of a data link or at least degrade its performance due to atmospheric effects, shadowing, multipath, or other impairments. In adaptive and coded modulation, the signal power level at the receiver can be used to choose a modulation-coding technique that maximizes throughput while meeting bit error rate (BER) and other performance requirements. It is the goal of this research to implement a generalized interacting multiple model (IMM) filter based on Kalman filters for improved received-power estimation on software-defined radio (SDR) technology for satellite communications applications. The IMM filter has been implemented in Verilog and consists of a customizable bank of Kalman filters for choosing between performance and resource utilization. Each Kalman filter can be implemented using either solely a Schur complement module (for high area efficiency) or with Schur complement, matrix multiplication, and matrix addition modules (for high performance). These modules were simulated and synthesized for the Virtex II platform on the JPL Radio Experimenter Development System (EDS) at NASA Glenn Research Center. The results for simulation, synthesis, and hardware testing are presented.
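To make the filtering structure concrete, the sketch below runs a two-model IMM estimator over a scalar received-power track, mixing a "quiet channel" and a "fading channel" random-walk model; all models, noise values, and data are illustrative assumptions and do not reflect the Verilog implementation described in the abstract.

```python
import numpy as np

def kalman_step(x, P, z, q, r):
    """One scalar Kalman predict/update; returns posterior state, covariance, likelihood."""
    x_pred, P_pred = x, P + q                 # random-walk prediction
    s = P_pred + r                            # innovation covariance
    k = P_pred / s                            # Kalman gain
    innov = z - x_pred
    x_new = x_pred + k * innov
    P_new = (1.0 - k) * P_pred
    lik = np.exp(-0.5 * innov**2 / s) / np.sqrt(2.0 * np.pi * s)
    return x_new, P_new, lik

def imm_step(xs, Ps, mu, z, qs, r, trans):
    """One IMM cycle: mix, run model-matched filters, update model probabilities."""
    n = len(xs)
    c = trans.T @ mu                          # predicted model probabilities
    mix = (trans * mu[:, None]) / c[None, :]  # mixing probabilities mu_{i|j}
    x_mix = [sum(mix[i, j] * xs[i] for i in range(n)) for j in range(n)]
    P_mix = [sum(mix[i, j] * (Ps[i] + (xs[i] - x_mix[j])**2) for i in range(n))
             for j in range(n)]
    out = [kalman_step(x_mix[j], P_mix[j], z, qs[j], r) for j in range(n)]
    xs_new = np.array([o[0] for o in out])
    Ps_new = np.array([o[1] for o in out])
    liks = np.array([o[2] for o in out])
    mu_new = liks * c
    mu_new /= mu_new.sum()
    return xs_new, Ps_new, mu_new, float(np.dot(mu_new, xs_new))

rng = np.random.default_rng(0)
truth = np.concatenate([np.full(50, -120.0), np.linspace(-120, -130, 50)])
meas = truth + rng.normal(0, 0.5, truth.size)
xs, Ps, mu = np.array([-120.0, -120.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
trans = np.array([[0.95, 0.05], [0.05, 0.95]])
for z in meas:
    xs, Ps, mu, est = imm_step(xs, Ps, mu, z, qs=(1e-4, 0.1), r=0.25, trans=trans)
print("final power estimate (dBm):", round(est, 2), "model probabilities:", mu.round(3))
```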
NASA Astrophysics Data System (ADS)
Qiu, Junchao; Zhang, Lin; Li, Diyang; Liu, Xingcheng
2016-06-01
Chaotic sequences can be applied to realize multiple user access and improve the system security for a visible light communication (VLC) system. However, since the map patterns of chaotic sequences are usually well known, eavesdroppers can possibly derive the key parameters of chaotic sequences and subsequently retrieve the information. We design an advanced encryption standard (AES) interleaving aided multiple user access scheme to enhance the security of a chaotic code division multiple access-based visible light communication (C-CDMA-VLC) system. We propose to spread the information with chaotic sequences, and then the spread information is interleaved by an AES algorithm and transmitted over VLC channels. Since the computation complexity of performing inverse operations to deinterleave the information is high, the eavesdroppers in a high speed VLC system cannot retrieve the information in real time; thus, the system security will be enhanced. Moreover, we build a mathematical model for the AES-aided VLC system and derive the theoretical information leakage to analyze the system security. The simulations are performed over VLC channels, and the results demonstrate the effectiveness and high security of our presented AES interleaving aided chaotic CDMA-VLC system.
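The sketch below illustrates only the chaotic spreading and despreading part of such a system, using a logistic map to generate bipolar chips; the map parameter, seed, and spreading factor are illustrative assumptions, and the AES interleaving stage of the proposed scheme is omitted.

```python
import numpy as np

# Chaotic spreading-sequence sketch based on the logistic map x <- r*x*(1-x).
def logistic_chips(x0, n, r=3.99):
    x, chips = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        chips.append(1.0 if x >= 0.5 else -1.0)   # threshold to bipolar chips
    return np.array(chips)

sf = 32                                     # spreading factor (chips per bit)
chips = logistic_chips(x0=0.3141, n=sf)
data_bit = 1                                # bipolar data bit (+1 / -1)
tx = data_bit * chips                       # spread chip sequence for the VLC link
rx_bit = 1 if np.dot(tx, chips) > 0 else -1 # despread by correlating with the same chips
print("recovered bit:", rx_bit)
```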
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error correcting codes, called the inner and the outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates, say 0.1 to 0.01. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
NASA Astrophysics Data System (ADS)
Belyayev, Serhiy; Ivchenko, Nickolay
2018-04-01
Digital fluxgate magnetometers employ processing of the measured pickup signal to produce the value of the compensation current. Using pulse-width modulation with filtering for digital to analog conversion is a convenient approach, but it can introduce an intrinsic source of nonlinearity, which we discuss in this design note. A code shift of one least significant bit changes the second harmonic content of the pulse train, which feeds into the pick-up signal chain despite the heavy filtering. This effect produces a code-dependent nonlinearity. This nonlinearity can be overcome by the specific design of the timing of the pulse train signal. The second harmonic is suppressed if the first and third quarters of the excitation period pulse train are repeated in the second and fourth quarters. We demonstrate this principle on a digital magnetometer, achieving a magnetometer noise level corresponding to that of the sensor itself.
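The cancellation claimed above is easy to verify numerically: if the second quarter of the pulse train repeats the first and the fourth repeats the third, the contributions to the second-harmonic Fourier bin cancel pairwise. The sketch below checks this with arbitrary PWM-like quarter patterns, which are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_quarter = 256                                      # samples per quarter period
q1 = (rng.random(n_quarter) < 0.6).astype(float)     # arbitrary PWM-like pattern
q3 = (rng.random(n_quarter) < 0.4).astype(float)     # a different pattern

symmetric = np.concatenate([q1, q1, q3, q3])         # Q2 repeats Q1, Q4 repeats Q3
naive = np.concatenate([q1, (rng.random(n_quarter) < 0.6).astype(float),
                        q3, (rng.random(n_quarter) < 0.4).astype(float)])

def harmonic(x, k):
    """Magnitude of the k-th harmonic of one period of x."""
    return np.abs(np.fft.rfft(x))[k] / len(x)

print("2nd harmonic, symmetric train:", harmonic(symmetric, 2))  # ~0 up to rounding
print("2nd harmonic, naive train:    ", harmonic(naive, 2))      # generally nonzero
```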
Survey Of Lossless Image Coding Techniques
NASA Astrophysics Data System (ADS)
Melnychuck, Paul W.; Rabbani, Majid
1989-04-01
Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit-plane processing, and lossy-plus-residual coding. Generally speaking, the compression ratios offered by these techniques are in the range of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure, and hence their higher pel correlation leads to a greater removal of image redundancy.
NASA Astrophysics Data System (ADS)
Gerber, Florian; Mösinger, Kaspar; Furrer, Reinhard
2017-07-01
Software packages for spatial data often implement a hybrid approach of interpreted and compiled programming languages. The compiled parts are usually written in C, C++, or Fortran, and are efficient in terms of computational speed and memory usage. Conversely, the interpreted part serves as a convenient user interface and calls the compiled code for computationally demanding operations. The price paid for the user friendliness of the interpreted component is, besides performance, the limited access to low-level and optimized code. An example of such a restriction is the 64-bit vector support of the widely used statistical language R. On the R side, users do not need to change existing code and may not even notice the extension. On the other hand, interfacing 64-bit compiled code efficiently is challenging. Since many R packages for spatial data could benefit from 64-bit vectors, we investigate strategies to efficiently pass 64-bit vectors to compiled languages. More precisely, we show how to simply extend existing R packages using the foreign function interface to seamlessly support 64-bit vectors. This extension is shown with the sparse matrix algebra R package spam. The new capabilities are illustrated with an example of GIMMS NDVI3g data featuring a parametric modeling approach for a non-stationary covariance matrix.
Quadrature-quadrature phase-shift keying
NASA Astrophysics Data System (ADS)
Saha, Debabrata; Birdsall, Theodore G.
1989-05-01
Quadrature-quadrature phase-shift keying (Q2PSK) is a spectrally efficient modulation scheme which utilizes available signal space dimensions in a more efficient way than two-dimensional schemes such as QPSK and MSK (minimum-shift keying). It uses two data shaping pulses and two carriers, which are pairwise quadrature in phase, to create a four-dimensional signal space, and it increases the transmission rate by a factor of two over QPSK and MSK. However, the bit error rate performance depends on the choice of pulse pair. With simple sinusoidal and cosinusoidal data pulses, the Eb/N0 requirement for Pb(E) = 10⁻⁵ is approximately 1.6 dB higher than that of MSK. Without additional constraints, Q2PSK does not maintain a constant envelope. However, a simple block coding provides a constant envelope. This coded signal substantially outperforms MSK and TFM (tamed frequency modulation) in bandwidth efficiency. Like MSK, Q2PSK also has self-clocking and self-synchronizing ability. An optimum class of pulse shapes for use in the Q2PSK format is presented. One suboptimum realization achieves the Nyquist rate of 2 bits/s/Hz using binary detection.
BIT BY BIT: A Game Simulating Natural Language Processing in Computers
ERIC Educational Resources Information Center
Kato, Taichi; Arakawa, Chuichi
2008-01-01
BIT BY BIT is an encryption game that is designed to improve students' understanding of natural language processing in computers. Participants encode clear words into binary code using an encryption key and exchange them in the game. BIT BY BIT enables participants who do not understand the concept of binary numbers to perform the process of…
Iterative decoding of SOVA and LDPC product code for bit-patterned media recording
NASA Astrophysics Data System (ADS)
Jeong, Seongkwon; Lee, Jaejin
2018-05-01
The demand for high-density storage systems has increased due to the exponential growth of data. Bit-patterned media recording (BPMR) is one of the promising technologies to achieve densities of 1 Tbit/in² and higher. To increase the areal density in BPMR, the spacing between islands needs to be reduced, yet this aggravates inter-symbol interference and inter-track interference and degrades the bit error rate performance. In this paper, we propose a decision feedback scheme using a low-density parity-check (LDPC) product code for BPMR. This scheme can improve the decoding performance using an iterative approach with extrinsic information and log-likelihood ratio values exchanged between an iterative soft output Viterbi algorithm and the LDPC product code. Simulation results show that the proposed LDPC product code can offer 1.8 dB and 2.3 dB gains over a single LDPC code at densities of 2.5 and 3 Tb/in², respectively, when the bit error rate is 10⁻⁶.
NASA Astrophysics Data System (ADS)
Heller, René
2018-03-01
The SETI Encryption code, written in Python, creates a message for use in testing the decryptability of a simulated incoming interstellar message. The code uses images in a portable bit map (PBM) format, then writes the corresponding bits into the message, and finally returns both a PBM image and a text (TXT) file of the entire message. The natural constants (c, G, h) and the wavelength of the message are defined in the first few lines of the code, followed by the reading of the input files and their conversion into 757 strings of 359 bits to give one page. Each header of a page, i.e. the little-endian binary code translation of the tempo-spatial yardstick, is calculated and written on-the-fly for each page.
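A minimal sketch of the message-assembly step described above is given below: it reads a plain (P1) portable bit map and regroups the pixels into 359-bit strings. Only the ASCII PBM variant is handled, the input file name is a hypothetical placeholder, and the header construction and physical-constant encoding of the actual SETI Encryption code are not reproduced.

```python
# Read a plain (P1) PBM image and regroup its pixels into fixed-width bit strings.
def read_pbm_bits(path):
    with open(path) as f:
        tokens = [t for line in f
                  for t in line.split('#', 1)[0].split()]   # drop PBM comments
    assert tokens[0] == "P1", "only plain (ASCII) PBM is handled in this sketch"
    width, height = int(tokens[1]), int(tokens[2])
    bits = "".join(tokens[3:])[:width * height]              # flatten pixel digits
    return width, height, bits

def to_lines(bits, width=359):
    return [bits[i:i + width] for i in range(0, len(bits), width)]

w, h, bits = read_pbm_bits("page_image.pbm")   # hypothetical input file
print(len(to_lines(bits)), "strings of up to 359 bits")
```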
Wavelet-based image compression using shuffling and bit plane correlation
NASA Astrophysics Data System (ADS)
Kim, Seungjong; Jeong, Jechang
2000-12-01
In this paper, we propose a wavelet-based image compression method using shuffling and bit plane correlation. The proposed method improves coding performance in two steps: (1) removing the sign bit plane by a shuffling process on the quantized coefficients, and (2) choosing the arithmetic coding context according to the maximum correlation direction. The experimental results are comparable, or for some images with low correlation superior, to existing coders.
A Gigabit-per-Second Ka-Band Demonstration Using a Reconfigurable FPGA Modulator
NASA Technical Reports Server (NTRS)
Lee, Dennis; Gray, Andrew A.; Kang, Edward C.; Tsou, Haiping; Lay, Norman E.; Fong, Wai; Fisher, Dave; Hoy, Scott
2005-01-01
Gigabit-per-second communications have been a desired target for future NASA Earth science missions, and for potential manned lunar missions. Frequency bandwidth at S-band and X-band is typically insufficient to support missions at these high data rates. In this paper, we present the results of a 1 Gbps 32-QAM end-to-end experiment at Ka-band using a reconfigurable Field Programmable Gate Array (FPGA) baseband modulator board. Bit error rate measurements of the received signal using a software receiver demonstrate the feasibility of using ultra-high data rates at Ka-band, although results indicate that error correcting coding and/or modulator predistortion must be implemented in addition. Also, results of the demonstration validate the low-cost, MOS-based reconfigurable modulator approach taken to development of a high rate modulator, as opposed to more expensive ASIC or pure analog approaches.
Advanced imaging communication system
NASA Technical Reports Server (NTRS)
Hilbert, E. E.; Rice, R. F.
1977-01-01
Key elements of system are imaging and nonimaging sensors, data compressor/decompressor, interleaved Reed-Solomon block coder, convolutional-encoded/Viterbi-decoded telemetry channel, and Reed-Solomon decoding. Data compression provides efficient representation of sensor data, and channel coding improves reliability of data transmission.
Modulation and synchronization technique for MF-TDMA system
NASA Technical Reports Server (NTRS)
Faris, Faris; Inukai, Thomas; Sayegh, Soheil
1994-01-01
This report addresses modulation and synchronization techniques for a multi-frequency time division multiple access (MF-TDMA) system with onboard baseband processing. The types of synchronization techniques analyzed are asynchronous (conventional) TDMA, preambleless asynchronous TDMA, bit synchronous timing with a preamble, and preambleless bit synchronous timing. Among these alternatives, preambleless bit synchronous timing simplifies onboard multicarrier demultiplexer/demodulator designs (about 2:1 reduction in mass and power), requires smaller onboard buffers (10:1 to approximately 3:1 reduction in size), and provides better frame efficiency as well as lower onboard processing delay. Analysis and computer simulation illustrate that this technique can support a bit rate of up to 10 Mbit/s (or higher) with proper selection of design parameters. High bit rate transmission may require Doppler compensation and multiple phase error measurements. The recommended modulation technique for bit synchronous timing is coherent QPSK with differential encoding for the uplink and coherent QPSK for the downlink.
Combinatorial pulse position modulation for power-efficient free-space laser communications
NASA Technical Reports Server (NTRS)
Budinger, James M.; Vanderaar, M.; Wagner, P.; Bibyk, Steven
1993-01-01
A new modulation technique called combinatorial pulse position modulation (CPPM) is presented as a power-efficient alternative to quaternary pulse position modulation (QPPM) for direct-detection, free-space laser communications. The special case of 16C4PPM is compared to QPPM in terms of data throughput and bit error rate (BER) performance for similar laser power and pulse duty cycle requirements. The increased throughput from CPPM enables the use of forward error correction (FEC) encoding for a net decrease in the amount of laser power required for a given data throughput compared to uncoded QPPM. A specific, practical case of coded CPPM is shown to reduce the amount of power required to transmit and receive a given data sequence by at least 4.7 dB. Hardware techniques for maximum likelihood detection and symbol timing recovery are presented.
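The throughput advantage can be seen with a quick count, sketched below under the assumption that one 16C4PPM symbol occupies 16 slots with 4 of them pulsed, while a QPPM symbol occupies 4 slots with 1 pulsed, so both formats have the same pulse duty cycle.

```python
from math import comb, log2

# Throughput comparison: combinatorial PPM (4 pulses in 16 slots) vs. QPPM.
cppm_bits = int(log2(comb(16, 4)))    # floor(log2(1820)) = 10 bits per 16 slots
qppm_bits = 2                         # 2 bits per 4-slot QPPM symbol
print("16C4PPM:", cppm_bits, "bits / 16 slots")
print("QPPM:   ", 4 * qppm_bits, "bits / 16 slots")   # 8 bits over the same 16 slots
```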
Córcoles, A D; Magesan, Easwar; Srinivasan, Srikanth J; Cross, Andrew W; Steffen, M; Gambetta, Jay M; Chow, Jerry M
2015-04-29
The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code.
Adaptive bit plane quadtree-based block truncation coding for image compression
NASA Astrophysics Data System (ADS)
Li, Shenda; Wang, Jin; Zhu, Qing
2018-04-01
Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low bit rate compression, at the cost of lower quality of the decoded images, especially for images with rich texture. To solve this problem, in this paper, a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission is proposed. First, the direction of the edge in each block is detected using the Sobel operator. For the block with minimal size, an adaptive bit plane is utilized to optimize the BTC, which depends on its MSE loss when encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared to some other state-of-the-art BTC variants, so it is desirable for real-time image compression applications.
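For context, the sketch below encodes and decodes a single 4x4 block with plain AMBTC, the baseline quantizer that the adaptive bit-plane/quadtree method refines; the sample block is an illustrative assumption.

```python
import numpy as np

# Plain AMBTC for one block: a bitmap plus the means of the two pixel groups.
def ambtc_encode(block):
    mean = block.mean()
    bitmap = block >= mean
    high = block[bitmap].mean()                          # mean of pixels >= block mean
    low = block[~bitmap].mean() if (~bitmap).any() else high
    return bitmap, float(low), float(high)

def ambtc_decode(bitmap, low, high):
    return np.where(bitmap, high, low)

block = np.array([[12, 14, 200, 202],
                  [13, 15, 198, 205],
                  [11, 16, 199, 201],
                  [12, 13, 197, 204]], float)
bitmap, low, high = ambtc_encode(block)
print(ambtc_decode(bitmap, low, high))
```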
Hybrid WDM/OCDMA for next generation access network
NASA Astrophysics Data System (ADS)
Wang, Xu; Wada, Naoya; Miyazaki, T.; Cincotti, G.; Kitayama, Ken-ichi
2007-11-01
Hybrid wavelength division multiplexing/optical code division multiple access (WDM/OCDMA) passive optical networks (PONs), where asynchronous OCDMA traffic is transmitted over a WDM network, are one potential candidate for gigabit-symmetric fiber-to-the-home (FTTH) services. In a cost-effective WDM/OCDMA network, a large-scale multi-port encoder/decoder can be employed in the central office, and a low-cost encoder/decoder will be used in the optical network unit (ONU). The WDM/OCDMA system could be one promising solution for a symmetric high-capacity access network with high spectral efficiency, cost effectiveness, good flexibility, and enhanced security. Asynchronous WDM/OCDMA systems have been experimentally demonstrated using superstructured fiber Bragg gratings (SSFBGs) and multi-port OCDMA en/decoders. The total throughput has reached above one terabit/s with a spectral efficiency of about 0.41. The key enabling techniques include ultra-long SSFBGs, a multi-port E/D with high power contrast ratio, optical thresholding, differential phase shift keying modulation with balanced detection, forward error correction, etc. Using multi-level modulation formats to carry multi-bit information with a single pulse, the total capacity and spectral efficiency could be further enhanced.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bond, J.W.
1988-01-01
Data-compression codes offer the possibility of improving the throughput of existing communication systems in the near term. This study was undertaken to determine if data-compression codes could be utilized to provide message compression in a channel with up to a 0.10 bit error rate. The data-compression capabilities of codes were investigated by estimating the average number of bits per character required to transmit narrative files. The performance of the codes in a channel with errors (a noisy channel) was investigated in terms of the average numbers of characters decoded in error and of characters printed in error per bit error. Results were obtained by encoding four narrative files, which were resident on an IBM-PC and use a 58-character set. The study focused on Huffman codes and suffix/prefix comma-free codes. Other data-compression codes, in particular block codes and some simple variants of block codes, are briefly discussed to place the study results in context. Comma-free codes were found to have the most promising data compression because error propagation due to bit errors is limited to a few characters for these codes. A technique was found to identify a suffix/prefix comma-free code giving nearly the same data compression as a Huffman code with much less error propagation than the Huffman codes. Greater data compression can be achieved through the use of comma-free code word assignments based on conditional probabilities of character occurrence.
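A minimal Huffman-code construction of the kind compared in the study is sketched below, reporting the average bits per character for a sample string; the sample text is an illustrative stand-in for the narrative files, and the comma-free code construction is not reproduced.

```python
import heapq
from collections import Counter

# Build a Huffman code from character frequencies and report bits per character.
def huffman_code(freqs):
    """Return {symbol: bitstring} for the given frequency table."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

text = "this is an illustrative narrative file used to estimate bits per character"
freqs = Counter(text)
code = huffman_code(freqs)
avg_bits = sum(freqs[s] * len(code[s]) for s in freqs) / sum(freqs.values())
print(f"average code length: {avg_bits:.2f} bits/character")
```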
NASA Technical Reports Server (NTRS)
Lee, P. J.
1984-01-01
For rate 1/N convolutional codes, a recursive algorithm for finding the transfer function bound on bit error rate (BER) at the output of a Viterbi decoder is described. This technique is very fast and requires very little storage since all the unnecessary operations are eliminated. Using this technique, we find and plot bounds on the BER performance of known codes of rate 1/2 with K ≤ 18 and rate 1/3 with K ≤ 14. When more than one reported code with the same parameters is known, we select the code that minimizes the required signal-to-noise ratio for a desired bit error rate of 0.000001. This criterion of determining the goodness of a code had previously been found to be more useful than the maximum free distance criterion and was used in the code search procedures for very short constraint length codes. This very efficient technique can also be used for searches of longer constraint length codes.
NASA Technical Reports Server (NTRS)
Ormsby, J. P.
1982-01-01
An examination of the possibilities of using Landsat data to simulate NOAA-6 Advanced Very High Resolution Radiometer (AVHRR) data on two channels, as well as using actual NOAA-6 imagery, for large-scale hydrological studies is presented. A running average of 18 consecutive pixels was used to approximate 1 km resolution; the data taken by the Landsat scanners were scaled up to 8-bit values and investigated for different gray levels. AVHRR data comprising five channels of 10-bit, band-interleaved information covering 10 deg latitude were analyzed, and a suitable pixel grid was chosen for comparison with the Landsat data in a supervised classification format, in an unsupervised mode, and with ground truth. Landcover delineation was explored by removing snow, water, and cloud features from the cluster analysis, and resulted in less than 10% difference. Low-resolution large-scale data were determined to be useful for characterizing some landcover features if weekly and/or monthly updates are maintained.
NASA Astrophysics Data System (ADS)
Zhang, Xuyan; Zhang, Zhiyao; Wang, Shubing; Liang, Dong; Li, Heping; Liu, Yong
2018-03-01
We propose and demonstrate an approach that can achieve high-resolution quantization by employing soliton self-frequency shift and spectral compression. Our approach is based on a bi-directional comb-fiber architecture which is composed of a Sagnac-loop-based mirror and a comb-like combination of N sections of interleaved single-mode fibers and highly nonlinear fibers. The Sagnac-loop-based mirror placed at the terminal of a bus line reflects the optical pulses back to the bus line to achieve an additional N stages of spectral compression; thus single-stage soliton self-frequency shift (SSFS) and (2N-1)-stage spectral compression are realized in the bi-directional scheme. The fiber length in the architecture is numerically optimized, and the proposed quantization scheme is evaluated by both simulation and experiment in the case of N = 2. In the experiment, a quantization resolution of 6.2 bits is obtained, which is 1.2 bits higher than that of its uni-directional counterpart.
Combined group ECC protection and subgroup parity protection
Gara, Alan G.; Chen, Dong; Heidelberger, Philip; Ohmacht, Martin
2013-06-18
A method and system are disclosed for providing combined error code protection and subgroup parity protection for a given group of n bits. The method comprises the steps of identifying a number, m, of redundant bits for said error protection; and constructing a matrix P, wherein multiplying said given group of n bits with P produces m redundant error correction code (ECC) protection bits, and two columns of P provide parity protection for subgroups of said given group of n bits. In the preferred embodiment of the invention, the matrix P is constructed by generating permutations of m bit wide vectors with three or more, but an odd number of, elements with value one and the other elements with value zero; and assigning said vectors to rows of the matrix P.
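The row-construction rule can be sketched directly from the description above: enumerate m-bit vectors of odd weight at least three and assign one to each protected data bit. The parameters below (m = 8 check bits, n = 64 data bits) and the selection of the first n candidates are illustrative assumptions; the patent's specific row assignment and choice of parity columns are not reproduced.

```python
from itertools import combinations

# Enumerate candidate rows of P: m-bit vectors with an odd number (>= 3) of ones.
def candidate_rows(m):
    rows = []
    for w in range(3, m + 1, 2):                 # odd weights: 3, 5, 7, ...
        for ones in combinations(range(m), w):
            rows.append(tuple(1 if i in ones else 0 for i in range(m)))
    return rows

m, n = 8, 64                                     # e.g., 8 check bits for 64 data bits
rows = candidate_rows(m)
P = rows[:n]                                     # pick the first n candidates (illustrative)
print(len(rows), "candidate rows available,", len(P), "used for the matrix P")
```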
Rate-Compatible LDPC Codes with Linear Minimum Distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel
2009-01-01
A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either a fixed input block size or a fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first-mentioned submethod is useful for applications in which there are requirements for rate-compatible codes that have fixed input block sizes. These are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-block-size submethod is useful for applications in which framing constraints are imposed on the physical layers of the affected communication systems. An example of such a system is one that conforms to one of the many new wireless-communication standards that involve the use of orthogonal frequency-division modulation.
Optical transmission modules for multi-channel superconducting quantum interference device readouts.
Kim, Jin-Mok; Kwon, Hyukchan; Yu, Kwon-kyu; Lee, Yong-Ho; Kim, Kiwoong
2013-12-01
We developed an optical transmission module consisting of a 16-channel analog-to-digital converter (ADC), a digital-noise filter, and a one-line serial transmitter, which transferred Superconducting Quantum Interference Device (SQUID) readout data to a computer over a single optical cable. The 16-channel ADC sent out SQUID readout data as 32-bit serial words, each comprising an 8-bit channel field and 24-bit voltage data, at a sample rate of 1.5 kSample/s. The digital-noise filter suppressed digital noise generated by the digital clocks to keep the SQUID modulation as large as possible. The one-line serial transmitter re-encoded the 32-bit serial data into a modulated stream containing both data and clock, and sent it through a single optical cable. When the optical transmission modules were applied to a 152-channel SQUID magnetoencephalography system, the system maintained a field noise level of 3 fT/√Hz @ 100 Hz.
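A minimal sketch of how such a 32-bit serial word might be packed and unpacked (the field order, placement of the channel byte in the high bits, and unsigned voltage code are assumptions for illustration, not the module's documented format):

```python
def pack_sample(channel, voltage_code):
    """Pack an 8-bit channel id and a 24-bit ADC voltage code into one 32-bit word."""
    assert 0 <= channel < 256 and 0 <= voltage_code < (1 << 24)
    return (channel << 24) | voltage_code

def unpack_sample(word):
    """Split a 32-bit word back into (channel, voltage_code)."""
    return (word >> 24) & 0xFF, word & 0xFFFFFF

word = pack_sample(channel=17, voltage_code=0x1A2B3C)
print(hex(word), unpack_sample(word))
```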
Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.
Gao, Wei; Kwong, Sam; Jia, Yuheng
2017-08-25
In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level rate-distortion (R-D) model. The legacy "chicken-and-egg" dilemma in video coding is overcome by the learning-based R-D model. Second, a cooperative bargaining game based on the mixed R-D model is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Lastly, the intra frame QP and the inter frame adaptive bit ratios are adjusted so that inter frames receive more bit resources to maintain smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT based RC method can achieve much better R-D performance, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than other state-of-the-art one-pass RC methods, and the achieved R-D performance is very close to the performance limit of the FixedQP method.
Compression of digital images over local area networks. Appendix 1: Item 3. M.S. Thesis
NASA Technical Reports Server (NTRS)
Gorjala, Bhargavi
1991-01-01
Differential Pulse Code Modulation (DPCM) has been used with speech for many years. It has not been as successful for images because of poor edge performance. The only corruption in DPCM is quantizer error, but this corruption becomes quite large in the region of an edge because of the abrupt changes in the statistics of the signal. We introduce two improved DPCM schemes: Edge Correcting DPCM and Edge Preserving Differential Coding. These two coding schemes detect edges and take action to correct them. In the Edge Correcting scheme, the quantizer error for an edge is encoded using a recursive quantizer with entropy coding and sent to the receiver as side information. In the Edge Preserving scheme, when the quantizer input falls in the overload region, the quantizer error is encoded and sent to the receiver repeatedly until the quantizer input falls in the inner levels. Therefore these coding schemes increase the bit rate in the region of an edge and require variable rate channels. We implement these two variable rate coding schemes on a token ring network. The timed token protocol supports two classes of messages: asynchronous and synchronous. The synchronous class provides a pre-allocated bandwidth and guaranteed response time. The remaining bandwidth is dynamically allocated to the asynchronous class. The Edge Correcting DPCM is simulated by considering the edge information under the asynchronous class. For the simulation of the Edge Preserving scheme, the amount of information sent each time is fixed, but the length of the packet, or the bit rate for that packet, is chosen depending on the available capacity. The performance of the network and the performance of the image coding algorithms are studied.
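For context on the baseline scheme being improved, here is a minimal first-order DPCM loop with a uniform quantizer (a generic textbook sketch rather than the thesis's edge-correcting coder; the step size and predictor are arbitrary choices):

```python
def dpcm_encode_decode(samples, step=8):
    """First-order DPCM: predict each sample from the previous reconstruction,
    quantize the prediction error, and rebuild the signal at the decoder."""
    recon, indices = [], []
    prev = 0
    for x in samples:
        error = x - prev                       # prediction error
        q = round(error / step)                # uniform quantizer index
        indices.append(q)
        prev = prev + q * step                 # decoder-side reconstruction
        recon.append(prev)
    return indices, recon

# A sharp edge (100 -> 200) produces a large overload error, the case the
# edge-correcting and edge-preserving schemes are designed to detect.
signal = [100, 101, 102, 200, 201, 202]
idx, rec = dpcm_encode_decode(signal)
print(idx, rec)
```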
An efficient system for reliably transmitting image and video data over low bit rate noisy channels
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.
1994-01-01
This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post-processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model, resulting in improved image decompression. These studies are summarized, and the technical papers are included in the appendices.
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar
This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The difficulty is that most of the samples, given that the Gaussian variable is zero mean (which is de facto the case), tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective Signal-to-Noise Ratio (SNR) and exacerbating the problem. Quantization over higher dimensions is advantageous since it allows for fractional bit-per-sample accuracy, which may be needed at very low SNR conditions whereby the achievable secret key rate is significantly less than one bit per sample. In this paper, we propose to use Permutation Modulation (PM) for quantization of Gaussian vectors potentially containing thousands of samples. PM is applied to the magnitudes of the Gaussian samples, and we explore the dependence of the sign error probability on the magnitude of the samples. At very low SNR, we may transmit the entire label of the PM code from Bob to Alice in Reverse Reconciliation (RR) over the public channel. The side information extracted from this label can then be used by Alice to characterize the sign error probability of her individual samples. Forward Error Correction (FEC) coding can be used by Bob on each subset of samples with similar sign error probability to aid Alice in error correction. This can be done for different subsets of samples with similar sign error probabilities, leading to an Unequal Error Protection (UEP) coding paradigm.
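Permutation modulation has a convenient quantization property: the nearest PM codeword to a vector is found by sorting. The sketch below is a generic illustration under assumed parameters (the initial vector mu and the toy dimension are placeholders; the paper works with vectors of thousands of samples and applies PM to the magnitudes only):

```python
import numpy as np

def pm_quantize(x, mu):
    """Quantize |x| to the nearest permutation-modulation codeword.

    mu is the codebook's initial vector (sorted in descending order); every
    codeword is a permutation of mu, and the nearest one assigns the largest
    entry of mu to the largest |x[i]|, the second largest to the next, and so on.
    """
    order = np.argsort(-np.abs(x))             # positions of |x| from largest to smallest
    codeword = np.empty_like(mu)
    codeword[order] = mu                        # the label is this permutation of mu
    return codeword, order

rng = np.random.default_rng(0)
x = rng.normal(size=8)                          # Bob's Gaussian samples (toy dimension)
mu = np.sort(rng.normal(size=8))[::-1]          # placeholder initial vector
codeword, perm = pm_quantize(x, mu)
print(perm)                                     # the permutation label sent over the public channel
```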
Raptis, Nikos; Pikasis, Evangelos; Syvridis, Dimitris
2016-08-01
The exploitation of optical wireless communication channels in a non-line-of-sight regime is studied for point-to-point and networking configurations, considering the use of light-emitting diodes. Two environments with different scattering center densities are considered, assuming operation at 265 nm. The bit error rate performance of both pulsed and multicarrier modulation schemes is examined using numerical approaches. In the networking scenario, a central node only receives data, one node transmits useful data, and the rest act as interferers. The performance of the desired node's transmissions is evaluated. Access to the medium is controlled by a code division multiple access scheme.
Separable concatenated codes with iterative map decoding for Rician fading channels
NASA Technical Reports Server (NTRS)
Lodge, J. H.; Young, R. J.
1993-01-01
Very efficient signalling in radio channels requires the design of very powerful codes having special structure suitable for practical decoding schemes. In this paper, powerful codes are obtained by combining comparatively simple convolutional codes to form multi-tiered 'separable' convolutional codes. The decoding of these codes, using separable symbol-by-symbol maximum a posteriori (MAP) 'filters', is described. It is known that this approach yields impressive results in non-fading additive white Gaussian noise channels. Interleaving is an inherent part of the code construction, and consequently, these codes are well suited for fading channel communications. Here, simulation results for communications over Rician fading channels are presented to support this claim.
Cooperative MIMO communication at wireless sensor network: an error correcting code approach.
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in a wireless sensor network (WSN) explores energy efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where a low density parity check (LDPC) code is used as the error correcting code. The rate of the LDPC code is varied by varying the length of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of the LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller. Also, the lower encoding rate for the LDPC code offers better error characteristics.
A high throughput architecture for a low complexity soft-output demapping algorithm
NASA Astrophysics Data System (ADS)
Ali, I.; Wasenmüller, U.; Wehn, N.
2015-11-01
Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and therefore they are a part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits, with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high order modulation systems, and therefore low complexity demapping algorithms are indispensable in low power receivers. In the presence of multiple wireless communication standards, where each standard defines multiple modulation schemes, there is a need for an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is to achieve a very high throughput in doubly iterative systems, for instance, MIMO and code-aided synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high throughput, flexible, and area efficient architecture. We describe architectures to execute the investigated algorithms and implement them on an FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture, based on the best low-complexity algorithm identified, delivering a high throughput of 166 Msymbols/second for Gray-mapped 16-QAM modulation on a Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs and 2 DSP48Es.
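For concreteness, the reference computation that low-complexity demappers approximate can be sketched as a max-log LLR calculation over a Gray-mapped 16-QAM constellation (the particular labeling, normalization, and LLR sign convention below are assumptions, and the exhaustive minimum over symbols is exactly what hardware-friendly algorithms avoid):

```python
import numpy as np

# Gray-mapped 16-QAM: bits (b0 b1) select the I level, (b2 b3) the Q level.
LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}
CONSTELLATION = [((b0, b1, b2, b3),
                  complex(LEVELS[(b0, b1)], LEVELS[(b2, b3)]) / np.sqrt(10))
                 for b0 in (0, 1) for b1 in (0, 1) for b2 in (0, 1) for b3 in (0, 1)]

def maxlog_llrs(y, noise_var):
    """Max-log LLRs for one received symbol y (positive LLR favors bit = 0)."""
    llrs = []
    for i in range(4):
        d0 = min(abs(y - s) ** 2 for bits, s in CONSTELLATION if bits[i] == 0)
        d1 = min(abs(y - s) ** 2 for bits, s in CONSTELLATION if bits[i] == 1)
        llrs.append((d1 - d0) / noise_var)
    return llrs

print(maxlog_llrs(0.8 + 0.2j, noise_var=0.1))
```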
Reed Solomon codes for error control in byte organized computer memory systems
NASA Technical Reports Server (NTRS)
Lin, S.; Costello, D. J., Jr.
1984-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256K-bit DRAM's are organized in 32Kx8 bit-bytes. Byte oriented codes such as Reed Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special decoding techniques for extended single-and-double-error-correcting RS codes which are capable of high speed operation are presented. These techniques are designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.
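The flavor of finding error locations and values directly from the syndrome can be shown with a toy single-byte-error-correcting construction over GF(2^8) (a generic sketch with an arbitrary primitive polynomial and code length, not the specific extended RS codes of the paper): a single byte error e at position i gives syndromes S0 = e and S1 = α^i·e, so the value and location each follow from one table lookup.

```python
# GF(2^8) arithmetic tables, primitive polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11d).
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i], LOG[x] = x, i
    x <<= 1
    if x & 0x100:
        x ^= 0x11D
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def gf_div(a, b):
    return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 255]

def encode(data):
    """Prepend two parity bytes p0, p1 so that S0 = S1 = 0 for the whole word."""
    a = b = 0
    for i, byte in enumerate(data, start=2):
        a ^= byte
        b ^= gf_mul(EXP[i], byte)
    p1 = gf_div(a ^ b, 1 ^ EXP[1])        # from p0 ^ p1 = a and p0 ^ alpha*p1 = b
    p0 = a ^ p1
    return [p0, p1] + list(data)

def syndromes(word):
    """S0 = XOR of all bytes, S1 = XOR of alpha^i * byte_i."""
    s0 = s1 = 0
    for i, byte in enumerate(word):
        s0 ^= byte
        s1 ^= gf_mul(EXP[i], byte)
    return s0, s1

def correct_single_byte(word):
    s0, s1 = syndromes(word)
    if s0 == 0:
        return word                        # no correctable single-byte error seen
    pos = (LOG[s1] - LOG[s0]) % 255        # alpha^pos = S1 / S0 gives the location
    word = list(word)
    word[pos] ^= s0                        # S0 is the error value
    return word

codeword = encode([10, 20, 30, 40])
corrupted = list(codeword)
corrupted[3] ^= 0x5A                       # inject a single-byte error
print(correct_single_byte(corrupted) == codeword)   # True
```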
Adaptive distributed source coding.
Varodayan, David; Lin, Yao-Chung; Girod, Bernd
2012-05-01
We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
Decoding of DBEC-TBED Reed-Solomon codes. [Double-Byte-Error-Correcting, Triple-Byte-Error-Detecting
NASA Technical Reports Server (NTRS)
Deng, Robert H.; Costello, Daniel J., Jr.
1987-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256 K bit DRAM's are organized in 32 K x 8 bit-bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. The paper presents a special decoding technique for double-byte-error-correcting, triple-byte-error-detecting RS codes which is capable of high-speed operation. This technique is designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.
Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Rost, Martin Christopher
1988-01-01
Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold driven maximum distortion criterion to select the specific coder used. The different coders are built using variable blocksized transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed. The bit allocation algorithm is more fully developed, and can be used to achieve more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
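A classical starting point for transform-coefficient bit allocation, which the dissertation refines, is the high-rate rule b_i = R_avg + (1/2)·log2(σ_i²/GM), with GM the geometric mean of the coefficient variances; the sketch below (generic, with placeholder variances) also clips negative allocations and redistributes the rate:

```python
import numpy as np

def allocate_bits(variances, avg_rate):
    """High-rate bit allocation across transform coefficients.

    Starts from b_i = avg_rate + 0.5*log2(var_i / geometric_mean), then clips
    negative allocations to zero and re-spreads the total rate over the
    remaining (active) coefficients.
    """
    var = np.asarray(variances, dtype=float)
    active = np.ones(len(var), dtype=bool)
    bits = np.zeros(len(var))
    while True:
        gm = np.exp(np.mean(np.log(var[active])))
        target = avg_rate * len(var) / active.sum()    # rate spread over active coefficients
        bits[active] = target + 0.5 * np.log2(var[active] / gm)
        bits[~active] = 0.0
        newly_negative = active & (bits < 0)
        if not newly_negative.any():
            return bits
        active &= bits >= 0                             # drop coefficients that got < 0 bits

variances = [100.0, 40.0, 10.0, 4.0, 1.0, 0.25, 0.1, 0.05]   # placeholder values
print(np.round(allocate_bits(variances, avg_rate=2.0), 2))
```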
Region-of-interest determination and bit-rate conversion for H.264 video transcoding
NASA Astrophysics Data System (ADS)
Huang, Shu-Fen; Chen, Mei-Juan; Tai, Kuang-Han; Li, Mian-Shiuan
2013-12-01
This paper presents a video bit-rate transcoder for the baseline profile of the H.264/AVC standard, which fits the available channel bandwidth for the client when transmitting video bit-streams over communication channels. To maintain visual quality for low bit-rate video efficiently, this study analyzes the decoded information in the transcoder and proposes a Bayesian-theorem-based region-of-interest (ROI) determination algorithm. In addition, a curve fitting scheme is employed to find models of video bit-rate conversion. The transcoded video conforms to the target bit-rate by re-quantization according to the proposed models. After integrating the ROI detection method and the bit-rate transcoding models, the ROI-based transcoder allocates more coding bits to ROI regions and reduces the complexity of the re-encoding procedure for non-ROI regions. Hence, it not only preserves coding quality but also improves the efficiency of video transcoding for low target bit-rates, making real-time transcoding more practical. Experimental results show that the proposed framework achieves significantly better visual quality.
Smart photodetector arrays for error control in page-oriented optical memory
NASA Astrophysics Data System (ADS)
Schaffer, Maureen Elizabeth
1998-12-01
Page-oriented optical memories (POMs) have been proposed to meet high speed, high capacity storage requirements for input/output intensive computer applications. This technology offers the capability for storage and retrieval of optical data in two-dimensional pages resulting in high throughput data rates. Since currently measured raw bit error rates for these systems fall several orders of magnitude short of industry requirements for binary data storage, powerful error control codes must be adopted. These codes must be designed to take advantage of the two-dimensional memory output. In addition, POMs require an optoelectronic interface to transfer the optical data pages to one or more electronic host systems. Conventional charge coupled device (CCD) arrays can receive optical data in parallel, but the relatively slow serial electronic output of these devices creates a system bottleneck thereby eliminating the POM advantage of high transfer rates. Also, CCD arrays are "unintelligent" interfaces in that they offer little data processing capabilities. The optical data page can be received by two-dimensional arrays of "smart" photo-detector elements that replace conventional CCD arrays. These smart photodetector arrays (SPAs) can perform fast parallel data decoding and error control, thereby providing an efficient optoelectronic interface between the memory and the electronic computer. This approach optimizes the computer memory system by combining the massive parallelism and high speed of optics with the diverse functionality, low cost, and local interconnection efficiency of electronics. In this dissertation we examine the design of smart photodetector arrays for use as the optoelectronic interface for page-oriented optical memory. We review options and technologies for SPA fabrication, develop SPA requirements, and determine SPA scalability constraints with respect to pixel complexity, electrical power dissipation, and optical power limits. Next, we examine data modulation and error correction coding for the purpose of error control in the POM system. These techniques are adapted, where possible, for 2D data and evaluated as to their suitability for a SPA implementation in terms of BER, code rate, decoder time and pixel complexity. Our analysis shows that differential data modulation combined with relatively simple block codes known as array codes provide a powerful means to achieve the desired data transfer rates while reducing error rates to industry requirements. Finally, we demonstrate the first smart photodetector array designed to perform parallel error correction on an entire page of data and satisfy the sustained data rates of page-oriented optical memories. Our implementation integrates a monolithic PN photodiode array and differential input receiver for optoelectronic signal conversion with a cluster error correction code using 0.35-μm CMOS. This approach provides high sensitivity, low electrical power dissipation, and fast parallel correction of 2 x 2-bit cluster errors in an 8 x 8 bit code block to achieve corrected output data rates scalable to 102 Gbps in the current technology increasing to 1.88 Tbps in 0.1-μm CMOS.
An evaluation of the effect of JPEG, JPEG2000, and H.264/AVC on CQR codes decoding process
NASA Astrophysics Data System (ADS)
Vizcarra Melgar, Max E.; Farias, Mylène C. Q.; Zaghetto, Alexandre
2015-02-01
This paper presents a binary matrix code based on the QR Code (Quick Response Code), denoted the CQR Code (Colored Quick Response Code), and evaluates the effect of JPEG, JPEG2000 and H.264/AVC compression on the decoding process. The proposed CQR Code has three additional colors (red, green and blue), which enables twice as much storage capacity when compared to the traditional black-and-white QR Code. Using the Reed-Solomon error-correcting code, the CQR Code model has a theoretical correction capability of 38.41%. The goal of this paper is to evaluate the effect that degradations inserted by common image compression algorithms have on the decoding process. Results show that a successful decoding process can be achieved for compression rates up to 0.3877 bits/pixel, 0.1093 bits/pixel and 0.3808 bits/pixel for the JPEG, JPEG2000 and H.264/AVC formats, respectively. The algorithm that presents the best performance is H.264/AVC, followed by JPEG2000 and JPEG.
NASA Astrophysics Data System (ADS)
Manjanaik, N.; Parameshachari, B. D.; Hanumanthappa, S. N.; Banu, Reshma
2017-08-01
The intra prediction process of the H.264 video coding standard is used to code the first frame (the intra frame) of a video and obtains good coding efficiency compared to previous video coding standards. Intra frame coding reduces spatial pixel redundancy within the current frame, reduces computational complexity, and provides good rate-distortion performance. Intra frames are conventionally coded with the Rate Distortion Optimization (RDO) method, which increases computational complexity, increases the bit rate, and reduces picture quality, making it difficult to use in real-time applications; many researchers have therefore developed fast mode decision algorithms for intra frame coding. Previous fast-mode-decision approaches to intra frame coding in the H.264 standard either increased the bit rate and degraded picture quality (PSNR) at different quantization parameters, or reduced computational complexity and saved encoding time only at the cost of increased bit rate and lost picture quality. To avoid the increase in bit rate and the loss of picture quality, a better approach was developed. This paper presents that approach: Gaussian-pulse-based intra frame coding using the diagonal down-left intra prediction mode to achieve higher coding efficiency in terms of PSNR and bit rate. In the proposed method, a Gaussian pulse is multiplied with each 4x4 block of frequency-domain coefficients of the 4x4 sub-macroblocks of the current frame before quantization. Multiplying each 4x4 set of integer-transformed coefficients by the Gaussian pulse at the macroblock level scales the coefficient information in a reversible manner; the frequency samples are altered in a known and controllable way without intermixing of coefficients, which prevents the picture from being badly degraded at higher values of the quantization parameter. The proposed work was implemented using MATLAB and the JM 18.6 reference software. The performance parameters PSNR, bit rate, and compression of the intra frame were measured for YUV video sequences at QCIF resolution under different quantization parameter values, with the Gaussian pulse applied to the diagonal down-left intra prediction mode. The simulation results of the proposed algorithm are tabulated and compared with the previous algorithm of Tian et al. The proposed algorithm achieved an average bit-rate reduction of 30.98% and maintained consistent picture quality for QCIF sequences compared to the method of Tian et al.
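A rough sketch of the reversible coefficient-scaling step as described (the Gaussian shape, its width, and its placement over the 4x4 block are assumptions here, since the abstract does not specify them):

```python
import numpy as np

def gaussian_weights(sigma=2.0):
    """4x4 grid of Gaussian weights centered on the DC coefficient."""
    u, v = np.meshgrid(np.arange(4), np.arange(4), indexing="ij")
    return np.exp(-(u ** 2 + v ** 2) / (2.0 * sigma ** 2))

def scale_quantize(coeffs, qstep, weights):
    """Scale 4x4 transform coefficients by the Gaussian pulse, then quantize."""
    return np.round(coeffs * weights / qstep)

def dequantize_unscale(levels, qstep, weights):
    """Invert the quantizer and divide out the Gaussian pulse (reversible scaling)."""
    return levels * qstep / weights

coeffs = np.array([[160., 40., 10., 2.],
                   [ 35., 20.,  5., 1.],
                   [  8.,  4.,  2., 1.],
                   [  2.,  1.,  1., 0.]])      # made-up 4x4 integer-transform output
w = gaussian_weights()
levels = scale_quantize(coeffs, qstep=8.0, weights=w)
print(dequantize_unscale(levels, qstep=8.0, weights=w))
```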
Qin, Heng; Zuo, Yong; Zhang, Dong; Li, Yinghui; Wu, Jian
2017-03-06
Through a slight modification of the typical photomultiplier tube (PMT) receiver output statistics, a generalized received-response model considering both scattered propagation and random detection is presented to investigate the impact of inter-symbol interference (ISI) on the link data rate of short-range non-line-of-sight (NLOS) ultraviolet communication. Good agreement between numerical simulation and experimental results is shown. Based on the received-response characteristics, a heuristic check matrix construction algorithm for low-density parity-check (LDPC) codes is further proposed to approach the data rate bound derived for a delayed-sampling (DS) binary pulse position modulation (PPM) system. Compared to conventional LDPC coding methods, a bit error ratio (BER) below 1E-05 is achieved for short-range NLOS UVC systems operating at a data rate of 2 Mbps.
NASA Astrophysics Data System (ADS)
Chen, Zhenzhong; Han, Junwei; Ngan, King Ngi
2005-10-01
MPEG-4 treats a scene as a composition of several objects or so-called video object planes (VOPs) that are separately encoded and decoded. Such a flexible video coding framework makes it possible to code different video objects with different distortion scales. It is necessary to analyze the priority of the video objects according to their semantic importance, intrinsic properties, and psycho-visual characteristics such that the bit budget can be distributed properly among video objects to improve the perceptual quality of the compressed video. This paper aims to provide an automatic video object priority definition method based on an object-level visual attention model and further proposes an optimization framework for video object bit allocation. One significant contribution of this work is that human visual system characteristics are incorporated into the video coding optimization process. Another advantage is that the priority of the video objects can be obtained automatically instead of fixing weighting factors before encoding or relying on user interactivity. To evaluate the performance of the proposed approach, we compare it with the traditional verification model bit allocation and the optimal multiple video object bit allocation algorithms. Compared with traditional bit allocation algorithms, the objective quality of the objects with higher priority is significantly improved under this framework. These results demonstrate the usefulness of this unsupervised subjective quality lifting framework.
Error Control Techniques for Satellite and Space Communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1996-01-01
In this report, we present the results of our recent work on turbo coding in two formats. Appendix A includes the overheads of a talk that has been given at four different locations over the last eight months. This presentation has received much favorable comment from the research community and has resulted in the full-length paper included as Appendix B, 'A Distance Spectrum Interpretation of Turbo Codes'. Turbo codes use a parallel concatenation of rate 1/2 convolutional encoders combined with iterative maximum a posteriori probability (MAP) decoding to achieve a bit error rate (BER) of 10(exp -5) at a signal-to-noise ratio (SNR) of only 0.7 dB. The channel capacity for a rate 1/2 code with binary phase-shift-keyed modulation on the AWGN (additive white Gaussian noise) channel is 0 dB, and thus the turbo coding scheme comes within 0.7 dB of capacity at a BER of 10(exp -5).
Improved lossless intra coding for H.264/MPEG-4 AVC.
Lee, Yung-Lyul; Han, Ki-Hun; Sullivan, Gary J
2006-09-01
A new lossless intra coding method based on sample-by-sample differential pulse code modulation (DPCM) is presented as an enhancement of the H.264/MPEG-4 AVC standard. The H.264/AVC design includes a multidirectional spatial prediction method to reduce spatial redundancy by using neighboring samples as a prediction for the samples in a block of data to be encoded. In the new lossless intra coding method, the spatial prediction is performed based on samplewise DPCM instead of in the block-based manner used in the current H.264/AVC standard, while the block structure is retained for the residual difference entropy coding process. We show that the new method, based on samplewise DPCM, does not have a major complexity penalty, despite its apparent pipeline dependencies. Experiments show that the new lossless intra coding method reduces the bit rate by approximately 12% in comparison with the lossless intra coding method previously included in the H.264/AVC standard. As a result, the new method is currently being adopted into the H.264/AVC standard in a new enhancement project.
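To make the distinction concrete, here is a schematic contrast between block-based horizontal prediction and samplewise horizontal DPCM (an illustration only; the standard's actual prediction modes, neighbor availability rules, and residual entropy coding are more involved):

```python
import numpy as np

def blockwise_horizontal_residual(block):
    """Block-based prediction: every sample in a row is predicted from a single
    per-row predictor (here, the first column of the block)."""
    left_column = block[:, :1]
    prediction = np.repeat(left_column, block.shape[1] - 1, axis=1)
    return block[:, 1:] - prediction

def samplewise_horizontal_residual(block):
    """Samplewise DPCM: each sample is predicted from its immediate left neighbor."""
    return block[:, 1:] - block[:, :-1]

block = np.array([[50, 52, 55, 90],
                  [51, 53, 56, 92],
                  [52, 54, 58, 95],
                  [53, 55, 60, 97]])
print(blockwise_horizontal_residual(block))
print(samplewise_horizontal_residual(block))   # smaller residuals away from the edge
```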
Spatial transform coding of color images.
NASA Technical Reports Server (NTRS)
Pratt, W. K.
1971-01-01
The application of the transform-coding concept to the coding of color images represented by three primary color planes of data is discussed. The principles of spatial transform coding are reviewed and the merits of various methods of color-image representation are examined. A performance analysis is presented for the color-image transform-coding system. Results of a computer simulation of the coding system are also given. It is shown that, by transform coding, the chrominance content of a color image can be coded with an average of 1.0 bits per element or less without serious degradation. If luminance coding is also employed, the average rate reduces to about 2.0 bits per element or less.
The transmission of low frequency medical data using delta modulation techniques.
NASA Technical Reports Server (NTRS)
Arndt, G. D.; Dawson, C. T.
1972-01-01
The transmission of low-frequency medical data using delta modulation techniques is described. The delta modulators are used to distribute the low-frequency data into the passband of the telephone lines. Both adaptive and linear delta modulators are considered. Optimum bit rates to minimize distortion and intersymbol interference are discussed. Vibrocardiographic waves are analyzed as a function of bit rate and delta modulator configuration to determine their reproducibility for medical evaluation.
Digital TV tri-state delta modulation system for Space Shuttle ku-band downlink
NASA Technical Reports Server (NTRS)
Udalov, S.; Huth, G. K.; Roberts, D.; Batson, B. H.
1982-01-01
A tri-state delta modulation/demodulation (TSDM) technique which provides for efficient run-length coding of constant-intensity segments of a TV picture is described. Aspects of the hardware implementation of a high-speed TSDM transmitter and receiver for black-and-white TV, field-sequential color, or NTSC-format color are reviewed. Run-length encoding of the TSDM output can consistently reduce the required channel data rate well below one bit per sample. Compared with a bi-state delta modulation system, the present technique eliminates granularity in the reconstructed video without degrading rise or fall times. About 40 chips are used by the TSDM when handling the luminance information in a color link. A possible overall space and ground functional configuration to accommodate Shuttle digital TV with scrambling for privacy is presented.
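Run-length coding of the tri-state output pays off because constant-intensity segments of a scan line produce long runs of the 'no change' symbol; a toy sketch (the three-symbol alphabet values and the encoding format are illustrative assumptions, not the flight hardware's format):

```python
def run_length_encode(symbols):
    """Collapse a tri-state stream (-1 = step down, 0 = hold, +1 = step up) into (symbol, run) pairs."""
    runs = []
    for s in symbols:
        if runs and runs[-1][0] == s:
            runs[-1] = (s, runs[-1][1] + 1)
        else:
            runs.append((s, 1))
    return runs

# A flat region of the scan line produces a long run of 'hold' symbols.
tsdm_output = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 1]
print(run_length_encode(tsdm_output))
```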
The storage capacity of Potts models for semantic memory retrieval
NASA Astrophysics Data System (ADS)
Kropff, Emilio; Treves, Alessandro
2005-08-01
We introduce and analyse a minimal network model of semantic memory in the human brain. The model is a global associative memory structured as a collection of N local modules, each coding a feature, which can take S possible values, with a global sparseness a (the average fraction of features describing a concept). We show that, under optimal conditions, the number c_M of modules connected on average to a module can range widely between very sparse connectivity (high dilution, c_M/N → 0) and full connectivity (c_M → N), maintaining a global network storage capacity (the maximum number p_c of stored and retrievable concepts) that scales like p_c ~ c_M S²/a, with logarithmic corrections consistent with the constraint that each synapse may store up to a fraction of a bit.
DOC II 32-bit digital optical computer: optoelectronic hardware and software
NASA Astrophysics Data System (ADS)
Stone, Richard V.; Zeise, Frederick F.; Guilfoyle, Peter S.
1991-12-01
This paper describes current electronic hardware subsystems and software code which support OptiComp's 32-bit general purpose digital optical computer (DOC II). The reader is referred to earlier papers presented in this section for a thorough discussion of theory and application regarding DOC II. The primary optoelectronic subsystems include the drive electronics for the multichannel acousto-optic modulators, the avalanche photodiode amplifier, as well as threshold circuitry, and the memory subsystems. This device utilizes a single optical Boolean vector matrix multiplier and its VME based host controller interface in performing various higher level primitives. OptiComp Corporation wishes to acknowledge the financial support of the Office of Naval Research, the National Aeronautics and Space Administration, the Rome Air Development Center, and the Strategic Defense Initiative Office for the funding of this program under contracts N00014-87-C-0077, N00014-89-C-0266 and N00014-89-C- 0225.
Variational learning and bits-back coding: an information-theoretic view to Bayesian learning.
Honkela, Antti; Valpola, Harri
2004-07-01
The bits-back coding first introduced by Wallace in 1990 and later by Hinton and van Camp in 1993 provides an interesting link between Bayesian learning and information-theoretic minimum-description-length (MDL) learning approaches. The bits-back coding allows interpreting the cost function used in the variational Bayesian method called ensemble learning as a code length in addition to the Bayesian view of misfit of the posterior approximation and a lower bound of model evidence. Combining these two viewpoints provides interesting insights to the learning process and the functions of different parts of the model. In this paper, the problem of variational Bayesian learning of hierarchical latent variable models is used to demonstrate the benefits of the two views. The code-length interpretation provides new views to many parts of the problem such as model comparison and pruning and helps explain many phenomena occurring in learning.
FBCOT: a fast block coding option for JPEG 2000
NASA Astrophysics Data System (ADS)
Taubman, David; Naman, Aous; Mathew, Reji
2017-09-01
Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs, with only a modest loss in coding efficiency (typically < 0.5 dB). The algorithm provides only limited quality/SNR scalability, but offers truly reversible transcoding to/from any standard JPEG 2000 block bit-stream. The proposed FAST block coder can be used with EBCOT's post-compression RD-optimization methodology, allowing a target compressed bit-rate to be achieved even at low latencies, leading to the name FBCOT (Fast Block Coding with Optimized Truncation).
Error control for reliable digital data transmission and storage systems
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Deng, R. H.
1985-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256K-bit DRAM's are organized in 32Kx8 bit-bytes. Byte oriented codes such as Reed Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. In this paper we present some special decoding techniques for extended single- and double-error-correcting RS codes which are capable of high speed operation. These techniques are designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial. Two codes are considered: (1) a d_min = 4 single-byte-error-correcting (SBEC), double-byte-error-detecting (DBED) RS code; and (2) a d_min = 6 double-byte-error-correcting (DBEC), triple-byte-error-detecting (TBED) RS code.
NASA Astrophysics Data System (ADS)
Inoshita, Kensuke; Hama, Yoshimitsu; Kishikawa, Hiroki; Goto, Nobuo
2016-12-01
In photonic label routers, various optical signal processing functions are required; these include optical label extraction, recognition of the label, optical switching and buffering controlled by signals based on the label information and network routing tables, and label rewriting. Among these functions, we focus on photonic label recognition. We have proposed two kinds of optical waveguide circuits to recognize 16 quadrature amplitude modulation (16-QAM) codes, i.e., recognition from the minimum output port and from the maximum output port. The recognition function was theoretically analyzed and numerically simulated by the finite-difference beam-propagation method. We discuss noise tolerance in the circuit and show numerically simulated results to evaluate bit-error-rate (BER) characteristics against optical signal-to-noise ratio (OSNR). The OSNR required to obtain a BER less than 1.0×10^-3 at a symbol rate of 2.5 GBaud was 14.5 and 27.0 dB for recognition from the minimum and maximum output, respectively.
Demonstration of a High-Efficiency Free-Space Optical Communications Link
NASA Technical Reports Server (NTRS)
Birnbaum, Kevin; Farr, William; Gin, Jonathan; Moision, Bruce; Quirk, Kevin; Wright, Malcolm
2009-01-01
In this paper we discuss recent progress on the implementation of a hardware free-space optical communications test-bed. The test-bed implements an end-to-end communications system comprising a data encoder, modulator, laser-transmitter, telescope, detector, receiver and error-correction-code decoder. Implementation of each of the component systems is discussed, with an emphasis on 'real-world' system performance degradation and limitations. We have demonstrated real-time data rates of 44 Mbps and photon efficiencies of approximately 1.8 bits/photon over a 100m free-space optical link.
NASA Astrophysics Data System (ADS)
Riera-Palou, Felip; den Brinker, Albertus C.
2007-12-01
This paper introduces a new audio and speech broadband coding technique based on the combination of a pulse excitation coder and a standardized parametric coder, namely the MPEG-4 high-quality parametric coder. After presenting a series of enhancements to regular pulse excitation (RPE) to make it suitable for the modeling of broadband signals, it is shown how pulse and parametric coding complement each other and how they can be merged to yield a layered, bit-stream-scalable coder able to operate at different points in the quality/bit-rate plane. The performance of the proposed coder is evaluated in a listening test. The major result is that the extra functionality of bit-stream scalability does not come at the price of reduced performance, since the coder is competitive with standardized coders (MP3, AAC, SSC).
LDPC product coding scheme with extrinsic information for bit patterned media recording
NASA Astrophysics Data System (ADS)
Jeong, Seongkwon; Lee, Jaejin
2017-05-01
Since the density limit of the current perpendicular magnetic storage system will soon be reached, bit patterned media recording (BPMR) is a promising candidate for the next generation storage system to achieve an areal density beyond 1 Tb/in2. Each recording bit is stored in a fabricated magnetic island and the space between the magnetic islands is nonmagnetic in BPMR. To approach recording densities of 1 Tb/in2, the spacing of the magnetic islands must be less than 25 nm. Consequently, severe inter-symbol interference (ISI) and inter-track interference (ITI) occur. ITI and ISI degrade the performance of BPMR. In this paper, we propose a low-density parity check (LDPC) product coding scheme that exploits extrinsic information for BPMR. This scheme shows an improved bit error rate performance compared to that in which one LDPC code is used.
Fast and Flexible Successive-Cancellation List Decoders for Polar Codes
NASA Astrophysics Data System (ADS)
Hashemi, Seyyed Ali; Condo, Carlo; Gross, Warren J.
2017-11-01
Polar codes have gained a significant amount of attention during the past few years and have been selected as a coding scheme for the next generation of the mobile broadband standard. Among decoding schemes, successive-cancellation list (SCL) decoding provides a reasonable trade-off between error-correction performance and hardware implementation complexity when used to decode polar codes, at the cost of limited throughput. The simplified SCL (SSCL) and its extension SSCL-SPC increase the speed of decoding by removing redundant calculations when encountering particular information and frozen bit patterns (rate-one and single parity check codes), while keeping the error-correction performance unaltered. In this paper, we improve SSCL and SSCL-SPC by proving that the list size imposes a specific number of bit estimations required to decode rate-one and single parity check codes. Thus, the number of estimations can be limited while guaranteeing exactly the same error-correction performance as if all bits of the code were estimated. We call the new decoding algorithms Fast-SSCL and Fast-SSCL-SPC. Moreover, we show that the number of bit estimations in a practical application can be tuned to achieve a desirable speed, while keeping the error-correction performance almost unchanged. Hardware architectures implementing both algorithms are then described and implemented: it is shown that our design can achieve 1.86 Gb/s throughput, higher than the best state-of-the-art decoders.
Single-pixel imaging based on compressive sensing with spectral-domain optical mixing
NASA Astrophysics Data System (ADS)
Zhu, Zhijing; Chi, Hao; Jin, Tao; Zheng, Shilie; Jin, Xiaofeng; Zhang, Xianmin
2017-11-01
In this letter a single-pixel imaging structure is proposed based on compressive sensing using a spatial light modulator (SLM)-based spectrum shaper. In the approach, an SLM-based spectrum shaper, the pattern of which is a predetermined pseudorandom bit sequence (PRBS), spectrally codes the optical pulse carrying image information. The energy of the spectrally mixed pulse is detected by a single-pixel photodiode and the measurement results are used to reconstruct the image via a sparse recovery algorithm. As the mixing of the image signal and the PRBS is performed in the spectral domain, optical pulse stretching, modulation, compression and synchronization in the time domain are avoided. Experiments are implemented to verify the feasibility of the approach.
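The measurement model behind such single-pixel schemes is y = Φx, with each row of Φ one pseudorandom pattern and x sparse in some basis; below is a numpy-only sketch of recovery by iterative soft thresholding (ISTA), with made-up sizes and bipolar ±1 patterns standing in for the PRBS (all parameter values are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 64, 32, 4                            # signal length, measurements, sparsity (toy values)
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.uniform(1.0, 2.0, size=k) * rng.choice([-1, 1], size=k)

phi = rng.choice([-1.0, 1.0], size=(m, n))     # bipolar pseudorandom patterns
y = phi @ x_true                               # one scalar detector reading per pattern

def ista(y, phi, lam_ratio=0.05, iters=1000):
    """Iterative soft thresholding for min 0.5*||y - phi x||^2 + lam*||x||_1."""
    L = np.linalg.norm(phi, ord=2) ** 2        # Lipschitz constant of the gradient
    lam = lam_ratio * np.max(np.abs(phi.T @ y))
    x = np.zeros(phi.shape[1])
    for _ in range(iters):
        x = x - (phi.T @ (phi @ x - y)) / L    # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)   # soft threshold
    return x

x_hat = ista(y, phi)
print(sorted(np.nonzero(np.abs(x_hat) > 0.1)[0]), sorted(support))
```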
OptoRadio: a method of wireless communication using orthogonal M-ary PSK (OMPSK) modulation
NASA Astrophysics Data System (ADS)
Gaire, Sunil Kumar; Faruque, Saleh; Ahamed, Md. Maruf
2016-09-01
A laser-based radio communication system, OptoRadio, using an orthogonal M-ary PSK (OMPSK) modulation scheme is presented in this paper. In this scheme, when a block of data needs to be transmitted, the corresponding block of biorthogonal code is transmitted by means of multi-phase shift keying. At the receiver, two photodiodes are cross-coupled. The effect is that the net output power due to ambient light is close to zero. The laser signal is transmitted into only one of the receivers. With all other signals being cancelled out, the laser signal is an overwhelmingly dominant signal. The detailed design, bit error correction capabilities, and bandwidth efficiency are presented to illustrate the concept.
NASA Technical Reports Server (NTRS)
Ingels, F.; Schoggen, W. O.
1981-01-01
The various methods of high-bit-transition-density encoding are presented, and their relative performance is compared with respect to error propagation characteristics, transition properties, and system constraints. A computer simulation of the system using the specific PN code recommended is included.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.
1986-01-01
High rate concatenated coding systems with trellis inner codes and Reed-Solomon (RS) outer codes for application in satellite communication systems are considered. Two types of inner codes are studied: high rate punctured binary convolutional codes which result in overall effective information rates between 1/2 and 1 bit per channel use; and bandwidth efficient signal space trellis codes which can achieve overall effective information rates greater than 1 bit per channel use. Channel capacity calculations with and without side information are performed for the concatenated coding system. Two concatenated coding schemes are investigated. In Scheme 1, the inner code is decoded with the Viterbi algorithm and the outer RS code performs error correction only (decoding without side information). In Scheme 2, the inner code is decoded with a modified Viterbi algorithm which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, while branch metrics are used to provide the reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. These two schemes are proposed for use on NASA satellite channels. Results indicate that high system reliability can be achieved with little or no bandwidth expansion.
Asymmetric Memory Circuit Would Resist Soft Errors
NASA Technical Reports Server (NTRS)
Buehler, Martin G.; Perlman, Marvin
1990-01-01
Some nonlinear error-correcting codes more efficient in presence of asymmetry. Combination of circuit-design and coding concepts expected to make integrated-circuit random-access memories more resistant to "soft" errors (temporary bit errors, also called "single-event upsets" due to ionizing radiation). Integrated circuit of new type made deliberately more susceptible to one kind of bit error than to other, and associated error-correcting code adapted to exploit this asymmetry in error probabilities.
Present state of HDTV coding in Japan and future prospect
NASA Astrophysics Data System (ADS)
Murakami, Hitomi
The development status of HDTV digital codecs in Japan is evaluated; several bit-rate-reduction codecs have been developed for 1125-line/60-field HDTV, and performance trials have been conducted over satellite and optical fiber links. Prospective development efforts will attempt to achieve more efficient coding schemes able to reduce the bit rate to as little as 45 Mbps, as well as to apply coding schemes to asynchronous transfer mode (ATM) networks.
Impacts of selected stimulation patterns on the perception threshold in electrocutaneous stimulation
2011-01-01
Background Consistency is one of the most important concerns to convey stable artificially induced sensory feedback. However, the constancy of perceived sensations cannot be guaranteed, as the artificially evoked sensation is a function of the interaction of stimulation parameters. The hypothesis of this study is that the selected stimulation parameters in multi-electrode cutaneous stimulation have significant impacts on the perception threshold. Methods The investigated parameters included the stimulated location, the number of active electrodes, the number of pulses, and the interleaved time between a pair of electrodes. Biphasic, rectangular pulses were applied via five surface electrodes placed on the forearm of 12 healthy subjects. Results Our main findings were: 1) the perception thresholds at the five stimulated locations were significantly different (p < 0.0001), 2) dual-channel simultaneous stimulation lowered the perception thresholds and led to smaller variance in perception thresholds compared to single-channel stimulation, 3) the perception threshold was inversely related to the number of pulses, and 4) the perception threshold increased with increasing interleaved time when the interleaved time between two electrodes was below 500 μs. Conclusions To maintain a consistent perception threshold, our findings indicate that dual-channel simultaneous stimulation with at least five pulses should be used, and that the interleaved time between two electrodes should be longer than 500 μs. We believe that these findings have implications for design of reliable sensory feedback codes. PMID:21306616
NASA Astrophysics Data System (ADS)
Makrakis, Dimitrios; Mathiopoulos, P. Takis
A maximum likelihood sequential decoder for the reception of digitally modulated signals with single or multiamplitude constellations transmitted over a multiplicative, nonselective fading channel is derived. It is shown that its structure consists of a combination of envelope, multiple differential, and coherent detectors. The outputs of each of these detectors are jointly processed by means of an algorithm. This algorithm is presented in a recursive form. The derivation of the new receiver is general enough to accommodate uncoded as well as coded (e.g., trellis-coded) schemes. Performance evaluation results for a reduced-complexity trellis-coded QPSK system have demonstrated that the proposed receiver dramatically reduces the error floors caused by fading. At Eb/N0 = 20 dB the new receiver structure results in bit-error-rate reductions of more than three orders of magnitude compared to a conventional Viterbi receiver, while being reasonably simple to implement.
Fault-Tolerant Coding for State Machines
NASA Technical Reports Server (NTRS)
Naegle, Stephanie Taft; Burke, Gary; Newell, Michael
2008-01-01
Two reliable fault-tolerant coding schemes have been proposed for state machines that are used in field-programmable gate arrays and application-specific integrated circuits to implement sequential logic functions. The schemes apply to strings of bits in state registers, which are typically implemented in practice as assemblies of flip-flop circuits. If a single-event upset (SEU, a radiation-induced change in the bit in one flip-flop) occurs in a state register, the state machine that contains the register could go into an erroneous state or could hang, by which is meant that the machine could remain in undefined states indefinitely. The proposed fault-tolerant coding schemes are intended to prevent the state machine from going into an erroneous or hang state when an SEU occurs. To ensure reliability of the state machine, the coding scheme for bits in the state register must satisfy the following criteria: 1. All possible states are defined. 2. An SEU brings the state machine to a known state. 3. There is no possibility of a hang state. 4. No false state is entered. 5. An SEU exerts no effect on the state machine. Fault-tolerant coding schemes that have been commonly used include binary encoding and "one-hot" encoding. Binary encoding is the simplest state machine encoding and satisfies criteria 1 through 3 if all possible states are defined. Binary encoding is a binary count of the state machine number in sequence; the table represents an eight-state example. In one-hot encoding, N bits are used to represent N states: All except one of the bits in a string are 0, and the position of the 1 in the string represents the state. With proper circuit design, one-hot encoding can satisfy criteria 1 through 4. Unfortunately, the requirement to use N bits to represent N states makes one-hot coding inefficient.
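A small sketch of the two common encodings discussed above, and of why a single-event upset is detectable under one-hot encoding but silent under plain binary encoding (a generic illustration; the proposed fault-tolerant schemes add further structure on top of this):

```python
def binary_encode(state, width=3):
    """Binary encoding: the state index as a plain binary count."""
    return [(state >> i) & 1 for i in reversed(range(width))]

def one_hot_encode(state, n_states=8):
    """One-hot encoding: exactly one bit set, at the position of the state."""
    return [1 if i == state else 0 for i in range(n_states)]

def one_hot_valid(bits):
    """A single-event upset changes the word's weight, so the corruption is detectable."""
    return sum(bits) == 1

word = one_hot_encode(5)
word[2] ^= 1                       # simulate an SEU flipping one flip-flop
print(one_hot_valid(word))         # False: the corrupted state is recognized as invalid

flipped_binary = binary_encode(5)
flipped_binary[0] ^= 1             # the same upset under binary encoding...
print(flipped_binary)              # ...yields another *valid* state (silent corruption)
```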
Practical Formal Verification of MPI and Thread Programs
NASA Astrophysics Data System (ADS)
Gopalakrishnan, Ganesh; Kirby, Robert M.
Large-scale simulation codes in science and engineering are written using the Message Passing Interface (MPI). Shared memory threads are widely used directly, or to implement higher level programming abstractions. Traditional debugging methods for MPI or thread programs are incapable of providing useful formal guarantees about coverage. They get bogged down in the sheer number of interleavings (schedules), often missing shallow bugs. In this tutorial we will introduce two practical formal verification tools: ISP (for MPI C programs) and Inspect (for Pthread C programs). Unlike other formal verification tools, ISP and Inspect run directly on user source codes (much like a debugger). They pursue only the relevant set of process interleavings, using our own customized Dynamic Partial Order Reduction algorithms. For a given test harness, DPOR allows these tools to guarantee the absence of deadlocks, instrumented MPI object leaks and communication races (using ISP), and shared memory races (using Inspect). ISP and Inspect have been used to verify large pieces of code: in excess of 10,000 lines of MPI/C for ISP in under 5 seconds, and about 5,000 lines of Pthread/C code in a few hours (and much faster with the use of a cluster or by exploiting special cases such as symmetry) for Inspect. We will also demonstrate the Microsoft Visual Studio and Eclipse Parallel Tools Platform integrations of ISP (these will be available on the LiveCD).
Error-correction coding for digital communications
NASA Astrophysics Data System (ADS)
Clark, G. C., Jr.; Cain, J. B.
This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
Combined coding and delay-throughput analysis for fading channels of mobile satellite communications
NASA Technical Reports Server (NTRS)
Wang, C. C.; Yan, Tsun-Yee
1986-01-01
This paper presents the analysis of using the punctured convolutional code with Viterbi decoding to improve communications reliability. The punctured code rate is optimized so that the average delay is minimized. The coding gain in terms of the message delay is also defined. Since using punctured convolutional code with interleaving is still inadequate to combat the severe fading for short packets, the use of multiple copies of assignment and acknowledgment packets is suggested. The performance on the average end-to-end delay of this protocol is analyzed. It is shown that a replication of three copies for both assignment packets and acknowledgment packets is optimum for the cases considered.
NASA Astrophysics Data System (ADS)
Yang, Can; Ma, Cheng; Hu, Linxi; He, Guangqiang
2018-06-01
We present a hierarchical modulation coherent communication protocol, which simultaneously achieves classical optical communication and continuous-variable quantum key distribution. Our hierarchical modulation scheme consists of a quadrature phase-shift keying modulation for classical communication and a four-state discrete modulation for continuous-variable quantum key distribution. The simulation results based on practical parameters show that it is feasible to transmit both quantum information and classical information on a single carrier. We obtained a secure key rate of 10^{-3} bits/pulse to 10^{-1} bits/pulse within 40 kilometers, while the maximum bit error rate for the classical information is about 10^{-7}. Because the continuous-variable quantum key distribution protocol is compatible with standard telecommunication technology, we think our hierarchical modulation scheme can be used to upgrade digital communication systems to extend their functionality in the future.
Non-US data compression and coding research. FASAC Technical Assessment Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gray, R.M.; Cohn, M.; Craver, L.W.
1993-11-01
This assessment of recent data compression and coding research outside the United States examines fundamental and applied work in the basic areas of signal decomposition, quantization, lossless compression, and error control, as well as application development efforts in image/video compression and speech/audio compression. Seven computer scientists and engineers who are active in development of these technologies in US academia, government, and industry carried out the assessment. Strong industrial and academic research groups in Western Europe, Israel, and the Pacific Rim are active in the worldwide search for compression algorithms that provide good tradeoffs among fidelity, bit rate, and computational complexity, though the theoretical roots and virtually all of the classical compression algorithms were developed in the United States. Certain areas, such as segmentation coding, model-based coding, and trellis-coded modulation, have developed earlier or in more depth outside the United States, though the United States has maintained its early lead in most areas of theory and algorithm development. Researchers abroad are active in other currently popular areas, such as quantizer design techniques based on neural networks and signal decompositions based on fractals and wavelets, but, in most cases, either similar research is or has been going on in the United States, or the work has not led to useful improvements in compression performance. Because there is a high degree of international cooperation and interaction in this field, good ideas spread rapidly across borders (both ways) through international conferences, journals, and technical exchanges. Though there have been no fundamental data compression breakthroughs in the past five years--outside or inside the United States--there have been an enormous number of significant improvements in both places in the tradeoffs among fidelity, bit rate, and computational complexity.
Verification testing of the compression performance of the HEVC screen content coding extensions
NASA Astrophysics Data System (ADS)
Sullivan, Gary J.; Baroncini, Vittorio A.; Yu, Haoping; Joshi, Rajan L.; Liu, Shan; Xiu, Xiaoyu; Xu, Jizheng
2017-09-01
This paper reports on verification testing of the coding performance of the screen content coding (SCC) extensions of the High Efficiency Video Coding (HEVC) standard (Rec. ITU-T H.265 | ISO/IEC 23008-2 MPEG-H Part 2). The coding performance of HEVC screen content model (SCM) reference software is compared with that of the HEVC test model (HM) without the SCC extensions, as well as with the Advanced Video Coding (AVC) joint model (JM) reference software, for both lossy and mathematically lossless compression using All-Intra (AI), Random Access (RA), and Low-delay B (LB) encoding structures and using similar encoding techniques. Video test sequences in 1920×1080 RGB 4:4:4, YCbCr 4:4:4, and YCbCr 4:2:0 colour sampling formats with 8 bits per sample are tested in two categories: "text and graphics with motion" (TGM) and "mixed" content. For lossless coding, the encodings are evaluated in terms of relative bit-rate savings. For lossy compression, subjective testing was conducted at 4 quality levels for each coding case, and the test results are presented through mean opinion score (MOS) curves. The relative coding performance is also evaluated in terms of Bjøntegaard-delta (BD) bit-rate savings for equal PSNR quality. The perceptual tests and objective metric measurements show a very substantial benefit in coding efficiency for the SCC extensions, and provide consistent results with a high degree of confidence. For TGM video, the estimated bit-rate savings ranged from 60-90% relative to the JM and 40-80% relative to the HM, depending on the AI/RA/LB configuration category and colour sampling format.
Efficient Prediction Structures for H.264 Multi View Coding Using Temporal Scalability
NASA Astrophysics Data System (ADS)
Guruvareddiar, Palanivel; Joseph, Biju K.
2014-03-01
Prediction structures with "disposable view components based" hierarchical coding have been proven to be efficient for H.264 multi-view coding. Though these prediction structures along with the QP cascading schemes provide superior compression efficiency when compared to the traditional IBBP coding scheme, the temporal scalability requirements of the bit stream could not be met to the fullest. On the other hand, a fully scalable bit stream, obtained by "temporal identifier based" hierarchical coding, provides a number of advantages including bit rate adaptations and improved error resilience, but lacks in compression efficiency when compared to the former scheme. In this paper it is proposed to combine the two approaches such that a fully scalable bit stream could be realized with minimal reduction in compression efficiency when compared to state-of-the-art "disposable view components based" hierarchical coding. Simulation results show that the proposed method enables full temporal scalability with a maximum BD-PSNR reduction of only 0.34 dB. A novel method also has been proposed for the identification of the temporal identifier for the legacy H.264/AVC base layer packets. Simulation results also show that this enables the scenario where the enhancement views could be extracted at a lower frame rate (1/2 or 1/4 of the base view) with an average extraction time for a view component of only 0.38 ms.
Quantization of Gaussian samples at very low SNR regime in continuous variable QKD applications
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina
2016-09-01
The main problem for information reconciliation in continuous-variable Quantum Key Distribution (QKD) at low Signal to Noise Ratio (SNR) is quantization and assignment of labels to the samples of the Gaussian Random Variables (RVs) observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective SNR and exacerbating the problem. This paper looks at the quantization problem of the Gaussian samples in the very low SNR regime from an information-theoretic point of view. We look at the problem of two-bit-per-sample quantization of the Gaussian RVs at Alice and Bob and derive expressions for the mutual information between the bit strings resulting from this quantization. The quantization threshold for the Most Significant Bit (MSB) should be chosen based on the maximization of the mutual information between the quantized bit strings. Furthermore, while the LSB strings at Alice and Bob are balanced, in the sense that their entropy is close to maximum, this is not the case for the second most significant bit even under the optimal threshold. We show that with two-bit quantization at an SNR of -3 dB we achieve 75.8% of the maximal achievable mutual information between Alice and Bob; hence, as the number of quantization bits increases beyond 2 bits, the number of additional useful bits that can be extracted for secret key generation decreases rapidly. Furthermore, the error rates between the bit strings at Alice and Bob at the same significant-bit level are rather high, demanding very powerful error-correcting codes. While our calculations and simulations show that the mutual information between the LSBs at Alice and Bob is 0.1044 bits, that at the MSB level is only 0.035 bits. Hence, it is only by looking at the bits jointly that we are able to achieve a mutual information of 0.2217 bits, which is 75.8% of the maximum achievable. The implication is that only by coding both MSB and LSB jointly can we hope to get close to this 75.8% limit. Hence, non-binary codes are essential to achieve acceptable performance.
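The quantization and mutual-information computation described above can be illustrated with a short Monte Carlo sketch (assumed magnitude threshold and channel model, not the authors' code): Alice's Gaussian samples and Bob's noisy copies are each reduced to a sign bit plus a magnitude bit, and the mutual information between the resulting 2-bit labels is estimated from their empirical joint histogram.

```python
# Monte Carlo sketch of 2-bit quantization of correlated Gaussians at -3 dB SNR.
# The magnitude threshold of 0.7 is an illustrative choice, not the optimum.
import numpy as np

def two_bit_labels(samples, magnitude_threshold):
    msb = (samples >= 0).astype(int)                             # sign bit
    lsb = (np.abs(samples) >= magnitude_threshold).astype(int)   # magnitude bit
    return 2 * msb + lsb                                         # labels 0..3

def mutual_information(a, b, n_symbols=4):
    joint = np.zeros((n_symbols, n_symbols))
    np.add.at(joint, (a, b), 1)                 # empirical joint histogram
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / np.outer(px, py)[mask])))

rng = np.random.default_rng(0)
n = 200_000
snr_linear = 10 ** (-3 / 10)                                     # -3 dB
x = rng.standard_normal(n)                                       # Alice
y = x + rng.standard_normal(n) * np.sqrt(1 / snr_linear)         # Bob
thr = 0.7
print("I(A;B) over 2-bit labels ~",
      mutual_information(two_bit_labels(x, thr), two_bit_labels(y, thr)), "bits")
```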
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability. Therefore, MLD becomes suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes, multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
Theoretical and subjective bit assignments in transform picture coding
NASA Technical Reports Server (NTRS)
Jones, H. W., Jr.
1977-01-01
It is shown that all combinations of symmetrical input distributions with difference distortion measures give a bit assignment rule identical to the well-known rule for a Gaussian input distribution with mean-square error. Published work is examined to show that the bit assignment rule is useful for transforms of full pictures, but subjective bit assignments for transform picture coding using small block sizes are significantly different from the theoretical bit assignment rule. An intuitive explanation is based on subjective design experience, and a subjectively obtained bit assignment rule is given.
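For reference, the "well-known rule" for a Gaussian input with mean-square error mentioned above is usually written as follows (a standard result quoted here for context rather than taken from the paper), with \bar{b} the average bit budget per coefficient, \sigma_k^2 the variance of the k-th transform coefficient, and N the number of coefficients; negative allocations are clipped to zero in practice:

```latex
b_k \;=\; \bar{b} \;+\; \frac{1}{2}\log_2 \frac{\sigma_k^2}{\left(\prod_{j=1}^{N}\sigma_j^2\right)^{1/N}},
\qquad k = 1,\dots,N .
```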
Bit-rate transparent DPSK demodulation scheme based on injection locking FP-LD
NASA Astrophysics Data System (ADS)
Feng, Hanlin; Xiao, Shilin; Yi, Lilin; Zhou, Zhao; Yang, Pei; Shi, Jie
2013-05-01
We propose and demonstrate a bit-rate transparent differential phase-shift keying (DPSK) demodulation scheme based on injection locking of a multiple-quantum-well (MQW) strained InGaAsP FP-LD. By utilizing the frequency deviation generated by phase modulation and the unstable injection-locking state of the Fabry-Perot laser diode (FP-LD), DPSK-to-polarization shift-keying (PolSK) and PolSK-to-intensity modulation (IM) format conversions are realized. We analyze the bit error rate (BER) performance of this demodulation scheme. Experimental results show that different longitudinal modes, bit rates and seeding powers have an influence on the demodulation performance. We achieve error-free DPSK signal demodulation at bit rates of 10 Gbit/s, 5 Gbit/s, 2.5 Gbit/s and 1.25 Gbit/s with the same demodulation setting.
1990-01-01
generator, and a set of three rules that ensure the minimum distance property. The coding scheme and the interleaving device will be described. The BER... point Padé method which matches the c.d.f. around zero and infinity. The advantage of this technique is that only low-order moments are required. This is... accomplished by means of the intrinsic side information generated by an appropriate coding scheme. In this paper, we give sufficient conditions on channel
Xu, M; Li, Y; Kang, T Z; Zhang, T S; Ji, J H; Yang, S W
2016-11-14
Two orthogonal modulation optical label switching (OLS) schemes, in which a polarization multiplexing-differential quadrature phase shift keying (POLMUX-DQPSK, or PDQ) payload is modulated with identifications of a duobinary (DB) label and a pulse position modulation (PPM) label, are investigated in a high bit-rate OLS network. The BER performance of hybrid modulation with payload and label signals is discussed and evaluated in theory and simulation. The theoretical BER expressions of PDQ, PDQ-DB and PDQ-PPM are given with an analysis method for hybrid modulation encoding at different bit-rate ratios of payload and label. The theoretical derivation shows that the payload under hybrid modulation has a certain receiver-sensitivity gain over the payload without a label. The size of the payload BER gain obtained from hybrid modulation depends on the type of label. The simulation results are consistent with the theoretical conclusions. The extinction ratio (ER) conflict between the intensity-type and phase-type hybrid encodings can be compromised and optimized in an OLS system with hybrid modulation. The BER analysis method for hybrid modulation encoding in an OLS system can be applied to other n-ary hybrid modulation or combination modulation systems.
Nanosatellite optical downlink experiment: design, simulation, and prototyping
NASA Astrophysics Data System (ADS)
Clements, Emily; Aniceto, Raichelle; Barnes, Derek; Caplan, David; Clark, James; Portillo, Iñigo del; Haughwout, Christian; Khatsenko, Maxim; Kingsbury, Ryan; Lee, Myron; Morgan, Rachel; Twichell, Jonathan; Riesing, Kathleen; Yoon, Hyosang; Ziegler, Caleb; Cahoy, Kerri
2016-11-01
The nanosatellite optical downlink experiment (NODE) implements a free-space optical communications (lasercom) capability on a CubeSat platform that can support low earth orbit (LEO) to ground downlink rates >10 Mbps. A primary goal of NODE is to leverage commercially available technologies to provide a scalable and cost-effective alternative to radio-frequency-based communications. The NODE transmitter uses a 200-mW 1550-nm master-oscillator power-amplifier design using power-efficient M-ary pulse position modulation. To facilitate pointing the 0.12-deg downlink beam, NODE augments spacecraft body pointing with a microelectromechanical fast steering mirror (FSM) and uses an 850-nm uplink beacon to an onboard CCD camera. The 30-cm aperture ground telescope uses an infrared camera and FSM for tracking to an avalanche photodiode detector-based receiver. Here, we describe our approach to transition prototype transmitter and receiver designs to a full end-to-end CubeSat-scale system. This includes link budget refinement, drive electronics miniaturization, packaging reduction, improvements to pointing and attitude estimation, implementation of modulation, coding, and interleaving, and ground station receiver design. We capture trades and technology development needs and outline plans for integrated system ground testing.
A New Clinical HIFU System (Teleson II)
NASA Astrophysics Data System (ADS)
Ma, Yixin; Symonds-Tayler, Richard; Rivens, Ian H.; ter Haar, Gail R.
2007-05-01
Previous clinical trials with our first prototype HIFU system (Teleson I) for the treatment of liver tumors, demonstrated a major challenge to be treatment of those tumors located behind the ribs. We have designed a new multi-element transducer for rib sparing. Initial simulation and experimental results (using a single channel power amplifier) are very encouraging. A new clinical HIFU system which can drive the multi-element transducer and control each channel independently is being designed and constructed. This second version of a clinical prototype HIFU system consists of a 3D motorised gantry, a multi-channel signal generator, a multi-channel power amplifier, a user interface PC, an embedded controller and auxiliary circuits for real-time interleaving/synchronization control and a to-be-implemented safety monitoring and data logging unit. For multi-element transducers, each element can be individually switched on and off for rib sparing, and phase and amplitude modulated for potential phased array applications. The multi-channel power amplifier can be switched on/off very rapidly at required intervals to interleave with ultrasound B-Scan imaging for HIFU monitoring or radiation force elastography imaging via a dedicated interleaving/timing module. The gantry movement can also be synchronised with power amplifier on/off and phase/amplitude updating for lesion generation under a wide variety of conditions including single lesions, lesion arrays and lesion "tracks" created whilst translating the active transducer. Results from testing the system using excised tissue will be presented.
Continuous-variable quantum network coding for coherent states
NASA Astrophysics Data System (ADS)
Shang, Tao; Li, Ke; Liu, Jian-wei
2017-04-01
As far as the spectral characteristic of quantum information is concerned, the existing quantum network coding schemes can be looked on as discrete-variable quantum network coding schemes. Considering the practical advantages of continuous variables, in this paper we explore two feasible continuous-variable quantum network coding (CVQNC) schemes. Basic operations and CVQNC schemes are both provided. The first scheme is based on Gaussian cloning and ADD/SUB operators and can transmit two coherent states across the network with a fidelity of 1/2, while the second scheme utilizes continuous-variable quantum teleportation and can transmit two coherent states perfectly. By encoding classical information on quantum states, quantum network coding schemes can be utilized to transmit classical information. Scheme analysis shows that compared with the discrete-variable paradigms, the proposed CVQNC schemes provide better network throughput from the viewpoint of classical information transmission. By modulating the amplitude and phase quadratures of coherent states with classical characters, the first scheme and the second scheme can transmit 4 log2 N and 2 log2 N bits of information by a single network use, respectively.
Full Duplex, Spread Spectrum Radio System
NASA Technical Reports Server (NTRS)
Harvey, Bruce A.
2000-01-01
The goal of this project was to support the development of a full duplex, spread spectrum voice communications system. The assembly and testing of a prototype system consisting of a Harris PRISM spread spectrum radio, a TMS320C54x signal processing development board and a Zilog Z80180 microprocessor was underway at the start of this project. The efforts under this project were the development of multiple access schemes, analysis of full duplex voice feedback delays, and the development and analysis of forward error correction (FEC) algorithms. The multiple access analysis involved the selection between code division multiple access (CDMA), frequency division multiple access (FDMA) and time division multiple access (TDMA). Full duplex voice feedback analysis involved the analysis of packet size and delays associated with full loop voice feedback for confirmation of radio system performance. FEC analysis included studies of the performance under the expected burst error scenario with the relatively short packet lengths, and analysis of implementation in the TMS320C54x digital signal processor. When the capabilities and the limitations of the components used were considered, the multiple access scheme chosen was a combination TDMA/FDMA scheme that will provide up to eight users on each of three separate frequencies. Packets to and from each user will consist of 16 samples at a rate of 8,000 samples per second for a total of 2 ms of voice information. The resulting voice feedback delay will therefore be 4 - 6 ms. The most practical FEC algorithm for implementation was a convolutional code with a Viterbi decoder. Interleaving of the bits of each packet will be required to offset the effects of burst errors.
10Gbps 2D MGC OCDMA Code over FSO Communication System
NASA Astrophysics Data System (ADS)
Bhanja, Urmila; Khuntia, Arpita; Alamasety, Swati
2017-08-01
Currently, wide bandwidth signal dissemination along with low latency is a leading requirement in various applications. Free-space optical wireless communication has been introduced as a realistic technology for bridging the gap in present high-data-rate fiber connectivity and as a provisional backbone for rapidly deployable wireless communication infrastructure. The manuscript highlights the implementation of 10 Gbps SAC-OCDMA FSO communications using a modified two-dimensional Golomb code (2D MGC) that possesses better autocorrelation, minimum cross-correlation and high cardinality. A comparison between the pseudo-orthogonal (PSO) matrix code and the modified two-dimensional Golomb code (2D MGC) is developed in the proposed SAC-OCDMA FSO communication module, taking different parameters into account. The simulation outcome signifies that the communication radius is bounded by the multiple access interference (MAI). In this work, a comparison is made in terms of bit error rate (BER) and quality factor (Q) between the modified two-dimensional Golomb code (2D MGC) and the PSO matrix code. It is observed that the 2D MGC yields better results than the PSO matrix code. The simulation results are validated using Optisystem version 14.
Scene-aware joint global and local homographic video coding
NASA Astrophysics Data System (ADS)
Peng, Xiulian; Xu, Jizheng; Sullivan, Gary J.
2016-09-01
Perspective motion is commonly represented in video content that is captured and compressed for various applications including cloud gaming, vehicle and aerial monitoring, etc. Existing approaches based on an eight-parameter homography motion model cannot deal with this efficiently, either due to low prediction accuracy or excessive bit rate overhead. In this paper, we consider the camera motion model and scene structure in such video content and propose a joint global and local homography motion coding approach for video with perspective motion. The camera motion is estimated by a computer vision approach, and camera intrinsic and extrinsic parameters are globally coded at the frame level. The scene is modeled as piece-wise planes, and three plane parameters are coded at the block level. Fast gradient-based approaches are employed to search for the plane parameters for each block region. In this way, improved prediction accuracy and low bit costs are achieved. Experimental results based on the HEVC test model show that up to 9.1% bit rate savings can be achieved (with equal PSNR quality) on test video content with perspective motion. Test sequences for the example applications showed a bit rate savings ranging from 3.7 to 9.1%.
Cardinality enhancement utilizing Sequential Algorithm (SeQ) code in OCDMA system
NASA Astrophysics Data System (ADS)
Fazlina, C. A. S.; Rashidi, C. B. M.; Rahman, A. K.; Aljunid, S. A.
2017-11-01
Optical Code Division Multiple Access (OCDMA) has become important with the increasing demand for high capacity and speed in optical network communication, because of the high efficiency that the OCDMA technique can achieve; hence the fibre bandwidth is fully used. In this paper we focus on the Sequential Algorithm (SeQ) code with the AND detection technique using the Optisystem design tool. The results reveal that the SeQ code is capable of eliminating Multiple Access Interference (MAI) and improving the Bit Error Rate (BER), Phase Induced Intensity Noise (PIIN) and orthogonality between users in the system. From the results, SeQ shows good BER performance and is capable of accommodating 190 simultaneous users, in contrast with existing codes. Thus, the SeQ code enhances the system by about 36% and 111% relative to the FCC and DCS codes, respectively. In addition, SeQ has good BER performance of 10^-25 at 155 Mbps in comparison with the 622 Mbps, 1 Gbps and 2 Gbps bit rates. From the plotted results, a 155 Mbps bit rate is a suitable enough speed for FTTH and LAN networks. These conclusions can be drawn based on the superior performance of the SeQ code. Thus, these codes will provide an opportunity in OCDMA systems for better quality of service in optical access networks for future generations' usage.
Pseudo-color coding method for high-dynamic single-polarization SAR images
NASA Astrophysics Data System (ADS)
Feng, Zicheng; Liu, Xiaolin; Pei, Bingzhi
2018-04-01
A raw synthetic aperture radar (SAR) image usually has a 16-bit or higher bit depth, which cannot be directly visualized on 8-bit displays. In this study, we propose a pseudo-color coding method for high-dynamic single-polarization SAR images. The method considers the characteristics of both SAR images and human perception. In HSI (hue, saturation and intensity) color space, the method carries out high-dynamic range tone mapping and pseudo-color processing simultaneously in order to avoid loss of details and to improve object identifiability. It is a highly efficient global algorithm.
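The abstract does not give the exact mapping, but the general idea can be sketched as follows (a minimal illustration with an assumed logarithmic tone mapping and an assumed hue ramp, using HSV as a stand-in for HSI; it is not the authors' algorithm):

```python
# Sketch: compress a 16-bit SAR amplitude image into [0, 1] with a log tone
# mapping, use that value as intensity, and drive the hue from the same value
# so dark background and bright scatterers separate by colour.
import colorsys
import numpy as np

def pseudo_color(sar16):
    """Map a 16-bit single-polarization SAR image to an 8-bit RGB image."""
    amp = sar16.astype(np.float64)
    compressed = np.log1p(amp) / np.log1p(65535.0)       # tone mapping to [0, 1]
    rgb = np.empty(sar16.shape + (3,), dtype=np.uint8)
    for idx in np.ndindex(sar16.shape):
        v = compressed[idx]
        hue = 0.66 * (1.0 - v)                            # blue (dark) -> red (bright)
        r, g, b = colorsys.hsv_to_rgb(hue, 1.0, v)
        rgb[idx] = (int(255 * r), int(255 * g), int(255 * b))
    return rgb

demo = np.random.default_rng(1).integers(0, 65536, (4, 4)).astype(np.uint16)
print(pseudo_color(demo)[0, 0])
```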
Miniaturized module for the wireless transmission of measurements with Bluetooth.
Roth, H; Schwaibold, M; Moor, C; Schöchlin, J; Bolz, A
2002-01-01
The wiring of patients for obtaining medical measurements has many disadvantages. In order to limit these, a miniaturized module was developed which digitizes analog signals and sends the signal wirelessly to the receiver using Bluetooth. Bluetooth is especially suitable for this application because distances of up to 10 m are possible with low power consumption and robust transmission with encryption. The module consists of a Bluetooth chip, which is initialized by a microcontroller in such a way that connections from other Bluetooth receivers can be accepted. The signals are then transmitted to the distant end. The maximum bit rate of the 23 mm x 30 mm module is 73.5 kBit/s. At 4.7 kBit/s, the current consumption is 12 mA.
The Reed-Solomon encoders: Conventional versus Berlekamp's architecture
NASA Technical Reports Server (NTRS)
Perlman, M.; Lee, J. J.
1982-01-01
Concatenated coding, with a convolutional inner code and a Reed-Solomon (RS) outer code, was adopted for spacecraft telemetry on interplanetary space missions. Conventional RS encoders are compared with those that incorporate two architectural features which approximately halve the number of multiplications of a set of fixed arguments by any RS codeword symbol. The fixed arguments and the RS symbols are taken from a nonbinary finite field. Each set of multiplications is bit-serially performed and completed during one (bit-serial) symbol shift. All firmware employed by conventional RS encoders is eliminated.
Quantum image coding with a reference-frame-independent scheme
NASA Astrophysics Data System (ADS)
Chapeau-Blondeau, François; Belin, Etienne
2016-07-01
For binary images, or bit planes of non-binary images, we investigate the possibility of a quantum coding decodable by a receiver in the absence of reference frames shared with the emitter. Direct image coding with one qubit per pixel and non-aligned frames leads to decoding errors equivalent to a quantum bit-flip noise increasing with the misalignment. We show the feasibility of frame-invariant coding by using for each pixel a qubit pair prepared in one of two controlled entangled states. With just one common axis shared between the emitter and receiver, exact decoding for each pixel can be obtained by means of two two-outcome projective measurements operating separately on each qubit of the pair. With strictly no alignment information between the emitter and receiver, exact decoding can be obtained by means of a two-outcome projective measurement operating jointly on the qubit pair. In addition, the frame-invariant coding is shown much more resistant to quantum bit-flip noise compared to the direct non-invariant coding. For a cost per pixel of two (entangled) qubits instead of one, complete frame-invariant image coding and enhanced noise resistance are thus obtained.
NASA Astrophysics Data System (ADS)
Ozharar, Sarper
This thesis focuses on the generation and applications of stable optical frequency combs. Optical frequency combs are defined as equally spaced optical frequencies with a fixed phase relation among themselves. The conventional source of optical frequency combs is the optical spectrum of modelocked lasers. In this work, we investigated alternative methods for optical comb generation, such as dual sine wave phase modulation, which is more practical and cost effective compared to modelocked lasers stabilized to a reference. Incorporating these comb lines, we have generated tunable RF tones using the serrodyne technique. The tuning range was +/-1 MHz, limited by the electronic waveform generator, and the RF carrier frequency is limited by the bandwidth of the photodetector. Similarly, using parabolic phase modulation together with time division multiplexing, RF chirp extension has been realized. Another application of the optical frequency combs studied in this thesis is real-time data mining in a bit stream. A novel optoelectronic logic gate has been developed for this application and used to detect an 8-bit-long target pattern. Another approach based on orthogonal Hadamard codes has also been proposed and explained in detail. Novel intracavity modulation schemes have also been investigated and applied to various applications such as (a) improving rational harmonic modelocking for repetition rate multiplication and pulse-to-pulse amplitude equalization, (b) frequency-skewed pulse generation for ranging and (c) intracavity active phase modulation in amplitude-modulated modelocked lasers for supermode noise spur suppression and integrated jitter reduction. The thesis concludes with comments on future work and next steps to improve some of the results presented in this work.
Jeong, Jong Seob; Chang, Jin Ho; Shung, K. Kirk
2009-01-01
For noninvasive treatment of prostate tissue using high intensity focused ultrasound (HIFU), this paper proposes a design of an integrated multi-functional confocal phased array (IMCPA) and a strategy to perform both imaging and therapy simultaneously with this array. IMCPA is composed of triple-row phased arrays: a 6 MHz array in the center row for imaging and two 4 MHz arrays in the outer rows for therapy. Different types of piezoelectric materials and stack configurations may be employed to maximize their respective functionalities, i.e., therapy and imaging. Fabrication complexity of IMCPA may be reduced by assembling already constructed arrays. In IMCPA, reflected therapeutic signals may corrupt the quality of imaging signals received by the center row array. This problem can be overcome by implementing a coded excitation approach and/or a notch filter when B-mode images are formed during therapy. The 13-bit Barker code, which is a binary code with unique autocorrelation properties, is preferred for implementing coded excitation, although other codes may also be used. From both Field II simulation and experimental results, it was verified whether these remedial approaches would make it feasible to simultaneously carry out imaging and therapy with IMCPA. The results showed that the 13-bit Barker code with 3 cycles per bit provided acceptable performances. The measured −6 dB and −20 dB range mainlobe widths were 0.52 mm and 0.91 mm, respectively, and a range sidelobe level was measured to be −48 dB regardless of whether a notch filter was used. The 13-bit Barker code with 2 cycles per bit yielded −6 dB and −20 dB range mainlobe widths of 0.39 mm and 0.67 mm. Its range sidelobe level was found to be −40 dB after notch filtering. These results indicate the feasibility of the proposed transducer design and system for real-time imaging during therapy. PMID:19811994
Jeong, Jong Seob; Chang, Jin Ho; Shung, K Kirk
2009-09-01
For noninvasive treatment of prostate tissue using high-intensity focused ultrasound this paper proposes a design of an integrated multifunctional confocal phased array (IMCPA) and a strategy to perform both imaging and therapy simultaneously with this array. IMCPA is composed of triple-row phased arrays: a 6-MHz array in the center row for imaging and two 4-MHz arrays in the outer rows for therapy. Different types of piezoelectric materials and stack configurations may be employed to maximize their respective functionalities, i.e., therapy and imaging. Fabrication complexity of IMCPA may be reduced by assembling already constructed arrays. In IMCPA, reflected therapeutic signals may corrupt the quality of imaging signals received by the center-row array. This problem can be overcome by implementing a coded excitation approach and/or a notch filter when B-mode images are formed during therapy. The 13-bit Barker code, which is a binary code with unique autocorrelation properties, is preferred for implementing coded excitation, although other codes may also be used. From both Field II simulation and experimental results, we verified whether these remedial approaches would make it feasible to simultaneously carry out imaging and therapy by IMCPA. The results showed that the 13-bit Barker code with 3 cycles per bit provided acceptable performances. The measured -6 dB and -20 dB range mainlobe widths were 0.52 mm and 0.91 mm, respectively, and a range sidelobe level was measured to be -48 dB regardless of whether a notch filter was used. The 13-bit Barker code with 2 cycles per bit yielded -6 dB and -20 dB range mainlobe widths of 0.39 mm and 0.67 mm. Its range sidelobe level was found to be -40 dB after notch filtering. These results indicate the feasibility of the proposed transducer design and system for real-time imaging during therapy.
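The autocorrelation property that motivates the 13-bit Barker code for coded excitation is easy to check numerically (standard Barker sequence shown; the per-bit cycle count, pulse shaping, and notch filtering discussed in the paper are omitted):

```python
# The aperiodic autocorrelation of the 13-bit Barker code peaks at 13 with all
# sidelobes of magnitude at most 1, so a matched filter compresses the long
# coded transmit burst back to a sharp axial response.
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
autocorr = np.correlate(barker13, barker13, mode="full")
print(autocorr)   # central peak of 13, all other values 0 or 1
```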
Technology transfer of military space microprocessor developments
NASA Astrophysics Data System (ADS)
Gorden, C.; King, D.; Byington, L.; Lanza, D.
1999-01-01
Over the past 13 years the Air Force Research Laboratory (AFRL) has led the development of microprocessors and computers for USAF space and strategic missile applications. As a result of these Air Force development programs, advanced computer technology is available for use by civil and commercial space customers as well. The Generic VHSIC Spaceborne Computer (GVSC) program began in 1985 at AFRL to fulfill a deficiency in the availability of space-qualified data and control processors. GVSC developed a radiation hardened multi-chip version of the 16-bit, Mil-Std 1750A microprocessor. The follow-on to GVSC, the Advanced Spaceborne Computer Module (ASCM) program, was initiated by AFRL to establish two industrial sources for complete, radiation-hardened 16-bit and 32-bit computers and microelectronic components. Development of the Control Processor Module (CPM), the first of two ASCM contract phases, concluded in 1994 with the availability of two sources for space-qualified, 16-bit Mil-Std-1750A computers, cards, multi-chip modules, and integrated circuits. The second phase of the program, the Advanced Technology Insertion Module (ATIM), was completed in December 1997. ATIM developed two single board computers based on 32-bit reduced instruction set computer (RISC) processors. GVSC, CPM, and ATIM technologies are flying or baselined into the majority of today's DoD, NASA, and commercial satellite systems.
Software Modules for the Proximity-1 Space Link Interleaved Time Synchronization (PITS) Protocol
NASA Technical Reports Server (NTRS)
Woo, Simon S.; Veregge, John R.; Gao, Jay L.; Clare, Loren P.; Mills, David
2012-01-01
The Proximity-1 Space Link Interleaved Time Synchronization (PITS) protocol provides time distribution and synchronization services for space systems. A software prototype implementation of the PITS algorithm has been developed that also provides the test harness to evaluate the key functionalities of PITS with simulated data source and sink. PITS integrates time synchronization functionality into the link layer of the CCSDS Proximity-1 Space Link Protocol. The software prototype implements the network packet format, data structures, and transmit- and receive-timestamp function for a time server and a client. The software also simulates the transmit and receive-time stamp exchanges via UDP (User Datagram Protocol) socket between a time server and a time client, and produces relative time offsets and delay estimates.
Compression of multispectral Landsat imagery using the Embedded Zerotree Wavelet (EZW) algorithm
NASA Technical Reports Server (NTRS)
Shapiro, Jerome M.; Martucci, Stephen A.; Czigler, Martin
1994-01-01
The Embedded Zerotree Wavelet (EZW) algorithm has proven to be an extremely efficient and flexible compression algorithm for low bit rate image coding. The embedding algorithm attempts to order the bits in the bit stream in numerical importance and thus a given code contains all lower rate encodings of the same algorithm. Therefore, precise bit rate control is achievable and a target rate or distortion metric can be met exactly. Furthermore, the technique is fully image adaptive. An algorithm for multispectral image compression which combines the spectral redundancy removal properties of the image-dependent Karhunen-Loeve Transform (KLT) with the efficiency, controllability, and adaptivity of the embedded zerotree wavelet algorithm is presented. Results are shown which illustrate the advantage of jointly encoding spectral components using the KLT and EZW.
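A rough sketch of the spectral decorrelation step (not the NASA implementation): the KLT basis is computed from the band-to-band covariance of the multispectral cube, and the resulting decorrelated eigen-bands are what would then be handed to a wavelet coder such as EZW.

```python
# Sketch: image-dependent spectral KLT for a multispectral cube.
import numpy as np

def spectral_klt(cube):
    """cube: array of shape (bands, rows, cols). Returns eigen-bands and basis."""
    bands, rows, cols = cube.shape
    pixels = cube.reshape(bands, rows * cols).astype(np.float64)
    pixels -= pixels.mean(axis=1, keepdims=True)
    covariance = pixels @ pixels.T / pixels.shape[1]   # band-to-band covariance
    eigenvalues, eigenvectors = np.linalg.eigh(covariance)
    order = np.argsort(eigenvalues)[::-1]              # strongest component first
    basis = eigenvectors[:, order]
    eigen_bands = (basis.T @ pixels).reshape(bands, rows, cols)
    return eigen_bands, basis

cube = np.random.default_rng(2).integers(0, 256, (6, 32, 32)).astype(np.float64)
eigen_bands, basis = spectral_klt(cube)
print("energy per eigen-band:", np.round(eigen_bands.var(axis=(1, 2)), 1))
```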
NASA Astrophysics Data System (ADS)
Wang, H.; Chen, H.; Chen, X.; Wu, Q.; Wang, Z.
2016-12-01
The Global Nested Air Quality Prediction Modeling System for Hg (GNAQPMS-Hg) is a global chemical transport model coupled with a Hg transport module to investigate mercury pollution. In this study, we present our work of porting the GNAQPMS model to the Intel Xeon Phi processor Knights Landing (KNL) to accelerate the model. KNL is the second-generation product adopting the Many Integrated Core (MIC) architecture. Compared with the first-generation Knights Corner (KNC), KNL has more new hardware features, and it can be used as a standalone processor as well as a coprocessor with another CPU. Using the VTune tool, the high-overhead modules in the GNAQPMS model were identified, including the CBMZ gas chemistry, the advection and convection module, and the wet deposition module. These high-overhead modules were accelerated by optimizing the code and using new features of KNL. The following optimization measures were taken: 1) changing the pure MPI parallel mode to a hybrid parallel mode with MPI and OpenMP; 2) vectorizing the code to use the 512-bit wide vector computation unit; 3) reducing unnecessary memory access and calculation; 4) reducing Thread Local Storage (TLS) for common variables within each OpenMP thread in CBMZ; 5) changing the global communication from file writing and reading to MPI functions. After optimization, the performance of GNAQPMS is greatly increased on both the CPU and KNL platforms: the single-node test showed that the optimized version has a 2.6x speedup on a two-socket CPU platform and a 3.3x speedup on a one-socket KNL platform compared with the baseline version of the code, which means KNL has a 1.29x speedup compared with the two-socket CPU platform.
Extending ALE3D, an Arbitrarily Connected hexahedral 3D Code, to Very Large Problem Size (U)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nichols, A L
2010-12-15
As the number of compute units increases on the ASC computers, the prospect of running previously unimaginably large problems is becoming a reality. In an arbitrarily connected 3D finite element code, like ALE3D, one must provide a unique identification number for every node, element, face, and edge. This is required for a number of reasons, including defining the global connectivity array required for domain decomposition, identifying appropriate communication patterns after domain decomposition, and determining the appropriate load locations for implicit solvers, for example. In most codes, the unique identification number is defined as a 32-bit integer. Thus the maximum value available is 2^31, or roughly 2.1 billion. For a 3D geometry consisting of arbitrarily connected hexahedral elements, there are approximately 3 faces for every element, and 3 edges for every node. Since the nodes and faces need id numbers, using 32-bit integers puts a hard limit on the number of elements in a problem at roughly 700 million. The first solution to this problem would be to replace 32-bit signed integers with 32-bit unsigned integers. This would increase the maximum size of a problem by a factor of 2. This provides some head room, but almost certainly not one that will last long. Another solution would be to replace all 32-bit int declarations with 64-bit long long declarations. (long is either a 32-bit or a 64-bit integer, depending on the OS). The problem with this approach is that there are only a few arrays that actually need the extended size, and thus this would increase the size of the problem unnecessarily. In a future computing environment where CPUs are abundant but memory relatively scarce, this is probably the wrong approach. Based on these considerations, we have chosen to replace only the global identifiers with the appropriate 64-bit integer. The problem with this approach is finding all the places where data that is specified as a 32-bit integer needs to be replaced with the 64-bit integer. In the rest of this paper we describe the techniques used to facilitate this transformation, issues raised, and issues still to be addressed. This poster will describe the reasons, methods, and issues associated with extending the ALE3D code to run problems larger than 700 million elements.
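The roughly 700-million-element ceiling quoted above follows directly from the face count (sketched here for clarity):

```latex
N_{\mathrm{faces}} \approx 3\,N_{\mathrm{elements}}
\quad\Rightarrow\quad
N_{\mathrm{elements}}^{\max} \approx \frac{2^{31}}{3} \approx \frac{2.15\times 10^{9}}{3} \approx 7\times 10^{8}.
```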
A novel multiple description scalable coding scheme for mobile wireless video transmission
NASA Astrophysics Data System (ADS)
Zheng, Haifeng; Yu, Lun; Chen, Chang Wen
2005-03-01
We propose in this paper a novel multiple description scalable coding (MDSC) scheme based on the in-band motion compensation temporal filtering (IBMCTF) technique in order to achieve high video coding performance and robust video transmission. The input video sequence is first split into equal-sized groups of frames (GOFs). Within a GOF, each frame is hierarchically decomposed by the discrete wavelet transform. Since there is a direct relationship between wavelet coefficients and what they represent in the image content after wavelet decomposition, we are able to reorganize the spatial orientation trees to generate multiple bit-streams and employ the SPIHT algorithm to achieve high coding efficiency. We have shown that multiple bit-stream transmission is very effective in combating error propagation in both Internet video streaming and mobile wireless video. Furthermore, we adopt the IBMCTF scheme to remove the redundancy of inter-frames along the temporal direction using motion compensated temporal filtering, so high coding performance and flexible scalability can be provided by this scheme. In order to make compressed video resilient to channel errors and to guarantee robust video transmission over mobile wireless channels, we add redundancy to each bit-stream and apply an error concealment strategy for lost motion vectors. Unlike traditional multiple description schemes, the integration of these techniques enables us to generate more than two bit-streams that may be more appropriate for multiple antenna transmission of compressed video. Simulation results on standard video sequences have shown that the proposed scheme provides a flexible tradeoff between coding efficiency and error resilience.
Efficient Bit-to-Symbol Likelihood Mappings
NASA Technical Reports Server (NTRS)
Moision, Bruce E.; Nakashima, Michael A.
2010-01-01
This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings that represent a significant portion of the complexity of an error-correction code decoder for high-order constellations. Recent implementation of the algorithm in hardware has yielded an 8-percent reduction in overall area relative to the prior design.
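The abstract does not disclose the algorithm itself, but the operation it streamlines can be made concrete with a brute-force max-log sketch of symbol-to-bit likelihood (LLR) mapping (illustrative constellation and labels; the names and parameters here are assumptions, not the NASA design):

```python
# Baseline (exhaustive) symbol-to-bit LLR mapping for an M-ary constellation,
# using the max-log approximation over all symbols whose label has bit b = 0
# versus bit b = 1.
import numpy as np

def bit_llrs(received, constellation, bit_labels, noise_var):
    """LLR of each of log2(M) bits given one received complex sample."""
    metrics = -np.abs(received - constellation) ** 2 / noise_var
    n_bits = bit_labels.shape[1]
    llrs = np.empty(n_bits)
    for b in range(n_bits):
        llrs[b] = metrics[bit_labels[:, b] == 0].max() - metrics[bit_labels[:, b] == 1].max()
    return llrs

# 4-point (QPSK) example with Gray labels; larger constellations work the same way.
constellation = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
bit_labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])
print(bit_llrs(0.9 + 0.8j, constellation, bit_labels, noise_var=0.5))
```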
Fast decoding techniques for extended single-and-double-error-correcting Reed Solomon codes
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Deng, H.; Lin, S.
1984-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K x 8 bit-bytes. Byte-oriented codes such as Reed Solomon (RS) codes provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special high-speed decoding techniques for extended single- and double-error-correcting RS codes are presented. These techniques are designed to find the error locations and the error values directly from the syndrome without having to form the error locator polynomial and solve for its roots.
VLSI design of an RSA encryption/decryption chip using systolic array based architecture
NASA Astrophysics Data System (ADS)
Sun, Chi-Chia; Lin, Bor-Shing; Jan, Gene Eu; Lin, Jheng-Yi
2016-09-01
This article presents the VLSI design of a configurable RSA public-key cryptosystem supporting 512-bit, 1024-bit and 2048-bit keys based on the Montgomery algorithm, achieving clock cycle counts comparable to current relevant works but with a smaller die size. We use the binary method for the modular exponentiation and adopt the Montgomery algorithm for the modular multiplication to simplify the computational complexity, which, together with the systolic array concept for the circuit design, effectively lowers the die size. The main architecture of the chip consists of four functional blocks, namely the input/output modules, registers module, arithmetic module and control module. We applied the concept of systolic arrays to design the RSA encryption/decryption chip using the VHDL hardware language and verified it using the TSMC/CIC 0.35 μm 1P4M technology. The die area of the 2048-bit RSA chip without the DFT is 3.9 × 3.9 mm2 (4.58 × 4.58 mm2 with DFT). Its average baud rate can reach 10.84 kbps under a 100 MHz clock.
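A compact software sketch of the two building blocks named in the abstract, Montgomery reduction for the modular multiplication and the left-to-right binary method for the exponentiation (toy parameters for the self-check; the chip's actual systolic-array datapath is of course different):

```python
# Montgomery modular multiplication plus binary (square-and-multiply)
# exponentiation; requires an odd modulus smaller than R = 2**word_bits.
def montgomery_setup(modulus, word_bits):
    r = 1 << word_bits
    r_inv = pow(r, -1, modulus)                   # Python 3.8+ modular inverse
    n_prime = (r * r_inv - 1) // modulus          # satisfies r*r_inv - n*n_prime = 1
    return r, n_prime

def mont_mul(a, b, modulus, r, n_prime, word_bits):
    """Montgomery product a*b*R^-1 mod n, for a, b already in Montgomery form."""
    t = a * b
    m = (t * n_prime) % r
    u = (t + m * modulus) >> word_bits
    return u - modulus if u >= modulus else u

def mod_exp(base, exponent, modulus, word_bits=2048):
    r, n_prime = montgomery_setup(modulus, word_bits)
    base_m = (base * r) % modulus                 # operand in Montgomery form
    result = r % modulus                          # Montgomery form of 1
    for bit in bin(exponent)[2:]:                 # binary method, MSB first
        result = mont_mul(result, result, modulus, r, n_prime, word_bits)
        if bit == "1":
            result = mont_mul(result, base_m, modulus, r, n_prime, word_bits)
    return mont_mul(result, 1, modulus, r, n_prime, word_bits)  # leave Montgomery form

# Tiny self-check against Python's built-in pow (toy odd modulus).
print(mod_exp(65537, 123457, 999999000001), pow(65537, 123457, 999999000001))
```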
Preserving privacy of online digital physiological signals using blind and reversible steganography.
Shiu, Hung-Jr; Lin, Bor-Sing; Huang, Chien-Hung; Chiang, Pei-Ying; Lei, Chin-Laung
2017-11-01
Physiological signals such as electrocardiograms (ECG) and electromyograms (EMG) are widely used to diagnose diseases. Presently, the Internet offers numerous cloud storage services which enable digital physiological signals to be uploaded for convenient access and use. Numerous online databases of medical signals have been built. The data in them must be processed in a manner that preserves patients' confidentiality. A reversible error-correcting-coding strategy is adopted to transform digital physiological signals into a new bit-stream, using a matrix based on the Hamming code to embed secret messages or private information. The shared keys are the matrix and the version of the Hamming code. An online open database, the MIT-BIH arrhythmia database, was used to test the proposed algorithms. The time complexity, capacity and robustness are evaluated. Comparisons with several related works are also presented. This work proposes a reversible, low-payload steganographic scheme for preserving the privacy of physiological signals. An (n, m)-Hamming code is used to insert (n - m) secret bits into n bits of a cover signal. The number of embedded bits per modification is higher than in comparable methods, the computation is efficient, and the scheme is secure. Unlike other Hamming-code based schemes, the proposed scheme is both reversible and blind. Copyright © 2017 Elsevier B.V. All rights reserved.
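The (n, m)-Hamming embedding can be illustrated with the (7, 4) case (a minimal sketch under assumed conventions, not the paper's exact algorithm): three secret bits are hidden in seven cover bits by flipping at most one of them so that the stego syndrome equals the secret. Extraction is blind because it needs only the parity-check matrix; the paper's reversibility mechanism is not reproduced here.

```python
# Hamming matrix embedding with the (7, 4) parity-check matrix: embed 3 secret
# bits into 7 cover bits (e.g., sample LSBs) with at most one bit flip.
import numpy as np

H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])            # column j is j written in binary

def embed(cover7, secret3):
    syndrome = (H @ cover7) % 2
    diff = syndrome ^ secret3
    stego = cover7.copy()
    position = int("".join(map(str, diff)), 2)    # 0 means "no change needed"
    if position:
        stego[position - 1] ^= 1                  # flip the single indicated bit
    return stego

def extract(stego7):
    return (H @ stego7) % 2                       # syndrome = embedded secret

cover = np.array([1, 0, 1, 1, 0, 0, 1])
secret = np.array([1, 1, 0])
stego = embed(cover, secret)
print(extract(stego), "differs from cover in", int(np.sum(stego != cover)), "bit(s)")
```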
NASA Technical Reports Server (NTRS)
Campbell, Joel F.; Lin, Bing; Nehrir, Amin R.
2014-01-01
NASA Langley Research Center in collaboration with ITT Exelis have been experimenting with Continuous Wave (CW) laser absorption spectrometer (LAS) as a means of performing atmospheric CO2 column measurements from space to support the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission. Because range resolving Intensity Modulated (IM) CW lidar techniques presented here rely on matched filter correlations, autocorrelation properties without side lobes or other artifacts are highly desirable since the autocorrelation function is critical for the measurements of lidar return powers, laser path lengths, and CO2 column amounts. In this paper modulation techniques are investigated that improve autocorrelation properties. The modulation techniques investigated in this paper include sine waves modulated by maximum length (ML) sequences in various hardware configurations. A CW lidar system using sine waves modulated by ML pseudo random noise codes is described, which uses a time shifting approach to separate channels and make multiple, simultaneous online/offline differential absorption measurements. Unlike the pure ML sequence, this technique is useful in hardware that is band-pass filtered, as the IM sine wave carrier shifts the main power band. Both amplitude- and Phase Shift Keying (PSK)-modulated IM carriers are investigated that exhibit perfect autocorrelation properties down to one cycle per code bit. In addition, a method is presented to bandwidth limit the ML sequence based on a Gaussian filter implemented in terms of Jacobi theta functions that does not seriously degrade the resolution or introduce side lobes, as a means of reducing aliasing and IM carrier bandwidth.
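A short sketch of the maximum-length (ML) sequence property the matched-filter approach relies on (illustrative register length and taps, not the flight configuration): an LFSR of degree n produces a sequence of period 2^n - 1 whose circular autocorrelation is N at zero lag and -1 at every other lag, which is what keeps the correlation-based range estimate free of sidelobes.

```python
# Generate a +/-1 maximum-length sequence from an LFSR and check its
# periodic autocorrelation (degree 5, so period 31).
import numpy as np

def ml_sequence(taps=(5, 2), n_bits=5, seed=1):
    state = [(seed >> i) & 1 for i in range(n_bits)]
    out = []
    for _ in range((1 << n_bits) - 1):
        out.append(state[-1])
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]           # shift register update
    return 2 * np.array(out) - 1                  # map {0,1} -> {-1,+1}

seq = ml_sequence()
autocorr = np.array([int(np.dot(seq, np.roll(seq, k))) for k in range(len(seq))])
print(autocorr[:8])                               # 31 at lag 0, -1 elsewhere
```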
LED-based high-speed visible light communications
NASA Astrophysics Data System (ADS)
Chi, Nan; Shi, Meng; Zhao, Yiheng; Wang, Fumin; Shi, Jianyang; Zhou, Yingjun; Lu, Xingyu; Qiao, Liang
2018-01-01
We are seeing a growing use of light emitting diodes (LEDs) in a range of applications including lighting, TV and screen backlighting, displays, etc. In comparison with traditional incandescent and fluorescent light bulbs, LEDs offer a long life-span, much higher energy efficiency, a high performance-to-cost ratio and, above all, very fast switching capability. LED-based Visible Light Communications (VLC) is an emerging field of optical communications that focuses on the part of the electromagnetic spectrum that humans can see. Depending on the transmission distance, we can divide the whole optical network into two categories, long haul and short haul. Visible light communication can be a promising candidate for short-haul applications. In this paper, we outline the configuration of VLC and its unique benefits, and describe state-of-the-art research contributions consisting of advanced modulation formats including adaptive bit-loading OFDM, carrierless amplitude and phase (CAP), pulse amplitude modulation (PAM) and single-carrier Nyquist modulation, linear equalization and nonlinear distortion mitigation based on machine learning, quasi-balanced coding and phase-shifted Manchester coding. These enabling technologies can support VLC up to 10 Gb/s-class free-space transmission.
CoNNeCT Baseband Processor Module Boot Code SoftWare (BCSW)
NASA Technical Reports Server (NTRS)
Yamamoto, Clifford K.; Orozco, David S.; Byrne, D. J.; Allen, Steven J.; Sahasrabudhe, Adit; Lang, Minh
2012-01-01
This software provides essential startup and initialization routines for the CoNNeCT baseband processor module (BPM) hardware upon power-up. A command and data handling (C&DH) interface is provided via 1553 and diagnostic serial interfaces to invoke operational, reconfiguration, and test commands within the code. The BCSW has features unique to the hardware it is responsible for managing. In this case, the CoNNeCT BPM is configured with an updated CPU (Atmel AT697 SPARC processor) and a unique set of memory and I/O peripherals that require customized software to operate. These features include configuration of new AT697 registers, interfacing to a new HouseKeeper with a flash controller interface, a new dual Xilinx configuration/scrub interface, and an updated 1553 remote terminal (RT) core. The BCSW is intended to provide a "safe" mode for the BPM when initially powered on or when an unexpected trap occurs, causing the processor to reset. The BCSW allows the 1553 bus controller in the spacecraft or payload controller to operate the BPM over 1553 to upload code; upload Xilinx bit files; perform rudimentary tests; read, write, and copy the non-volatile flash memory; and configure the Xilinx interface. Commands also exist over 1553 to cause the CPU to jump or call a specified address to begin execution of user-supplied code. This may be in the form of a real-time operating system, test routine, or specific application code to run on the BPM.
Direct bit detection receiver noise performance analysis for 32-PSK and 64-PSK modulated signals
NASA Astrophysics Data System (ADS)
Ahmed, Iftikhar
1987-12-01
Simple two-channel receivers for 32-PSK and 64-PSK modulated signals have been proposed which allow digital data (namely bits) to be recovered directly, instead of the traditional approach of symbol detection followed by symbol-to-bit mappings. This allows for binary rather than M-ary receiver decisions, reduces the amount of signal processing operations and permits parallel recovery of the bits. The noise performance of these receivers, quantified by the Bit Error Rate (BER) under an Additive White Gaussian Noise interference model, is evaluated as a function of Eb/No, the signal-to-noise ratio, and the transmitted phase angles of the signals. The performance results of the direct bit detection receivers (DBDR), when compared to those of conventional phase measurement receivers, demonstrate that DBDRs are optimum in the BER sense. The simplicity of the receiver implementations and the BER of the delivered data make DBDRs attractive for high-speed, spectrally efficient digital communication systems.
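For context only, the snippet below evaluates the standard Gray-coded M-PSK bit-error-rate approximation for a conventional symbol-detection receiver on an AWGN channel; it is a baseline sketch (it does not model the DBDR structure itself), and the Eb/N0 values are arbitrary.

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mpsk_ber_approx(ebno_db, M):
    """Approximate BER of Gray-coded M-PSK with conventional detection on AWGN."""
    k = math.log2(M)                          # bits per symbol
    ebno = 10 ** (ebno_db / 10)               # Eb/N0 as a linear ratio
    return (2.0 / k) * qfunc(math.sqrt(2 * k * ebno) * math.sin(math.pi / M))

for M in (32, 64):
    print(M, [f"{mpsk_ber_approx(x, M):.2e}" for x in (15, 20, 25)])
```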
MacWilliams Identity for M-Spotty Weight Enumerator
NASA Astrophysics Data System (ADS)
Suzuki, Kazuyoshi; Fujiwara, Eiji
M-spotty byte error control codes are very effective for correcting/detecting errors in semiconductor memory systems that employ recent high-density RAM chips with wide I/O data (e.g., 8, 16, or 32 bits). In this case, the width of the I/O data is one byte. A spotty byte error is defined as random t-bit errors within a byte of length b bits, where 1 ≤ t ≤ b. Then, an error is called an m-spotty byte error if at least one spotty byte error is present in a byte. M-spotty byte error control codes are characterized by the m-spotty distance, which includes the Hamming distance as a special case for t = 1 or t = b. The MacWilliams identity provides the relationship between the weight distribution of a code and that of its dual code. The present paper presents the MacWilliams identity for the m-spotty weight enumerator of m-spotty byte error control codes. In addition, the present paper clarifies that the indicated identity includes the MacWilliams identity for the Hamming weight enumerator as a special case.
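For reference, the Hamming-weight special case alluded to above is the classical MacWilliams identity for a binary linear code C with dual code C-perp:

```latex
% Hamming-weight MacWilliams identity for a binary linear [n,k] code C,
% with weight enumerator W_C(x,y) = \sum_{c \in C} x^{n-w(c)} y^{w(c)}:
W_{C^{\perp}}(x, y) \;=\; \frac{1}{|C|}\, W_C\!\left(x + y,\; x - y\right)
```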
Liu, Jinpeng; Horimai, Hideyoshi; Lin, Xiao; Huang, Yong; Tan, Xiaodi
2018-02-19
A novel phase modulation method for holographic data storage with phase-retrieval reference beam locking is proposed and incorporated into an amplitude-encoding collinear holographic storage system. Unlike the conventional phase retrieval method, the proposed method locks the data page and the corresponding phase-retrieval interference beam together at the same location with a sequential recording process, which eliminates piezoelectric elements, phase shift arrays and extra interference beams, making the system more compact and phase retrieval easier. To evaluate our proposed phase modulation method, we recorded and then recovered data pages with multilevel phase modulation using two spatial light modulators experimentally. For 4-level, 8-level, and 16-level phase modulation, we achieved bit error rates (BER) of 0.3%, 1.5% and 6.6%, respectively. To further improve data storage density, an orthogonal reference encoding multiplexing method at the same position of the medium is also proposed and validated experimentally. We increased the code rate of the pure 3/16 amplitude encoding method from 0.5 up to 1.0 and 1.5 using 4-level and 8-level phase modulation, respectively.
Achieving unequal error protection with convolutional codes
NASA Technical Reports Server (NTRS)
Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.
1994-01-01
This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
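The modified transfer function bound itself is not reproduced in the abstract; for orientation, the classical transfer-function union bound for a rate k/n convolutional code with soft-decision decoding on a binary-input AWGN channel (using the Chernoff bound on the pairwise error probability) has the form shown below, and the UEP analysis replaces the single derivative with one per input bit position.

```latex
% T(D, N) is the code transfer function, k the number of input bits per branch,
% R = k/n the code rate, and E_b/N_0 the per-bit SNR.
P_b \;\le\; \frac{1}{k}\,
  \left.\frac{\partial T(D, N)}{\partial N}\right|_{N = 1,\; D = e^{-R E_b / N_0}}
```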
Wavelet-based scalable L-infinity-oriented compression.
Alecu, Alin; Munteanu, Adrian; Cornelis, Jan P H; Schelkens, Peter
2006-09-01
Among the different classes of coding techniques proposed in the literature, predictive schemes have proven their outstanding performance in near-lossless compression. However, these schemes are incapable of providing embedded L-infinity-oriented compression, or, at most, provide a very limited number of potential L-infinity bit-stream truncation points. We propose a new multidimensional wavelet-based L-infinity-constrained scalable coding framework that generates a fully embedded L-infinity-oriented bit stream and that retains the coding performance and all the scalability options of state-of-the-art L2-oriented wavelet codecs. Moreover, our codec instantiation of the proposed framework clearly outperforms JPEG2000 in the L-infinity coding sense.
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Kwok, R.; Curlander, J. C.
1987-01-01
Five coding techniques in the spatial and transform domains have been evaluated for SAR image compression: linear three-point predictor (LTPP), block truncation coding (BTC), microadaptive picture sequencing (MAPS), adaptive discrete cosine transform (ADCT), and adaptive Hadamard transform (AHT). These techniques have been tested with Seasat data. Both LTPP and BTC spatial domain coding techniques provide very good performance at rates of 1-2 bits/pixel. The two transform techniques, ADCT and AHT, demonstrate the capability to compress the SAR imagery to less than 0.5 bits/pixel without visible artifacts. Tradeoffs such as the rate distortion performance, the computational complexity, the algorithm flexibility, and the controllability of compression ratios are also discussed.
Low-noise delays from dynamic Brillouin gratings based on perfect Golomb coding of pump waves.
Antman, Yair; Levanon, Nadav; Zadok, Avi
2012-12-15
A method for long variable all-optical delay is proposed and simulated, based on reflections from localized and stationary dynamic Brillouin gratings (DBGs). Inspired by radar methods, the DBGs are inscribed by two pumps that are comodulated by perfect Golomb codes, which reduce the off-peak reflectivity. Compared with random bit sequence coding, Golomb codes improve the optical signal-to-noise ratio (OSNR) of delayed waveforms by an order of magnitude. Simulations suggest a delay of 5 Gb/s data by 9 ns, or 45 bit durations, with an OSNR of 13 dB.
NASA Astrophysics Data System (ADS)
Pei, Yong; Modestino, James W.
2004-12-01
Digital video delivered over wired-to-wireless networks is expected to suffer quality degradation from both packet loss and bit errors in the payload. In this paper, the quality degradation due to packet loss and bit errors in the payload is quantitatively evaluated and their effects are assessed. We propose the use of a concatenated forward error correction (FEC) coding scheme employing Reed-Solomon (RS) codes and rate-compatible punctured convolutional (RCPC) codes to protect the video data from packet loss and bit errors, respectively. Furthermore, the performance of a joint source-channel coding (JSCC) approach employing this concatenated FEC coding scheme for video transmission is studied. Finally, we describe an improved end-to-end architecture using an edge proxy in a mobile support station to implement differential error protection for the corresponding channel impairments expected on the two networks. Results indicate that with an appropriate JSCC approach and the use of an edge proxy, FEC-based error-control techniques together with passive error-recovery techniques can significantly improve the effective video throughput and lead to acceptable video delivery quality over time-varying heterogeneous wired-to-wireless IP networks.
Injecting Errors for Testing Built-In Test Software
NASA Technical Reports Server (NTRS)
Gender, Thomas K.; Chow, James
2010-01-01
Two algorithms have been conceived to enable automated, thorough testing of built-in test (BIT) software. The first algorithm applies to BIT routines that define pass/fail criteria based on values of data read from such hardware devices as memories, input ports, or registers. This algorithm simulates the effects of errors in a device under test by (1) intercepting data from the device and (2) performing AND operations between the data and the data mask specific to the device. This operation yields values not expected by the BIT routine. This algorithm entails very small, permanent instrumentation of the software under test (SUT) for performing the AND operations. The second algorithm applies to BIT programs that provide services to user application programs via commands or callable interfaces and requires a capability for test-driver software to read and write the memory used in execution of the SUT. This algorithm identifies all SUT code execution addresses where errors are to be injected, then temporarily replaces the code at those addresses with small test code sequences to inject latent severe errors, then determines whether, as desired, the SUT detects the errors and recovers.
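A minimal sketch of the first algorithm's idea, written in Python rather than the (unspecified) flight-software language: device reads are intercepted and the returned word is ANDed with a device-specific mask so the BIT routine sees a value it does not expect. The names read_device, the mask values, and the data are hypothetical.

```python
# Hypothetical error-injection wrapper: AND intercepted device data with a
# device-specific mask so the Built-In Test (BIT) routine sees unexpected values.
ERROR_MASKS = {
    "status_register": 0xFFFE,   # force bit 0 low
    "memory_word":     0x00FF,   # zero the upper byte
}

INJECT = {"status_register": True, "memory_word": False}

def read_device(name):
    """Stand-in for the real hardware read (memory, input port, or register)."""
    return {"status_register": 0xA5A5, "memory_word": 0x1234}[name]

def read_device_with_injection(name):
    value = read_device(name)
    if INJECT.get(name):                 # small, permanent instrumentation point
        value &= ERROR_MASKS[name]       # corrupt the data the BIT routine checks
    return value

print(hex(read_device_with_injection("status_register")))   # 0xa5a4 instead of 0xa5a5
```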
Self-recovery fragile watermarking algorithm based on SPIHT
NASA Astrophysics Data System (ADS)
Xin, Li Ping
2015-12-01
A fragile watermarking algorithm based on SPIHT coding is proposed, which can recover the primary image itself. The novelty of the algorithm is that it can perform tamper localization and self-restoration, and the recovery effect is very good. First, utilizing the zero-tree structure, the algorithm compresses and encodes the image itself to obtain self-correlated watermark data, so as to greatly reduce the quantity of embedded watermark data. Then the watermark data is encoded by an error correcting code, and the check bits and watermark bits are scrambled and embedded to enhance the recovery ability. At the same time, by embedding the watermark into the two least significant bit-planes of the gray-level image, the watermarked image gains a nicer visual effect. The experimental results show that the proposed algorithm can not only detect various processing such as noise addition, cropping, and filtering, but also recover the tampered image and realize blind detection. Peak signal-to-noise ratios of the watermarked image were higher than those of other similar algorithms. The algorithm's resistance to attacks is also enhanced.
LSB-Based Steganography Using Reflected Gray Code
NASA Astrophysics Data System (ADS)
Chen, Chang-Chu; Chang, Chin-Chen
Steganography aims to hide secret data in an innocuous cover-medium for transmission, so that an attacker cannot easily recognize the presence of the secret data. Even if the stego-medium is captured by an eavesdropper, the slight distortion is hard to detect. LSB-based data hiding is one of the steganographic methods, used to embed the secret data into the least significant bits of the pixel values in a cover image. In this paper, we propose an LSB-based scheme using the reflected Gray code, which can be applied to determine the embedded bit from the secret information. Following the transforming rule, the LSBs of the stego-image are not always equal to the secret bits, and the experiment shows that the differences are up to almost 50%. According to the mathematical deduction and experimental results, the proposed scheme has the same image quality and payload as the simple LSB substitution scheme. In fact, our proposed data hiding scheme in the case of the G1 (one-bit Gray code) system is equivalent to the simple LSB substitution scheme.
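One plausible reading of the transforming rule, sketched below in Python, is that the embedded bit is carried by the least significant bit of the pixel's reflected Gray code g = v XOR (v >> 1); embedding may then leave the pixel LSB unequal to the secret bit while still changing the pixel by at most one gray level. This is a hedged illustration of the idea, not necessarily the authors' exact rule.

```python
def gray(v):
    """Reflected (binary-reflected) Gray code of integer v."""
    return v ^ (v >> 1)

def embed_bit(pixel, secret_bit):
    """Adjust the pixel LSB so the Gray-code LSB of the pixel equals the secret bit."""
    if (gray(pixel) & 1) != secret_bit:
        pixel ^= 1                      # flip the pixel's least significant bit
    return pixel

def extract_bit(pixel):
    return gray(pixel) & 1

cover = [150, 151, 152, 153]
secret = [1, 0, 1, 1]
stego = [embed_bit(p, s) for p, s in zip(cover, secret)]
assert [extract_bit(p) for p in stego] == secret
print(cover, stego)   # distortion is at most one gray level per pixel, as with plain LSB
```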
Maximum likelihood decoding analysis of Accumulate-Repeat-Accumulate Codes
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
Repeat-Accumulate (RA) codes are the simplest turbo-like codes that achieve good performance. However, they cannot compete with turbo codes or low-density parity check (LDPC) codes as far as performance is concerned. The Accumulate-Repeat-Accumulate (ARA) codes, as a subclass of LDPC codes, are obtained by adding a precoder in front of RA codes with puncturing, where an accumulator is chosen as the precoder. These codes not only are very simple, but also achieve excellent performance with iterative decoding. In this paper, the performance of these codes with maximum-likelihood (ML) decoding is analyzed and compared to random codes by very tight bounds. The weight distribution of some simple ARA codes is obtained, and using the tightest existing bounds we show that the ML SNR threshold of ARA codes approaches the performance of random codes very closely. We show that the use of the precoder improves the SNR threshold, but the interleaving gain remains unchanged with respect to the RA code with puncturing.
Performance of DBS-Radio using concatenated coding and equalization
NASA Technical Reports Server (NTRS)
Gevargiz, J.; Bell, D.; Truong, L.; Vaisnys, A.; Suwitra, K.; Henson, P.
1995-01-01
The Direct Broadcast Satellite-Radio (DBS-R) receiver is being developed for operation in a multipath Rayleigh channel. This receiver uses equalization and concatenated coding, in addition to open loop and closed loop architectures for carrier demodulation and symbol synchronization. Performance test results of this receiver are presented in both AWGN and multipath Rayleigh channels. Simulation results show that the performance of the receiver operating in a multipath Rayleigh channel is significantly improved by using equalization. These results show that fractional-symbol equalization offers a performance advantage over full symbol equalization. Also presented is the base-line performance of the DBS-R receiver using concatenated coding and interleaving.
NASA Technical Reports Server (NTRS)
Schilling, D. L.
1971-01-01
The conclusions of the design research on the Song adaptive delta modulator for source encoding voice signals are presented. The variation of output SNR versus input signal power is given for 8-, 9-, and 10-bit internal arithmetic. Voice intelligibility tapes are used to test the 10-bit system. An analysis is also presented of a delta modulator designed to minimize the in-band rms error. This is accomplished by frequency shaping the error signal in the modulator prior to hard limiting. The result is a significant increase in the output SNR measured after low pass filtering.
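As a minimal sketch of the encode/decode loop being analyzed (a basic linear delta modulator, not the Song adaptive algorithm itself), the following Python fragment transmits one sign bit per sample and reconstructs the signal by integrating the bits; the step size and test signal are arbitrary.

```python
import math

def delta_modulate(samples, step=0.1):
    """Basic linear delta modulator: one bit per sample, the sign of the error
    between the input and the running estimate."""
    bits, estimate = [], 0.0
    for x in samples:
        bit = 1 if x >= estimate else 0
        estimate += step if bit else -step     # the decoder tracks the same estimate
        bits.append(bit)
    return bits

def delta_demodulate(bits, step=0.1):
    estimate, out = 0.0, []
    for bit in bits:
        estimate += step if bit else -step
        out.append(estimate)
    return out

sig = [math.sin(2 * math.pi * 0.01 * n) for n in range(200)]
rec = delta_demodulate(delta_modulate(sig))
print(max(abs(a - b) for a, b in zip(sig, rec)))   # small granular tracking error
```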
NASA Astrophysics Data System (ADS)
Bai, Yang; Chen, Shufen; Fu, Li; Fang, Wei; Lu, Junjun
2005-01-01
A high-bit-rate (more than 10 Gbit/s) optical pulse generation device is the key to achieving a high-speed, broadband optical fiber communication network system. Here, we propose a novel high-speed optical transmission module (TM) consisting of a Ti:Er:LiNbO3 waveguide laser and a Mach-Zehnder-type encoding modulator on the same Er-doped substrate. According to the ITU-T standard, we design the 10 Gbit/s transmission module at 1.53 μm on a Z-cut, Y-propagation LiNbO3 slice. A dynamic model and the corresponding numerical code are used to analyze the waveguide laser, while the electrooptic effect is used to design the modulator. Meanwhile, the working principle, key technology, and typical characteristic parameters of the module are given. The transmission module has a high extinction ratio and a low driving voltage, which supplies an efficient, miniaturized light source for wavelength division multiplexing (WDM) systems. In addition, the relation of the laser gain to the cavity parameters, as well as the relation of the electrooptic modulator bandwidth to some key factors, are discussed. The designed module structure is simulated with BPM software and HFSS software.
The 2.5 bit/detected photon demonstration program: Phase 2 and 3 experimental results
NASA Technical Reports Server (NTRS)
Katz, J.
1982-01-01
The experimental program for laboratory demonstration of an energy-efficient optical communication channel operating at a rate of 2.5 bits/detected photon is described. Results of the uncoded PPM channel performance are presented. It is indicated that the throughput efficiency can be achieved not only with a Reed-Solomon code, as originally predicted, but with a less complex code as well.
Design of a 0.13-μm CMOS cascade expandable ΣΔ modulator for multi-standard RF telecom systems
NASA Astrophysics Data System (ADS)
Morgado, Alonso; del Río, Rocío; de la Rosa, José M.
2007-05-01
This paper reports a 130-nm CMOS programmable cascade ΣΔ modulator for multi-standard wireless terminals, capable of operating on three standards: GSM, Bluetooth and UMTS. The modulator is reconfigured at both architecture- and circuit- level in order to adapt its performance to the different standards specifications with optimized power consumption. The design of the building blocks is based upon a top-down CAD methodology that combines simulation and statistical optimization at different levels of the system hierarchy. Transistor-level simulations show correct operation for all standards, featuring 13-bit, 11.3-bit and 9-bit effective resolution within 200-kHz, 1-MHz and 4-MHz bandwidth, respectively.
A novel design of optical CDMA system based on TCM and FFH
NASA Astrophysics Data System (ADS)
Fang, Jun-Bin; Xu, Zhi-Hai; Huang, Hong-bin; Zheng, Liming; Chen, Shun-er; Liu, Wei-ping
2005-02-01
For application in a Passive Optical Network (PON), a novel OCDMA system design is proposed in this paper. There are two key components in this scheme: a new kind of OCDMA encoder/decoder system based on TCM and FFH, and an improved Optical Line Terminal (OLT) receiving system with improved anti-interference performance through the use of a Long Period Fiber Grating (LPFG). In the encoder/decoder system, a Trellis Coded Modulation (TCM) encoder is applied in front of the FFH modulator. The original signal is first encoded by the TCM encoder, and then the redundant code out of the TCM encoder is mapped into one of the FFH modulation signal subsets for transmission. On the receiver (decoder) side, the transmitted signal is demodulated through FFH and decoded by a trellis decoder. Because TCM provides high coding gain without increasing the transmission bandwidth or reducing the transmission speed, it is utilized to improve bit error performance and reduce multi-user interference. In the OLT receiving system, an EDFA and an LPFG are placed in front of the decoder to obtain excellent gain flatness over a large bandwidth, and an Optical Hard Limiter (OHL) is also deployed to improve detection performance, through which the anti-interference performance of the receiving system can be greatly enhanced. In addition, software simulation of the system performance is used for further analysis and validation. The related work in this paper provides a valuable reference for further research.
Optical signal processing for a smart vehicle lighting system using a-SiCH technology
NASA Astrophysics Data System (ADS)
Vieira, M. A.; Vieira, M.; Vieira, P.; Louro, P.
2017-05-01
We propose the use of Visible Light Communication (VLC) for vehicle safety applications, creating a smart vehicle lighting system that combines the functions of illumination and signaling, communications, and positioning. The feasibility of VLC is demonstrated by employing trichromatic Red-Green-Blue (RGB) LEDs as transmitters, since they offer the possibility of Wavelength Division Multiplexing (WDM), which can greatly increase the transmission data rate, with SiC double p-i-n receivers used to encode/decode the information. The trichromatic RGB Light Emitting Diodes (RGB-LEDs) are used together for illumination purposes (headlamps) and individually, chip by chip, to transmit the driving range distance and data information. An on-off code is used to transmit the data. Free space is the transmission medium. The receivers consist of two stacked amorphous a-SiC:H cells. They combine the simultaneous demultiplexing operation with photodetection and self-amplification. The proposed coding is based on SiC technology. A Multiple Input Multiple Output (MIMO) architecture is used. For data transmission, we propose the use of two headlights based on commercially available modulated white RGB-LEDs. For data receiving and decoding, we use three a-SiC:H double pin/pin optical processors symmetrically distributed at the vehicle tail. Moreover, we present a way to achieve vehicular communication using the parity bits. A representation with a 4-bit original color message string and the transmitted 7-bit string, the processes for encoding and decoding accurate positional information, and the design of the SiC navigation system are discussed and tested. A visible multilateration method estimates the driving distance range by using the decoded information received from several non-collinear transmitters.
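The 4-bit message and 7-bit transmitted string with parity bits suggest a Hamming(7,4)-style code; the sketch below shows a standard Hamming(7,4) encoder and single-error-correcting decoder purely as an illustration of how such parity bits can protect the positional information, without claiming it is the exact code used in this system.

```python
def hamming74_encode(d):
    """Encode 4 data bits d = [d1, d2, d3, d4] into 7 bits [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct a single bit error and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3          # 1-based position of the erroneous bit
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode([1, 0, 1, 1])
word[5] ^= 1                                  # inject a single-bit channel error
assert hamming74_decode(word) == [1, 0, 1, 1]
```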
Distributed polar-coded OFDM based on Plotkin's construction for half duplex wireless communication
NASA Astrophysics Data System (ADS)
Umar, Rahim; Yang, Fengfan; Mughal, Shoaib; Xu, HongJun
2018-07-01
A Plotkin-based polar-coded orthogonal frequency division multiplexing (P-PC-OFDM) scheme is proposed and its bit error rate (BER) performance over additive white Gaussian noise (AWGN), frequency-selective Rayleigh, Rician and Nakagami-m fading channels is evaluated. The considered Plotkin construction possesses a parallel split in its structure, which motivated us to extend the proposed P-PC-OFDM scheme to a coded cooperative scenario. Since the relay's effective collaboration has always been pivotal in the design of cooperative communication, an efficient selection criterion for choosing the information bits has been incorporated at the relay node. To assess the BER performance of the proposed cooperative scheme, we have also upgraded the conventional polar-coded cooperative scheme in the context of OFDM as an appropriate benchmark. The Monte Carlo simulation results reveal that the proposed Plotkin-based polar-coded cooperative OFDM scheme convincingly outperforms the conventional polar-coded cooperative OFDM scheme by 0.5-0.6 dB over the AWGN channel. This prominent gain in BER performance is made possible by the bit-selection criterion and the joint successive cancellation decoding adopted at the relay and the destination nodes, respectively. Furthermore, the proposed coded cooperative schemes outperform their corresponding non-cooperative schemes by a gain of 1 dB under identical conditions.
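For readers unfamiliar with it, Plotkin's (u, u+v) construction referred to above combines two length-n codes into one length-2n code with the parallel split exploited in the cooperative scheme; in standard form:

```latex
% Given C_1 with parameters (n, k_1, d_1) and C_2 with (n, k_2, d_2),
% the Plotkin (u | u+v) construction yields a (2n, k_1 + k_2, \min(2d_1, d_2)) code.
C \;=\; \bigl\{\, (\,u \;\|\; u \oplus v\,) \;:\; u \in C_1,\ v \in C_2 \,\bigr\},
\qquad
d_{\min}(C) \;=\; \min\!\left(2 d_1,\; d_2\right)
```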
Fast Exact Search in Hamming Space With Multi-Index Hashing.
Norouzi, Mohammad; Punjani, Ali; Fleet, David J
2014-06-01
There is growing interest in representing image data and feature descriptors using compact binary codes for fast near neighbor search. Although binary codes are motivated by their use as direct indices (addresses) into a hash table, codes longer than 32 bits are not being used as such, as this was thought to be ineffective. We introduce a rigorous way to build multiple hash tables on binary code substrings that enables exact k-nearest neighbor search in Hamming space. The approach is storage efficient and straightforward to implement. Theoretical analysis shows that the algorithm exhibits sub-linear run-time behavior for uniformly distributed codes. Empirical results show dramatic speedups over a linear scan baseline for datasets of up to one billion codes of 64, 128, or 256 bits.
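A compact sketch of the substring-based multi-index idea (not the authors' optimized implementation): each code is split into m disjoint substrings, each indexed in its own hash table; by the pigeonhole principle, any code within Hamming distance r of the query agrees with it to within floor(r/m) in at least one substring, so only those buckets need to be probed and verified. All sizes below are toy values.

```python
from itertools import combinations
from collections import defaultdict

def split(code, nbits, m):
    """Split an nbits-wide integer code into m disjoint substrings (as ints)."""
    w = nbits // m
    return [(code >> (i * w)) & ((1 << w) - 1) for i in range(m)]

def build_tables(codes, nbits, m):
    tables = [defaultdict(list) for _ in range(m)]
    for idx, c in enumerate(codes):
        for i, sub in enumerate(split(c, nbits, m)):
            tables[i][sub].append(idx)
    return tables

def within(sub, radius, width):
    """All substrings within Hamming distance `radius` of `sub`."""
    for d in range(radius + 1):
        for pos in combinations(range(width), d):
            yield sub ^ sum(1 << p for p in pos)

def search_radius(query, codes, tables, nbits, m, r):
    """Exact radius-r search: probe each table within radius r//m, then verify."""
    w, cand = nbits // m, set()
    for i, sub in enumerate(split(query, nbits, m)):
        for probe in within(sub, r // m, w):
            cand.update(tables[i].get(probe, []))
    return sorted(i for i in cand if bin(codes[i] ^ query).count("1") <= r)

codes = [0b1010_1100_0110_0001, 0b1010_1100_0110_1111, 0b0000_1111_0000_1111]
tables = build_tables(codes, 16, 4)
print(search_radius(0b1010_1100_0110_0011, codes, tables, 16, m=4, r=2))  # -> [0, 1]
```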
A SSVEP Stimuli Encoding Method Using Trinary Frequency-Shift Keying Encoded SSVEP (TFSK-SSVEP).
Zhao, Xing; Zhao, Dechun; Wang, Xia; Hou, Xiaorong
2017-01-01
SSVEP is a kind of BCI technology with the advantage of a high information transfer rate. However, due to its nature, frequencies that could be used as stimuli are scarce. To solve this problem, a stimuli encoding method which encodes the SSVEP signal using the Frequency Shift-Keying (FSK) method is developed. In this method, each stimulus is controlled by an FSK signal which contains three different frequencies that represent "Bit 0," "Bit 1" and "Bit 2" respectively. Different from common BFSK in digital communication, "Bit 0" and "Bit 1" compose the unique identifier of a stimulus in binary bit stream form, while "Bit 2" indicates the end of a stimulus encoding. The EEG signal is acquired on channels Oz, O1, O2, Pz, P3, and P4, using an ADS1299 at a sample rate of 250 SPS. Before the original EEG signal is quadrature demodulated, it is detrended and then band-pass filtered using FFT-based FIR filtering to remove interference. The valid peak of the processed signal is acquired by calculating its derivative and converted into a bit stream using a window method. Theoretically, this coding method can implement at least 2^n - 1 (n is the length of the bit command) stimuli while keeping the ITR the same. This method is suitable for implementing stimuli on a monitor, for cases where the frequencies and phases that can be used to code stimuli are limited, and for implementing portable BCI devices which are not capable of performing complex calculations.
A good performance watermarking LDPC code used in high-speed optical fiber communication system
NASA Astrophysics Data System (ADS)
Zhang, Wenbo; Li, Chao; Zhang, Xiaoguang; Xi, Lixia; Tang, Xianfeng; He, Wenxue
2015-07-01
A watermarking LDPC code, which is a strategy designed to improve the performance of the traditional LDPC code, is introduced. By inserting some pre-defined watermarking bits into the original LDPC code, we can obtain a more accurate estimate of the noise level in the fiber channel. We then use this estimate to modify the probability distribution function (PDF) used in the initial process of the belief propagation (BP) decoding algorithm. This algorithm was tested in a 128 Gb/s PDM-DQPSK optical communication system, and the results showed that the watermarking LDPC code had a better tolerance to polarization mode dispersion (PMD) and nonlinearity than the traditional LDPC code. Also, at the cost of about 2.4% additional redundancy for the watermarking bits, the decoding efficiency of the watermarking LDPC code is about twice that of the traditional one.
Error-Rate Bounds for Coded PPM on a Poisson Channel
NASA Technical Reports Server (NTRS)
Moision, Bruce; Hamkins, Jon
2009-01-01
Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.
NASA Astrophysics Data System (ADS)
Maher, Robert; Alvarado, Alex; Lavery, Domaniç; Bayvel, Polina
2016-02-01
Optical fibre underpins the global communications infrastructure and has experienced an astonishing evolution over the past four decades, with current commercial systems transmitting data rates in excess of 10 Tb/s over a single fibre core. The continuation of this dramatic growth in throughput has become constrained due to a power dependent nonlinear distortion arising from a phenomenon known as the Kerr effect. The mitigation of fibre nonlinearities is an area of intense research. However, even in the absence of nonlinear distortion, the practical limit on the transmission throughput of a single fibre core is dominated by the finite signal-to-noise ratio (SNR) afforded by current state-of-the-art coherent optical transceivers. Therefore, the key to maximising the number of information bits that can be reliably transmitted over a fibre channel hinges on the simultaneous optimisation of the modulation format and code rate, based on the SNR achieved at the receiver. In this work, we use an information theoretic approach based on the mutual information and the generalised mutual information to characterise a state-of-the-art dual polarisation m-ary quadrature amplitude modulation transceiver and subsequently apply this methodology to a 15-carrier super-channel to achieve the highest throughput (1.125 Tb/s) ever recorded using a single coherent receiver.
Using game theory for perceptual tuned rate control algorithm in video coding
NASA Astrophysics Data System (ADS)
Luo, Jiancong; Ahmad, Ishfaq
2005-03-01
This paper proposes a game theoretical rate control technique for video compression. Using a cooperative gaming approach, which has been utilized in several branches of the natural and social sciences because of its enormous potential for solving constrained optimization problems, we propose a dual-level scheme to optimize the perceptual quality while guaranteeing "fairness" in bit allocation among macroblocks. At the frame level, the algorithm allocates target bits to frames based on their coding complexity. At the macroblock level, the algorithm distributes bits to macroblocks by defining a bargaining game. Macroblocks play cooperatively to compete for shares of resources (bits) to optimize their quantization scales while considering the Human Visual System's perceptual properties. Since the whole frame is an entity perceived by viewers, macroblocks compete cooperatively under a global objective of achieving the best quality with the given bit constraint. The major advantage of the proposed approach is that the cooperative game leads to an optimal and fair bit allocation strategy based on the Nash Bargaining Solution. Another advantage is that it allows multi-objective optimization with multiple decision makers (macroblocks). The simulation results testify to the algorithm's ability to achieve an accurate bit rate with good perceptual quality and to maintain a stable buffer level.
NASA Technical Reports Server (NTRS)
Brand, J.
1972-01-01
The fabrication, test, and delivery of an optical modulator system which will operate with a mode-locked Nd:YAG laser operating at either 1.06 or 0.53 micrometers is discussed. The delivered hardware operates at data rates up to 400 Mbps and includes a 0.53 micrometer electrooptic modulator, a 1.06 micrometer electrooptic modulator with power supply, and signal processing electronics with power supply. The modulators contain solid state drivers which accept digital signals with MECL logic levels, temperature controllers to maintain a stable thermal environment for the modulator crystals, and automatic electronic compensation to maximize the extinction ratio. The modulators use two lithium tantalate crystals cascaded in a double pass configuration. The signal processing electronics include encoding electronics which are capable of digitizing analog signals between the limits of +/- 0.75 volts at a maximum rate of 80 megasamples per second with 5 bit resolution. The digital samples are serialized and made available as a 400 Mbps serial NRZ data source for the modulators. A pseudorandom (PN) generator is also included in the signal processing electronics. This data source generates PN sequences with lengths between 31 bits and 32,767 bits in a serial NRZ format at rates up to 400 Mbps.
Chen, Ming; He, Jing; Tang, Jin; Wu, Xian; Chen, Lin
2014-07-28
In this paper, an FPGA-based real-time adaptively modulated 256/64/16-QAM-encoded baseband OFDM transceiver with a high spectral efficiency of up to 5.76 bit/s/Hz is successfully developed and experimentally demonstrated in a simple intensity-modulated direct-detection optical communication system. Experimental results show that it is feasible to transmit an adaptively modulated real-time optical OFDM signal with a raw bit rate of 7.19 Gbps over 20 km and 50 km single-mode fibers (SMFs). A performance comparison between real-time and off-line digital signal processing is performed, and the results show that there is a negligible power penalty. In addition, to obtain the best transmission performance, the direct-current (DC) bias voltage for the MZM and the launch power into the optical fiber links are explored in the real-time optical OFDM systems.
Designing an efficient LT-code with unequal error protection for image transmission
NASA Astrophysics Data System (ADS)
S. Marques, F.; Schwartz, C.; Pinho, M. S.; Finamore, W. A.
2015-10-01
The use of images from earth observation satellites is spread over different applications, such as car navigation systems and disaster monitoring. In general, those images are captured by on-board imaging devices and must be transmitted to the Earth using a communication system. Even though a high resolution image can produce a better Quality of Service, it leads to transmitters with high bit rates, which require a large bandwidth and expend a large amount of energy. Therefore, it is very important to design efficient communication systems. From communication theory, it is well known that a source encoder is crucial in an efficient system. In remote sensing satellite image transmission, this efficiency is achieved by using an image compressor to reduce the amount of data which must be transmitted. The Consultative Committee for Space Data Systems (CCSDS), a multinational forum for the development of communications and data system standards for space flight, establishes a recommended standard for a data compression algorithm for images from space systems. Unfortunately, in the satellite communication channel, the transmitted signal is corrupted by the presence of noise, interference signals, etc. Therefore, the receiver of a digital communication system may fail to recover the transmitted bit. A channel code can be used to reduce the effect of this failure. In 2002, the Luby Transform code (LT-code) was introduced and shown to be very efficient when the binary erasure channel model was used. Since the effect of a bit recovery failure depends on the position of the bit in the compressed image stream, in the last decade many efforts have been made to develop LT-codes with unequal error protection. In 2012, Arslan et al. showed improvements when LT-codes with unequal error protection were used on images compressed by the SPIHT algorithm. The techniques presented by Arslan et al. can be adapted to work with the image compression algorithm recommended by CCSDS. In fact, to design an LT-code with unequal error protection, the bit stream produced by the algorithm recommended by CCSDS must be partitioned into M disjoint sets of bits. Using the weighted approach, the LT-code produces M different failure probabilities for the sets of bits, p1, ..., pM, leading to a total probability of failure p, which is an average of p1, ..., pM. In general, the parameters of the LT-code with unequal error protection are chosen using a heuristic procedure. In this work, we analyze the problem of choosing the LT-code parameters to optimize two figures of merit: (a) the probability of achieving a minimum acceptable PSNR, and (b) the mean of the PSNR, given that the minimum acceptable PSNR has been achieved. Given the rate-distortion curve achieved by the CCSDS recommended algorithm, this work establishes a closed form for the mean of the PSNR (given that the minimum acceptable PSNR has been achieved) as a function of p1, ..., pM. The main contribution of this work is the study of a criterion to select the parameters p1, ..., pM to optimize the performance of image transmission.
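To make the encoding side concrete, the toy sketch below implements an LT encoder with unequal error protection by weighted sampling of input symbols: symbols in the more important sets are selected with larger probability, so they appear in more output symbols and are recovered with higher probability. The degree distribution and weights are illustrative placeholders, not the optimized parameters p1, ..., pM discussed above.

```python
import random

def lt_encode_uep(symbols, weights, degree_dist, n_out, seed=1):
    """Toy LT encoder with unequal error protection.

    symbols     : list of input symbols (integers standing in for source packets)
    weights     : per-symbol selection weights (larger = better protected)
    degree_dist : dict {degree: probability} used to draw each output degree
    n_out       : number of encoded symbols to emit
    Returns a list of (chosen_indices, xor_value) pairs.
    """
    rng = random.Random(seed)
    degrees, probs = zip(*degree_dist.items())
    out = []
    for _ in range(n_out):
        d = rng.choices(degrees, probs)[0]
        idx = set()
        while len(idx) < d:                      # weighted draw without replacement
            idx.add(rng.choices(range(len(symbols)), weights)[0])
        value = 0
        for i in idx:
            value ^= symbols[i]                  # XOR of the chosen input symbols
        out.append((sorted(idx), value))
    return out

data = list(range(16, 32))                       # 16 toy input symbols
w = [4.0] * 4 + [1.0] * 12                       # the first 4 symbols get heavier protection
encoded = lt_encode_uep(data, w, {1: 0.1, 2: 0.5, 3: 0.3, 4: 0.1}, n_out=40)
print(encoded[:3])
```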
Towards a heralded eigenstate-preserving measurement of multi-qubit parity in circuit QED
NASA Astrophysics Data System (ADS)
Huembeli, Patrick; Nigg, Simon E.
2017-07-01
Eigenstate-preserving multi-qubit parity measurements lie at the heart of stabilizer quantum error correction, which is a promising approach to mitigate the problem of decoherence in quantum computers. In this work we explore a high-fidelity, eigenstate-preserving parity readout for superconducting qubits dispersively coupled to a microwave resonator, where the parity bit is encoded in the amplitude of a coherent state of the resonator. Detecting photons emitted by the resonator via a current-biased Josephson junction yields information about the parity bit. We analyze theoretically the measurement back-action in the limit of a strongly coupled fast detector and show that in general such a parity measurement, while approximately quantum nondemolition, is not eigenstate-preserving. To remedy this shortcoming we propose a simple dynamical decoupling technique during photon detection, which greatly reduces decoherence within a given parity subspace. Furthermore, by applying a sequence of fast displacement operations interleaved with the dynamical decoupling pulses, the natural bias of this binary detector can be efficiently suppressed. Finally, we introduce the concept of a heralded parity measurement, where a detector click guarantees successful multi-qubit parity detection even for finite detection efficiency.
NASA Astrophysics Data System (ADS)
Tavousi, A.; Mansouri-Birjandi, M. A.
2018-02-01
By implementing intensity-dependent Kerr-like nonlinearity in octagonal-shape photonic crystal ring resonators (OSPCRRs), a new class of optical analog-to-digital converters (ADCs) with low power consumption is presented. Due to its size-dependent refractive index, silicon (Si) nanocrystal is used as the nonlinear medium in the proposed ADC. The coding system of the optical ADC is based on successive-like approximations, which require only one quantization level to represent each single bit, in contrast to conventional ADCs that require at least two distinct levels for each bit. Each bit of the optical ADC is formed by vertical alignment of double rings of OSPCRRs (DR-OSPCRR), and cascading m DR-OSPCRRs forms an m-bit ADC. By investigating different parameters of the DR-OSPCRR, such as the refractive indices of the rings, the lattice refractive index, and the waveguide-to-ring and ring-to-ring coupling coefficients, the ADC's threshold power is tuned. Increasing the number of bits of the ADC increases its overall power consumption. One can arrange to have any number of bits for this ADC, as long as the power levels are treated carefully. Finite difference time domain (FDTD) in-house codes were used to evaluate the ADC's effectiveness.
Spatial Lattice Modulation for MIMO Systems
NASA Astrophysics Data System (ADS)
Choi, Jiwook; Nam, Yunseo; Lee, Namyoon
2018-06-01
This paper proposes spatial lattice modulation (SLM), a spatial modulation method for multiple-input multiple-output (MIMO) systems. The key idea of SLM is to jointly exploit spatial, in-phase, and quadrature dimensions to modulate information bits into a multi-dimensional signal set that consists of lattice points. One major finding is that SLM achieves a higher spectral efficiency than the existing spatial modulation and spatial multiplexing methods for the MIMO channel under the constraint of M-ary pulse-amplitude-modulation (PAM) input signaling per dimension. In particular, it is shown that when the SLM signal set is constructed by using dense lattices, a significant signal-to-noise-ratio (SNR) gain, i.e., a nominal coding gain, is attainable compared to the existing methods. In addition, closed-form expressions for both the average mutual information and average symbol-vector-error-probability (ASVEP) of generic SLM are derived under Rayleigh-fading environments. To reduce detection complexity, a low-complexity detection method for SLM, which is referred to as lattice sphere decoding, is developed by exploiting lattice theory. Simulation results verify the accuracy of the conducted analysis and demonstrate that the proposed SLM techniques achieve higher average mutual information and lower ASVEP than do existing methods.
The VLSI design of a Reed-Solomon encoder using Berlekamp's bit-serial multiplier algorithm
NASA Technical Reports Server (NTRS)
Truong, T. K.; Deutsch, L. J.; Reed, I. S.; Hsu, I. S.; Wang, K.; Yeh, C. S.
1982-01-01
Realization of a bit-serial multiplication algorithm for the encoding of Reed-Solomon (RS) codes on a single VLSI chip using NMOS technology is demonstrated to be feasible. A dual basis (255, 223) code over a Galois field is used. The conventional RS encoder for long codes often requires look-up tables to perform the multiplication of two field elements. Berlekamp's algorithm requires only shifting and exclusive-OR operations.
NASA Astrophysics Data System (ADS)
Wang, Huiqin; Wang, Xue; Lynette, Kibe; Cao, Minghua
2018-06-01
The performance of multiple-input multiple-output wireless optical communication systems that adopt Q-ary pulse position modulation over a spatially correlated log-normal fading channel is analyzed in terms of the uncoded bit error rate and ergodic channel capacity. The analysis is based on Wilkinson's method, which approximates the distribution of a sum of correlated log-normal random variables by a log-normal random variable. The analytical and simulation results corroborate that increasing the correlation coefficients among sub-channels leads to system performance degradation. Moreover, receiver diversity performs better in resisting the channel fading caused by spatial correlation.
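For reference, Wilkinson's method matches the first two moments of the sum Z of correlated log-normal terms e^{Y_i}, with jointly Gaussian Y_i ~ N(mu_i, sigma_i^2) and correlation coefficients r_ij, to a single log-normal e^W:

```latex
% First two moments of Z = \sum_i e^{Y_i}
m_1 = \mathbb{E}[Z] = \sum_i e^{\mu_i + \sigma_i^2/2},
\qquad
m_2 = \mathbb{E}[Z^2] = \sum_i \sum_j
  e^{\mu_i + \mu_j + \frac{1}{2}\left(\sigma_i^2 + \sigma_j^2 + 2 r_{ij}\sigma_i\sigma_j\right)}
% Matching to Z \approx e^{W}, \ W \sim \mathcal{N}(\mu_W, \sigma_W^2):
\sigma_W^2 = \ln m_2 - 2\ln m_1,
\qquad
\mu_W = 2\ln m_1 - \tfrac{1}{2}\ln m_2
```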
Pilot-based parametric channel estimation algorithm for DCO-OFDM-based visual light communications
NASA Astrophysics Data System (ADS)
Qian, Xuewen; Deng, Honggui; He, Hailang
2017-10-01
Due to the wide modulation bandwidth in optical communication, multipath channels may be non-sparse and degrade communication performance heavily. Traditional compressive sensing-based channel estimation algorithms cannot be employed in this kind of situation. In this paper, we propose a practical parametric channel estimation algorithm for orthogonal frequency division multiplexing (OFDM)-based visible light communication (VLC) systems based on a modified zero correlation code (ZCC) pair that has an impulse-like correlation property. Simulation results show that the proposed algorithm achieves better performance than the existing least squares (LS)-based algorithm in both bit error ratio (BER) and frequency response estimation.
Coding tools investigation for next generation video coding based on HEVC
NASA Astrophysics Data System (ADS)
Chen, Jianle; Chen, Ying; Karczewicz, Marta; Li, Xiang; Liu, Hongbin; Zhang, Li; Zhao, Xin
2015-09-01
The new state-of-the-art video coding standard, H.265/HEVC, has been finalized in 2013 and it achieves roughly 50% bit rate saving compared to its predecessor, H.264/MPEG-4 AVC. This paper provides the evidence that there is still potential for further coding efficiency improvements. A brief overview of HEVC is firstly given in the paper. Then, our improvements on each main module of HEVC are presented. For instance, the recursive quadtree block structure is extended to support larger coding unit and transform unit. The motion information prediction scheme is improved by advanced temporal motion vector prediction, which inherits the motion information of each small block within a large block from a temporal reference picture. Cross component prediction with linear prediction model improves intra prediction and overlapped block motion compensation improves the efficiency of inter prediction. Furthermore, coding of both intra and inter prediction residual is improved by adaptive multiple transform technique. Finally, in addition to deblocking filter and SAO, adaptive loop filter is applied to further enhance the reconstructed picture quality. This paper describes above-mentioned techniques in detail and evaluates their coding performance benefits based on the common test condition during HEVC development. The simulation results show that significant performance improvement over HEVC standard can be achieved, especially for the high resolution video materials.
28-Bit serial word simulator/monitor
NASA Technical Reports Server (NTRS)
Durbin, J. W.
1979-01-01
Modular interface unit transfers data at high speeds along four channels. Device expedites variable-word-length communication between computers. Operation eases exchange of bit information by automatically reformatting coded input data and status information to match requirements of output.
Real-time minimal-bit-error probability decoding of convolutional codes
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1974-01-01
A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.
Real-time minimal bit error probability decoding of convolutional codes
NASA Technical Reports Server (NTRS)
Lee, L. N.
1973-01-01
A recursive procedure is derived for decoding of rate R=1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e. fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as in the inner coding system for concatenated coding.
NASA Technical Reports Server (NTRS)
Recksiedler, A. L.; Lutes, C. L.
1972-01-01
The oligatomic (mirror) thin film memory technology is a suitable candidate for general purpose spaceborne applications in the post-1975 time frame. Capacities of around 10^8 bits can be reliably implemented with systems designed around a 335 million bit module. The recommended module size was determined following an investigation of implementation sizes ranging from 8,000,000 to 100,000,000 bits per module. Cost, power, weight, volume, reliability, maintainability and speed were investigated. The memory includes random access, NDRO, SEC-DED, nonvolatility, and dual interface characteristics. The applications most suitable for the technology are those involving a large capacity with high speed (no latency), nonvolatility, and random accessing.
Digital high speed programmable convolver
NASA Astrophysics Data System (ADS)
Rearick, T. C.
1984-12-01
A circuit module for rapidly calculating a discrete numerical convolution is described. A convolution such as finding the sum of the products of a 16-bit constant and a 16-bit variable is performed by a module which is programmable, so that the constant may be changed for a new problem. In addition, the module may be programmed to find the sum of the products of 4- and 8-bit constants and variables. RAMs (Random Access Memories) are loaded with the partial products of the selected constant and all possible variables. Then, when the actual variable is loaded, it acts as an address to find the correct partial product in the particular RAM. The partial products from all of the RAMs are shifted to the appropriate numerical power position (if necessary) and then added in adder elements.
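A Python sketch of the table-lookup idea: each RAM holds the products of the programmed constant with every possible value of one chunk of the variable, so a multiply reduces to lookups, shifts, and adds; the 8-bit chunking and test values are illustrative, not the module's actual word organization.

```python
def program_tables(constant, var_bits=16, chunk_bits=8):
    """Precompute partial-product tables, one per chunk of the variable,
    mimicking RAMs loaded with 'constant * every possible chunk value'."""
    n_chunks = var_bits // chunk_bits
    return [[constant * v for v in range(1 << chunk_bits)] for _ in range(n_chunks)]

def table_multiply(tables, variable, chunk_bits=8):
    """Multiply using only lookups, shifts, and adds (no hardware multiplier)."""
    acc = 0
    for i, table in enumerate(tables):
        chunk = (variable >> (i * chunk_bits)) & ((1 << chunk_bits) - 1)
        acc += table[chunk] << (i * chunk_bits)     # shift to its power-of-two position
    return acc

def sliding_sum_of_products(constants, samples):
    """Sum of products of the programmed constants with a sliding window of samples
    (correlation form; reverse the constants for a true convolution)."""
    tabs = [program_tables(c) for c in constants]
    out = []
    for n in range(len(samples) - len(constants) + 1):
        out.append(sum(table_multiply(tabs[k], samples[n + k]) for k in range(len(constants))))
    return out

print(sliding_sum_of_products([3, 5, 7], [1, 2, 3, 4]))   # [34, 49]
```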
Fault tolerance analysis and applications to microwave modules and MMIC's
NASA Astrophysics Data System (ADS)
Boggan, Garry H.
A project whose objective was to provide an overview of built-in-test (BIT) considerations applicable to microwave systems, modules, and MMICs (monolithic microwave integrated circuits) is discussed. Available analytical techniques and software for assessing system failure characteristics were researched, and the resulting investigation provides a review of two techniques which have applicability to microwave systems design. A system-level approach to fault tolerance and redundancy management is presented in its relationship to the subsystem/element design. An overview of the microwave BIT focus from the Air Force Integrated Diagnostics program is presented. The technical reports prepared by the GIMADS team were reviewed for applicability to microwave modules and components. A review of MIMIC (millimeter and microwave integrated circuit) program activities relative to BIT/BITE is given.
New PDC bit optimizes drilling performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Besson, A.; Gudulec, P. le; Delwiche, R.
1996-05-01
The lithology in northwest Argentina contains a major section where polycrystalline diamond compact (PDC) bits have not succeeded in the past. The section consists of dense shales and cemented sandstone stringers with limestone laminations. Conventional PDC bits experienced premature failures in the section. A new generation PDC bit tripled the rate of penetration (ROP) and increased the potential footage per bit by five times. Recent improvements in PDC bit technology that enabled the improved performance include: the ability to control the PDC cutter quality; use of an advanced cutter layout defined by 3D software; use of cutter face design code for optimized cleaning and cooling; and mastering vibration reduction features, including spiraled blades.
Modular high speed counter employing edge-triggered code
Vanstraelen, Guy F.
1993-06-29
A high speed modular counter (100) utilizing a novel counting method in which the first bit changes with the frequency of the driving clock, and changes in the higher order bits are initiated one clock pulse after a "0" to "1" transition of the next lower order bit. This allows all carries to be known one clock period in advance of a bit change. The present counter is modular and utilizes two types of standard counter cells. A first counter cell determines the zero bit. The second counter cell determines any other higher order bit. Additional second counter cells are added to the counter to accommodate any count length without affecting speed.
Serial data correlator/code translator
NASA Technical Reports Server (NTRS)
Morgan, L. E. (Inventor)
1982-01-01
A system for analyzing asynchronous signals containing bits of information, for ensuring the validity of the signals, by sampling each bit of information a plurality of times and feeding the sampled pieces of bits of information into a sequence controller, is described. The sequence controller has a plurality of maps or programs through which the sampled pieces of bits are stepped so as to identify the particular bit of information and determine the validity and phase of the bit. The step in which the sequence controller is clocked is controlled by a storage register. A data decoder decodes the information fed out of the storage register and feeds such information to shift registers for storage.
Low power signal processing research at Stanford
NASA Technical Reports Server (NTRS)
Burr, J.; Williamson, P. R.; Peterson, A.
1991-01-01
This paper gives an overview of the research being conducted at Stanford University's Space, Telecommunications, and Radioscience Laboratory in the area of low energy computation. It discusses the work we are doing in large scale digital VLSI neural networks, interleaved processor and pipelined memory architectures, energy estimation and optimization, multichip module packaging, and low voltage digital logic.
Integrated infrared and visible image sensors
NASA Technical Reports Server (NTRS)
Fossum, Eric R. (Inventor); Pain, Bedabrata (Inventor)
2000-01-01
Semiconductor imaging devices integrating an array of visible detectors and another array of infrared detectors into a single module to simultaneously detect both the visible and infrared radiation of an input image. The visible detectors and the infrared detectors may be formed either on two separate substrates or on the same substrate by interleaving visible and infrared detectors.
Concurrent error detecting codes for arithmetic processors
NASA Technical Reports Server (NTRS)
Lim, R. S.
1979-01-01
A method of concurrent error detection for arithmetic processors is described. Low-cost residue codes with check length l and check base m = 2^l - 1 are described for checking the arithmetic operations of addition, subtraction, multiplication, division, complement, shift, and rotate. Of the three number representations, the signed-magnitude representation is preferred for residue checking. Two methods of residue generation are described: the standard method of using modulo-m adders and the method of using a self-testing residue tree. A simple single-bit parity-check code is described for checking the logical operations of XOR, OR, and AND, and also the arithmetic operations of complement, shift, and rotate. For checking complement, shift, and rotate, the single-bit parity-check code is simpler to implement than the residue codes.
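A minimal sketch of the residue-checking idea for addition, assuming a check base m = 2^l - 1: the residue of the adder's result is recomputed independently and compared with the residue predicted from the operands, so a mismatch flags an arithmetic fault. The digit-folding residue computation exploits the 2^l - 1 base; the function names and example values are illustrative, not taken from the report.

```python
# Minimal sketch of concurrent residue checking with check base m = 2**l - 1.
# The checker recomputes the residue of the result independently and compares it
# with the residue predicted from the operands; a mismatch indicates a fault.

def residue(x: int, l: int) -> int:
    """Residue modulo m = 2**l - 1 (low cost: sum l-bit digits with end-around carry)."""
    m = (1 << l) - 1
    r = 0
    while x:
        r += x & m
        x >>= l
    while r > m:                               # fold carries back in (end-around carry)
        r = (r & m) + (r >> l)
    return 0 if r == m else r

def checked_add(a: int, b: int, l: int) -> int:
    m = (1 << l) - 1
    s = a + b                                  # result from the (possibly faulty) adder
    predicted = (residue(a, l) + residue(b, l)) % m
    assert residue(s, l) == predicted, "residue check failed: arithmetic fault detected"
    return s

if __name__ == "__main__":
    print(checked_add(12345, 67890, l=3))      # check base m = 7
```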
Fast interrupt platform for extended DOS
NASA Technical Reports Server (NTRS)
Duryea, T. W.
1995-01-01
Extended DOS offers the unique combination of a simple operating system which allows direct access to the interrupt tables, 32 bit protected mode access to 4096 MByte address space, and the use of industry standard C compilers. The drawback is that fast interrupt handling requires both 32 bit and 16 bit versions of each real-time process interrupt handler to avoid mode switches on the interrupts. A set of tools has been developed which automates the process of transforming the output of a standard 32 bit C compiler to 16 bit interrupt code which directly handles the real mode interrupts. The entire process compiles one set of source code via a make file, which boosts productivity by making the management of the compile-link cycle very simple. The software components are in the form of classes written mostly in C. A foreground process written as a conventional application which can use the standard C libraries can communicate with the background real-time classes via a message passing mechanism. The platform thus enables the integration of high performance real-time processing into a conventional application framework.
Memory-efficient decoding of LDPC codes
NASA Technical Reports Server (NTRS)
Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon
2005-01-01
We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, we have shown by simulation that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating point decoder, and an optimized 4-bit quantizer operates with less than 0.1 dB quantization loss.
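To make the quantizer concrete, the sketch below quantizes LLR messages to 3 bits with a non-uniform level placement that is denser near zero. The levels and thresholds are placeholders chosen for illustration; in the scheme above they would be the values that maximize the mutual information between the source bit and the quantized message.

```python
import numpy as np

# Illustrative non-uniform 3-bit quantizer for LLR messages in an LDPC decoder.
# The level placement below is a placeholder; the optimized quantizer would pick levels
# that maximize mutual information between the source bit and the quantized message.

LEVELS = np.array([-6.0, -3.0, -1.5, -0.5, 0.5, 1.5, 3.0, 6.0])   # 8 levels = 3 bits
THRESHOLDS = (LEVELS[:-1] + LEVELS[1:]) / 2.0                      # midpoint decisions

def quantize(llr: np.ndarray) -> np.ndarray:
    """Map each LLR to a 3-bit index (0..7)."""
    return np.searchsorted(THRESHOLDS, llr)

def dequantize(idx: np.ndarray) -> np.ndarray:
    """Map 3-bit indices back to the representative LLR values used by the decoder."""
    return LEVELS[idx]

if __name__ == "__main__":
    llrs = np.array([-7.2, -0.3, 0.1, 2.4, 9.9])
    q = quantize(llrs)
    print(q, dequantize(q))
```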
45 Gb/s low complexity optical front-end for soft-decision LDPC decoders.
Sakib, Meer Nazmus; Moayedi, Monireh; Gross, Warren J; Liboiron-Ladouceur, Odile
2012-07-30
In this paper a low complexity and energy efficient 45 Gb/s soft-decision optical front-end to be used with soft-decision low-density parity-check (LDPC) decoders is demonstrated. The results show that the optical front-end exhibits a net coding gain of 7.06 and 9.62 dB for post forward error correction bit error rates of 10^-7 and 10^-12 for the long block length LDPC(32768,26803) code. The gain over a hard-decision front-end is 1.9 dB for this code. It is shown that the soft-decision circuit can also be used as a 2-bit flash type analog-to-digital converter (ADC), in conjunction with equalization schemes. At a bit rate of 15 Gb/s, using RS(255,239), LDPC(672,336), (672, 504), (672, 588), and (1440, 1344) with a 6-tap finite impulse response (FIR) equalizer will result in optical power savings of 3, 5, 7, 9.5 and 10.5 dB, respectively. The 2-bit flash ADC consumes only 2.71 W at 32 GSamples/s. At 45 GSamples/s the power consumption is estimated to be 4.95 W.
NASA Technical Reports Server (NTRS)
Rice, R. F.; Hilbert, E. E. (Inventor)
1976-01-01
A space communication system incorporating a concatenated Reed Solomon Viterbi coding channel is discussed for transmitting compressed and uncompressed data from a spacecraft to a data processing center on Earth. Imaging (and other) data are first compressed into source blocks which are then coded by a Reed Solomon coder and interleaver, followed by a convolutional encoder. The received data is first decoded by a Viterbi decoder, followed by a Reed Solomon decoder and deinterleaver. The output of the latter is then decompressed, based on the compression criteria used in compressing the data in the spacecraft. The decompressed data is processed to reconstruct an approximation of the original data-producing condition or images.
NASA Astrophysics Data System (ADS)
Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.
2006-01-01
In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding, and product code Reed-Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed for discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Also, comparisons with classical scalable coding show the effectiveness of using hybrid scalable/multiple-description coding for wireless transmission.
Chen, Ke; Feng, Yijun; Yang, Zhongjie; Cui, Li; Zhao, Junming; Zhu, Bo; Jiang, Tian
2016-10-24
An ultrathin metasurface comprising various sub-wavelength meta-particles offers promising advantages in controlling electromagnetic waves by spatially manipulating the wavefront characteristics across the interface. The recently proposed digital coding metasurface can even simplify the design and optimization procedures owing to the digitalization of the meta-particle geometry. However, current attempts to implement digital metasurfaces still utilize several structural meta-particles to obtain certain electromagnetic responses, and require time-consuming optimization, especially in multi-bit coding designs. In this regard, we present herein the use of a geometric-phase-based, single structured meta-particle with various orientations to achieve either 1-bit or multi-bit digital metasurfaces. Particular electromagnetic wave scattering patterns dependent on the incident polarization can be tailored by metasurfaces encoded with regular sequences. In contrast, polarization-insensitive diffusion-like scattering can also be achieved by digital metasurfaces encoded with randomly distributed coding sequences, leading to substantial suppression of backward scattering over a broad microwave frequency band. The proposed digital metasurfaces provide simple designs and reveal new opportunities for controlling electromagnetic wave scattering with or without polarization dependence. PMID:27775064
NASA Astrophysics Data System (ADS)
Weng, Yi; He, Xuan; Yao, Wang; Pacheco, Michelle C.; Wang, Junyi; Pan, Zhongqi
2017-07-01
In this paper, we explored the performance of a space-time block-coding (STBC) assisted multiple-input multiple-output (MIMO) scheme for modal dispersion and mode-dependent loss (MDL) mitigation in spatial-division multiplexed optical communication systems, where the weight matrices of frequency-domain equalization (FDE) were updated heuristically using a decision-directed recursive least squares (RLS) algorithm for convergence and channel estimation. The proposed STBC-RLS algorithm can achieve a 43.6% enhancement in convergence rate over conventional least mean squares (LMS) for quadrature phase-shift keying (QPSK) signals with merely a 16.2% increase in hardware complexity. The overall optical signal-to-noise ratio (OSNR) tolerance can be improved via STBC by approximately 3.1, 4.9, and 7.8 dB for QPSK, 16-quadrature amplitude modulation (QAM) and 64-QAM, with respective bit-error rates (BER) and minimum-mean-square-error (MMSE) equalization.
Transmission over UWB channels with OFDM system using LDPC coding
NASA Astrophysics Data System (ADS)
Dziwoki, Grzegorz; Kucharczyk, Marcin; Sulek, Wojciech
2009-06-01
A hostile wireless environment requires the use of sophisticated signal processing methods. The paper concerns Ultra Wideband (UWB) transmission over Personal Area Networks (PAN), including the MB-OFDM specification of the physical layer. In the presented work, the transmission system with OFDM modulation was connected with an LDPC encoder/decoder. Additionally, the frame and bit error rates (FER and BER) of the system were decreased by using results from the LDPC decoder in a kind of turbo equalization algorithm for better channel estimation. A computational block using an evolutionary strategy, from the genetic algorithms family, was also used in the presented system. It is placed after the SPA (Sum-Product Algorithm) decoder and is conditionally turned on in the decoding process. The result is increased effectiveness of the whole system, especially a lower FER. The system was tested with two types of LDPC codes, depending on the type of parity check matrix: randomly generated and constructed deterministically, optimized for a practical decoder architecture implemented in an FPGA device.
Performance Analysis of New Binary User Codes for DS-CDMA Communication
NASA Astrophysics Data System (ADS)
Usha, Kamle; Jaya Sankar, Kottareddygari
2016-03-01
This paper analyzes new binary spreading codes through their correlation properties and also presents their performance over the additive white Gaussian noise (AWGN) channel. The proposed codes are constructed using Gray and inverse Gray codes. In this paper, an n-bit Gray code appended with its n-bit inverse Gray code to construct 2n-length binary user codes is discussed. Like Walsh codes, these binary user codes are available in sizes of powers of two, and additionally code sets of length 6 and their even multiples are also available. The simple construction technique and the generation of code sets of different sizes are the salient features of the proposed codes. Walsh codes and Gold codes are considered for comparison in this paper as these are popularly used for synchronous and asynchronous multi-user communications, respectively. In the current work, the auto- and cross-correlation properties of the proposed codes are compared with those of Walsh codes and Gold codes. The performance of the proposed binary user codes for both synchronous and asynchronous direct-sequence CDMA communication over the AWGN channel is also discussed in this paper. The proposed binary user codes are found to be suitable for both synchronous and asynchronous DS-CDMA communication.
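A minimal sketch of the construction idea, under stated assumptions: gray(i) = i ^ (i >> 1) is the standard reflected Gray code, and the "inverse Gray code" is taken here to be the cumulative-XOR (Gray-to-binary) transform of the index; the paper's exact definition may differ. Appending the two n-bit words gives one 2n-chip user code, and the snippet prints zero-lag cross-correlations between code pairs.

```python
import numpy as np

# Sketch of constructing 2n-chip binary user codes by appending an "inverse Gray" word
# to a Gray-code word. ASSUMPTIONS: gray() is the standard reflected Gray code and
# inverse_gray() is the cumulative-XOR (Gray-to-binary) transform; the paper's exact
# "inverse gray code" definition may differ from this.

def to_bits(x: int, n: int) -> list[int]:
    return [(x >> k) & 1 for k in reversed(range(n))]

def gray(i: int) -> int:
    return i ^ (i >> 1)

def inverse_gray(i: int, n: int) -> int:
    """Cumulative XOR from the MSB down (the usual Gray-to-binary mapping)."""
    out, acc = 0, 0
    for b in to_bits(i, n):
        acc ^= b
        out = (out << 1) | acc
    return out

def user_code(i: int, n: int) -> np.ndarray:
    bits = to_bits(gray(i), n) + to_bits(inverse_gray(i, n), n)
    return np.array([1 if b else -1 for b in bits])     # bipolar chips for correlation

if __name__ == "__main__":
    n = 3
    codes = [user_code(i, n) for i in range(2 ** n)]
    for a in range(len(codes)):
        for b in range(a + 1, len(codes)):
            print(a, b, int(codes[a] @ codes[b]))        # zero-lag cross-correlation
```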
Dimitriadis, Stavros I; Marimpis, Avraam D
2018-01-01
A brain-computer interface (BCI) is a channel of communication that transforms brain activity into specific commands for manipulating a personal computer or other home or electrical devices. In other words, a BCI is an alternative way of interacting with the environment by using brain activity instead of muscles and nerves. For that reason, BCI systems are of high clinical value for targeted populations suffering from neurological disorders. In this paper, we present a new processing approach in three publicly available BCI data sets: (a) a well-known multi-class (N = 6) coded-modulated Visual Evoked Potential (c-VEP)-based BCI system for able-bodied and disabled subjects; (b) a multi-class (N = 32) c-VEP with slow and fast stimulus representation; and (c) a steady-state Visual Evoked Potential (SSVEP) multi-class (N = 5) flickering BCI system. Estimating cross-frequency coupling (CFC), namely δ-θ [δ: (0.5-4 Hz), θ: (4-8 Hz)] phase-to-amplitude coupling (PAC), within sensors and across experimental time, we succeeded in achieving high classification accuracy and Information Transfer Rates (ITR) in the three data sets. Our approach outperformed the originally presented ITR on the three data sets. The bit rates obtained for both the disabled and able-bodied subjects reached the fastest reported level of 324 bits/min with the PAC estimator. Additionally, our approach outperformed alternative signal features such as the relative power (29.73 bits/min) and raw time series analysis (24.93 bits/min), and also the originally reported bit rates of 10-25 bits/min. In the second data set, we succeeded in achieving an average ITR of 124.40 ± 11.68 for the slow 60 Hz and an average ITR of 233.99 ± 15.75 for the fast 120 Hz condition. In the third data set, we succeeded in achieving an average ITR of 106.44 ± 8.94. The current methodology outperforms any previous methodology applied to each of the three freely available BCI datasets.
2011-05-01
...rate convolutional codes or the prioritized Rate-Compatible Punctured Convolutional (RCPC) codes. The RCPC codes achieve unequal error protection (UEP) by puncturing off different amounts of coded bits of the parent code.
NASA Astrophysics Data System (ADS)
Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong
2017-10-01
Focusing and imaging through scattering media has been proved possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), and thereby the focusing quality can be improved. The correction phase is often found by global searching algorithms, among which the Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually with the progression of optimization, causing the improvement factor of the optimization to reach a plateau eventually. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor with the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying the correction phases of all interleaved groups together on the SLM. The ISC method has proved significantly useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality is improved as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demand on the dynamic range of detection devices. The proposed method holds potential in applications such as high-resolution imaging in deep tissue.
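The toy sketch below illustrates only the interleaved-group bookkeeping of ISC, under simplified assumptions: the scattering medium is modeled as a random complex transmission vector, the focus intensity is |Σ t_k·exp(iφ_k)|², and, for brevity, a random-mutation hill climb stands in for the paper's genetic algorithm within each group. Segment counts and iteration numbers are illustrative.

```python
import numpy as np

# Toy illustration of interleaved segment correction (ISC). A random complex transmission
# vector stands in for the scattering medium; focus intensity is |sum_k t_k * exp(i*phi_k)|^2.
# For brevity a random-mutation hill climb replaces the paper's genetic algorithm; the
# interleaved-group structure is the point of the sketch.

rng = np.random.default_rng(0)
N_SEG, N_GROUPS, N_ITER = 256, 4, 400                         # illustrative sizes
t = rng.normal(size=N_SEG) + 1j * rng.normal(size=N_SEG)      # unknown medium response

def intensity(phase: np.ndarray) -> float:
    return abs(np.sum(t * np.exp(1j * phase))) ** 2

phase = np.zeros(N_SEG)                                       # correction mask on the SLM
groups = [np.arange(g, N_SEG, N_GROUPS) for g in range(N_GROUPS)]  # interleaved groups

for idx in groups:                                            # optimize each group in turn
    best = intensity(phase)
    for _ in range(N_ITER):
        trial = phase.copy()
        seg = rng.choice(idx)                                 # mutate one segment of this group
        trial[seg] = rng.uniform(0, 2 * np.pi)
        if (val := intensity(trial)) > best:
            phase, best = trial, val

print("enhancement over flat mask:", intensity(phase) / intensity(np.zeros(N_SEG)))
```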
Dynamics, Stability, and Evolutionary Patterns of Mesoscale Intrathermocline Vortices
2016-12-01
physical oceanography, namely, the link between the basin-scale forcing of the ocean by air-sea fluxes and the dissipation of energy and thermal variance at the microscale. Subject terms: meddy, intrathermocline, double diffusion, energy cascade, eddy, MITgcm, numerical simulation, interleaving, lateral intrusions, lateral diffusivity, heat flux.
Preliminary design for a standard 10^7 bit Solid State Memory (SSM)
NASA Technical Reports Server (NTRS)
Hayes, P. J.; Howle, W. M., Jr.; Stermer, R. L., Jr.
1978-01-01
A modular concept with three separate modules roughly separating bubble domain technology, control logic technology, and power supply technology was employed. These modules were respectively the standard memory module (SMM), the data control unit (DCU), and the power supply module (PSM). The storage medium was provided by bubble domain chips organized into memory cells. These cells and the circuitry for parallel data access to the cells make up the SMM. The DCU provides a flexible serial data interface to the SMM. The PSM provides adequate power to enable one DCU and one SMM to operate simultaneously at the maximum data rate. The SSM was designed to handle asynchronous data rates from dc to 1.024 Mb/s with a bit error rate less than 1 error in 10^8 bits. Two versions of the SSM, a serial data memory and a dual parallel data memory, were specified using the standard modules. The SSM specification includes requirements for radiation hardness, temperature and mechanical environments, dc magnetic field emission and susceptibility, electromagnetic compatibility, and reliability.
An ultralow power athermal silicon modulator.
Timurdogan, Erman; Sorace-Agaskar, Cheryl M; Sun, Jie; Shah Hosseini, Ehsan; Biberman, Aleksandr; Watts, Michael R
2014-06-11
Silicon photonics has emerged as the leading candidate for implementing ultralow power wavelength-division-multiplexed communication networks in high-performance computers, yet current components (lasers, modulators, filters and detectors) consume too much power for the high-speed femtojoule-class links that ultimately will be required. Here we demonstrate and characterize the first modulator to achieve simultaneous high-speed (25 Gb s^-1), low-voltage (0.5 VPP) and efficient 0.9 fJ per bit error-free operation. This low-energy high-speed operation is enabled by a record electro-optic response, obtained in a vertical p-n junction device that at 250 pm V^-1 (30 GHz V^-1) is up to 10 times larger than prior demonstrations. In addition, this record electro-optic response is used to compensate for thermal drift over a 7.5 °C temperature range with little additional energy consumption (0.24 fJ per bit for a total energy consumption below 1.03 fJ per bit). The combined results of highly efficient modulation and electro-optic thermal compensation represent a new paradigm in modulator development and a major step towards single-digit femtojoule-class communications. PMID:24915772
NASA Astrophysics Data System (ADS)
Han, Xifeng; Zhou, Wen
2018-03-01
Optical vector radio-frequency (RF) signal generation based on optical carrier suppression (OCS) in one Mach-Zehnder modulator (MZM) can realize frequency doubling. In order to match the phase or amplitude of the recovered quadrature amplitude modulation (QAM) signal, phase or amplitude pre-coding is necessary on the transmitter side. The detected QAM signals usually have a non-uniform phase distribution after square-law detection at the photodiode because of the imperfect characteristics of the optical and electrical devices. We propose to use optimal decision thresholds matched to this non-uniform phase distribution to reduce the bit error rate (BER). By employing this scheme, the BER of a 16 Gbaud (32 Gbit/s) quadrature-phase-shift-keying (QPSK) millimeter wave signal at 36 GHz is improved from 1 × 10^-3 to 1 × 10^-4 at -4.6 dBm input power into the photodiode.
Buset, Jonathan M; El-Sahn, Ziad A; Plant, David V
2012-06-18
We demonstrate an improved overlapped-subcarrier multiplexed (O-SCM) WDM PON architecture transmitting over a single feeder using cost-sensitive intensity modulation/direct detection transceivers, data re-modulation and simple electronics. Incorporating electronic equalization and Reed-Solomon forward-error correction codes helps to overcome the bandwidth limitation of a remotely seeded reflective semiconductor optical amplifier (RSOA)-based ONU transmitter. The O-SCM architecture yields greater spectral efficiency and higher bit rates than many other SCM techniques while maintaining resilience to upstream impairments. We demonstrate full-duplex 5 Gb/s transmission over 20 km and analyze BER performance as a function of transmitted and received power. The architecture provides flexibility to network operators by relaxing common design constraints and enabling full-duplex operation at BER ~ 10^-10 over a wide range of OLT launch powers from 3.5 to 8 dBm.
Maximum-likelihood soft-decision decoding of block codes using the A* algorithm
NASA Technical Reports Server (NTRS)
Ekroot, L.; Dolinar, S.
1994-01-01
The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.
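The sketch below illustrates the A* idea on the small (7,4) Hamming code rather than the length-64 codes mentioned above, and omits refinements such as ordering positions by reliability. The search tree is over information bits, the path metric is the correlation between the received values and the partially determined codeword, and the heuristic adds the best-case |r_i| for every position not yet fixed; since that bound never underestimates the achievable metric, the first complete codeword popped is maximum likelihood. All names and parameters are illustrative.

```python
import heapq
import numpy as np

# Compact sketch of A*-style maximum-likelihood soft-decision decoding, shown on the
# small (7,4) Hamming code. Metric to maximize: sum_i r_i * (1 - 2*c_i), i.e. the
# correlation of the received values with the BPSK-mapped codeword.

G = np.array([[1, 0, 0, 0, 1, 1, 0],     # systematic generator of a (7,4) Hamming code
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
K, N = G.shape

def corr(r, bits, positions):
    """Correlation contribution of the codeword bits at the given positions."""
    return sum(r[p] * (1.0 - 2.0 * b) for p, b in zip(positions, bits))

def a_star_decode(r):
    # Queue entries are (-f, info_bits); f = exact metric of decided positions plus an
    # optimistic (admissible) bound sum(|r_i|) over positions not yet determined.
    heap = [(-float(np.sum(np.abs(r))), ())]
    while heap:
        _, info = heapq.heappop(heap)
        if len(info) == K:                       # first complete codeword popped is ML
            return np.array(info) @ G % 2
        for b in (0, 1):
            new = info + (b,)
            if len(new) == K:
                cw = np.array(new) @ G % 2
                f = corr(r, cw, range(N))        # exact value at a leaf (heuristic = 0)
            else:
                f = corr(r, new, range(len(new))) + float(np.sum(np.abs(r[len(new):])))
            heapq.heappush(heap, (-f, new))
    raise RuntimeError("unreachable for a non-empty code")

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sent = np.array([1, 0, 1, 1]) @ G % 2
    received = (1.0 - 2.0 * sent) + 0.6 * rng.normal(size=N)   # noisy BPSK observation
    print("sent:   ", sent)
    print("decoded:", a_star_decode(received))
```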
Alternative Line Coding Scheme with Fixed Dimming for Visible Light Communication
NASA Astrophysics Data System (ADS)
Niaz, M. T.; Imdad, F.; Kim, H. S.
2017-01-01
An alternative line coding scheme called fixed-dimming on/off keying (FD-OOK) is proposed for visible-light communication (VLC). FD-OOK reduces the flickering caused by a VLC transmitter and can maintain a 50% dimming level. A simple encoder and decoder are proposed which generate codes in which the number of bits representing one equals the number of bits representing zero. By keeping the numbers of ones and zeros equal, the change in the brightness of the lighting is minimized and kept constant at 50%, thereby reducing the flickering in VLC. The performance of FD-OOK is analysed with two parameters: the spectral efficiency and the power requirement.
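The abstract does not give FD-OOK's codebook, so the sketch below builds a generic fixed-dimming mapping in the same spirit: each 4-bit data word is assigned a 6-bit codeword containing exactly three ones and three zeros, so the long-run duty cycle stays at 50% regardless of the data. The table and word sizes are assumptions for illustration, not the scheme's actual code.

```python
from itertools import combinations

# Generic sketch of a fixed-dimming OOK line code: every transmitted codeword has as many
# ones as zeros, so the average brightness stays at 50%. The 4-bit-to-6-bit mapping below
# is illustrative; FD-OOK's actual codebook is not given in the abstract.

N = 6
balanced = [tuple(1 if i in ones else 0 for i in range(N))
            for ones in combinations(range(N), N // 2)]        # all 20 weight-3 words
ENCODE = {data: balanced[data] for data in range(16)}          # 4 data bits -> 6 chips
DECODE = {cw: data for data, cw in ENCODE.items()}

def encode(bits):
    """Encode a bit list (length a multiple of 4) into 50%-duty OOK chips."""
    out = []
    for i in range(0, len(bits), 4):
        word = int("".join(map(str, bits[i:i + 4])), 2)
        out.extend(ENCODE[word])
    return out

if __name__ == "__main__":
    chips = encode([1, 0, 1, 1, 0, 0, 1, 0])
    print(chips, "ones ratio:", sum(chips) / len(chips))       # ratio stays at 0.5
```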
Two-bit trinary full adder design based on restricted signed-digit numbers
NASA Astrophysics Data System (ADS)
Ahmed, J. U.; Awwal, A. A. S.; Karim, M. A.
1994-08-01
A 2-bit trinary full adder using a restricted set of a modified signed-digit trinary numeric system is designed. When cascaded together to design a multi-bit adder machine, the resulting system is able to operate at a speed independent of the size of the operands. An optical non-holographic content addressable memory based on binary coded arithmetic is considered for implementing the proposed adder.
NASA Astrophysics Data System (ADS)
Hsueh, Yu-Li; Rogge, Matthew S.; Shaw, Wei-Tao; Kim, Jaedon; Yamamoto, Shu; Kazovsky, Leonid G.
2005-09-01
A simple and cost-effective upgrade of existing passive optical networks (PONs) is proposed, which realizes service overlay by novel spectral-shaping line codes. A hierarchical coding procedure allows processing simplicity and achieves desired long-term spectral properties. Different code rates are supported, and the spectral shape can be properly tailored to adapt to different systems. The computation can be simplified by quantization of trigonometric functions. DC balance is achieved by passing the dc residual between processing windows. The proposed line codes tend to introduce bit transitions to avoid long consecutive identical bits and facilitate receiver clock recovery. Experiments demonstrate and compare several different optimized line codes. For a specific tolerable interference level, the optimal line code can easily be determined, which maximizes the data throughput. The service overlay using the line-coding technique leaves existing services and field-deployed fibers untouched but fully functional, providing a very flexible and economic way to upgrade existing PONs.
Local statistics adaptive entropy coding method for the improvement of H.26L VLC coding
NASA Astrophysics Data System (ADS)
Yoo, Kook-yeol; Kim, Jong D.; Choi, Byung-Sun; Lee, Yung Lyul
2000-05-01
In this paper, we propose an adaptive entropy coding method to improve the VLC coding efficiency of the H.26L TML-1 codec. First, we show that the VLC coding presented in TML-1 does not satisfy the sibling property of entropy coding. Then, we modify the coding method into a local-statistics-adaptive one that satisfies the property. The proposed method, based on the local symbol statistics, dynamically changes the mapping relationship between symbols and bit patterns in the VLC table according to the sibling property. Note that the codewords in the VLC table of the TML-1 codec are not changed. Since this changed mapping relationship is also derived at the decoder side by using the decoded symbols, the proposed VLC coding method does not require any overhead information. The simulation results show that the proposed method gives about 30% and 37% reductions in average bit rate for MB type and CBP information, respectively.
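A minimal sketch of the adaptation idea, under assumptions: the codeword set below is a placeholder table (shortest codeword first), and both encoder and decoder keep identical symbol counts so the most frequent symbols migrate toward the shortest codewords without any side information being sent. The actual TML-1 tables and the sibling-property test are not reproduced.

```python
# Minimal sketch of local-statistics-adaptive symbol-to-codeword mapping. The fixed
# codeword table (shortest first) is a placeholder for the TML-1 VLC table; only the
# mapping between symbols and codewords adapts, driven by counts that the encoder and
# the decoder maintain identically from the symbols already coded/decoded.

CODEWORDS = ["1", "01", "001", "0001", "00001", "000001"]    # placeholder VLC table

class AdaptiveMapper:
    def __init__(self, symbols):
        self.symbols = list(symbols)                  # current rank order (index -> symbol)
        self.count = {s: 0 for s in symbols}

    def _update(self, sym):
        self.count[sym] += 1
        # stable re-ranking: most frequent symbols move toward the shortest codewords
        self.symbols.sort(key=lambda s: -self.count[s])

    def encode(self, sym):
        cw = CODEWORDS[self.symbols.index(sym)]
        self._update(sym)
        return cw

    def decode(self, cw):
        sym = self.symbols[CODEWORDS.index(cw)]
        self._update(sym)
        return sym

if __name__ == "__main__":
    enc, dec = AdaptiveMapper("ABCDEF"), AdaptiveMapper("ABCDEF")
    msg = "ABABABCA"
    codewords = [enc.encode(s) for s in msg]
    print(codewords, "".join(dec.decode(c) for c in codewords))   # exact round trip
```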
Convolutional encoding of self-dual codes
NASA Technical Reports Server (NTRS)
Solomon, G.
1994-01-01
There exist almost complete convolutional encodings of self-dual codes, i.e., block codes of rate 1/2 with weights w ≡ 0 mod 4. The codes are of length 8m with the convolutional portion of length 8m-2 and the nonsystematic information of length 4m-1. The last two bits are parity checks on the two (4m-1)-length parity sequences. The final information bit complements one of the extended parity sequences of length 4m. Solomon and van Tilborg have developed algorithms to generate these for the Quadratic Residue (QR) Codes of lengths 48 and beyond. For these codes and reasonable constraint lengths, there are sequential decodings for both hard and soft decisions. There are also possible Viterbi-type decodings that may be simple, as in a convolutional encoding/decoding of the extended Golay Code. In addition, the previously found constraint length K = 9 for the QR (48,24;12) Code is lowered here to K = 8.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagans, K.G.; Clough, R.E.
2000-04-25
An optical key system comprises a battery-operated optical key and an isolated lock that derives both its operating power and unlock signals from the correct optical key. A light emitting diode or laser diode is included within the optical key and is connected to transmit a bit-serial password. The key user physically enters either the code-to-transmit directly, or an index to a pseudorandom number code, in the key. Such person identification numbers can be retained permanently, or ephemeral. When a send button is pressed, the key transmits a beam of light modulated with the password information. The modulated beam of light is received by a corresponding optical lock with a photovoltaic cell that produces enough power from the beam of light to operate a password-screen digital logic. In one application, an acceptable password allows a two watt power laser diode to pump ignition and timing information over a fiberoptic cable into a sealed engine compartment. The receipt of a good password allows the fuel pump, spark, and starter systems to each operate. Therefore, bypassing the lock mechanism as is now routine with automobile thieves is pointless because the engine is so thoroughly disabled.
Simultaneous message framing and error detection
NASA Technical Reports Server (NTRS)
Frey, A. H., Jr.
1968-01-01
Circuitry simultaneously inserts message framing information and detects noise errors in binary code data transmissions. Separate message groups are framed without requiring both framing bits and error-checking bits, and predetermined message sequences are separated from other message sequences without being hampered by intervening noise.
Improved classical and quantum random access codes
NASA Astrophysics Data System (ADS)
Liabøtrø, O.
2017-05-01
A (quantum) random access code ((Q)RAC) is a scheme that encodes n bits into m (qu)bits such that any of the n bits can be recovered with a worst-case probability p > 1/2. We generalize (Q)RACs to a scheme encoding n d-levels into m (quantum) d-levels such that any d-level can be recovered with the probability for every wrong outcome value being less than 1/d. We construct explicit solutions for all n ≤ (d^(2m) - 1)/(d - 1). For d = 2, the constructions coincide with those previously known. We show that the (Q)RACs are d-parity oblivious, generalizing ordinary parity obliviousness. We further investigate optimization of the success probabilities. For d = 2, we use the measure operators of the previously best-known solutions, but improve the encoding states to give a higher success probability. We conjecture that for maximal (n = 4^m - 1, m, p) QRACs, p = (1/2){1 + [(√3 + 1)m - 1]^(-1)} is possible, and show that it is an upper bound for the measure operators that we use. We then compare (n, m, p_q) QRACs with classical (n, 2m, p_c) RACs. We can always find p_q ≥ p_c, but the classical code gives information about every input bit simultaneously, while the QRAC only gives information about a subset. For several different (n, 2, p) QRACs, we see the same trade-off, as the best p values are obtained when the number of bits that can be obtained simultaneously is as small as possible. The trade-off is connected to parity obliviousness, since high-certainty information about several bits can be used to calculate probabilities for parities of subsets.
Communication system analysis for manned space flight
NASA Technical Reports Server (NTRS)
Schilling, D. L.
1977-01-01
One- and two-dimensional adaptive delta modulator (ADM) algorithms are discussed and compared. Results are shown for bit rates of two bits/pixel, one bit/pixel and 0.5 bits/pixel. Pictures showing the difference between the encoded-decoded pictures and the original pictures are presented. The effect of channel errors on the reconstructed picture is illustrated. A two-dimensional ADM using interframe encoding is also presented. This system operates at the rate of two bits/pixel and produces excellent quality pictures when there is little motion. The effect of large amounts of motion on the reconstructed picture is described.
NASA Technical Reports Server (NTRS)
Safren, H. G.
1987-01-01
The effect of atmospheric turbulence on the bit error rate of a space-to-ground near-infrared laser communications link is investigated, for a link using binary pulse position modulation and an avalanche photodiode detector. Formulas are presented for the mean and variance of the bit error rate as a function of signal strength. Because these formulas require numerical integration, they are of limited practical use. Approximate formulas are derived which are easy to compute and sufficiently accurate for system feasibility studies, as shown by numerical comparison with the exact formulas. A very simple formula is derived for the bit error rate as a function of signal strength, which requires only the evaluation of an error function. It is shown by numerical calculations that, for realistic values of the system parameters, the increase in the bit error rate due to turbulence does not exceed about thirty percent for signal strengths of four hundred photons per bit or less. The increase in signal strength required to maintain an error rate of one in 10 million is about one or two tenths of a dB.
Subjective quality evaluation of low-bit-rate video
NASA Astrophysics Data System (ADS)
Masry, Mark; Hemami, Sheila S.; Osberger, Wilfried M.; Rohaly, Ann M.
2001-06-01
A subjective quality evaluation was performed to quantify viewer responses to visual defects that appear in low-bit-rate video at full and reduced frame rates. The stimuli were eight sequences compressed by three motion-compensated encoders - Sorenson Video, H.263+ and a wavelet-based coder - operating at five bit/frame rate combinations. The stimulus sequences exhibited obvious coding artifacts whose nature differed across the three coders. The subjective evaluation was performed using the Single Stimulus Continuous Quality Evaluation method of ITU-R Rec. BT.500-8. Viewers watched concatenated coded test sequences and continuously registered the perceived quality using a slider device. Data from 19 viewers was collected. An analysis of their responses to the presence of various artifacts across the range of possible coding conditions and content is presented. The effects of blockiness and blurriness on perceived quality are examined. The effects of changes in frame rate on perceived quality are found to be related to the nature of the motion in the sequence.
NASA Technical Reports Server (NTRS)
Miller, Susan P.; Kappes, J. Mark; Layer, David H.; Johnson, Peter N.
1990-01-01
A jointly optimized coded modulation system is described which was designed, built, and tested by COMSAT Laboratories for NASA LeRC and which provides a bandwidth efficiency of 2 bits/s/Hz at an information rate of 160 Mbit/s. A high-speed rate 8/9 encoder with a Viterbi decoder and an octal PSK modem are used to achieve this. The BER performance is approximately 1 dB from the theoretically calculated value for this system at a BER of 5 × 10^-7 under nominal conditions. The system operates in burst mode for downlink applications, and tests have demonstrated very little degradation in performance with frequency and level offset. Unique word miss rate measurements were conducted which demonstrate reliable acquisition at low values of Eb/No. Codec self tests have verified the performance of this subsystem in a stand-alone mode. The codec is capable of operation at a 200 Mbit/s information rate as demonstrated using a codec test set which introduces noise digitally. The measured performance is within 0.2 dB of the computer-simulated predictions. A gate array implementation of the most time-critical element of the high speed Viterbi decoder was completed. This gate array add-compare-select chip significantly reduces the power consumption and improves the manufacturability of the decoder. This chip has general application in the implementation of high speed Viterbi decoders.
A high-speed digital signal processor for atmospheric radar, part 7.3A
NASA Technical Reports Server (NTRS)
Brosnahan, J. W.; Woodard, D. M.
1984-01-01
The Model SP-320 device is a monolithic realization of a complex general purpose signal processor, incorporating such features as a 32-bit ALU, a 16-bit x 16-bit combinatorial multiplier, and a 16-bit barrel shifter. The SP-320 is designed to operate as a slave processor to a host general purpose computer in applications such as coherent integration of a radar return signal in multiple ranges, or dedicated FFT processing. Presently available is an I/O module conforming to the Intel Multichannel interface standard; other I/O modules will be designed to meet specific user requirements. The main processor board includes input and output FIFO (First In First Out) memories, both with depths of 4096 words, to permit asynchronous operation between the source of data and the host computer. This design permits burst data rates in excess of 5 Mwords/s.
Comparison of three coding strategies for a low cost structure light scanner
NASA Astrophysics Data System (ADS)
Xiong, Hanwei; Xu, Jun; Xu, Chenxi; Pan, Ming
2014-12-01
Coded structured light is widely used for 3D scanning, and different coding strategies are adopted to suit different goals. In this paper, three coding strategies are compared, and one of them is selected to implement a low cost structured light scanner for under €100. To reach this goal, the projector and the video camera must be the cheapest available, which leads to some problems related to light coding. For the cheapest projector, a complex intensity pattern cannot be generated; even if it could be generated, it could not be captured by the cheapest camera. Based on Gray codes, three different strategies are implemented and compared, called phase-shift, line-shift, and bit-shift, respectively. The bit-shift Gray code is the contribution of this paper, in which a simple, stable light pattern is used to generate dense (mean point distance < 0.4 mm) and accurate (mean error < 0.1 mm) results. The whole algorithm details and some examples are presented in the paper.
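As background for the strategies compared above, the sketch below generates the classic Gray-code stripe patterns on which all three variants are built: each projector column is labeled by the Gray code of its index, one binary pattern per bit plane, and the per-pixel bits observed by the camera decode back to a column index. The paper's bit-shift refinement (shifting these patterns) is not reproduced; sizes are illustrative.

```python
import numpy as np

# Sketch of generating and decoding Gray-code stripe patterns for structured light.
# Each projector column is labeled by the Gray code of its index, one binary pattern per
# bit plane; the paper's bit-shift refinement is not reproduced here.

def gray_patterns(width: int, height: int, n_bits: int) -> np.ndarray:
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                              # reflected Gray code per column
    planes = [((gray >> k) & 1) for k in reversed(range(n_bits))]
    return np.stack([np.tile(p * 255, (height, 1)) for p in planes]).astype(np.uint8)

def decode_columns(bit_planes: np.ndarray) -> np.ndarray:
    """Recover column indices from per-pixel Gray-code bit planes (MSB first)."""
    g = np.zeros(bit_planes.shape[1:], dtype=int)
    for plane in bit_planes:
        g = (g << 1) | plane
    b, mask = g.copy(), g >> 1
    while mask.any():                                      # Gray -> binary conversion
        b ^= mask
        mask >>= 1
    return b

if __name__ == "__main__":
    pats = gray_patterns(width=8, height=2, n_bits=3)      # 3 patterns of 2x8 pixels
    observed = (pats[:, 0, :] > 0).astype(int)             # thresholded bits along one row
    print(decode_columns(observed))                        # -> [0 1 2 3 4 5 6 7]
```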
Constrained motion estimation-based error resilient coding for HEVC
NASA Astrophysics Data System (ADS)
Guo, Weihan; Zhang, Yongfei; Li, Bo
2018-04-01
Unreliable communication channels may cause packet losses and bit errors in the videos transmitted through them, which will cause severe video quality degradation. This is even worse for HEVC, since more advanced and powerful motion estimation methods are introduced to further remove the inter-frame dependency and thus improve the coding efficiency. Once a Motion Vector (MV) is lost or corrupted, it will cause distortion in the decoded frame. More importantly, due to motion compensation, the error will propagate along the motion prediction path, accumulate over time, and significantly degrade the overall video presentation quality. To address this problem, we study the problem of encoder-side error resilient coding for HEVC and propose a constrained motion estimation scheme to mitigate the problem of error propagation to subsequent frames. The approach is achieved by cutting off MV dependencies and limiting the block regions which are predicted by temporal motion vectors. The experimental results show that the proposed method can effectively suppress the error propagation caused by bit errors in motion vectors and can improve the robustness of the stream over bit-error channels. When the bit error probability is 10^-5, an increase of the decoded video quality (PSNR) by up to 1.310 dB, and on average 0.762 dB, can be achieved compared to the reference HEVC.
Performance analysis of LDPC codes on OOK terahertz wireless channels
NASA Astrophysics Data System (ADS)
Chun, Liu; Chang, Wang; Jun-Cheng, Cao
2016-02-01
Atmospheric absorption, scattering, and scintillation are the major causes of deterioration in the transmission quality of terahertz (THz) wireless communications. An error control coding scheme based on low density parity check (LDPC) codes with a soft decision decoding algorithm is proposed to improve the bit-error-rate (BER) performance of an on-off keying (OOK) modulated THz signal through an atmospheric channel. The THz wave propagation characteristics and the channel model in the atmosphere are set up. Numerical simulations validate the strong performance of LDPC codes against atmospheric fading and demonstrate their huge potential in future ultra-high-speed (beyond Gbps) THz communications. Project supported by the National Key Basic Research Program of China (Grant No. 2014CB339803), the National High Technology Research and Development Program of China (Grant No. 2011AA010205), the National Natural Science Foundation of China (Grant Nos. 61131006, 61321492, and 61204135), the Major National Development Project of Scientific Instrument and Equipment (Grant No. 2011YQ150021), the National Science and Technology Major Project (Grant No. 2011ZX02707), the International Collaboration and Innovation Program on High Mobility Materials Engineering of the Chinese Academy of Sciences, and the Shanghai Municipal Commission of Science and Technology (Grant No. 14530711300).
NASA Astrophysics Data System (ADS)
Gao, Shanghua; Xue, Bing
2017-04-01
The dynamic range of the currently most widely used 24-bit seismic data acquisition devices is 10-20 dB lower than that of broadband seismometers, and this can affect the completeness of seismic waveform recordings under certain conditions. However, this problem is not easy to solve because of the lack of analog-to-digital converter (ADC) chips with more than 24 bits on the market. So the key difficulty for higher-resolution data acquisition devices lies in achieving an ADC circuit with more than 24 bits. In this paper, we propose a method in which an adder, an integrator, a digital-to-analog converter chip, a field-programmable gate array, and an existing low-resolution ADC chip are used to build a third-order 16-bit oversampling delta-sigma modulator. This modulator is equipped with a digital decimation filter, thus forming a complete analog-to-digital converting circuit. Experimental results show that, within the 0.1-40 Hz frequency range, the circuit board's dynamic range reaches 158.2 dB, its resolution reaches 25.99 bits, and its linearity error is below 2.5 ppm, which is better than what is achieved by the commercial 24-bit ADC chips ADS1281 and CS5371. This demonstrates that the proposed method may alleviate or even solve the amplitude-limitation problem that broadband observation systems so commonly face during strong earthquakes.
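To illustrate the oversampling principle behind the design above (though not its third-order 16-bit loop), the sketch below simulates a first-order, 1-bit delta-sigma modulator: an integrator accumulates the difference between the input and the fed-back quantized output, shaping quantization noise away from the 0.1-40 Hz signal band, which a crude decimation filter then removes. All parameters are illustrative.

```python
import numpy as np

# Simplified illustration of oversampling delta-sigma modulation: a first-order loop with
# a 1-bit quantizer, not the third-order 16-bit design described above.

def delta_sigma_first_order(x: np.ndarray) -> np.ndarray:
    acc, fb = 0.0, 0.0
    out = np.empty_like(x)
    for i, xi in enumerate(x):
        acc += xi - fb                      # integrate (input minus DAC feedback)
        out[i] = 1.0 if acc >= 0 else -1.0  # 1-bit quantizer
        fb = out[i]                         # feedback of the quantized output
    return out

if __name__ == "__main__":
    fs, f0, n = 20_000, 40.0, 40_000        # heavy oversampling of a 40 Hz tone
    t = np.arange(n) / fs
    x = 0.5 * np.sin(2 * np.pi * f0 * t)
    y = delta_sigma_first_order(x)
    dec = 100                               # crude decimation: moving average + downsample
    rec = np.convolve(y, np.ones(dec) / dec, mode="same")[::dec]
    err = rec - x[::dec]
    print("in-band reconstruction error (rms):", round(float(np.sqrt(np.mean(err ** 2))), 4))
```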
NASA Astrophysics Data System (ADS)
Aldouri, Muthana; Aljunid, S. A.; Ahmad, R. Badlishah; Fadhil, Hilal A.
2011-06-01
In order to compare PIN photodetectors and avalanche photodiodes (APDs), a system using the double weight (DW) code is evaluated for optical spectrum CDMA in an FTTH network with a point-to-multipoint (P2MP) application. The performance of PIN versus APD detection is compared through simulation using OptiSystem software version 7. In this paper two networks are designed: one using a PIN photodetector and the second using an APD photodiode, with both systems operated with and without an erbium-doped fiber amplifier (EDFA). It is found that the APD photodiode in this system is better than the PIN photodetector in all simulation results. The conversion used a Mach-Zehnder interferometer (MZI) wavelength converter. We also study a proposed detection scheme known as the AND subtraction detection technique, implemented with fiber Bragg gratings (FBGs) acting as encoder and decoder. The FBGs are used to encode and decode the spectral amplitude coding, namely the double weight (DW) code, in Optical Code Division Multiple Access (OCDMA). The performance is characterized through the bit error rate (BER) and bit rate (BR), as well as the received power at various bit rates.
Serial-to-parallel color-TV converter
NASA Technical Reports Server (NTRS)
Doak, T. W.; Merwin, R. B.; Zuckswert, S. E.; Sepper, W.
1976-01-01
A solid-state analog-to-digital converter eliminates flicker and problems with time base stability and gain variation in sequential color TV cameras. The device includes a 3-bit delta modulator; a two-field memory; timing, switching, and sync networks; and three 3-bit delta demodulators.
Space-Shuttle Emulator Software
NASA Technical Reports Server (NTRS)
Arnold, Scott; Askew, Bill; Barry, Matthew R.; Leigh, Agnes; Mermelstein, Scott; Owens, James; Payne, Dan; Pemble, Jim; Sollinger, John; Thompson, Hiram;
2007-01-01
A package of software has been developed to execute a raw binary image of the space shuttle flight software for simulation of the computational effects of operation of space shuttle avionics. This software can be run on inexpensive computer workstations. Heretofore, it was necessary to use real flight computers to perform such tests and simulations. The package includes a program that emulates the space shuttle orbiter general-purpose computer [consisting of a central processing unit (CPU), input/output processor (IOP), master sequence controller, and bus-control elements]; an emulator of the orbiter display electronics unit and models of the associated cathode-ray tubes, keyboards, and switch controls; computational models of the data-bus network; computational models of the multiplexer-demultiplexer components; an emulation of the pulse-code modulation master unit; an emulation of the payload data interleaver; a model of the master timing unit; a model of the mass memory unit; and a software component that ensures compatibility of telemetry and command services between the simulated space shuttle avionics and a mission control center.
Multiple description distributed image coding with side information for mobile wireless transmission
NASA Astrophysics Data System (ADS)
Wu, Min; Song, Daewon; Chen, Chang Wen
2005-03-01
Multiple description coding (MDC) is a source coding technique that involves coding the source information into multiple descriptions and then transmitting them over different channels in a packet network or an error-prone wireless environment, to achieve graceful degradation if parts of the descriptions are lost at the receiver. In this paper, we propose a multiple description distributed wavelet zero tree image coding system for mobile wireless transmission. We provide two innovations to achieve an excellent error resilient capability. First, when MDC is applied to wavelet subband based image coding, it is possible to introduce correlation between the descriptions in each subband. We consider using such correlation, as well as a potentially error-corrupted description, as side information in the decoding, formulating the MDC decoding as a Wyner-Ziv decoding problem. If only part of the descriptions is lost, their correlation information is still available, and the proposed Wyner-Ziv decoder can recover a lost description by using the correlation information and the error-corrupted description as side information. Second, within each description, single-bitstream wavelet zero tree coding is very vulnerable to channel errors. The first bit error may cause the decoder to discard all subsequent bits, whether or not the subsequent bits are correctly received. Therefore, we integrate multiple description scalar quantization (MDSQ) with the multiple wavelet tree image coding method to reduce error propagation. We first group wavelet coefficients into multiple trees according to the parent-child relationship and then code them separately by the SPIHT algorithm to form multiple bitstreams. Such decomposition is able to reduce error propagation and therefore improve the error correcting capability of the Wyner-Ziv decoder. Experimental results show that the proposed scheme not only exhibits an excellent error resilient performance but also demonstrates graceful degradation over the packet loss rate.
Scaling vectors of attoJoule per bit modulators
NASA Astrophysics Data System (ADS)
Sorger, Volker J.; Amin, Rubab; Khurgin, Jacob B.; Ma, Zhizhen; Dalir, Hamed; Khan, Sikandar
2018-01-01
Electro-optic modulation performs the conversion between the electrical and optical domains, with applications in data communication for optical interconnects, but also for novel optical computing algorithms such as providing nonlinearity at the output stage of optical perceptrons in neuromorphic analog optical computing. While resembling an optical transistor, the weak light-matter interaction makes modulators 10^5 times larger compared to their electronic counterparts. Since the clock frequency for photonics on-chip has a power-overhead sweet spot around tens of GHz, ultrafast modulation may only be required in long-distance communication, not for short on-chip links. Hence, the search is open for power-efficient on-chip modulators beyond the solutions offered by foundries to date. Here, we show scaling vectors towards attojoule-per-bit efficient modulators on-chip as well as some experimental demonstrations of novel plasmonic modulators with sub-fJ/bit efficiencies. Our parametric study of placing different actively modulated materials into plasmonic versus photonic optical modes shows that 2D materials overcompensate their minuscule modal overlap by their unity-high index change. Furthermore, we reveal that the metal used in plasmonic-based modulators not only serves as an electrical contact, but also enables low electrical series resistances leading to near-ideal capacitors. We then discuss the first experimental demonstration of a photon-plasmon-hybrid graphene-based electro-absorption modulator on silicon. The device shows sub-1 V steep switching enabled by near-ideal electrostatics, delivering a high 0.05 dB V^-1 μm^-1 performance while requiring only 110 aJ/bit. Improving on this demonstration, we discuss a plasmonic slot-based graphene modulator design, where the polarization of the plasmonic mode aligns with graphene's in-plane dimension and a push-pull dual-gating scheme enables 2 dB V^-1 μm^-1 efficient modulation, allowing the device to be just 770 nm long for 3 dB small-signal modulation. Lastly, comparing the switching energy of transistors to modulators shows that modulators based on emerging materials and plasmonic-silicon hybrid integration perform on par relative to their electronic counterparts. This in turn allows for a device-enabled two-orders-of-magnitude improvement of electrical-optical co-integrated networks-on-chip over electronic-only architectures. The latter opens technological opportunities in cognitive computing, dynamic data-driven applications systems, and optical analog computer engines including neuromorphic photonic computing.
Universal Noiseless Coding Subroutines
NASA Technical Reports Server (NTRS)
Schlutsmeyer, A. P.; Rice, R. F.
1986-01-01
Software package consists of FORTRAN subroutines that perform universal noiseless coding and decoding of integer and binary data strings. Purpose of this type of coding to achieve data compression in sense that coded data represents original data perfectly (noiselessly) while taking fewer bits to do so. Routines universal because they apply to virtually any "real-world" data source.
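As a concrete, if simplified, example of the noiseless property described above, the sketch below implements generic Rice (Golomb power-of-two) coding of non-negative integers and round-trips a small data set exactly. It is not the package's actual bit format or FORTRAN interface, just an illustration in the same spirit.

```python
# Generic Rice (Golomb power-of-two) coding of non-negative integers, as a stand-in
# illustration of universal noiseless coding: the coded bits represent the data exactly,
# simply using fewer bits when the source statistics match the chosen parameter k.
# This is not the NASA package's actual bit format.

def rice_encode(values, k):
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits += [1] * q + [0]                                 # quotient in unary, 0-terminated
        bits += [(r >> i) & 1 for i in reversed(range(k))]    # remainder in k bits
    return bits

def rice_decode(bits, k, count):
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == 1:                                   # read the unary quotient
            q, i = q + 1, i + 1
        i += 1                                                # skip the terminating 0
        r = 0
        for _ in range(k):
            r, i = (r << 1) | bits[i], i + 1
        out.append((q << k) | r)
    return out

if __name__ == "__main__":
    data = [0, 3, 7, 2, 15, 1]
    code = rice_encode(data, k=2)
    assert rice_decode(code, k=2, count=len(data)) == data    # noiseless: exact round trip
    print(len(code), "bits for", data)
```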
Deductive Evaluation: Formal Code Analysis With Low User Burden
NASA Technical Reports Server (NTRS)
Di Vito, Ben. L
2016-01-01
We describe a framework for symbolically evaluating iterative C code using a deductive approach that automatically discovers and proves program properties. Although verification is not performed, the method can infer detailed program behavior. Software engineering work flows could be enhanced by this type of analysis. Floyd-Hoare verification principles are applied to synthesize loop invariants, using a library of iteration-specific deductive knowledge. When needed, theorem proving is interleaved with evaluation and performed on the fly. Evaluation results take the form of inferred expressions and type constraints for values of program variables. An implementation using PVS (Prototype Verification System) is presented along with results for sample C functions.
Generation and transmission of DPSK signals using a directly modulated passive feedback laser.
Karar, Abdullah S; Gao, Ying; Zhong, Kang Ping; Ke, Jian Hong; Cartledge, John C
2012-12-10
The generation of differential-phase-shift keying (DPSK) signals is demonstrated using a directly modulated passive feedback laser at 10.709 Gb/s, 14 Gb/s and 16 Gb/s. The quality of the DPSK signals is assessed using both noncoherent detection for a bit rate of 10.709 Gb/s and coherent detection with digital signal processing involving a look-up table pattern-dependent distortion compensator. Transmission over a passive link consisting of 100 km of single mode fiber at a bit rate of 10.709 Gb/s is achieved with a received optical power of -45 dBm at a bit-error ratio of 3.8 × 10^-3 and a 49 dB loss margin.
Advanced technology satellite demodulator development
NASA Technical Reports Server (NTRS)
Ames, Stephen A.
1989-01-01
Ford Aerospace has developed a proof-of-concept satellite 8 phase shift keying (PSK) modulation and coding system operating in the Time Division Multiple Access (TDMA) mode at a data rate of 200 Mbps using rate 5/6 forward error correction coding. The 80 Msps 8-PSK modem was developed in a mostly digital form and is amenable to an ASIC realization in the next phase of development. The codec was developed as a paper design only. The power efficiency goal was to be within 2 dB of theoretical at a bit error rate (BER) of 5 × 10^-7, while the measured implementation loss was 4.5 dB. The bandwidth efficiency goal was 2 bits/sec/Hz, while the realized bandwidth efficiency was 1.8 bits/sec/Hz. The burst format used a preamble of only 40 8-PSK symbol times, including 32 symbols of all zeros and an eight-symbol unique word. The modem and associated special test equipment (STE) were fabricated mostly on a specially designed stitch-weld board, although a few of the highest rate circuits were built on printed circuit cards. All the digital circuits were ECL to support clock rates from 80 MHz to 360 MHz. The transmitter and receiver matched filters were square-root Nyquist bandpass filters realized at the 3.37 GHz i.f. The modem operated as a coherent system although no analog phase-locked loop (PLL) was employed. Within the budgetary constraints of the program, the approach to the demodulator has been proven and is eligible to proceed to the next phase of development of a satellite demodulator engineering model. This would entail the development of an ASIC version of the digital portion of the demodulator, an MMIC version of the quadrature detector, and SAW Nyquist filters to realize the bandwidth efficiency.
NASA Astrophysics Data System (ADS)
Nasution, A. B.; Efendi, S.; Suwilo, S.
2018-04-01
The number of bits embedded per audio sample affects the PSNR of the cover image and hence the fidelity of the stego-image: inserting 8-bit audio samples with the LSB algorithm produces a measurable loss of image quality. In this research, audio samples are therefore embedded using 5 bits with the MLSB algorithm to reduce the amount of inserted data; the audio samples are first compressed with the Arithmetic Coding algorithm to reduce file size and encrypted with the Triple DES algorithm to better secure them. The resulting PSNR exceeds 50 dB, so the image quality remains good, since PSNR values above 40 dB are generally considered acceptable.
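For context, the sketch below shows plain LSB substitution into the low bits of 8-bit cover pixels and the PSNR measure used to judge fidelity. It is a generic illustration under that assumption, not the paper's MLSB variant, and the Arithmetic Coding and Triple DES stages are omitted.

```python
import numpy as np

def embed_lsb(cover, payload_bits, n_lsb=1):
    """Embed payload bits into the n_lsb least-significant bits of 8-bit cover pixels.
    Plain LSB substitution; the MLSB variant and the compression/encryption stages
    described above are not reproduced here."""
    flat = cover.flatten().astype(np.uint8)
    bits = np.asarray(payload_bits, dtype=np.uint8)
    n_pixels = int(np.ceil(len(bits) / n_lsb))       # n_lsb payload bits per pixel
    assert n_pixels <= flat.size, "payload too large for cover"
    mask = 0xFF ^ ((1 << n_lsb) - 1)
    for p in range(n_pixels):
        chunk = bits[p * n_lsb:(p + 1) * n_lsb]
        value = 0
        for b in chunk:
            value = (value << 1) | int(b)
        value <<= (n_lsb - len(chunk))               # pad a short final chunk
        flat[p] = (flat[p] & mask) | value
    return flat.reshape(cover.shape)

def psnr(original, modified):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(float) - modified.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
```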
Augmented burst-error correction for UNICON laser memory. [digital memory
NASA Technical Reports Server (NTRS)
Lim, R. S.
1974-01-01
A single-burst-error correction system is described for data stored in the UNICON laser memory. In the proposed system, a long Fire code with code length n greater than 16,768 bits is used as an outer code to augment an existing, shorter inner Fire code for burst-error correction. The inner Fire code is an (80,64) code shortened from the (630,614) code, and it is used to correct a single burst error on a per-word basis with burst length b less than or equal to 6. The outer code, with b less than or equal to 12, would be used to correct a single burst error on a per-page basis, where a page consists of 512 32-bit words. In the proposed system, the encoding and error-detection processes are implemented in hardware. A minicomputer, currently used as the UNICON memory-management processor, is used on a time-demanding basis for error correction. Based upon existing error statistics, this combination of an inner code and an outer code would enable the UNICON system to attain a very low error rate in spite of flaws affecting the recorded data.
Combined trellis coding with asymmetric MPSK modulation: An MSAT-X report
NASA Technical Reports Server (NTRS)
Simon, M. K.; Divsalar, D.
1985-01-01
Traditionally, symmetric multiple-phase-shift-keyed (MPSK) signal constellations, i.e., those with uniformly spaced signal points around the circle, have been used for both uncoded and coded systems. Although symmetric MPSK signal constellations are optimum for systems with no coding, the same is not necessarily true for coded systems. It is shown that by designing the signal constellation to be asymmetric, one can, in many instances, obtain a significant performance improvement over the traditional symmetric MPSK constellations combined with trellis coding. The joint design of rate-n/(n + 1) trellis codes and asymmetric 2^(n+1)-point MPSK is considered, which has unity bandwidth expansion relative to uncoded 2^n-point symmetric MPSK. The asymptotic performance gains due to coding and asymmetry are evaluated in terms of the minimum free Euclidean distance d_free of the trellis. A comparison of the maximum value of this performance measure with the minimum distance d_min of the uncoded system indicates the maximum reduction in required E_b/N_0 that can be achieved for arbitrarily small system bit-error rates. It is emphasized that the introduction of asymmetry into the signal set does not affect the bandwidth or power requirements of the system; hence, the above-mentioned improvements in performance come at little or no cost. Asymmetric MPSK signal sets in coded systems appear in the work of Divsalar.
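A small numerical illustration of the constellation geometry involved (not a free-distance calculation for any specific trellis code): the sketch below builds an asymmetric 4-PSK set parameterized by the asymmetry angle and reports its uncoded minimum Euclidean distance, which is largest at the symmetric setting, underscoring that the benefit of asymmetry appears only when the constellation is designed jointly with the code.

```python
import numpy as np

def asymmetric_4psk(phi):
    """Unit-energy 4-PSK with asymmetry: points at angles 0, phi, pi, pi + phi.
    phi = pi/2 recovers the conventional symmetric constellation."""
    angles = np.array([0.0, phi, np.pi, np.pi + phi])
    return np.exp(1j * angles)

def pairwise_min_distance(points):
    """Minimum Euclidean distance over all distinct signal-point pairs."""
    d = np.abs(points[:, None] - points[None, :])
    return d[d > 0].min()

# Sweep the asymmetry angle; the uncoded minimum distance peaks at phi = pi/2
# (the symmetric constellation), matching the statement above that symmetry is
# optimum for uncoded transmission.
for phi in np.linspace(np.pi / 6, np.pi / 2, 5):
    print(f"phi = {phi:5.3f} rad  d_min = {pairwise_min_distance(asymmetric_4psk(phi)):.3f}")
```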
Internet Protocol Security (IPSEC): Testing and Implications on IPv4 and IPv6 Networks
2008-08-27
Message Authentication Code-Message Digest 5-96). Due to the processing power consumption and slowness of public key authentication methods, RSA ... MODP) group with a 768-bit modulus 2. a MODP group with a 1024-bit modulus 3. an Elliptic Curve Group over GF[2^n] (EC2N) group with a 155-bit ... nonces, digital signatures using the Digital Signature Algorithm, and the Rivest-Shamir-Adleman (RSA) algorithm. For more information about the
S-EMG signal compression based on domain transformation and spectral shape dynamic bit allocation
2014-01-01
Background: Surface electromyographic (S-EMG) signal processing has been emerging in the past few years due to its non-invasive assessment of muscle function and structure, and because fast-growing digital technology brings about new solutions and applications. Factors such as sampling rate, quantization word length, number of channels and experiment duration can lead to a potentially large volume of data, so efficient transmission and/or storage of S-EMG signals is an active research issue and is the aim of this work. Methods: This paper presents an algorithm for the data compression of surface electromyographic (S-EMG) signals recorded during an isometric contraction protocol and during dynamic experimental protocols such as cycling. The proposed algorithm is based on the discrete wavelet transform to perform spectral decomposition and de-correlation, on a dynamic bit allocation procedure to code the wavelet-transformed coefficients, and on entropy coding to minimize the remaining redundancy and to pack all data. The bit allocation scheme is based on mathematically decreasing spectral-shape models, which assign a shorter digital word length to high-frequency wavelet-transformed coefficients. Four bit-allocation spectral shapes were implemented and compared: decreasing exponential, decreasing linear, decreasing square-root, and rotated hyperbolic tangent. Results: The proposed method is demonstrated and evaluated for an isometric protocol and for a dynamic protocol using a real S-EMG signal data bank. Objective performance evaluation metrics are presented, along with comparisons with other encoders proposed in the scientific literature. Conclusions: The decreasing bit-allocation shape applied to the quantized wavelet coefficients, combined with arithmetic coding, results in an efficient procedure. The performance comparisons of the proposed S-EMG data compression algorithm with established techniques found in the scientific literature have shown promising results. PMID:24571620
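To make the bit-allocation idea concrete, here is a minimal Python sketch of a decreasing-exponential word-length assignment across wavelet subbands followed by uniform quantization. The constants (b_max, decay) and the quantizer are illustrative assumptions, not the models or parameters evaluated in the paper.

```python
import numpy as np

def exponential_bit_allocation(n_subbands, b_max, decay):
    """Assign a word length to each wavelet subband following a decreasing
    exponential spectral-shape model: low-frequency subbands get b_max bits,
    high-frequency subbands progressively fewer."""
    k = np.arange(n_subbands)                      # 0 = lowest-frequency subband
    bits = np.round(b_max * np.exp(-decay * k)).astype(int)
    return np.clip(bits, 1, b_max)                 # keep at least 1 bit per subband

def quantize(coeffs, n_bits):
    """Uniform quantization of wavelet coefficients to n_bits."""
    levels = 2 ** n_bits
    c_max = np.max(np.abs(coeffs)) or 1.0
    step = 2 * c_max / levels
    return np.round(coeffs / step) * step

# Example: 8 wavelet subbands, 12-bit budget for the approximation band.
allocation = exponential_bit_allocation(n_subbands=8, b_max=12, decay=0.35)
print(allocation)   # decreasing word lengths, e.g. [12  8  6  4  3  2  1  1]
```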
Rotary steerable motor system for underground drilling
Turner, William E.; Perry, Carl A.; Wassell, Mark E.; Barbely, Jason R.; Burgess, Daniel E.; Cobern, Martin E.
2010-07-27
A preferred embodiment of a system for rotating and guiding a drill bit in an underground bore includes a drilling motor and a drive shaft coupled to the drilling motor so that the drill bit can be rotated by the drilling motor. The system further includes a guidance module having an actuating arm movable between an extended position, wherein the actuating arm can contact a surface of the bore and thereby exert a force on the housing of the guidance module, and a retracted position.
Rotary steerable motor system for underground drilling
Turner, William E [Durham, CT; Perry, Carl A [Middletown, CT; Wassell, Mark E [Kingwood, TX; Barbely, Jason R [Middletown, CT; Burgess, Daniel E [Middletown, CT; Cobern, Martin E [Cheshire, CT
2008-06-24
A preferred embodiment of a system for rotating and guiding a drill bit in an underground bore includes a drilling motor and a drive shaft coupled to the drilling motor so that the drill bit can be rotated by the drilling motor. The system further includes a guidance module having an actuating arm movable between an extended position, wherein the actuating arm can contact a surface of the bore and thereby exert a force on the housing of the guidance module, and a retracted position.
RETROSPECTIVE DETECTION OF INTERLEAVED SLICE ACQUISITION PARAMETERS FROM FMRI DATA
Parker, David; Rotival, Georges; Laine, Andrew; Razlighi, Qolamreza R.
2015-01-01
To minimize slice excitation leakage to adjacent slices, interleaved slice acquisition is now performed routinely in fMRI scanners. In interleaved slice acquisition, the number of slices skipped between two consecutive slice acquisitions is often referred to as the 'interleave parameter'; the loss of this parameter can be catastrophic for the analysis of fMRI data. In this article we present a method to retrospectively detect the interleave parameter and the axis along which it is applied. Our method relies on the smoothness of the temporal-distance correlation function, which becomes disrupted along the axis on which interleaved slice acquisition is applied. We examined this method on simulated and real data in the presence of fMRI artifacts such as physiological noise and motion. We also examined the reliability of this method in detecting different types of interleave parameters and demonstrated an accuracy of about 94% on more than 1000 real fMRI scans. PMID:26161244
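A minimal sketch of the underlying idea, under the assumption of a 4-D (x, y, z, t) array and a simple non-smoothness score; the statistic actually used in the paper may differ, and recovering the interleave parameter itself would require a further step beyond identifying the acquisition axis.

```python
import numpy as np

def correlation_vs_distance(data, axis):
    """Mean slice time-series correlation as a function of slice distance
    along the given spatial axis of a 4-D fMRI volume (x, y, z, t)."""
    spatial_axes = tuple(a for a in range(3) if a != axis)
    slice_ts = data.mean(axis=spatial_axes)          # one time series per slice
    corr = np.corrcoef(slice_ts)                     # slice-by-slice correlation
    n = corr.shape[0]
    return np.array([np.mean(np.diag(corr, k=d)) for d in range(1, n)])

def smoothness_disruption(curve):
    """Score how non-smooth the correlation-vs-distance curve is
    (mean absolute second difference); larger means more disrupted."""
    return np.mean(np.abs(np.diff(curve, n=2)))

def detect_acquisition_axis(data):
    """Pick the spatial axis whose correlation curve is most disrupted,
    which (per the method above) is the interleaved acquisition axis."""
    scores = [smoothness_disruption(correlation_vs_distance(data, a)) for a in range(3)]
    return int(np.argmax(scores)), scores
```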
Bonfanti, A; Ceravolo, M; Zambra, G; Gusmeroli, R; Spinelli, A S; Lacaita, A L; Angotzi, G N; Baranauskas, G; Fadiga, L
2010-01-01
This paper reports a multi-channel neural recording system-on-chip (SoC) with digital data compression and wireless telemetry. The circuit consists of 16 amplifiers, an analog time-division multiplexer, an 8-bit SAR A/D converter, a digital signal processor (DSP) and a wireless narrowband 400-MHz binary FSK transmitter. Even though only 16 amplifiers are present in the current die version, the whole system is designed to work with 64 channels, demonstrating the feasibility of digital processing and narrowband wireless transmission of 64 neural recording channels. Digital data compression, based on the detection of action potentials and storage of the corresponding waveforms, allows the use of a 1.25-Mbit/s binary FSK wireless transmission. This moderate bit rate and a low-frequency-deviation, Manchester-coded modulation are crucial for exploiting a narrowband wireless link and an efficient embeddable antenna. The chip is realized in a 0.35-μm CMOS process with a power consumption of 105 μW per channel (269 μW per channel with an extended transmission range of 4 m) and an area of 3.1 × 2.7 mm². The transmitted signal is captured by a digital TV tuner, demodulated by a wideband phase-locked loop (PLL), and then sent to a PC via an FPGA module. The system has been tested for electrical specifications and its functionality verified in in-vivo neural recording experiments.
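The compression step hinges on keeping only the samples around detected action potentials. The sketch below shows a generic threshold-based detector with waveform extraction; the MAD-based threshold and window length are illustrative assumptions, not the on-chip DSP's actual parameters.

```python
import numpy as np

def detect_and_extract_spikes(signal, fs, threshold_sigma=4.0, window_ms=1.6):
    """Threshold-based action-potential detection with waveform extraction.
    Keeping only the samples around each detected spike is the kind of data
    reduction that makes a 1.25-Mbit/s link feasible for many channels."""
    win = int(fs * window_ms / 1000)
    # Robust noise estimate from the median absolute deviation.
    sigma = np.median(np.abs(signal)) / 0.6745
    threshold = threshold_sigma * sigma
    crossings = np.flatnonzero((signal[1:] > threshold) & (signal[:-1] <= threshold))
    spikes, last = [], -win
    for idx in crossings:
        if idx - last >= win and idx + win <= len(signal):   # skip overlapping detections
            spikes.append((idx, signal[idx:idx + win].copy()))
            last = idx
    return spikes
```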
Low-power silicon-organic hybrid (SOH) modulators for advanced modulation formats.
Lauermann, M; Palmer, R; Koeber, S; Schindler, P C; Korn, D; Wahlbrink, T; Bolten, J; Waldow, M; Elder, D L; Dalton, L R; Leuthold, J; Freude, W; Koos, C
2014-12-01
We demonstrate silicon-organic hybrid (SOH) electro-optic modulators that enable quadrature phase-shift keying (QPSK) and 16-state quadrature amplitude modulation (16QAM) with high signal quality and record-low energy consumption. SOH integration combines highly efficient electro-optic organic materials with conventional silicon-on-insulator (SOI) slot waveguides, making it possible to overcome the intrinsic limitations of silicon as an optical integration platform. We demonstrate QPSK and 16QAM signaling at symbol rates of 28 GBd with peak-to-peak drive voltages of 0.6 Vpp. For the 16QAM experiment at 112 Gbit/s, we measure a bit-error ratio of 5.1 × 10⁻⁵ and a record-low energy consumption of only 19 fJ/bit.
Perceptually tuned low-bit-rate video codec for ATM networks
NASA Astrophysics Data System (ADS)
Chou, Chun-Hsien
1996-02-01
In order to maintain high visual quality in transmitting low-bit-rate video signals over asynchronous transfer mode (ATM) networks, a layered coding scheme that incorporates the human visual system (HVS), motion compensation (MC), and conditional replenishment (CR) is presented in this paper. An empirical perceptual model is proposed to estimate the spatio-temporal just-noticeable distortion (STJND) profile for each frame, by which perceptually important (PI) prediction-error signals can be located. Because of the limited channel capacity of the base layer, only coded data of motion vectors, the PI signals within a small strip of the prediction-error image and, if there are remaining bits, the PI signals outside the strip are transmitted by the cells of the base-layer channel. The rest of the coded data are transmitted by the second-layer cells, which may be lost due to channel error or network congestion. Simulation results show that the visual quality of the reconstructed CIF sequence is acceptable when the capacity of the base-layer channel is allocated 2 × 64 kbps and the cells of the second layer are all lost.
Embedding intensity image into a binary hologram with strong noise resistant capability
NASA Astrophysics Data System (ADS)
Zhuang, Zhaoyong; Jiao, Shuming; Zou, Wenbin; Li, Xia
2017-11-01
A digital hologram can be employed as a host image for image watermarking applications to protect information security. Past research demonstrates that a gray-level intensity image can be embedded into a binary Fresnel hologram by the error diffusion method or the bit truncation coding method. However, the fidelity of the watermark image retrieved from a binary hologram is generally not satisfactory, especially when the binary hologram is contaminated with noise. To address this problem, we propose a JPEG-BCH encoding method in this paper. First, we employ the JPEG standard to compress the intensity image into a binary bit stream. Next, we encode the binary bit stream with a BCH code to obtain error correction capability. Finally, the JPEG-BCH code is embedded into the binary hologram. In this way, the intensity image can be retrieved with high fidelity by a BCH-JPEG decoder even if the binary hologram suffers from serious noise contamination. Numerical simulation results show that the image quality of the retrieved intensity image with our proposed method is superior to that of the state-of-the-art methods reported.
NASA Technical Reports Server (NTRS)
Davidson, Frederic M.; Sun, Xiaoli
1992-01-01
The performance of a 220-Mbps quaternary pulse position modulation (QPPM) optical communication receiver with a 'Slik' silicon avalanche photodiode (APD) and a wideband transimpedance preamplifier in a small hybrid circuit module was measured. The receiver performance had previously been poor due to the lack of a wideband, low-noise transimpedance preamplifier. With the new APD preamplifier module, the receiver achieved a bit error rate (BER) of 10⁻⁶ at an average received input optical signal power of 4.2 nW, which corresponds to an average of 80 received (incident) signal photons per information bit.
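As a reminder of how QPPM carries two bits per symbol in one of four time slots (and why slot-energy detection suffices at the receiver), here is a minimal modulator/demodulator sketch; the slot oversampling factor is an arbitrary illustration.

```python
import numpy as np

def qppm_modulate(bits, samples_per_slot=4):
    """Quaternary PPM: each pair of bits selects one of four time slots per symbol;
    the transmitter is 'on' only during the selected slot."""
    assert len(bits) % 2 == 0
    symbols = bits.reshape(-1, 2)
    slots = symbols[:, 0] * 2 + symbols[:, 1]          # 2 bits -> slot index 0..3
    frame = np.zeros((len(slots), 4 * samples_per_slot))
    for k, s in enumerate(slots):
        frame[k, s * samples_per_slot:(s + 1) * samples_per_slot] = 1.0
    return frame.ravel()

def qppm_demodulate(waveform, samples_per_slot=4):
    """Slot-energy detection: pick the slot with the most energy in each symbol frame."""
    frames = waveform.reshape(-1, 4, samples_per_slot)
    slots = frames.sum(axis=2).argmax(axis=1)
    bits = np.column_stack([slots // 2, slots % 2]).astype(int)
    return bits.ravel()

data = np.random.randint(0, 2, 20)
assert np.array_equal(qppm_demodulate(qppm_modulate(data)), data)
```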
High-Speed Soft-Decision Decoding of Two Reed-Muller Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Uehara, Gregory T.
1996-01-01
In this research, we have proposed the (64, 40, 8) subcode of the third-order Reed-Muller (RM) code to NASA for high-speed satellite communications. This RM subcode can be used either alone or as an inner code of a concatenated coding system with the NASA standard (255, 223, 33) Reed-Solomon (RS) code as the outer code to achieve high performance (or low bit-error rate) with reduced decoding complexity. It can also be used as a component code in a multilevel bandwidth-efficient coded modulation system to achieve reliable bandwidth-efficient data transmission. This report will summarize the key progress we have made toward achieving our eventual goal of implementing a decoder system based upon this code. In the first phase of study, we investigated the complexities of various sectionalized trellis diagrams for the proposed (64, 40, 8) RM subcode. We found a specific 8-section trellis diagram for this code which requires the least decoding complexity with a high possibility of achieving a decoding speed of 600 Mbits per second (Mbps). The combination of a large number of states and a high data rate will be made possible by the utilization of a high degree of parallelism throughout the architecture. This trellis diagram will be presented and briefly described. In the second phase of study, which was carried out through the past year, we investigated circuit architectures to determine the feasibility of VLSI implementation of a high-speed Viterbi decoder based on this 8-section trellis diagram. We began to examine specific design and implementation approaches to implement a fully custom integrated circuit (IC) which will be a key building block for a decoder system implementation. The key results will be presented in this report. This report will be divided into three primary sections. First, we will briefly describe the system block diagram in which the proposed decoder is assumed to be operating, and present some of the key architectural approaches being used to implement the system at high speed. Second, we will describe details of the 8-section trellis diagram we found to best meet the trade-offs between chip and overall system complexity. The chosen approach implements the trellis for the (64, 40, 8) RM subcode with 32 independent sub-trellises. And third, we will describe results of our feasibility study on the implementation of such an IC chip in CMOS technology to implement one of these sub-trellises.
NASA Technical Reports Server (NTRS)
Doland, G. D.
1970-01-01
Convolutional coding, used to upgrade digital data transmission under adverse signal conditions, has been improved by a method which ensures data transitions, permitting bit synchronizer operation at lower signal levels. Method also increases decoding ability by removing ambiguous condition.
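The brief does not spell out its transition-ensuring method here; one widely used approach for rate-1/2 convolutionally coded links is to invert every second output symbol, which guarantees a transition in each symbol pair regardless of the data. The sketch below assumes that approach together with the common 171/133 (octal) generator pair, purely as an illustration.

```python
import numpy as np

# Rate-1/2, constraint-length-7 convolutional encoder (generators 171/133 octal,
# the common NASA/CCSDS pair) with inversion of every second output symbol so the
# transmitted stream always contains transitions. Whether this matches the exact
# method of the brief above is an assumption; it illustrates the idea.
G1, G2, K = 0o171, 0o133, 7

def conv_encode_with_inversion(bits):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | int(b)) & ((1 << K) - 1)
        c1 = bin(state & G1).count("1") & 1
        c2 = bin(state & G2).count("1") & 1
        out.extend([c1, c2 ^ 1])          # invert the second symbol of each pair
    return np.array(out, dtype=int)

# Even an all-zero input now produces a transition in every symbol pair:
print(conv_encode_with_inversion(np.zeros(4, dtype=int)))   # [0 1 0 1 0 1 0 1]
```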
Wavelet-based compression of M-FISH images.
Hua, Jianping; Xiong, Zixiang; Wu, Qiang; Castleman, Kenneth R
2005-05-01
Multiplex fluorescence in situ hybridization (M-FISH) is a recently developed technology that enables multi-color chromosome karyotyping for molecular cytogenetic analysis. Each M-FISH image set consists of a number of aligned images of the same chromosome specimen captured at different optical wavelengths. This paper presents embedded M-FISH image coding (EMIC), where the foreground objects/chromosomes and the background objects/images are coded separately. We first apply critically sampled integer wavelet transforms to both the foreground and the background. We then use object-based bit-plane coding to compress each object and generate separate embedded bitstreams that allow continuous lossy-to-lossless compression of the foreground and the background. For efficient arithmetic coding of bit planes, we propose a method of designing an optimal context model that specifically exploits the statistical characteristics of M-FISH images in the wavelet domain. Our experiments show that EMIC achieves nearly twice as much compression as Lempel-Ziv-Welch coding. EMIC also performs much better than JPEG-LS and JPEG-2000 for lossless coding. The lossy performance of EMIC is significantly better than that of coding each M-FISH image with JPEG-2000.
A Degree Distribution Optimization Algorithm for Image Transmission
NASA Astrophysics Data System (ADS)
Jiang, Wei; Yang, Junjie
2016-09-01
Luby Transform (LT) codes are the first practical implementation of digital fountain codes. The coding behavior of an LT code is mainly determined by its degree distribution, which governs the relationship between source data and codewords. Two degree distributions were suggested by Luby. They work well in typical situations but not optimally in the case of a finite number of encoding symbols. In this work, a degree distribution optimization algorithm is proposed to explore the potential of LT codes. First, a selection scheme of sparse degrees for LT codes is introduced; then the probability distribution is optimized over the selected degrees. In image transmission, the bit stream is sensitive to channel noise, and even a single bit error may cause the loss of synchronization between the encoder and the decoder. The proposed algorithm is therefore designed for the image transmission setting. Moreover, optimal class partition is studied for image transmission with unequal error protection. The experimental results are quite promising: compared with an LT code using the robust soliton distribution, the proposed algorithm noticeably improves the final quality of the recovered images at the same overhead.
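For reference, the robust soliton distribution that serves as the baseline in the comparison above can be written down in a few lines; the sketch below follows Luby's construction, with c and delta as illustrative parameter choices.

```python
import numpy as np

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust soliton degree distribution for an LT code with k source symbols
    (Luby's construction); returns probabilities for degrees 1..k."""
    d = np.arange(1, k + 1)
    # Ideal soliton component: rho(1) = 1/k, rho(d) = 1/(d(d-1)) for d >= 2
    rho = np.zeros(k)
    rho[0] = 1.0 / k
    rho[1:] = 1.0 / (d[1:] * (d[1:] - 1))
    # Robust component
    S = c * np.log(k / delta) * np.sqrt(k)
    tau = np.zeros(k)
    pivot = int(round(k / S))
    for i in range(1, pivot):                 # degrees 1 .. k/S - 1
        tau[i - 1] = S / (k * i)
    if 1 <= pivot <= k:
        tau[pivot - 1] = S * np.log(S / delta) / k
    mu = rho + tau
    return mu / mu.sum()

def sample_degree(dist, rng):
    """Draw an encoding-symbol degree from the distribution."""
    return rng.choice(np.arange(1, len(dist) + 1), p=dist)

dist = robust_soliton(k=1000)
print(dist[:5], dist.sum())    # first few degree probabilities, normalized to 1
```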
Coding For Compression Of Low-Entropy Data
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu
1994-01-01
Improved method of encoding digital data provides for efficient lossless compression of partially or even mostly redundant data from low-information-content source. Method of coding implemented in relatively simple, high-speed arithmetic and logic circuits. Also increases coding efficiency beyond that of established Huffman coding method in that average number of bits per code symbol can be less than 1, which is the lower bound for Huffman code.
Segregation of Whispered Speech Interleaved with Noise or Speech Maskers
2011-08-01
range over which the talker can be heard. Whispered speech is produced by modulating the flow of air through partially open vocal folds. Because the ... source of excitation is turbulent air flow, the acoustic characteristics of whispered speech differ from those of voiced speech [1, 2]. Despite the acoustic ... signals provided by cochlear implants. Two studies investigated the segregation of simultaneously presented whispered vowels [7, 8] in a standard
Code of Federal Regulations, 2013 CFR
2013-10-01
... End Of Message (EOM) Codes. (1) The Preamble and EAS Codes must use Audio Frequency Shift Keying at a rate of 520.83 bits per second to transmit the codes. Mark frequency is 2083.3 Hz and space frequency... Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL EMERGENCY ALERT SYSTEM (EAS) Equipment Requirements § 11...
Code of Federal Regulations, 2014 CFR
2014-10-01
... End Of Message (EOM) Codes. (1) The Preamble and EAS Codes must use Audio Frequency Shift Keying at a rate of 520.83 bits per second to transmit the codes. Mark frequency is 2083.3 Hz and space frequency... Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL EMERGENCY ALERT SYSTEM (EAS) Equipment Requirements § 11...
Code of Federal Regulations, 2010 CFR
2010-10-01
... End Of Message (EOM) Codes. (1) The Preamble and EAS Codes must use Audio Frequency Shift Keying at a rate of 520.83 bits per second to transmit the codes. Mark frequency is 2083.3 Hz and space frequency... Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL EMERGENCY ALERT SYSTEM (EAS) Equipment Requirements § 11...
Code of Federal Regulations, 2012 CFR
2012-10-01
... End Of Message (EOM) Codes. (1) The Preamble and EAS Codes must use Audio Frequency Shift Keying at a rate of 520.83 bits per second to transmit the codes. Mark frequency is 2083.3 Hz and space frequency... Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL EMERGENCY ALERT SYSTEM (EAS) Equipment Requirements § 11...
Code of Federal Regulations, 2011 CFR
2011-10-01
... End Of Message (EOM) Codes. (1) The Preamble and EAS Codes must use Audio Frequency Shift Keying at a rate of 520.83 bits per second to transmit the codes. Mark frequency is 2083.3 Hz and space frequency... Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL EMERGENCY ALERT SYSTEM (EAS) Equipment Requirements § 11...
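The five CFR excerpts above quote the same AFSK parameters (520.83 bits per second, 2083.3 Hz mark). A minimal, phase-continuous tone generator for that signaling is sketched below; the space frequency is truncated in the excerpts, so the value used here (1562.5 Hz) and the sample rate are assumptions.

```python
import numpy as np

BIT_RATE = 520.83           # bits per second (from the rule text above)
MARK_FREQ = 2083.3          # Hz, bit = 1 (from the rule text above)
SPACE_FREQ = 1562.5         # Hz, bit = 0 -- assumed; the excerpts above truncate this value
SAMPLE_RATE = 44100         # Hz, illustrative

def eas_afsk(bits):
    """Generate an audio-frequency-shift-keyed waveform for a bit sequence,
    switching between the mark and space tones with continuous phase."""
    samples_per_bit = SAMPLE_RATE / BIT_RATE
    phase, out = 0.0, []
    for i, b in enumerate(bits):
        n = int(round((i + 1) * samples_per_bit)) - int(round(i * samples_per_bit))
        freq = MARK_FREQ if b else SPACE_FREQ
        t = np.arange(n) / SAMPLE_RATE
        out.append(np.sin(phase + 2 * np.pi * freq * t))
        phase += 2 * np.pi * freq * n / SAMPLE_RATE
    return np.concatenate(out)
```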
Impact of jammer side information on the performance of anti-jam systems
NASA Astrophysics Data System (ADS)
Lim, Samuel
1992-03-01
The Chernoff bound parameter, D, provides a performance measure for all coded communication systems. D can be used to determine upper-bounds on bit error probabilities (BEPs) of Viterbi decoded convolutional codes. The impact on BEP bounds of channel measurements that provide additional side information can also be evaluated with D. This memo documents the results of a Chernoff bound parameter evaluation in optimum partial-band noise jamming (OPBNJ) for both BPSK and DPSK modulation schemes. Hard and soft quantized receivers, with and without jammer side information (JSI), were examined. The results of this analysis indicate that JSI does improve decoding performance. However, a knowledge of jammer presence alone achieves a performance level comparable to soft decision decoding with perfect JSI. Furthermore, performance degradation due to the lack of JSI can be compensated for by increasing the number of levels of quantization. Therefore, an anti-jam system without JSI can be made to perform almost as well as a system with JSI.
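For reference, the way D enters the BEP bounds mentioned above is through the standard transfer-function union bound for a Viterbi-decoded rate-k/n convolutional code; the expression below is that textbook form, not a result reproduced from the memo:

```latex
P_b \;\le\; \frac{1}{k}\sum_{d=d_{\mathrm{free}}}^{\infty} \beta_d\, D^{d}
\;=\; \frac{1}{k}\left.\frac{\partial T(W, I)}{\partial I}\right|_{I=1,\; W=D}
```

where beta_d is the total number of information-bit errors on error events of output weight d and T(W, I) is the code's transfer function.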
Performance of concatenated Reed-Solomon/Viterbi channel coding
NASA Technical Reports Server (NTRS)
Divsalar, D.; Yuen, J. H.
1982-01-01
The concatenated Reed-Solomon (RS)/Viterbi coding system is reviewed. The performance of the system is analyzed and results are derived with a new simple approach. A functional model for the input RS symbol error probability is presented. Based on this new functional model, we compute the performance of a concatenated system in terms of RS word error probability, output RS symbol error probability, bit error probability due to decoding failure, and bit error probability due to decoding error. Finally we analyze the effects of the noisy carrier reference and the slow fading on the system performance.
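The paper develops its own functional model for the input RS symbol error probability; as a baseline for the quantities it computes, the familiar word-error expression for an (n, k) RS code correcting t symbol errors, assuming independent symbol errors with probability p and bounded-distance decoding, is:

```latex
P_w \;=\; \sum_{i=t+1}^{n} \binom{n}{i}\, p^{i}\,(1-p)^{\,n-i},
\qquad t = \left\lfloor \frac{n-k}{2} \right\rfloor .
```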