Signal Detection and Frame Synchronization of Multiple Wireless Networking Waveforms
2007-09-01
punctured to obtain coding rates of 2/3 and 3/4. Convolutional forward error correction coding is used to detect and correct bit ... likely to be isolated and be correctable by the convolutional decoder. [table: data rate (Mbps), modulation, coding rate, coded bits per subcarrier] ... binary convolutional code. A shortened Reed-Solomon technique is employed first. The code is shortened depending upon the data
Entanglement-assisted quantum convolutional coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilde, Mark M.; Brun, Todd A.
2010-04-15
We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.
Biometrics encryption combining palmprint with two-layer error correction codes
NASA Astrophysics Data System (ADS)
Li, Hengjian; Qiu, Jian; Dong, Jiwen; Feng, Guang
2017-07-01
To bridge the gap between the fuzziness of biometrics and the exactitude of cryptography, a novel biometrics encryption method based on combining palmprint with two-layer error correction codes is proposed. First, randomly generated original keys are encoded by convolutional and cyclic two-layer coding. The first layer uses a convolutional code to correct burst errors; the second layer uses a cyclic code to correct random errors. Then, palmprint features are extracted from the palmprint images and fused with the encoded keys by an XOR operation. The resulting information is stored in a smart card. Finally, the original keys are extracted by XORing the information in the smart card with the user's palmprint features and then decoding with the convolutional and cyclic two-layer codes. The experimental results and security analysis show that the method can recover the original keys completely. The proposed method is more secure than a single password factor and has higher accuracy than a single biometric factor.
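The XOR-based binding and recovery described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: a simple repetition code stands in for the paper's convolutional-plus-cyclic two-layer coding, and the key and feature sizes are arbitrary.

```python
import secrets

def bits(n):
    # n random bits as a list of 0/1
    return [secrets.randbelow(2) for _ in range(n)]

def repeat_encode(key_bits, r=3):
    # stand-in for the paper's convolutional + cyclic two-layer coding
    return [b for b in key_bits for _ in range(r)]

def repeat_decode(code_bits, r=3):
    # majority vote per r-bit group
    return [1 if sum(code_bits[i:i + r]) > r // 2 else 0
            for i in range(0, len(code_bits), r)]

def enroll(key_bits, feature_bits):
    # helper data stored on the smart card: codeword XOR biometric features
    code = repeat_encode(key_bits)
    return [c ^ f for c, f in zip(code, feature_bits)]

def recover(helper, fresh_feature_bits):
    # XOR with a fresh (noisy) biometric sample, then decode
    noisy_code = [h ^ f for h, f in zip(helper, fresh_feature_bits)]
    return repeat_decode(noisy_code)

key = bits(8)
feature = bits(24)            # enrollment palmprint features
helper = enroll(key, feature)
fresh = feature[:]            # verification sample with one flipped bit
fresh[5] ^= 1
assert recover(helper, fresh) == key
```

The error correction layer is what lets a slightly different verification sample still yield the exact original key, which is the point of combining coding with the XOR binding.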
Error-trellis Syndrome Decoding Techniques for Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
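The syndrome idea underlying this decoder can be illustrated on a small rate-1/2 code (the familiar (7,5) code, chosen here for illustration; it is not the Wyner-Ash code treated in the article). For generators g1 and g2, the parity check [g2 g1] annihilates every codeword, so a nonzero syndrome exposes channel errors:

```python
def poly_mul(a, b):
    # GF(2) polynomial multiplication; polynomials as 0/1 coefficient lists
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

def poly_add(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [x ^ y for x, y in zip(a, b)]

# generators of the (7,5) rate-1/2 code: g1 = 1 + D + D^2, g2 = 1 + D^2
# (an illustrative choice, not the code from the abstract)
g1, g2 = [1, 1, 1], [1, 0, 1]

def encode(u):
    return poly_mul(u, g1), poly_mul(u, g2)

def syndrome(r1, r2):
    # H = [g2  g1] annihilates codewords: g2*(g1*u) + g1*(g2*u) = 0 over GF(2)
    return poly_add(poly_mul(g2, r1), poly_mul(g1, r2))

u = [1, 0, 1, 1]
c1, c2 = encode(u)
assert not any(syndrome(c1, c2))      # error-free: all-zero syndrome
c1[2] ^= 1                            # single channel error
assert any(syndrome(c1, c2))          # nonzero syndrome flags the error
```

A syndrome decoder works from this (usually much smaller) error trellis rather than the full code trellis, which is the source of the state-count savings the article reports.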
Error-trellis syndrome decoding techniques for convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1985-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
VLSI single-chip (255,223) Reed-Solomon encoder with interleaver
NASA Technical Reports Server (NTRS)
Hsu, In-Shek (Inventor); Deutsch, Leslie J. (Inventor); Truong, Trieu-Kie (Inventor); Reed, Irving S. (Inventor)
1990-01-01
The invention relates to a concatenated Reed-Solomon/convolutional encoding system consisting of a Reed-Solomon outer code and a convolutional inner code for downlink telemetry in space missions, and more particularly to a Reed-Solomon encoder with programmable interleaving of the information symbols and code correction symbols to combat error bursts in the Viterbi decoder.
NASA Technical Reports Server (NTRS)
Clark, R. T.; Mccallister, R. D.
1982-01-01
The particular coding option identified as providing the best level of coding gain performance in an LSI-efficient implementation was the optimal constraint length five, rate one-half convolutional code. To determine the specific set of design parameters which optimally matches this decoder to the LSI constraints, a breadboard MCD (maximum-likelihood convolutional decoder) was fabricated and used to generate detailed performance trade-off data. The extensive performance testing data gathered during this design tradeoff study are summarized, and the functional and physical MCD chip characteristics are presented.
New coding advances for deep space communications
NASA Technical Reports Server (NTRS)
Yuen, Joseph H.
1987-01-01
Advances made in error-correction coding for deep space communications are described. The code believed to be the best is a (15, 1/6) convolutional code with maximum-likelihood decoding; when it is concatenated with a 10-bit Reed-Solomon code, it achieves a bit error rate of 10^-6 at a bit SNR of 0.42 dB. This code outperforms the Voyager code by 2.11 dB. The use of source statistics in decoding convolutionally encoded Voyager images from the Uranus encounter is investigated, and it is found that a 2-dB decoding gain can be achieved.
Adaptive decoding of convolutional codes
NASA Astrophysics Data System (ADS)
Hueske, K.; Geldmacher, J.; Götze, J.
2007-06-01
Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi decoder. On the one hand, the Viterbi decoder is an optimum maximum-likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand, the mathematical complexity of the algorithm depends only on the code used, not on the number of transmission errors. To reduce the complexity of the decoding process under good transmission conditions, an alternative syndrome-based decoder is presented. The reduction of complexity is realized by two different approaches: syndrome zero sequence deactivation and path metric equalization. The two approaches enable an easy adaptation of the decoding complexity to different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
Performance of MIMO-OFDM using convolution codes with QAM modulation
NASA Astrophysics Data System (ADS)
Astawa, I. Gede Puja; Moegiharto, Yoedy; Zainudin, Ahmad; Salim, Imam Dui Agus; Anggraeni, Nur Annisa
2014-04-01
Performance of an Orthogonal Frequency Division Multiplexing (OFDM) system can be improved by adding channel coding (an error correction code) to detect and correct errors that occur during data transmission. One can use a convolutional code. This paper presents the performance of OFDM using the Space Time Block Codes (STBC) diversity technique and QAM modulation with code rate 1/2. The evaluation is done by analyzing Bit Error Rate (BER) versus Energy per Bit to Noise Power Spectral Density Ratio (Eb/No). The scheme uses 256 subcarriers transmitted over a Rayleigh multipath fading channel in the OFDM system. To achieve a BER of 10^-3, 10 dB SNR is required in the SISO-OFDM scheme. The 2×2 MIMO-OFDM scheme requires 10 dB to achieve a BER of 10^-3. The 4×4 MIMO-OFDM scheme requires 5 dB, while adding convolutional coding to the 4×4 MIMO-OFDM improves this to 0 dB for the same BER. This demonstrates a power saving of 3 dB over the 4×4 MIMO-OFDM system without coding, a power saving of 7 dB over the 2×2 MIMO-OFDM, and significant power savings over the SISO-OFDM system.
Performance Bounds on Two Concatenated, Interleaved Codes
NASA Technical Reports Server (NTRS)
Moision, Bruce; Dolinar, Samuel
2010-01-01
A method has been developed of computing bounds on the performance of a code comprised of two linear binary codes generated by two encoders serially concatenated through an interleaver. Originally intended for use in evaluating the performances of some codes proposed for deep-space communication links, the method can also be used in evaluating the performances of short-block-length codes in other applications. The method applies, more specifically, to a communication system in which the following processes take place: At the transmitter, the original binary information that one seeks to transmit is first processed by an encoder into an outer code (Co) characterized by, among other things, a pair of numbers (n,k), where n (n > k) is the total number of code bits associated with k information bits and n - k bits are used for correcting or at least detecting errors. Next, the outer code is processed through either a block or a convolutional interleaver. In the block interleaver, the words of the outer code are processed in blocks of I words. In the convolutional interleaver, the interleaving operation is performed bit-wise in N rows with delays that are multiples of B bits. The output of the interleaver is processed through a second encoder to obtain an inner code (Ci) characterized by (ni,ki). The output of the inner code is transmitted over an additive white Gaussian noise channel characterized by a symbol signal-to-noise ratio (SNR) Es/No and a bit SNR Eb/No. At the receiver, an inner decoder generates estimates of bits. Depending on whether a block or a convolutional interleaver is used at the transmitter, the sequence of estimated bits is processed through a block or a convolutional de-interleaver, respectively, to obtain estimates of code words.
Then the estimates of the code words are processed through an outer decoder, which generates estimates of the original information along with flags indicating which estimates are presumed to be correct and which are found to be erroneous. From the perspective of the present method, the topic of major interest is the performance of the communication system as quantified in the word-error rate and the undetected-error rate as functions of the SNRs and the total latency of the interleaver and inner code. The method is embodied in equations that describe bounds on these functions. Throughout the derivation of the equations that embody the method, it is assumed that the decoder for the outer code corrects any error pattern of t or fewer errors, detects any error pattern of s or fewer errors, may detect some error patterns of more than s errors, and does not correct any patterns of more than t errors. Because a mathematically complete description of the equations that embody the method and of the derivation of the equations would greatly exceed the space available for this article, it must suffice to summarize by reporting that the derivation includes consideration of several complex issues, including relationships between latency and memory requirements for block and convolutional codes, burst error statistics, enumeration of error-event intersections, and effects of different interleaving depths. In a demonstration, the method was used to calculate bounds on the performances of several communication systems, each based on serial concatenation of a (63,56) expurgated Hamming code with a convolutional inner code through a convolutional interleaver. The bounds calculated by use of the method were compared with results of numerical simulations of performances of the systems to show the regions where the bounds are tight (see figure).
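A minimal sketch of the block-interleaving step described above: I code words are written as rows and read out by columns, so a channel burst is dispersed across different code words where each error is more likely to be within the outer decoder's correction power. The word count and length here are illustrative.

```python
def block_interleave(words, depth):
    # write `depth` code words as rows, read out column by column
    assert len(words) == depth and len({len(w) for w in words}) == 1
    return [words[r][c] for c in range(len(words[0])) for r in range(depth)]

def block_deinterleave(stream, depth):
    # inverse permutation: rebuild the rows from the column-major stream
    n = len(stream) // depth
    return [[stream[c * depth + r] for c in range(n)] for r in range(depth)]

# 4 code words of length 6; symbol r*10+c tags word r, position c
words = [[r * 10 + c for c in range(6)] for r in range(4)]
stream = block_interleave(words, depth=4)

# a burst of 4 consecutive channel symbols hits 4 *different* code words
burst = stream[8:12]
assert len({s // 10 for s in burst}) == 4
assert block_deinterleave(stream, depth=4) == words
```

The interleaving depth I trades exactly the latency and memory discussed in the method against the burst length that can be dispersed.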
Minimal-memory realization of pearl-necklace encoders of general quantum convolutional codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Houshmand, Monireh; Hosseini-Khayat, Saied
2011-02-15
Quantum convolutional codes, like their classical counterparts, promise to offer higher error correction performance than block codes of equivalent encoding complexity, and are expected to find important applications in reliable quantum communication where a continuous stream of qubits is transmitted. Grassl and Roetteler devised an algorithm to encode a quantum convolutional code with a ''pearl-necklace'' encoder. Despite their algorithm's theoretical significance as a neat way of representing quantum convolutional codes, it is not well suited to practical realization. In fact, there is no straightforward way to implement any given pearl-necklace structure. This paper closes the gap between theoretical representation and practical implementation. In our previous work, we presented an efficient algorithm to find a minimal-memory realization of a pearl-necklace encoder for Calderbank-Shor-Steane (CSS) convolutional codes. This work is an extension of our previous work and presents an algorithm for turning a pearl-necklace encoder for a general (non-CSS) quantum convolutional code into a realizable quantum convolutional encoder. We show that a minimal-memory realization depends on the commutativity relations between the gate strings in the pearl-necklace encoder. We find a realization by means of a weighted graph which details the noncommutative paths through the pearl necklace. The weight of the longest path in this graph is equal to the minimal amount of memory needed to implement the encoder. The algorithm has a polynomial-time complexity in the number of gate strings in the pearl-necklace encoder.
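The longest-path computation the algorithm relies on can be sketched on an ordinary weighted DAG; the toy graph below merely stands in for the paper's noncommutativity graph, and the node/edge choices are arbitrary.

```python
from collections import defaultdict

def longest_path_weight(edges, n):
    # Longest path in a DAG: relax nodes in topological order.
    # Nodes are 0..n-1; edges is a list of (u, v, weight) tuples.
    adj = defaultdict(list)
    indeg = [0] * n
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
    # Kahn-style topological sort
    order, stack = [], [v for v in range(n) if indeg[v] == 0]
    while stack:
        u = stack.pop()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    # dynamic program: best[v] = heaviest path ending at v
    best = [0] * n
    for u in order:
        for v, w in adj[u]:
            best[v] = max(best[v], best[u] + w)
    return max(best)

# toy graph standing in for the noncommutativity graph
edges = [(0, 1, 2), (0, 2, 1), (1, 3, 3), (2, 3, 1)]
assert longest_path_weight(edges, 4) == 5   # heaviest path: 0 -> 1 -> 3
```

In the paper's setting the weight of this heaviest path equals the minimal encoder memory, so the whole minimization reduces to one linear-time DAG pass.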
Introduction to Forward-Error-Correcting Coding
NASA Technical Reports Server (NTRS)
Freeman, Jon C.
1996-01-01
This reference publication introduces forward error correcting (FEC) and stresses definitions and basic calculations for use by engineers. The seven chapters include 41 example problems, worked in detail to illustrate points. A glossary of terms is included, as well as an appendix on the Q function. Block and convolutional codes are covered.
Some partial-unit-memory convolutional codes
NASA Technical Reports Server (NTRS)
Abdel-Ghaffar, K.; Mceliece, R. J.; Solomon, G.
1991-01-01
The results of a study on a class of error correcting codes called partial unit memory (PUM) codes are presented. This class of codes, though not entirely new, has until now remained relatively unexplored. The possibility of using the well developed theory of block codes to construct a large family of promising PUM codes is shown. The performance of several specific PUM codes is compared with that of the Voyager standard (2, 1, 6) convolutional code. It was found that these codes can outperform the Voyager code with little or no increase in decoder complexity. This suggests that there may very well be PUM codes that can be used for deep space telemetry that offer both increased performance and decreased implementational complexity over current coding systems.
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1993-01-01
There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluates the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed to attain a desired bit error rate. The use of concatenated coding, e.g. an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
Rate-compatible punctured convolutional codes (RCPC codes) and their applications
NASA Astrophysics Data System (ADS)
Hagenauer, Joachim
1988-04-01
The concept of punctured convolutional codes is extended by puncturing a low-rate 1/N code periodically with period P to obtain a family of codes with rate P/(P + l), where l can be varied between 1 and (N - 1)P. A rate-compatibility restriction on the puncturing tables ensures that all code bits of high rate codes are used by the lower-rate codes. This allows transmission of incremental redundancy in ARQ/FEC (automatic repeat request/forward error correction) schemes and continuous rate variation to change from low to high error protection within a data frame. Families of RCPC codes with rates between 8/9 and 1/4 are given for memories M from 3 to 6 (8 to 64 trellis states) together with the relevant distance spectra. These codes are almost as good as the best known general convolutional codes of the respective rates. It is shown that the same Viterbi decoder can be used for all RCPC codes of the same M. The application of RCPC codes to hybrid ARQ/FEC schemes is discussed for Gaussian and Rayleigh fading channels using channel-state information to optimize throughput.
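The puncturing-table mechanics can be sketched as follows. The tables below are illustrative rather than Hagenauer's published ones, but they satisfy the rate-compatibility restriction: every coded bit kept by a higher-rate code is also kept by the lower-rate codes, so retransmissions can simply send the incremental bits.

```python
def puncture(coded, table):
    # table: one keep(1)/delete(0) flag per coded bit over one puncturing period
    span = len(table)   # N*P flags for a rate-1/N mother code with period P
    return [c for i, c in enumerate(coded) if table[i % span]]

# Rate-1/2 mother code (N = 2) with period P = 4: each period carries 4 info
# bits and 8 coded bits; keeping 4 + l of them gives rate P/(P + l) = 4/(4 + l).
tables = {
    "4/5": [1, 1, 1, 0, 1, 0, 1, 0],   # l = 1: keep 5 of 8 coded bits
    "4/6": [1, 1, 1, 0, 1, 0, 1, 1],   # l = 2: keep 6 of 8
    "4/8": [1, 1, 1, 1, 1, 1, 1, 1],   # l = 4: the unpunctured mother code
}

# rate-compatibility: bits kept by a high-rate code are kept by all lower rates
order = ["4/5", "4/6", "4/8"]
for hi, lo in zip(order, order[1:]):
    assert all(l >= h for h, l in zip(tables[hi], tables[lo]))

coded = list(range(16))                # two periods of placeholder coded bits
assert len(puncture(coded, tables["4/5"])) == 10
```

Because all family members prune the same mother code, one Viterbi decoder for the mother trellis serves every rate, exactly as the abstract notes.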
An investigation of error characteristics and coding performance
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.
1993-01-01
The first year's effort on NASA Grant NAG5-2006 was an investigation to characterize typical errors resulting from the EOS downlink. The analysis methods developed for this effort were used on test data from a March 1992 White Sands Terminal Test. The effectiveness of a concatenated coding scheme of a Reed-Solomon outer code and a convolutional inner code versus a Reed-Solomon-only code scheme has been investigated, as well as the effectiveness of a Periodic Convolutional Interleaver in dispersing errors of certain types. The work effort consisted of development of software that allows simulation studies with the appropriate coding schemes plus either simulated data with errors or actual data with errors. The software program is entitled Communication Link Error Analysis (CLEAN) and models downlink errors, forward error correcting schemes, and interleavers.
Improved Iterative Decoding of Network-Channel Codes for Multiple-Access Relay Channel.
Majumder, Saikat; Verma, Shrish
2015-01-01
Cooperative communication using relay nodes is one of the most effective means of exploiting space diversity for low-cost nodes in a wireless network. In cooperative communication, users, besides communicating their own information, also relay the information of other users. In this paper we investigate a scheme where cooperation is achieved using a common relay node which performs network coding to provide space diversity for two information nodes transmitting to a base station. We propose a scheme which uses a Reed-Solomon error-correcting code for encoding the information bits at the user nodes and a convolutional code as the network code, instead of XOR-based network coding. Based on this encoder, we propose iterative soft decoding of the joint network-channel code by treating it as a concatenated Reed-Solomon/convolutional code. Simulation results show significant improvement in performance compared to an existing scheme based on compound codes.
Convolutional coding at 50 Mbps for the Shuttle Ku-band return link
NASA Technical Reports Server (NTRS)
Batson, B. H.; Huth, G. K.
1976-01-01
Error correcting coding is required for the 50-Mbps data link from the Shuttle Orbiter through the Tracking and Data Relay Satellite System (TDRSS) to the ground because of severe power limitations. Convolutional coding has been chosen because the decoding algorithms (sequential and Viterbi) provide significant coding gains at the required bit error probability of 10^-6 and can be implemented at 50 Mbps with moderate hardware. While a 50-Mbps sequential decoder has been built, the highest data rate achieved for a Viterbi decoder is 10 Mbps. Thus, five multiplexed 10-Mbps Viterbi decoders must be used to provide a 50-Mbps data rate. This paper discusses the tradeoffs which were considered when selecting the multiplexed Viterbi decoder approach for this application.
Convolutional coding techniques for data protection
NASA Technical Reports Server (NTRS)
Massey, J. L.
1975-01-01
Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
The trellis complexity of convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Lin, W.
1995-01-01
It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.
Error-correction coding for digital communications
NASA Astrophysics Data System (ADS)
Clark, G. C., Jr.; Cain, J. B.
This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
High-speed architecture for the decoding of trellis-coded modulation
NASA Technical Reports Server (NTRS)
Osborne, William P.
1992-01-01
Since 1971, when the Viterbi algorithm was introduced as the optimal method of decoding convolutional codes, improvements in circuit technology, especially VLSI, have steadily increased its speed and practicality. Trellis-Coded Modulation (TCM) combines convolutional coding with higher-level modulation (non-binary source alphabet) to provide forward error correction and spectral efficiency. For binary codes, the current state-of-the-art is a 64-state Viterbi decoder on a single CMOS chip, operating at a data rate of 25 Mbps. Recently, there has been an interest in increasing the speed of the Viterbi algorithm by improving the decoder architecture, or by reducing the algorithm itself. Designs employing new architectural techniques are now in existence; however, these techniques are currently applied to simpler binary codes, not to TCM. The purpose of this report is to discuss TCM architectural considerations in general, and to present the design, at the logic gate level, of a specific TCM decoder which applies these considerations to achieve high-speed decoding.
The analysis of convolutional codes via the extended Smith algorithm
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Onyszchuk, I.
1993-01-01
Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.
Performance Analysis of Hybrid ARQ Protocols in a Slotted Code Division Multiple-Access Network
1989-08-01
... achieved by using a low-rate (r = 0.5), high-constraint-length (e.g., 32) punctured convolutional code. Code puncturing provides for a variable-rate code ... investigated the use of convolutional codes in Type II Hybrid ARQ protocols. ...
2001-09-01
In this dissertation, the bit error rates for serially concatenated convolutional codes (SCCC) for both BPSK and DPSK modulation with ... are investigated.
2008-09-01
... most commonly used along with block codes. They were introduced in 1955 by Elias [7]. Convolutional codes are characterized by the code rate r = k/n ... a convolutional code with r = 1/2 and κ = 3, namely [7 5], is used. [Figure 2: convolutional encoder block diagram for code rate r = 1/2 and κ = 3]
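The [7 5], r = 1/2, κ = 3 encoder named above is small enough to write out directly; this is a minimal sketch of the shift-register view of that standard encoder.

```python
def conv_encode_75(bits):
    # rate-1/2, constraint-length-3 encoder with generators [7, 5] (octal):
    # g1 = 111 and g2 = 101 over the current and two previous input bits
    g1, g2 = 0b111, 0b101
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111          # 3-bit shift register
        out.append(bin(state & g1).count("1") & 1)  # first coded bit (mod-2 taps)
        out.append(bin(state & g2).count("1") & 1)  # second coded bit
    return out

coded = conv_encode_75([1, 0, 1, 1])
assert coded == [1, 1, 1, 0, 0, 0, 0, 1]   # two coded bits per info bit
```

Each information bit produces two channel bits, which is exactly the r = 1/2 rate the fragment describes; κ = 3 corresponds to the 3-bit register.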
Protograph-Based Raptor-Like Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Chen, Tsung-Yi; Wang, Jiadong; Wesel, Richard D.
2014-01-01
Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of point-to-point memoryless channels. The analytic and empirical results indicate that in the short-blocklength regime, practical rate-compatible punctured convolutional (RCPC) codes achieve low latency with the use of noiseless feedback. In 3GPP, standard rate-compatible turbo codes (RCPT) did not outperform the convolutional codes in the short-blocklength regime. The reason is that convolutional codes with a low number of states can be decoded optimally using the Viterbi decoder. Despite the excellent performance of convolutional codes at very short blocklengths, the strength of convolutional codes does not scale with the blocklength for a fixed number of states in the trellis.
2011-05-01
... rate convolutional codes or the prioritized Rate-Compatible Punctured Convolutional (RCPC) codes. The RCPC codes achieve UEP by puncturing off different amounts of coded bits of the parent code. ...
An investigation of error correcting techniques for OMV and AXAF
NASA Technical Reports Server (NTRS)
Ingels, Frank; Fryer, John
1991-01-01
The original objectives of this project were to build a test system for the NASA 255/223 Reed-Solomon encoding/decoding chip set and circuit board. This test system was then to be interfaced with a convolutional system at MSFC to examine the performance of the concatenated codes. After considerable work, it was discovered that the convolutional system could not function as needed. This report documents the design, construction, and testing of the test apparatus for the R/S chip set. The approach taken was to verify the error-correcting behavior of the chip set by injecting known error patterns onto data and observing the results. Error sequences were generated using pseudo-random number generator programs, with Poisson time distribution between errors and Gaussian burst lengths. Sample means, variances, and numbers of uncorrectable errors were calculated for each data set before testing.
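The error-injection scheme described (Poisson-distributed spacing between error events, Gaussian burst lengths) can be sketched as follows; the mean-gap and burst parameters are illustrative assumptions, not values from the report.

```python
import random

def error_sequence(n_bits, mean_gap=200.0, burst_mean=4.0, burst_sd=1.5, seed=1):
    # Gaps between bursts drawn with exponential (Poisson-process) spacing;
    # burst lengths drawn from a Gaussian, per the test apparatus description.
    rng = random.Random(seed)
    errors = [0] * n_bits
    pos = 0
    while True:
        pos += int(rng.expovariate(1.0 / mean_gap)) + 1   # next burst start
        if pos >= n_bits:
            break
        burst = max(1, int(round(rng.gauss(burst_mean, burst_sd))))
        for i in range(pos, min(pos + burst, n_bits)):
            errors[i] = 1                                  # flip these bits
        pos += burst
    return errors

e = error_sequence(10_000)
print(sum(e), "errored bits out of", len(e))
```

XORing such a sequence onto known data gives test vectors whose sample means and variances can be computed before feeding them to the decoder, mirroring the procedure in the report.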
There is no MacWilliams identity for convolutional codes [transmission gain comparison]
NASA Technical Reports Server (NTRS)
Shearer, J. B.; Mceliece, R. J.
1977-01-01
An example is provided of two convolutional codes that have the same transmission gain but whose dual codes do not. This shows that no analog of the MacWilliams identity for block codes can exist relating the transmission gains of a convolutional code and its dual.
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1977-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
NASA Technical Reports Server (NTRS)
Lee, L. N.
1976-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
Wireless Visual Sensor Network Resource Allocation using Cross-Layer Optimization
2009-01-01
Rate Compatible Punctured Convolutional (RCPC) codes for channel...vol. 44, pp. 2943–2959, November 1998. [22] J. Hagenauer, “Rate-compatible punctured convolutional codes (RCPC codes) and their applications,” IEEE...coding rate for H.264/AVC video compression is determined. At the data link layer, the Rate-Compatible Punctured Convolutional (RCPC) channel coding
Cross-Layer Design for Robust and Scalable Video Transmission in Dynamic Wireless Environment
2011-02-01
code rate convolutional codes or prioritized Rate-Compatible Punctured...“New rate-compatible punctured convolutional codes for Viterbi decoding,” IEEE Trans. Communications, Volume 42, Issue 12, pp. 3073-3079, Dec...Quality of service RCPC Rate-compatible and punctured convolutional codes SNR Signal to noise
A Video Transmission System for Severely Degraded Channels
2006-07-01
rate compatible punctured convolutional codes (RCPC). By separating the SPIHT bitstream...June 2000. [170] J. Hagenauer, “Rate-compatible punctured convolutional codes (RCPC codes) and their applications,” IEEE Transactions on...Farvardin [160] used rate compatible convolutional codes. They noticed that for some transmission rates, one of their EEP schemes, which may
Modulation/demodulation techniques for satellite communications. Part 1: Background
NASA Technical Reports Server (NTRS)
Omura, J. K.; Simon, M. K.
1981-01-01
Basic characteristics of digital data transmission systems described include the physical communication links, the notion of bandwidth, FCC regulations, and performance measurements such as bit rates, bit error probabilities, throughputs, and delays. The error probability performance and spectral characteristics of various modulation/demodulation techniques commonly used or proposed for use in radio and satellite communication links are summarized. Forward error correction with block or convolutional codes is also discussed along with the important coding parameter, channel cutoff rate.
Coset Codes Viewed as Terminated Convolutional Codes
NASA Technical Reports Server (NTRS)
Fossorier, Marc P. C.; Lin, Shu
1996-01-01
In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1992-01-01
Work performed during the reporting period is summarized. Construction of robustly good trellis codes for use with sequential decoding was developed; the robustly good trellis codes provide a much better trade-off between free distance and distance profile. The unequal error protection capabilities of convolutional codes were studied. The problem of finding good large constraint length, low rate convolutional codes for deep space applications was investigated. A formula for computing the free distance of rate 1/n convolutional codes was discovered. Double memory (DM) codes, codes with two memory units per bit position, were studied; a search for optimal DM codes is being conducted. An algorithm for constructing convolutional codes from a given quasi-cyclic code was developed. Papers based on the above work are included in the appendix.
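The free-distance computations mentioned in this report can be illustrated with a small search over the code trellis: the free distance is the minimum Hamming weight of any path that leaves the all-zero state and first returns to it. A minimal sketch for rate-1/2 codes; the (7,5) constraint-length-3 code used as the default is an assumed textbook example, not one of the report's codes:

```python
import heapq

def free_distance(g1=0b111, g2=0b101, K=3):
    """Free distance of a rate-1/2 convolutional code: the lowest-weight
    trellis detour that leaves the all-zero state and first returns to it."""
    def step(state, b):
        reg = (b << (K - 1)) | state          # current input bit on top of the K-1 state bits
        w = (bin(reg & g1).count('1') & 1) + (bin(reg & g2).count('1') & 1)
        return reg >> 1, w                    # next state, output Hamming weight
    s, w = step(0, 1)                         # the detour must start with input bit 1
    pq, best = [(w, s)], {s: w}
    while pq:                                 # Dijkstra over trellis states
        w, s = heapq.heappop(pq)
        if s == 0:                            # first arrival back at state 0 is optimal
            return w
        for b in (0, 1):
            ns, dw = step(s, b)
            if ns not in best or w + dw < best[ns]:
                best[ns] = w + dw
                heapq.heappush(pq, (w + dw, ns))
```

For the (7,5) code this returns the well-known free distance of 5.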
The general theory of convolutional codes
NASA Technical Reports Server (NTRS)
McEliece, R. J.; Stanley, R. P.
1993-01-01
This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.
McGee, Monnie; Chen, Zhongxue
2006-01-01
There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
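The conditional expectation described above has a widely quoted closed form. A minimal sketch, assuming the simplified expression used in common RMA implementations (the upper-truncation correction terms are dropped, and the parameter values in the test are made up for illustration):

```python
import math

def _phi(z):                                  # standard normal pdf
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def _Phi(z):                                  # standard normal cdf
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def rma_background_correct(pm, mu, sigma, alpha):
    """Posterior mean E[signal | PM] under PM = signal + noise, with
    signal ~ Exponential(alpha) and noise ~ Normal(mu, sigma^2).
    Simplified closed form: a + sigma * phi(a/sigma) / Phi(a/sigma)."""
    a = pm - mu - sigma * sigma * alpha       # location of the truncated posterior
    return a + sigma * _phi(a / sigma) / _Phi(a / sigma)
```

The Mills-ratio term keeps the corrected intensity strictly positive even when the observed PM value falls below the estimated background mean.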
Coordinated design of coding and modulation systems
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
Work on partial unit memory codes continued; it was shown that for a given virtual state complexity, the maximum free distance over the class of all convolutional codes is achieved within the class of unit memory codes. The effect of phase-lock loop (PLL) tracking error on coding system performance was studied by using the channel cut-off rate as the measure of quality of a modulation system. Optimum modulation signal sets for a non-white Gaussian channel considered an heuristic selection rule based on a water-filling argument. The use of error correcting codes to perform data compression by the technique of syndrome source coding was researched and a weight-and-error-locations scheme was developed that is closely related to LDSC coding.
Coordinated design of coding and modulation systems
NASA Technical Reports Server (NTRS)
Massey, J. L.; Ancheta, T.; Johannesson, R.; Lauer, G.; Lee, L.
1976-01-01
The joint optimization of the coding and modulation systems employed in telemetry systems was investigated. Emphasis was placed on formulating inner and outer coding standards used by the Goddard Spaceflight Center. Convolutional codes were found that are nearly optimum for use with Viterbi decoding in the inner coding of concatenated coding systems. A convolutional code, the unit-memory code, was discovered and is ideal for inner system usage because of its byte-oriented structure. Simulations of sequential decoding on the deep-space channel were carried out to compare directly various convolutional codes that are proposed for use in deep-space systems.
Design of convolutional tornado code
NASA Astrophysics Data System (ADS)
Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu
2017-09-01
As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environment and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and reduce computational complexity by abrogating the multi-level structure. The simulation results show that cTN code can provide a better packet loss protection performance with lower computation complexity than tTN code.
FAST-PT: a novel algorithm to calculate convolution integrals in cosmological perturbation theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
McEwen, Joseph E.; Fang, Xiao; Hirata, Christopher M.
2016-09-01
We present a novel algorithm, FAST-PT, for performing convolution or mode-coupling integrals that appear in nonlinear cosmological perturbation theory. The algorithm uses several properties of gravitational structure formation—the locality of the dark matter equations and the scale invariance of the problem—as well as Fast Fourier Transforms to describe the input power spectrum as a superposition of power laws. This yields extremely fast performance, enabling mode-coupling integral computations fast enough to embed in Monte Carlo Markov Chain parameter estimation. We describe the algorithm and demonstrate its application to calculating nonlinear corrections to the matter power spectrum, including one-loop standard perturbation theory and the renormalization group approach. We also describe our public code (in Python) to implement this algorithm. The code, along with a user manual and example implementations, is available at https://github.com/JoeMcEwen/FAST-PT.
NASA Astrophysics Data System (ADS)
Nakamura, Yusuke; Hoshizawa, Taku
2016-09-01
Two methods for increasing the data capacity of a holographic data storage system (HDSS) were developed. The first method is called “run-length-limited (RLL) high-density recording”. An RLL modulation has the same effect as enlarging the pixel pitch; namely, it optically reduces the hologram size. Accordingly, the method doubles the raw-data recording density. The second method is called “RLL turbo signal processing”. The RLL turbo code consists of RLL(1,∞) trellis modulation and an optimized convolutional code. The remarkable point of the developed turbo code is that it employs the RLL modulator and demodulator as parts of the error-correction process. The turbo code improves the capability of error correction more than a conventional LDPC code, even though interpixel interference is generated. These two methods will increase the data density 1.78-fold. Moreover, by simulation and experiment, a data density of 2.4 Tbit/in² is confirmed.
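The RLL(1,∞) constraint mentioned above simply requires at least one 0 between any two 1s. A minimal checker for the general (d, ∞) constraint, offered as an illustration only; the paper's trellis modulation and turbo decoding are not reproduced here:

```python
def satisfies_rll(bits, d=1):
    """Check the (d, infinity) run-length constraint: every pair of 1s is
    separated by at least d zeros. RLL(1, infinity) forbids adjacent 1s."""
    gap = d                                  # a leading 1 is allowed
    for b in bits:
        if b:
            if gap < d:                      # too few zeros since the last 1
                return False
            gap = 0
        else:
            gap += 1
    return True
```

A (d, ∞)-constrained stream has longer minimum spacing between transitions, which is why it behaves like a recording at a coarser effective pixel pitch.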
2006-12-01
Convolutional encoder of rate 1/2 (From [10]). Table 3 shows the puncturing patterns used to derive the different code rates. X precedes Y in the order...convolutional code with puncturing configuration (From [10])...Table 4. Mandatory channel coding per modulation (From [10]...a concatenation of a Reed–Solomon outer code and a rate-adjustable convolutional inner code. At the transmitter, data shall first be encoded with
Design and System Implications of a Family of Wideband HF Data Waveforms
2010-09-01
code rates (i.e., 8/9, 9/10) will be used to attain the highest data rates for surface wave links. Very high puncturing of convolutional codes can...Communication Links”, Edition 1, North Atlantic Treaty Organization, 2009. [14] Yasuda, Y., Kashiki, K., Hirata, Y. “High-Rate Punctured Convolutional Codes...length 7 convolutional code that has been used for over two decades in 110A. In addition, repetition coding and puncturing was
Simulation of ICD-9 to ICD-10-CM Transition for Family Medicine: Simple or Convoluted?
Grief, Samuel N; Patel, Jesal; Kochendorfer, Karl M; Green, Lee A; Lussier, Yves A; Li, Jianrong; Burton, Michael; Boyd, Andrew D
2016-01-01
The objective of this study was to examine the impact of the transition from International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM), to International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM), on family medicine and to identify areas where additional training might be required. Family medicine ICD-9-CM codes were obtained from an Illinois Medicaid data set (113,000 patient visits and $5.5 million in claims). Using the science of networks, we evaluated each ICD-9-CM code used by family medicine physicians to determine whether the transition was simple or convoluted. A simple transition is defined as 1 ICD-9-CM code mapping to 1 ICD-10-CM code, or 1 ICD-9-CM code mapping to multiple ICD-10-CM codes. A convoluted transition is one where the transitions between coding systems are nonreciprocal and complex, with multiple codes whose definitions become intertwined. Three family medicine physicians evaluated the most frequently encountered complex mappings for clinical accuracy. Of the 1635 diagnosis codes used by family medicine physicians, 70% of the codes were categorized as simple, 27% of codes were convoluted, and 3% had no mapping. For the visits, 75%, 24%, and 1% corresponded with simple, convoluted, and no mapping, respectively. Payment for submitted claims was similarly aligned. Of the frequently encountered convoluted codes, 3 diagnosis codes were clinically incorrect, but they represent only <0.1% of the overall diagnosis codes. The transition to ICD-10-CM is simple for 70% or more of diagnosis codes, visits, and reimbursement for a family medicine physician. However, some frequently used codes for disease management are convoluted and incorrect, and additional resources need to be invested in them to ensure a successful transition to ICD-10-CM. © Copyright 2016 by the American Board of Family Medicine.
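The simple/convoluted distinction above can be illustrated with a toy mapping graph. This sketch is a simplified proxy for the paper's network analysis: a code whose ICD-10-CM targets are shared with another ICD-9-CM code is flagged as convoluted. The example codes in the test are for illustration only:

```python
from collections import defaultdict

def classify_mappings(fwd):
    """fwd maps each ICD-9-CM code to its list of ICD-10-CM targets.
    A code is 'simple' when it shares no target with any other code;
    shared targets intertwine definitions, making the code 'convoluted'."""
    owners = defaultdict(set)                # ICD-10 target -> set of ICD-9 sources
    for c9, targets in fwd.items():
        for c10 in targets:
            owners[c10].add(c9)
    out = {}
    for c9, targets in fwd.items():
        if not targets:
            out[c9] = 'no mapping'
        elif all(len(owners[t]) == 1 for t in targets):
            out[c9] = 'simple'
        else:
            out[c9] = 'convoluted'
    return out
```

Hypertension-style codes that both map onto a single ICD-10-CM code are the kind of nonreciprocal overlap the paper flags for extra training attention.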
Simulation of ICD-9 to ICD-10-CM transition for family medicine: simple or convoluted?
Grief, Samuel N.; Patel, Jesal; Lussier, Yves A.; Li, Jianrong; Burton, Michael; Boyd, Andrew D.
2017-01-01
Objectives The objective of this study was to examine the impact of the transition from International Classification of Disease Version Nine Clinical Modification (ICD-9-CM) to International Classification of Disease Version Ten Clinical Modification (ICD-10-CM) on family medicine and identify areas where additional training might be required. Methods Family medicine ICD-9-CM codes were obtained from an Illinois Medicaid data set (113,000 patient visits and $5.5 million in claims). Using the science of networks, we evaluated each ICD-9-CM code used by family medicine physicians to determine if the transition was simple or convoluted. A simple translation is defined as one ICD-9-CM code mapping to one ICD-10-CM code or one ICD-9-CM code mapping to multiple ICD-10-CM codes. A convoluted transition is one where the transitions between coding systems are non-reciprocal and complex, with multiple codes whose definitions become intertwined. Three family medicine physicians evaluated the most frequently encountered complex mappings for clinical accuracy. Results Of the 1635 diagnosis codes used by the family medicine physicians, 70% of the codes were categorized as simple, 27% of the diagnosis codes were convoluted, and 3% were found to have no mapping. For the visits, 75%, 24%, and 1% corresponded with simple, convoluted, and no mapping, respectively. Payment for submitted claims was similarly aligned. Of the frequently encountered convoluted codes, 3 diagnosis codes were clinically incorrect, but they represent only <0.1% of the overall diagnosis codes. Conclusions The transition to ICD-10-CM is simple for 70% or more of diagnosis codes, visits, and reimbursement for a family medicine physician. However, some frequently used codes for disease management are convoluted and incorrect, and additional resources need to be invested in them to ensure a successful transition to ICD-10-CM. PMID:26769875
Scalable Video Transmission Over Multi-Rate Multiple Access Channels
2007-06-01
Rate-compatible punctured convolutional codes (RCPC codes) and their applications,” IEEE...source encoded using the MPEG-4 video codec. The source encoded bitstream is then channel encoded with Rate Compatible Punctured Convolutional (RCPC...Clark, and J. M. Geist, “Punctured convolutional codes of rate (n-1)/n and simplified maximum likelihood decoding,” IEEE Transactions on
Performance analysis of the word synchronization properties of the outer code in a TDRSS decoder
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Lin, S.
1984-01-01
A self-synchronizing coding scheme for NASA's TDRSS satellite system is a concatenation of a (2,1,7) inner convolutional code with a (255,223) Reed-Solomon outer code. Both symbol and word synchronization are achieved without requiring that any additional symbols be transmitted. An important parameter which determines the performance of the word sync procedure is the ratio of the decoding failure probability to the undetected error probability. Ideally, the former should be as small as possible compared to the latter when the error correcting capability of the code is exceeded. A computer simulation of a (255,223) Reed-Solomon code was carried out. Results for decoding failure probability and for undetected error probability are tabulated and compared.
NASA Astrophysics Data System (ADS)
Lidar, Daniel A.; Brun, Todd A.
2013-09-01
Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. 
Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and Harold Baranger; 26. Critique of fault-tolerant quantum information processing Robert Alicki; References; Index.
NASA Technical Reports Server (NTRS)
Benjauthrit, B.; Mulhall, B.; Madsen, B. D.; Alberda, M. E.
1976-01-01
The DSN telemetry system performance with convolutionally coded data using the operational maximum-likelihood convolutional decoder (MCD) being implemented in the Network is described. Data rates from 80 bps to 115.2 kbps and both S- and X-band receivers are reported. The results of both one- and two-way radio losses are included.
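The maximum-likelihood convolutional decoding performed by such a decoder is the Viterbi algorithm. A minimal hard-decision sketch for the small textbook rate-1/2 (7,5) constraint-length-3 code, given as an assumed example; the operational MCD decodes a longer constraint-length code:

```python
def viterbi_decode(symbols, K=3, g1=0b111, g2=0b101):
    """Hard-decision Viterbi decoding of a rate-1/2 convolutional code,
    assuming the encoder was flushed with K-1 zeros (ends in state 0)."""
    nstates = 1 << (K - 1)
    def step(state, b):                      # trellis branch: next state, output pair
        reg = (b << (K - 1)) | state
        return reg >> 1, (bin(reg & g1).count('1') & 1, bin(reg & g2).count('1') & 1)
    INF = float('inf')
    metric = [0.0] + [INF] * (nstates - 1)   # start in the all-zero state
    paths = [[] for _ in range(nstates)]
    for i in range(0, len(symbols), 2):
        r = (symbols[i], symbols[i + 1])
        nmetric = [INF] * nstates
        npaths = [None] * nstates
        for s in range(nstates):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                ns, out = step(s, b)
                m = metric[s] + (out[0] != r[0]) + (out[1] != r[1])
                if m < nmetric[ns]:          # keep the lowest-metric survivor per state
                    nmetric[ns] = m
                    npaths[ns] = paths[s] + [b]
        metric, paths = nmetric, npaths
    return paths[0][:-(K - 1)]               # survivor ending in state 0, flush bits dropped
```

Because the (7,5) code has free distance 5, a single channel bit error in a terminated block is always corrected.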
Wei, Jianing; Bouman, Charles A; Allebach, Jan P
2014-05-01
Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
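Direct space-varying convolution, the baseline that the matrix source coding approach accelerates, can be sketched as a position-dependent kernel sum; the helper names below are hypothetical:

```python
def space_varying_convolve(x, kernel_at):
    """Direct space-varying convolution: y[i] = sum_k h_i[k] * x[i + k - r],
    where the impulse response h_i = kernel_at(i) depends on output position i
    and r centers the kernel. The O(n * len(h)) cost per output is what makes
    dense space-varying operators impractical to apply directly at scale."""
    n, y = len(x), []
    for i in range(n):
        h = kernel_at(i)
        r = len(h) // 2
        acc = 0.0
        for k, w in enumerate(h):
            j = i + k - r
            if 0 <= j < n:                   # zero-padded boundaries
                acc += w * x[j]
        y.append(acc)
    return y

# Example: a blur whose width grows with position, loosely mimicking
# spatially varying stray light; a delta kernel reduces to the identity.
def growing_blur(i):
    width = 1 + 2 * (i // 4)                 # odd width, increasing with position
    return [1.0 / width] * width
```

A space-invariant kernel would make this a standard convolution, implementable with the FFT; it is precisely the position dependence that rules that out.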
Achieving unequal error protection with convolutional codes
NASA Technical Reports Server (NTRS)
Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.
1994-01-01
This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
2007-06-01
Table 2. Best (maximum free distance) rate r=2/3 punctured convolutional code...Hamming distance between all pairs of non-zero paths. Table 2 lists the best rate r=2/3 punctured convolutional code information weight structure...Table 2. Best (maximum free distance) rate r=2/3 punctured convolutional code information weight structure. (From: [12]).
Convolutional encoding of self-dual codes
NASA Technical Reports Server (NTRS)
Solomon, G.
1994-01-01
There exist almost complete convolutional encodings of self-dual codes, i.e., block codes of rate 1/2 with weights w, w = 0 mod 4. The codes are of length 8m with the convolutional portion of length 8m-2 and the nonsystematic information of length 4m-1. The last two bits are parity checks on the two (4m-1) length parity sequences. The final information bit complements one of the extended parity sequences of length 4m. Solomon and van Tilborg have developed algorithms to generate these for the Quadratic Residue (QR) Codes of lengths 48 and beyond. For these codes and reasonable constraint lengths, there are sequential decodings for both hard and soft decisions. There are also possible Viterbi-type decodings that may be simple, as in a convolutional encoding/decoding of the extended Golay Code. In addition, the previously found constraint length K = 9 for the QR (48, 24;12) Code is lowered here to K = 8.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1994-01-01
Brief summaries of research in the following areas are presented: (1) construction of optimum geometrically uniform trellis codes; (2) a statistical approach to constructing convolutional code generators; and (3) calculating the exact performance of a convolutional code.
Coding performance of the Probe-Orbiter-Earth communication link
NASA Technical Reports Server (NTRS)
Divsalar, D.; Dolinar, S.; Pollara, F.
1993-01-01
The coding performance of the Probe-Orbiter-Earth communication link is analyzed and compared for several cases. It is assumed that the coding system consists of a convolutional code at the Probe, a quantizer and another convolutional code at the Orbiter, and two cascaded Viterbi decoders or a combined decoder on the ground.
2012-03-01
advanced antenna systems AMC adaptive modulation and coding AWGN additive white Gaussian noise BPSK binary phase shift keying BS base station BTC...QAM-16, and QAM-64, and coding types include convolutional coding (CC), convolutional turbo coding (CTC), block turbo coding (BTC), zero-terminating
Synchronization Analysis and Simulation of a Standard IEEE 802.11G OFDM Signal
2004-03-01
Figure 26 Convolutional Encoder Parameters. Figure 27 Puncturing Parameters. As per Table 3, the required code rate is r = 3/4, which requires...to achieve the higher data rates required by the Standard 802.11b was accomplished by using packet binary convolutional coding (PBCC). Essentially...higher data rates are achieved by using convolutional coding combined with BPSK or QPSK modulation. The data is first encoded with a rate one-half
Further Developments in the Communication Link and Error Analysis (CLEAN) Simulator
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.
1995-01-01
During the period 1 July 1993 - 30 June 1994, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed. Many of these were reported in the Semi-Annual report dated December 1993, which has been included in this report in Appendix A. Since December 1993, a number of additional modules have been added involving Unit-Memory Convolutional codes (UMC). These are: (1) Unit-Memory Convolutional Encoder module (UMCEncd); (2) Hard decision Unit-Memory Convolutional Decoder using the Viterbi decoding algorithm (VitUMC); and (3) a number of utility modules designed to investigate the performance of UMCs, such as the UMC column distance function (UMCdc), UMC free distance function (UMCdfree), UMC row distance function (UMCdr), and UMC Transformation (UMCTrans). The study of UMCs was driven, in part, by the desire to investigate high-rate convolutional codes which are better suited as inner codes for a concatenated coding scheme. A number of high-rate UMCs were found which are good candidates for inner codes. Besides the further developments of the simulation, a study was performed to construct a table of the best known Unit-Memory Convolutional codes. Finally, a preliminary study of the usefulness of the Periodic Convolutional Interleaver (PCI) was completed and documented in a technical note dated March 17, 1994. This technical note has also been included in this final report.
Spread Spectrum Visual Sensor Network Resource Management Using an End-to-End Cross-Layer Design
2011-02-01
Coding In this work, we use rate compatible punctured convolutional (RCPC) codes for channel coding [11]. Using RCPC codes allows us to utilize Viterbi’s...[11] J. Hagenauer, “Rate-compatible punctured convolutional codes (RCPC codes) and their applications,” IEEE Trans. Commun., vol. 36, no. 4, pp. 389...source coding rate, a channel coding rate, and a power level to all nodes in the
Multiple component codes based generalized LDPC codes for high-speed optical transport.
Djordjevic, Ivan B; Wang, Ting
2014-07-14
A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes, derived from multiple component codes. We then show that several recently proposed classes of LDPC codes such as convolutional and spatially-coupled codes can be described using the concept of GLDPC coding, which indicates that the GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.
The proposed coding standard at GSFC
NASA Technical Reports Server (NTRS)
Morakis, J. C.; Helgert, H. J.
1977-01-01
As part of the continuing effort to introduce standardization of spacecraft and ground equipment in satellite systems, NASA's Goddard Space Flight Center and other NASA facilities have supported the development of a set of standards for the use of error control coding in telemetry subsystems. These standards are intended to ensure compatibility between spacecraft and ground encoding equipment, while allowing sufficient flexibility to meet all anticipated mission requirements. The standards which have been developed to date cover the application of block codes in error detection and error correction modes, as well as short and long constraint length convolutional codes decoded via the Viterbi and sequential decoding algorithms, respectively. Included are detailed specifications of the codes, and their implementation. Current effort is directed toward the development of standards covering channels with burst noise characteristics, channels with feedback, and code concatenation.
System Design for FEC in Aeronautical Telemetry
2012-03-12
rate punctured convolutional codes for soft decision Viterbi...below follows that given in [8]. The final coding rate of exactly 2/3 is achieved by puncturing the rate-1/2 code as follows. We begin with the buffer c1...concatenated convolutional code (SCCC). The contributions of this paper are on the system-design level. One major contribution is to design an SCCC code
Generalized type II hybrid ARQ scheme using punctured convolutional coding
NASA Astrophysics Data System (ADS)
Kallel, Samir; Haccoun, David
1990-11-01
A method is presented to construct rate-compatible convolutional (RCC) codes from known high-rate punctured convolutional codes, obtained from the best rate-1/2 codes. The construction method is rather simple and straightforward, and still yields good codes. Moreover, low-rate codes can be obtained without any limit on the lowest achievable code rate. Based on the RCC codes, a generalized type-II hybrid ARQ scheme, which combines the benefits of the modified type-II hybrid ARQ strategy of Hagenauer (1988) with the code-combining ARQ strategy of Chase (1985), is proposed and analyzed. With the proposed generalized type-II hybrid ARQ strategy, the throughput increases as the starting coding rate increases; as the channel degrades, it tends to merge with the throughput of rate-1/2 type-II hybrid ARQ schemes with code combining, thus allowing the system to be flexible and adaptive to channel conditions, even under wide noise variations and severe degradations.
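Puncturing itself, the building block of the RCC construction above, can be sketched directly: a periodic pattern marks which encoder output bits are transmitted, and the receiver reinserts erasures before decoding. The 2/3-rate pattern below is an assumed illustration, not one of the paper's codes:

```python
def puncture(coded, pattern):
    """Delete bits of a rate-1/2 coded stream according to a periodic
    puncturing matrix pattern = (X_row, Y_row); 1 = transmit, 0 = drop."""
    period = len(pattern[0])
    kept = []
    for i in range(0, len(coded), 2):
        t = (i // 2) % period
        if pattern[0][t]:
            kept.append(coded[i])            # X output of this trellis step
        if pattern[1][t]:
            kept.append(coded[i + 1])        # Y output of this trellis step
    return kept

def depuncture(received, pattern, coded_len):
    """Reinsert erasures (None) at the punctured positions so a standard
    rate-1/2 Viterbi decoder can run, skipping erased symbols in its metric."""
    period = len(pattern[0])
    it, out = iter(received), []
    for i in range(0, coded_len, 2):
        t = (i // 2) % period
        out.append(next(it) if pattern[0][t] else None)
        out.append(next(it) if pattern[1][t] else None)
    return out

# Rate 1/2 -> 2/3 with pattern X: 1 1, Y: 1 0 (assumed for illustration).
P23 = ([1, 1], [1, 0])
```

Rate compatibility means a lower-rate pattern transmits a superset of the bits of every higher-rate pattern, so ARQ retransmissions can simply fill in previously punctured positions.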
NASA Astrophysics Data System (ADS)
Bezan, Scott; Shirani, Shahram
2006-12-01
To reliably transmit video over error-prone channels, the data should be both source and channel coded. When multiple channels are available for transmission, the problem extends to that of partitioning the data across these channels. The condition of transmission channels, however, varies with time. Therefore, the error protection added to the data at one instant of time may not be optimal at the next. In this paper, we propose a method for adaptively adding error correction code in a rate-distortion (RD) optimized manner using rate-compatible punctured convolutional codes to an MJPEG2000 constant rate-coded frame of video. We perform an analysis on the rate-distortion tradeoff of each of the coding units (tiles and packets) in each frame and adapt the error correction code assigned to the unit taking into account the bandwidth and error characteristics of the channels. This method is applied to both single and multiple time-varying channel environments. We compare our method with a basic protection method in which data is either not transmitted, transmitted with no protection, or transmitted with a fixed amount of protection. Simulation results show promising performance for our proposed method.
Convolutional coding combined with continuous phase modulation
NASA Technical Reports Server (NTRS)
Pizzi, S. V.; Wilson, S. G.
1985-01-01
Background theory and specific coding designs for combined coding/modulation schemes utilizing convolutional codes and continuous-phase modulation (CPM) are presented. In this paper the case of r = 1/2 coding onto a 4-ary CPM is emphasized, with short-constraint length codes presented for continuous-phase FSK, double-raised-cosine, and triple-raised-cosine modulation. Coding buys several decibels of coding gain over the Gaussian channel, with an attendant increase of bandwidth. Performance comparisons in the power-bandwidth tradeoff with other approaches are made.
NASA Technical Reports Server (NTRS)
Doland, G. D.
1970-01-01
Convolutional coding, used to upgrade digital data transmission under adverse signal conditions, has been improved by a method which ensures data transitions, permitting bit-synchronizer operation at lower signal levels. The method also increases decoding ability by removing an ambiguous condition.
Decoder synchronization for deep space missions
NASA Technical Reports Server (NTRS)
Statman, J. I.; Cheung, K.-M.; Chauvin, T. H.; Rabkin, J.; Belongie, M. L.
1994-01-01
The Consultative Committee for Space Data Systems (CCSDS) recommends that space communication links employ a concatenated, error-correcting, channel-coding system in which the inner code is a convolutional (7,1/2) code and the outer code is a (255,223) Reed-Solomon code. The traditional implementation is to perform the node synchronization for the Viterbi decoder and the frame synchronization for the Reed-Solomon decoder as separate, sequential operations. This article discusses a unified synchronization technique that is required for deep space missions that have data rates and signal-to-noise ratios (SNRs) that are extremely low. This technique combines frame synchronization in the bit and symbol domains and traditional accumulated-metric growth techniques to establish a joint frame and node synchronization. A variation on this technique is used for the Galileo spacecraft on its Jupiter-bound mission.
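The frame-synchronization half of the problem can be illustrated by simple marker correlation. The 32-bit value 0x1ACFFC1D is the standard CCSDS attached sync marker; the fixed error-tolerance threshold below is an arbitrary illustrative choice, not the joint node-and-frame metric developed in the article.

```python
# Minimal sketch of frame synchronization by sync-marker correlation.
# 0x1ACFFC1D is the 32-bit CCSDS attached sync marker (ASM); the hard-decision
# threshold logic is illustrative only.

ASM = [int(b) for b in format(0x1ACFFC1D, '032b')]

def find_sync(bits, marker=ASM, max_errors=2):
    """Return all offsets where the marker matches within max_errors bit flips."""
    hits = []
    for i in range(len(bits) - len(marker) + 1):
        window = bits[i:i + len(marker)]
        mismatches = sum(a != b for a, b in zip(window, marker))
        if mismatches <= max_errors:
            hits.append(i)
    return hits

stream = [0, 1, 1] + ASM + [1, 0, 0, 1]   # marker embedded at offset 3
offsets = find_sync(stream)
```

Allowing a few mismatches (`max_errors > 0`) is what lets synchronization be acquired on the noisy symbol stream before the decoders have locked.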
Convolutional coding results for the MVM '73 X-band telemetry experiment
NASA Technical Reports Server (NTRS)
Layland, J. W.
1978-01-01
Results of simulation of several short-constraint-length convolutional codes using a noisy symbol stream obtained via the turnaround ranging channels of the MVM'73 spacecraft are presented. First operational use of this coding technique is on the Voyager mission. The relative performance of these codes in this environment is as previously predicted from computer-based simulations.
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1998-01-01
A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The best-known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. The trellis structure of block codes, by contrast, went largely unexplored for many years. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence were not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters.
Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well-known methods for constructing long powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises, which include Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination, and tail-biting. Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well-known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computation complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder. Then it presents a new decoding algorithm for convolutional codes, named the Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword. This decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder. The decoding algorithms presented are the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA). Finally, the minimization of bit error probability in trellis-based MLD is discussed.
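For concreteness, the Viterbi algorithm discussed above can be sketched for a toy rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 octal). This is a minimal hard-decision illustration of the algorithm, not an implementation from the book, and the code is deliberately tiny rather than a practical one.

```python
# Hard-decision Viterbi decoder for a toy rate-1/2, K=3 convolutional code
# (generator polynomials 7 and 5 octal); purely illustrative.

G = [0b111, 0b101]  # generator polynomials
K = 3               # constraint length; 2**(K-1) = 4 trellis states

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state          # newest bit in the MSB
        out += [bin(reg & g).count('1') % 2 for g in G]
        state = reg >> 1
    return out

def viterbi(received):
    n_states = 2 ** (K - 1)
    INF = float('inf')
    metric = [0] + [INF] * (n_states - 1)     # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):                  # extend each survivor by one bit
                reg = (b << (K - 1)) | s
                expected = [bin(reg & g).count('1') % 2 for g in G]
                ns = reg >> 1
                m = metric[s] + sum(x != y for x, y in zip(r, expected))
                if m < new_metric[ns]:        # add-compare-select
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]
```

With free distance 5, this toy code corrects any single channel error in a short block, which the test below exercises by flipping one coded bit.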
Transfer Function Bounds for Partial-unit-memory Convolutional Codes Based on Reduced State Diagram
NASA Technical Reports Server (NTRS)
Lee, P. J.
1984-01-01
The performance of a coding system consisting of a convolutional encoder and a Viterbi decoder is analytically found by the well-known transfer function bounding technique. For the partial-unit-memory byte-oriented convolutional encoder with m_0 binary memory cells and k_0 (k_0 > m_0) inputs, a state diagram of 2^(k_0) states was required for the transfer function bound. A reduced state diagram of 2^(m_0) + 1 states is used for easy evaluation of transfer function bounds for partial-unit-memory codes.
The Space Telescope SI C&DH system. [Scientific Instrument Control and Data Handling Subsystem
NASA Technical Reports Server (NTRS)
Gadwal, Govind R.; Barasch, Ronald S.
1990-01-01
The Hubble Space Telescope Scientific Instrument Control and Data Handling Subsystem (SI C&DH) is designed to interface with five scientific instruments of the Space Telescope to provide ground and autonomous control and collect health and status information using the Standard Telemetry and Command Components (STACC) multiplex data bus. It also formats high-throughput science data into packets. The packetized data are interleaved, Reed-Solomon encoded for error correction, and pseudo-random encoded. An inner convolutional code combined with the outer Reed-Solomon code provides excellent error-correction capability. The subsystem is designed with the capacity for orbital replacement in order to meet a mission life of fifteen years. The spacecraft computer and the SI C&DH computer coordinate the activities of the spacecraft and the scientific instruments to achieve the mission objectives.
Sequential Syndrome Decoding of Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
The algebraic structure of convolutional codes is reviewed, and sequential syndrome decoding is applied to these codes. These concepts are then used to illustrate, by example, actual sequential decoding using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be used to sequentially find the minimum-weight error sequence.
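The Fano metric mentioned above can be illustrated for a binary symmetric channel. The per-bit form below, log2 P(r|c) - log2 P(r) - R with P(r) = 1/2, is the standard textbook version; the paper's modified metric for finding minimum-weight error sequences differs from it.

```python
# Per-bit Fano metric for sequential (stack) decoding over a binary symmetric
# channel with crossover probability p and code rate R. Textbook form, not the
# paper's modified metric.
import math

def fano_bit_metric(received_bit, branch_bit, p=0.05, R=0.5):
    match = received_bit == branch_bit
    likelihood = (1 - p) if match else p
    # log2 P(r|c) - log2 P(r) - R, with P(r) = 1/2 for a symmetric source
    return math.log2(likelihood) + 1 - R

good = fano_bit_metric(1, 1)   # small positive credit for a matching bit
bad = fano_bit_metric(1, 0)    # large penalty for a mismatch
```

The stack algorithm extends the path with the highest accumulated metric first, so paths that accumulate mismatch penalties sink in the stack and are rarely extended.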
LDPC-PPM Coding Scheme for Optical Communication
NASA Technical Reports Server (NTRS)
Barsoum, Maged; Moision, Bruce; Divsalar, Dariush; Fitz, Michael
2009-01-01
In a proposed coding-and-modulation/demodulation-and-decoding scheme for a free-space optical communication system, an error-correcting code of the low-density parity-check (LDPC) type would be concatenated with a modulation code that consists of a mapping of bits to pulse-position-modulation (PPM) symbols. Hence, the scheme is denoted LDPC-PPM. This scheme could be considered a competitor of a related prior scheme in which an outer convolutional error-correcting code is concatenated with an interleaving operation, a bit-accumulation operation, and a PPM inner code. Both the prior and present schemes can be characterized as serially concatenated pulse-position modulation (SCPPM) coding schemes. Figure 1 represents a free-space optical communication system based on either the present LDPC-PPM scheme or the prior SCPPM scheme. At the transmitting terminal, the original data (u) are processed by an encoder into blocks of bits (a), and the encoded data are mapped to PPM of an optical signal (c). For the purpose of design and analysis, the optical channel in which the PPM signal propagates is modeled as a Poisson point process. At the receiving terminal, the arriving optical signal (y) is demodulated to obtain an estimate (â) of the coded data, which is then processed by a decoder to obtain an estimate (û) of the original data.
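The bits-to-PPM mapping step can be sketched as follows, assuming a natural binary mapping of log2(M) bits to one of M pulse slots; the actual mapping used in the LDPC-PPM scheme may differ.

```python
# Sketch of mapping coded bits to M-ary PPM symbols (one pulse per symbol slot).
# The natural-binary slot assignment is an assumption for illustration.
import math

def bits_to_ppm(bits, M=16):
    """Map groups of log2(M) bits to M-slot PPM symbols."""
    k = int(math.log2(M))
    symbols = []
    for i in range(0, len(bits), k):
        slot = int(''.join(map(str, bits[i:i + k])), 2)
        symbols.append([1 if j == slot else 0 for j in range(M)])
    return symbols

# two 16-slot symbols; pulses land in slots 0b1011 = 11 and 0b0001 = 1
syms = bits_to_ppm([1, 0, 1, 1, 0, 0, 0, 1], M=16)
```

Each symbol carries log2(M) bits in the position of a single pulse, which is what makes PPM attractive for photon-starved free-space links.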
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3
NASA Technical Reports Server (NTRS)
Lin, Shu
1998-01-01
Decoding algorithms based on the trellis representation of a code (block or convolutional) drastically reduce decoding complexity. The best known and most commonly used trellis-based decoding algorithm is the Viterbi algorithm. It is a maximum likelihood decoding algorithm. Convolutional codes with the Viterbi decoding have been widely used for error control in digital communications over the last two decades. This chapter is concerned with the application of the Viterbi decoding algorithm to linear block codes. First, the Viterbi algorithm is presented. Then, optimum sectionalization of a trellis to minimize the computational complexity of a Viterbi decoder is discussed and an algorithm is presented. Some design issues for IC (integrated circuit) implementation of a Viterbi decoder are considered and discussed. Finally, a new decoding algorithm based on the principle of compare-select-add is presented. This new algorithm can be applied to both block and convolutional codes and is more efficient than the conventional Viterbi algorithm based on the add-compare-select principle. This algorithm is particularly efficient for rate 1/n antipodal convolutional codes and their high-rate punctured codes. It reduces computational complexity by one-third compared with the Viterbi algorithm.
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1994-01-01
When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation, and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code, and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst-error correction capability is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
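The innermost (n, n-16) CRC can be sketched with a bitwise shift-register computation. The polynomial 0x1021 and the all-ones preset below follow the common CRC-16-CCITT convention; the CCSDS recommendation should be consulted for the exact parameters it specifies.

```python
# Bitwise sketch of an (n, n-16) CRC check. Polynomial 0x1021 with all-ones
# preset is the common CRC-16-CCITT convention, assumed here for illustration.

def crc16(data: bytes, poly=0x1021, init=0xFFFF):
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

frame = b"telemetry payload"
codeword = frame + crc16(frame).to_bytes(2, "big")
# n = 8 * len(codeword) total bits, of which n - 16 are information bits
```

A handy property of this non-reflected, zero-xorout CRC: recomputing the CRC over the frame with its 16 check bits appended yields zero, which is exactly the receiver-side error-detection test.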
Efficient convolutional sparse coding
Wohlberg, Brendt
2017-06-20
Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M^3 N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
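The core identity exploited by the frequency-domain solver, that circular convolution becomes elementwise multiplication after an FFT, can be verified numerically in a few lines. This checks the identity only, not the ADMM solver itself.

```python
# Verify the convolution theorem that underlies the O(MN log N) solver:
# circular convolution in the signal domain equals elementwise multiplication
# in the Fourier domain.
import numpy as np

rng = np.random.default_rng(0)
d = rng.standard_normal(8)        # a short dictionary filter
x = rng.standard_normal(64)       # a coefficient map
d_pad = np.zeros_like(x)
d_pad[:d.size] = d                # zero-pad the filter to length N

# FFT route: O(N log N)
direct = np.real(np.fft.ifft(np.fft.fft(d_pad) * np.fft.fft(x)))

# brute-force circular convolution for comparison: O(N^2)
N = x.size
brute = np.array([sum(d_pad[k] * x[(n - k) % N] for k in range(N))
                  for n in range(N)])
```

In the ADMM formulation, the same diagonalization lets the coupled linear system over all M filters be solved independently per frequency bin, which is where the M^3 factor drops out.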
On the error statistics of Viterbi decoding and the performance of concatenated codes
NASA Technical Reports Server (NTRS)
Miller, R. L.; Deutsch, L. J.; Butman, S. A.
1981-01-01
Computer simulation results are presented on the performance of convolutional codes of constraint lengths 7 and 10 concatenated with the (255, 223) Reed-Solomon code (a proposed NASA standard). These results indicate that as much as 0.8 dB can be gained by concatenating this Reed-Solomon code with a (10, 1/3) convolutional code, instead of the (7, 1/2) code currently used by the DSN. A mathematical model of Viterbi decoder burst-error statistics is developed and is validated through additional computer simulations.
Off-resonance artifacts correction with convolution in k-space (ORACLE).
Lin, Wei; Huang, Feng; Simonotto, Enrico; Duensing, George R; Reykowski, Arne
2012-06-01
Off-resonance artifacts hinder the wider applicability of echo-planar imaging and non-Cartesian MRI methods such as radial and spiral. In this work, a general and rapid method is proposed for off-resonance artifacts correction based on data convolution in k-space. The acquired k-space is divided into multiple segments based on their acquisition times. Off-resonance-induced artifact within each segment is removed by applying a convolution kernel, which is the Fourier transform of an off-resonance correcting spatial phase modulation term. The field map is determined from the inverse Fourier transform of a basis kernel, which is calibrated from data fitting in k-space. The technique was demonstrated in phantom and in vivo studies for radial, spiral and echo-planar imaging datasets. For radial acquisitions, the proposed method allows the self-calibration of the field map from the imaging data, when an alternating view-angle ordering scheme is used. An additional advantage for off-resonance artifacts correction based on data convolution in k-space is the reusability of convolution kernels to images acquired with the same sequence but different contrasts.
Information Theory, Inference and Learning Algorithms
NASA Astrophysics Data System (ADS)
Mackay, David J. C.
2003-10-01
Information theory and inference, often taught separately, are here united in one entertaining textbook. These topics lie at the heart of many exciting areas of contemporary science and engineering - communication, signal processing, data mining, machine learning, pattern recognition, computational neuroscience, bioinformatics, and cryptography. This textbook introduces theory in tandem with applications. Information theory is taught alongside practical communication systems, such as arithmetic coding for data compression and sparse-graph codes for error-correction. A toolbox of inference techniques, including message-passing algorithms, Monte Carlo methods, and variational approximations, are developed alongside applications of these tools to clustering, convolutional codes, independent component analysis, and neural networks. The final part of the book describes the state of the art in error-correcting codes, including low-density parity-check codes, turbo codes, and digital fountain codes -- the twenty-first century standards for satellite communications, disk drives, and data broadcast. Richly illustrated, filled with worked examples and over 400 exercises, some with detailed solutions, David MacKay's groundbreaking book is ideal for self-learning and for undergraduate or graduate courses. Interludes on crosswords, evolution, and sex provide entertainment along the way. In sum, this is a textbook on information, communication, and coding for a new generation of students, and an unparalleled entry point into these subjects for professionals in areas as diverse as computational biology, financial engineering, and machine learning.
Scalable video transmission over Rayleigh fading channels using LDPC codes
NASA Astrophysics Data System (ADS)
Bansal, Manu; Kondi, Lisimachos P.
2005-03-01
In this paper, we investigate an important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining a good decoded video quality and resilience to channel impairments. Our system consists of the video codec based on 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity check (LDPC) codes for channel error protection. The first method uses the serial concatenation of the constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. Cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use the product code structure consisting of a constant rate LDPC/CRC code across the rows of the "blocks" of source data and an erasure-correction systematic Reed-Solomon (RS) code as the column code. In both the schemes introduced here, we use fixed-length source packets protected with unequal forward error correction coding ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. The rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions and both the proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform the more conventional schemes such as those employing RCPC/CRC.
Coded spread spectrum digital transmission system design study
NASA Technical Reports Server (NTRS)
Heller, J. A.; Odenwalder, J. P.; Viterbi, A. J.
1974-01-01
Results are presented of a comprehensive study of the performance of Viterbi-decoded convolutional codes in the presence of nonideal carrier tracking and bit synchronization. A constraint length 7, rate 1/3 convolutional code and parameters suitable for the space shuttle coded communications links are used. Mathematical models are developed and theoretical and simulation results are obtained to determine the tracking and acquisition performance of the system. Pseudorandom sequence spread spectrum techniques are also considered to minimize potential degradation caused by multipath.
Channel coding for underwater acoustic single-carrier CDMA communication system
NASA Astrophysics Data System (ADS)
Liu, Lanjun; Zhang, Yonglei; Zhang, Pengcheng; Zhou, Lin; Niu, Jiong
2017-01-01
CDMA is an effective multiple-access protocol for underwater acoustic networks, and channel coding can effectively reduce the bit error rate (BER) of an underwater acoustic communication system. To meet the requirements of underwater acoustic mobile networks based on CDMA, an underwater acoustic single-carrier CDMA communication system (UWA/SCCDMA) based on direct-sequence spread spectrum is proposed, and its channel coding scheme is studied for convolutional, RA, Turbo, and LDPC coding. The implementation steps are given for the Viterbi algorithm of convolutional coding, the BP and minimum-sum algorithms of RA coding, the Log-MAP and SOVA algorithms of Turbo coding, and the sum-product algorithm of LDPC coding. A UWA/SCCDMA simulation system based on Matlab is designed. Simulation results show that UWA/SCCDMA systems based on RA, Turbo, and LDPC coding perform well, with communication BERs below 10^-6 in an underwater acoustic channel at low signal-to-noise ratios (SNRs) from -12 dB to -10 dB, about two orders of magnitude better than convolutional coding. The system based on Turbo coding with the Log-MAP algorithm has the best performance.
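As a point of reference for BER figures like those quoted above, a minimal Monte Carlo simulation of uncoded BPSK over AWGN can be sketched as follows. This is a generic baseline, not the paper's CDMA simulator, and the Eb/N0 operating point is chosen only so the simulation finishes quickly.

```python
# Minimal Monte Carlo BER estimate for uncoded BPSK over AWGN; a generic
# baseline against which coded schemes are usually compared.
import math
import random

def ber_bpsk(eb_n0_db, n_bits=200_000, seed=1):
    random.seed(seed)
    eb_n0 = 10 ** (eb_n0_db / 10)
    sigma = math.sqrt(1 / (2 * eb_n0))   # noise std for unit-energy symbols
    errors = 0
    for _ in range(n_bits):
        bit = random.getrandbits(1)
        tx = 1.0 if bit else -1.0        # BPSK mapping
        rx = tx + random.gauss(0, sigma) # AWGN channel
        errors += (rx > 0) != bool(bit)  # hard decision at the receiver
    return errors / n_bits

# theory: Pb = Q(sqrt(2 Eb/N0)) ~ 1.25e-2 at Eb/N0 = 4 dB
sim = ber_bpsk(4.0)
```

The simulated value should track the theoretical Q-function expression to within Monte Carlo noise; pushing such a simulation down to BERs near 10^-6 requires on the order of 10^8 bits, which is why semi-analytic bounds are often preferred at low error rates.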
Modulation and coding for satellite and space communications
NASA Technical Reports Server (NTRS)
Yuen, Joseph H.; Simon, Marvin K.; Pollara, Fabrizio; Divsalar, Dariush; Miller, Warner H.; Morakis, James C.; Ryan, Carl R.
1990-01-01
Several modulation and coding advances supported by NASA are summarized. To support long-constraint-length convolutional codes, a VLSI maximum-likelihood decoder utilizing parallel processing techniques, which is being developed to decode convolutional codes of constraint length 15 and code rates as low as 1/6, is discussed. A VLSI high-speed 8-bit Reed-Solomon decoder which is being developed for advanced tracking and data relay satellite (ATDRS) applications is discussed. A 300-Mb/s modem with continuous phase modulation (CPM) and coding which is being developed for ATDRS is discussed. Trellis-coded modulation (TCM) techniques are discussed for satellite-based mobile communication applications.
Separable concatenated codes with iterative map decoding for Rician fading channels
NASA Technical Reports Server (NTRS)
Lodge, J. H.; Young, R. J.
1993-01-01
Very efficient signalling in radio channels requires the design of very powerful codes having special structure suitable for practical decoding schemes. In this paper, powerful codes are obtained by combining comparatively simple convolutional codes to form multi-tiered 'separable' convolutional codes. The decoding of these codes, using separable symbol-by-symbol maximum a posteriori (MAP) 'filters', is described. It is known that this approach yields impressive results in non-fading additive white Gaussian noise channels. Interleaving is an inherent part of the code construction, and consequently, these codes are well suited for fading channel communications. Here, simulation results for communications over Rician fading channels are presented to support this claim.
Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.
Lu, Xiaoqiang; Chen, Yaxiong; Li, Xuelong
Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. Traditional hashing methods usually learn hash functions that obtain binary codes from hand-crafted features, which cannot optimally represent the information in the samples. Recently, deep learning methods have achieved better performance, since deep architectures can learn more effective image representation features. However, these methods use only semantic features to generate hash codes by shallow projection and ignore texture details. In this paper, we propose a novel hashing method, hierarchical recurrent neural hashing (HRNH), which exploits a hierarchical recurrent neural network to generate effective hash codes. This paper makes three contributions. First, a deep hashing method is proposed that extensively exploits both spatial details and semantic information, leveraging hierarchical convolutional features to construct an image pyramid representation. Second, the proposed deep network can exploit convolutional feature maps directly as input to preserve their spatial structure. Finally, we propose a new loss function that accounts for the quantization error of binarizing the continuous embeddings into discrete binary codes while maintaining the semantic similarity and balance properties of the hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH achieves superior performance over other state-of-the-art hashing methods.
DSN telemetry system performance with convolutionally coded data
NASA Technical Reports Server (NTRS)
Mulhall, B. D. L.; Benjauthrit, B.; Greenhall, C. A.; Kuma, D. M.; Lam, J. K.; Wong, J. S.; Urech, J.; Vit, L. D.
1975-01-01
The results obtained to date and the plans for future experiments for the DSN telemetry system are presented. The performance of the DSN telemetry system in decoding convolutionally coded data by both sequential and maximum likelihood techniques is being determined by testing at various deep space stations. The evaluation of performance models is also an objective of this activity.
Annunziata, Roberto; Trucco, Emanuele
2016-11-01
Deep learning has shown great potential for curvilinear structure (e.g., retinal blood vessels and neurites) segmentation as demonstrated by a recent auto-context regression architecture based on filter banks learned by convolutional sparse coding. However, learning such filter banks is very time-consuming, thus limiting the number of filters employed and the adaptation to other data sets (i.e., slow re-training). We address this limitation by proposing a novel acceleration strategy to speed up convolutional sparse coding filter learning for curvilinear structure segmentation. Our approach is based on a novel initialisation strategy (warm start), and therefore it is different from recent methods improving the optimisation itself. Our warm-start strategy is based on carefully designed hand-crafted filters (SCIRD-TS), modelling appearance properties of curvilinear structures which are then refined by convolutional sparse coding. Experiments on four diverse data sets, including retinal blood vessels and neurites, suggest that the proposed method significantly reduces the time taken to learn convolutional filter banks (by up to 82%) compared to conventional initialisation strategies. Remarkably, this speed-up does not worsen performance; in fact, filters learned with the proposed strategy often achieve a much lower reconstruction error and match or exceed the segmentation performance of random and DCT-based initialisation, when used as input to a random forest classifier.
The VLSI design of an error-trellis syndrome decoder for certain convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Jensen, J. M.; Hsu, I.-S.; Truong, T. K.
1986-01-01
A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example, illustrating the very large scale integration (VLSI) architecture of such a decoder, is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.
The VLSI design of error-trellis syndrome decoding for convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Jensen, J. M.; Truong, T. K.; Hsu, I. S.
1985-01-01
A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example, illustrating the very large scale integration (VLSI) architecture of such a decoder, is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.
2006-06-01
called packet binary convolutional code (PBCC), was included as an option for performance at rates of either 5.5 or 11 Mbps. The second offshoot...and the code rate is r = k/n. A general convolutional encoder can be implemented with k shift registers and n modulo-2 adders. Higher rates can be...derived from lower rate codes by employing "puncturing." Puncturing is a procedure for omitting some of the encoded bits in the transmitter (thus
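The snippet above names the two generic operations of a punctured convolutional code: encoding with k shift registers and n modulo-2 adders (rate r = k/n), then omitting some coded bits to raise the rate. A minimal sketch in Python, assuming an illustrative rate-1/2, constraint-length-3 code (generators 7 and 5 octal) and an illustrative puncture pattern, neither of which is taken from the standard discussed above:

```python
# Illustrative rate-1/2, K=3 convolutional encoder (generators 7, 5 octal),
# punctured to rate 3/4. Generator and pattern choices are assumptions for
# demonstration only.

G = [0b111, 0b101]  # generator polynomials

def conv_encode(bits):
    """Encode with a single shift register and n=2 modulo-2 adders."""
    window = 0
    out = []
    for b in bits:
        window = ((window << 1) | b) & 0b111  # 3-bit window (K = 3)
        for g in G:
            out.append(bin(window & g).count("1") % 2)  # modulo-2 adder
    return out

def puncture(coded, pattern=(1, 1, 0, 1, 1, 0)):
    """Omit coded bits where the pattern is 0; 6 in -> 4 out gives rate 3/4."""
    reps = len(coded) // len(pattern) + 1
    return [c for c, keep in zip(coded, pattern * reps) if keep]

msg = [1, 0, 1, 1, 0, 0]
coded = conv_encode(msg)   # 12 coded bits: rate 1/2
sent = puncture(coded)     # 8 bits survive: overall rate 6/8 = 3/4
```

At the receiver, the punctured positions would be re-inserted as erasures before Viterbi decoding, so a single decoder serves all derived rates.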
Performance of convolutional codes on fading channels typical of planetary entry missions
NASA Technical Reports Server (NTRS)
Modestino, J. W.; Mui, S. Y.; Reale, T. J.
1974-01-01
The performance of convolutional codes in fading channels typical of the planetary entry channel is examined in detail. The signal fading is due primarily to turbulent atmospheric scattering of the RF signal transmitted from an entry probe through a planetary atmosphere. Short constraint length convolutional codes are considered in conjunction with binary phase-shift keyed modulation and Viterbi maximum likelihood decoding, and for longer constraint length codes sequential decoding utilizing both the Fano and Zigangirov-Jelinek (ZJ) algorithms is considered. Careful consideration is given to the modeling of the channel in terms of a few meaningful parameters which can be correlated closely with theoretical propagation studies. For short constraint length codes, the bit error probability performance was investigated as a function of E sub b/N sub o, parameterized by the fading channel parameters. For longer constraint length codes, the effect of the fading channel parameters on the computational requirements of both the Fano and ZJ algorithms was examined. The effects of simple block interleaving in combatting the memory of the channel are explored, using the analytic approach or digital computer simulation.
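The block interleaving mentioned at the end of the abstract can be sketched in a few lines; the row-by-column scheme below is the textbook form and only an assumption about the exact scheme the study used:

```python
# Textbook block interleaver (assumed form): symbols are written into a
# rows-by-cols array row by row and read out column by column, so a burst of
# consecutive channel fades is dispersed before the decoder sees it.

def interleave(symbols, rows, cols):
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    # reading by columns is undone by interleaving with swapped dimensions
    return interleave(symbols, cols, rows)

data = list(range(12))
shuffled = interleave(data, 3, 4)        # neighbours end up 4 symbols apart
assert deinterleave(shuffled, 3, 4) == data
```

A fade wiping out 3 consecutive transmitted symbols now corrupts symbols that are far apart in the decoder's input, which is exactly the memory-combatting effect the abstract describes.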
NASA Technical Reports Server (NTRS)
Rao, T. R. N.; Seetharaman, G.; Feng, G. L.
1996-01-01
With the development of new advanced instruments for remote sensing applications, sensor data will be generated at a rate that not only requires increased onboard processing and storage capability, but imposes demands on the space-to-ground communication link and ground data management-communication system. Data compression and error control codes provide viable means to alleviate these demands. Two types of data compression have been studied by many researchers in the area of information theory: a lossless technique that guarantees full reconstruction of the data, and a lossy technique which generally gives a higher data compaction ratio but incurs some distortion in the reconstructed data. To satisfy the many science disciplines which NASA supports, lossless data compression becomes a primary focus for the technology development. While transmitting the data obtained by any lossless data compression, it is very important to use some error-control code. For a long time, convolutional codes have been widely used in satellite telecommunications. To more efficiently transmit the data obtained by the Rice algorithm, it is required to compute the a posteriori probability (APP) for each decoded bit. A relevant algorithm for this purpose has been proposed which minimizes the bit error probability in decoding linear block and convolutional codes and provides the APP for each decoded bit. However, recent results on iterative decoding of 'Turbo codes' turn conventional wisdom on its head and suggest fundamentally new techniques.
During the past several months of this research, the following approaches have been developed: (1) a new lossless data compression algorithm, which is much better than the extended Rice algorithm for various types of sensor data, (2) a new approach to determine the generalized Hamming weights of the algebraic-geometric codes defined by a large class of curves in high-dimensional spaces, (3) some efficient improved geometric Goppa codes for disk memory systems and high-speed mass memory systems, and (4) a tree based approach for data compression using dynamic programming.
NASA Astrophysics Data System (ADS)
Kurceren, Ragip; Modestino, James W.
1998-12-01
The use of forward error-control (FEC) coding, possibly in conjunction with ARQ techniques, has emerged as a promising approach for video transport over ATM networks for cell-loss recovery and/or bit error correction, such as might be required for wireless links. Although FEC provides cell-loss recovery capabilities it also introduces transmission overhead which can possibly cause additional cell losses. A methodology is described to maximize the number of video sources multiplexed at a given quality of service (QoS), measured in terms of decoded cell loss probability, using interlaced FEC codes. The transport channel is modelled as a block interference channel (BIC) and the multiplexer as single server, deterministic service, finite buffer supporting N users. Based upon an information-theoretic characterization of the BIC and large deviation bounds on the buffer overflow probability, the described methodology provides theoretically achievable upper limits on the number of sources multiplexed. Performance of specific coding techniques using interlaced nonbinary Reed-Solomon (RS) codes and binary rate-compatible punctured convolutional (RCPC) codes is illustrated.
Combined coding and delay-throughput analysis for fading channels of mobile satellite communications
NASA Technical Reports Server (NTRS)
Wang, C. C.; Yan, Tsun-Yee
1986-01-01
This paper presents the analysis of using the punctured convolutional code with Viterbi decoding to improve communications reliability. The punctured code rate is optimized so that the average delay is minimized. The coding gain in terms of the message delay is also defined. Since using punctured convolutional code with interleaving is still inadequate to combat the severe fading for short packets, the use of multiple copies of assignment and acknowledgment packets is suggested. The performance on the average end-to-end delay of this protocol is analyzed. It is shown that a replication of three copies for both assignment packets and acknowledgment packets is optimum for the cases considered.
Short, unit-memory, Byte-oriented, binary convolutional codes having maximal free distance
NASA Technical Reports Server (NTRS)
Lee, L. N.
1975-01-01
It is shown that (n sub 0, k sub 0) convolutional codes with unit memory always achieve the largest free distance among all codes of the same rate k sub 0/n sub 0 and same number 2(exp Mk sub 0) of encoder states, where M is the encoder memory. A unit-memory code with maximal free distance is given at each place where this free distance exceeds that of the best code with k sub 0 and n sub 0 relatively prime, for all Mk sub 0 less than or equal to 6 and for R = 1/2, 1/3, 1/4, 2/3. It is shown that the unit-memory codes are byte-oriented in such a way as to be attractive for use in concatenated coding systems.
Serang, Oliver
2014-01-01
Exact Bayesian inference can sometimes be performed efficiently for special cases where a function has commutative and associative symmetry of its inputs (called "causal independence"). For this reason, it is desirable to exploit such symmetry on big data sets. Here we present a method to exploit a general form of this symmetry on probabilistic adder nodes by transforming those probabilistic adder nodes into a probabilistic convolution tree with which dynamic programming computes exact probabilities. A substantial speedup is demonstrated using an illustration example that can arise when identifying splice forms with bottom-up mass spectrometry-based proteomics. On this example, even state-of-the-art exact inference algorithms require a runtime more than exponential in the number of splice forms considered. By using the probabilistic convolution tree, we reduce the runtime to O(k log(k)2) and the space to O(k log(k)) where k is the number of variables joined by an additive or cardinal operator. This approach, which can also be used with junction tree inference, is applicable to graphs with arbitrary dependency on counting variables or cardinalities and can be used on diverse problems and fields like forward error correcting codes, elemental decomposition, and spectral demixing. The approach also trivially generalizes to multiple dimensions.
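The core data structure can be illustrated with a toy sketch: the distribution of a sum of k independent count variables is obtained by convolving the input distributions pairwise in a balanced tree, which gives the logarithmic depth behind the stated runtime. The simplified version below uses naive convolution and omits the FFT acceleration and the backward (evidence-passing) phase described in the paper:

```python
# Simplified probabilistic convolution tree: combine k count distributions
# pairwise in a balanced tree. Naive O(n^2) convolution stands in for the
# FFT convolution used in the paper; forward pass only.

def convolve(p, q):
    """Distribution of X + Y for independent X ~ p, Y ~ q."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def convolution_tree(dists):
    layer = list(dists)
    while len(layer) > 1:  # O(log k) layers
        nxt = [convolve(layer[i], layer[i + 1])
               for i in range(0, len(layer) - 1, 2)]
        if len(layer) % 2:
            nxt.append(layer[-1])  # odd element is carried up unchanged
        layer = nxt
    return layer[0]

# four fair coins: the number of heads is binomial(4, 1/2)
coins = [[0.5, 0.5]] * 4
print(convolution_tree(coins))  # [0.0625, 0.25, 0.375, 0.25, 0.0625]
```

Replacing the inner loop with FFT convolution yields the O(k log(k)2) runtime quoted in the abstract.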
Truncation Depth Rule-of-Thumb for Convolutional Codes
NASA Technical Reports Server (NTRS)
Moision, Bruce
2009-01-01
In this innovation, it is shown that a commonly used rule of thumb (that the truncation depth of a convolutional code should be five times the memory length, m, of the code) is accurate only for rate 1/2 codes. In fact, the truncation depth should be 2.5 m/(1 - r), where r is the code rate. The accuracy of this new rule is demonstrated by tabulating the distance properties of a large set of known codes. This new rule was derived by bounding the losses due to truncation as a function of the code rate. With regard to particular codes, a good indicator of the required truncation depth is the path length at which all paths that diverge from a particular path have accumulated the minimum distance of the code. It is shown that the new rule of thumb provides an accurate prediction of this depth for codes of varying rates.
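The rule is straightforward to apply. The sketch below tabulates the new rule 2.5 m/(1 - r) against the classical 5 m rule for an assumed memory-6 code, showing agreement only at rate 1/2, as claimed above:

```python
# Truncation depth per the rule of thumb above: 2.5 * m / (1 - r).
# The memory m = 6 (a 64-state code) is an assumed example value.

def truncation_depth(m, r):
    """Truncation depth for encoder memory m and code rate r."""
    return 2.5 * m / (1.0 - r)

m = 6
for r in (1 / 2, 2 / 3, 3 / 4):
    print(f"rate {r:.3f}: new rule {truncation_depth(m, r):5.1f}, "
          f"classical 5m {5 * m}")
```

Only at r = 1/2 do the two rules coincide (both give 30 branches here); at rate 3/4 the classical rule underestimates the required depth by a factor of two.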
A long constraint length VLSI Viterbi decoder for the DSN
NASA Technical Reports Server (NTRS)
Statman, J. I.; Zimmerman, G.; Pollara, F.; Collins, O.
1988-01-01
A Viterbi decoder, capable of decoding convolutional codes with constraint lengths up to 15, is under development for the Deep Space Network (DSN). The objective is to complete a prototype of this decoder by late 1990, and demonstrate its performance using the (15, 1/4) encoder in Galileo. The decoder is expected to provide 1 to 2 dB improvement in bit SNR, compared to the present (7, 1/2) code and existing Maximum Likelihood Convolutional Decoder (MCD). The decoder will be fully programmable for any code up to constraint length 15, and code rate 1/2 to 1/6. The decoder architecture and top-level design are described.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.
1986-01-01
High rate concatenated coding systems with trellis inner codes and Reed-Solomon (RS) outer codes for application in satellite communication systems are considered. Two types of inner codes are studied: high rate punctured binary convolutional codes, which result in overall effective information rates between 1/2 and 1 bit per channel use; and bandwidth efficient signal space trellis codes, which can achieve overall effective information rates greater than 1 bit per channel use. Channel capacity calculations with and without side information were performed for the concatenated coding system. Concatenated coding schemes are investigated. In Scheme 1, the inner code is decoded with the Viterbi algorithm and the outer RS code performs error-correction only (decoding without side information). In Scheme 2, the inner code is decoded with a modified Viterbi algorithm which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, while branch metrics are used to provide the reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. These two schemes are proposed for use on NASA satellite channels. Results indicate that high system reliability can be achieved with little or no bandwidth expansion.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1994-01-01
The unequal error protection capabilities of convolutional and trellis codes are studied. In certain environments, a discrepancy in the amount of error protection placed on different information bits is desirable. Examples of environments which have data of varying importance are a number of speech coding algorithms, packet switched networks, multi-user systems, embedded coding systems, and high definition television. Encoders which provide more than one level of error protection to information bits are called unequal error protection (UEP) codes. In this work, the effective free distance vector, d, is defined as an alternative to the free distance as a primary performance parameter for UEP convolutional and trellis encoders. For a given (n, k) convolutional encoder, G, the effective free distance vector is defined as the k-dimensional vector d = (d(sub 0), d(sub 1), ..., d(sub k-1)), where d(sub j), the j(exp th) effective free distance, is the lowest Hamming weight among all code sequences that are generated by input sequences with at least one '1' in the j(exp th) position. It is shown that, although the free distance for a code is unique to the code and independent of the encoder realization, the effective free distance vector is dependent on the encoder realization.
Investigation of Near Shannon Limit Coding Schemes
NASA Technical Reports Server (NTRS)
Kwatra, S. C.; Kim, J.; Mo, Fan
1999-01-01
Turbo codes can deliver performance that is very close to the Shannon limit. This report investigates algorithms for convolutional turbo codes and block turbo codes. Both coding schemes can achieve performance near the Shannon limit. The performance of the schemes is obtained using computer simulations. There are three sections in this report. The first section is the introduction. The fundamental knowledge about coding, block coding and convolutional coding is discussed. In the second section, the basic concepts of convolutional turbo codes are introduced and the performance of turbo codes, especially high rate turbo codes, is provided from the simulation results. After introducing all the parameters that help turbo codes achieve such a good performance, it is concluded that the output weight distribution should be the main consideration in designing turbo codes. Based on the output weight distribution, the performance bounds for turbo codes are given. Then, the relationships between the output weight distribution and factors like the generator polynomial, interleaver and puncturing pattern are examined. The criterion for the best selection of system components is provided. The puncturing pattern algorithm is discussed in detail. Different puncturing patterns are compared for each high rate. For most of the high rate codes, the puncturing pattern does not show any significant effect on the code performance if a pseudo-random interleaver is used in the system. For some special rate codes with poor performance, an alternative puncturing algorithm is designed which restores their performance close to the Shannon limit. Finally, in section three, for iterative decoding of block codes, the method of building a trellis for block codes, the structure of the iterative decoding system and the calculation of extrinsic values are discussed.
2006-08-25
interleaving schemes defined in the 802.11a standard, although only the 6 Mbps data rate with BPSK and 1/2 convolutional coding and puncturing is used in our...16-QAM/64-QAM; convolutional code K = 7 (64 states); coding rates 1/2, 2/3, 3/4; channel spacing 20 or 10 MHz; Signal...Since 3G systems need to be backward compatible with 2G systems, they are a combination of existing and evolved equipment with data rates up to 2 Mbps
Real-time minimal-bit-error probability decoding of convolutional codes
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1974-01-01
A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.
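For comparison purposes, the baseline Viterbi decoder referred to above can be sketched compactly. The toy below is a hard-decision (rather than soft-decision) decoder for an assumed rate-1/2, constraint-length-3 code with generators 7 and 5 octal, with traceback deferred to the end of the block; a true real-time version would instead release each bit a fixed Delta branches after it enters the trellis:

```python
# Hard-decision Viterbi decoding for an assumed rate-1/2, K=3 code
# (generators 7 and 5 octal). Traceback is deferred to the end of the block;
# a fixed-delay decoder would release bits Delta branches behind the frontier.

G = (0b111, 0b101)

def encode(bits):
    window = 0
    out = []
    for b in bits:
        window = ((window << 1) | b) & 0b111
        out += [bin(window & g).count("1") % 2 for g in G]
    return out

def viterbi_decode(received):
    INF = float("inf")
    metric = [0.0, INF, INF, INF]      # 4 states = last two input bits
    paths = [[], [], [], []]           # survivor input sequences
    for t in range(len(received) // 2):
        r = received[2 * t: 2 * t + 2]
        new_metric, new_paths = [INF] * 4, [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                window = (s << 1) | b
                out = [bin(window & g).count("1") % 2 for g in G]
                d = metric[s] + sum(o != x for o, x in zip(out, r))  # Hamming
                ns = window & 0b011
                if d < new_metric[ns]:
                    new_metric[ns], new_paths[ns] = d, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(4), key=metric.__getitem__)]

msg = [1, 0, 1, 1, 0, 0]
rx = encode(msg)
rx[3] ^= 1                             # inject one channel bit error
assert viterbi_decode(rx) == msg       # the error is corrected
```

The minimal-bit-error-probability algorithm of the abstract would replace the max-selection over survivors with a sum over all paths (per-bit posterior probabilities), at somewhat higher complexity.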
Real-time minimal bit error probability decoding of convolutional codes
NASA Technical Reports Server (NTRS)
Lee, L. N.
1973-01-01
A recursive procedure is derived for decoding of rate R=1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e. fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as in the inner coding system for concatenated coding.
Spectral characteristics of convolutionally coded digital signals
NASA Technical Reports Server (NTRS)
Divsalar, D.
1979-01-01
The power spectral density of the output symbol sequence of a convolutional encoder is computed for two different input symbol stream source models, namely, an NRZ signaling format and a first order Markov source. In the former, the two signaling states of the binary waveform are not necessarily assumed to occur with equal probability. The effects of alternate symbol inversion on this spectrum are also considered. The mathematical results are illustrated with many examples corresponding to optimal performance codes.
New syndrome decoder for (n, 1) convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
The letter presents a new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different from, and simpler than, the previous syndrome decoding algorithm of Schalkwijk and Vinck. The new technique uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). A recursive, Viterbi-like algorithm is developed to find the minimum weight error vector E(D). An example is given for the binary nonsystematic (2, 1) CC.
Multiple-Symbol Noncoherent Decoding of Uncoded and Convolutionally Coded Continuous Phase Modulation
NASA Technical Reports Server (NTRS)
Divsalar, D.; Raphaeli, D.
2000-01-01
Recently, a method for combined noncoherent detection and decoding of trellis-codes (noncoherent coded modulation) has been proposed, which can practically approach the performance of coherent detection.
NASA Astrophysics Data System (ADS)
Pei, Yong; Modestino, James W.
2004-12-01
Digital video delivered over wired-to-wireless networks is expected to suffer quality degradation from both packet loss and bit errors in the payload. In this paper, the quality degradation due to packet loss and bit errors in the payload are quantitatively evaluated and their effects are assessed. We propose the use of a concatenated forward error correction (FEC) coding scheme employing Reed-Solomon (RS) codes and rate-compatible punctured convolutional (RCPC) codes to protect the video data from packet loss and bit errors, respectively. Furthermore, the performance of a joint source-channel coding (JSCC) approach employing this concatenated FEC coding scheme for video transmission is studied. Finally, we describe an improved end-to-end architecture using an edge proxy in a mobile support station to implement differential error protection for the corresponding channel impairments expected on the two networks. Results indicate that with an appropriate JSCC approach and the use of an edge proxy, FEC-based error-control techniques together with passive error-recovery techniques can significantly improve the effective video throughput and lead to acceptable video delivery quality over time-varying heterogeneous wired-to-wireless IP networks.
Boyd, Andrew D; Li, Jianrong John; Kenost, Colleen; Joese, Binoy; Yang, Young Min; Kalagidis, Olympia A; Zenku, Ilir; Saner, Donald; Bahroos, Neil; Lussier, Yves A
2015-05-01
In the United States, International Classification of Disease Clinical Modification (ICD-9-CM, the ninth revision) diagnosis codes are commonly used to identify patient cohorts and to conduct financial analyses related to disease. In October 2015, the healthcare system of the United States will transition to ICD-10-CM (the tenth revision) diagnosis codes. One challenge posed to clinical researchers and other analysts is conducting diagnosis-related queries across datasets containing both coding schemes. Further, healthcare administrators will manage growth, trends, and strategic planning with these dually-coded datasets. The majority of the ICD-9-CM to ICD-10-CM translations are complex and nonreciprocal, creating convoluted representations and meanings. Similarly, mapping back from ICD-10-CM to ICD-9-CM is equally complex, yet different from mapping forward, as relationships are likewise nonreciprocal. Indeed, 10 of the 21 top clinical categories are complex, as 78% of their diagnosis codes are labeled as "convoluted" by our analyses. Analysis and research related to external causes of morbidity, injury, and poisoning will face the greatest challenges due to 41 745 (90%) convolutions and a decrease in the number of codes. We created a web portal tool and translation tables to list all ICD-9-CM diagnosis codes related to the specific input of ICD-10-CM diagnosis codes and their level of complexity: "identity" (reciprocal), "class-to-subclass," "subclass-to-class," "convoluted," or "no mapping." These tools provide guidance on ambiguous and complex translations to reveal where reports or analyses may be challenging to impossible. Web portal: http://www.lussierlab.org/transition-to-ICD9CM/. Tables annotated with levels of translation complexity: http://www.lussierlab.org/publications/ICD10to9. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
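The five complexity levels can be made concrete with a toy classifier over forward and backward mapping tables. The mappings below are invented for illustration and are not the published GEM translations:

```python
# Toy classifier for translation complexity between two coding schemes,
# using the five levels named in the abstract. The example mappings are
# fabricated for illustration, not the real ICD GEM tables.

def classify(code, fwd, bwd):
    """fwd: ICD-9 code -> set of ICD-10 codes; bwd: the reverse direction."""
    targets = fwd.get(code, set())
    if not targets:
        return "no mapping"
    if len(targets) == 1:
        back = bwd[next(iter(targets))]
        if back == {code}:
            return "identity"          # one-to-one and reciprocal
        return "subclass-to-class"     # several old codes merge into one new
    if all(bwd[t] == {code} for t in targets):
        return "class-to-subclass"     # one old code splits cleanly
    return "convoluted"                # tangled many-to-many translation

fwd = {"001.0": {"A00.0"},
       "003.8": {"A02.8", "A02.9"}}    # hypothetical entries
bwd = {"A00.0": {"001.0"},
       "A02.8": {"003.8"},
       "A02.9": {"003.8", "003.9"}}

print(classify("001.0", fwd, bwd))     # identity
print(classify("003.8", fwd, bwd))     # convoluted
```

The "convoluted" case is exactly the nonreciprocal many-to-many situation the abstract warns about: neither direction of the query can be answered by a simple lookup.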
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.
QR code optical encryption using spatially incoherent illumination
NASA Astrophysics Data System (ADS)
Cheremkhin, P. A.; Krasnov, V. V.; Rodin, V. G.; Starikov, R. S.
2017-02-01
Optical encryption is an actively developing field of science. The majority of encryption techniques use coherent illumination and suffer from speckle noise, which severely limits their applicability. The spatially incoherent encryption technique does not have this drawback, but its effectiveness is dependent on the Fourier spectrum properties of the image to be encrypted. The application of a quick response (QR) code in the capacity of a data container solves this problem, and the embedded error correction code also enables errorless decryption. The optical encryption of digital information in the form of QR codes using spatially incoherent illumination was implemented experimentally. The encryption is based on the optical convolution of the image to be encrypted with the kinoform point spread function, which serves as an encryption key. Two liquid crystal spatial light modulators were used in the experimental setup for the QR code and the kinoform imaging, respectively. The quality of the encryption and decryption was analyzed in relation to the QR code size. Decryption was conducted digitally. The successful decryption of encrypted QR codes of up to 129 × 129 pixels was demonstrated. A comparison with the coherent QR code encryption technique showed that the proposed technique has a signal-to-noise ratio that is at least two times higher.
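The encryption step itself is a convolution of intensities, since spatially incoherent light adds in intensity rather than amplitude. A minimal numerical sketch, with toy stand-ins for the QR code and for the kinoform point spread function (decryption, which would invert the convolution before QR decoding, is omitted):

```python
# Incoherent optical encryption as intensity convolution: the ciphertext is
# the 2D convolution of the QR-code intensity image with the key's point
# spread function. The tiny image and PSF below are illustrative stand-ins.

def conv2d_full(img, psf):
    """Full 2D linear convolution of two non-negative intensity arrays."""
    H, W = len(img), len(img[0])
    h, w = len(psf), len(psf[0])
    out = [[0.0] * (W + w - 1) for _ in range(H + h - 1)]
    for i in range(H):
        for j in range(W):
            if img[i][j]:
                for a in range(h):
                    for b in range(w):
                        out[i + a][j + b] += img[i][j] * psf[a][b]
    return out

qr = [[1, 0, 1],
      [0, 1, 0],
      [1, 0, 1]]          # stand-in for a QR-code intensity pattern
key = [[0.25, 0.25],
       [0.25, 0.25]]      # stand-in point spread function (encryption key)

cipher = conv2d_full(qr, key)   # 4x4 intensity distribution on the sensor
```

Because intensities are non-negative, total energy is conserved up to the PSF's gain, and the QR code's built-in error correction absorbs residual noise after deconvolution.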
Concatenated Coding Using Trellis-Coded Modulation
NASA Technical Reports Server (NTRS)
Thompson, Michael W.
1997-01-01
In the late seventies and early eighties a technique known as Trellis Coded Modulation (TCM) was developed for providing spectrally efficient error correction coding. Instead of adding redundant information in the form of parity bits, redundancy is added at the modulation stage, thereby increasing bandwidth efficiency. A digital communications system can be designed to use bandwidth-efficient multilevel/phase modulation such as Amplitude Shift Keying (ASK), Phase Shift Keying (PSK), Differential Phase Shift Keying (DPSK) or Quadrature Amplitude Modulation (QAM). Performance gain can be achieved by increasing the number of signals over the corresponding uncoded system to compensate for the redundancy introduced by the code. A considerable amount of research and development has been devoted toward developing good TCM codes for severely bandlimited applications. More recently, the use of TCM for satellite and deep space communications applications has received increased attention. This report describes the general approach of using a concatenated coding scheme that features TCM and RS coding. Results have indicated that substantial (6-10 dB) performance gains can be achieved with this approach with comparatively little bandwidth expansion. Since all of the bandwidth expansion is due to the RS code, we see that TCM-based concatenated coding results in roughly 10-50% bandwidth expansion, compared to 70-150% expansion for similar concatenated schemes which use convolutional codes. We stress that combined coding and modulation optimization is important for achieving performance gains while maintaining spectral efficiency.
The use of interleaving for reducing radio loss in convolutionally coded systems
NASA Technical Reports Server (NTRS)
Divsalar, D.; Simon, M. K.; Yuen, J. H.
1989-01-01
The use of interleaving after convolutional coding and deinterleaving before Viterbi decoding is proposed. This effectively reduces radio loss at low-loop Signal to Noise Ratios (SNRs) by several decibels and at high-loop SNRs by a few tenths of a decibel. Performance of the coded system can further be enhanced if the modulation index is optimized for this system. This will correspond to a reduction of bit SNR at a certain bit error rate for the overall system. The introduction of interleaving/deinterleaving into communication systems designed for future deep space missions does not substantially complicate their hardware design or increase their system cost.
NASA Technical Reports Server (NTRS)
Lee, P. J.
1984-01-01
For rate 1/N convolutional codes, a recursive algorithm for finding the transfer function bound on bit error rate (BER) at the output of a Viterbi decoder is described. This technique is very fast and requires very little storage since all the unnecessary operations are eliminated. Using this technique, we find and plot bounds on the BER performance of known codes of rate 1/2 with K less than or equal to 18, and rate 1/3 with K less than or equal to 14. When more than one reported code with the same parameter is known, we select the code that minimizes the required signal to noise ratio for a desired bit error rate of 0.000001. This criterion of determining the goodness of a code had previously been found to be more useful than the maximum free distance criterion and was used in the code search procedures of very short constraint length codes. This very efficient technique can also be used for searches of longer constraint length codes.
Convolutional code performance in planetary entry channels
NASA Technical Reports Server (NTRS)
Modestino, J. W.
1974-01-01
The planetary entry channel is modeled for communication purposes, representing turbulent atmospheric scattering effects. The performance of short and long constraint length convolutional codes is investigated in conjunction with coherent BPSK modulation and Viterbi maximum likelihood decoding. Algorithms for sequential decoding are studied in terms of computation and/or storage requirements as a function of the fading channel parameters. The performance of the coded coherent BPSK system is compared with the coded incoherent MFSK system. Results indicate that: some degree of interleaving is required to combat time-correlated fading of the channel; only modest amounts of interleaving are required to approach the performance of the memoryless channel; additional propagational results are required on the phase perturbation process; and the incoherent MFSK system is superior when phase tracking errors are considered.
NASA Astrophysics Data System (ADS)
Maslakov, M. L.
2018-04-01
This paper examines the solution of convolution-type integral equations of the first kind by applying the Tikhonov regularization method with two-parameter stabilizing functions. The class of stabilizing functions is expanded in order to improve the accuracy of the resulting solution. The features of the problem formulation for identification and adaptive signal correction are described. A method for choosing regularization parameters in problems of identification and adaptive signal correction is suggested.
Turbo Trellis Coded Modulation With Iterative Decoding for Mobile Satellite Communications
NASA Technical Reports Server (NTRS)
Divsalar, D.; Pollara, F.
1997-01-01
In this paper, analytical bounds on the performance of parallel concatenation of two codes, known as turbo codes, and serial concatenation of two codes over fading channels are obtained. Based on this analysis, design criteria for the selection of component trellis codes for MPSK modulation, and a suitable bit-by-bit iterative decoding structure, are proposed. Examples are given for a throughput of 2 bits/sec/Hz with 8PSK modulation. The parallel concatenation example uses two rate 4/5 8-state convolutional codes with two interleavers. The convolutional codes' outputs are then mapped to two 8PSK modulations. The serial concatenated code example uses an 8-state outer code with rate 4/5 and a 4-state inner trellis code with 5 inputs and 2 x 8PSK outputs per trellis branch. Based on the above-mentioned design criteria for fading channels, a method to obtain the structure of the trellis code with maximum diversity is proposed. Simulation results are given for AWGN and an independent Rayleigh fading channel with perfect Channel State Information (CSI).
FEC combined burst-modem for business satellite communications use
NASA Astrophysics Data System (ADS)
Murakami, K.; Miyake, M.; Fuji, T.; Moritani, Y.; Fujino, T.
The authors recently developed two types of FEC (forward error correction) combined modems, both applicable to low-data-rate and intermediate-data-rate TDMA international satellite communications. Each FEC combined modem consists of a QPSK (quadrature phase-shift keyed) modem, a convolutional encoder, and a Viterbi decoder. Both modems are designed taking into consideration the fast acquisition of the carrier and bit timing and the low cycle slipping rate in the low-carrier-to-noise-ratio environment. Attention is paid to designing the Viterbi decoder to be operated in a situation in which successive bursts may have different coding rates according to the punctured coding scheme. The overall schemes of the FEC combined modems are presented, and some of the key technologies applied in developing them are outlined. The hardware implementation and experimentation are also discussed. The measured data are compared with results of theoretical analysis, and relatively good performances are obtained.
Processing circuit with asymmetry corrector and convolutional encoder for digital data
NASA Technical Reports Server (NTRS)
Pfiffner, Harold J. (Inventor)
1987-01-01
A processing circuit is provided for correcting for input parameter variations, such as data and clock signal symmetry, phase offset and jitter, noise and signal amplitude, in incoming data signals. An asymmetry corrector circuit performs the correcting function and furnishes the corrected data signals to a convolutional encoder circuit. The corrector circuit further forms a regenerated clock signal from clock pulses in the incoming data signals and another clock signal at a multiple of the incoming clock signal. These clock signals are furnished to the encoder circuit so that encoded data may be furnished to a modulator at a high data rate for transmission.
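The encoder stage described above can be sketched in a few lines. This is a minimal illustration assuming the NASA-standard rate-1/2, constraint-length-7 convolutional code (generator polynomials 171 and 133 octal), not the specific circuit of the patent:

```python
# Rate-1/2, K=7 convolutional encoder sketch (assumed generators 171/133 octal).
# The newest input bit occupies the MSB of the 7-bit shift register.

G1, G2 = 0o171, 0o133  # generator polynomials

def conv_encode(bits, k=7):
    state = 0  # shift register holding the last k input bits
    out = []
    for b in bits:
        # shift right, insert the new bit at the MSB
        state = ((state >> 1) | (b << (k - 1))) & ((1 << k) - 1)
        # each output symbol is the parity of the tapped register bits
        out.append(bin(state & G1).count("1") % 2)
        out.append(bin(state & G2).count("1") % 2)
    return out
```

Each input bit produces two coded symbols, so a corrected-and-regenerated clock at twice the input rate (as the patent's multiplied clock suggests) is what a downstream modulator would consume.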
Liu, Shuo; Cui, Tie Jun; Zhang, Lei; Xu, Quan; Wang, Qiu; Wan, Xiang; Gu, Jian Qiang; Tang, Wen Xuan; Qing Qi, Mei; Han, Jia Guang; Zhang, Wei Li; Zhou, Xiao Yang; Cheng, Qiang
2016-10-01
The concept of the coding metasurface makes a link between physical metamaterial particles and digital codes, and hence it is possible to perform digital signal processing on the coding metasurface to realize unusual physical phenomena. Here, this study performs Fourier operations on coding metasurfaces and proposes a principle called scattering-pattern shift, based on the convolution theorem, which allows steering of the scattering pattern to an arbitrarily predesigned direction. Owing to the constant reflection amplitude of the coding particles, the required coding pattern can be simply achieved by the modulus of two coding matrices. This study demonstrates that the scattering patterns directly calculated from the coding pattern using the Fourier transform show excellent agreement with numerical simulations based on realistic coding structures, providing an efficient method for optimizing coding patterns to achieve predesigned scattering beams. The most important advantage of this approach over previous schemes for producing anomalous single-beam scattering is its flexible and continuous control of the beam toward arbitrary directions. This work opens a new route to study metamaterials from a fully digital perspective, predicting the possibility of combining conventional theorems in digital signal processing with the coding metasurface to realize more powerful manipulations of electromagnetic waves.
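A one-dimensional analogue of the scattering-pattern-shift principle can be sketched with a discrete Fourier transform; this is our own illustration of the underlying convolution-theorem identity, not the paper's metasurface solver, and the sequence length and shift are arbitrary:

```python
import cmath

# Multiplying a coding sequence by a linear phase exp(i*2*pi*k0*n/N) shifts its
# Fourier spectrum by k0 bins -- the 1-D analogue of adding a gradient coding
# pattern to steer a metasurface scattering beam.

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def add_gradient(code, k0):
    N = len(code)
    return [code[n] * cmath.exp(2j * cmath.pi * k0 * n / N) for n in range(N)]

code = [1.0] * 16                       # uniform pattern: spectrum peaks at bin 0
spectrum = dft(code)
peak = max(range(16), key=lambda k: abs(spectrum[k]))
steered = dft(add_gradient(code, 5))    # gradient of 5 bins
new_peak = max(range(16), key=lambda k: abs(steered[k]))
# the spectral peak moves from bin 0 to bin 5
```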
NASA Technical Reports Server (NTRS)
Silva, Walter A.
1993-01-01
A methodology for modeling nonlinear unsteady aerodynamic responses, for subsequent use in aeroservoelastic analysis and design, using the Volterra-Wiener theory of nonlinear systems is presented. The methodology is extended to predict nonlinear unsteady aerodynamic responses of arbitrary frequency. The Volterra-Wiener theory uses multidimensional convolution integrals to predict the response of nonlinear systems to arbitrary inputs. The CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) code is used to generate linear and nonlinear unit impulse responses that correspond to each of the integrals for a rectangular wing with a NACA 0012 section with pitch and plunge degrees of freedom. The computed kernels then are used to predict linear and nonlinear unsteady aerodynamic responses via convolution and compared to responses obtained using the CAP-TSD code directly. The results indicate that the approach can be used to predict linear unsteady aerodynamic responses exactly for any input amplitude or frequency at a significant cost savings. Convolution of the nonlinear terms results in nonlinear unsteady aerodynamic responses that compare reasonably well with those computed using the CAP-TSD code directly but at significant computational cost savings.
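The linear (first-order) part of the convolution approach can be sketched in a few lines: once a unit impulse response has been computed (by CAP-TSD in the article; here just a given list), the response to an arbitrary input follows by discrete convolution. The values below are placeholders, not aerodynamic data:

```python
# First-order (linear) Volterra prediction: y[n] = sum_k h[k] * u[n-k],
# where h is the unit impulse response and u the arbitrary input.

def convolve(h, u):
    y = [0.0] * (len(h) + len(u) - 1)
    for i, hv in enumerate(h):
        for j, uv in enumerate(u):
            y[i + j] += hv * uv
    return y
```

Convolving the impulse response with a unit impulse reproduces the impulse response itself, which is the basic identity the kernel-identification step relies on; the nonlinear terms in the article use higher-dimensional analogues of this integral.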
Talbot, Clifford B; Lagarto, João; Warren, Sean; Neil, Mark A A; French, Paul M W; Dunsby, Chris
2015-09-01
A correction is proposed to the Delta function convolution method (DFCM) for fitting a multiexponential decay model to time-resolved fluorescence decay data using a monoexponential reference fluorophore. A theoretical analysis of the discretised DFCM multiexponential decay function shows the presence of an extra exponential decay term with the same lifetime as the reference fluorophore, which we denote the residual reference component. This extra decay component arises as a result of the discretised convolution of one of the two terms in the modified model function required by the DFCM. The effect of the residual reference component becomes more pronounced when the fluorescence lifetime of the reference is longer than all of the individual components of the specimen under inspection and when the temporal sampling interval is not negligible compared to the quantity (τR^(-1) - τ^(-1))^(-1), where τR and τ are the fluorescence lifetimes of the reference and the specimen respectively. It is shown that the unwanted residual reference component results in systematic errors when fitting simulated data and that these errors are not present when the proposed correction is applied. The correction is also verified using real data obtained from experiment.
Coding/modulation trade-offs for Shuttle wideband data links
NASA Technical Reports Server (NTRS)
Batson, B. H.; Huth, G. K.; Trumpis, B. D.
1974-01-01
This paper describes various modulation and coding schemes which are potentially applicable to the Shuttle wideband data relay communications link. This link will be capable of accommodating up to 50 Mbps of scientific data and will be subject to a power constraint which forces the use of channel coding. Although convolutionally encoded coherent binary PSK is the tentative signal design choice for the wideband data relay link, FM techniques are of interest because of the associated hardware simplicity and because an FM system is already planned to be available for transmission of television via relay satellite to the ground. Binary and M-ary FSK are considered as candidate modulation techniques, and both coherent and noncoherent ground station detection schemes are examined. The potential use of convolutional coding is considered in conjunction with each of the candidate modulation techniques.
New Syndrome Decoding Techniques for the (n, K) Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
This paper presents a new syndrome decoding algorithm for the (n,k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3,1)CC.
Simplified Syndrome Decoding of (n, 1) Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
A new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different from and simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck is presented. The new algorithm uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). This set of Diophantine solutions is a coset of the CC space. A recursive or Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D) in this error coset. An example illustrating the new decoding algorithm is given for the binary nonsystematic (2,1)CC.
Boyd, Andrew D; ‘John’ Li, Jianrong; Kenost, Colleen; Joese, Binoy; Min Yang, Young; Kalagidis, Olympia A; Zenku, Ilir; Saner, Donald; Bahroos, Neil; Lussier, Yves A
2015-01-01
In the United States, International Classification of Disease Clinical Modification (ICD-9-CM, the ninth revision) diagnosis codes are commonly used to identify patient cohorts and to conduct financial analyses related to disease. In October 2015, the healthcare system of the United States will transition to ICD-10-CM (the tenth revision) diagnosis codes. One challenge posed to clinical researchers and other analysts is conducting diagnosis-related queries across datasets containing both coding schemes. Further, healthcare administrators will manage growth, trends, and strategic planning with these dually-coded datasets. The majority of the ICD-9-CM to ICD-10-CM translations are complex and nonreciprocal, creating convoluted representations and meanings. Similarly, mapping back from ICD-10-CM to ICD-9-CM is equally complex, yet different from mapping forward, as relationships are likewise nonreciprocal. Indeed, 10 of the 21 top clinical categories are complex as 78% of their diagnosis codes are labeled as “convoluted” by our analyses. Analysis and research related to external causes of morbidity, injury, and poisoning will face the greatest challenges due to 41 745 (90%) convolutions and a decrease in the number of codes. We created a web portal tool and translation tables to list all ICD-9-CM diagnosis codes related to the specific input of ICD-10-CM diagnosis codes and their level of complexity: “identity” (reciprocal), “class-to-subclass,” “subclass-to-class,” “convoluted,” or “no mapping.” These tools provide guidance on ambiguous and complex translations to reveal where reports or analyses may be challenging to impossible. Web portal: http://www.lussierlab.org/transition-to-ICD9CM/ Tables annotated with levels of translation complexity: http://www.lussierlab.org/publications/ICD10to9 PMID:25681260
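The five translation categories can be sketched as a classification over the bipartite structure of the forward mapping; the rules and the tiny example map below are our own illustrative reconstruction, not the authors' published algorithm or real ICD codes:

```python
# Classify each source code by the topology of its forward map (source -> set
# of target codes) and the derived backward map. Category names follow the
# paper; the decision rules here are a plausible simplification.

def classify(fwd):
    back = {}
    for c9, targets in fwd.items():
        for c10 in targets:
            back.setdefault(c10, set()).add(c9)
    labels = {}
    for c9, targets in fwd.items():
        if not targets:
            labels[c9] = "no mapping"
        elif len(targets) == 1 and back[next(iter(targets))] == {c9}:
            labels[c9] = "identity"          # one-to-one, reciprocal
        elif all(back[t] == {c9} for t in targets):
            labels[c9] = "class-to-subclass"  # one source fans out exclusively
        elif len(targets) == 1:
            labels[c9] = "subclass-to-class"  # several sources merge into one
        else:
            labels[c9] = "convoluted"         # many-to-many entanglement
    return labels
```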
Recent advances in coding theory for near error-free communications
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.
1991-01-01
Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.
Computer algorithm for coding gain
NASA Technical Reports Server (NTRS)
Dodd, E. E.
1974-01-01
Development of a computer algorithm for coding gain for use in an automated communications link design system. Using an empirical formula which defines coding gain as used in space communications engineering, an algorithm is constructed on the basis of available performance data for nonsystematic convolutional encoding with soft-decision (eight-level) Viterbi decoding.
Advanced coding and modulation schemes for TDRSS
NASA Technical Reports Server (NTRS)
Harrell, Linda; Kaplan, Ted; Berman, Ted; Chang, Susan
1993-01-01
This paper describes the performance of the Ungerboeck and pragmatic 8-Phase Shift Key (PSK) Trellis Code Modulation (TCM) coding techniques with and without a (255,223) Reed-Solomon outer code as they are used for Tracking Data and Relay Satellite System (TDRSS) S-Band and Ku-Band return services. The performance of these codes at high data rates is compared to uncoded Quadrature PSK (QPSK) and rate 1/2 convolutionally coded QPSK in the presence of Radio Frequency Interference (RFI), self-interference, and hardware distortions. This paper shows that the outer Reed-Solomon code is necessary to achieve a 10(exp -5) Bit Error Rate (BER) with an acceptable level of degradation in the presence of RFI. This paper also shows that the TCM codes with or without the Reed-Solomon outer code do not perform well in the presence of self-interference. In fact, the uncoded QPSK signal performs better than the TCM coded signal in the self-interference situation considered in this analysis. Finally, this paper shows that the E(sub b)/N(sub 0) degradation due to TDRSS hardware distortions is approximately 1.3 dB with a TCM coded signal or a rate 1/2 convolutionally coded QPSK signal and is 3.2 dB with an uncoded QPSK signal.
Capacity, cutoff rate, and coding for a direct-detection optical channel
NASA Technical Reports Server (NTRS)
Massey, J. L.
1980-01-01
It is shown that Pierce's pulse position modulation scheme with 2^L pulse positions used on a self-noise-limited direct-detection optical communication channel results in a 2^L-ary erasure channel that is equivalent to the parallel combination of L completely correlated binary erasure channels. The capacity of the full channel is the sum of the capacities of the component channels, but the cutoff rate of the full channel is shown to be much smaller than the sum of the cutoff rates. An interpretation of the cutoff rate is given that suggests a complexity advantage in coding separately on the component channels. It is shown that if short-constraint-length convolutional codes with Viterbi decoders are used on the component channels, then the performance and complexity compare favorably with the Reed-Solomon coding system proposed by McEliece for the full channel. The reasons for this unexpectedly fine performance by the convolutional code system are explored in detail, as are various facets of the channel structure.
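The capacity claim is easy to check numerically; a one-line sketch, assuming each of the L completely correlated binary erasure channels has erasure probability p and capacity (1 - p) bits per use:

```python
# Capacity of the 2^L-ary erasure channel as the sum of L binary erasure
# channel capacities (assumed common erasure probability p).

def ppm_erasure_capacity(L, p):
    """Total capacity in bits per channel use."""
    return L * (1.0 - p)
```

The cutoff rate, by contrast, does not add up this way, which is the complexity argument the abstract makes for coding the component channels separately.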
Coding gains and error rates from the Big Viterbi Decoder
NASA Technical Reports Server (NTRS)
Onyszchuk, I. M.
1991-01-01
A prototype hardware Big Viterbi Decoder (BVD) was completed for an experiment with the Galileo Spacecraft. Searches for new convolutional codes, studies of Viterbi decoder hardware designs and architectures, mathematical formulations, decompositions of the deBruijn graph into identical and hierarchical subgraphs, and very large scale integration (VLSI) chip design are just a few examples of tasks completed for this project. The BVD bit error rates (BER), measured from hardware and software simulations, are plotted as a function of bit signal-to-noise ratio E sub b/N sub 0 on the additive white Gaussian noise channel. Using the constraint length 15, rate 1/4, experimental convolutional code for the Galileo mission, the BVD gains 1.5 dB over the NASA standard (7,1/2) Maximum Likelihood Convolutional Decoder (MCD) at a BER of 0.005. At this BER, the same gain results when the (255,223) NASA standard Reed-Solomon decoder is used, which yields a word error rate of 2.1 x 10(exp -8) and a BER of 1.4 x 10(exp -9). The (15, 1/6) code to be used by the Cometary Rendezvous Asteroid Flyby (CRAF)/Cassini Missions yields 1.7 dB of coding gain. These gains are measured with respect to symbols input to the BVD and increase with decreasing BER. Also, 8-bit input symbol quantization makes the BVD resistant to demodulated signal-level variations. Because the (15,1/4) code requires higher bandwidth than the NASA (7,1/2) code, these gains are offset by about 0.1 dB of expected additional receiver losses. Coding gains of several decibels are possible by compressing all spacecraft data.
DSN telemetry system performance using a maximum likelihood convolutional decoder
NASA Technical Reports Server (NTRS)
Benjauthrit, B.; Kemp, R. P.
1977-01-01
Results are described of telemetry system performance testing using DSN equipment and a Maximum Likelihood Convolutional Decoder (MCD) for code rates 1/2 and 1/3, constraint length 7 and special test software. The test results confirm the superiority of the rate 1/3 over that of the rate 1/2. The overall system performance losses determined at the output of the Symbol Synchronizer Assembly are less than 0.5 dB for both code rates. Comparison of the performance is also made with existing mathematical models. Error statistics of the decoded data are examined. The MCD operational threshold is found to be about 1.96 dB.
New syndrome decoding techniques for the (n, k) convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3, 1)CC. Previously announced in STAR as N83-34964
NASA Technical Reports Server (NTRS)
Feria, Y.; Cheung, K.-M.
1995-01-01
In a time-varying signal-to-noise ratio (SNR) environment, symbol rate is often changed to maximize data return. However, the symbol-rate change has some undesirable effects, such as changing the transmission bandwidth and perhaps causing the receiver symbol loop to lose lock temporarily, thus losing some data. In this article, we are proposing an alternate way of varying the data rate without changing the symbol rate and, therefore, the transmission bandwidth. The data rate change is achieved in a seamless fashion by puncturing the convolutionally encoded symbol stream to adapt to the changing SNR environment. We have also derived an exact expression to enumerate the number of distinct puncturing patterns. To demonstrate this seamless rate change capability, we searched for good puncturing patterns for the Galileo (14,1/4) convolutional code and changed the data rates by using the punctured codes to match the Galileo SNR profile of November 9, 1997. We show that this scheme reduces the symbol-rate changes from nine to two and provides a comparable data return in a day and a higher symbol SNR during most of the day.
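The rate-change mechanism can be sketched as follows; the mask below is an arbitrary illustration, not one of the Galileo puncturing patterns found in the article:

```python
# Puncturing sketch: symbols of a low-rate mother code are deleted according
# to a periodic mask, raising the effective code rate without changing the
# symbol clock or transmission bandwidth.

def puncture(symbols, mask):
    """Keep symbols[i] where mask[i % len(mask)] == 1."""
    return [s for i, s in enumerate(symbols) if mask[i % len(mask)]]

def effective_rate(mother_rate_inv, mask):
    """Rate after puncturing a rate 1/mother_rate_inv mother code.

    The mask is assumed to span a whole number of input-bit periods.
    """
    period_bits = len(mask) // mother_rate_inv  # input bits per mask period
    kept = sum(mask)                            # symbols surviving per period
    return period_bits / kept
```

For example, puncturing a rate-1/4 stream with a mask keeping 6 of every 8 symbols yields an effective rate of 2/6 = 1/3, while the symbol rate on the channel stays fixed.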
A method of estimating GPS instrumental biases with a convolution algorithm
NASA Astrophysics Data System (ADS)
Li, Qi; Ma, Guanyi; Lu, Weijun; Wan, Qingtao; Fan, Jiangtao; Wang, Xiaolan; Li, Jinghua; Li, Changhua
2018-03-01
This paper presents a method of deriving the instrumental differential code biases (DCBs) of GPS satellites and dual frequency receivers. Considering that the total electron content (TEC) varies smoothly over a small area, one ionospheric pierce point (IPP) and four more nearby IPPs were selected to build an equation with a convolution algorithm. In addition, unknown DCB parameters were arranged into a set of equations with one day of GPS observations, assuming that DCBs do not vary within a day. Then, the DCBs of satellites and receivers were determined by solving the equation set with the least-squares fitting technique. The performance of this method is examined by applying it to 361 days in 2014 using the observation data from 1311 GPS Earth Observation Network (GEONET) receivers. The result was cross-compared with the DCB estimated by the mesh method and the IONEX products from the Center for Orbit Determination in Europe (CODE). The DCB values derived by this method agree with those of the mesh method and the CODE products, with biases of 0.091 ns and 0.321 ns, respectively. The convolution method's accuracy and stability were quite good and showed improvements over the mesh method.
NASA Astrophysics Data System (ADS)
Feria, Y.; Cheung, K.-M.
1994-10-01
In a time-varying signal-to-noise ratio (SNR) environment, symbol rate is often changed to maximize data return. However, the symbol-rate change has some undesirable effects, such as changing the transmission bandwidth and perhaps causing the receiver symbol loop to lose lock temporarily, thus losing some data. In this article, we are proposing an alternate way of varying the data rate without changing the symbol rate and, therefore, the transmission bandwidth. The data rate change is achieved in a seamless fashion by puncturing the convolutionally encoded symbol stream to adapt to the changing SNR environment. We have also derived an exact expression to enumerate the number of distinct puncturing patterns. To demonstrate this seamless rate-change capability, we searched for good puncturing patterns for the Galileo (14,1/4) convolutional code and changed the data rates by using the punctured codes to match the Galileo SNR profile of November 9, 1997. We show that this scheme reduces the symbol-rate changes from nine to two and provides a comparable data return in a day and a higher symbol SNR during most of the day.
NASA Astrophysics Data System (ADS)
Feria, Y.; Cheung, K.-M.
1995-02-01
In a time-varying signal-to-noise ratio (SNR) environment, symbol rate is often changed to maximize data return. However, the symbol-rate change has some undesirable effects, such as changing the transmission bandwidth and perhaps causing the receiver symbol loop to lose lock temporarily, thus losing some data. In this article, we are proposing an alternate way of varying the data rate without changing the symbol rate and, therefore, the transmission bandwidth. The data rate change is achieved in a seamless fashion by puncturing the convolutionally encoded symbol stream to adapt to the changing SNR environment. We have also derived an exact expression to enumerate the number of distinct puncturing patterns. To demonstrate this seamless rate change capability, we searched for good puncturing patterns for the Galileo (14,1/4) convolutional code and changed the data rates by using the punctured codes to match the Galileo SNR profile of November 9, 1997. We show that this scheme reduces the symbol-rate changes from nine to two and provides a comparable data return in a day and a higher symbol SNR during most of the day.
Throughput Optimization Via Adaptive MIMO Communications
2006-05-30
End-to-end MATLAB packet simulation platform. * Low-density parity-check code (LDPC). * Field trials with Silvus DSP MIMO testbed. * High mobility... We incorporate advanced LDPC (low-density parity-check) codes. Realizing that the power of LDPC codes comes at the price of decoder complexity, we also... Channel coding: binary convolutional code or LDPC; packet length: 0 to 2^16 - 1 bytes; coding rate: 1/2, 2/3, 3/4, 5/6; MIMO channel training length: 0 to 4 symbols.
125Mbps ultra-wideband system evaluation for cortical implant devices.
Luo, Yi; Winstead, Chris; Chiang, Patrick
2012-01-01
This paper evaluates the performance of a 125Mbps Impulse Radio Ultra-Wideband (IR-UWB) system for cortical implant devices using a low-Q inductive coil link operating in the near-field domain. We examine design tradeoffs between transmitted signal amplitude, reliability, noise and clock jitter. The IR-UWB system is modeled using measured parameters from a reported UWB transceiver implemented in 90nm-CMOS technology. Non-optimized inductive coupling coils with low Q value for near-field data transmission are modeled in order to build a full channel from the transmitter (Tx) to the receiver (Rx). On-off keying (OOK) modulation is used together with a low-complexity convolutional error correcting code. The simulation results show that even though the low-Q coils decrease the amplitude of the received pulses, the UWB system can still achieve acceptable performance when error correction is used. These results predict that UWB is a good candidate for delivering high data rates in cortical implant devices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olama, Mohammed M; Matalgah, Mustafa M; Bobrek, Miljko
Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).
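A sketch of the Hamming error-correction layer applied to the small encrypted portion; the abstract does not specify code parameters, so the classic Hamming(7,4) construction below is an assumption:

```python
# Hamming(7,4) sketch: encodes 4 data bits into 7 codeword bits and corrects
# any single bit error. Bit layout (1-based): p1 p2 d1 p3 d2 d3 d4.

def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4   # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4   # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1          # correct the single error
    return [c[2], c[4], c[5], c[6]]   # extract the data bits
```

Because only the short encrypted portion is coded, the added redundancy (7/4 expansion on that fragment alone) barely affects the frame's overall bit rate, which is the trade the paper exploits.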
The Communication Link and Error ANalysis (CLEAN) simulator
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.; Crowe, Shane
1993-01-01
During the period July 1, 1993 through December 30, 1993, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed and include: (1) soft decision Viterbi decoding; (2) node synchronization for the soft decision Viterbi decoder; (3) insertion/deletion error programs; (4) convolutional encoder; (5) programs to investigate new convolutional codes; (6) pseudo-noise sequence generator; (7) soft decision data generator; (8) RICE compression/decompression (integration of RICE code generated by Pen-Shu Yeh at Goddard Space Flight Center); (9) Markov chain channel modeling; (10) percent-complete indicator when a program is executed; (11) header documentation; and (12) help utility. The CLEAN simulation tool is now capable of simulating a very wide variety of satellite communication links including the TDRSS downlink with RFI. The RICE compression/decompression schemes allow studies to be performed on error effects on RICE decompressed data. The Markov chain modeling programs allow channels with memory to be simulated. Memory results from filtering, forward error correction encoding/decoding, differential encoding/decoding, channel RFI, nonlinear transponders, and many other satellite system processes. Besides the development of the simulation, a study was performed to determine whether the PCI provides a performance improvement for the TDRSS downlink. RFI with several duty cycles exists on the TDRSS downlink. We conclude that the PCI does not improve performance for any of these interferers except possibly one which occurs for the TDRS East. Therefore, the usefulness of the PCI is a function of the time spent transmitting data to the WSGT through the TDRS East transponder.
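Markov-chain channel modeling of the kind listed above is commonly done with a two-state Gilbert-Elliott model; the transition and error probabilities below are invented for illustration and are not CLEAN's parameters:

```python
import random

# Gilbert-Elliott sketch: a "good" state with a low bit-error probability and
# a "bad" (burst) state with a high one, switching by a two-state Markov chain.
# This reproduces the channel memory that a memoryless BER figure cannot.

def gilbert_elliott(n, p_gb=0.01, p_bg=0.2, e_good=1e-4, e_bad=0.3, seed=1):
    rng = random.Random(seed)   # seeded for reproducible simulation runs
    state, errors = "good", []
    for _ in range(n):
        if state == "good":
            errors.append(1 if rng.random() < e_good else 0)
            if rng.random() < p_gb:
                state = "bad"
        else:
            errors.append(1 if rng.random() < e_bad else 0)
            if rng.random() < p_bg:
                state = "good"
    return errors               # 1 = bit error, 0 = bit received correctly
```

The resulting error sequence is bursty: errors cluster while the chain sits in the bad state, which is exactly the memory effect (from filtering, RFI, nonlinear transponders, and so on) the abstract says such programs are meant to simulate.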
NASA Technical Reports Server (NTRS)
Divsalar, D.; Pollara, F.
1995-01-01
In this article, we design new turbo codes that can achieve near-Shannon-limit performance. The design criterion for random interleavers is based on maximizing the effective free distance of the turbo code, i.e., the minimum output weight of codewords due to weight-2 input sequences. An upper bound on the effective free distance of a turbo code is derived. This upper bound can be achieved if the feedback connection of convolutional codes uses primitive polynomials. We review multiple turbo codes (parallel concatenation of q convolutional codes), which increase the so-called 'interleaving gain' as q and the interleaver size increase, and a suitable decoder structure derived from an approximation to the maximum a posteriori probability decision rule. We develop new rate 1/3, 2/3, 3/4, and 4/5 constituent codes to be used in the turbo encoder structure. These codes, with 2 to 32 states, are designed by using primitive polynomials. The resulting turbo codes have rates b/n (b = 1, 2, 3, 4 and n = 2, 3, 4, 5, 6), and include random interleavers for better asymptotic performance. These codes are suitable for deep-space communications with low throughput and for near-Earth communications where high throughput is desirable. The performance of these codes is within 1 dB of the Shannon limit at a bit-error rate of 10(exp -6) for throughputs from 1/15 up to 4 bits/s/Hz.
Frame synchronization for the Galileo code
NASA Technical Reports Server (NTRS)
Arnold, S.; Swanson, L.
1991-01-01
Results are reported on the performance of the Deep Space Network's frame synchronizer for the (15,1/4) convolutional code after Viterbi decoding. The threshold is found that optimizes the probability of acquiring true sync within four frames using a strategy that requires next frame verification.
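The sync strategy can be illustrated with a simple correlation search; the 20-bit marker and threshold below are hypothetical, not the DSN's actual frame marker or its optimized threshold:

```python
# Frame-sync sketch: slide a known marker over the decoded bit stream and
# declare sync at the offset whose bitwise agreement count crosses a threshold.

MARKER = [1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1]  # hypothetical

def find_sync(bits, marker, threshold):
    best = None
    for off in range(len(bits) - len(marker) + 1):
        agree = sum(b == m for b, m in zip(bits[off:off + len(marker)], marker))
        if agree >= threshold and (best is None or agree > best[1]):
            best = (off, agree)
    return best  # (offset, agreement count) or None if nothing crosses threshold
```

Lowering the threshold raises the chance of acquiring true sync in a noisy stream but also the false-alarm rate; choosing the value that maximizes the probability of correct acquisition within a few frames is the optimization the abstract reports, with a verification pass on the next frame guarding against false locks.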
ANNA: A Convolutional Neural Network Code for Spectroscopic Analysis
NASA Astrophysics Data System (ADS)
Lee-Brown, Donald; Anthony-Twarog, Barbara J.; Twarog, Bruce A.
2018-01-01
We present ANNA, a Python-based convolutional neural network code for the automated analysis of stellar spectra. ANNA provides a flexible framework that allows atmospheric parameters such as temperature and metallicity to be determined with accuracies comparable to those of established but less efficient techniques. ANNA performs its parameterization extremely quickly; typically several thousand spectra can be analyzed in less than a second. Additionally, the code incorporates features which greatly speed up the training process necessary for the neural network to measure spectra accurately, resulting in a tool that can easily be run on a single desktop or laptop computer. Thus, ANNA is useful in an era when spectrographs increasingly have the capability to collect dozens to hundreds of spectra each night. This talk will cover the basic features included in ANNA and demonstrate its performance in two use cases: an open cluster abundance analysis involving several hundred spectra, and a metal-rich field star study. Applicability of the code to large survey datasets will also be discussed.
ICD-10 procedure codes produce transition challenges.
Boyd, Andrew D; Li, Jianrong 'John'; Kenost, Colleen; Zaim, Samir Rachid; Krive, Jacob; Mittal, Manish; Satava, Richard A; Burton, Michael; Smith, Jacob; Lussier, Yves A
2018-01-01
The transition of procedure coding from ICD-9-CM-Vol-3 to ICD-10-PCS has generated problems for the medical community at large resulting from the lack of clarity required to integrate two non-congruent coding systems. We hypothesized that quantifying these issues with network topology analyses offers a better understanding of the issues, and therefore we developed solutions (online tools) to empower hospital administrators and researchers to address these challenges. Five topologies were identified: "identity" (I), "class-to-subclass" (C2S), "subclass-to-class" (S2C), "convoluted" (C), and "no mapping" (NM). The procedure codes in the 2010 Illinois Medicaid dataset (3,290 patients, 116 institutions) were categorized as C=55%, C2S=40%, I=3%, NM=2%, and S2C=1%. The majority of the problematic and ambiguous mappings (convoluted) pertained to operations in ophthalmology, cardiology, urology, gyneco-obstetrics, and dermatology. Finally, the algorithms were expanded into a user-friendly tool to identify problematic topologies and specify lists of procedural codes utilized by medical professionals and researchers for mitigating error-prone translations, simplifying research, and improving quality. http://www.lussiergroup.org/transition-to-ICD10PCS
Research on Formation of Microsatellite Communication with Genetic Algorithm
Wu, Guoqiang; Bai, Yuguang; Sun, Zhaowei
2013-01-01
For the formation of three microsatellites which fly in the same orbit and perform three-dimensional solid mapping of the terrain, this paper proposes an optimizing design method for the space circular formation order based on an improved genetic algorithm and provides an intersatellite direct spread spectrum communication system. The calculation of the LEO formation-flying intersatellite links is guided by the special requirements of formation-flying microsatellite intersatellite links, and the transmitter power is confirmed through simulation. The space circular formation order design method based on the improved genetic algorithm can keep the formation order steady for a long time under various perturbing forces. The intersatellite direct spread spectrum communication system is also provided. It is found that, when the distance is 1 km and the data rate is 1 Mbps, the output waveform matches the input waveform well, and LDPC coding can improve the communication performance: the error-correcting capability of the (512, 256) LDPC code is distinctly better than that of the (2, 1, 7) convolutional code. The designed system can satisfy the communication requirements of microsatellites, so the presented method provides a significant theoretical foundation for formation flying and intersatellite communication. PMID:24078796
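For readers unfamiliar with the (2, 1, 7) notation, the sketch below shows a minimal rate-1/2, constraint-length-7 convolutional encoder. The generator polynomials 171/133 (octal) are an illustrative assumption (the common NASA-standard pair); the paper does not state which generators were used:

```python
# Minimal sketch of a rate-1/2, constraint-length-7 convolutional encoder.
# Generators 171/133 (octal) are assumed for illustration only.
G1, G2 = 0o171, 0o133  # generator polynomials, constraint length K = 7

def conv_encode(bits, k=7):
    """Encode a bit list at rate 1/2: two output bits per input bit."""
    state = 0
    out = []
    for b in bits + [0] * (k - 1):              # flush with K-1 zero tail bits
        state = ((state << 1) | b) & ((1 << k) - 1)
        out.append(bin(state & G1).count("1") & 1)  # parity of tapped bits
        out.append(bin(state & G2).count("1") & 1)
    return out

coded = conv_encode([1, 0, 1, 1])
print(len(coded))  # 20: 2 * (4 data + 6 tail) coded bits
```

The (512, 256) LDPC code it is compared against works on 256-bit blocks instead of this sliding window, which is the structural difference behind the performance gap the abstract reports.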
NASA Technical Reports Server (NTRS)
Mccallister, R. D.; Crawford, J. J.
1981-01-01
It is pointed out that the NASA 30/20 GHz program will place in geosynchronous orbit a technically advanced communication satellite which can process time-division multiple access (TDMA) information bursts with a data throughput in excess of 4 Gbps. To guarantee acceptable data quality during periods of signal attenuation it will be necessary to provide a significant forward error correction (FEC) capability. Convolutional decoding (utilizing maximum-likelihood techniques) was identified as the most attractive FEC strategy. Design trade-offs regarding a maximum-likelihood convolutional decoder (MCD) in a single-chip CMOS implementation are discussed.
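The maximum-likelihood (Viterbi) decoding identified above can be illustrated on a toy code. The sketch below uses a small rate-1/2, constraint-length-3 code (generators 7 and 5 octal), not the flight design, purely to show the add-compare-select principle:

```python
# Hard-decision Viterbi decoding for a toy rate-1/2, K=3 convolutional code.
# Generators 7 and 5 (octal) are an illustrative choice, not the MCD's code.
G = (0b111, 0b101)  # constraint length K = 3, so 4 trellis states

def encode(bits):
    state, out = 0, []
    for b in bits + [0, 0]:                      # two tail bits flush the encoder
        state = ((state << 1) | b) & 0b111
        out += [bin(state & g).count("1") & 1 for g in G]
    return out

def viterbi(received):
    INF = float("inf")
    metric = [0.0, INF, INF, INF]                # survivor metrics per state
    paths = [[], [], [], []]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric, new_paths = [INF] * 4, [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            for b in (0, 1):                     # add-compare-select
                full = ((s << 1) | b) & 0b111    # hypothetical register contents
                expected = [bin(full & g).count("1") & 1 for g in G]
                dist = sum(x != y for x, y in zip(expected, r))
                ns = full & 0b11                 # next state = newest 2 bits
                if metric[s] + dist < new_metric[ns]:
                    new_metric[ns] = metric[s] + dist
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(4), key=lambda s: metric[s])
    return paths[best][:-2]                      # strip the tail bits

msg = [1, 0, 1, 1, 0, 0, 1]
coded = encode(msg)
coded[3] ^= 1                                    # inject one channel error
print(viterbi(coded) == msg)  # True: the single error is corrected
```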
Gallmeier, F. X.; Iverson, E. B.; Lu, W.; ...
2016-01-08
Neutron transport simulation codes are an indispensable tool used for the design and construction of modern neutron scattering facilities and instrumentation. It has become increasingly clear that some neutron instrumentation has started to exploit physics that is not well-modelled by the existing codes. Particularly, the transport of neutrons through single crystals and across interfaces in MCNP(X), Geant4 and other codes ignores scattering from oriented crystals and refractive effects, and yet these are essential ingredients for the performance of monochromators and ultra-cold neutron transport respectively (to mention but two examples). In light of these developments, we have extended the MCNPX code to include a single-crystal neutron scattering model and neutron reflection/refraction physics. Furthermore, we have also generated silicon scattering kernels for single crystals of definable orientation with respect to an incoming neutron beam. As a first test of these new tools, we have chosen to model the recently developed convoluted moderator concept, in which a moderating material is interleaved with layers of perfect crystals to provide an exit path for neutrons moderated to energies below the crystal's Bragg cutoff at locations deep within the moderator. Studies of simple cylindrical convoluted moderator systems of 100 mm diameter and composed of polyethylene and single crystal silicon were performed with the upgraded MCNPX code and reproduced the magnitude of effects seen in experiments compared to homogeneous moderator systems. Applying different material properties for refraction and reflection, and by replacing the silicon in the models with voids, we show that the emission enhancements seen in recent experiments are primarily caused by the transparency of the silicon/void layers. Finally, the convoluted moderator experiments described by Iverson et al. were simulated and we find satisfactory agreement between the measurements and the results of simulations performed using the tools we have developed.
DeepMoon: Convolutional neural network trainer to identify moon craters
NASA Astrophysics Data System (ADS)
Silburt, Ari; Zhu, Chenchong; Ali-Dib, Mohamad; Menou, Kristen; Jackson, Alan
2018-05-01
DeepMoon trains a convolutional neural net using data derived from a global digital elevation map (DEM) and catalog of craters to recognize craters on the Moon. The TensorFlow-based pipeline code is divided into three parts. The first generates a set of images of the Moon randomly cropped from the DEM, with corresponding crater positions and radii. The second trains a convnet using these data, and the third validates the convnet's predictions.
Enhanced decoding for the Galileo S-band mission
NASA Technical Reports Server (NTRS)
Dolinar, S.; Belongie, M.
1993-01-01
A coding system under consideration for the Galileo S-band low-gain antenna mission is a concatenated system using a variable redundancy Reed-Solomon outer code and a (14,1/4) convolutional inner code. The 8-bit Reed-Solomon symbols are interleaved to depth 8, and the eight 255-symbol codewords in each interleaved block have redundancies 64, 20, 20, 20, 64, 20, 20, and 20, respectively (or equivalently, the codewords have 191, 235, 235, 235, 191, 235, 235, and 235 8-bit information symbols, respectively). This concatenated code is to be decoded by an enhanced decoder that utilizes a maximum likelihood (Viterbi) convolutional decoder; a Reed-Solomon decoder capable of processing erasures; an algorithm for declaring erasures in undecoded codewords based on known erroneous symbols in neighboring decodable words; a second Viterbi decoding operation (redecoding) constrained to follow only paths consistent with the known symbols from previously decodable Reed-Solomon codewords; and a second Reed-Solomon decoding operation using the output from the Viterbi redecoder and additional erasure declarations to the extent possible. It is estimated that this code and decoder can achieve a decoded bit error rate of 1 x 10^-7 at a concatenated code signal-to-noise ratio of 0.76 dB. By comparison, a threshold of 1.17 dB is required for a baseline coding system consisting of the same (14,1/4) convolutional code, a (255,223) Reed-Solomon code with constant redundancy 32 also interleaved to depth 8, a one-pass Viterbi decoder, and a Reed-Solomon decoder incapable of declaring or utilizing erasures. The relative gain of the enhanced system is thus 0.41 dB. It is predicted from analysis based on an assumption of infinite interleaving that the coding gain could be further improved by approximately 0.2 dB if four stages of Viterbi decoding and four levels of Reed-Solomon redundancy are permitted.
Confirmation of this effect and specification of the optimum four-level redundancy profile for depth-8 interleaving are currently under way.
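The advantage of an errors-and-erasures Reed-Solomon decoder over an errors-only one comes down to a single inequality: a code with minimum distance d = n - k + 1 decodes successfully whenever 2e + f < d, so each declared erasure costs half as much redundancy as an undetected error. A minimal check:

```python
# Errors-and-erasures decodability of an (n, k) Reed-Solomon code:
# with minimum distance d = n - k + 1, decoding succeeds when
# 2*errors + erasures < d.
def rs_decodable(n, k, errors, erasures):
    return 2 * errors + erasures < (n - k + 1)

# The baseline (255, 223) code: 16 errors correctable, or up to 32 erasures.
print(rs_decodable(255, 223, 16, 0))   # True
print(rs_decodable(255, 223, 17, 0))   # False
print(rs_decodable(255, 223, 10, 12))  # True: 20 + 12 = 32 < 33
```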
Patient-specific dosimetry based on quantitative SPECT imaging and 3D-DFT convolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akabani, G.; Hawkins, W.G.; Eckblade, M.B.
1999-01-01
The objective of this study was to validate the use of a 3-D discrete Fourier Transform (3D-DFT) convolution method to carry out the dosimetry for I-131 for soft tissues in radioimmunotherapy procedures. To validate this convolution method, mathematical and physical phantoms were used as a basis of comparison with Monte Carlo transport (MCT) calculations which were carried out using the EGS4 system code. The mathematical phantom consisted of a sphere containing uniform and nonuniform activity distributions. The physical phantom consisted of a cylinder containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the Circular Harmonic Transform (CHT) algorithm.
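The 3D-DFT convolution approach can be sketched in a few lines: the activity map recovered from quantitative SPECT is convolved with a dose point kernel via FFTs. The kernel below is an arbitrary illustrative falloff, not the actual I-131 kernel used in the study:

```python
import numpy as np

# Sketch of voxel dosimetry by 3D FFT convolution: dose = activity (*) kernel.
# The 1/(1+r^2) kernel is a toy placeholder, not a physical I-131 kernel.
shape = (16, 16, 16)
z, y, x = np.indices(shape)
c = np.array(shape) // 2
r = np.sqrt((z - c[0])**2 + (y - c[1])**2 + (x - c[2])**2)
kernel = 1.0 / (1.0 + r**2)            # toy dose point kernel, peak = 1.0
kernel = np.fft.ifftshift(kernel)      # move the kernel peak to index (0,0,0)

activity = np.zeros(shape)
activity[8, 8, 8] = 1.0                # a single hot voxel

# Circular convolution via the 3D DFT, O(N log N) instead of O(N^2):
dose = np.real(np.fft.ifftn(np.fft.fftn(activity) * np.fft.fftn(kernel)))
print(round(float(dose[8, 8, 8]), 3))  # 1.0: a point source reproduces the kernel
```

For a unit point source the resulting dose map is just the kernel re-centred on the source voxel, which is the standard sanity check before applying the method to measured activity distributions.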
Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images*
Botta, F; Mairani, A; Hobbs, R F; Vergara Gil, A; Pacilio, M; Parodi, K; Cremonesi, M; Coca Pérez, M A; Di Dia, A; Ferrari, M; Guerriero, F; Battistoni, G; Pedroli, G; Paganelli, G; Torres Aroche, L A; Sgouros, G
2014-01-01
Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed for the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10^8 primary particles a 2% average difference with respect to the kernel convolution method was achieved; such difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3-4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air.
The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image-based dosimetry in nuclear medicine. PMID:24200697
A brain MRI bias field correction method created in the Gaussian multi-scale space
NASA Astrophysics Data System (ADS)
Chen, Mingsheng; Qin, Mingxin
2017-07-01
A pre-processing step is needed to correct for the bias field signal before submitting corrupted MR images to image-processing algorithms such as segmentation. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by the convolution of the inhomogeneous MR image with a two-dimensional Gaussian function. In this multi-scale space, the method retrieves the image details from the difference of the original image and the convolution image. Then, it obtains an image whose inhomogeneity is eliminated by the weighted sum of the image details in each layer of the space. Next, the bias field-corrected MR image is retrieved after a γ (gamma) correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 MRI and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
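The multi-scale detail extraction described above can be sketched as follows. The per-layer weights and sigma values here are arbitrary illustrative choices, not the paper's actual scheme:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable 2D Gaussian convolution with edge padding ('same' output)."""
    radius = int(3 * sigma)
    xs = np.arange(-radius, radius + 1)
    g = np.exp(-xs**2 / (2.0 * sigma**2))
    g /= g.sum()
    pad = np.pad(img, radius, mode="edge")
    tmp = np.apply_along_axis(lambda row: np.convolve(row, g, "valid"), 1, pad)
    return np.apply_along_axis(lambda col: np.convolve(col, g, "valid"), 0, tmp)

def remove_bias(img, sigmas=(2, 4, 8), weights=(0.5, 0.3, 0.2)):
    """Weighted sum of per-scale details (image minus its blurred version);
    the smooth bias field is suppressed while structure is retained.
    Weights/sigmas are illustrative assumptions, not the paper's values."""
    return sum(w * (img - gaussian_blur(img, s)) for w, s in zip(weights, sigmas))

# A perfectly flat image contains no detail at any scale:
print(np.allclose(remove_bias(np.ones((32, 32))), 0))  # True
```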
Cho, Nathan; Tsiamas, Panagiotis; Velarde, Esteban; Tryggestad, Erik; Jacques, Robert; Berbeco, Ross; McNutt, Todd; Kazanzides, Peter; Wong, John
2018-05-01
The Small Animal Radiation Research Platform (SARRP) has been developed for conformal microirradiation with on-board cone beam CT (CBCT) guidance. The graphics processing unit (GPU)-accelerated Superposition-Convolution (SC) method for dose computation has been integrated into the treatment planning system (TPS) for SARRP. This paper describes the validation of the SC method for the kilovoltage energy by comparing with EBT2 film measurements and Monte Carlo (MC) simulations. MC data were simulated by the EGSnrc code with 3 × 10^8 to 1.5 × 10^9 histories, while 21 photon energy bins were used to model the 220 kVp x-rays in the SC method. Various types of phantoms including plastic water, cork, graphite, and aluminum were used to encompass the range of densities of mouse organs. For the comparison, percentage depth dose (PDD) of SC, MC, and film measurements were analyzed. Cross beam (x,y) dosimetric profiles of SC and film measurements are also presented. Correction factors (CFz) to convert SC to MC dose-to-medium are derived from the SC and MC simulations in homogeneous phantoms of aluminum and graphite to improve the estimation. The SC method produces dose values that are within 5% of film measurements and MC simulations in the flat regions of the profile. The dose is less accurate at the edges, due to factors such as geometric uncertainties of film placement and difference in dose calculation grids. The GPU-accelerated Superposition-Convolution dose computation method was successfully validated with EBT2 film measurements and MC calculations. The SC method offers much faster computation speed than MC and provides calculations of both dose-to-water in medium and dose-to-medium in medium. © 2018 American Association of Physicists in Medicine.
Modulation and coding for fast fading mobile satellite communication channels
NASA Technical Reports Server (NTRS)
Mclane, P. J.; Wittke, P. H.; Smith, W. S.; Lee, A.; Ho, P. K. M.; Loo, C.
1988-01-01
The performance of Gaussian baseband filtered minimum shift keying (GMSK) using differential detection in fast Rician fading, with a novel treatment of the inherent intersymbol interference (ISI) leading to an exact solution is discussed. Trellis-coded differentially coded phase shift keying (DPSK) with a convolutional interleaver is considered. The channel is the Rician Channel with the line-of-sight component subject to a lognormal transformation.
NASA Technical Reports Server (NTRS)
Cartier, D. E.
1973-01-01
A convolutional coding theory is given for the IME and the Heliocentric spacecraft. The amount of coding gain needed by the mission is determined. Recommendations are given for an encoder/decoder system to provide the gain along with an evaluation of the impact of the system on the space network in terms of costs and complexity.
Rock images classification by using deep convolution neural network
NASA Astrophysics Data System (ADS)
Cheng, Guojian; Guo, Wenhui
2017-08-01
Granularity analysis is one of the most essential issues in rock identification under the microscope. To improve the efficiency and accuracy of traditional manual work, a convolutional neural network based method is proposed for granularity analysis of thin section images, which extracts features from image samples and builds a classifier to recognize the granularity of input image samples. 4800 samples from the Ordos basin are used for experiments in the HSV, YCbCr and RGB colour spaces respectively. On the test dataset, the correct classification rate in the RGB colour space is 98.5%, and the results in the HSV and YCbCr colour spaces are also credible. The results show that the convolutional neural network can classify the rock images with high reliability.
A Novel Deep Convolutional Neural Network for Spectral-Spatial Classification of Hyperspectral Data
NASA Astrophysics Data System (ADS)
Li, N.; Wang, C.; Zhao, H.; Gong, X.; Wang, D.
2018-04-01
Spatial and spectral information are obtained simultaneously by hyperspectral remote sensing, and joint extraction of this information is one of the most important methods for hyperspectral image classification. In this paper, a novel deep convolutional neural network (CNN) is proposed, which extracts the spectral-spatial information of hyperspectral images effectively. The proposed model not only learns sufficient knowledge from a limited number of samples, but also has powerful generalization ability. The proposed framework, based on three-dimensional convolution, can extract the spectral-spatial features of labeled samples effectively. Though CNNs have shown robustness to distortion, they cannot extract features of different scales through the traditional pooling layer, which has only one size of pooling window. Hence, spatial pyramid pooling (SPP) is introduced into the three-dimensional local convolutional filters for hyperspectral classification. Experimental results with a widely used hyperspectral remote sensing dataset show that the proposed model provides competitive performance.
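Spatial pyramid pooling's key property, a fixed-length output regardless of input size, can be sketched in 2D (the paper applies it inside three-dimensional convolutional filters; this simplified illustration assumes the feature map is at least as large as the finest pooling grid):

```python
import numpy as np

def spp(feature_map, levels=(1, 2, 4)):
    """Max-pool an H x W map into fixed 1x1, 2x2 and 4x4 grids, then
    concatenate, yielding a length sum(l*l) vector for any input size."""
    h, w = feature_map.shape
    out = []
    for l in levels:
        hs = np.linspace(0, h, l + 1).astype(int)  # bin edges along rows
        ws = np.linspace(0, w, l + 1).astype(int)  # bin edges along columns
        for i in range(l):
            for j in range(l):
                out.append(feature_map[hs[i]:hs[i + 1], ws[j]:ws[j + 1]].max())
    return np.array(out)

# Two different input sizes, one output length (1 + 4 + 16 = 21):
print(spp(np.random.rand(13, 17)).shape)  # (21,)
print(spp(np.random.rand(8, 8)).shape)    # (21,)
```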
75 FR 77796 - Airworthiness Directives; Saab AB, Saab Aerosystems Model SAAB 2000 Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-14
... of the horizontal stabilizer. Corrosion damage in these areas, if not detected and corrected, can... of the horizontal stabilizer. Corrosion damage in these areas, if not detected and corrected, can... convoluted tubing on the harness, applying corrosion prevention compound to the inspected area, making sure...
NASA Astrophysics Data System (ADS)
Wang, Jinliang; Wu, Xuejiao
2010-11-01
Geometric correction of imagery is a basic application of remote sensing technology, and its precision directly impacts the accuracy and reliability of subsequent applications. The accuracy of geometric correction depends on many factors, including the correction model, the accuracy of the reference map, the number of ground control points (GCPs) and their spatial distribution, and the resampling method. An ETM+ image of the Kunming Dianchi Lake Basin and 1:50000 geographical maps were used to compare different correction methods. The results showed that: (1) The correction errors were more than one pixel, and some of them several pixels, when the polynomial model was used; the correction accuracy was not stable when the Delaunay model was used; the correction errors were less than one pixel when the collinearity equation was used. (2) 6, 9, 25 and 35 GCPs were selected randomly for geometric correction using the polynomial correction model; the best result was obtained with 25 GCPs. (3) The image corrected using the nearest-neighbor method had the best contrast and the fastest resampling, but the continuity of pixel grey values was not very good; the image corrected using the cubic convolution method had the worst contrast and the longest computation time. According to the above results, the best overall result was obtained by resampling with the bilinear method.
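Bilinear resampling, the method that gave the best overall result above, interpolates each output pixel from its four nearest input pixels. A minimal sketch:

```python
# Bilinear interpolation of a 2D grid at fractional coordinates:
# a weighted average of the four surrounding pixels.
def bilinear(img, y, x):
    y0, x0 = int(y), int(x)
    y1 = min(y0 + 1, len(img) - 1)       # clamp at the image border
    x1 = min(x0 + 1, len(img[0]) - 1)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * img[y0][x0] + (1 - dy) * dx * img[y0][x1]
            + dy * (1 - dx) * img[y1][x0] + dy * dx * img[y1][x1])

grid = [[0.0, 10.0], [20.0, 30.0]]
print(bilinear(grid, 0.5, 0.5))  # 15.0, the average of the four neighbours
```

Nearest-neighbor keeps original pixel values (hence the good contrast but blocky grey-value continuity noted above), while cubic convolution weighs a 4 x 4 neighbourhood, which explains its higher computation time.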
Investigation of the Use of Erasures in a Concatenated Coding Scheme
NASA Technical Reports Server (NTRS)
Kwatra, S. C.; Marriott, Philip J.
1997-01-01
A new method for declaring erasures in a concatenated coding scheme is investigated. This method is used with the rate 1/2 K = 7 convolutional code and the (255, 223) Reed Solomon code. Errors and erasures Reed Solomon decoding is used. The erasure method proposed uses a soft output Viterbi algorithm and information provided by decoded Reed Solomon codewords in a deinterleaving frame. The results show that a gain of 0.3 dB is possible using a minimum amount of decoding trials.
NASA Astrophysics Data System (ADS)
Vilardy, Juan M.; Giacometto, F.; Torres, C. O.; Mattos, L.
2011-01-01
The two-dimensional Fast Fourier Transform (FFT 2D) is an essential tool in two-dimensional discrete signal analysis and processing, which enables a large number of applications. This article describes the synthesis in VHDL code of the FFT 2D with fixed-point binary representation using the Simulink HDL Coder programming tool from Matlab, showing a quick and easy way to handle overflow and underflow, the creation of registers, adders and multipliers for complex data in VHDL, and the generation of test benches for verification of the generated code in the ModelSim tool. The main objective of developing the hardware architecture of the FFT 2D is the subsequent realization of the following operations applied to images: frequency filtering, convolution and correlation. The description and synthesis of the hardware architecture uses the XC3S1200E Spartan 3E family FPGA from Xilinx.
Applications of deep convolutional neural networks to digitized natural history collections.
Schuettpelz, Eric; Frandsen, Paul B; Dikow, Rebecca B; Brown, Abel; Orli, Sylvia; Peters, Melinda; Metallo, Adam; Funk, Vicki A; Dorr, Laurence J
2017-01-01
Natural history collections contain data that are critical for many scientific endeavors. Recent efforts in mass digitization are generating large datasets from these collections that can provide unprecedented insight. Here, we present examples of how deep convolutional neural networks can be applied in analyses of imaged herbarium specimens. We first demonstrate that a convolutional neural network can detect mercury-stained specimens across a collection with 90% accuracy. We then show that such a network can correctly distinguish two morphologically similar plant families 96% of the time. Discarding the most challenging specimen images increases accuracy to 94% and 99%, respectively. These results highlight the importance of mass digitization and deep learning approaches and reveal how they can together deliver powerful new investigative tools. PMID:29200929
Moody, Daniela; Wohlberg, Brendt
2018-01-02
An approach for land cover classification, seasonal and yearly change detection and monitoring, and identification of changes in man-made features may use a clustering of sparse approximations (CoSA) on sparse representations in learned dictionaries. The learned dictionaries may be derived using efficient convolutional sparse coding to build multispectral or hyperspectral, multiresolution dictionaries that are adapted to regional satellite image data. Sparse image representations of images over the learned dictionaries may be used to perform unsupervised k-means clustering into land cover categories. The clustering process behaves as a classifier in detecting real variability. This approach may combine spectral and spatial textural characteristics to detect geologic, vegetative, hydrologic, and man-made features, as well as changes in these features over time.
A digital communications system for manned spaceflight applications.
NASA Technical Reports Server (NTRS)
Batson, B. H.; Moorehead, R. W.
1973-01-01
A highly efficient, all-digital communications signal design employing convolutional coding and PN spectrum spreading is described for two-way transmission of voice and data between a manned spacecraft and ground. Variable-slope delta modulation is selected for analog/digital conversion of the voice signal, and a convolutional decoder utilizing the Viterbi decoding algorithm is selected for use at each receiving terminal. A PN spread spectrum technique is implemented to protect against multipath effects and to reduce the energy density (per unit bandwidth) impinging on the earth's surface to a value within the guidelines adopted by international agreement. Performance predictions are presented for transmission via a TDRS (tracking and data relay satellite) system and for direct transmission between the spacecraft and earth. Hardware estimates are provided for a flight-qualified communications system employing the coded digital signal design.
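Variable-slope delta modulation encodes one bit per voice sample and enlarges the step size on runs of identical bits so the estimate can track steep slopes. The sketch below is a toy CVSD-style codec with an arbitrary adaptation factor of 1.5; the syllabic filtering and companding constants of an actual flight design are more involved:

```python
# Toy continuously-variable-slope delta (CVSD-style) codec sketch.
# Adaptation factor 1.5 and minimum step 1.0 are illustrative assumptions.
def cvsd_encode(samples):
    est, step, bits, last = 0.0, 1.0, [], None
    for s in samples:
        b = 1 if s >= est else 0             # one bit per sample
        bits.append(b)
        step = step * 1.5 if b == last else max(step / 1.5, 1.0)  # slope adapts
        est += step if b else -step
        last = b
    return bits

def cvsd_decode(bits):
    """Mirror the encoder's estimate update exactly, from the bits alone."""
    est, step, out, last = 0.0, 1.0, [], None
    for b in bits:
        step = step * 1.5 if b == last else max(step / 1.5, 1.0)
        est += step if b else -step
        out.append(est)
        last = b
    return out

sig = [0, 2, 5, 9, 14, 14, 13, 9, 4, 0]
bits = cvsd_encode(sig)
rec = cvsd_decode(bits)
print(len(bits), len(rec))  # 10 10: one bit per sample, one sample per bit
```

Because the decoder repeats the encoder's step adaptation deterministically, no step-size side information needs to be transmitted, which is what makes the one-bit-per-sample rate possible.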
tf_unet: Generic convolutional neural network U-Net implementation in Tensorflow
NASA Astrophysics Data System (ADS)
Akeret, Joel; Chang, Chihway; Lucchi, Aurelien; Refregier, Alexandre
2016-11-01
tf_unet mitigates radio frequency interference (RFI) signals in radio data using a special type of Convolutional Neural Network, the U-Net, that enables the classification of clean signal and RFI signatures in 2D time-ordered data acquired from a radio telescope. The code is not tied to a specific segmentation and can be used, for example, to detect radio frequency interference (RFI) in radio astronomy or galaxies and stars in widefield imaging data. This U-Net implementation can outperform classical RFI mitigation algorithms.
An adaptive distributed data aggregation based on RCPC for wireless sensor networks
NASA Astrophysics Data System (ADS)
Hua, Guogang; Chen, Chang Wen
2006-05-01
One of the most important design issues in wireless sensor networks is energy efficiency, and data aggregation has a significant impact on it. With massive deployment of sensor nodes and a limited energy supply, data aggregation has been considered an essential paradigm for data collection in sensor networks. Recently, distributed source coding has been demonstrated to possess several advantages for data aggregation in wireless sensor networks: it is able to encode sensor data at a lower bit rate without direct communication among sensor nodes. To ensure reliable and high-throughput transmission of the aggregated data, we propose in this research progressive transmission and decoding of Rate-Compatible Punctured Convolutional (RCPC) coded data aggregation with distributed source coding. Our proposed rate-1/2 RSC codes with the Viterbi algorithm for distributed source coding guarantee that, even without any correlation between the data, the decoder can always decode the data correctly without wasting energy. The proposed approach achieves two kinds of adaptivity in data aggregation for wireless sensor networks. First, the RCPC coding facilitates adaptive compression corresponding to the correlation of the sensor data: when the data correlation is high, a higher compression ratio can be achieved; otherwise, a lower compression ratio is achieved. Second, the data aggregation is adaptively accumulated: there is no waste of energy in the transmission, and even if there is no correlation among the data, the energy consumed is at the same level as raw data collection. Experimental results have shown that the proposed distributed data aggregation based on RCPC is able to achieve high-throughput, low-energy data collection for wireless sensor networks.
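Rate-compatible puncturing, the mechanism behind RCPC codes, deletes mother-code output bits according to a fixed pattern; the receiver re-inserts erasures at the deleted positions before Viterbi decoding. A sketch, using an illustrative keep-3-of-4 pattern that raises the rate from 1/2 to 2/3 (the paper does not specify its patterns):

```python
# Rate-compatible puncturing sketch: delete mother-code bits by a pattern
# (1 = transmit, 0 = puncture). Pattern (1,1,1,0) keeps 3 of every 4 bits,
# raising the rate from 1/2 to 2/3.
def puncture(coded_bits, pattern=(1, 1, 1, 0)):
    return [b for i, b in enumerate(coded_bits) if pattern[i % len(pattern)]]

def depuncture(punctured, n, pattern=(1, 1, 1, 0)):
    """Re-insert erasures (None) at punctured positions for the decoder."""
    out, it = [], iter(punctured)
    for i in range(n):
        out.append(next(it) if pattern[i % len(pattern)] else None)
    return out

mother = [1, 0, 1, 1, 0, 0, 1, 0]   # 8 bits from the rate-1/2 mother encoder
tx = puncture(mother)
print(len(tx))                      # 6 bits sent: 4 info bits / 6 = rate 2/3
rx = depuncture(tx, len(mother))
print(rx)                           # [1, 0, 1, None, 0, 0, 1, None]
```

Rate compatibility means the higher-rate transmissions are prefixes of the lower-rate ones, so a receiver can request just the punctured bits incrementally, which is what enables the progressive transmission scheme described above.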
NASA Technical Reports Server (NTRS)
Rajpal, Sandeep; Rhee, Do Jun; Lin, Shu
1997-01-01
The first part of this paper presents a simple and systematic technique for constructing multidimensional M-ary phase shift keying (MPSK) trellis coded modulation (TCM) codes. The construction is based on a multilevel concatenation approach in which binary convolutional codes with good free branch distances are used as the outer codes and block MPSK modulation codes are used as the inner codes (or the signal spaces). Conditions on the phase invariance of these codes are derived and a multistage decoding scheme for these codes is proposed. The proposed technique can be used to construct good codes for both additive white Gaussian noise (AWGN) and fading channels, as is shown in the second part of this paper.
Coding for spread spectrum packet radios
NASA Technical Reports Server (NTRS)
Omura, J. K.
1980-01-01
Packet radios are often expected to operate in a radio communication network environment where there tends to be man made interference signals. To combat such interference, spread spectrum waveforms are being considered for some applications. The use of convolutional coding with Viterbi decoding to further improve the performance of spread spectrum packet radios is examined. At 0.00001 bit error rates, improvements in performance of 4 db to 5 db can easily be achieved with such coding without any change in data rate nor spread spectrum bandwidth. This coding gain is more dramatic in an interference environment.
Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L
2018-04-01
In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets a new state of the art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
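The atrous convolution described above can be sketched in one dimension (a minimal illustration; the paper applies it inside 2-D convolutional layers). The filter taps are spaced `rate` samples apart, enlarging the receptive field without adding parameters:

```python
import numpy as np

def atrous_conv1d(x, w, rate):
    """1-D atrous (dilated) correlation, 'valid' mode: the taps of w
    are spaced `rate` samples apart along x."""
    span = (len(w) - 1) * rate  # receptive field minus one sample
    return np.array([
        sum(w[k] * x[i + k * rate] for k in range(len(w)))
        for i in range(len(x) - span)
    ])
```

With `rate=1` this reduces to an ordinary sliding filter; with `rate=2` the same 3-tap filter covers a 5-sample receptive field.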
The effects of kinesio taping on the color intensity of superficial skin hematomas: A pilot study.
Vercelli, Stefano; Colombo, Claudio; Tolosa, Francesca; Moriondo, Andrea; Bravini, Elisabetta; Ferriero, Giorgio; Sartorio, Francesco
2017-01-01
To analyze the effects of kinesio taping (KT), applied with three different strains that did or did not induce the formation of skin creases (called convolutions), on the color intensity of post-surgical superficial hematomas. Single-blind paired study. Rehabilitation clinic. A convenience sample of 13 inpatients with post-surgical superficial hematomas. The tape was applied for 24 consecutive hours. Three tails of KT were randomly applied with different degrees of strain: none (SN); light (SL); and full longitudinal stretch (SF). We expected to obtain correct formation of convolutions with SL, some convolutions with SN, and no convolutions with SF. The change in color intensity of hematomas was measured by means of polar coordinates CIE L*a*b* using a validated and standardized digital image system. Applying KT to hematomas did not significantly change the color intensity in the central area under the tape (p > 0.05). There was a significant treatment effect (p < 0.05) under the edges of the tape, independently of the formation of convolutions (p > 0.05). The changes observed along the edges of the tape could be related to the formation of a pressure gradient between the KT and the adjacent area, but were not dependent on the formation of skin convolutions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Constructing LDPC Codes from Loop-Free Encoding Modules
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher; Thorpe, Jeremy; Andrews, Kenneth
2009-01-01
A method of constructing certain low-density parity-check (LDPC) codes by use of relatively simple loop-free coding modules has been developed. The subclasses of LDPC codes to which the method applies include accumulate-repeat-accumulate (ARA) codes, accumulate-repeat-check-accumulate codes, and the codes described in Accumulate-Repeat-Accumulate-Accumulate Codes (NPO-41305), NASA Tech Briefs, Vol. 31, No. 9 (September 2007), page 90. All of the affected codes can be characterized as serial/parallel (hybrid) concatenations of such relatively simple modules as accumulators, repetition codes, differentiators, and punctured single-parity check codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. These codes can also be characterized as hybrid turbo-like codes that have projected graph or protograph representations (for example see figure); these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The present method comprises two related submethods for constructing LDPC codes from simple loop-free modules with circulant permutations. The first submethod is an iterative encoding method based on the erasure-decoding algorithm. The computations required by this method are well organized because they involve a parity-check matrix having a block-circulant structure. The second submethod involves the use of block-circulant generator matrices. The encoders of this method are very similar to those of recursive convolutional codes. Some encoders according to this second submethod have been implemented in a small field-programmable gate array that operates at a speed of 100 megasymbols per second.
By use of density evolution (a computational-simulation technique for analyzing performances of LDPC codes), it has been shown through some examples that as the block size goes to infinity, low iterative decoding thresholds close to channel capacity limits can be achieved for the codes of the type in question having low maximum variable node degrees. The decoding thresholds in these examples are lower than those of the best-known unstructured irregular LDPC codes constrained to have the same maximum node degrees. Furthermore, the present method enables the construction of codes of any desired rate with thresholds that stay uniformly close to their respective channel capacity thresholds.
On the application of under-decimated filter banks
NASA Technical Reports Server (NTRS)
Lin, Y.-P.; Vaidyanathan, P. P.
1994-01-01
Maximally decimated filter banks have been extensively studied in the past. A filter bank is said to be under-decimated if the number of channels is more than the decimation ratio in the subbands. A maximally decimated filter bank is well known for its application in subband coding. Another application of maximally decimated filter banks is in block filtering. Convolution through block filtering has the advantages that parallelism is increased and data are processed at a lower rate. However, the computational complexity is comparable to that of direct convolution. More recently, another type of filter bank convolver has been developed. In this scheme, the convolution is performed in the subbands. Quantization and bit allocation of subband signals are based on signal variance, as in subband coding. Consequently, for a fixed rate, the result of convolution is more accurate than is direct convolution. This type of filter bank convolver also enjoys the advantages of block filtering, parallelism, and a lower working rate. Nevertheless, like block filtering, there is no computational saving. In this article, under-decimated systems are introduced to solve the problem. The new system is decimated only by half the number of channels. Two types of filter banks can be used in the under-decimated system: the discrete Fourier transform (DFT) filter banks and the cosine modulated filter banks. They are well known for their low complexity. In both cases, the system is approximately alias free, and the overall response is equivalent to a tunable multilevel filter. Properties of the DFT filter banks and the cosine modulated filter banks can be exploited to simultaneously achieve parallelism, computational saving, and a lower working rate. Furthermore, for both systems, the implementation cost of the analysis or synthesis bank is comparable to that of one prototype filter plus some low-complexity modulation matrices. 
The individual analysis and synthesis filters have complex coefficients in the DFT filter banks but have real coefficients in the cosine modulated filter banks.
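The block filtering mentioned in the abstract above (computing a convolution block by block, so data can be processed in parallel at a lower rate) can be sketched with a simple overlap-add scheme. This is a generic illustration of block filtering, not the subband convolver proposed in the paper:

```python
import numpy as np

def block_convolve(x, h, block_len):
    """Overlap-add block filtering: convolve x with h by processing
    x in blocks of block_len samples and summing the overlapping tails."""
    y = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(x), block_len):
        seg = x[start:start + block_len]
        # each block's partial convolution overlaps the next by len(h)-1
        y[start:start + len(seg) + len(h) - 1] += np.convolve(seg, h)
    return y
```

The result is identical to direct convolution for any block length, which matches the abstract's point that block filtering gains parallelism but not computational savings.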
Encoders for block-circulant LDPC codes
NASA Technical Reports Server (NTRS)
Andrews, Kenneth; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
In this paper, we present two encoding methods for block-circulant LDPC codes. The first is an iterative encoding method based on the erasure decoding algorithm, and the computations required are well organized due to the block-circulant structure of the parity check matrix. The second method uses block-circulant generator matrices, and the encoders are very similar to those for recursive convolutional codes. Some encoders of the second type have been implemented in a small Field Programmable Gate Array (FPGA) and operate at 100 Msymbols/second.
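As a minimal illustration of the second encoding method (a toy sketch, not the FPGA encoder described above): a binary circulant matrix is fully determined by its first row, and multiplying a message by it amounts to summing cyclic shifts modulo 2.

```python
import numpy as np

def circulant(first_row):
    """Binary circulant matrix whose k-th row is the first row
    cyclically shifted right by k positions."""
    return np.array([np.roll(first_row, k) for k in range(len(first_row))])

def encode_circulant(msg_bits, first_row):
    """Parity bits from a circulant generator submatrix (arithmetic mod 2)."""
    G = circulant(first_row)
    return (np.array(msg_bits) @ G) % 2
```

Because every row is a shift of the first, a hardware encoder only needs a shift register and XORs, which is why such encoders resemble those of recursive convolutional codes.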
1996-01-01
Figure 3-1 shows pseudo-code for a test bench with two application nodes. The outer test bench wrapper consists of three functions: pipeline_init, pipeline..., exit_func (Figure 3-1, Test Bench Pseudo Code). The application wrapper is contained in the pipeline routine and similarly consists of an
High data rate coding for the space station telemetry links.
NASA Technical Reports Server (NTRS)
Lumb, D. R.; Viterbi, A. J.
1971-01-01
Coding systems for high data rates were examined from the standpoint of potential application in space-station telemetry links. Approaches considered included convolutional codes with sequential, Viterbi, and cascaded-Viterbi decoding. It was concluded that a high-speed (40 Mbps) sequential decoding system best satisfies the requirements for the assumed growth potential and specified constraints. Trade-off studies leading to this conclusion are reviewed, and some sequential (Fano) algorithm improvements are discussed, together with real-time simulation results.
Viterbi decoder node synchronization losses in the Reed-Solomon/Viterbi concatenated channel
NASA Technical Reports Server (NTRS)
Deutsch, L. J.; Miller, R. L.
1982-01-01
The Viterbi decoders currently used by the Deep Space Network (DSN) employ an algorithm for maintaining node synchronization that significantly degrades at bit signal-to-noise ratios (SNRs) of below 2.0 dB. In a recent report by the authors, it was shown that the telemetry receiving system, which uses a convolutionally encoded downlink, will suffer losses of 0.85 dB and 1.25 dB respectively at Voyager 2 Uranus and Neptune encounters. This report extends the results of that study to a concatenated (255,223) Reed-Solomon/(7, 1/2) convolutionally coded channel, by developing a new radio loss model for the concatenated channel. It is shown here that losses due to improper node synchronization of 0.57 dB at Uranus and 1.0 dB at Neptune can be expected if concatenated coding is used along with an array of one 64-meter and three 34-meter antennas.
Context-Dependent Piano Music Transcription With Convolutional Sparse Coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cogliati, Andrea; Duan, Zhiyao; Wohlberg, Brendt
This study presents a novel approach to automatic transcription of piano music in a context-dependent setting. This approach employs convolutional sparse coding to approximate the music waveform as the summation of piano note waveforms (dictionary elements) convolved with their temporal activations (onset transcription). The piano note waveforms are pre-recorded for the specific piano to be transcribed in the specific environment. During transcription, the note waveforms are fixed and their temporal activations are estimated and post-processed to obtain the pitch and onset transcription. This approach works in the time domain, models temporal evolution of piano notes, and estimates pitches and onsets simultaneously in the same framework. Finally, experiments show that it significantly outperforms a state-of-the-art music transcription method trained in the same context-dependent setting, in both transcription accuracy and time precision, in various scenarios including synthetic, anechoic, noisy, and reverberant environments.
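The synthesis model underlying convolutional sparse coding (the signal approximated as a sum of dictionary atoms convolved with their activations) can be sketched as follows. Estimating the activations, which the paper does by sparse optimization, is not shown:

```python
import numpy as np

def reconstruct(dictionary, activations, length):
    """Convolutional synthesis model: signal ~ sum_m d_m * a_m,
    where * is linear convolution, truncated to `length` samples."""
    y = np.zeros(length)
    for d, a in zip(dictionary, activations):
        y += np.convolve(a, d)[:length]
    return y
```

An impulse of amplitude 2 in an activation simply places a scaled copy of the corresponding note waveform at that onset time, which is why the activations directly encode pitches and onsets.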
Error-Trellis Construction for Convolutional Codes Using Shifted Error/Syndrome-Subsequences
NASA Astrophysics Data System (ADS)
Tajima, Masato; Okino, Koji; Miyagoshi, Takashi
In this paper, we extend the conventional error-trellis construction for convolutional codes to the case where a given check matrix H(D) has a factor D^l in some column (row). In the first case, there is a possibility that the size of the state space can be reduced using shifted error-subsequences, whereas in the second case, the size of the state space can be reduced using shifted syndrome-subsequences. The construction presented in this paper is based on the adjoint-obvious realization of the corresponding syndrome former HT(D). In the case where all the columns and rows of H(D) are delay free, the proposed construction reduces to the conventional one of Schalkwijk et al. We also show that the proposed construction can equally realize the state-space reduction shown by Ariel et al. Moreover, we clarify the difference between their construction and ours using examples.
Advanced imaging communication system
NASA Technical Reports Server (NTRS)
Hilbert, E. E.; Rice, R. F.
1977-01-01
Key elements of the system are imaging and nonimaging sensors, a data compressor/decompressor, an interleaved Reed-Solomon block coder, a convolutional-encoded/Viterbi-decoded telemetry channel, and Reed-Solomon decoding. Data compression provides efficient representation of sensor data, and channel coding improves reliability of data transmission.
NASA Technical Reports Server (NTRS)
Lin, Shu; Rhee, Dojun
1996-01-01
This paper is concerned with the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel. In the construction of multilevel concatenated modulation codes, block modulation codes are used as the inner codes. Various types of codes (block or convolutional, binary or nonbinary) are considered as the outer codes. In particular, we focus on the special case in which Reed-Solomon (RS) codes are used as the outer codes. For this special case, a systematic algebraic technique for constructing q-level concatenated block modulation codes is proposed. Codes have been constructed for certain specific values of q and compared with single-level concatenated block modulation codes using the same inner codes. A multilevel closest coset decoding scheme for these codes is proposed.
Looe, Hui Khee; Delfs, Björn; Poppinga, Daniela; Harder, Dietrich; Poppe, Björn
2017-06-21
The distortion of detector reading profiles across photon beams in the presence of magnetic fields is a developing subject of clinical photon-beam dosimetry. The underlying modification by the Lorentz force of a detector's lateral dose response function (the convolution kernel transforming the true cross-beam dose profile in water into the detector reading profile) is here studied for the first time. The three basic convolution kernels, the photon fluence response function, the dose deposition kernel, and the lateral dose response function, of wall-less cylindrical detectors filled with water of low, normal and enhanced density are shown by Monte Carlo simulation to be distorted in the prevailing direction of the Lorentz force. The asymmetric shape changes of these convolution kernels in a water medium and in magnetic fields of up to 1.5 T are confined to the lower millimetre range, and they depend on the photon beam quality, the magnetic flux density and the detector's density. The impact of this distortion on detector reading profiles is demonstrated using a narrow photon beam profile. For clinical applications it appears as favourable that the magnetic flux density dependent distortion of the lateral dose response function, as far as secondary electron transport is concerned, vanishes in the case of water-equivalent detectors of normal water density. By means of secondary electron history backtracing, the spatial distribution of the photon interactions giving rise either directly to secondary electrons or to scattered photons further downstream producing secondary electrons which contribute to the detector's signal, and their lateral shift due to the Lorentz force, is elucidated. Electron history backtracing also serves to illustrate the correct treatment of the influences of the Lorentz force in the EGSnrc Monte Carlo code applied in this study.
Constructing fine-granularity functional brain network atlases via deep convolutional autoencoder.
Zhao, Yu; Dong, Qinglin; Chen, Hanbo; Iraji, Armin; Li, Yujie; Makkie, Milad; Kou, Zhifeng; Liu, Tianming
2017-12-01
State-of-the-art functional brain network reconstruction methods such as independent component analysis (ICA) or sparse coding of whole-brain fMRI data can effectively infer many thousands of volumetric brain network maps from a large number of human brains. However, due to the variability of individual brain networks and the large scale of such networks needed for statistically meaningful group-level analysis, it is still a challenging and open problem to derive group-wise common networks as network atlases. Inspired by the superior spatial pattern description ability of the deep convolutional neural networks (CNNs), a novel deep 3D convolutional autoencoder (CAE) network is designed here to extract spatial brain network features effectively, based on which an Apache Spark enabled computational framework is developed for fast clustering of larger number of network maps into fine-granularity atlases. To evaluate this framework, 10 resting state networks (RSNs) were manually labeled from the sparsely decomposed networks of Human Connectome Project (HCP) fMRI data and 5275 network training samples were obtained, in total. Then the deep CAE models are trained by these functional networks' spatial maps, and the learned features are used to refine the original 10 RSNs into 17 network atlases that possess fine-granularity functional network patterns. Interestingly, it turned out that some manually mislabeled outliers in training networks can be corrected by the deep CAE derived features. More importantly, fine granularities of networks can be identified and they reveal unique network patterns specific to different brain task states. By further applying this method to a dataset of mild traumatic brain injury study, it shows that the technique can effectively identify abnormal small networks in brain injury patients in comparison with controls. 
In general, our work presents a promising deep learning and big data analysis solution for modeling functional connectomes, with fine granularities, based on fMRI data. Copyright © 2017 Elsevier B.V. All rights reserved.
The Reed-Solomon encoders: Conventional versus Berlekamp's architecture
NASA Technical Reports Server (NTRS)
Perlman, M.; Lee, J. J.
1982-01-01
Concatenated coding, with a convolutional inner code and a Reed-Solomon (RS) outer code, was adopted for spacecraft telemetry on interplanetary space missions. Conventional RS encoders are compared with those that incorporate two architectural features which approximately halve the number of multiplications of a set of fixed arguments by any RS codeword symbol. The fixed arguments and the RS symbols are taken from a nonbinary finite field. Each set of multiplications is bit-serially performed and completed during one (bit-serial) symbol shift. All firmware employed by conventional RS encoders is eliminated.
The Composite Analytic and Simulation Package or RFI (CASPR) on a coded channel
NASA Technical Reports Server (NTRS)
Freedman, Jeff; Berman, Ted
1993-01-01
CASPR is an analysis package which determines the performance of a coded signal in the presence of Radio Frequency Interference (RFI) and Additive White Gaussian Noise (AWGN). It can analyze a system with convolutional coding, Reed-Solomon (RS) coding, or a concatenation of the two. The signals can either be interleaved or non-interleaved. The model measures the system performance in terms of either the Eb/N0 required to achieve a given Bit Error Rate (BER) or the BER needed for a constant Eb/N0.
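For orientation, the textbook relation between Eb/N0 and BER for uncoded BPSK in AWGN is BER = Q(sqrt(2 Eb/N0)); this is only the uncoded baseline, not the CASPR model itself, which adds coding and RFI:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bpsk_ber(ebno_db):
    """Uncoded BPSK bit error rate in AWGN at a given Eb/N0 in dB."""
    ebno = 10 ** (ebno_db / 10)  # dB to linear
    return q_func(math.sqrt(2 * ebno))
```

At Eb/N0 = 0 dB this gives a BER of roughly 0.08, and around 9.6 dB it drops near 1e-5; a coded system reaches the same BER at a lower Eb/N0, the difference being the coding gain.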
Fast large scale structure perturbation theory using one-dimensional fast Fourier transforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmittfull, Marcel; Vlah, Zvonimir; McDonald, Patrick
The usual fluid equations describing the large-scale evolution of mass density in the universe can be written as local in the density, velocity divergence, and velocity potential fields. As a result, the perturbative expansion in small density fluctuations, usually written in terms of convolutions in Fourier space, can be written as a series of products of these fields evaluated at the same location in configuration space. Based on this, we establish a new method to numerically evaluate the 1-loop power spectrum (i.e., Fourier transform of the 2-point correlation function) with one-dimensional fast Fourier transforms. This is exact and a few orders of magnitude faster than previously used numerical approaches. Numerical results of the new method are in excellent agreement with the standard quadrature integration method. This fast model evaluation can in principle be extended to higher loop order where existing codes become painfully slow. Our approach follows by writing higher order corrections to the 2-point correlation function as, e.g., the correlation between two second-order fields or the correlation between a linear and a third-order field. These are then decomposed into products of correlations of linear fields and derivatives of linear fields. In conclusion, the method can also be viewed as evaluating three-dimensional Fourier space convolutions using products in configuration space, which may also be useful in other contexts where similar integrals appear.
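The key identity exploited above, that a Fourier-space convolution equals a configuration-space product, can be checked numerically in one dimension with NumPy's FFT conventions; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
a = rng.standard_normal(n)
b = rng.standard_normal(n)

# Direct circular convolution of the two spectra in Fourier space.
A, B = np.fft.fft(a), np.fft.fft(b)
conv = np.array([sum(A[m] * B[(k - m) % n] for m in range(n))
                 for k in range(n)])

# Convolution theorem: the same result is n * FFT(a * b), i.e. the
# O(n^2) Fourier-space convolution becomes an O(n log n) FFT of a
# pointwise configuration-space product.
via_product = n * np.fft.fft(a * b)
```

The factor n comes from NumPy's unnormalized forward transform; the speed-up quoted in the abstract is exactly this replacement of convolution integrals by products plus FFTs.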
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Cabral, Hermano A.; He, Jiali
1997-01-01
Bootstrap Hybrid Decoding (BHD) (Jelinek and Cocke, 1971) is a coding/decoding scheme that adds extra redundancy to a set of convolutionally encoded codewords and uses this redundancy to provide reliability information to a sequential decoder. Theoretical results indicate that bit error probability performance (BER) of BHD is close to that of Turbo-codes, without some of their drawbacks. In this report we study the use of the Multiple Stack Algorithm (MSA) (Chevillat and Costello, Jr., 1977) as the underlying sequential decoding algorithm in BHD, which makes possible an iterative version of BHD.
Chromatin accessibility prediction via a hybrid deep convolutional neural network.
Liu, Qiao; Xia, Fei; Yin, Qijin; Jiang, Rui
2018-03-01
A majority of known genetic variants associated with human-inherited diseases lie in non-coding regions that lack adequate interpretation, making it indispensable to systematically discover functional sites at the whole genome level and precisely decipher their implications in a comprehensive manner. Although computational approaches have been complementing high-throughput biological experiments towards the annotation of the human genome, it still remains a big challenge to accurately annotate regulatory elements in the context of a specific cell type via automatic learning of the DNA sequence code from large-scale sequencing data. Indeed, the development of an accurate and interpretable model to learn the DNA sequence signature and further enable the identification of causative genetic variants has become essential in both genomic and genetic studies. We proposed Deopen, a hybrid framework mainly based on a deep convolutional neural network, to automatically learn the regulatory code of DNA sequences and predict chromatin accessibility. In a series of comparison with existing methods, we show the superior performance of our model in not only the classification of accessible regions against background sequences sampled at random, but also the regression of DNase-seq signals. Besides, we further visualize the convolutional kernels and show the match of identified sequence signatures and known motifs. We finally demonstrate the sensitivity of our model in finding causative noncoding variants in the analysis of a breast cancer dataset. We expect to see wide applications of Deopen with either public or in-house chromatin accessibility data in the annotation of the human genome and the identification of non-coding variants associated with diseases. Deopen is freely available at https://github.com/kimmo1019/Deopen. ruijiang@tsinghua.edu.cn. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. 
All rights reserved.
ID card number detection algorithm based on convolutional neural network
NASA Astrophysics Data System (ADS)
Zhu, Jian; Ma, Hanjie; Feng, Jie; Dai, Leiyan
2018-04-01
In this paper, a new detection algorithm based on a Convolutional Neural Network is presented in order to realize fast and convenient ID information extraction in multiple scenarios. The algorithm uses a mobile device running the Android operating system to locate and extract the ID number. It exploits the particular color distribution of the ID card to select an appropriate channel component; applies image thresholding, noise processing, and morphological processing to binarize the image; and uses image rotation and the projection method for horizontal correction when the image is tilted. Finally, single characters are extracted by the projection method and recognized using the Convolutional Neural Network. Tests show that processing a single ID number image, from extraction to recognition, takes about 80 ms with an accuracy of about 99%, so the algorithm can be applied in real production and everyday environments.
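The projection step for extracting single characters can be sketched as follows (an illustrative reimplementation, not the authors' code): in a binarized digit strip, columns containing no foreground pixels separate adjacent glyphs.

```python
import numpy as np

def segment_columns(binary_img):
    """Split a binarized character-strip image into single glyphs by
    vertical projection: all-zero columns mark the gaps between them.
    Returns a list of (start, end) column ranges."""
    profile = binary_img.sum(axis=0)  # foreground count per column
    segments, start = [], None
    for x, v in enumerate(profile):
        if v > 0 and start is None:
            start = x                 # glyph begins
        elif v == 0 and start is not None:
            segments.append((start, x))
            start = None              # glyph ends at a blank column
    if start is not None:
        segments.append((start, len(profile)))
    return segments
```

Each returned column range can then be cropped and fed to the classifier network.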
NASA Technical Reports Server (NTRS)
1981-01-01
A hardware simulation system for integrated convolutional coding/symbol interleaving and integrated symbol deinterleaving/Viterbi decoding is described. Validation of the performance of the TDRSS S-band return link with BPSK modulation, operating in a pulsed RFI environment, is included. The system consists of three components: the Fast Linkabit Error Rate Tester (FLERT), the Transition Probability Generator (TPG), and a modified LV7017B that includes rate 1/3 capability as well as a periodic interleaver/deinterleaver. Operating and maintenance manuals for each of these units are included.
Performance of DPSK with convolutional encoding on time-varying fading channels
NASA Technical Reports Server (NTRS)
Mui, S. Y.; Modestino, J. W.
1977-01-01
The bit error probability performance of a differentially-coherent phase-shift keyed (DPSK) modem with convolutional encoding and Viterbi decoding on time-varying fading channels is examined. Both the Rician and the lognormal channels are considered. Bit error probability upper bounds on fully-interleaved (zero-memory) fading channels are derived and substantiated by computer simulation. It is shown that the resulting coded system performance is a relatively insensitive function of the choice of channel model provided that the channel parameters are related according to the correspondence developed as part of this paper. Finally, a comparison of DPSK with a number of other modulation strategies is provided.
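Differential encoding for binary DPSK, as assumed by the modem above, can be sketched at the bit level (phases represented as 0/1 for 0/pi; a toy illustration without noise or fading):

```python
def dpsk_encode(bits):
    """Differential encoding for binary DPSK: a 1 toggles the carrier
    phase, a 0 keeps it (phase represented as 0/1 for 0/pi)."""
    phase, out = 0, []
    for b in bits:
        phase ^= b
        out.append(phase)
    return out

def dpsk_decode(phases, ref=0):
    """Differentially-coherent detection: each bit is recovered by
    comparing the current phase with the previous one."""
    out, prev = [], ref
    for p in phases:
        out.append(p ^ prev)
        prev = p
    return out
```

Because information is carried in phase transitions rather than absolute phase, the receiver needs no carrier phase reference, which is what makes DPSK attractive on the fading channels studied above.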
Constructions for finite-state codes
NASA Technical Reports Server (NTRS)
Pollara, F.; Mceliece, R. J.; Abdel-Ghaffar, K.
1987-01-01
A class of codes called finite-state (FS) codes is defined and investigated. These codes, which generalize both block and convolutional codes, are defined by their encoders, which are finite-state machines with parallel inputs and outputs. A family of upper bounds on the free distance of a given FS code is derived from known upper bounds on the minimum distance of block codes. A general construction for FS codes is then given, based on the idea of partitioning a given linear block code into cosets of one of its subcodes, and it is shown that in many cases the FS codes constructed in this way have a free distance d_free which is as large as possible. These codes are found without the need for lengthy computer searches, and have potential applications for future deep-space coding systems. The issue of catastrophic error propagation (CEP) for FS codes is also investigated.
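The coset-partitioning step underlying the FS-code construction can be illustrated on a toy example; the [4,3] even-weight code and the two-word subcode below are chosen for illustration only, not taken from the paper:

```python
import itertools

def cosets(code, subcode):
    """Partition a binary linear block code into cosets of one of its
    linear subcodes (the partitioning step of the FS-code construction)."""
    def xor(a, b):
        return tuple(x ^ y for x, y in zip(a, b))
    remaining, parts = set(code), []
    while remaining:
        leader = min(remaining)                   # any representative works
        coset = {xor(leader, s) for s in subcode}
        parts.append(sorted(coset))
        remaining -= coset
    return parts

# C: the [4, 3] even-weight code (8 words); S: a 2-word linear subcode.
C = [w for w in itertools.product((0, 1), repeat=4) if sum(w) % 2 == 0]
S = [(0, 0, 0, 0), (1, 1, 1, 1)]
parts = cosets(C, S)
```

The four resulting cosets could serve as the parallel output alphabets of a finite-state encoder.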
Progressive video coding for noisy channels
NASA Astrophysics Data System (ADS)
Kim, Beong-Jo; Xiong, Zixiang; Pearlman, William A.
1998-10-01
We extend the work of Sherwood and Zeger to progressive video coding for noisy channels. By utilizing a 3D extension of the set partitioning in hierarchical trees (SPIHT) algorithm, we cascade the resulting 3D SPIHT video coder with a rate-compatible punctured convolutional channel coder for transmission of video over a binary symmetric channel. Progressive coding is achieved by increasing the target rate of the 3D embedded SPIHT video coder as the channel condition improves. The performance of our proposed coding system is acceptable at low transmission rate and bad channel conditions. Its low complexity makes it suitable for emerging applications such as video over wireless channels.
Chang, Hang; Han, Ju; Zhong, Cheng; Snijders, Antoine M.; Mao, Jian-Hua
2017-01-01
The capabilities of (I) learning transferable knowledge across domains; and (II) fine-tuning the pre-learned base knowledge towards tasks with considerably smaller data scale are extremely important. Many of the existing transfer learning techniques are supervised approaches, among which deep learning has the demonstrated power of learning domain-transferable knowledge with large-scale networks trained on massive amounts of labeled data. However, in many biomedical tasks, both the data and the corresponding labels can be very limited, where the unsupervised transfer learning capability is urgently needed. In this paper, we propose a novel multi-scale convolutional sparse coding (MSCSC) method that (I) automatically learns filter banks at different scales in a joint fashion with enforced scale-specificity of learned patterns; and (II) provides an unsupervised solution for learning transferable base knowledge and fine-tuning it towards target tasks. Extensive experimental evaluation demonstrates the effectiveness of the proposed MSCSC in both regular and transfer learning tasks in various biomedical domains. PMID:28129148
NASA Astrophysics Data System (ADS)
Koepferl, Christine M.; Robitaille, Thomas P.
2017-11-01
When modeling astronomical objects throughout the universe, it is important to correctly treat the limitations of the data, for instance finite resolution and sensitivity. In order to simulate these effects, and to make radiative transfer models directly comparable to real observations, we have developed an open-source Python package called the FluxCompensator that enables the post-processing of the output of 3D Monte Carlo radiative transfer codes, such as Hyperion. With the FluxCompensator, realistic synthetic observations can be generated by modeling the effects of convolution with arbitrary point-spread functions, transmission curves, finite pixel resolution, noise, and reddening. Pipelines can be applied to compute synthetic observations that simulate observatories, such as the Spitzer Space Telescope or the Herschel Space Observatory. Additionally, this tool can read in existing observations (e.g., FITS format) and use the same settings for the synthetic observations. In this paper, we describe the package as well as present examples of such synthetic observations.
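A minimal sketch of this kind of post-processing chain (PSF convolution, finite pixel resolution, noise) is shown below; it illustrates the idea only and is not the FluxCompensator's actual API:

```python
import numpy as np

def synthetic_observation(model, psf, binning, noise_sigma, seed=0):
    """PSF convolution -> rebin to coarser pixels -> add Gaussian noise."""
    psf = psf / psf.sum()                       # flux-conserving PSF
    kh, kw = psf.shape
    kernel = np.zeros_like(model)
    kernel[:kh, :kw] = psf
    kernel = np.roll(kernel, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(model) * np.fft.fft2(kernel)))
    h, w = blurred.shape                        # finite pixel resolution:
    rebinned = blurred.reshape(h // binning, binning,
                               w // binning, binning).sum(axis=(1, 3))
    rng = np.random.default_rng(seed)
    return rebinned + rng.normal(0.0, noise_sigma, rebinned.shape)

model = np.zeros((16, 16))
model[8, 8] = 100.0                             # point-source model image
psf = np.ones((3, 3))                           # toy boxcar PSF
obs = synthetic_observation(model, psf, binning=4, noise_sigma=0.0)
```

With a normalized PSF and sum-rebinning, total flux is conserved, which is the property a radiative-transfer post-processor must preserve.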
Ghosh, Arindam; Lee, Jae-Won; Cho, Ho-Shin
2013-01-01
Due to their efficiency, reliability, and better channel and resource utilization, cooperative transmission technologies have been attractive options in underwater as well as terrestrial sensor networks. Their performance can be further improved if merged with forward error correction (FEC) techniques. In this paper, we propose and analyze a retransmission protocol named Cooperative-Hybrid Automatic Repeat reQuest (C-HARQ) for underwater acoustic sensor networks, which exploits both the reliability of cooperative ARQ (CARQ) and the efficiency of incremental redundancy-hybrid ARQ (IR-HARQ) using rate-compatible punctured convolutional (RCPC) codes. Extensive Monte Carlo simulations are performed to investigate the performance of the protocol, in terms of both throughput and energy efficiency. The results clearly reveal the enhancement in performance achieved by the C-HARQ protocol, which outperforms both CARQ and conventional stop-and-wait ARQ (S&W ARQ). Further, using computer simulations, optimum values of various network parameters are estimated so as to extract the best performance out of the C-HARQ protocol. PMID:24217359
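The RCPC mechanism at the heart of IR-HARQ can be sketched as follows; the puncturing pattern and bit values are illustrative, not taken from the paper:

```python
def puncture(coded_bits, pattern):
    """Delete mother-code bits according to a cyclic keep mask
    (1 = transmit). [1, 1, 1, 0] turns a rate-1/2 code into rate 2/3."""
    return [b for i, b in enumerate(coded_bits)
            if pattern[i % len(pattern)] == 1]

def depuncture(rx_bits, pattern, total_len):
    """Re-insert erasures (None) at punctured positions so the mother
    code's Viterbi decoder can run unchanged."""
    out, it = [], iter(rx_bits)
    for i in range(total_len):
        out.append(next(it) if pattern[i % len(pattern)] == 1 else None)
    return out

mother = [0, 1, 1, 0, 1, 1, 0, 0]        # 8 bits from a rate-1/2 encoder
keep = [1, 1, 1, 0]
tx = puncture(mother, keep)              # 6 bits actually transmitted
rx = depuncture(tx, keep, len(mother))   # erasures restored for decoding
```

In IR-HARQ, the punctured bits are exactly what a retransmission sends as incremental redundancy: the receiver fills the erasures and decodes at a lower effective rate.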
Prioritized packet video transmission over time-varying wireless channel using proactive FEC
NASA Astrophysics Data System (ADS)
Kumwilaisak, Wuttipong; Kim, JongWon; Kuo, C.-C. Jay
2000-12-01
Quality of video transmitted over time-varying wireless channels relies heavily on the coordinated effort to cope with both channel and source variations dynamically. Given the priority of each source packet and the estimated channel condition, an adaptive protection scheme based on joint source-channel criteria is investigated via proactive forward error correction (FEC). With proactive FEC in Reed Solomon (RS)/Rate-compatible punctured convolutional (RCPC) codes, we study a practical algorithm to match the relative priority of source packets and instantaneous channel conditions. The channel condition is estimated to capture the long-term fading effect in terms of the averaged SNR over a preset window. Proactive protection is performed for each packet based on the joint source-channel criteria with special attention to the accuracy, time-scale match, and feedback delay of channel status estimation. The overall gain of the proposed protection mechanism is demonstrated in terms of the end-to-end wireless video performance.
NASA Technical Reports Server (NTRS)
Redinbo, Robert
1994-01-01
Fault tolerance features in the first three major subsystems appearing in the next generation of communications satellites are described. These satellites will contain extensive but efficient high-speed processing and switching capabilities to support the low signal strengths associated with very small aperture terminals. The terminals' numerous data channels are combined through frequency division multiplexing (FDM) on the up-links and are protected individually by forward error-correcting (FEC) binary convolutional codes. The front-end processing resources, demultiplexer, demodulators, and FEC decoders extract all data channels which are then switched individually, multiplexed, and remodulated before retransmission to earth terminals through narrow beam spot antennas. Algorithm based fault tolerance (ABFT) techniques, which relate real number parity values with data flows and operations, are used to protect the data processing operations. The additional checking features utilize resources that can be substituted for normal processing elements when resource reconfiguration is required to replace a failed unit.
Study of (n,2n) reaction on 191,193Ir isotopes and isomeric cross section ratios
NASA Astrophysics Data System (ADS)
Vlastou, R.; Kalamara, A.; Kokkoris, M.; Patronis, N.; Serris, M.; Georgoulakis, M.; Hassapoglou, S.; Kobothanasis, K.; Axiotis, M.; Lagoyannis, A.
2017-09-01
The cross sections of the ¹⁹¹Ir(n,2n)¹⁹⁰Ir^(g+m1) and ¹⁹¹Ir(n,2n)¹⁹⁰Ir^(m2) reactions have been measured at 17.1 and 20.9 MeV neutron energies at the 5.5 MV tandem T11/25 Accelerator Laboratory of NCSR "Demokritos", using the activation method. The neutron beams were produced by means of the ³H(d,n)⁴He reaction at a flux of the order of 2 × 10⁵ n/cm²·s. The neutron flux has been deduced implementing the ²⁷Al(n,α) reaction, while the flux variation of the neutron beam was monitored by using a BF₃ detector. The ¹⁹³Ir(n,2n)¹⁹²Ir reaction cross section has also been determined, taking into account the contribution from the contaminant ¹⁹¹Ir(n,γ)¹⁹²Ir reaction. The correction method is based on the existing data in ENDF for the contaminant reaction, convoluted with the neutron spectra, which have been extensively studied by means of simulations using the NeusDesc and MCNP codes. Statistical model calculations using the code EMPIRE 3.2.2, taking into account pre-equilibrium emission, have been performed on the data measured in this work as well as on data reported in the literature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez-Castano, D. M.; Gonzalez, L. Brualla; Gago-Arias, M. A.
2012-01-15
Purpose: This work contains an alternative methodology for obtaining correction factors for ionization chamber (IC) dosimetry of small fields and composite fields such as IMRT. The method is based on the convolution/superposition (C/S) of an IC response function (RF) with the dose distribution in a certain plane which includes the chamber position. This method is an alternative to the full Monte Carlo (MC) approach that has been used previously by many authors for the same objective. Methods: The readout of an IC at a point inside a phantom irradiated by a certain beam can be obtained as the convolution of the dose spatial distribution caused by the beam and the IC two-dimensional RF. The proposed methodology has been applied successfully to predict the response of a PTW 30013 IC when measuring different nonreference fields, namely: output factors of 6 MV small fields, beam profiles of cobalt-60 narrow fields, and 6 MV radiosurgery segments. The two-dimensional RF of a PTW 30013 IC was obtained by MC simulation of the absorbed dose to cavity air when the IC was scanned by a 0.6 × 0.6 mm² cross-section parallel pencil beam at low depth in a water phantom. For each of the cases studied, the results of the IC direct measurement were compared with the corresponding results obtained by the C/S method. Results: For all of the cases studied, the agreement between the IC direct measurement and the IC calculated response was excellent (better than 1.5%). Conclusions: This method could be implemented in a TPS in order to calculate dosimetry correction factors when an experimental IMRT treatment verification with an in-phantom ionization chamber is performed. The mis-response of the IC due to the nonreference conditions could be quickly corrected by this method rather than employing MC-derived correction factors. This method can be considered as an alternative to the plan-class associated correction factors proposed recently as part of an IAEA work group on nonstandard field dosimetry.
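The core C/S idea, predicting the chamber reading as the convolution of the dose plane with the chamber's response function, can be sketched as follows; an idealized flat RF stands in for the MC-derived PTW 30013 response:

```python
import numpy as np

def chamber_reading(dose_plane, response):
    """Predict the IC reading as the 2D convolution of the dose plane
    with the (normalized) chamber response function."""
    response = response / response.sum()
    kh, kw = response.shape
    padded = np.pad(dose_plane, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(dose_plane, dtype=float)
    for i in range(dose_plane.shape[0]):
        for j in range(dose_plane.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * response)
    return out

# Uniform 10x10 field in a 20x20 plane: the reading equals the point
# dose deep inside the field, and is volume-averaged down at the edge.
dose = np.zeros((20, 20))
dose[5:15, 5:15] = 1.0
rf = np.ones((3, 3))          # idealized flat chamber response
reading = chamber_reading(dose, rf)
```

The ratio of point dose to convolved reading at the chamber position is precisely the kind of correction factor the method produces for small fields.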
Hydrogen analysis depth calibration by CORTEO Monte-Carlo simulation
NASA Astrophysics Data System (ADS)
Moser, M.; Reichart, P.; Bergmaier, A.; Greubel, C.; Schiettekatte, F.; Dollinger, G.
2016-03-01
Hydrogen imaging with sub-μm lateral resolution and sub-ppm sensitivity has become possible with coincident proton-proton (pp) scattering analysis (Reichart et al., 2004). Depth information is evaluated from the energy sum signal with respect to the energy loss of both protons on their path through the sample. To first order, there is no angular dependence due to elastic scattering. To second order, a path length effect due to different energy loss on the paths of the protons causes an angular dependence of the energy sum. Therefore, the energy sum signal has to be deconvolved depending on the matrix composition, i.e. mainly the atomic number Z, in order to get a depth-calibrated hydrogen profile. Although the path effect can be calculated analytically to first order, multiple scattering effects lead to significant deviations in the depth profile. Hence, in our new approach, we use the CORTEO Monte-Carlo code (Schiettekatte, 2008) in order to calculate the depth of a coincidence event depending on the scattering angle. The code takes individual detector geometry into account. In this paper we show that the code correctly reproduces measured pp-scattering energy spectra with roughness effects considered. With more than 100 μm thick Mylar-sandwich targets (Si, Fe, Ge) we demonstrate the deconvolution of the energy spectra on our current multistrip detector at the microprobe SNAKE at the Munich tandem accelerator lab. As a result, hydrogen profiles can be evaluated with an accuracy in depth of about 1% of the sample thickness.
Voltage measurements at the vacuum post-hole convolute of the Z pulsed-power accelerator
Waisman, E. M.; McBride, R. D.; Cuneo, M. E.; ...
2014-12-08
Presented are voltage measurements taken near the load region on the Z pulsed-power accelerator using an inductive voltage monitor (IVM). Specifically, the IVM was connected to, and thus monitored the voltage at, the bottom level of the accelerator's vacuum double post-hole convolute. Additional voltage and current measurements were taken at the accelerator's vacuum-insulator stack (at a radius of 1.6 m) by using standard D-dot and B-dot probes, respectively. During postprocessing, the measurements taken at the stack were translated to the location of the IVM measurements by using a lossless propagation model of the Z accelerator's magnetically insulated transmission lines (MITLs) and a lumped inductor model of the vacuum post-hole convolute. Across a wide variety of experiments conducted on the Z accelerator, the voltage histories obtained from the IVM and the lossless propagation technique agree well in overall shape and magnitude. However, large-amplitude, high-frequency oscillations are more pronounced in the IVM records. It is unclear whether these larger oscillations represent true voltage oscillations at the convolute or if they are due to noise pickup and/or transit-time effects and other resonant modes in the IVM. Results using a transit-time-correction technique and Fourier analysis support the latter. Regardless of which interpretation is correct, both true voltage oscillations and the excitement of resonant modes could be the result of transient electrical breakdowns in the post-hole convolute, though more information is required to determine definitively if such breakdowns occurred. Despite the larger oscillations in the IVM records, the general agreement found between the lossless propagation results and the results of the IVM shows that large voltages are transmitted efficiently through the MITLs on Z. These results are complementary to previous studies [R. D. McBride et al., Phys. Rev. ST Accel. Beams 13, 120401 (2010)] that showed efficient transmission of large currents through the MITLs on Z. Taken together, the two studies demonstrate the overall efficient delivery of very large electrical powers through the MITLs on Z.
Joint source-channel coding for motion-compensated DCT-based SNR scalable video.
Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K
2002-01-01
In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.
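The rate-allocation search over discrete source and channel coding rates can be sketched by brute force; the operating points and distortion values below are invented placeholders for the experimentally measured rate-distortion characteristics:

```python
import itertools

# Per-layer operating points: (source kbps, channel code rate, expected
# distortion). The numbers are invented for illustration.
LAYER_POINTS = [
    [(64, 1 / 2, 40.0), (64, 2 / 3, 48.0), (96, 2 / 3, 36.0)],  # base
    [(32, 1 / 2, 20.0), (48, 2 / 3, 16.0), (48, 1 / 2, 14.0)],  # enh.
]

def allocate(budget_kbps):
    """Exhaustively search the discrete per-layer (source rate, channel
    rate) pairs for the minimum-distortion combination whose transmitted
    rate (source rate / channel code rate) fits the channel budget."""
    best = None
    for combo in itertools.product(*LAYER_POINTS):
        tx_rate = sum(src / cr for src, cr, _ in combo)
        if tx_rate <= budget_kbps:
            dist = sum(d for _, _, d in combo)
            if best is None or dist < best[0]:
                best = (dist, combo, tx_rate)
    return best

dist, combo, tx_rate = allocate(budget_kbps=224)
```

Exhaustive search is feasible here because the candidate rate sets are small and discrete; larger problems would call for Lagrangian or dynamic-programming allocation.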
Image statistics decoding for convolutional codes
NASA Technical Reports Server (NTRS)
Pitt, G. H., III; Swanson, L.; Yuen, J. H.
1987-01-01
It is a fact that adjacent pixels in a Voyager image are very similar in grey level. This fact can be used in conjunction with the Maximum-Likelihood Convolutional Decoder (MCD) to decrease the error rate when decoding a picture from Voyager. Implementing this idea would require no changes in the Voyager spacecraft and could be used as a backup to the current system without too much expenditure, so the feasibility of it and the possible gains for Voyager were investigated. Simulations have shown that the gain could be as much as 2 dB at certain error rates, and experiments with real data inspired new ideas on ways to get the most information possible out of the received symbol stream.
Statistical inference of static analysis rules
NASA Technical Reports Server (NTRS)
Engler, Dawson Richards (Inventor)
2009-01-01
Various apparatus and methods are disclosed for identifying errors in program code. Respective numbers of observances of at least one correctness rule by different code instances that relate to the at least one correctness rule are counted in the program code. Each code instance has an associated counted number of observances of the correctness rule by the code instance. Also counted are respective numbers of violations of the correctness rule by different code instances that relate to the correctness rule. Each code instance has an associated counted number of violations of the correctness rule by the code instance. A respective likelihood of the validity is determined for each code instance as a function of the counted number of observances and counted number of violations. The likelihood of validity indicates a relative likelihood that a related code instance is required to observe the correctness rule. The violations may be output in order of the likelihood of validity of a violated correctness rule.
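The counting-and-ranking idea can be sketched as follows; the likelihood estimate `observed / (observed + violated)` is one simple choice for illustration, and the patent does not prescribe this exact formula:

```python
def rank_violations(stats):
    """Rank violated rules by the likelihood that the rule is real,
    estimated from observance vs. violation counts per code instance."""
    ranked = []
    for instance, (observed, violated) in stats.items():
        likelihood = observed / (observed + violated)
        ranked.append((likelihood, instance, violated))
    # Violations of rules that are almost always observed are the most
    # suspicious, so report the highest-likelihood instances first.
    return sorted(ranked, reverse=True)

# instance -> (times the rule was observed, times it was violated)
stats = {
    "spin_lock/spin_unlock": (99, 1),  # nearly always paired: likely bug
    "foo_open/foo_close": (3, 5),      # rule probably doesn't hold here
}
report = rank_violations(stats)
```

A rule observed 99 times and violated once is almost certainly real, so its single violation is reported first; a rule violated more often than observed probably is not a rule at all.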
A Synchronization Algorithm and Implementation for High-Speed Block Codes Applications. Part 4
NASA Technical Reports Server (NTRS)
Lin, Shu; Zhang, Yu; Nakamura, Eric B.; Uehara, Gregory T.
1998-01-01
Block codes have trellis structures and decoders amenable to high speed CMOS VLSI implementation. For a given CMOS technology, these structures enable operating speeds higher than those achievable using convolutional codes for only modest reductions in coding gain. As a result, block codes have tremendous potential for satellite trunk and other future high-speed communication applications. This paper describes a new approach for implementation of the synchronization function for block codes. The approach utilizes the output of the Viterbi decoder and therefore employs the strength of the decoder. Its operation requires no knowledge of the signal-to-noise ratio of the received signal, has a simple implementation, adds no overhead to the transmitted data, and has been shown to be effective in simulation for received SNR greater than 2 dB.
Convolutional neural networks for vibrational spectroscopic data analysis.
Acquarelli, Jacopo; van Laarhoven, Twan; Gerretzen, Jan; Tran, Thanh N; Buydens, Lutgarde M C; Marchiori, Elena
2017-02-15
In this work we show that convolutional neural networks (CNNs) can be efficiently used to classify vibrational spectroscopic data and identify important spectral regions. CNNs are the current state-of-the-art in image classification and speech recognition and can learn interpretable representations of the data. These characteristics make CNNs a good candidate for reducing the need for preprocessing and for highlighting important spectral regions, both of which are crucial steps in the analysis of vibrational spectroscopic data. Chemometric analysis of vibrational spectroscopic data often relies on preprocessing methods involving baseline correction, scatter correction and noise removal, which are applied to the spectra prior to model building. Preprocessing is a critical step because even in simple problems using 'reasonable' preprocessing methods may decrease the performance of the final model. We develop a new CNN based method and provide an accompanying publicly available software. It is based on a simple CNN architecture with a single convolutional layer (a so-called shallow CNN). Our method outperforms standard classification algorithms used in chemometrics (e.g. PLS) in terms of accuracy when applied to non-preprocessed test data (86% average accuracy compared to the 62% achieved by PLS), and it achieves better performance even on preprocessed test data (96% average accuracy compared to the 89% achieved by PLS). For interpretability purposes, our method includes a procedure for finding important spectral regions, thereby facilitating qualitative interpretation of results. Copyright © 2016 Elsevier B.V. All rights reserved.
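A forward pass of such a shallow CNN (one convolutional layer, pooling, linear classifier) can be sketched in plain NumPy; the random weights below stand in for trained values:

```python
import numpy as np

def shallow_cnn_forward(spectrum, kernels, w_out, b_out):
    """One 1D convolutional layer + ReLU + global average pooling +
    linear classifier: the forward pass of a 'shallow' CNN."""
    k = kernels.shape[1]
    n = spectrum.size - k + 1
    feats = np.array([[np.dot(spectrum[i:i + k], kern) for i in range(n)]
                      for kern in kernels])   # valid 1D convolutions
    feats = np.maximum(feats, 0.0)            # ReLU
    pooled = feats.mean(axis=1)               # global average pooling
    return w_out @ pooled + b_out             # class scores

rng = np.random.default_rng(0)
spectrum = rng.normal(size=128)   # one (synthetic) vibrational spectrum
kernels = rng.normal(size=(4, 9)) # 4 filters of width 9
w_out = rng.normal(size=(3, 4))   # 3 output classes
b_out = np.zeros(3)
logits = shallow_cnn_forward(spectrum, kernels, w_out, b_out)
```

Because the convolution slides across wavenumbers, the learned kernels respond to local spectral shapes regardless of position, which is what lets the trained filters highlight important spectral regions.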
Establishing Malware Attribution and Binary Provenance Using Multicompilation Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramshaw, M. J.
2017-07-28
Malware is a serious problem for computer systems and costs businesses and customers billions of dollars a year in addition to compromising their private information. Detecting malware is particularly difficult because malware source code can be compiled in many different ways and generate many different digital signatures, which causes problems for most anti-malware programs that rely on static signature detection. Our project uses a convolutional neural network to identify malware programs but these require large amounts of data to be effective. Towards that end, we gather thousands of source code files from publicly available programming contest sites and compile them with several different compilers and flags. Building upon current research, we then transform these binary files into image representations and use them to train a long-term recurrent convolutional neural network that will eventually be used to identify how a malware binary was compiled. This information will include the compiler, version of the compiler and the options used in compilation, information which can be critical in determining where a malware program came from and even who authored it.
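The byte-to-image transformation step can be sketched as follows; the fixed row width of 16 and the zero padding are assumptions for illustration:

```python
import numpy as np

def binary_to_image(raw, width=16):
    """Render a program binary as a grayscale image: one byte per
    pixel, fixed-width rows, zero-padded to fill the last row."""
    data = np.frombuffer(raw, dtype=np.uint8)
    rows = -(-data.size // width)             # ceiling division
    img = np.zeros(rows * width, dtype=np.uint8)
    img[:data.size] = data
    return img.reshape(rows, width)

img = binary_to_image(bytes(range(40)), width=16)
```

Compiler- and flag-specific byte patterns (prologues, padding, section layout) become visual textures that a convolutional network can learn to discriminate.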
Operational rate-distortion performance for joint source and channel coding of images.
Ruf, M J; Modestino, J W
1999-01-01
This paper describes a methodology for evaluating the operational rate-distortion behavior of combined source and channel coding schemes with particular application to images. In particular, we demonstrate use of the operational rate-distortion function to obtain the optimum tradeoff between source coding accuracy and channel error protection under the constraint of a fixed transmission bandwidth for the investigated transmission schemes. Furthermore, we develop information-theoretic bounds on performance for specific source and channel coding systems and demonstrate that our combined source-channel coding methodology applied to different schemes results in operational rate-distortion performance which closely approach these theoretical limits. We concentrate specifically on a wavelet-based subband source coding scheme and the use of binary rate-compatible punctured convolutional (RCPC) codes for transmission over the additive white Gaussian noise (AWGN) channel. Explicit results for real-world images demonstrate the efficacy of this approach.
Chen, Shuo; Luo, Chenggao; Wang, Hongqiang; Deng, Bin; Cheng, Yongqiang; Zhuang, Zhaowen
2018-04-26
As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, and staring imaging by producing spatiotemporal independent signals with coded apertures. However, there are still two problems in three-dimensional (3D) TCAI. Firstly, the large-scale reference-signal matrix based on meshing the 3D imaging area creates a heavy computational burden, thus leading to unsatisfactory efficiency. Secondly, it is difficult to resolve the target under low signal-to-noise ratio (SNR). In this paper, we propose a 3D imaging method based on matched filtering (MF) and convolutional neural network (CNN), which can reduce the computational burden and achieve high-resolution imaging for low SNR targets. In terms of the frequency-hopping (FH) signal, the original echo is processed with MF. By extracting the processed echo in different spike pulses separately, targets in different imaging planes are reconstructed simultaneously to decompose the global computational complexity, and then are synthesized together to reconstruct the 3D target. Based on the conventional TCAI model, we deduce and build a new TCAI model based on MF. Furthermore, the convolutional neural network (CNN) is designed to teach the MF-TCAI how to reconstruct the low SNR target better. The experimental results demonstrate that the MF-TCAI achieves impressive performance on imaging ability and efficiency under low SNR. Moreover, the MF-TCAI has learned to better resolve the low-SNR 3D target with the help of CNN. In summary, the proposed 3D TCAI can achieve: (1) low-SNR high-resolution imaging by using MF; (2) efficient 3D imaging by downsizing the large-scale reference-signal matrix; and (3) intelligent imaging with CNN. Therefore, the TCAI based on MF and CNN has great potential in applications such as security screening, nondestructive detection, medical diagnosis, etc.
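The matched-filtering step, correlating the echo with the known transmitted waveform so that target energy concentrates into spike pulses, can be sketched with a simple 1D example; a Barker-13 code stands in for the frequency-hopping waveform:

```python
import numpy as np

def matched_filter(rx, template):
    """Correlate the received signal with the known transmitted
    waveform (convolution with the time-reversed conjugate template)."""
    return np.convolve(rx, np.conj(template[::-1]), mode="valid")

# Barker-13 stands in for the transmitted waveform; one echo at
# delay 20 plus noise.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], float)
rng = np.random.default_rng(1)
rx = np.zeros(64)
rx[20:33] = barker13
rx += 0.05 * rng.normal(size=64)
out = matched_filter(rx, barker13)
peak = int(np.argmax(out))          # energy concentrates in one spike
```

The sharp correlation peak at the echo delay is what allows the per-plane extraction described above, and it is this matched-filter output that the CNN stage refines at low SNR.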
Reduction of metal artifacts in x-ray CT images using a convolutional neural network
NASA Astrophysics Data System (ADS)
Zhang, Yanbo; Chu, Ying; Yu, Hengyong
2017-09-01
Patients usually contain various metallic implants (e.g. dental fillings, prostheses), causing severe artifacts in x-ray CT images. Although a large number of metal artifact reduction (MAR) methods have been proposed in the past four decades, MAR is still one of the major problems in clinical x-ray CT. In this work, we develop a convolutional neural network (CNN) based MAR framework, which combines the information from the original and corrected images to suppress artifacts. Before performing MAR, we generate a group of data and train a CNN. First, we numerically simulate various metal artifact cases and build a dataset, which includes metal-free images (used as references), metal-inserted images, and images corrected by various MAR methods. Then, ten thousand patches are extracted from the database to train the metal artifact reduction CNN. In the MAR stage, the original image and two corrected images are stacked as a three-channel input image for the CNN, and a CNN image is generated with fewer artifacts. The water-equivalent regions in the CNN image are set to a uniform value to yield a CNN prior, whose forward projections are used to replace the metal-affected projections, followed by the FBP reconstruction. Experimental results demonstrate the superior metal artifact reduction capability of the proposed method to its competitors.
NASA Technical Reports Server (NTRS)
Lin, Shu; Rhee, Dojun; Rajpal, Sandeep
1993-01-01
This report presents a low-complexity and high performance concatenated coding scheme for high-speed satellite communications. In this proposed scheme, the NASA Standard Reed-Solomon (RS) code over GF(2^8) is used as the outer code and the second-order Reed-Muller (RM) code of Hamming distance 8 is used as the inner code. The RM inner code has a very simple trellis structure and is decoded with the soft-decision Viterbi decoding algorithm. It is shown that the proposed concatenated coding scheme achieves an error performance which is comparable to that of the NASA TDRS concatenated coding scheme in which the NASA Standard rate-1/2 convolutional code of constraint length 7 and d_free = 10 is used as the inner code. However, the proposed RM inner code has much smaller decoding complexity, less decoding delay, and much higher decoding speed. Consequently, the proposed concatenated coding scheme is suitable for reliable high-speed satellite communications, and it may be considered as an alternate coding scheme for the NASA TDRS system.
NASA Astrophysics Data System (ADS)
Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.
2006-01-01
In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding and, product code Reed Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed for discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Also, comparisons with classical scalable coding show the effectiveness of using hybrid scalable/multiple-description coding for wireless transmission.
Hybrid information privacy system: integration of chaotic neural network and RSA coding
NASA Astrophysics Data System (ADS)
Hsu, Ming-Kai; Willey, Jeff; Lee, Ting N.; Szu, Harold H.
2005-03-01
Electronic mail is adopted worldwide; most of it is easily hacked by hackers. In this paper, we propose a free, fast and convenient hybrid privacy system to protect email communication. The privacy system is implemented by combining the RSA security algorithm with a specific chaos neural network encryption process. The receiver can decrypt a received email as long as it can reproduce the specified chaos neural network series, the so-called spatial-temporal keys. The chaotic typing and initial seed value of the chaos neural network series, encrypted by the RSA algorithm, can reproduce the spatial-temporal keys. The encrypted chaotic typing and initial seed value are hidden in a watermark mixed nonlinearly with the message media, wrapped with convolutional error correction codes for wireless 3rd generation cellular phones. The message media can be an arbitrary image. Pattern noise has to be considered during transmission, and it could affect/change the spatial-temporal keys. Since any change/modification of the chaotic typing or initial seed value of the chaos neural network series is not acceptable, the RSA codec system must be robust and fault-tolerant over a wireless channel. The robust and fault-tolerant properties of chaos neural networks (CNN) were proved by a field theory of Associative Memory by Szu in 1997. The 1-D chaos-generating nodes from the logistic map, having arbitrary negative slope a = p/q and generating the N-shaped sigmoid, were first given by Szu in 1992. In this paper, we simulate the robust and fault-tolerance properties of CNN under additive noise and pattern noise. We also implement a private version of RSA coding and the chaos encryption process on messages.
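A toy version of a chaos-derived keystream, using a plain logistic map rather than the paper's chaotic-neural-network series, illustrates both the XOR usage and the sensitivity to the initial seed:

```python
def chaos_keystream(seed, r=3.99, n=16, skip=100):
    """Derive a byte keystream from logistic-map iterates
    x <- r*x*(1-x), discarding an initial transient."""
    x = seed
    for _ in range(skip):
        x = r * x * (1.0 - x)
    stream = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        stream.append(int(x * 256) % 256)
    return stream

msg = list(b"secret message!!")              # 16 plaintext bytes
ks1 = chaos_keystream(seed=0.3141592)
cipher = [m ^ k for m, k in zip(msg, ks1)]   # encrypt by XOR
plain = [c ^ k for c, k in zip(cipher, ks1)] # decrypt with same keys
ks2 = chaos_keystream(seed=0.3141593)        # seed off by 1e-7
```

Because of the positive Lyapunov exponent, the two seeds differing by 1e-7 yield entirely different keystreams after the transient, which is why the seed itself must be protected by RSA in the proposed scheme. (This toy map is for illustration, not a vetted cipher.)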
Kinetic Energy of Hydrocarbons as a Function of Electron Density and Convolutional Neural Networks.
Yao, Kun; Parkhill, John
2016-03-08
We demonstrate a convolutional neural network trained to reproduce the Kohn-Sham kinetic energy of hydrocarbons from an input electron density. The output of the network is used as a nonlocal correction to conventional local and semilocal kinetic functionals. We show that this approximation qualitatively reproduces Kohn-Sham potential energy surfaces when used with conventional exchange correlation functionals. The density which minimizes the total energy given by the functional is examined in detail. We identify several avenues to improve on this exploratory work, by reducing numerical noise and changing the structure of our functional. Finally we examine the features in the density learned by the neural network to anticipate the prospects of generalizing these models.
Fully convolutional network with cluster for semantic segmentation
NASA Astrophysics Data System (ADS)
Ma, Xiao; Chen, Zhongbi; Zhang, Jianlin
2018-04-01
At present, image semantic segmentation has been an active research topic for scientists in the fields of computer vision and artificial intelligence. In particular, extensive research on deep neural networks for image recognition has greatly promoted the development of semantic segmentation. This paper puts forward a method based on a fully convolutional network combined with the k-means clustering algorithm. The clustering algorithm, which uses the image's low-level features and initializes the cluster centers from a super-pixel segmentation, corrects the set of points with low reliability (those most likely to be misclassified) using the set of points with high reliability in each cluster region. This method refines the segmentation of the target contour and improves the accuracy of the image segmentation.
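The refinement step described above can be sketched roughly as follows. `kmeans` and `refine` are hypothetical names, and the confidence-threshold logic is my reading of the paper's high/low-reliability split, not its exact algorithm.

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Plain k-means on per-pixel feature vectors (N x D)."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # assign each pixel to its nearest center
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = features[labels == j].mean(axis=0)
    return labels

def refine(pred_labels, confidence, cluster_ids, thresh=0.5):
    """Within each cluster, relabel low-confidence pixels with the
    majority class of the high-confidence pixels in that cluster."""
    out = pred_labels.copy()
    for c in np.unique(cluster_ids):
        mask = cluster_ids == c
        trusted = mask & (confidence >= thresh)
        if trusted.any():
            majority = np.bincount(pred_labels[trusted]).argmax()
            out[mask & (confidence < thresh)] = majority
    return out
```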
Chow, J; Leung, M; Van Dyk, J
2008-07-01
This study provides new information on the evaluation of lung dose calculation algorithms as a function of the relative electron density of lung, ρ_e,lung. Doses calculated using the collapsed cone convolution (CCC) and adaptive convolution (AC) algorithms in lung with the Pinnacle³ system were compared to those calculated using Monte Carlo (MC) simulation (EGSnrc-based code). Three groups of lung phantoms, namely "Slab", "Column" and "Cube", with different ρ_e,lung (0.05-0.7), positions, volumes and shapes of lung in water were used. 6 and 18 MV photon beams with 4×4 and 10×10 cm² field sizes produced by a Varian 21EX linac were used in the MC dose calculations. Results show that the CCC algorithm agrees with AC to within ±1% for doses calculated in the lung phantoms, indicating that AC, which requires 3-4 times less computing time than CCC, is a good substitute for the CCC method. Comparing CCC and AC with MC, dose deviations are found when ρ_e,lung ≤ 0.1-0.3. The degree of deviation depends on the photon beam energy and field size, and is relatively large when high-energy photon beams with a small field are used. For the penumbra widths (20%-80%), CCC and AC agree well with MC for the "Slab" and "Cube" phantoms with the lung volumes at the central beam axis (CAX). However, deviations >2 mm occur in the "Column" phantoms, with two lung volumes separated by a water column along the CAX, using the 18 MV (4×4 cm²) photon beams with ρ_e,lung ≤ 0.1. © 2008 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Cui, Tie Jun; Wu, Rui Yuan; Wu, Wei; Shi, Chuan Bo; Li, Yun Bo
2017-10-01
We propose fast and accurate designs for large-scale, low-profile transmission-type anisotropic coding metasurfaces with multiple functions at millimeter-wave frequencies based on the antenna-array method. The numerical simulation of an anisotropic coding metasurface with a size of 30λ × 30λ by the proposed method takes only 20 min, which cannot be achieved by commercial software due to the huge memory usage on personal computers. To inspect the performance of coding metasurfaces in the millimeter-wave band, the working frequency is chosen as 60 GHz. Based on convolution operations and holographic theory, the proposed multifunctional anisotropic coding metasurface exhibits different effects under y-polarized and x-polarized incidences. This study extends the frequency range of coding metasurfaces, filling the gap between the microwave and terahertz bands, and implying promising applications in millimeter-wave communication and imaging.
Resource allocation for error resilient video coding over AWGN using optimization approach.
An, Cheolhong; Nguyen, Truong Q
2008-12-01
The number of slices for error-resilient video coding is jointly optimized with an 802.11a-like media access control layer and a physical layer that uses automatic repeat request and rate-compatible punctured convolutional codes over an additive white Gaussian noise channel, together with the channel-time allocation for time-division multiple access. For error-resilient video coding, the relation between the number of slices and coding efficiency is analyzed and formulated as a mathematical model. It is applied to the joint optimization problem, and the problem is solved by a convex optimization method, the primal-dual decomposition method. We compare the performance of a video communication system that uses the optimal number of slices with one that codes a picture as one slice. From numerical examples, the end-to-end distortion of the utility functions can be significantly reduced with the optimal number of slices per picture, especially at low signal-to-noise ratio.
Wu, Miao; Yan, Chuanbo; Liu, Huiqiang; Liu, Qian
2018-06-29
Ovarian cancer is one of the most common gynecologic malignancies. Accurate classification of ovarian cancer types (serous carcinoma, mucinous carcinoma, endometrioid carcinoma, clear cell carcinoma) is an essential part of the differential diagnosis. Computer-aided diagnosis (CADx) can provide useful advice for pathologists to determine the diagnosis correctly. In our study, we employed a deep convolutional neural network (DCNN) based on AlexNet to automatically classify the different types of ovarian cancers from cytological images. The DCNN consists of five convolutional layers, three max-pooling layers, and two fully connected layers. We then trained the model with two groups of input data separately: one was the original image data and the other was augmented image data, including image enhancement and image rotation. The testing results, obtained by 10-fold cross-validation, show that the accuracy of the classification models improved from 72.76% to 78.20% when augmented images were used as training data. The developed scheme is useful for classifying ovarian cancers from cytological images. © 2018 The Author(s).
High-order moments of spin-orbit energy in a multielectron configuration
NASA Astrophysics Data System (ADS)
Na, Xieyu; Poirier, M.
2016-07-01
In order to analyze the energy-level distribution in complex ions such as those found in warm dense plasmas, this paper provides values for high-order moments of the spin-orbit energy in a multielectron configuration. Using second-quantization results and standard angular algebra or fully analytical expressions, explicit values are given for moments up to 10th order for the spin-orbit energy. Two analytical methods are proposed, using the uncoupled or coupled orbital and spin angular momenta. The case of multiple open subshells is considered with the help of cumulants. The proposed expressions for spin-orbit energy moments are compared to numerical computations from Cowan's code and agree with them. The convergence of the Gram-Charlier expansion involving these spin-orbit moments is analyzed. While a spectrum with infinitely thin components cannot be adequately represented by such an expansion, a suitable convolution procedure ensures the convergence of the Gram-Charlier series provided high-order terms are accounted for. A corrected analytical formula for the third-order moment involving both spin-orbit and electron-electron interactions turns out to be in fair agreement with Cowan's numerical computations.
NASA Technical Reports Server (NTRS)
Rice, R. F.; Hilbert, E. E. (Inventor)
1976-01-01
A space communication system incorporating a concatenated Reed Solomon Viterbi coding channel is discussed for transmitting compressed and uncompressed data from a spacecraft to a data processing center on Earth. Imaging (and other) data are first compressed into source blocks which are then coded by a Reed Solomon coder and interleaver, followed by a convolutional encoder. The received data is first decoded by a Viterbi decoder, followed by a Reed Solomon decoder and deinterleaver. The output of the latter is then decompressed, based on the compression criteria used in compressing the data in the spacecraft. The decompressed data is processed to reconstruct an approximation of the original data-producing condition or images.
A lncRNA Perspective into (Re)Building the Heart.
Frank, Stefan; Aguirre, Aitor; Hescheler, Juergen; Kurian, Leo
2016-01-01
Our conception of the human genome, long focused on the 2% that codes for proteins, has profoundly changed since its first draft assembly in 2001. Since then, an unanticipatedly expansive functionality and convolution has been attributed to the majority of the genome that is transcribed in a cell-type/context-specific manner into transcripts with no apparent protein coding ability. While the majority of these transcripts, currently annotated as long non-coding RNAs (lncRNAs), are functionally uncharacterized, their prominent role in embryonic development and tissue homeostasis, especially in the context of the heart, is emerging. In this review, we summarize and discuss the latest advances in understanding the relevance of lncRNAs in (re)building the heart.
Boyd, Andrew D; Li, Jianrong ‘John’; Burton, Mike D; Jonen, Michael; Gardeux, Vincent; Achour, Ikbel; Luo, Roger Q; Zenku, Ilir; Bahroos, Neil; Brown, Stephen B; Vanden Hoek, Terry; Lussier, Yves A
2013-01-01
Objective Applying the science of networks to quantify the discriminatory impact of the ICD-9-CM to ICD-10-CM transition between clinical specialties. Materials and Methods Datasets were the Center for Medicaid and Medicare Services ICD-9-CM to ICD-10-CM mapping files, general equivalence mappings, and statewide Medicaid emergency department billing. Diagnoses were represented as nodes and their mappings as directional relationships. The complex network was synthesized as an aggregate of simpler motifs and tabulation per clinical specialty. Results We identified five mapping motif categories: identity, class-to-subclass, subclass-to-class, convoluted, and no mapping. Convoluted mappings indicate that multiple ICD-9-CM and ICD-10-CM codes share complex, entangled, and non-reciprocal mappings. The proportions of convoluted diagnoses mappings (36% overall) range from 5% (hematology) to 60% (obstetrics and injuries). In a case study of 24 008 patient visits in 217 emergency departments, 27% of the costs are associated with convoluted diagnoses, with ‘abdominal pain’ and ‘gastroenteritis’ accounting for approximately 3.5%. Discussion Previous qualitative studies report that administrators and clinicians are likely to be challenged in understanding and managing their practice because of the ICD-10-CM transition. We substantiate the complexity of this transition with a thorough quantitative summary per clinical specialty, a case study, and the tools to apply this methodology easily to any clinical practice in the form of a web portal and analytic tables. Conclusions Post-transition, successful management of frequent diseases with convoluted mapping network patterns is critical. The http://lussierlab.org/transition-to-ICD10CM web portal provides insight in linking onerous diseases to the ICD-10 transition. PMID:23645552
Power corrections to TMD factorization for Z-boson production
Balitsky, I.; Tarasov, A.
2018-05-24
A typical factorization formula for production of a particle with a small transverse momentum in hadron-hadron collisions is given by a convolution of two TMD parton densities with the cross section for production of the final particle by the two partons. For practical applications at a given transverse momentum, though, one should estimate at what momenta the power corrections to the TMD factorization formula become essential. In this work, we calculate the first power corrections to the TMD factorization formula for Z-boson production and the Drell-Yan process in high-energy hadron-hadron collisions. At the leading order in N_c, power corrections are expressed in terms of leading-power TMDs by QCD equations of motion.
Review of image processing fundamentals
NASA Technical Reports Server (NTRS)
Billingsley, F. C.
1985-01-01
Image processing through convolution, transform coding, spatial-frequency alterations, sampling, and interpolation is considered. It is noted that convolution in one domain (real or frequency) is equivalent to multiplication in the other (frequency or real), and that the relative amplitudes of the Fourier components must be retained to reproduce any waveshape. It is suggested that all digital systems may be considered equivalent, with a frequency content approximately at the Nyquist limit and a Gaussian frequency response. An optimized cubic version of the interpolation continuum image is derived as a set of cubic splines. Pixel replication has been employed to enlarge the visible area of digital samples; however, suitable elimination of the extraneous high frequencies involved in the visible edges, by defocusing, is necessary to allow the underlying object represented by the data values to be seen.
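The convolution-multiplication duality stated above is easy to verify numerically. A minimal NumPy check (variable names are mine):

```python
import numpy as np

# Convolution theorem for periodic signals:
# circular_conv(x, h) == IFFT( FFT(x) * FFT(h) )
rng = np.random.default_rng(1)
x = rng.standard_normal(64)
h = rng.standard_normal(64)

# Direct circular convolution in the "real" domain
direct = np.array([sum(x[m] * h[(n - m) % 64] for m in range(64))
                   for n in range(64)])

# Pointwise multiplication in the frequency domain
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real
```

The two results agree to floating-point precision, illustrating why filtering can be done in whichever domain is cheaper.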
Performance of convolutionally encoded noncoherent MFSK modem in fading channels
NASA Technical Reports Server (NTRS)
Modestino, J. W.; Mui, S. Y.
1976-01-01
The performance of a convolutionally encoded noncoherent multiple-frequency-shift-keyed (MFSK) modem utilizing Viterbi maximum-likelihood decoding and operating on a fading channel is described. Both the lognormal and classical Rician fading channels are considered, for both slow and time-varying channel conditions. Primary interest is in the resulting bit error rate as a function of the ratio between the energy per transmitted information bit and the noise spectral density, parameterized by both the fading-channel and code parameters. Fairly general upper bounds on bit error probability are provided and compared with simulation results in the two extremes of zero and infinite channel memory. The efficacy of simple block interleaving in combating channel memory effects is thoroughly explored. Both quantized and unquantized receiver outputs are considered.
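Block interleaving, whose efficacy the abstract explores, can be sketched in a few lines: symbols are written row-wise into a rows × cols block and read column-wise, so a burst of channel errors is spread into isolated errors after deinterleaving. A minimal sketch with hypothetical names:

```python
def interleave(symbols, rows, cols):
    """Write row-wise into a rows x cols block, read column-wise."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    # Deinterleaving is interleaving with the block dimensions swapped.
    return interleave(symbols, cols, rows)
```

After deinterleaving, a burst of length up to `rows` in the transmitted stream lands on symbols at least `cols` apart, which is what makes the errors look isolated to the convolutional decoder.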
New double-byte error-correcting codes for memory systems
NASA Technical Reports Server (NTRS)
Feng, Gui-Liang; Wu, Xinen; Rao, T. R. N.
1996-01-01
Error-correcting or error-detecting codes have been used in the computer industry to increase reliability, reduce service costs, and maintain data integrity. The single-byte error-correcting and double-byte error-detecting (SbEC-DbED) codes have been successfully used in computer memory subsystems. There are many methods to construct double-byte error-correcting (DBEC) codes. In the present paper we construct a class of double-byte error-correcting codes, which are more efficient than those known to be optimum, and a decoding procedure for our codes is also considered.
NASA Astrophysics Data System (ADS)
González, J. A.; Guzmán, F. S.
2018-03-01
We present a method for estimating the velocity of a wandering black hole and the equation of state of the gas around it based on a catalog of numerical simulations. The method uses machine-learning methods based on convolutional neural networks applied to the classification of images resulting from numerical simulations. Specifically, we focus on the supersonic velocity regime and choose the direction of the black hole to be parallel to its spin. We build a catalog of 900 simulations by numerically solving Euler's equations on the fixed space-time background of a black hole, for two parameters: the adiabatic index Γ with values in the range [1.1, 5/3], and the asymptotic relative velocity of the black hole with respect to the surroundings, v∞, with values within [0.2, 0.8]c. For each simulation we produce a 2D image of the gas density once the accretion process has approached a stationary regime. The results obtained show that the implemented convolutional neural networks are able to correctly classify the adiabatic index 87.78% of the time within an uncertainty of ±0.0284, while the prediction of the velocity is correct 96.67% of the time within an uncertainty of ±0.03c. We expect that this combination of a massive number of numerical simulations and machine-learning methods will help us analyze more complicated scenarios related to future high-resolution observations of black holes, like those from the Event Horizon Telescope.
Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators
NASA Astrophysics Data System (ADS)
Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.
2018-03-01
We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction-based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m−1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N²) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.
Combinatorial neural codes from a mathematical coding theory perspective.
Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L
2013-07-01
Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
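The error-correction capability the authors measure for neural codes is, in classical coding theory, governed by the minimum Hamming distance of the code. A small illustrative sketch (function names are mine):

```python
from itertools import combinations

def hamming(u, v):
    """Hamming distance between two equal-length codewords."""
    return sum(a != b for a, b in zip(u, v))

def min_distance(code):
    """Minimum pairwise Hamming distance over all codeword pairs."""
    return min(hamming(u, v) for u, v in combinations(code, 2))

def correctable_errors(code):
    # A code with minimum distance d corrects floor((d-1)/2) errors.
    return (min_distance(code) - 1) // 2
```

For example, the length-5 repetition code has distance 5 and corrects 2 errors, while the length-3 even-parity code has distance 2 and corrects none, which is the kind of trade-off the abstract discusses for redundant receptive field codes.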
Wang, Ming-Xiao; Cuevas, Catherina A; Su, Xiao-Tong; Wu, Peng; Gao, Zhong-Xiuzi; Lin, Dao-Hong; McCormick, James A; Yang, Chao-Ling; Wang, Wen-Hui; Ellison, David H
2018-04-01
Kir4.1 in the distal convoluted tubule plays a key role in sensing plasma potassium and in modulating the thiazide-sensitive sodium-chloride cotransporter (NCC). Here we tested whether dietary potassium intake modulates Kir4.1 and whether this is essential for mediating the effect of potassium diet on NCC. High potassium intake inhibited the basolateral 40 pS potassium channel (a Kir4.1/5.1 heterotetramer) in the distal convoluted tubule, decreased basolateral potassium conductance, and depolarized the distal convoluted tubule membrane in Kcnj10flox/flox mice, herein referred to as control mice. In contrast, low potassium intake activated Kir4.1, increased potassium currents, and hyperpolarized the distal convoluted tubule membrane. These effects of dietary potassium intake on the basolateral potassium conductance and membrane potential in the distal convoluted tubule were completely absent in inducible kidney-specific Kir4.1 knockout mice. Furthermore, high potassium intake decreased, whereas low potassium intake increased the abundance of NCC expression only in the control but not in kidney-specific Kir4.1 knockout mice. Renal clearance studies demonstrated that low potassium augmented, while high potassium diminished, hydrochlorothiazide-induced natriuresis in control mice. Disruption of Kir4.1 significantly increased basal urinary sodium excretion but it abolished the natriuretic effect of hydrochlorothiazide. Finally, hypokalemia and metabolic alkalosis in kidney-specific Kir4.1 knockout mice were exacerbated by potassium restriction and only partially corrected by a high-potassium diet. Thus, Kir4.1 plays an essential role in mediating the effect of dietary potassium intake on NCC activity and potassium homeostasis. Copyright © 2017 International Society of Nephrology. Published by Elsevier Inc. All rights reserved.
Cheng, Phillip M; Malhi, Harshawn S
2017-04-01
The purpose of this study is to evaluate transfer learning with deep convolutional neural networks for the classification of abdominal ultrasound images. Grayscale images from 185 consecutive clinical abdominal ultrasound studies were categorized into 11 categories based on the text annotation specified by the technologist for the image. Cropped images were rescaled to 256 × 256 resolution and randomized, with 4094 images from 136 studies constituting the training set, and 1423 images from 49 studies constituting the test set. The fully connected layers of two convolutional neural networks based on CaffeNet and VGGNet, previously trained on the 2012 Large Scale Visual Recognition Challenge data set, were retrained on the training set. Weights in the convolutional layers of each network were frozen to serve as fixed feature extractors. Accuracy on the test set was evaluated for each network. A radiologist experienced in abdominal ultrasound also independently classified the images in the test set into the same 11 categories. The CaffeNet network classified 77.3% of the test set images accurately (1100/1423 images), with a top-2 accuracy of 90.4% (1287/1423 images). The larger VGGNet network classified 77.9% of the test set accurately (1109/1423 images), with a top-2 accuracy of 89.7% (1276/1423 images). The radiologist classified 71.7% of the test set images correctly (1020/1423 images). The differences in classification accuracies between both neural networks and the radiologist were statistically significant (p < 0.001). The results demonstrate that transfer learning with convolutional neural networks may be used to construct effective classifiers for abdominal ultrasound images.
USDA-ARS?s Scientific Manuscript database
KCC3 and KCC1 are potassium chloride transporters with partially overlapping function, and KCC3 knockout mice exhibit hypertension. Two KCC3 isoforms differ by alternate promoters and first coding exons: KCC3a is widely expressed, and KCC3b is highly expressed in kidney proximal convoluted tubule. W...
Method for Viterbi decoding of large constraint length convolutional codes
NASA Technical Reports Server (NTRS)
Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor); Reed, Irving S. (Inventor); Jing, Sun (Inventor)
1988-01-01
A new method of Viterbi decoding of convolutional codes lends itself to a pipeline VLSI architecture using a single sequential processor to compute the path metrics in the Viterbi trellis. An array method is used to store the path information for NK intervals, where N is a chosen integer and K is the constraint length. The selected path at the end of each NK interval is then read from the last entry in the array. A trace-back method is used to return to the beginning of the selected path, i.e., to the first time unit of the interval NK, to read out the stored branch metrics of the selected path, which correspond to the message bits. The decoding decision made in this way is no longer maximum likelihood, but can be almost as good, provided that the constraint length K is not too small. The advantage is that for a long message, it is not necessary to provide a large memory to store the trellis-derived information until the end of the message in order to select the path that is to be decoded; the selection is made at the end of every NK time units, thus decoding a long message in successive blocks.
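For reference, a textbook Viterbi decoder for a short constraint-length code can be sketched as below. This uses register exchange rather than the patent's trace-back array, and the rate-1/2, K=3 code with generators (7, 5) octal is my choice of example, not the patent's:

```python
G = [0b111, 0b101]          # generator polynomials 7 and 5 (octal)
K = 3                       # constraint length
NSTATES = 1 << (K - 1)      # 4 trellis states

def parity(x):
    return bin(x).count("1") & 1

def encode(bits):
    """Rate-1/2 convolutional encoder; newest input bit in the register MSB."""
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state
        out += [parity(reg & g) for g in G]
        state = reg >> 1
    return out

def viterbi(received):
    """Hard-decision Viterbi decoding, keeping the best path per state."""
    INF = float("inf")
    metric = [0.0] + [INF] * (NSTATES - 1)   # start in the all-zero state
    paths = [[] for _ in range(NSTATES)]
    for t in range(0, len(received), 2):
        r = received[t:t + 2]
        new_metric = [INF] * NSTATES
        new_paths = [None] * NSTATES
        for s in range(NSTATES):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                expect = [parity(reg & g) for g in G]
                m = metric[s] + sum(e != x for e, x in zip(expect, r))
                nxt = reg >> 1
                if m < new_metric[nxt]:
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]
```

With a free distance of 5, this code lets the decoder recover the message even when a transmitted bit is flipped, which is what the block-wise selection in the patent preserves at much lower memory cost.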
Digital high speed programmable convolver
NASA Astrophysics Data System (ADS)
Rearick, T. C.
1984-12-01
A circuit module for rapidly calculating a discrete numerical convolution is described. A convolution such as finding the sum of the products of a 16-bit constant and a 16-bit variable is performed by a module that is programmable, so that the constant may be changed for a new problem. In addition, the module may be programmed to find the sum of the products of 4- and 8-bit constants and variables. RAMs (random access memories) are loaded with the partial products of the selected constant and all possible variables. Then, when the actual variable is loaded, it acts as an address to find the correct partial product in the particular RAM. The partial products from all of the RAMs are shifted to the appropriate numerical power position (if necessary) and then added in adder elements.
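The RAM lookup idea can be mimicked in software: precompute the products of each fixed coefficient with every possible variable value, then use each incoming sample as the table address. A rough sketch with hypothetical names:

```python
def build_table(constant, bits=8):
    """Precompute constant * v for every possible 'bits'-wide variable,
    emulating the RAM of partial products."""
    return [constant * v for v in range(1 << bits)]

def convolve_lut(samples, tables, bits=8):
    """Sum of products via table lookup: each sample value is the RAM address."""
    n = len(tables)
    out = []
    for i in range(len(samples) - n + 1):
        out.append(sum(tables[k][samples[i + k]] for k in range(n)))
    return out
```

As in the hardware, no multiplier is needed at run time; the cost moves into table memory, which is why the module reloads its RAMs when the constant changes.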
Performance of the ICAO standard core service modulation and coding techniques
NASA Technical Reports Server (NTRS)
Lodge, John; Moher, Michael
1988-01-01
Aviation binary phase shift keying (A-BPSK) is described, and simulated performance results are given that demonstrate robust performance in the presence of hard-limiting amplifiers. The performance of coherently detected A-BPSK with rate-1/2 convolutional coding is given. The performance loss due to Rician fading is shown to be less than 1 dB over the simulated range. A partially coherent detection scheme that does not require carrier phase recovery is described. This scheme exhibits performance similar to coherent detection at high bit error rates, while it is superior at lower bit error rates.
The digestive system of the stony coral Stylophora pistillata.
Raz-Bahat, M; Douek, J; Moiseeva, E; Peters, E C; Rinkevich, B
2017-05-01
Because hermatypic species rely on symbiotic algal photosynthesis, most of the literature in this field focuses on this autotrophic mode, and very little research has studied the morphology of the coral's digestive system or the digestion process of particulate food. Using histology and histochemistry, our research reveals that Stylophora pistillata's digestive system is concentrated at the coral's peristome, actinopharynx and mesenterial filaments (MF). We used in-situ hybridization (ISH) of the RNA transcript of the gene that codes for the S. pistillata digestive enzyme chymotrypsinogen to shed light on the functionality of the digestive system. Both the histochemistry and the ISH pointed to the MF being specialized digestive organs, equipped with large numbers of acidophilic and basophilic granular gland cells, as well as acidophilic non-granular gland cells, some of which produce chymotrypsinogen. We identified two types of MF: short, trilobed MF and unilobed, long and convoluted MF. Each S. pistillata polyp harbors two long convoluted MF and 10 short MF. While the short MF have neither secreting nor stinging cells, each of the convoluted MF displays gradual cytological changes along its longitudinal axis, alternating between stinging and secreting cells and three distinctive types of secretory cells. These observations indicate the important digestive role of the long convoluted MF. They also indicate the existence of novel feeding compartments in the gastric cavity of the polyp, primarily in the nutritionally active peristome, in the actinopharynx and in three regions of the MF that differ from each other in their cellular components, general morphology and chymotrypsinogen excretion.
Generalized quantum kinetic expansion: Higher-order corrections to multichromophoric Förster theory
NASA Astrophysics Data System (ADS)
Wu, Jianlan; Gong, Zhihao; Tang, Zhoufei
2015-08-01
For a general two-cluster energy transfer network, a new methodology of the generalized quantum kinetic expansion (GQKE) method is developed, which predicts an exact time-convolution equation for the cluster population evolution under the initial condition of the local cluster equilibrium state. The cluster-to-cluster rate kernel is expanded over the inter-cluster couplings. The lowest second-order GQKE rate recovers the multichromophoric Förster theory (MCFT) rate. The higher-order corrections to the MCFT rate are systematically included using the continued fraction resummation form, resulting in the resummed GQKE method. The reliability of the GQKE methodology is verified in two model systems, revealing the relevance of higher-order corrections.
NASA Astrophysics Data System (ADS)
de Schryver, C.; Weithoffer, S.; Wasenmüller, U.; Wehn, N.
2012-09-01
Channel coding is a standard technique in all wireless communication systems. In addition to the typically employed methods such as convolutional coding, turbo coding or low-density parity-check (LDPC) coding, algebraic codes are used in many cases. For example, outer BCH coding is applied in the DVB-S2 standard for satellite TV broadcasting. A key operation for BCH and the related Reed-Solomon codes is multiplication in finite fields (Galois fields), where extension fields of prime fields are used. Many architectures for multiplication in finite fields have been published over the last decades. This paper examines in detail four different multiplier architectures that offer the potential for very high throughput. We investigate the implementation performance of these multipliers on FPGA technology in the context of channel coding. We study the efficiency of the multipliers with respect to area, frequency and throughput, as well as configurability and scalability. The implementation data of the fully verified circuits are provided for a Xilinx Virtex-4 device after place and route.
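A Galois-field multiplication of the kind these architectures accelerate can be written in a few lines of software (here GF(2^8) with the AES polynomial 0x11B as an example; Reed-Solomon systems commonly use 0x11D instead):

```python
def gf256_mul(a, b, poly=0x11B):
    """Multiply two elements of GF(2^8): shift-and-add with carryless XOR,
    reducing modulo the irreducible polynomial whenever a overflows 8 bits."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a        # "add" is XOR in characteristic 2
        b >>= 1
        a <<= 1
        if a & 0x100:          # degree-8 overflow: reduce mod poly
            a ^= poly
    return result
```

The hardware architectures in the paper unroll or parallelize exactly this shift-reduce loop; the FIPS-197 worked example {57}·{83} = {C1} is a convenient correctness check.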
Coded aperture detector: an image sensor with sub 20-nm pixel resolution.
Miyakawa, Ryan; Mayer, Rafael; Wojdyla, Antoine; Vannier, Nicolas; Lesser, Ian; Aron-Dine, Shifrah; Naulleau, Patrick
2014-08-11
We describe the coded aperture detector, a novel image sensor based on uniformly redundant arrays (URAs) with customizable pixel size, resolution, and operating photon-energy regime. In this sensor, a coded aperture is scanned laterally at the image plane of an optical system, and the transmitted intensity is measured by a photodiode. The image intensity is then digitally reconstructed using a simple convolution. We present results from a proof-of-principle optical prototype, demonstrating high-fidelity image sensing comparable to a CCD. A 20-nm half-pitch URA fabricated by the Center for X-ray Optics (CXRO) nano-fabrication laboratory is presented, suitable for high-resolution image sensing at EUV and soft X-ray wavelengths.
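The "simple convolution" reconstruction can be illustrated with a toy 1D analogue (hypothetical, not the CXRO implementation): a length-11 quadratic-residue mask has the flat-sidelobe autocorrelation that makes URAs decodable by a single cyclic correlation.

```python
p = 11                                   # mask length (a prime with p % 4 == 3)
residues = {(i * i) % p for i in range(1, p)}
mask = [1 if i in residues else 0 for i in range(p)]   # open (1) / closed (0)
decode = [2 * m - 1 for m in mask]       # balanced decoding array

obj = [0] * p
obj[2], obj[7] = 4, 1                    # sparse toy scene

# Measurement: cyclic convolution of the scene with the scanned mask
meas = [sum(obj[j] * mask[(i - j) % p] for j in range(p)) for i in range(p)]

# Reconstruction: cyclic correlation with the decoding array, which here
# yields 6*obj - sum(obj), so undo the offset and scale afterwards
corr = [sum(meas[(i + j) % p] * decode[j] for j in range(p)) for i in range(p)]
total = sum(meas) // sum(mask)           # total scene flux
recon = [(c + total) // 6 for c in corr]
```

The reconstruction is exact because the mask's cross-correlation with the decoding array is a delta function plus a constant — the defining property of a URA.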
Design study of a HEAO-C spread spectrum transponder telemetry system for use with the TDRSS subnet
NASA Technical Reports Server (NTRS)
Weathers, G.
1975-01-01
The results of a design study of a spread spectrum transponder for use on the HEAO-C satellite were given. The transponder performs the functions of code turn-around for ground range and range-rate determination, ground command receiver, and telemetry data transmitter. The spacecraft transponder and associated communication system components will allow the HEAO-C satellite to utilize the Tracking and Data Relay Satellite System (TDRSS) subnet of the post 1978 STDN. The following areas were discussed in the report: TDRSS Subnet Description, TDRSS-HEAO-C System Configuration, Gold Code Generator, Convolutional Encoder Design and Decoder Algorithm, High Speed Sequence Generators, Statistical Evaluation of Candidate Code Sequences using Amplitude and Phase Moments, Code and Carrier Phase Lock Loops, Total Spread Spectrum Transponder System, and Reference Literature Search.
Viterbi decoding for satellite and space communication.
NASA Technical Reports Server (NTRS)
Heller, J. A.; Jacobs, I. M.
1971-01-01
Convolutional coding and Viterbi decoding, along with binary phase-shift-keyed modulation, are presented as an efficient system for reliable communication on power-limited satellite and space channels. Performance results, obtained theoretically and through computer simulation, are given for optimum short-constraint-length codes over a range of code constraint lengths and code rates. System efficiency is compared for hard receiver quantization and for 4- and 8-level soft quantization. The effects on performance of varying certain parameters relevant to decoder complexity and cost are examined. Performance degradation due to imperfect carrier phase coherence is evaluated quantitatively and compared with that of an uncoded system. As an example of decoder performance versus complexity, a recently implemented 2-Mbit/sec, constraint-length-7 Viterbi decoder is discussed. Finally, a comparison is made between Viterbi and sequential decoding in terms of suitability to various system requirements.
New class of photonic quantum error correction codes
NASA Astrophysics Data System (ADS)
Silveri, Matti; Michael, Marios; Brierley, R. T.; Salmilehto, Juha; Albert, Victor V.; Jiang, Liang; Girvin, S. M.
We present a new class of quantum error correction codes for applications in quantum memories, communication and scalable computation. These codes are constructed from a finite superposition of Fock states and can exactly correct errors that are polynomial up to a specified degree in creation and destruction operators. Equivalently, they can perform approximate quantum error correction to any given order in time step for the continuous-time dissipative evolution under these errors. The codes are related to two-mode photonic codes but offer the advantage of requiring only a single photon mode to correct loss (amplitude damping), as well as the ability to correct other errors, e.g. dephasing. Our codes are also similar in spirit to photonic ''cat codes'' but have several advantages including smaller mean occupation number and exact rather than approximate orthogonality of the code words. We analyze how the rate of uncorrectable errors scales with the code complexity and discuss the unitary control for the recovery process. These codes are realizable with current superconducting qubit technology and can increase the fidelity of photonic quantum communication and memories.
A Geophysical Inversion Model Enhancement Technique Based on the Blind Deconvolution
NASA Astrophysics Data System (ADS)
Zuo, B.; Hu, X.; Li, H.
2011-12-01
A model-enhancement technique is proposed to sharpen the edges and details of geophysical inversion models without introducing any additional information. First, the theoretical correctness of the proposed technique is discussed: a method that approximates the PSF (point spread function) by convolution with the inversion MRM (model resolution matrix) is designed to demonstrate the correctness of the deconvolution enhancement. Then, a total-variation-regularized blind-deconvolution enhancement algorithm is proposed. In previous research, Oldenburg et al. demonstrated the connection between the PSF and the geophysical inverse solution, and Alumbaugh et al. proposed that the PSF could provide more information if treated as an averaging or low-pass filter. We accordingly treat the PSF as a low-pass filter and enhance the inversion model on the basis of the PSF convolution approximation. Both a 1D linear and a 2D magnetotelluric inversion example are used to assess the validity of the theory and the algorithm. For the 1D linear inversion problem, the convolution-approximation error is only 0.15%. In the 2D synthetic enhancement experiment, the edges of the conductive prism and the resistive host become sharper after deconvolution, and numerical analysis shows that the enhanced result is closer to the true model than the original inversion model. Moreover, artifacts in the inversion model are suppressed, and the overall model precision increases by 75%. All experiments show that the structural details and numerical precision of the inversion model are significantly improved, especially in anomalous regions. The correlation coefficients between the enhanced inversion model and the true model are shown in Fig. 1. The figure illustrates that the proposed algorithm recovers more information and structural detail of the true model; using it can give clearer insight into inversion results and support better-informed decisions.
A noncoherent optical analog image processor.
Swindell, W
1970-11-01
The description of a machine that performs a variety of image processing operations is given, together with a theoretical discussion of its operation. Spatial processing is performed by corrective convolution techniques. Density processing is achieved by means of an electrical transfer function generator included in the video circuit. Examples of images processed for removal of image motion blur, defocus, and atmospheric seeing blur are shown.
NASA Astrophysics Data System (ADS)
Adal, Kedir M.; van Etten, Peter G.; Martinez, Jose P.; Rouwen, Kenneth; Vermeer, Koenraad A.; van Vliet, Lucas J.
2017-03-01
Automated detection and quantification of spatio-temporal retinal changes is an important step in objectively assessing disease progression and treatment effects for dynamic retinal diseases such as diabetic retinopathy (DR). However, detecting retinal changes caused by early DR lesions such as microaneurysms and dot hemorrhages from longitudinal pairs of fundus images is challenging due to intra- and inter-image illumination variation. This paper explores a method for automated detection of retinal changes from illumination-normalized fundus images using a deep convolutional neural network (CNN), and compares its performance with two other CNNs trained separately on color and green-channel fundus images. Illumination variation was addressed by correcting for the variability in luminosity and contrast estimated from large-scale retinal regions. The CNN models were trained and evaluated on image patches extracted from a registered fundus image set collected from 51 diabetic eyes screened at two different time points. The results show that using normalized images yields better performance than color or green-channel images, suggesting that illumination normalization greatly facilitates CNNs in quickly and correctly learning distinctive local image features of DR-related retinal changes.
Kanematsu, Nobuyuki
2011-04-01
This work addresses computing techniques for dose calculations in treatment planning with proton and ion beams, based on an efficient kernel-convolution method referred to as grid-dose spreading (GDS) and an accurate heterogeneity-correction method referred to as Gaussian beam splitting. The original GDS algorithm suffered from distortion of the dose distribution for beams tilted with respect to the dose-grid axes; use of intermediate grids normal to the beam field has solved this beam-tilting distortion. The interplay of arrangement between beams and grids was found to be another intrinsic source of artifact. Inclusion of rectangular-kernel convolution in beam transport, to share the beam contribution among the nearest grids in a regulatory manner, has solved the interplay problem. This algorithmic framework was applied to a tilted proton pencil beam and a broad carbon-ion beam: while the elementary pencil beams individually split into several tens, the calculation time increased only by several times with the GDS algorithm. The GDS and beam-splitting methods will complementarily enable accurate and efficient dose calculations for radiotherapy with protons and ions. Copyright © 2010 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Design and Implementation of Viterbi Decoder Using VHDL
NASA Astrophysics Data System (ADS)
Thakur, Akash; Chattopadhyay, Manju K.
2018-03-01
A digital design of a Viterbi decoder for a rate-1/2 convolutional encoder with constraint length k = 3 is presented in this paper. The design is coded in VHDL, and simulated and synthesized using Xilinx ISE 14.7. Synthesis results show a maximum operating frequency of 100.725 MHz for the design. The memory requirement is lower than that of the conventional method.
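The same rate-1/2, K = 3 scheme can be sketched in software before committing it to VHDL. The generator polynomials (7, 5 in octal) below are the common textbook choice and an assumption here — the paper does not list its polynomials.

```python
G = (0b111, 0b101)      # assumed generator polynomials (7, 5 octal), K = 3

def parity(x):
    return bin(x).count("1") & 1

def conv_encode(bits):
    """Rate-1/2 convolutional encoder; two output symbols per input bit."""
    reg, out = 0, []
    for b in bits:
        reg = ((reg << 1) | b) & 0b111
        out.extend(parity(reg & g) for g in G)
    return out

def viterbi_decode(symbols):
    """Hard-decision Viterbi decoding over the 4-state trellis."""
    INF = float("inf")
    metric = [0, INF, INF, INF]          # start in the all-zero state
    paths = [[], [], [], []]
    for i in range(0, len(symbols), 2):
        rx = symbols[i:i + 2]
        new_metric, new_paths = [INF] * 4, [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = ((s << 1) | b) & 0b111
                nxt = reg & 0b011        # next state = last two input bits
                branch = sum(parity(reg & g) != r for g, r in zip(G, rx))
                if metric[s] + branch < new_metric[nxt]:
                    new_metric[nxt] = metric[s] + branch
                    new_paths[nxt] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(4), key=metric.__getitem__)]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
coded = conv_encode(bits + [0, 0])       # two tail bits flush the encoder
coded[5] ^= 1                            # inject a single channel error
decoded = viterbi_decode(coded)[:-2]     # drop the tail after decoding
```

With free distance 5, this code corrects the injected single error; the VHDL design implements the same add-compare-select recursion with registers in place of the Python lists.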
Leynes, Andrew P; Yang, Jaewon; Wiesinger, Florian; Kaushik, Sandeep S; Shanbhag, Dattesh D; Seo, Youngho; Hope, Thomas A; Larson, Peder E Z
2018-05-01
Accurate quantification of uptake on PET images depends on accurate attenuation correction in reconstruction. Current MR-based attenuation correction methods for body PET use a fat and water map derived from a 2-echo Dixon MRI sequence in which bone is neglected. Ultrashort-echo-time or zero-echo-time (ZTE) pulse sequences can capture bone information. We propose the use of patient-specific multiparametric MRI consisting of Dixon MRI and proton-density-weighted ZTE MRI to directly synthesize pseudo-CT images with a deep learning model: we call this method ZTE and Dixon deep pseudo-CT (ZeDD CT). Methods: Twenty-six patients were scanned using an integrated 3-T time-of-flight PET/MRI system. Helical CT images of the patients were acquired separately. A deep convolutional neural network was trained to transform ZTE and Dixon MR images into pseudo-CT images. Ten patients were used for model training, and 16 patients were used for evaluation. Bone and soft-tissue lesions were identified, and the SUV max was measured. The root-mean-squared error (RMSE) was used to compare the MR-based attenuation correction with the ground-truth CT attenuation correction. Results: In total, 30 bone lesions and 60 soft-tissue lesions were evaluated. The RMSE in PET quantification was reduced by a factor of 4 for bone lesions (10.24% for Dixon PET and 2.68% for ZeDD PET) and by a factor of 1.5 for soft-tissue lesions (6.24% for Dixon PET and 4.07% for ZeDD PET). Conclusion: ZeDD CT produces natural-looking and quantitatively accurate pseudo-CT images and reduces error in pelvic PET/MRI attenuation correction compared with standard methods. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.
[Study on phase correction method of spatial heterodyne spectrometer].
Wang, Xin-Qiang; Ye, Song; Zhang, Li-Juan; Xiong, Wei
2013-05-01
Phase distortion exists in collected interferograms for a variety of measurement reasons when spatial heterodyne spectrometers are used in practice, so an improved phase-correction method is presented. The phase curve of the interferogram is obtained through an inverse Fourier transform used to extract the single-sided transform spectrum; the phase distortion is then obtained by fitting the phase slope, yielding the phase-correction function, and the transform spectrum is convolved with the phase-correction function to implement spectral phase correction. The method was applied to phase correction of an actually measured monochromatic spectrum and a simulated water-vapor spectrum. Experimental results show that low-frequency false signals in the monochromatic spectral fringe are effectively eliminated, increasing the periodicity and symmetry of the interferogram; in addition, when the continuous spectrum with imposed phase error was corrected, the standard deviation between it and the original spectrum was reduced from 0.47 to 0.20, improving the accuracy of the spectrum.
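A miniature numerical analogue of this procedure can be sketched as follows (the two-line spectrum and the linear phase distortion are hypothetical, not the authors' data). Removing the fitted phase by multiplication in the spectral domain is equivalent to convolving the interferogram with a correction kernel.

```python
import numpy as np

N = 512
lines = np.array([40, 90])            # two emission-line bins (toy spectrum)
a, b = 0.013, 0.8                     # hypothetical linear phase distortion
n = np.arange(N)

# Interferogram of two monochromatic lines carrying a linear phase error
interferogram = sum(np.cos(2 * np.pi * f * n / N + a * f + b) for f in lines)

S = np.fft.fft(interferogram)
phase = np.angle(S[lines])            # phase of the single-sided spectrum at the lines

# Fit phase(sigma) = a*sigma + b by least squares over the line positions
A = np.vstack([lines, np.ones_like(lines)]).T.astype(float)
a_fit, b_fit = np.linalg.lstsq(A, phase, rcond=None)[0]

# Remove the fitted phase from the single-sided spectrum; in the
# interferogram domain this corresponds to convolution with a kernel
k = np.arange(N)
S_corrected = S * np.exp(-1j * (a_fit * k + b_fit))
```

After correction the line bins of the spectrum are purely real, i.e. the phase distortion has been removed.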
Augmented burst-error correction for UNICON laser memory. [digital memory
NASA Technical Reports Server (NTRS)
Lim, R. S.
1974-01-01
A single-burst-error correction system is described for data stored in the UNICON laser memory. In the proposed system, a long Fire code with code length n greater than 16,768 bits is used as an outer code to augment an existing, shorter inner Fire code for burst-error correction. The inner Fire code is an (80,64) code shortened from the (630,614) code, and it is used to correct a single burst error on a per-word basis with burst length b less than or equal to 6. The outer code, with b less than or equal to 12, would be used to correct a single burst error on a per-page basis, where a page consists of 512 32-bit words. In the proposed system, the encoding and error-detection processes are implemented in hardware. A minicomputer, currently used as the UNICON memory management processor, is used on a time-demanding basis for error correction. Based upon existing error statistics, this combination of an inner code and an outer code would enable the UNICON system to attain a very low error rate in spite of flaws affecting the recorded data.
Quantum steganography and quantum error-correction
NASA Astrophysics Data System (ADS)
Shaw, Bilal A.
Quantum error-correcting codes have been the cornerstone of research in quantum information science (QIS) for more than a decade. Without their conception, quantum computers would be a footnote in the history of science. When researchers embraced the idea that we live in a world where the effects of a noisy environment cannot completely be stripped away from the operations of a quantum computer, the natural way forward was to think about importing classical coding theory into the quantum arena to give birth to quantum error-correcting codes which could help in mitigating the debilitating effects of decoherence on quantum data. We first talk about the six-qubit quantum error-correcting code and show its connections to entanglement-assisted error-correcting coding theory and then to subsystem codes. This code bridges the gap between the five-qubit (perfect) and Steane codes. We discuss two methods to encode one qubit into six physical qubits. Each of the two examples corrects an arbitrary single-qubit error. The first example is a degenerate six-qubit quantum error-correcting code. We explicitly provide the stabilizer generators, encoding circuits, codewords, logical Pauli operators, and logical CNOT operator for this code. We also show how to convert this code into a non-trivial subsystem code that saturates the subsystem Singleton bound. We then prove that a six-qubit code without entanglement assistance cannot simultaneously possess a Calderbank-Shor-Steane (CSS) stabilizer and correct an arbitrary single-qubit error. A corollary of this result is that the Steane seven-qubit code is the smallest single-error correcting CSS code. Our second example is the construction of a non-degenerate six-qubit CSS entanglement-assisted code. This code uses one bit of entanglement (an ebit) shared between the sender (Alice) and the receiver (Bob) and corrects an arbitrary single-qubit error. 
The code we obtain is globally equivalent to the Steane seven-qubit code and thus corrects an arbitrary error on the receiver's half of the ebit as well. We prove that this code is the smallest code with a CSS structure that uses only one ebit and corrects an arbitrary single-qubit error on the sender's side. We discuss the advantages and disadvantages for each of the two codes. In the second half of this thesis we explore the yet uncharted and relatively undiscovered area of quantum steganography. Steganography is the process of hiding secret information by embedding it in an "innocent" message. We present protocols for hiding quantum information in a codeword of a quantum error-correcting code passing through a channel. Using either a shared classical secret key or shared entanglement, Alice disguises her information as errors in the channel. Bob can retrieve the hidden information, but an eavesdropper (Eve) with the power to monitor the channel, but without the secret key, cannot distinguish the message from channel noise. We analyze how difficult it is for Eve to detect the presence of secret messages, and estimate rates of steganographic communication and secret key consumption for certain protocols. We also provide an example of how Alice hides quantum information in the perfect code when the underlying channel between Bob and her is the depolarizing channel. Using this scheme Alice can hide up to four stego-qubits.
A finite element conjugate gradient FFT method for scattering
NASA Technical Reports Server (NTRS)
Collins, Jeffery D.; Ross, Dan; Jin, J.-M.; Chatterjee, A.; Volakis, John L.
1991-01-01
Validated results are presented for the new 3D body-of-revolution finite element boundary integral code. A Fourier series expansion of the vector electric and magnetic fields is employed to reduce the dimensionality of the system, and the exact boundary condition is employed to terminate the finite element mesh. The mesh termination boundary is chosen such that it leads to convolutional boundary operators of low O(n) memory demand. Improvements of this code are discussed along with the proposed formulation for a full 3D implementation of the finite element boundary integral method in conjunction with a conjugate gradient fast Fourier transform (CGFFT) solution.
Lingafelter, Steven W; Nearns, Eugenio H
2013-01-01
We present an overview of the difficulties sometimes encountered when determining whether a published name following a binomen is available or is infrasubspecific and unavailable, following Article 45.6 of the International Code of Zoological Nomenclature (ICZN, 1999). We propose a dichotomous key that facilitates this determination and offers a preferable alternative to the convoluted and subordinate discussion, exceptions, and qualifications laid out in ICZN (1999: 49-50). Examples and citations are provided for each case one can encounter while assessing the availability status of names following the binomen.
NASA Technical Reports Server (NTRS)
Dolinar, S.; Belongie, M.
1995-01-01
The Galileo low-gain antenna mission will be supported by a coding system that uses a (14,1/4) inner convolutional code concatenated with Reed-Solomon codes of four different redundancies. Decoding for this code is designed to proceed in four distinct stages of Viterbi decoding followed by Reed-Solomon decoding. In each successive stage, the Reed-Solomon decoder only tries to decode the highest redundancy codewords not yet decoded in previous stages, and the Viterbi decoder redecodes its data utilizing the known symbols from previously decoded Reed-Solomon codewords. A previous article analyzed a two-stage decoding option that was not selected by Galileo. The present article analyzes the four-stage decoding scheme and derives the near-optimum set of redundancies selected for use by Galileo. The performance improvements relative to one- and two-stage decoding systems are evaluated.
Residual Highway Convolutional Neural Networks for in-loop Filtering in HEVC.
Zhang, Yongbing; Shen, Tao; Ji, Xiangyang; Zhang, Yun; Xiong, Ruiqin; Dai, Qionghai
2018-08-01
The high efficiency video coding (HEVC) standard achieves roughly half the bit rate of AVC at the same quality. However, it still cannot satisfy the demand for higher quality in real applications, especially at low bit rates. To further improve the quality of reconstructed frames while reducing the bit rate, a residual highway convolutional neural network (RHCNN) is proposed in this paper for in-loop filtering in HEVC. The RHCNN is composed of several residual highway units and convolutional layers. In the highway units, some paths allow information to pass unimpeded across several layers. Moreover, there is one identity skip connection (shortcut) from the beginning to the end, followed by one small convolutional layer. Without conflicting with the deblocking filter (DF) and sample adaptive offset (SAO) filter in HEVC, the RHCNN is employed as a high-dimension filter following DF and SAO to enhance the quality of reconstructed frames. To facilitate real application, we apply the proposed method to I, P, and B frames, respectively. For better performance, the entire quantization parameter (QP) range is divided into several QP bands, with a dedicated RHCNN trained for each band. Furthermore, we adopt a progressive training scheme in which QP bands with lower values are trained first and their weights serve as initial weights for QP bands with higher values. Experimental results demonstrate that the proposed method not only raises the PSNR of reconstructed frames but also markedly reduces the bit rate compared with the HEVC reference software.
Min, Xu; Zeng, Wanwen; Chen, Ning; Chen, Ting; Jiang, Rui
2017-07-15
Experimental techniques for measuring chromatin accessibility are expensive and time consuming, appealing for the development of computational approaches to predict open chromatin regions from DNA sequences. Along this direction, existing methods fall into two classes: one based on handcrafted k-mer features and the other based on convolutional neural networks. Although both categories have shown good performance in specific applications thus far, there is still no comprehensive framework integrating useful k-mer co-occurrence information with recent advances in deep learning. We fill this gap by addressing the problem of chromatin accessibility prediction with a convolutional Long Short-Term Memory (LSTM) network with k-mer embedding. We first split DNA sequences into k-mers and pre-train k-mer embedding vectors based on the co-occurrence matrix of k-mers by using an unsupervised representation learning approach. We then construct a supervised deep learning architecture comprising an embedding layer, three convolutional layers and a Bidirectional LSTM (BLSTM) layer for feature learning and classification. We demonstrate that our method gains high-quality fixed-length features from variable-length sequences and consistently outperforms baseline methods. We show that k-mer embedding can effectively enhance model performance by exploring different embedding strategies. We also prove the efficacy of both the convolution and the BLSTM layers by comparing two variations of the network architecture. We confirm the robustness of our model to hyper-parameters by performing sensitivity analysis. We hope our method can eventually reinforce our understanding of employing deep learning in genomic studies and shed light on research regarding mechanisms of chromatin accessibility. The source code can be downloaded from https://github.com/minxueric/ismb2017_lstm. Contact: tingchen@tsinghua.edu.cn or ruijiang@tsinghua.edu.cn.
Supplementary materials are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
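The k-mer tokenization and co-occurrence counting that precede embedding pre-training can be sketched as follows (the values of k and the window size are illustrative; the paper's actual hyper-parameters may differ):

```python
from collections import Counter

def kmers(seq, k=3, stride=1):
    """Split a DNA sequence into overlapping k-mer tokens."""
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, stride)]

def cooccurrence(tokens, window=2):
    """Count co-occurring token pairs within a sliding context window."""
    counts = Counter()
    for i, t in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                counts[(t, tokens[j])] += 1
    return counts

tokens = kmers("ACGTACGT")
pairs = cooccurrence(tokens)
```

An unsupervised embedding method would then be fit to this co-occurrence matrix, after which the embedded k-mer sequences feed the convolutional and BLSTM layers.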
Yang, Xiaogang; De Carlo, Francesco; Phatak, Charudatta; Gürsoy, Doğa
2017-03-01
This paper presents an algorithm to calibrate the center of rotation for X-ray tomography by using a machine learning approach, the convolutional neural network (CNN). The algorithm shows excellent accuracy in the evaluation of synthetic data with various noise ratios. It is further validated with experimental data from four different shale samples measured at the Advanced Photon Source and at the Swiss Light Source. The results are as good as those determined by visual inspection and show better robustness than conventional methods. CNN also has great potential for reducing or removing other artifacts caused by instrument instability, detector non-linearity, etc. An open-source toolbox, which integrates the CNN methods described in this paper, is freely available through GitHub at tomography/xlearn and can be easily integrated into existing computational pipelines available at various synchrotron facilities. Source code, documentation and information on how to contribute are also provided.
Matrix-vector multiplication using digital partitioning for more accurate optical computing
NASA Technical Reports Server (NTRS)
Gary, C. K.
1992-01-01
Digital partitioning offers a flexible means of increasing the accuracy of an optical matrix-vector processor. This algorithm can be implemented with the same architecture required for a purely analog processor, which gives optical matrix-vector processors the ability to perform high-accuracy calculations at speeds comparable with or greater than electronic computers, as well as the ability to perform analog operations at a much greater speed. Digital partitioning is compared with digital multiplication by analog convolution, residue number systems, and redundant number representation in terms of the size and speed required for an equivalent throughput as well as in terms of hardware requirements. Digital partitioning and digital multiplication by analog convolution are found to be the most efficient algorithms when coding time and hardware are considered, and the architecture for digital partitioning permits the use of analog computations to provide the greatest throughput for a single processor.
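The digital-partitioning idea — split each vector entry into low-precision digits, run one low-precision (analog-accuracy) pass per digit plane, and recombine the partial products with digit weights — can be sketched numerically. The base-4 digits and operand sizes below are illustrative assumptions, not parameters from the paper.

```python
BASE, NDIGITS = 4, 4                      # base-4 digits cover 8-bit vector entries

def digit(x, d):
    """d-th base-4 digit of x (the low-precision value one pass would see)."""
    return (x >> (2 * d)) & (BASE - 1)

def matvec_partitioned(M, v):
    """Matrix-vector product recombined from one low-precision pass per digit."""
    out = [0] * len(M)
    for d in range(NDIGITS):
        plane = [digit(x, d) for x in v]  # low-precision "analog" input plane
        partial = [sum(m * p for m, p in zip(row, plane)) for row in M]
        out = [o + (BASE ** d) * s for o, s in zip(out, partial)]  # weighted recombine
    return out

M = [[3, 1], [2, 5]]
v = [200, 77]
result = matvec_partitioned(M, v)         # matches the full-precision product
```

Because the matrix-vector product is linear in v, the digit-plane partials recombine exactly; in the optical processor each plane is one analog pass, so accuracy is set by the digit width rather than by the analog dynamic range.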
Multiscale deep features learning for land-use scene recognition
NASA Astrophysics Data System (ADS)
Yuan, Baohua; Li, Shijin; Li, Ning
2018-01-01
The features extracted from deep convolutional neural networks (CNNs) have shown their promise as generic descriptors for land-use scene recognition. However, most of the work directly adopts the deep features for the classification of remote sensing images, and does not encode the deep features for improving their discriminative power, which can affect the performance of deep feature representations. To address this issue, we propose an effective framework, LASC-CNN, obtained by locality-constrained affine subspace coding (LASC) pooling of a CNN filter bank. LASC-CNN obtains more discriminative deep features than directly extracted from CNNs. Furthermore, LASC-CNN builds on the top convolutional layers of CNNs, which can incorporate multiscale information and regions of arbitrary resolution and sizes. Our experiments have been conducted using two widely used remote sensing image databases, and the results show that the proposed method significantly improves the performance when compared to other state-of-the-art methods.
Learning Hierarchical Feature Extractors for Image Recognition
2012-09-01
space as a natural criterion for devising better pools. Finally, we propose ways to make coding faster and more powerful through fast convolutional...parameter is the set of pools over which the summary statistic is computed. We propose locality in feature configuration space as a natural criterion for...pooling (dotted lines) is consistently higher than average pooling (solid lines), but the gap is much less significant with intersection kernel (closed
An introduction to deep learning on biological sequence data: examples and solutions.
Jurtz, Vanessa Isabell; Johansen, Alexander Rosenberg; Nielsen, Morten; Almagro Armenteros, Jose Juan; Nielsen, Henrik; Sønderby, Casper Kaae; Winther, Ole; Sønderby, Søren Kaae
2017-11-15
Deep neural network architectures such as convolutional and long short-term memory networks have become increasingly popular machine learning tools in recent years. The availability of greater computational resources, more data, new algorithms for training deep models, and easy-to-use libraries for implementing and training neural networks are the drivers of this development. Deep learning has been especially successful in image recognition, and the development of tools, applications and code examples is in most cases centered on that field rather than on biology. Here, we aim to further the development of deep learning methods within biology by providing application examples and ready-to-apply, adaptable code templates. Given such examples, we illustrate how architectures consisting of convolutional and long short-term memory neural networks can relatively easily be designed and trained to state-of-the-art performance on three biological sequence problems: prediction of subcellular localization, protein secondary structure, and the binding of peptides to MHC class II molecules. All implementations and datasets are available online to the scientific community at https://github.com/vanessajurtz/lasagne4bio. Contact: skaaesonderby@gmail.com. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
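The convolutional front end of such sequence architectures reduces, for a single filter, to a sliding dot product over a one-hot-encoded sequence. A minimal sketch (the alphabet and filter values are illustrative, not taken from the paper's models):

```python
ALPHABET = "ACGT"

def one_hot(seq):
    """Encode a sequence as an L x |alphabet| one-hot matrix."""
    return [[1.0 if c == a else 0.0 for a in ALPHABET] for c in seq]

def conv1d_single_filter(x, filt):
    """Valid 1-D convolution of one filter (w x D) over the encoded sequence."""
    w, d = len(filt), len(filt[0])
    return [sum(x[i + a][j] * filt[a][j] for a in range(w) for j in range(d))
            for i in range(len(x) - w + 1)]

# A width-2 filter that "detects" the dinucleotide AC
filt = [[1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0]]
features = conv1d_single_filter(one_hot("GACACT"), filt)
```

In the full architectures many such filters are learned rather than hand-set, and their feature maps feed pooling and LSTM layers; the repository linked above builds these layers with a deep learning library instead of explicit loops.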
Dynamic frame resizing with convolutional neural network for efficient video compression
NASA Astrophysics Data System (ADS)
Kim, Jaehwan; Park, Youngo; Choi, Kwang Pyo; Lee, JongSeok; Jeon, Sunyoung; Park, JeongHoon
2017-09-01
In the past, video codecs such as VC-1 and H.263 used techniques that encode reduced-resolution video and restore the original resolution at the decoder to improve coding efficiency. These techniques, in VC-1 and H.263 Annex Q, are called dynamic frame resizing and reduced-resolution update mode, respectively. However, they have not been widely used because they yield limited performance improvements and operate well only under specific conditions. In this paper, a video frame resizing (reduce/restore) technique based on machine learning is proposed to improve coding efficiency. The proposed method produces low-resolution video with a convolutional neural network (CNN) at the encoder and reconstructs the original resolution with a CNN at the decoder. The proposed method shows improved subjective performance on the high-resolution videos that dominate current consumption. To assess the subjective quality of the proposed method, Video Multi-method Assessment Fusion (VMAF), which has shown high reliability among subjective measurement tools, was used as the subjective metric. Moreover, to assess general performance, diverse bitrates were tested. Experimental results showed that the VMAF-based BD-rate was improved by about 51% compared with conventional HEVC. VMAF values were especially improved at low bitrates. The method also showed better subjective visual quality at similar bitrates in subjective tests.
NASA Astrophysics Data System (ADS)
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is important to quantum information processing because it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor that increases with the code distance.
Error Control Techniques for Satellite and Space Communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1996-01-01
In this report, we present the results of our recent work on turbo coding in two formats. Appendix A includes the overheads of a talk that has been given at four different locations over the last eight months. This presentation has received much favorable comment from the research community and has resulted in the full-length paper included as Appendix B, 'A Distance Spectrum Interpretation of Turbo Codes'. Turbo codes use a parallel concatenation of rate 1/2 convolutional encoders combined with iterative maximum a posteriori probability (MAP) decoding to achieve a bit error rate (BER) of 10^-5 at a signal-to-noise ratio (SNR) of only 0.7 dB. The channel capacity for a rate 1/2 code with binary phase-shift-keyed modulation on the AWGN (additive white Gaussian noise) channel is 0 dB, and thus the turbo coding scheme comes within 0.7 dB of capacity at a BER of 10^-5.
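The rate-1/2 constituent encoders of a turbo code can be sketched in a few lines. Below is a minimal, illustrative recursive systematic convolutional (RSC) encoder with the classic (1, 5/7) octal generator pair often used in turbo-code examples; the specific generators are an assumption for illustration, not taken from this report.

```python
def rsc_encode(bits):
    """Rate-1/2 recursive systematic convolutional (RSC) encoder with
    octal generators (1, 5/7), a classic turbo-code constituent.
    Returns a list of (systematic, parity) bit pairs."""
    r1 = r2 = 0  # two-stage shift register (constraint length 3)
    out = []
    for u in bits:
        a = u ^ r1 ^ r2   # feedback path (polynomial 7 = 111)
        p = a ^ r2        # feedforward path (polynomial 5 = 101)
        out.append((u, p))  # systematic bit plus parity bit
        r2, r1 = r1, a    # shift the register
    return out
```

A full turbo encoder would run two such encoders in parallel, one on an interleaved copy of the input, and transmit the systematic stream once.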
Error-correcting codes on scale-free networks
NASA Astrophysics Data System (ADS)
Kim, Jung-Hoon; Ko, Young-Jo
2004-06-01
We investigate the potential of scale-free networks as error-correcting codes. We find that irregular low-density parity-check codes with the highest performance known to date have degree distributions well fitted by a power-law function p(k) ∼ k^(-γ) with γ close to 2, which suggests that codes built on scale-free networks with appropriate power exponents can be good error-correcting codes, with a performance possibly approaching the Shannon limit. We demonstrate for an erasure channel that codes with a power-law degree distribution of the form p(k) = C(k+α)^(-γ), with k ⩾ 2 and suitable selection of the parameters α and γ, indeed have very good error-correction capabilities.
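A degree sequence of the form p(k) = C(k+α)^(-γ), k ≥ 2, can be sampled directly, e.g. to build the variable-node degrees of an irregular Tanner graph. The sketch below is illustrative only; the parameter values and the truncation at kmax are assumptions, not fitted values from the paper.

```python
import random

def power_law_degrees(n, alpha=0.5, gamma=2.0, kmin=2, kmax=100):
    """Sample n node degrees from p(k) = C*(k+alpha)^(-gamma),
    truncated to kmin <= k <= kmax. The normalization constant C
    is handled implicitly by random.choices' weights."""
    ks = list(range(kmin, kmax + 1))
    weights = [(k + alpha) ** (-gamma) for k in ks]
    return random.choices(ks, weights=weights, k=n)
```

With γ near 2 the sampled sequence is heavily dominated by low-degree nodes, the regime the abstract associates with good codes.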
Liu, George S; Zhu, Michael H; Kim, Jinkyung; Raphael, Patrick; Applegate, Brian E; Oghalai, John S
2017-10-01
Detection of endolymphatic hydrops is important for diagnosing Meniere's disease, and can be performed non-invasively using optical coherence tomography (OCT) in animal models as well as potentially in the clinic. Here, we developed ELHnet, a convolutional neural network to classify endolymphatic hydrops in a mouse model using learned features from OCT images of mouse cochleae. We trained ELHnet on 2159 training and validation images from 17 mice, using only the image pixels and observer-determined labels of endolymphatic hydrops as the inputs. We tested ELHnet on 37 images from 37 mice that were not previously used, and found that the neural network correctly classified 34 of the 37 mice. This demonstrates an improvement in performance from previous work on computer-aided classification of endolymphatic hydrops. To the best of our knowledge, this is the first deep CNN designed for endolymphatic hydrops classification.
ERIC Educational Resources Information Center
Sampson, Andrew
2012-01-01
This paper reports on a small-scale study into the effects of uncoded correction (writing the correct forms above each error) and coded annotations (writing symbols that encourage learners to self-correct) on Colombian university-level EFL learners' written work. The study finds that while both coded annotations and uncoded correction appear to…
Correcting quantum errors with entanglement.
Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu
2006-10-20
We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.
NASA Astrophysics Data System (ADS)
Allman, Derek; Reiter, Austin; Bell, Muyinatu
2018-02-01
We previously proposed a method of removing reflection artifacts in photoacoustic images that uses deep learning. Our approach generally relies on using simulated photoacoustic channel data to train a convolutional neural network (CNN) that is capable of distinguishing sources from artifacts based on unique differences in their spatial impulse responses (manifested as depth-based differences in wavefront shapes). In this paper, we directly compare a CNN trained with our previous continuous transducer model to a CNN trained with an updated discrete acoustic receiver model that more closely matches an experimental ultrasound transducer. These two CNNs were trained with simulated data and tested on experimental data. The CNN trained using the continuous receiver model correctly classified 100% of sources and 70.3% of artifacts in the experimental data. In contrast, the CNN trained using the discrete receiver model correctly classified 100% of sources and 89.7% of artifacts in the experimental images. The 19.4% increase in artifact classification accuracy indicates that an acoustic receiver model that closely mimics the experimental transducer plays an important role in improving the classification of artifacts in experimental photoacoustic data. These results are promising for developing a method that displays CNN-based images with artifacts removed, rather than only displaying network-identified sources as previously proposed.
Design and Development of Basic Physical Layer WiMAX Network Simulation Models
2009-01-01
Wide Web. The third software version was developed during the period of 22 August to 4 November, 2008. The software version developed during the...researched on the Web. The mathematics of some fundamental concepts such as Fourier transforms and convolutional coding techniques were also reviewed...Mathworks Matlab users' website. A simulation model was found, entitled Estudio y Simulación de la capa física de la norma 802.16 (Sistema WiMAX), developed
Semantic segmentation of mFISH images using convolutional networks.
Pardo, Esteban; Morgado, José Mário T; Malpica, Norberto
2018-04-30
Multicolor in situ hybridization (mFISH) is a karyotyping technique used to detect major chromosomal alterations using fluorescent probes and imaging techniques. Manual interpretation of mFISH images is a time-consuming step that can be automated using machine learning; in previous works, pixel- or patch-wise classification was employed, overlooking spatial information which can help identify chromosomes. In this work, we propose a fully convolutional semantic segmentation network for the interpretation of mFISH images, which uses both spatial and spectral information to classify each pixel in an end-to-end fashion. The semantic segmentation network developed was tested on samples extracted from a public dataset using cross validation. Despite having no labeling information from the images it was tested on, our algorithm yielded an average correct classification ratio (CCR) of 87.41%. Previously, this level of accuracy was only achieved with state-of-the-art algorithms when classifying pixels from the same image on which the classifier had been trained. These results provide evidence that fully convolutional semantic segmentation networks may be employed in the computer-aided diagnosis of genetic diseases with improved performance over current image analysis methods. © 2018 International Society for Advancement of Cytometry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maneru, F; Gracia, M; Gallardo, N
2015-06-15
Purpose: To present a simple and feasible method of voxel-S-value (VSV) dosimetry calculation for daily clinical use in radioembolization (RE) with ⁹⁰Y microspheres. Dose distributions are obtained and visualized over CT images. Methods: Spatial dose distributions and doses in liver and tumor are calculated for RE patients treated with Sirtex Medical microspheres at our center. Data obtained from the previous simulation of the treatment were the basis for calculations: a Tc-99m macroaggregated albumin SPECT-CT study in a gamma camera (Infinia, General Electric Healthcare). Attenuation correction and the ordered-subsets expectation maximization (OSEM) algorithm were applied. For VSV calculations, both SPECT and CT were exported from the gamma camera workstation and registered with the radiotherapy treatment planning system (Eclipse, Varian Medical Systems). Convolution of the activity matrix with a local dose deposition kernel (S values) was implemented with in-house software based on Python code. The kernel was downloaded from www.medphys.it. The final dose distribution was evaluated with the free software Dicompyler. Results: Liver mean dose is consistent with Partition method calculations (accepted as a good standard). Tumor dose has not been evaluated due to the high dependence on its contouring: small lesion size, hot spots in healthy tissue and blurred limits can strongly affect the dose distribution in tumors. Extra work includes export and import of images and other DICOM files, creating and calculating a dummy plan of external radiotherapy, the convolution calculation, and evaluation of the dose distribution with Dicompyler. Total time spent is less than 2 hours. Conclusion: VSV calculations do not require any extra appointment or any uncomfortable process for the patient. The total process is short enough to carry out the same day as the simulation and to contribute to prescription decisions prior to treatment.
Three-dimensional dose knowledge provides much more information than other methods of dose calculation usually applied in the clinic.
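The core VSV step, convolving the activity matrix with a local dose-deposition kernel of S values, can be sketched in plain Python. The array shapes and values below are illustrative toy data, not clinical data, and this is not the authors' actual implementation.

```python
def vsv_dose(activity, kernel):
    """Voxel-S-value dose estimate: discrete 3-D convolution of an
    activity map with a dose-deposition kernel (S values), with
    contributions falling outside the volume discarded."""
    nx, ny, nz = len(activity), len(activity[0]), len(activity[0][0])
    kx, ky, kz = len(kernel), len(kernel[0]), len(kernel[0][0])
    cx, cy, cz = kx // 2, ky // 2, kz // 2  # kernel centre offsets
    dose = [[[0.0] * nz for _ in range(ny)] for _ in range(nx)]
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                a = activity[i][j][k]
                if a == 0.0:
                    continue  # skip empty voxels, the common case
                for di in range(kx):
                    for dj in range(ky):
                        for dk in range(kz):
                            x, y, z = i + di - cx, j + dj - cy, k + dk - cz
                            if 0 <= x < nx and 0 <= y < ny and 0 <= z < nz:
                                dose[x][y][z] += a * kernel[di][dj][dk]
    return dose
```

In practice the same operation would be done with an FFT-based convolution over the full SPECT grid; the triple loop here only makes the kernel-superposition idea explicit.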
Multi-Stage Target Tracking with Drift Correction and Position Prediction
NASA Astrophysics Data System (ADS)
Chen, Xin; Ren, Keyan; Hou, Yibin
2018-04-01
Most existing tracking methods struggle to combine accuracy and performance, and do not consider the shift between clarity and blur that often occurs. In this paper, we propose a multi-stage tracking framework with two particular modules: position prediction and corrective measure. We conduct tracking based on a correlation filter with a corrective measure module to increase both performance and accuracy. Specifically, a convolutional network is used to solve the blur problem in realistic scenes, trained on a dataset augmented with blurred images generated by three blur algorithms. Then, we propose a position prediction module to reduce the computation cost and make the tracker better able to handle fast motion. Experimental results show that our tracking method is more robust than others and more accurate on the benchmark sequences.
Cracking the code: the accuracy of coding shoulder procedures and the repercussions.
Clement, N D; Murray, I R; Nie, Y X; McBirnie, J M
2013-05-01
Coding of patients' diagnoses and surgical procedures is subject to error levels of up to 40%, with consequences for distribution of resources and financial recompense. Our aim was to explore and address the reasons behind coding errors of shoulder diagnoses and surgical procedures and to evaluate a potential solution. A retrospective review of 100 patients who had undergone surgery was carried out. Coding errors were identified and the reasons explored. A coding proforma was designed to address these errors and was prospectively evaluated for 100 patients. The financial implications were also considered. Retrospective analysis revealed that only 54 patients (54%) had an entirely correct primary diagnosis assigned, and only 7 patients (7%) had a correct procedure code assigned. Coders identified indistinct clinical notes and poor clarity of procedure codes as reasons for errors. The proforma was significantly more likely to assign the correct diagnosis (odds ratio 18.2, p < 0.0001) and the correct procedure code (odds ratio 310.0, p < 0.0001). Using the proforma resulted in a £28,562 increase in revenue for the 100 patients evaluated relative to the income generated from the coding department. High error levels for coding are due to misinterpretation of notes and ambiguity of procedure codes. This can be addressed by allowing surgeons to assign the diagnosis and procedure using a simplified list that is passed directly to coding.
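The odds ratios quoted above come from 2×2 contingency tables comparing proforma and standard coding. A minimal sketch of the calculation; the counts in the test are hypothetical illustrations, not the study's raw data.

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
    a = correct with proforma,  b = incorrect with proforma,
    c = correct without,        d = incorrect without.
    Equals (a*d)/(b*c)."""
    return (a / b) / (c / d)
```

A ratio well above 1, as reported here, means the proforma group had far higher odds of correct coding.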
New Class of Quantum Error-Correcting Codes for a Bosonic Mode
NASA Astrophysics Data System (ADS)
Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.
2016-07-01
We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
Maxwell: A semi-analytic 4D code for earthquake cycle modeling of transform fault systems
NASA Astrophysics Data System (ADS)
Sandwell, David; Smith-Konter, Bridget
2018-05-01
We have developed a semi-analytic approach (and computational code) for rapidly calculating 3D time-dependent deformation and stress caused by screw dislocations embedded within an elastic layer overlying a Maxwell viscoelastic half-space. The Maxwell model is developed in the Fourier domain to exploit the computational advantages of the convolution theorem, hence substantially reducing the computational burden associated with an arbitrarily complex distribution of force couples necessary for fault modeling. The new aspect of this development is the ability to model lateral variations in shear modulus. Ten benchmark examples are provided for testing and verification of the algorithms and code. One final example simulates interseismic deformation along the San Andreas Fault System where lateral variations in shear modulus are included to simulate lateral variations in lithospheric structure.
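The computational advantage exploited above is the convolution theorem: a convolution in space becomes a pointwise product in the Fourier domain. A small self-contained demonstration of that identity with a naive DFT on short sequences; this illustrates only the identity itself, not the Maxwell code.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(n^2), fine for a demo)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n) for t in range(n))
            for f in range(n)]

def idft(X):
    """Inverse DFT with 1/n normalization."""
    n = len(X)
    return [sum(X[f] * cmath.exp(2j * cmath.pi * f * t / n) for f in range(n)) / n
            for t in range(n)]

def circ_conv_direct(a, b):
    """Circular convolution computed directly in the spatial domain."""
    n = len(a)
    return [sum(a[m] * b[(t - m) % n] for m in range(n)) for t in range(n)]

def circ_conv_fourier(a, b):
    """Convolution theorem: transform, multiply pointwise, invert."""
    A, B = dft(a), dft(b)
    return [c.real for c in idft([x * y for x, y in zip(A, B)])]
```

Production codes replace the naive DFT with an FFT, turning the O(n^2) convolution over a large grid of force couples into O(n log n) work.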
Upper bounds on sequential decoding performance parameters
NASA Technical Reports Server (NTRS)
Jelinek, F.
1974-01-01
This paper presents the best obtainable random coding and expurgated upper bounds on the probabilities of undetectable error, of t-order failure (advance to depth t into an incorrect subset), and of likelihood rise in the incorrect subset, applicable to sequential decoding when the metric bias G is arbitrary. Upper bounds on the Pareto exponent are also presented. The G-values optimizing each of the parameters of interest are determined, and are shown to lie in intervals that in general have nonzero widths. The G-optimal expurgated bound on undetectable error is shown to agree with that for maximum likelihood decoding of convolutional codes, and that on failure agrees with the block code expurgated bound. Included are curves evaluating the bounds for interesting choices of G and SNR for a binary-input quantized-output Gaussian additive noise channel.
Rotation invariant deep binary hashing for fast image retrieval
NASA Astrophysics Data System (ADS)
Dai, Lai; Liu, Jianming; Jiang, Aiwen
2017-07-01
In this paper, we study how to compactly represent an image's characteristics for fast image retrieval. We propose supervised rotation-invariant compact discriminative binary descriptors obtained by combining a convolutional neural network with hashing. In the proposed network, binary codes are learned by employing a hidden layer that represents latent concepts dominating the class labels. A loss function is proposed to minimize the difference between the binary descriptors describing a reference image and its rotated version. Compared with some other supervised methods, the proposed network does not require pair-wise inputs for binary code learning. Experimental results show that our method is effective and achieves state-of-the-art results on the CIFAR-10 and MNIST datasets.
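Retrieval with such binary descriptors reduces to ranking by Hamming distance, which is very cheap on compact codes. A minimal sketch; the toy database, labels and code width are assumptions for illustration.

```python
def hamming(a, b):
    """Hamming distance between two equal-width binary codes stored as ints."""
    return bin(a ^ b).count("1")

def retrieve(query, database, top=3):
    """Rank (label, code) database entries by Hamming distance to the
    query code and return the top matches."""
    return sorted(database, key=lambda item: hamming(query, item[1]))[:top]
```

Because the distance is a single XOR plus a popcount, linear scans over millions of codes remain fast, which is the point of hashing-based retrieval.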
Analysis of quantum error correction with symmetric hypergraph states
NASA Astrophysics Data System (ADS)
Wagner, T.; Kampermann, H.; Bruß, D.
2018-03-01
Graph states have been used to construct quantum error correction codes for independent errors. Hypergraph states generalize graph states, and symmetric hypergraph states have been shown to allow for the correction of correlated errors. In this paper, it is shown that symmetric hypergraph states are not useful for the correction of independent errors, at least for up to 30 qubits. Furthermore, error correction for error models with protected qubits is explored. A class of known graph codes for this scenario is generalized to hypergraph codes.
NASA Technical Reports Server (NTRS)
Hinds, Erold W. (Principal Investigator)
1996-01-01
This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
Quantum error correcting codes and 4-dimensional arithmetic hyperbolic manifolds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guth, Larry, E-mail: lguth@math.mit.edu; Lubotzky, Alexander, E-mail: alex.lubotzky@mail.huji.ac.il
2014-08-15
Using 4-dimensional arithmetic hyperbolic manifolds, we construct some new homological quantum error correcting codes. They are low density parity check codes with linear rate and distance n^ε. Their rate is evaluated via Euler characteristic arguments and their distance using Z_2-systolic geometry. This construction answers a question of Zémor [“On Cayley graphs, surface codes, and the limits of homological coding for quantum error correction,” in Proceedings of Second International Workshop on Coding and Cryptology (IWCC), Lecture Notes in Computer Science Vol. 5557 (2009), pp. 259–273], who asked whether homological codes with such parameters could exist at all.
On codes with multi-level error-correction capabilities
NASA Technical Reports Server (NTRS)
Lin, Shu
1987-01-01
In conventional coding for error control, all the information symbols of a message are regarded as equally significant, and hence codes are devised to provide equal protection for each information symbol against channel errors. However, on some occasions, some information symbols in a message are more significant than others. As a result, it is desirable to devise codes with multi-level error-correcting capabilities. Another situation where codes with multi-level error-correcting capabilities are desired is in broadcast communication systems. An m-user broadcast channel has one input and m outputs. The single input and each output form a component channel. The component channels may have different noise levels, and hence the messages transmitted over the component channels require different levels of protection against errors. Block codes with multi-level error-correcting capabilities are also known as unequal error protection (UEP) codes. Structural properties of these codes are derived. Based on these structural properties, two classes of UEP codes are constructed.
Improved scatter correction using adaptive scatter kernel superposition
NASA Astrophysics Data System (ADS)
Sun, M.; Star-Lack, J. M.
2010-11-01
Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
Is a Genome a Codeword of an Error-Correcting Code?
Kleinschmidt, João H.; Silva-Filho, Márcio C.; Bim, Edson; Herai, Roberto H.; Yamagishi, Michel E. B.; Palazzo, Reginaldo
2012-01-01
Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction. PMID:22649495
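The codeword test used in such analyses is a syndrome check: a binary word w is a codeword iff H·wᵀ = 0 (mod 2) for the code's parity-check matrix H. A minimal sketch for the binary Hamming (7,4) code; the mapping from DNA letters to bits is not fixed by this abstract, so only the bit-level check is shown.

```python
# Parity-check matrix H of the binary Hamming (7,4) code
# (column i is the binary expansion of i+1).
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(word):
    """Return H . word^T (mod 2); all-zero syndrome <=> word is a codeword."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def is_codeword(word):
    """A 7-bit word belongs to the Hamming (7,4) code iff its syndrome vanishes."""
    return all(s == 0 for s in syndrome(word))
```

For a single-bit error, the syndrome equals the column of H at the error position, which is what makes the code single-error-correcting.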
Quantum error-correcting codes from algebraic geometry codes of Castle type
NASA Astrophysics Data System (ADS)
Munuera, Carlos; Tenório, Wanderson; Torres, Fernando
2016-10-01
We study algebraic geometry codes producing quantum error-correcting codes by the CSS construction. We pay particular attention to the family of Castle codes. We show that many of the examples known in the literature in fact belong to this family of codes. We systematize these constructions by showing the common theory that underlies all of them.
A Non Local Electron Heat Transport Model for Multi-Dimensional Fluid Codes
NASA Astrophysics Data System (ADS)
Schurtz, Guy
2000-10-01
Apparent inhibition of thermal heat flow is one of the most ancient problems in computational Inertial Fusion and flux-limited Spitzer-Harm conduction has been a mainstay in multi-dimensional hydrodynamic codes for more than 25 years. Theoretical investigation of the problem indicates that heat transport in laser produced plasmas has to be considered as a non local process. Various authors contributed to the non local theory and proposed convolution formulas designed for practical implementation in one-dimensional fluid codes. Though the theory, confirmed by kinetic calculations, actually predicts a reduced heat flux, it fails to explain the very small limiters required in two-dimensional simulations. Fokker-Planck simulations by Epperlein, Rickard and Bell [PRL 61, 2453 (1988)] demonstrated that non local effects could lead to a strong reduction of heat flow in two dimensions, even in situations where a one-dimensional analysis suggests that the heat flow is nearly classical. We developed at CEA/DAM a non local electron heat transport model suitable for implementation in our two-dimensional radiation hydrodynamic code FCI2. This model may be envisioned as the first step of an iterative solution of the Fokker-Planck equations; it takes the mathematical form of multigroup diffusion equations, the solution of which yields both the heat flux and the departure of the electron distribution function from the Maxwellian. Although direct implementation of the model is straightforward, formal solutions of it can be expressed in convolution form, exhibiting a three-dimensional tensor propagator. Reduction to one dimension retrieves the original formula of Luciani, Mora and Virmont [PRL 51, 1664 (1983)]. Intense magnetic fields may be generated by thermal effects in laser targets; these fields, as well as non local effects, will inhibit electron conduction. We present simulations where both effects are taken into account and briefly discuss the coupling strategy between them.
Backscatter correction factor for megavoltage photon beam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Yida; Zhu, Timothy C.
2011-10-15
Purpose: For routine clinical dosimetry of photon beams, it is often necessary to know the minimum thickness of backscatter phantom material required to ensure that full backscatter conditions exist. Methods: In case of insufficient backscatter thickness, one can determine the backscatter correction factor, BCF(s,d,t), defined as the ratio of the absorbed dose measured on the central axis of a phantom with backscatter thickness t to that with full backscatter, for square field size s and forward depth d. Measurements were performed in SAD geometry for 6 and 15 MV photon beams using a 0.125 cc thimble chamber for field sizes between 10 × 10 and 30 × 30 cm at depths between d_max (1.5 cm for 6 MV and 3 cm for 15 MV) and 20 cm. Results: A convolution method was used to calculate BCF using Monte Carlo simulated point-spread kernels generated for clinical photon beams for energies between Co-60 and 24 MV. The convolution calculation agrees with the experimental measurements to within 0.8% with the same physical trend. The value of BCF deviates more from 1 for lower energies and larger field sizes. According to our convolution calculation, the minimum BCF occurs at forward depth d_max and 40 × 40 cm field size: 0.970 for 6 MV and 0.983 for 15 MV. Conclusions: The authors concluded that the required backscatter thickness is 6.0 cm for 6 MV and 4.0 cm for 15 MV for field sizes up to 10 × 10 cm when BCF = 0.998. If a 4 cm backscatter thickness is used, BCF is 0.997 and 0.983 for field sizes of 10 × 10 and 40 × 40 cm for 6 MV, and 0.998 and 0.990 for 10 × 10 and 40 × 40 cm for 15 MV, respectively.
Coding "Corrective Recasts": The Maintenance of Meaning and More Fundamental Problems
ERIC Educational Resources Information Center
Hauser, Eric
2005-01-01
A fair amount of descriptive research in the field of second language acquisition has looked at the presence of what have been labeled corrective recasts. This research has relied on the methodological practice of coding to identify particular turns as "corrective recasts." Often, the coding criteria make use of the notion of the maintenance of…
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.
Ren, Shaoqing; He, Kaiming; Girshick, Ross; Sun, Jian
2017-06-01
State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features. Using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3], our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.
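The "objectness at each position" idea rests on generating a fixed set of anchor boxes per feature-map cell. The sketch below shows only that anchor-generation step; the stride, scales, and aspect ratios are illustrative choices, not the exact Faster R-CNN configuration, and the learned objectness/regression heads are omitted.

```python
import numpy as np

# Sketch of RPN-style anchor generation: at every feature-map cell, k anchor
# boxes of several scales and aspect ratios are centered on the
# corresponding image location (here k = 3 scales x 3 ratios = 9).
def generate_anchors(feat_h, feat_w, stride=16,
                     scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    anchors = []
    for i in range(feat_h):
        for j in range(feat_w):
            cy, cx = (i + 0.5) * stride, (j + 0.5) * stride
            for s in scales:
                for r in ratios:
                    h, w = s * np.sqrt(r), s / np.sqrt(r)  # area stays s*s
                    anchors.append((cx - w / 2, cy - h / 2,
                                    cx + w / 2, cy + h / 2))
    return np.array(anchors)

a = generate_anchors(2, 3)   # 2x3 feature map -> 2 * 3 * 9 = 54 anchors
```

Each anchor keeps the area of its scale while varying aspect ratio, so the network sees the same location at several shapes.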
Documents Pertaining to Resource Conservation and Recovery Act Corrective Action Event Codes
Document containing RCRA Corrective Action event codes and definitions, including national requirements, initiating sources, dates, and guidance, from the first facility assessment until the Corrective Action is terminated.
Low inductance diode design of the Proto 2 accelerator for imploding plasma loads
NASA Astrophysics Data System (ADS)
Hsing, W. W.; Coats, R.; McDaniel, D. H.; Spielman, R. B.
This paper describes a new water transmission line convolute, single-piece insulator, and diode design for the Proto 2 accelerator. The water transmission lines have a 5 cm gap to eliminate any water arcing. A two-dimensional magnetic field code was used to calculate the convolute inductance. An acrylic insulator was used, as well as a single-piece laminated polycarbonate insulator. They have been successfully tested at over 90% of the Shipman criterion for classical insulator breakdown, although the laminations in the polycarbonate insulator failed after a few shots. The anode and cathode each have two pieces and are held together mechanically. The vacuum MITL tapers to a 3 mm minimum gap. The total inductance is 8.4 nH for gas puff loads and 7.8 nH for imploding foil loads. Out of a forward-going energy of 290 kJ, 175 kJ has been delivered past the insulator, and 100 kJ has been successfully delivered to the load.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiaogang; De Carlo, Francesco; Phatak, Charudatta
This paper presents an algorithm to calibrate the center-of-rotation for X-ray tomography by using a machine learning approach, the Convolutional Neural Network (CNN). The algorithm shows excellent accuracy from the evaluation of synthetic data with various noise ratios. It is further validated with experimental data of four different shale samples measured at the Advanced Photon Source and at the Swiss Light Source. The results are as good as those determined by visual inspection and show better robustness than conventional methods. CNN also has great potential for reducing or removing other artifacts caused by instrument instability, detector non-linearity, etc. An open-source toolbox, which integrates the CNN methods described in this paper, is freely available through GitHub at tomography/xlearn and can be easily integrated into existing computational pipelines available at various synchrotron facilities. Source code, documentation and information on how to contribute are also provided.
Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes
NASA Astrophysics Data System (ADS)
Jing, Lin; Brun, Todd; Quantum Research Team
Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Hagiwara et al. (2007) presented a method to calculate parity check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
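The quasi-cyclic structure behind such codes can be sketched directly: the parity-check matrix is tiled from circulant permutation blocks, each an identity matrix cyclically shifted by some offset. The shift table below is an arbitrary illustration, not Hagiwara's high-girth construction.

```python
import numpy as np

# A circulant permutation block: the p x p identity shifted cyclically.
def circulant(p, shift):
    return np.roll(np.eye(p, dtype=int), shift, axis=1)

# A quasi-cyclic parity-check matrix is a block grid of such circulants;
# `shifts` gives one offset per block.  Row weight equals the number of
# block columns, column weight the number of block rows.
def qc_parity_check(p, shifts):
    return np.block([[circulant(p, s) for s in row] for row in shifts])

H = qc_parity_check(5, [[0, 1, 2], [0, 2, 4]])   # 10 x 15, row weight 3
```

Choosing the shift offsets carefully (the hard part, which the paper's construction addresses) is what yields high girth and good decoding behavior.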
Optimizing the Galileo space communication link
NASA Technical Reports Server (NTRS)
Statman, J. I.
1994-01-01
The Galileo mission was originally designed to investigate Jupiter and its moons utilizing a high-rate, X-band (8415 MHz) communication downlink with a maximum rate of 134.4 kb/sec. However, following the failure of the high-gain antenna (HGA) to fully deploy, a completely new communication link design was established that is based on Galileo's S-band (2295 MHz), low-gain antenna (LGA). The new link relies on data compression, local and intercontinental arraying of antennas, a (14,1/4) convolutional code, a (255,M) variable-redundancy Reed-Solomon code, decoding feedback, and techniques to reprocess recorded data to greatly reduce data losses during signal acquisition. The combination of these techniques will enable return of significant science data from the mission.
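A rate-1/n convolutional encoder like the one in the reworked Galileo link can be sketched as a shift register whose output bits are parities selected by generator polynomials. The constraint-length-3 generators below are illustrative only; Galileo's (14,1/4) code uses constraint length 14 and different generators.

```python
# Sketch of a rate-1/4 nonsystematic convolutional encoder: each input bit
# shifts into a K-bit register and one output bit per generator polynomial
# is formed as the parity of the tapped register bits.  K = 3 and the
# repeated (7, 5) octal generators are hypothetical stand-ins for the
# actual (14, 1/4) Galileo code.
def conv_encode(bits, generators=(0b111, 0b101, 0b111, 0b101), K=3):
    state = 0
    out = []
    for b in bits + [0] * (K - 1):            # flush bits return state to 0
        state = ((state << 1) | b) & ((1 << K) - 1)
        for g in generators:
            out.append(bin(state & g).count("1") % 2)
    return out

coded = conv_encode([1, 0, 1, 1])   # (4 msg + 2 flush) steps * 4 outputs
```

The rate-1/4 expansion (four coded bits per input bit) is what buys the large coding gain needed to operate over the low-gain antenna.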
Performance of noncoherent MFSK channels with coding
NASA Technical Reports Server (NTRS)
Butman, S. A.; Lyon, R. F.
1974-01-01
Computer simulation of data transmission over a noncoherent channel with predetection signal-to-noise ratio of 1 shows that convolutional coding can reduce the energy requirement by 4.5 dB at a bit error rate of 0.001. The effects of receiver quantization and choice of number of tones are analyzed; nearly optimum performance is attained with eight quantization levels and sixteen tones at predetection S/N ratio of 1. The effects of changing predetection S/N ratio are also analyzed; for lower predetection S/N ratio, accurate extrapolations can be made from the data, but for higher values, the results are more complicated. These analyses will be useful in designing telemetry systems when coherence is limited by turbulence in the signal propagation medium or oscillator instability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, MeiYue; Lin, Lin; Yang, Chao
The single particle energies obtained in a Kohn-Sham density functional theory (DFT) calculation are generally known to be poor approximations to electron excitation energies that are measured in transport, tunneling and spectroscopic experiments such as photo-emission spectroscopy. The correction to these energies can be obtained from the poles of a single particle Green's function derived from a many-body perturbation theory. From a computational perspective, the accuracy and efficiency of such an approach depends on how a self energy term that properly accounts for dynamic screening of electrons is approximated. The G0W0 approximation is a widely used technique in which the self energy is expressed as the convolution of a noninteracting Green's function (G0) and a screened Coulomb interaction (W0) in the frequency domain. The computational cost associated with such a convolution is high due to the high complexity of evaluating W0 at multiple frequencies. In this paper, we discuss how the cost of a G0W0 calculation can be reduced by constructing a low rank approximation to the frequency dependent part of W0. In particular, we examine the effect of such a low rank approximation on the accuracy of the G0W0 approximation. We also discuss how the numerical convolution of G0 and W0 can be evaluated efficiently and accurately by using a contour deformation technique with an appropriate choice of the contour.
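The low-rank idea itself is a generic linear-algebra step: keep only the leading singular triplets of a matrix. The sketch below uses a synthetic matrix, not an actual screened-interaction operator, and truncated SVD as the (standard, Eckart-Young-optimal) compression; the paper's specific construction of the approximation may differ.

```python
import numpy as np

# Rank-k approximation via truncated SVD.  For a matrix of true rank k,
# keeping k singular triplets reproduces it exactly; in general this is
# the best rank-k approximation in the Frobenius norm (Eckart-Young).
def low_rank_approx(A, k):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 4)) @ rng.standard_normal((4, 50))  # rank 4
A4 = low_rank_approx(A, 4)      # reproduces A to numerical precision
```

When the frequency-dependent operator is numerically low rank, every subsequent frequency evaluation acts on k columns instead of the full matrix, which is the source of the cost reduction.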
NP-hardness of decoding quantum error-correction codes
NASA Astrophysics Data System (ADS)
Hsieh, Min-Hsiu; Le Gall, François
2011-05-01
Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy to simplify decoding, since two different errors might not, and need not, be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum code is degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problem and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.
1977-09-01
to state as successive input bits are brought into the encoder. We can more easily follow our progress on the equivalent lattice diagram. [Fig. 12: Convolutional Encoder, State Diagram and Lattice; input path 1001.] The path through the state diagram can in fact be traced. The Viterbi algorithm can be simply described with the aid of this lattice. Note that the nodes of the lattice represent
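The trellis ("lattice") view of decoding described above can be made concrete. The sketch below runs hard-decision Viterbi decoding on the classic rate-1/2, constraint-length-3 code with octal generators (7, 5); this particular code is an illustrative choice, not necessarily the one in the report.

```python
# Viterbi decoding on the trellis of a small convolutional code.  Trellis
# nodes are encoder states; at each step the algorithm keeps, for every
# state, only the lowest-metric (survivor) path entering it.
G = (0b111, 0b101)      # generator taps, octal (7, 5)
K = 3                   # constraint length

def step(state, bit):
    reg = (bit << (K - 1)) | state            # shift the new bit in
    out = tuple(bin(reg & g).count("1") % 2 for g in G)
    return reg >> 1, out                      # next state, output symbol

def encode(bits):
    state, out = 0, []
    for b in bits + [0] * (K - 1):            # flush back to the zero state
        state, sym = step(state, b)
        out.extend(sym)
    return out

def viterbi(received):
    syms = [tuple(received[i:i + 2]) for i in range(0, len(received), 2)]
    metric, paths = {0: 0}, {0: []}           # encoder starts in state 0
    for sym in syms:
        new_metric, new_paths = {}, {}
        for state, m in metric.items():
            for bit in (0, 1):
                nxt, out = step(state, bit)
                d = m + sum(a != b for a, b in zip(out, sym))  # Hamming metric
                if nxt not in new_metric or d < new_metric[nxt]:
                    new_metric[nxt], new_paths[nxt] = d, paths[state] + [bit]
        metric, paths = new_metric, new_paths
    return paths[0][:-(K - 1)]                # survivor ending in state 0

msg = [1, 0, 1, 1, 0, 0, 1]
rx = encode(msg)
rx[3] ^= 1                                    # inject a single channel error
decoded = viterbi(rx)                         # recovers msg
```

Because this code has free distance 5, a single channel error leaves the correct path with the lowest metric, and the survivor traced back from the zero state reproduces the message.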
Minimizing embedding impact in steganography using trellis-coded quantization
NASA Astrophysics Data System (ADS)
Filler, Tomáš; Judas, Jan; Fridrich, Jessica
2010-01-01
In this paper, we propose a practical approach to minimizing embedding impact in steganography based on syndrome coding and trellis-coded quantization, and contrast its performance with appropriate rate-distortion bounds. We assume that each cover element can be assigned a positive scalar expressing the impact of making an embedding change at that element (single-letter distortion). The problem is to embed a given payload with minimal possible average embedding impact. This task, which can be viewed as a generalization of matrix embedding or writing on wet paper, has been approached using heuristic and suboptimal tools in the past. Here, we propose a fast and very versatile solution to this problem that can theoretically achieve performance arbitrarily close to the bound. It is based on syndrome coding using linear convolutional codes with the optimal binary quantizer implemented using the Viterbi algorithm run in the dual domain. The complexity and memory requirements of the embedding algorithm are linear w.r.t. the number of cover elements. For practitioners, we include detailed algorithms for finding good codes and their implementation. Finally, we report extensive experimental results for a large set of relative payloads and for different distortion profiles, including the wet paper channel.
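The syndrome-coding principle the paper builds on can be shown with its simplest instance: matrix embedding with the [7,4] Hamming parity-check matrix, where three message bits are embedded into seven cover bits by changing at most one bit. The paper's construction replaces this block code with a convolutional code decoded by Viterbi in the dual domain; this sketch only illustrates the shared principle.

```python
import numpy as np

# Matrix embedding with the [7,4] Hamming parity-check matrix: modify the
# cover so that its syndrome H @ stego (mod 2) equals the message.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def embed(cover, message):
    syn = (H @ cover - message) % 2          # syndrome mismatch
    stego = cover.copy()
    if syn.any():
        # each nonzero 3-bit pattern matches exactly one column of H
        col = next(j for j in range(7) if (H[:, j] == syn).all())
        stego[col] ^= 1                      # single embedding change
    return stego

def extract(stego):
    return H @ stego % 2

cover = np.array([1, 0, 0, 1, 1, 0, 1])
msg = np.array([1, 1, 0])
stego = embed(cover, msg)                    # extract(stego) == msg
```

Flipping the cover bit whose column of H equals the mismatch syndrome shifts the syndrome onto the message, so the receiver recovers the payload with no shared side information beyond H.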
UNIPIC code for simulations of high power microwave devices
NASA Astrophysics Data System (ADS)
Wang, Jianguo; Zhang, Dianhui; Liu, Chunliang; Li, Yongdong; Wang, Yue; Wang, Hongguang; Qiao, Hailiang; Li, Xiaoze
2009-03-01
In this paper, UNIPIC code, a new member in the family of fully electromagnetic particle-in-cell (PIC) codes for simulations of high power microwave (HPM) generation, is introduced. In the UNIPIC code, the electromagnetic fields are updated using the second-order, finite-difference time-domain (FDTD) method, and the particles are moved using the relativistic Newton-Lorentz force equation. The convolutional perfectly matched layer method is used to truncate the open boundaries of HPM devices. To model curved surfaces and avoid the time step reduction in the conformal-path FDTD method, the CP weakly conditional-stable FDTD (WCS FDTD) method, which combines the WCS FDTD and CP-FDTD methods, is implemented. UNIPIC is two-and-a-half dimensional, is written in the object-oriented C++ language, and can be run on a variety of platforms including WINDOWS, LINUX, and UNIX. Users can use the graphical user interface to create the geometric structures of the simulated HPM devices, or load structures created previously. Numerical experiments on some typical HPM devices using the UNIPIC code are given. The results are compared to those obtained from some well-known PIC codes, and agree well with each other.
Deblurring adaptive optics retinal images using deep convolutional neural networks.
Fei, Xiao; Zhao, Junlei; Zhao, Haoxin; Yun, Dai; Zhang, Yudong
2017-12-01
Adaptive optics (AO) can be used to compensate for ocular aberrations to achieve near diffraction limited high-resolution retinal images. However, many factors, such as the limited aberration measurement and correction accuracy with AO, intraocular scatter, and imaging noise, will degrade the quality of retinal images. Image post processing is an indispensable and economical method to make up for the limitations of the AO retinal imaging procedure. In this paper, we proposed a deep learning method to restore the degraded retinal images for the first time. The method directly learned an end-to-end mapping between the blurred and restored retinal images. The mapping was represented as a deep convolutional neural network that was trained to output high-quality images directly from blurry inputs without any preprocessing. This network was validated on synthetically generated retinal images as well as real AO retinal images. The assessment of the restored retinal images demonstrated that the image quality had been significantly improved.
Dura-Bernal, Salvador; Garreau, Guillaume; Georgiou, Julius; Andreou, Andreas G; Denham, Susan L; Wennekers, Thomas
2013-10-01
The ability to recognize the behavior of individuals is of great interest in the general field of safety (e.g. building security, crowd control, transport analysis, independent living for the elderly). Here we report a new real-time acoustic system for human action and behavior recognition that integrates passive audio and active micro-Doppler sonar signatures over multiple time scales. The system architecture is based on a six-layer convolutional neural network, trained and evaluated using a dataset of 10 subjects performing seven different behaviors. Probabilistic combination of system output through time for each modality separately yields 94% (passive audio) and 91% (micro-Doppler sonar) correct behavior classification; probabilistic multimodal integration increases classification performance to 98%. This study supports the efficacy of micro-Doppler sonar systems in characterizing human actions, which can then be efficiently classified using ConvNets. It also demonstrates that the integration of multiple sources of acoustic information can significantly improve the system's performance.
Can we recognize horses by their ocular biometric traits using deep convolutional neural networks?
NASA Astrophysics Data System (ADS)
Trokielewicz, Mateusz; Szadkowski, Mateusz
2017-08-01
This paper aims at determining the viability of horse recognition by means of ocular biometrics and deep convolutional neural networks (deep CNNs). Fast and accurate identification of race horses before racing is crucial for ensuring that exactly the horses that were declared are participating, using methods that are non-invasive and friendly to these delicate animals. As typical iris recognition methods require a lot of fine-tuning of method parameters and high-quality data, CNNs seem like a natural candidate for recognition thanks to their potentially excellent abilities in describing texture, combined with ease of implementation in an end-to-end manner. Also, with such an approach we can easily utilize both iris and periocular features without constructing complicated algorithms for each. We thus present a simple CNN classifier, able to correctly identify almost 80% of the samples in an identification scenario, and giving an equal error rate (EER) of less than 10% in a verification scenario.
Regoui, Chaouki; Durand, Guillaume; Belliveau, Luc; Léger, Serge
2013-01-01
This paper presents a novel hybrid DNA encryption (HyDEn) approach that uses randomized assignments of unique error-correcting DNA Hamming code words for single characters in the extended ASCII set. HyDEn relies on custom-built quaternary codes and a private key used in the randomized assignment of code words and the cyclic permutations applied on the encoded message. Along with its ability to detect and correct errors, HyDEn equals or outperforms existing cryptographic methods and represents a promising in silico DNA steganographic approach. PMID:23984392
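The HyDEn-style pipeline described above (randomized character-to-code-word assignment plus keyed cyclic permutation) can be sketched end to end. The 8-base random words below are stand-ins for HyDEn's error-correcting quaternary Hamming code words, and the key handling is purely illustrative.

```python
import random

# Sketch of a HyDEn-style scheme: each extended-ASCII character maps to a
# DNA code word through a key-seeded randomized codebook, and the encoded
# message is cyclically rotated by a key-derived offset.
BASES = "ACGT"
WORD = 8                # code-word length in bases (illustrative)

def codebook(key):
    rng = random.Random(key)
    words = set()
    while len(words) < 256:
        words.add("".join(rng.choice(BASES) for _ in range(WORD)))
    words = sorted(words)
    rng.shuffle(words)                       # randomized assignment
    return {chr(i): words[i] for i in range(256)}

def encrypt(text, key):
    book = codebook(key)
    dna = "".join(book[c] for c in text)
    shift = random.Random(key + 1).randrange(len(dna))   # cyclic permutation
    return dna[shift:] + dna[:shift]

def decrypt(dna, key):
    shift = random.Random(key + 1).randrange(len(dna))
    dna = dna[-shift:] + dna[:-shift] if shift else dna  # undo rotation
    inverse = {v: k for k, v in codebook(key).items()}
    return "".join(inverse[dna[i:i + WORD]]
                   for i in range(0, len(dna), WORD))

ciphertext = encrypt("secret", 42)   # a 48-base DNA string
```

Because both the codebook and the rotation are derived deterministically from the private key, decryption with the same key inverts both steps; without the key, the randomized assignment hides the character frequencies.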
Optimal Codes for the Burst Erasure Channel
NASA Technical Reports Server (NTRS)
Hamkins, Jon
2010-01-01
Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctable burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency.
The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure protection. As can be seen, the simple interleaved RS codes have substantially lower inefficiency over a wide range of transmission lengths.
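The block-interleaving mechanism behind this design can be demonstrated in a few lines: code words are written as rows and transmitted column by column, so a contiguous burst of erasures is split across distinct code words. The symbol labels and dimensions below are illustrative; any MDS row code (SPC, Reed-Solomon) would then recover the now-isolated erasures.

```python
# Block interleaving: transmit `depth` code words column by column, so a
# burst of up to `depth` consecutive erasures hits each code word at most
# once.  Symbols are labeled "w<word>s<symbol>" for inspection.
def interleave(rows):
    depth, n = len(rows), len(rows[0])
    return [rows[i][j] for j in range(n) for i in range(depth)]

depth, n = 4, 8
rows = [[f"w{i}s{j}" for j in range(n)] for i in range(depth)]
tx = interleave(rows)

burst = set(range(5, 5 + depth))             # a burst of `depth` erasures
# transmitted position k carries rows[k % depth][k // depth]
erasures_per_word = [sum(1 for pos in burst if pos % depth == i)
                     for i in range(depth)]  # one erasure per code word
```

With one erasure per row, even a single-parity-check row code suffices to fill every gap, which is why the interleaved MDS scheme achieves near-optimal burst protection at low complexity.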
New decoding methods of interleaved burst error-correcting codes
NASA Astrophysics Data System (ADS)
Nakano, Y.; Kasahara, M.; Namekawa, T.
1983-04-01
A probabilistic method of single burst error correction, using the syndrome correlation of subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high capability of burst error correction with less decoding delay. By generalizing this method it is possible to obtain a probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using the syndrome correlation of subcodes, which are interleaved m-fold burst error detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
NAS Experiences of Porting CM Fortran Codes to HPF on IBM SP2 and SGI Power Challenge
NASA Technical Reports Server (NTRS)
Saini, Subhash
1995-01-01
Current Connection Machine (CM) Fortran codes developed for the CM-2 and the CM-5 represent an important class of parallel applications. Several users have employed CM Fortran codes in production mode on the CM-2 and the CM-5 for the last five to six years, constituting a heavy investment in terms of cost and time. With Thinking Machines Corporation's decision to withdraw from the hardware business and with the decommissioning of many CM-2 and CM-5 machines, the best way to protect the substantial investment in CM Fortran codes is to port the codes to High Performance Fortran (HPF) on highly parallel systems. HPF is very similar to CM Fortran and thus represents a natural transition. Conversion issues involved in porting CM Fortran codes on the CM-5 to HPF are presented. In particular, the differences between data distribution directives and the CM Fortran Utility Routines Library, as well as the equivalent functionality in the HPF Library, are discussed. Several CM Fortran codes (the Cannon algorithm for matrix-matrix multiplication, a linear solver Ax=b, 1-D convolution for 2-D datasets, a Laplace's equation solver, and a Direct Simulation Monte Carlo (DSMC) code) have been ported to Subset HPF on the IBM SP2 and the SGI Power Challenge. Speedup ratios versus number of processors for the linear solver and DSMC code are presented.
Performance analysis of a cascaded coding scheme with interleaved outer code
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A cascaded coding scheme for a random error channel with a given bit-error rate is analyzed. In this scheme, the inner code C1 is an (n1, m1·l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved with degree m1. A procedure for computing the probability of a correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.
Sharpening spots: correcting for bleedover in cDNA array images.
Therneau, Terry; Tschumper, Renee C; Jelinek, Diane
2002-03-01
For cDNA array methods that depend on imaging of a radiolabel, we show that bleedover of one spot onto another, due to the gap between the array and the imaging media, can be a major problem. The images can be sharpened, however, using a blind deconvolution method based on the EM algorithm. The sharpened images look like a set of donuts, which concurs with our knowledge of the spotting process. Oversharpened images are actually useful as well, in locating the centers of each spot.
Solving constant-coefficient differential equations with dielectric metamaterials
NASA Astrophysics Data System (ADS)
Zhang, Weixuan; Qu, Che; Zhang, Xiangdong
2016-07-01
Recently, the concept of metamaterial analog computing has been proposed (Silva et al 2014 Science 343 160-3). Some mathematical operations, such as spatial differentiation, integration, and convolution, have been performed using designed metamaterial blocks. Motivated by this work, we propose a practical approach based on dielectric metamaterials to solve differential equations. An ordinary differential equation can be solved accurately by a correctly designed metamaterial system. Numerical simulations using well-established routines have been performed to successfully verify all theoretical analyses.
ITA, a portable program for the interactive analysis of data from tracer experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wootton, R.; Ashley, K.
ITA is a portable program for analyzing data from tracer experiments, most of the mathematical and graphical work being carried out by subroutines from the NAG and DASL libraries. The program can be used in batch or interactive mode, commands being typed in an English-like language, in free format. Data can be entered from a terminal keyboard or read from a file, and can be validated by printing or plotting them. Erroneous values can be corrected by appropriate editing. Analysis can involve elementary statistics, multiple-isotope crossover corrections, convolution or deconvolution, polyexponential curve-fitting, spline interpolation and/or compartmental analysis. On those installations with the appropriate hardware, high-resolution graphs can be drawn.
Towards self-correcting quantum memories
NASA Astrophysics Data System (ADS)
Michnicki, Kamil
This thesis presents a model of self-correcting quantum memories where quantum states are encoded using topological stabilizer codes and error correction is done using local measurements and local dynamics. Quantum noise poses a practical barrier to developing quantum memories. This thesis explores two types of models for suppressing noise. One model suppresses thermalizing noise energetically by engineering a Hamiltonian with a high energy barrier between code states. Thermalizing dynamics are modeled phenomenologically as a Markovian quantum master equation with only local generators. The second model suppresses stochastic noise with a cellular automaton that performs error correction using syndrome measurements and a local update rule. Several ways of visualizing and thinking about stabilizer codes are presented in order to design ones that have a high energy barrier: the non-local Ising model, the quasi-particle graph, and the theory of welded stabilizer codes. I develop the theory of welded stabilizer codes and use it to construct a code with the highest known energy barrier in 3-d for spin Hamiltonians: the welded solid code. Although the welded solid code is not fully self-correcting, it has some self-correcting properties: it has an increased memory lifetime for an increased system size up to a temperature dependent maximum. One strategy for increasing the energy barrier is mediating an interaction with an external system. I prove a no-go theorem for a class of Hamiltonians where the interaction terms are local, of bounded strength, and commute with the stabilizer group. Under these conditions the energy barrier can only be increased by a multiplicative constant. I develop a cellular automaton to perform error correction on a state encoded using the toric code. The numerical evidence indicates that while there is no threshold, the model can extend the memory lifetime significantly.
While of less theoretical importance, this could be practical for real implementations of quantum memories. Numerical evidence also suggests that the cellular automaton could function as a decoder with a soft threshold.
Error-correcting pairs for a public-key cryptosystem
NASA Astrophysics Data System (ADS)
Pellikaan, Ruud; Márquez-Corbella, Irene
2017-06-01
Code-based Cryptography (CBC) is a powerful and promising alternative for quantum-resistant cryptography. Indeed, together with lattice-based, multivariate, and hash-based cryptography, it is one of the principal available techniques for post-quantum cryptography. CBC was first introduced by McEliece, who designed one of the most efficient Public-Key encryption schemes, with exceptionally strong security guarantees and other desirable properties, that still resists attacks based on the Quantum Fourier Transform and Amplitude Amplification. The original proposal, which remains unbroken, was based on binary Goppa codes. Later, several families of codes have been proposed in order to reduce the key size; some of these alternatives have already been broken. One of the main requirements of a code-based cryptosystem is having high-performance t-bounded decoding algorithms, which is achieved when the code has a t-error-correcting pair (ECP). Indeed, those McEliece schemes that use GRS, BCH, Goppa and algebraic geometry codes are in fact using an error-correcting pair as a secret key. That is, the security of these Public-Key Cryptosystems is based not only on the inherent intractability of bounded distance decoding but also on the assumption that it is difficult to efficiently retrieve an error-correcting pair. In this paper, the class of codes with a t-ECP is proposed for the McEliece cryptosystem. Moreover, we study the hardness of distinguishing arbitrary codes from those having a t-error-correcting pair.
Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue
2018-01-01
One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently, due to the energy limitation of sensor nodes. Network coding will increase the network throughput of WSN dramatically due to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this error propagation property, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated error is urgently needed. Based on the social network characteristic inherent in WSN and L1 optimization, we propose a novel scheme which successfully corrects more than C/2 corrupted errors. What is more, even if errors occur on all the links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix which can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Using the theory of social networks, the informative relay nodes are selected and marked with high trust value. The two methods, L1 optimization and utilizing the social characteristic, coordinate with each other and can correct propagated errors even when their fraction is exactly 100% in WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments. PMID:29401668
On Algebraic Decoding of q-ary Reed-Muller and Product Reed-Solomon Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
SANTHI, NANDAKISHORE
We consider a list decoding algorithm recently proposed by Pellikaan-Wu for q-ary Reed-Muller codes RM_q(ℓ, m, n) of length n ≤ q^m when ℓ ≤ q. A simple and easily accessible correctness proof is given which shows that this algorithm achieves a relative error-correction radius of τ ≤ 1 − √(ℓ q^(m−1)/n). This is an improvement over the earlier proof using the one-point algebraic-geometric decoding method. The described algorithm can be adapted to decode product Reed-Solomon codes. We then propose a new low-complexity recursive algebraic decoding algorithm for product Reed-Solomon codes and Reed-Muller codes. This algorithm achieves a relative error-correction radius of τ ≤ ∏_{i=1}^{m} (1 − √(k_i/q)). This algorithm is then proved to outperform the Pellikaan-Wu algorithm in both complexity and error-correction radius over a wide range of code rates.
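The two radius bounds are easy to evaluate numerically; the parameters below are illustrative choices, not values from the report:

```python
import math

def pellikaan_wu_radius(l, q, m, n):
    """tau <= 1 - sqrt(l * q**(m-1) / n) for RM_q(l, m, n), n <= q**m."""
    return 1.0 - math.sqrt(l * q ** (m - 1) / n)

def product_rs_radius(ks, q):
    """tau <= prod_i (1 - sqrt(k_i / q)) for a product Reed-Solomon code."""
    r = 1.0
    for k in ks:
        r *= 1.0 - math.sqrt(k / q)
    return r

# e.g. q = 16, m = 2, l = 2, full length n = q**m = 256, component dimensions k_i = 4
print(pellikaan_wu_radius(2, 16, 2, 256))   # ~0.646
print(product_rs_radius([4, 4], 16))        # 0.25
```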
Full Duplex, Spread Spectrum Radio System
NASA Technical Reports Server (NTRS)
Harvey, Bruce A.
2000-01-01
The goal of this project was to support the development of a full duplex, spread spectrum voice communications system. The assembly and testing of a prototype system consisting of a Harris PRISM spread spectrum radio, a TMS320C54x signal processing development board and a Zilog Z80180 microprocessor was underway at the start of this project. The efforts under this project were the development of multiple access schemes, analysis of full duplex voice feedback delays, and the development and analysis of forward error correction (FEC) algorithms. The multiple access analysis involved the selection between code division multiple access (CDMA), frequency division multiple access (FDMA) and time division multiple access (TDMA). Full duplex voice feedback analysis involved the analysis of packet size and delays associated with full loop voice feedback for confirmation of radio system performance. FEC analysis included studies of the performance under the expected burst error scenario with the relatively short packet lengths, and analysis of implementation in the TMS320C54x digital signal processor. When the capabilities and the limitations of the components used were considered, the multiple access scheme chosen was a combination TDMA/FDMA scheme that will provide up to eight users on each of three separate frequencies. Packets to and from each user will consist of 16 samples at a rate of 8,000 samples per second for a total of 2 ms of voice information. The resulting voice feedback delay will therefore be 4 - 6 ms. The most practical FEC algorithm for implementation was a convolutional code with a Viterbi decoder. Interleaving of the bits of each packet will be required to offset the effects of burst errors.
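The stated packet timing follows directly from the sample rate:

```python
samples_per_packet = 16
sample_rate_hz = 8000
packet_ms = samples_per_packet / sample_rate_hz * 1000.0
print(packet_ms)                          # 2.0 ms of voice per packet

# full-loop voice feedback spans roughly two to three packet intervals
print(2 * packet_ms, "-", 3 * packet_ms)  # 4.0 - 6.0 ms
```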
NEQAIRv14.0 Release Notes: Nonequilibrium and Equilibrium Radiative Transport Spectra Program
NASA Technical Reports Server (NTRS)
Brandis, Aaron Michael; Cruden, Brett A.
2014-01-01
NEQAIR v14.0 is the first parallelized version of NEQAIR. Starting from the last version of the code that went through the internal software release process at NASA Ames (NEQAIR 2008), there have been significant updates to the physics in the code and the computational efficiency. NEQAIR v14.0 supersedes NEQAIR v13.2, v13.1 and the suite of NEQAIR2009 versions. These updates have predominantly been performed by Brett Cruden and Aaron Brandis from ERC Inc at NASA Ames Research Center in 2013 and 2014. A new naming convention is being adopted with this current release. The current and future versions of the code will be named NEQAIR vY.X. The Y will refer to a major release increment. Minor revisions and update releases will involve incrementing X. This is to keep NEQAIR more in line with common software release practices. NEQAIR v14.0 is a standalone software tool for line-by-line spectral computation of radiative intensities and/or radiative heat flux, with one-dimensional transport of radiation. In order to accomplish this, NEQAIR v14.0, as in previous versions, requires the specification of distances (in cm), temperatures (in K) and number densities (in parts/cc) of constituent species along lines of sight. Therefore, it is assumed that flow quantities have been extracted from flow fields computed using other tools, such as CFD codes like DPLR or LAURA, and that lines of sight have been constructed and written out in the format required by NEQAIR v14.0. There are two principal modes for running NEQAIR v14.0. In the first mode NEQAIR v14.0 is used as a tool for creating synthetic spectra of any desired resolution (including convolution with a specified instrument/slit function). The first mode is typically exercised in simulating/interpreting spectroscopic measurements of different sources (e.g. shock tube data, plasma torches, etc.). In the second mode, NEQAIR v14.0 is used as a radiative heat flux prediction tool for flight projects. 
Correspondingly, NEQAIR has also been used to simulate the radiance measured on previous flight missions. This report summarizes the database updates, corrections that have been made to the code, changes to input files, parallelization, the current usage recommendations, including test cases, and an indication of the performance enhancements achieved.
Compensation of long-range process effects on photomasks by design data correction
NASA Astrophysics Data System (ADS)
Schneider, Jens; Bloecker, Martin; Ballhorn, Gerd; Belic, Nikola; Eisenmann, Hans; Keogan, Danny
2002-12-01
CD requirements for advanced photomasks are getting very demanding for the 100 nm node and below; the ITRS roadmap requires CD uniformities below 10 nm for the most critical layers. To reach this goal, statistical as well as systematic CD contributions must be minimized. Here, we focus on the reduction of systematic CD variations across the masks that may be caused by process effects, e.g. dry etch loading. We address this topic by compensating such effects via design data correction analogous to proximity correction. Dry etch loading is modeled by Gaussian convolution of pattern densities. Data correction is done geometrically by edge shifting. As the effect amplitude has an order of magnitude of 10 nm, this can only be done on e-beam writers with small address grids to avoid large CD steps in the design data. We present modeling and correction results for special mask patterns with very strong pattern density variations showing that the compensation method is able to improve CD uniformity by 50-70% depending on pattern details. The data correction itself is done with a new module developed especially to compensate long-range effects and fits nicely into the common data flow environment.
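A minimal sketch of the loading model described above: pattern density convolved with a Gaussian, and a proportional edge correction derived from it. The grid, interaction length and effect amplitude are assumptions for illustration, not the paper's fitted parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# pattern density map on an assumed 1 um grid, values in [0, 1]
density = np.zeros((512, 512))
density[100:200, 100:400] = 0.8        # dense block
density[300:320, 50:100] = 0.3         # small sparse feature

sigma_um = 40.0                        # assumed long-range loading length
loading = gaussian_filter(density, sigma=sigma_um)

# dry-etch CD bias proportional to local loading, ~10 nm effect amplitude;
# the design data correction shifts edges by the negative of this bias
amplitude_nm = 10.0
cd_bias_nm = amplitude_nm * (loading - loading.mean())
correction_nm = -cd_bias_nm
```

Densely loaded regions get a positive predicted bias and hence a negative edge shift, which is the compensation the paper applies before mask writing.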
Neural network decoder for quantum error correcting codes
NASA Astrophysics Data System (ADS)
Krastanov, Stefan; Jiang, Liang
Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.
DNCON2: improved protein contact prediction using two-level deep convolutional neural networks.
Adhikari, Badri; Hou, Jie; Cheng, Jianlin
2018-05-01
Significant improvements in the prediction of protein residue-residue contacts have been observed in recent years. These contacts, predicted using a variety of coevolution-based and machine learning methods, are the key contributors to the recent progress in ab initio protein structure prediction, as demonstrated in the recent CASP experiments. Continuing the development of new methods to reliably predict contact maps is essential to further improve ab initio structure prediction. In this paper we discuss DNCON2, an improved protein contact map predictor based on two-level deep convolutional neural networks. It consists of six convolutional neural networks: the first five predict contacts at 6, 7.5, 8, 8.5 and 10 Å distance thresholds, and the last one uses these five predictions as additional features to predict the final contact maps. On the free-modeling datasets in the CASP10, 11 and 12 experiments, DNCON2 achieves mean precisions of 35, 50 and 53.4%, respectively, higher than 30.6% by MetaPSICOV on the CASP10 dataset, 34% by MetaPSICOV on the CASP11 dataset and 46.3% by Raptor-X on the CASP12 dataset, when top L/5 long-range contacts are evaluated. We attribute the improved performance of DNCON2 to the inclusion of short- and medium-range contacts in training, the two-level approach to prediction, the use of state-of-the-art optimization and activation functions, and a novel deep learning architecture that allows each filter in a convolutional layer to access all the input features of a protein of arbitrary length. The web server of DNCON2 is at http://sysbio.rnet.missouri.edu/dncon2/ where training and testing datasets as well as the predictions for the CASP10, 11 and 12 free-modeling datasets can also be downloaded. Its source code is available at https://github.com/multicom-toolbox/DNCON2/. chengji@missouri.edu. Supplementary data are available at Bioinformatics online.
TU-D-209-02: A Backscatter Point Spread Function for Entrance Skin Dose Determination
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vijayan, S; Xiong, Z; Shankar, A
Purpose: To determine the distribution of backscattered radiation to the skin resulting from a non-uniform distribution of primary radiation through convolution with a backscatter point spread function (PSF). Methods: A backscatter PSF is determined using Monte Carlo simulation of a 1 mm primary beam incident on a 30 × 30 cm × 20 cm thick PMMA phantom using EGSnrc software. A primary profile is similarly obtained without the phantom and the difference from the total provides the backscatter profile. This scatter PSF characterizes the backscatter spread for a "point" primary interaction and can be convolved with the entrance primary dose distribution to obtain the total entrance skin dose. The backscatter PSF was integrated into the skin dose tracking system (DTS), a graphical utility for displaying the color-coded skin dose distribution on a 3D graphic of the patient during interventional fluoroscopic procedures. The backscatter convolution method was validated for the non-uniform beam resulting from the use of an ROI attenuator. The ROI attenuator is a copper sheet with about 20% primary transmission (0.7 mm thick) containing a circular aperture; this attenuator is placed in the beam to reduce dose in the periphery while maintaining full dose in the region of interest. The DTS-calculated primary plus backscatter distribution is compared to that measured with GafChromic film and that calculated using EGSnrc Monte Carlo software. Results: The PSF convolution method used in the DTS software was able to account for the spread of backscatter from the ROI region to the region under the attenuator. The skin dose distribution determined using DTS with the ROI attenuator was in good agreement with the distributions measured with GafChromic film and determined by Monte Carlo simulation. Conclusion: The PSF convolution technique provides an accurate alternative for entrance skin dose determination with non-uniform primary x-ray beams.
Partial support from NIH Grant R01-EB002873 and Toshiba Medical Systems Corp.
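The convolution step can be sketched as follows; the PSF shape, backscatter fraction and beam geometry are illustrative assumptions, not the EGSnrc-derived PSF from the abstract:

```python
import numpy as np
from scipy.signal import fftconvolve

# stand-in backscatter PSF: Gaussian kernel normalized so the backscatter
# adds an assumed ~35% to a broad-beam entrance dose
ax = np.arange(-20, 21)
X, Y = np.meshgrid(ax, ax)
psf = np.exp(-(X**2 + Y**2) / (2 * 6.0**2))
psf *= 0.35 / psf.sum()

# non-uniform primary beam from an ROI attenuator:
# full dose inside a circular aperture, ~20% transmission elsewhere
primary = np.full((200, 200), 0.2)
yy, xx = np.ogrid[:200, :200]
primary[(yy - 100)**2 + (xx - 100)**2 < 40**2] = 1.0

# total entrance skin dose = primary + primary convolved with the PSF
total = primary + fftconvolve(primary, psf, mode="same")
```

Points just outside the aperture receive backscatter spread from the full-dose ROI, which is exactly the effect the DTS accounts for under the attenuator.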
BeiDou Geostationary Satellite Code Bias Modeling Using Fengyun-3C Onboard Measurements.
Jiang, Kecai; Li, Min; Zhao, Qile; Li, Wenwen; Guo, Xiang
2017-10-27
This study validated and investigated elevation- and frequency-dependent systematic biases observed in ground-based code measurements of the Chinese BeiDou navigation satellite system, using the onboard BeiDou code measurement data from the Chinese meteorological satellite Fengyun-3C. Particularly for geostationary earth orbit satellites, sky-view coverage can be achieved over the entire elevation and azimuth angle ranges with the available onboard tracking data, which is more favorable to modeling code biases. Apart from the BeiDou-satellite-induced biases, the onboard BeiDou code multipath effects also indicate pronounced near-field systematic biases that depend only on signal frequency and the line-of-sight directions. To correct these biases, we developed a code correction model by estimating the BeiDou-satellite-induced biases as linear piece-wise functions in different satellite groups and the near-field systematic biases in a grid approach. To validate the code bias model, we carried out orbit determination using single-frequency BeiDou data with and without code bias corrections applied. Orbit precision statistics indicate that those code biases can seriously degrade single-frequency orbit determination. After the correction model was applied, the orbit position errors, 3D root mean square, were reduced from 150.6 to 56.3 cm.
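The correction model can be sketched as elevation-dependent piece-wise linear interpolation (the node values below are illustrative, not the biases estimated from the Fengyun-3C data):

```python
import numpy as np

# assumed piece-wise linear satellite-induced code bias vs elevation
elev_nodes = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90])  # degrees
bias_nodes = np.array([-0.8, -0.6, -0.4, -0.2, 0.0, 0.1,
                       0.2, 0.3, 0.35, 0.4])                    # meters, illustrative

def code_bias_correction(elevation_deg):
    """Interpolate linearly between nodes, one curve per satellite group."""
    return np.interp(elevation_deg, elev_nodes, bias_nodes)

pseudorange_m = 21_345_678.123           # illustrative raw code measurement
corrected_m = pseudorange_m - code_bias_correction(35.0)
print(code_bias_correction(35.0))        # bias at 35 deg elevation
```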
Tweaked residual convolutional network for face alignment
NASA Astrophysics Data System (ADS)
Du, Wenchao; Li, Ke; Zhao, Qijun; Zhang, Yi; Chen, Hu
2017-08-01
We propose a novel Tweaked Residual Convolutional Network approach for face alignment with a two-level convolutional network architecture. Specifically, the first-level Tweaked Convolutional Network (TCN) module predicts the landmarks quickly but accurately enough as a preliminary estimate, by taking a low-resolution version of the detected face holistically as the input. The following Residual Convolutional Network (RCN) module progressively refines each landmark by taking as input a local patch extracted around the predicted landmark, which allows the Convolutional Neural Network (CNN) to extract local shape-indexed features to fine-tune the landmark position. Extensive evaluations show that the proposed Tweaked Residual Convolutional Network approach outperforms existing methods.
Experimental Evaluation of Adaptive Modulation and Coding in MIMO WiMAX with Limited Feedback
NASA Astrophysics Data System (ADS)
Mehlführer, Christian; Caban, Sebastian; Rupp, Markus
2007-12-01
We evaluate the throughput performance of an OFDM WiMAX (IEEE 802.16-2004, Section 8.3) transmission system with adaptive modulation and coding (AMC) by outdoor measurements. The standard-compliant AMC utilizes a 3-bit feedback for SISO and Alamouti coded MIMO transmissions. By applying a 6-bit feedback and spatial multiplexing with individual AMC on the two transmit antennas, the data throughput can be increased significantly for large SNR values. Our measurements show that at small SNR values, a single antenna transmission often outperforms an Alamouti transmission. We found that this effect is caused by the asymmetric behavior of the wireless channel and by poor channel knowledge in the two-transmit-antenna case. Our performance evaluation is based on a measurement campaign employing the Vienna MIMO testbed. The measurement scenarios include typical outdoor-to-indoor NLOS, outdoor-to-outdoor NLOS, as well as outdoor-to-indoor LOS connections. We found that in all these scenarios, the measured throughput is far from its achievable maximum; the loss is mainly caused by the overly simple convolutional coding.
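A 3-bit AMC feedback of this kind amounts to an SNR-to-MCS lookup at the receiver; a sketch (the MCS set and switching thresholds are illustrative assumptions, not the 802.16-2004 values):

```python
# illustrative 3-bit AMC feedback: map measured SNR to one of 8 WiMAX-like
# (modulation, coding rate) pairs; thresholds are assumptions for illustration
MCS = [("BPSK", "1/2"), ("QPSK", "1/2"), ("QPSK", "3/4"),
       ("16QAM", "1/2"), ("16QAM", "3/4"), ("64QAM", "1/2"),
       ("64QAM", "2/3"), ("64QAM", "3/4")]
THRESH = [3, 6, 9, 12, 15, 18, 21]       # dB, assumed switching points

def feedback_index(snr_db):
    """Count crossed thresholds -> 3-bit feedback value 0..7."""
    return sum(snr_db >= t for t in THRESH)

print(MCS[feedback_index(10.5)])         # ('16QAM', '1/2')
```

Individual AMC per transmit antenna simply doubles this feedback to 6 bits, one 3-bit index per spatial stream.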
Spatially coupled low-density parity-check error correction for holographic data storage
NASA Astrophysics Data System (ADS)
Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro
2017-09-01
The spatially coupled low-density parity-check (SC-LDPC) code was considered for holographic data storage. The superiority of SC-LDPC was studied by simulation. The simulations show that the performance of SC-LDPC depends on the lifting number, and when the lifting number is over 100, SC-LDPC shows better error correctability compared with irregular LDPC. SC-LDPC is applied to the 5:9 modulation code, which is one of the differential codes. In simulation, the error-free point is near 2.8 dB, and error rates over 10⁻¹ can be corrected. From these simulation results, this error correction code can be applied to actual holographic data storage test equipment. Results showed that error rates of 8 × 10⁻² can be corrected; furthermore, it works effectively and shows good error correctability.
Preissl, Sebastian; Fang, Rongxin; Huang, Hui; Zhao, Yuan; Raviram, Ramya; Gorkin, David U; Zhang, Yanxiao; Sos, Brandon C; Afzal, Veena; Dickel, Diane E; Kuan, Samantha; Visel, Axel; Pennacchio, Len A; Zhang, Kun; Ren, Bing
2018-03-01
In the version of this article initially published online, the accession code was given as GSE1000333. The correct code is GSE100033. The error has been corrected in the print, HTML and PDF versions of the article.
Low rank approximation in G0W0 calculations
Shao, MeiYue; Lin, Lin; Yang, Chao; ...
2016-06-04
The single particle energies obtained in a Kohn-Sham density functional theory (DFT) calculation are generally known to be poor approximations to electron excitation energies that are measured in transport, tunneling and spectroscopic experiments such as photo-emission spectroscopy. The correction to these energies can be obtained from the poles of a single particle Green's function derived from a many-body perturbation theory. From a computational perspective, the accuracy and efficiency of such an approach depends on how a self energy term that properly accounts for dynamic screening of electrons is approximated. The G0W0 approximation is a widely used technique in which the self energy is expressed as the convolution of a noninteracting Green's function (G0) and a screened Coulomb interaction (W0) in the frequency domain. The computational cost associated with such a convolution is high due to the high complexity of evaluating W0 at multiple frequencies. In this paper, we discuss how the cost of a G0W0 calculation can be reduced by constructing a low rank approximation to the frequency dependent part of W0. In particular, we examine the effect of such a low rank approximation on the accuracy of the G0W0 approximation. We also discuss how the numerical convolution of G0 and W0 can be evaluated efficiently and accurately by using a contour deformation technique with an appropriate choice of the contour.
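The low-rank idea can be illustrated with a truncated SVD of a matrix whose spectrum decays quickly, standing in for the frequency-dependent part of W0 (the size, decay rate and rank below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 30
# synthetic matrix with a rapidly decaying spectrum, a stand-in for the
# frequency-dependent part of the screened Coulomb interaction W0
U = rng.standard_normal((n, n))
s = np.exp(-np.arange(n) / 5.0)
W = (U * s) @ U.T

# rank-k truncated SVD: keep only the k dominant singular triplets
u, sv, vt = np.linalg.svd(W)
W_low = (u[:, :k] * sv[:k]) @ vt[:k]

rel_err = np.linalg.norm(W - W_low) / np.linalg.norm(W)
print(rel_err)   # small: rank 30 captures nearly all of W
```

Every subsequent frequency convolution then works with the n×k factors instead of the full n×n matrix, which is where the cost reduction comes from.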
Norton, G V; Novarini, J C
2007-06-01
Ultrasonic imaging in medical applications involves propagation and scattering of acoustic waves within and by biological tissues that are intrinsically dispersive. Analytical approaches for modeling propagation and scattering in inhomogeneous media are difficult and often require extremely simplifying approximations in order to achieve a solution. To avoid such approximations, the direct numerical solution of the wave equation via the method of finite differences offers the most direct tool, which takes into account diffraction and refraction. It also allows for detailed modeling of the real anatomic structure and combination/layering of tissues. In all cases the correct inclusion of the dispersive properties of the tissues can make the difference in the interpretation of the results. However, the inclusion of dispersion directly in the time domain proved until recently to be an elusive problem. In order to model the transient signal a convolution operator that takes into account the dispersive characteristics of the medium is introduced to the linear wave equation. To test the ability of this operator to handle scattering from localized scatterers, in this work, two-dimensional numerical modeling of scattering from an infinite cylinder with physical properties associated with biological tissue is calculated. The numerical solutions are compared with the exact solution synthesized from the frequency domain for a variety of tissues having distinct dispersive properties. It is shown that in all cases, the use of the convolutional propagation operator leads to the correct solution for the scattered field.
Hirasawa, Toshiaki; Aoyama, Kazuharu; Tanimoto, Tetsuya; Ishihara, Soichiro; Shichijo, Satoki; Ozawa, Tsuyoshi; Ohnishi, Tatsuya; Fujishiro, Mitsuhiro; Matsuo, Keigo; Fujisaki, Junko; Tada, Tomohiro
2018-07-01
Image recognition using artificial intelligence with deep learning through convolutional neural networks (CNNs) has dramatically improved and been increasingly applied to medical fields for diagnostic imaging. We developed a CNN that can automatically detect gastric cancer in endoscopic images. A CNN-based diagnostic system was constructed based on Single Shot MultiBox Detector architecture and trained using 13,584 endoscopic images of gastric cancer. To evaluate the diagnostic accuracy, an independent test set of 2296 stomach images collected from 69 consecutive patients with 77 gastric cancer lesions was applied to the constructed CNN. The CNN required 47 s to analyze 2296 test images. The CNN correctly diagnosed 71 of 77 gastric cancer lesions with an overall sensitivity of 92.2%, and 161 non-cancerous lesions were detected as gastric cancer, resulting in a positive predictive value of 30.6%. Seventy of the 71 lesions (98.6%) with a diameter of 6 mm or more as well as all invasive cancers were correctly detected. All missed lesions were superficially depressed and differentiated-type intramucosal cancers that were difficult to distinguish from gastritis even for experienced endoscopists. Nearly half of the false-positive lesions were gastritis with changes in color tone or an irregular mucosal surface. The constructed CNN system for detecting gastric cancer could process numerous stored endoscopic images in a very short time with a clinically relevant diagnostic ability. It may be well applicable to daily clinical practice to reduce the burden of endoscopists.
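The reported sensitivity and positive predictive value follow directly from the stated counts:

```python
true_positives = 71      # gastric cancer lesions correctly detected
lesions = 77             # all gastric cancer lesions in the test set
false_positives = 161    # non-cancerous lesions flagged as cancer

sensitivity = true_positives / lesions
ppv = true_positives / (true_positives + false_positives)
print(round(sensitivity * 100, 1))   # 92.2
print(round(ppv * 100, 1))           # 30.6
```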
Convolutional Sparse Coding for RGB+NIR Imaging.
Hu, Xuemei; Heide, Felix; Dai, Qionghai; Wetzstein, Gordon
2018-04-01
Emerging sensor designs increasingly rely on novel color filter arrays (CFAs) to sample the incident spectrum in unconventional ways. In particular, capturing a near-infrared (NIR) channel along with conventional RGB color is an exciting new imaging modality. RGB+NIR sensing has broad applications in computational photography, such as low-light denoising; in computer vision, such as facial recognition and tracking; and it paves the way toward low-cost single-sensor RGB and depth imaging using structured illumination. However, cost-effective commercial CFAs suffer from severe spectral cross talk. This cross talk represents a major challenge in high-quality RGB+NIR imaging, rendering existing spatially multiplexed sensor designs impractical. In this work, we introduce a new approach to RGB+NIR image reconstruction using learned convolutional sparse priors. We demonstrate high-quality color and NIR imaging for challenging scenes, even including high-frequency structured NIR illumination. The effectiveness of the proposed method is validated on a large data set of experimental captures, and simulated benchmark results which demonstrate that this work achieves unprecedented reconstruction quality.
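Convolutional sparse coding represents a signal as dictionary filters convolved with sparse activation maps, found by solving an L1-penalized deconvolution. A minimal 1-D ISTA sketch (the atom, signal and penalty weight are toy assumptions; the paper learns 2-D priors for the RGB+NIR reconstruction):

```python
import numpy as np

def soft(x, t):
    # soft-thresholding, the proximal operator of the L1 penalty
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def csc_ista(y, d, lam=0.1, n_iter=1000):
    """min_z 0.5*||y - d (*) z||^2 + lam*||z||_1, circular convolution via FFT."""
    n = len(y)
    D = np.fft.rfft(d, n)
    L = np.max(np.abs(D)) ** 2            # Lipschitz bound for the gradient step
    z = np.zeros(n)
    for _ in range(n_iter):
        r = np.fft.irfft(D * np.fft.rfft(z, n), n) - y
        grad = np.fft.irfft(np.conj(D) * np.fft.rfft(r, n), n)
        z = soft(z - grad / L, lam / L)
    return z

# toy signal: two shifted, scaled copies of a decaying-exponential atom
d = np.exp(-np.arange(32) / 4.0)
z_true = np.zeros(256); z_true[40] = 1.0; z_true[130] = 0.7
y = np.fft.irfft(np.fft.rfft(d, 256) * np.fft.rfft(z_true, 256), 256)

z = csc_ista(y, d)
print(z[40], z[130])   # dominant activations at the true shifts
```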
Piano Transcription with Convolutional Sparse Lateral Inhibition
Cogliati, Andrea; Duan, Zhiyao; Wohlberg, Brendt Egon
2017-02-08
This paper extends our prior work on context-dependent piano transcription to estimate the length of the notes in addition to their pitch and onset. This approach employs convolutional sparse coding along with lateral inhibition constraints to approximate a musical signal as the sum of piano note waveforms (dictionary elements) convolved with their temporal activations. The waveforms are pre-recorded for the specific piano to be transcribed in the specific environment. A dictionary containing multiple waveforms per pitch is generated by truncating a long waveform for each pitch to different lengths. During transcription, the dictionary elements are fixed and their temporal activations are estimated and post-processed to obtain the pitch, onset and note length estimation. A sparsity penalty promotes globally sparse activations of the dictionary elements, and a lateral inhibition term penalizes concurrent activations of different waveforms corresponding to the same pitch within a temporal neighborhood, to achieve note length estimation. Experiments on the MAPS dataset show that the proposed approach significantly outperforms a state-of-the-art music transcription method trained in the same context-dependent setting in transcription accuracy.
EnzyNet: enzyme classification using 3D convolutional neural networks on spatial representation
Amidi, Afshine; Amidi, Shervine; Vlachakis, Dimitrios; Megalooikonomou, Vasileios; Paragios, Nikos; Zacharaki, Evangelia I
2018-01-01
During the past decade, with the significant progress of computational power as well as ever-rising data availability, deep learning techniques became increasingly popular due to their excellent performance on computer vision problems. The size of the Protein Data Bank (PDB) has increased more than 15-fold since 1999, which enabled the expansion of models that aim at predicting enzymatic function via their amino acid composition. Amino acid sequence, however, is less conserved in nature than protein structure and therefore considered a less reliable predictor of protein function. This paper presents EnzyNet, a novel 3D convolutional neural networks classifier that predicts the Enzyme Commission number of enzymes based only on their voxel-based spatial structure. The spatial distribution of biochemical properties was also examined as complementary information. The two-layer architecture was investigated on a large dataset of 63,558 enzymes from the PDB and achieved an accuracy of 78.4% by exploiting only the binary representation of the protein shape. Code and datasets are available at https://github.com/shervinea/enzynet. PMID:29740518
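The voxel-based spatial representation can be sketched as a simple binarization of atom coordinates onto a cubic grid (the grid size, box extent and the random coordinates below are assumptions; EnzyNet's actual preprocessing may differ):

```python
import numpy as np

def voxelize(coords, grid=32, box=40.0):
    """Binary voxel grid from atom coordinates (assumed angstrom units),
    centered on the molecule; an EnzyNet-style spatial representation."""
    coords = coords - coords.mean(axis=0)          # center the molecule
    scale = (grid - 1) / box                       # fit a box*box*box A cube
    idx = np.round(coords * scale + (grid - 1) / 2).astype(int)
    idx = idx[(idx >= 0).all(axis=1) & (idx < grid).all(axis=1)]  # clip outliers
    vox = np.zeros((grid, grid, grid), dtype=np.uint8)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return vox

atoms = np.random.default_rng(0).uniform(-15.0, 15.0, size=(500, 3))
v = voxelize(atoms)
print(v.shape, int(v.sum()))
```

The binary grid is exactly the "binary representation of the protein shape" the abstract refers to; per-voxel biochemical properties would simply add channels.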
Moradi, Farhad; Mahdavi, Seyed Rabi; Mostaar, Ahmad; Motamedi, Mohsen
2012-01-01
In this study the commissioning of a dose calculation algorithm in a currently used treatment planning system was performed and the calculation accuracy of two available methods in the treatment planning system i.e., collapsed cone convolution (CCC) and equivalent tissue air ratio (ETAR) was verified in tissue heterogeneities. For this purpose an inhomogeneous phantom (IMRT thorax phantom) was used and dose curves obtained by the TPS (treatment planning system) were compared with experimental measurements and Monte Carlo (MCNP code) simulation. Dose measurements were performed by using EDR2 radiographic films within the phantom. Dose difference (DD) between experimental results and two calculation methods was obtained. Results indicate maximum difference of 12% in the lung and 3% in the bone tissue of the phantom between two methods and the CCC algorithm shows more accurate depth dose curves in tissue heterogeneities. Simulation results show the accurate dose estimation by MCNP4C in soft tissue region of the phantom and also better results than ETAR method in bone and lung tissues. PMID:22973081
EnzyNet: enzyme classification using 3D convolutional neural networks on spatial representation.
Amidi, Afshine; Amidi, Shervine; Vlachakis, Dimitrios; Megalooikonomou, Vasileios; Paragios, Nikos; Zacharaki, Evangelia I
2018-01-01
During the past decade, with the significant progress of computational power as well as ever-rising data availability, deep learning techniques became increasingly popular due to their excellent performance on computer vision problems. The size of the Protein Data Bank (PDB) has increased more than 15-fold since 1999, which enabled the expansion of models that aim at predicting enzymatic function via their amino acid composition. Amino acid sequence, however, is less conserved in nature than protein structure and therefore considered a less reliable predictor of protein function. This paper presents EnzyNet, a novel 3D convolutional neural networks classifier that predicts the Enzyme Commission number of enzymes based only on their voxel-based spatial structure. The spatial distribution of biochemical properties was also examined as complementary information. The two-layer architecture was investigated on a large dataset of 63,558 enzymes from the PDB and achieved an accuracy of 78.4% by exploiting only the binary representation of the protein shape. Code and datasets are available at https://github.com/shervinea/enzynet.
Estimation of neutron energy distributions from prompt gamma emissions
NASA Astrophysics Data System (ADS)
Panikkath, Priyada; Udupi, Ashwini; Sarkar, P. K.
2017-11-01
A technique of estimating the incident neutron energy distribution from emitted prompt gamma intensities from a system exposed to neutrons is presented. The emitted prompt gamma intensities, or the measured photo peaks in a gamma detector, are related to the incident neutron energy distribution through a convolution of the response of the system generating the prompt gammas to mono-energetic neutrons. Presently, the system studied is a cylinder of high-density polyethylene (HDPE) placed inside another cylinder of borated HDPE (BHDPE) with an outer Pb cover, exposed to neutrons. The five prompt gamma peaks emitted from hydrogen, boron, carbon and lead can be utilized to unfold the incident neutron energy distribution as an under-determined deconvolution problem. Such an under-determined set of equations is solved using the genetic-algorithm-based Monte Carlo de-convolution code GAMCD. Feasibility of the proposed technique is demonstrated theoretically using the Monte Carlo calculated response matrix and intensities of emitted prompt gammas from the Pb-covered BHDPE-HDPE system in the case of several incident neutron spectra spanning different energy ranges.
A Double Dwell High Sensitivity GPS Acquisition Scheme Using Binarized Convolution Neural Network
Wang, Zhen; Zhuang, Yuan; Yang, Jun; Zhang, Hengfeng; Dong, Wei; Wang, Min; Hua, Luchi; Liu, Bo; Shi, Longxing
2018-01-01
Conventional GPS acquisition methods, such as Max selection and threshold crossing (MAX/TC), estimate GPS code/Doppler by its correlation peak. Different from MAX/TC, a multi-layer binarized convolution neural network (BCNN) is proposed to recognize the GPS acquisition correlation envelope in this article. The proposed method is a double-dwell acquisition in which a short integration is adopted in the first dwell and a long integration is applied in the second one. To reduce the search space for parameters, BCNN detects the possible envelope containing the auto-correlation peak in the first dwell, compressing the initial search space to 1/1023. Although a long integration is used in the second dwell, the acquisition computation overhead is still low due to the compressed search space. Overall, the total computation overhead of the proposed method is only 1/5 that of conventional ones. Experiments show that the proposed double dwell/correlation envelope identification (DD/CEI) neural network achieves a 2 dB improvement when compared with MAX/TC under the same specification. PMID:29747373
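The core arithmetic trick behind binarized networks such as the BCNN above can be sketched in a few lines: once weights and activations are constrained to {-1, +1}, every dot product inside a convolution reduces to an XNOR-and-popcount. The sketch below is illustrative only (it is not the paper's network; the signal and kernel values are made up):

```python
# Toy illustration of the XNOR-popcount trick used by binarized
# convolutional networks: with weights and activations in {-1, +1},
# a dot product reduces to bit matching.

def binarize(xs):
    """Map real values to {-1, +1} by sign (0 maps to +1)."""
    return [1 if x >= 0 else -1 for x in xs]

def dot_float(a, b):
    return sum(x * y for x, y in zip(a, b))

def dot_xnor(a, b):
    """Dot product of +/-1 vectors via match counting.

    Encoding +1 as bit 1 and -1 as bit 0, matching bits contribute +1
    and mismatching bits -1, so dot = 2 * popcount(xnor) - n.
    """
    n = len(a)
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return 2 * matches - n

signal = binarize([0.3, -1.2, 0.7, -0.1, 2.5, -0.9])
kernel = binarize([1.0, -0.5, 0.2])

# A 1-D correlation computed both ways agrees exactly on every window.
for i in range(len(signal) - len(kernel) + 1):
    window = signal[i:i + len(kernel)]
    assert dot_float(window, kernel) == dot_xnor(window, kernel)
print("float and XNOR-style dot products agree on all windows")
```

This is what makes the first-dwell envelope screening cheap: the multiply-accumulate hardware disappears.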
Large-Constraint-Length, Fast Viterbi Decoder
NASA Technical Reports Server (NTRS)
Collins, O.; Dolinar, S.; Hsu, In-Shek; Pollara, F.; Olson, E.; Statman, J.; Zimmerman, G.
1990-01-01
Scheme for efficient interconnection makes VLSI design feasible. Concept for fast Viterbi decoder provides for processing of convolutional codes of constraint length K up to 15 and rates of 1/2 to 1/6. Fully parallel (but bit-serial) architecture developed for decoder of K = 7 implemented in single dedicated VLSI circuit chip. Contains six major functional blocks. VLSI circuits perform branch metric computations, add-compare-select operations, and then store decisions in traceback memory. Traceback processor reads appropriate memory locations and puts out decoded bits. Used as building block for decoders of larger K.
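The add-compare-select and traceback stages described above can be sketched in software. The toy below uses the small classic K=3, rate-1/2 code with octal generators (7, 5) rather than the large-constraint-length codes the hardware targets; it illustrates the same dataflow, not the VLSI design itself:

```python
# Minimal hard-decision Viterbi decoder: branch metrics,
# add-compare-select, decision memory, and traceback.

K = 3                      # constraint length
G = [0b111, 0b101]         # generator taps (7, 5 in octal)
N_STATES = 1 << (K - 1)

def parity(x):
    return bin(x).count("1") & 1

def encode(bits):
    """Rate-1/2 convolutional encoder."""
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state
        out.extend(parity(reg & g) for g in G)
        state = reg >> 1
    return out

def decode(received):
    """Viterbi decoding with full traceback."""
    INF = float("inf")
    pm = [0] + [INF] * (N_STATES - 1)      # start in the all-zero state
    history = []
    for t in range(0, len(received), 2):
        r = received[t:t + 2]
        new_pm = [INF] * N_STATES
        prev = [0] * N_STATES
        inp = [0] * N_STATES
        for s in range(N_STATES):
            if pm[s] == INF:
                continue
            for b in (0, 1):               # branch metric + compare-select
                reg = (b << (K - 1)) | s
                exp = [parity(reg & g) for g in G]
                ns = reg >> 1
                metric = pm[s] + sum(e != x for e, x in zip(exp, r))
                if metric < new_pm[ns]:
                    new_pm[ns], prev[ns], inp[ns] = metric, s, b
        history.append((prev, inp))        # store decisions for traceback
        pm = new_pm
    s = pm.index(min(pm))                  # trace back from the best state
    bits = []
    for prev, inp in reversed(history):
        bits.append(inp[s])
        s = prev[s]
    return bits[::-1]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = encode(msg + [0] * (K - 1))        # tail bits flush the register
coded[3] ^= 1                              # inject one channel bit error
assert decode(coded)[:len(msg)] == msg
```

The fully parallel hardware performs all the per-state work of the inner loop simultaneously each symbol period.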
A pipeline design of a fast prime factor DFT on a finite field
NASA Technical Reports Server (NTRS)
Truong, T. K.; Hsu, In-Shek; Shao, H. M.; Reed, Irving S.; Shyu, Hsuen-Chyun
1988-01-01
A conventional prime factor discrete Fourier transform (DFT) algorithm is used to realize a discrete Fourier-like transform on the finite field, GF(q sub n). This algorithm is developed to compute cyclic convolutions of complex numbers and to decode Reed-Solomon codes. Such a pipeline fast prime factor DFT algorithm over GF(q sub n) is regular, simple, expandable, and naturally suitable for VLSI implementation. An example illustrating the pipeline aspect of a 30-point transform over GF(q sub n) is presented.
NASA Technical Reports Server (NTRS)
Kwatra, S. C.
1998-01-01
A large number of papers have been published attempting to give some analytical basis for the performance of Turbo-codes. It has been shown that performance improves with increased interleaver length. Procedures have also been given to pick the best constituent recursive systematic convolutional codes (RSCCs). However, testing by computer simulation is still required to verify these results. This thesis begins by describing the encoding and decoding schemes used. Next, simulation results on several memory-4 RSCCs are shown. It is found that the best BER performance at low Eb/N0 is not given by the RSCCs that were found using the analytic techniques given so far. Next, results are given from simulations using a smaller-memory RSCC for one of the constituent encoders. Significant reduction in decoding complexity is obtained with minimal loss in performance. Simulation results are then given for a rate-1/3 Turbo-code, with the result that this code performed as well as a rate-1/2 Turbo-code as measured by the distance from their respective Shannon limits. Finally, the results of simulations where an inaccurate noise variance measurement was used are given. From this it was observed that Turbo-decoding is fairly stable with regard to noise variance measurement.
Medical reliable network using concatenated channel codes through GSM network.
Ahmed, Emtithal; Kohno, Ryuji
2013-01-01
Although the 4th generation (4G) of the global mobile communication network, i.e., Long Term Evolution (LTE) coexisting with the 3rd generation (3G), has successfully started, the 2nd generation (2G), i.e., the Global System for Mobile communication (GSM), still plays an important role in many developing countries. Without any other reliable network infrastructure, GSM can be applied for tele-monitoring applications, where high mobility and low cost are necessary. A core objective of this paper is to introduce the design of a more reliable and dependable Medical Network Channel Code system (MNCC) through the GSM network. The MNCC design is based on a simple concatenated channel code, a cascade of an inner code (GSM) and an extra outer code (convolutional code), in order to protect medical data more robustly against channel errors than other data using the existing GSM network. In this paper, the MNCC system provides a bit error rate (BER) meeting the requirement for medical tele-monitoring of physiological signals, which is 10^-5 or less. The performance of the MNCC has been investigated using computer simulations under different channel conditions, such as additive white Gaussian noise (AWGN), Rayleigh fading and burst noise. Generally, the MNCC system provides better performance than GSM alone.
Observations on Polar Coding with CRC-Aided List Decoding
2016-09-01
1. INTRODUCTION Polar codes are a new type of forward error correction (FEC) codes, introduced by Arikan in [1], in which he...error correction (FEC) currently used and planned for use in Navy wireless communication systems. The project's results from FY14 and FY15 are...good error-correction performance. We used the Tal/Vardy method of [5]. The polar encoder uses a row vector u of length N. Let uA be the subvector
Performance of coded MFSK in a Rician fading channel. [Multiple Frequency Shift Keyed modulation
NASA Technical Reports Server (NTRS)
Modestino, J. W.; Mui, S. Y.
1975-01-01
The performance of convolutional codes in conjunction with noncoherent multiple frequency shift-keyed (MFSK) modulation and Viterbi maximum likelihood decoding on a Rician fading channel is examined in detail. While the primary motivation underlying this work has been concerned with system performance on the planetary entry channel, it is expected that the results are of considerably wider interest. Particular attention is given to modeling the channel in terms of a few meaningful parameters which can be correlated closely with the results of theoretical propagation studies. Fairly general upper bounds on bit error probability performance in the presence of fading are derived and compared with simulation results using both unquantized and quantized receiver outputs. The effects of receiver quantization and channel memory are investigated and it is concluded that the coded noncoherent MFSK system offers an attractive alternative to coherent BPSK in providing reliable low data rate communications in fading channels typical of planetary entry missions.
Reed-Solomon Codes and the Deep Hole Problem
NASA Astrophysics Data System (ADS)
Keti, Matt
In many types of modern communication, a message is transmitted over a noisy medium. When this is done, there is a chance that the message will be corrupted. An error-correcting code adds redundant information to the message which allows the receiver to detect and correct errors accrued during the transmission. We will study the famous Reed-Solomon code (found in QR codes, compact discs, deep space probes, ...) and investigate the limits of its error-correcting capacity. It can be shown that understanding this is related to understanding the "deep hole" problem, which is a question of determining when a received message has, in a sense, incurred the worst possible corruption. We partially resolve this in its traditional context, when the code is based on the finite field Fq or Fq*, as well as in new contexts, when it is based on a subgroup of Fq* or the image of a Dickson polynomial. This is a new and important problem that could give insight on the true error-correcting potential of the Reed-Solomon code.
The NASA Spacecraft Transponding Modem
NASA Technical Reports Server (NTRS)
Berner, Jeff B.; Kayalar, Selahattin; Perret, Jonathan D.
2000-01-01
A new deep space transponder is being developed by the Jet Propulsion Laboratory for NASA. The Spacecraft Transponding Modem (STM) implements the standard transponder functions and the channel service functions that have previously resided in spacecraft Command/Data Subsystems. The STM uses custom ASICs, MMICs, and MCMs to reduce the active device parts count to 70, mass to 1 kg, and volume to 524 cc. The first STMs will be flown on missions launching in the 2003 time frame. The STM tracks an X-band uplink signal and provides both X-band and Ka-band downlinks, either coherent or non-coherent with the uplink. A NASA standard Command Detector Unit is integrated into the STM, along with a codeblock processor and a hardware command decoder. The decoded command codeblocks are output to the spacecraft command/data subsystem. Virtual Channel 0 (VC-0) (hardware) commands are processed and output as critical controller (CRC) commands. Downlink telemetry is received from the spacecraft data subsystem as telemetry frames. The STM provides the following downlink coding options: the standard CCSDS (7,1/2) convolutional coding, Reed-Solomon coding with interleave depths one and five, (15,1/6) convolutional coding, and Turbo coding with rates 1/3 and 1/6. The downlink symbol rates can be linearly ramped to match the G/T curve of the receiving station, providing up to a 1 dB increase in data return. Data rates range from 5 bits per second (bps) to 24 Mbps, with three modulation modes provided: modulated subcarrier (3 different frequencies provided), biphase-L modulated direct on carrier, and Offset QPSK. Also, the capability to generate one of four non-harmonically related telemetry beacon tones is provided, to allow for a simple spacecraft status monitoring scheme for cruise phases of missions. Three ranging modes are provided: standard turn around ranging, regenerative pseudo-noise (PN) ranging, and Differential One-way Ranging (DOR) tones. 
The regenerative ranging provides the capability of increasing the ground received ranging SNR by up to 30 dB. Two different avionics interfaces to the command/data subsystem's data bus are provided: a MIL STD 1553B bus or an industry standard PCI interface. Digital interfaces provide the capability to control antenna selection (e.g., switching between high gain and low gain antennas) and antenna pointing (for future steered Ka-band antennas).
Joint Schemes for Physical Layer Security and Error Correction
ERIC Educational Resources Information Center
Adamo, Oluwayomi
2011-01-01
The major challenges facing resource constraint wireless devices are error resilience, security and speed. Three joint schemes are presented in this research which could be broadly divided into error correction based and cipher based. The error correction based ciphers take advantage of the properties of LDPC codes and Nordstrom Robinson code. A…
Error correcting coding-theory for structured light illumination systems
NASA Astrophysics Data System (ADS)
Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben
2017-06-01
Intensity-discrete structured light illumination systems project a series of projection patterns for the estimation of the absolute fringe order using only the temporal grey-level sequence at each pixel. This work proposes the use of error-correcting codes for pixel-wise correction of measurement errors. The use of an error-correcting code is advantageous in many ways: it reduces the effect of random intensity noise, it corrects outliers near the border of the fringe commonly present when using intensity-discrete patterns, and it provides robustness in the case of severe measurement errors (even burst errors where whole frames are lost). The latter aspect is particularly interesting in environments with varying ambient light as well as in safety-critical applications, e.g., monitoring of deformations of components in nuclear power plants, where high reliability is ensured even in the case of short measurement disruptions. A special form of burst error is the so-called salt-and-pepper noise, which can largely be removed with error-correcting codes using only the information of a given pixel. The performance of this technique is evaluated using both simulations and experiments.
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1976-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
NASA Astrophysics Data System (ADS)
Pei, Yong; Modestino, James W.
2007-12-01
We describe a multilayered video transport scheme for wireless channels capable of adapting to channel conditions in order to maximize end-to-end quality of service (QoS). This scheme combines a scalable H.263+ video source coder with unequal error protection (UEP) across layers. The UEP is achieved by employing different channel codes together with a multiresolution modulation approach to transport the different priority layers. Adaptivity to channel conditions is provided through a joint source-channel coding (JSCC) approach which attempts to jointly optimize the source and channel coding rates together with the modulation parameters to obtain the maximum achievable end-to-end QoS for the prevailing channel conditions. In this work, we model the wireless links as slow-fading Rician channels where the channel conditions can be described in terms of the channel signal-to-noise ratio (SNR) and the ratio of specular-to-diffuse energy. The multiresolution modulation/coding scheme consists of binary rate-compatible punctured convolutional (RCPC) codes used together with nonuniform phase-shift keyed (PSK) signaling constellations. Results indicate that this adaptive JSCC scheme employing scalable video encoding together with a multiresolution modulation/coding approach leads to significant improvements in delivered video quality for specified channel conditions. In particular, the approach results in considerably improved graceful degradation properties for decreasing channel SNR.
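The rate-compatible puncturing behind RCPC codes works by deleting mother-code output bits according to a fixed pattern. A minimal sketch, with an illustrative pattern that turns a rate-1/2 stream into rate 2/3 (not necessarily the RCPC family used in the paper):

```python
# Rate-compatible puncturing sketch: a rate-1/2 mother code emits two
# bits per input bit; deleting bits by a fixed pattern raises the rate.
# This pattern keeps 3 of every 4 coded bits, so rate 1/2 becomes 2/3.

PATTERN = [[1, 1],   # branch 0: keep both coded bits
           [1, 0]]   # branch 1: keep only the first coded bit

def puncture(coded, pattern):
    kept = []
    for i in range(0, len(coded), 2):
        col = pattern[(i // 2) % len(pattern)]
        kept.extend(coded[i + j] for j in (0, 1) if col[j])
    return kept

def depuncture(kept, pattern, n_branches):
    """Reinsert erasures (None) where bits were deleted."""
    out, it = [], iter(kept)
    for branch in range(n_branches):
        col = pattern[branch % len(pattern)]
        out.extend(next(it) if col[j] else None for j in (0, 1))
    return out

coded = [1, 0, 1, 1, 0, 0, 1, 0]     # 4 branches of a rate-1/2 stream
sent = puncture(coded, PATTERN)      # 6 bits survive: overall rate 2/3
assert len(sent) == 6
assert depuncture(sent, PATTERN, 4) == [1, 0, 1, None, 0, 0, 1, None]
```

The Viterbi decoder then treats the reinserted `None` positions as erasures (zero branch metric), which is what makes one mother code serve a whole family of rates.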
Testing of Error-Correcting Sparse Permutation Channel Codes
NASA Technical Reports Server (NTRS)
Shcheglov, Kirill, V.; Orlov, Sergei S.
2008-01-01
A computer program performs Monte Carlo direct numerical simulations for testing sparse permutation channel codes, which offer strong error-correction capabilities at high code rates and are considered especially suitable for storage of digital data in holographic and volume memories. A word in a code of this type is characterized by, among other things, a sparseness parameter (M) and a fixed number (K) of 1 or "on" bits in a channel block length of N.
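A Monte Carlo tester in the spirit of the program described above can be sketched briefly. The toy code below uses all N-bit words with exactly K ones and nearest-codeword decoding; N, K, and the single-bit-flip channel are illustrative choices, not the program's actual parameters:

```python
# Monte Carlo direct numerical simulation of a toy constant-weight
# code: codewords are all N-bit blocks with exactly K "on" bits.

import itertools
import random

N, K = 8, 2
CODE = [w for w in itertools.product((0, 1), repeat=N) if sum(w) == K]

def nearest(word):
    """Decode to the codeword at minimum Hamming distance."""
    return min(CODE, key=lambda c: sum(a != b for a, b in zip(c, word)))

random.seed(0)
trials, recovered = 200, 0
for _ in range(trials):
    sent = random.choice(CODE)
    noisy = list(sent)
    noisy[random.randrange(N)] ^= 1        # flip one random bit
    if nearest(noisy) == sent:
        recovered += 1
print(f"{recovered}/{trials} single-bit flips decoded back to the sent word")
```

Sweeping N, K, and the error model in such a loop is how a simulation estimates the correction capability of a code family empirically rather than analytically.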
Convoluted Quasi Sturmian basis for the two-electron continuum
NASA Astrophysics Data System (ADS)
Ancarani, Lorenzo Ugo; Zaytsev, A. S.; Zaytsev, S. A.
2016-09-01
In the construction of solutions for the Coulomb three-body scattering problem one encounters a series of mathematical and numerical difficulties, one of which is the cumbersome boundary conditions the wave function should obey. We propose to describe a Coulomb three-body continuum with a set of two-particle functions, named Convoluted Quasi Sturmian (CQS) functions. They are built using recently introduced Quasi Sturmian (QS) functions, which have the merit of possessing a closed form. Unlike a simple product of two one-particle functions, by construction the CQS functions look asymptotically like a six-dimensional outgoing spherical wave. The proposed CQS basis is tested through the study of the double ionization of helium by high-energy electron impact in the framework of the Temkin-Poet model. An adequate logarithmic-like phase factor is further included in order to take into account the Coulomb interelectronic interaction and formally build the correct asymptotic behavior when all interparticle distances are large. With such a phase factor (which can be easily extended to take into account higher partial waves), rapid convergence of the expansion can be obtained.
NASA Astrophysics Data System (ADS)
Yao, C.; Zhang, Y.; Zhang, Y.; Liu, H.
2017-09-01
With the rapid development of Precision Agriculture (PA) promoted by high-resolution remote sensing, crop classification of high-resolution remote sensing images is of significant value for agricultural management and estimation. Because the features and surroundings are complex and fragmented at high resolution, the accuracy of traditional classification methods has not been able to meet the requirements of agricultural applications. This paper therefore proposes a classification method for high-resolution agricultural remote sensing images based on convolutional neural networks (CNN). For training, a large number of samples were produced from panchromatic images of China's GF-1 high-resolution satellite. In the experiment, after training and testing the CNN with the MATLAB deep learning toolbox and gradually tuning the parameters during training, the crop classification finally reached a correct rate of 99.66%. By improving the accuracy of image classification and image recognition, applications of CNN provide a reference for the field of remote sensing in PA.
Using Deep Learning Model for Meteorological Satellite Cloud Image Prediction
NASA Astrophysics Data System (ADS)
Su, X.
2017-12-01
A satellite cloud image contains much weather information, such as precipitation information. Short-time cloud movement forecasting is important for precipitation forecasting and is the primary means of typhoon monitoring. Traditional methods mostly use cloud feature matching and linear extrapolation to predict cloud movement, so nonstationary processes such as inversion and deformation during cloud movement are largely not considered. It is still a hard task to predict cloud movement timely and correctly. As deep learning models perform well in learning spatiotemporal features, to meet this challenge we can regard cloud image prediction as a spatiotemporal sequence forecasting problem and introduce a deep learning model to solve it. In this research, we use a variant of the Gated Recurrent Unit (GRU) that has convolutional structures to deal with spatiotemporal features and build an end-to-end model to solve this forecasting problem. In this model, both the input and output are spatiotemporal sequences. Compared to the Convolutional LSTM (ConvLSTM) model, this model has a lower number of parameters. We apply this model to GOES satellite data, and the model performs well.
Performance of concatenated Reed-Solomon trellis-coded modulation over Rician fading channels
NASA Technical Reports Server (NTRS)
Moher, Michael L.; Lodge, John H.
1990-01-01
A concatenated coding scheme for providing very reliable data over mobile-satellite channels at power levels similar to those used for vocoded speech is described. The outer code is a shortened Reed-Solomon code which provides error detection as well as error correction capabilities. The inner code is a 1-D 8-state trellis code applied independently to both the inphase and quadrature channels. To achieve the full error correction potential of this inner code, the code symbols are multiplexed with a pilot sequence which is used to provide dynamic channel estimation and coherent detection. The implementation structure of this scheme is discussed and its performance is estimated.
STDP-based spiking deep convolutional neural networks for object recognition.
Kheradpisheh, Saeed Reza; Ganjtabesh, Mohammad; Thorpe, Simon J; Masquelier, Timothée
2018-03-01
Previous studies have shown that spike-timing-dependent plasticity (STDP) can be used in spiking neural networks (SNN) to extract visual features of low or intermediate complexity in an unsupervised manner. These studies, however, used relatively shallow architectures, and only one layer was trainable. Another line of research has demonstrated - using rate-based neural networks trained with back-propagation - that having many layers increases the recognition robustness, an approach known as deep learning. We thus designed a deep SNN, comprising several convolutional (trainable with STDP) and pooling layers. We used a temporal coding scheme where the most strongly activated neurons fire first, and less activated neurons fire later or not at all. The network was exposed to natural images. Thanks to STDP, neurons progressively learned features corresponding to prototypical patterns that were both salient and frequent. Only a few tens of examples per category were required and no label was needed. After learning, the complexity of the extracted features increased along the hierarchy, from edge detectors in the first layer to object prototypes in the last layer. Coding was very sparse, with only a few thousand spikes per image, and in some cases the object category could be reasonably well inferred from the activity of a single higher-order neuron. More generally, the activity of a few hundred such neurons contained robust category information, as demonstrated using a classifier on Caltech 101, ETH-80, and MNIST databases. We also demonstrate the superiority of STDP over other unsupervised techniques such as random crops (HMAX) or auto-encoders. Taken together, our results suggest that the combination of STDP with latency coding may be a key to understanding the way that the primate visual system learns, its remarkable processing speed and its low energy consumption. 
These mechanisms are also interesting for artificial vision systems, particularly for hardware solutions. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Solomon, G.
1993-01-01
A (72,36;15) box code is constructed as a 9 x 8 matrix whose columns add to form an extended BCH-Hamming (8,4;4) code and whose rows sum to odd or even parity. The newly constructed code, due to its matrix form, is easily decodable for all seven-error and many eight-error patterns. The code comes from a slight modification in the parity (eighth) dimension of the Reed-Solomon (8,4;5) code over GF(512). Error correction uses the row sum parity information to detect errors, which then become erasures in a Reed-Solomon correction algorithm.
Coding for reliable satellite communications
NASA Technical Reports Server (NTRS)
Gaarder, N. T.; Lin, S.
1986-01-01
This research project was set up to study various kinds of coding techniques for error control in satellite and space communications for NASA Goddard Space Flight Center. During the project period, researchers investigated the following areas: (1) decoding of Reed-Solomon codes in terms of dual basis; (2) concatenated and cascaded error control coding schemes for satellite and space communications; (3) use of hybrid coding schemes (error correction and detection incorporated with retransmission) to improve system reliability and throughput in satellite communications; (4) good codes for simultaneous error correction and error detection, and (5) error control techniques for ring and star networks.
Ge, Yulong; Zhou, Feng; Sun, Baoqi; Wang, Shengli; Shi, Bo
2017-01-01
We present quad-constellation (namely, GPS, GLONASS, BeiDou and Galileo) time group delay (TGD) and differential code bias (DCB) correction models to fully exploit the code observations of all four global navigation satellite systems (GNSSs) for navigation and positioning. The relationship between TGDs and DCBs for multi-GNSS is clearly figured out, and the equivalence of TGD and DCB correction models, combining theory with practice, is demonstrated. Meanwhile, the TGD/DCB correction models have been extended to various standard point positioning (SPP) and precise point positioning (PPP) scenarios in a multi-GNSS and multi-frequency context. To evaluate the effectiveness and practicability of broadcast TGDs in the navigation message and DCBs provided by the Multi-GNSS Experiment (MGEX), both single-frequency GNSS ionosphere-corrected SPP and dual-frequency GNSS ionosphere-free SPP/PPP tests are carried out with quad-constellation signals. Furthermore, the authors investigate the influence of differential code biases on GNSS positioning estimates. The experiments show that multi-constellation combination SPP performs better after DCB/TGD correction; for example, for GPS-only b1-based SPP, the positioning accuracies can be improved by 25.0%, 30.6% and 26.7%, respectively, in the N, E, and U components after differential code bias correction, while GPS/GLONASS/BDS b1-based SPP can be improved by 16.1%, 26.1% and 9.9%. For GPS/BDS/Galileo SPP based on the 3rd frequency, the positioning accuracies are improved by 2.0%, 2.0% and 0.4%, respectively, in the N, E, and U components after Galileo satellite DCB correction. The accuracy of Galileo-only b1-based SPP is improved by about 48.6%, 34.7% and 40.6% with DCB correction, respectively, in the N, E, and U components. The estimates of multi-constellation PPP are subject to different degrees of influence. 
For multi-constellation combination SPP, the accuracy of single-frequency is slightly better than that of dual-frequency combinations. Dual-frequency combinations are more sensitive to the differential code biases, especially the 2nd and 3rd frequency combination; for example, for GPS/BDS SPP, accuracy improvements of 60.9%, 26.5% and 58.8% in the three coordinate components are achieved after DCB parameter correction. For multi-constellation PPP, the convergence time can be reduced significantly with differential code bias correction, and the positioning accuracy is slightly better with TGD/DCB correction. PMID:28300787
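For GPS, the single-frequency TGD correction this kind of model builds on follows the IS-GPS-200 convention: the broadcast clock refers to the ionosphere-free combination, so an L1-only user subtracts TGD from the satellite clock offset, and an L2-only user subtracts gamma times TGD with gamma = (f_L1/f_L2)^2. A hedged sketch with illustrative numbers (not values from the paper):

```python
# Sketch of the broadcast TGD correction for single-frequency GPS
# users (IS-GPS-200 convention). All numeric values are illustrative.

C = 299_792_458.0                 # speed of light, m/s
F_L1, F_L2 = 1575.42e6, 1227.60e6
GAMMA = (F_L1 / F_L2) ** 2        # = (77/60)**2, about 1.6469

def sv_clock_offset(dt_sv, tgd, band):
    """Satellite clock offset (s) as seen by a single-frequency user."""
    if band == "L1":
        return dt_sv - tgd
    if band == "L2":
        return dt_sv - GAMMA * tgd
    raise ValueError(band)

dt_sv = 1.2e-4                    # broadcast clock offset, s (illustrative)
tgd = -9.5e-9                     # broadcast TGD, s (illustrative)

# Skipping the band-specific term leaves a metre-level range bias,
# which is why uncorrected code biases degrade SPP accuracy.
bias_m = C * (sv_clock_offset(dt_sv, tgd, "L1")
              - sv_clock_offset(dt_sv, tgd, "L2"))
print(f"L1-vs-L2 clock term difference: {bias_m:.2f} m")
```

The DCB products from MGEX play the analogous role for the other constellations and frequency pairs.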
NASA Astrophysics Data System (ADS)
Narita, Y.; Iida, H.; Ebert, S.; Nakamura, T.
1997-12-01
Two independent scatter correction techniques, transmission-dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter, and scatter plus primary) were simulated for three numerical phantoms for ²⁰¹Tl. Data were reconstructed with an ordered-subset EM algorithm including attenuation correction based on noiseless transmission data. The accuracy of the TDCS and TEW scatter corrections was assessed by comparison with simulated true primary data. The uniform cylindrical phantom simulation demonstrated better quantitative accuracy with TDCS than with TEW (-2.0% vs. 16.7%) and better S/N (6.48 vs. 5.05). A uniform ring myocardial phantom simulation demonstrated better homogeneity with TDCS than with TEW in the myocardium; i.e., anterior-to-posterior wall count ratios were 0.99 and 0.76 with TDCS and TEW, respectively. For the MCAT phantom, TDCS provided good visual and quantitative agreement with the simulated true primary image without noticeably increasing the noise after scatter correction. Overall, TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT.
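The TEW method compared above estimates the scatter in the main energy window from two narrow flanking windows by trapezoidal interpolation and subtracts it. A minimal sketch with invented count numbers:

```python
def tew_primary(c_main, c_low, c_up, w_main, w_low, w_up):
    """Triple-energy-window (TEW) correction: scatter in the main window is
    estimated from the two narrow flanking sub-windows by trapezoidal
    interpolation, then subtracted from the main-window counts."""
    scatter = (c_low / w_low + c_up / w_up) * w_main / 2.0
    return c_main - scatter

# A 20 keV main window flanked by two 3 keV sub-windows (illustrative counts).
print(tew_primary(1000.0, 60.0, 15.0, 20.0, 3.0, 3.0))  # 750.0
```

The noise penalty reported for TEW stems from this pixel-wise subtraction of two low-count sub-window estimates.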
Reed-Solomon error-correction as a software patch mechanism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pendley, Kevin D.
This report explores how error-correction data generated by a Reed-Solomon code may be used as a mechanism to apply changes to an existing installed codebase. Error-correction data generated from a changed or updated codebase can be applied to the existing installation to both validate it and introduce the changes or updates from the upstream source.
Using Correction Codes to Enhance Understanding of 4-Parts Harmony
ERIC Educational Resources Information Center
Engür, Doruk
2018-01-01
Effective ways of correcting errors in teaching musical harmony have been neglected. Making students realize their mistakes and having them think these over are assumed to be helpful in harmony teaching. In this sense, the correction code technique is thought to help students realize their mistakes and solve them on their own. Forty…
Hard decoding algorithm for optimizing thresholds under general Markovian noise
NASA Astrophysics Data System (ADS)
Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond
2017-04-01
Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.
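Pauli twirling, whose effect on thresholds the paper analyzes, replaces a channel by its average under conjugation by the Pauli group; the result is always a Pauli channel with the same diagonal Pauli-transfer coefficients. A small numerical sketch, using amplitude damping as the non-Pauli example (the damping strength is illustrative, not taken from the paper):

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
PAULIS = [I, X, Y, Z]

def channel(rho, kraus):
    """Apply a CPTP map given by its Kraus operators."""
    return sum(K @ rho @ K.conj().T for K in kraus)

def twirl(rho, kraus):
    """Pauli twirl: average P^dag Lambda(P rho P^dag) P over the Pauli group."""
    return sum(P.conj().T @ channel(P @ rho @ P.conj().T, kraus) @ P
               for P in PAULIS) / 4.0

# Amplitude damping with gamma = 0.2 is not a Pauli channel.
g = 0.2
K0 = np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex)
K1 = np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # |+><+|
out = twirl(rho, [K0, K1])
# Diagonal stays 0.5 and the coherence shrinks to sqrt(1-g)/2, exactly the
# action of a Pauli channel with the same Pauli-transfer diagonal.
print(np.round(out, 4))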
Covariance Matrix Evaluations for Independent Mass Fission Yields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terranova, N., E-mail: nicholas.terranova@unibo.it; Serot, O.; Archier, P.
2015-01-15
Recent needs for more accurate fission product yields include covariance information, to allow improved uncertainty estimation for the parameters used by design codes. The aim of this work is to investigate the possibility of generating more reliable and complete uncertainty information on independent mass fission yields. Mass yield covariances are estimated through a convolution between the multi-Gaussian empirical model based on Brosa's fission modes, which describes the pre-neutron mass yields, and the average prompt neutron multiplicity curve. The covariance generation task has been approached using the Bayesian generalized least-squares method through the CONRAD code. Preliminary results on the mass yield variance-covariance matrix are presented and discussed on physical grounds for the ²³⁵U(nₜₕ, f) and ²³⁹Pu(nₜₕ, f) reactions.
A model for the accurate computation of the lateral scattering of protons in water
NASA Astrophysics Data System (ADS)
Bellinzona, E. V.; Ciocca, M.; Embriaco, A.; Ferrari, A.; Fontana, A.; Mairani, A.; Parodi, K.; Rotondi, A.; Sala, P.; Tessonnier, T.
2016-02-01
A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after the convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy of the MC codes based on Molière theory, with a much shorter computing time.
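The structure of such a pencil-beam model, an electromagnetic scattering core plus a parametrized nuclear tail, folded with the beam profile, can be sketched numerically. The functional forms and parameters below are illustrative stand-ins, not the paper's fitted Molière model or FLUKA-tuned tail:

```python
import numpy as np

x = np.linspace(-30, 30, 601)  # lateral coordinate, mm (0.1 mm grid)
dx = x[1] - x[0]

# Electromagnetic core: a Gaussian stand-in for the Moliere distribution.
sigma_mcs = 2.0  # mm, multiple-scattering width (invented)
core = np.exp(-x**2 / (2 * sigma_mcs**2))

# Two-parameter nuclear tail (hypothetical functional form).
b, c = 0.02, 8.0
tail = b / (1 + (x / c) ** 2) ** 2

profile = core + tail
profile /= profile.sum() * dx  # normalise to unit integral

# Fold in a Gaussian beam spot, as the model convolves with beam and detector.
sigma_beam = 3.0  # mm (invented)
beam = np.exp(-x**2 / (2 * sigma_beam**2))
beam /= beam.sum() * dx
measured = np.convolve(profile, beam, mode="same") * dx

print(abs(measured.sum() * dx - 1.0) < 0.01)  # True: the integral is preserved
```

The convolution broadens the core but leaves the total fluence unchanged, which is why the tail parameters can be fitted independently of the beam optics.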
A small terminal for satellite communication systems
NASA Technical Reports Server (NTRS)
Xiong, Fuqin; Wu, Dong; Jin, Min
1994-01-01
A small, portable, low-cost satellite communications terminal incorporating a modulator/demodulator and a convolutional coder with Viterbi decoder is described. Advances in signal processing and error-correction techniques, in combination with higher power and higher frequencies aboard satellites, allow more efficient use of the space segment. This makes it possible to design small, economical earth stations. The Advanced Communications Technology Satellite (ACTS) was chosen to test the system. ACTS, operating in the Ka band, incorporates higher power and higher frequency, with frequency and spatial reuse through spot beams and polarization.
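The convolutional-coder/Viterbi-decoder pair at the heart of such a terminal can be illustrated with a toy rate-1/2, constraint-length-3 code (generators 7 and 5 octal); this is a standard textbook code, not necessarily the one the terminal used:

```python
def _out(reg, g=(0b111, 0b101)):
    """Two coded bits: parities of the 3-bit register against each generator."""
    return [bin(reg & gi).count("1") & 1 for gi in g]

def encode(bits):
    state = 0  # two most recent input bits
    out = []
    for b in bits:
        reg = (b << 2) | state
        out += _out(reg)
        state = reg >> 1
    return out

def viterbi(rx):
    """Hard-decision Viterbi decoding: keep the best path into each state."""
    surv = {0: (0, [])}  # state -> (Hamming metric, decoded bits)
    for i in range(0, len(rx), 2):
        nxt = {}
        for state, (m, path) in surv.items():
            for b in (0, 1):
                reg = (b << 2) | state
                d = sum(e != r for e, r in zip(_out(reg), rx[i:i + 2]))
                cand = (m + d, path + [b])
                s2 = reg >> 1
                if s2 not in nxt or cand[0] < nxt[s2][0]:
                    nxt[s2] = cand
        surv = nxt
    return min(surv.values())[1]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
code = encode(msg)
code[3] ^= 1                     # flip one channel bit
print(viterbi(code) == msg)      # True: the single error is corrected
```

The free distance of this code is 5, so isolated channel errors like the one injected above are corrected by the maximum-likelihood path search.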
Dental Faculty Accuracy When Using Diagnostic Codes: A Pilot Study.
Sutton, Jeanne C; Fay, Rose-Marie; Huynh, Carolyn P; Johnson, Cleverick D; Zhu, Liang; Quock, Ryan L
2017-05-01
The aim of this study was to examine the accuracy of dental faculty members' utilization of diagnostic codes and resulting treatment planning based on radiographic interproximal tooth radiolucencies. In 2015, 50 full-time and part-time general dentistry faculty members at one U.S. dental school were shown a sequence of 15 bitewing radiographs; one interproximal radiolucency was highlighted on each bitewing. For each radiographic lesion, participants were asked to choose the most appropriate diagnostic code (from a concise list of five codes, corresponding to lesion progression to outer/inner halves of enamel and outer/middle/pulpal thirds of dentin), acute treatment (attempt to arrest/remineralize non-invasively, operative intervention, or no treatment), and level of confidence in choices. Diagnostic and treatment choices of participants were compared to "gold standard" correct responses, as determined by expert radiology and operative faculty members, respectively. The majority of the participants selected the correct diagnostic code for lesions in the outer one-third of dentin (p<0.0001) and the pulpal one-third of dentin (p<0.0001). For lesions in the outer and inner halves of enamel and the middle one-third of dentin, the correct rates were moderate. However, the majority of the participants chose correct treatments on all types of lesions (correct rate 63.6-100%). Faculty members' confidence in their responses was generally high for all lesions, all above 90%. Diagnostic codes were appropriately assigned by participants for the very deepest lesions, but they were not assigned accurately for more incipient lesions (limited to enamel). Paradoxically, treatment choices were generally correct, regardless of diagnostic choices. Further calibration is needed to improve faculty use and teaching of diagnostic codes.
Automated error correction in IBM quantum computer and explicit generalization
NASA Astrophysics Data System (ADS)
Ghosh, Debjit; Agarwal, Pratik; Pandey, Pratyush; Behera, Bikash K.; Panigrahi, Prasanta K.
2018-06-01
Construction of a fault-tolerant quantum computer remains a challenging problem due to unavoidable noise and fragile quantum states. However, this goal can be achieved by introducing quantum error-correcting codes. Here, we experimentally realize an automated error correction code and demonstrate the nondestructive discrimination of GHZ states in the IBM 5-qubit quantum computer. After performing quantum state tomography, we obtain the experimental results with high fidelity. Finally, we generalize the investigated code to the maximally entangled n-qudit case, which can both detect and automatically correct any arbitrary phase-change error, any phase-flip error, any bit-flip error, or a combination of all of these error types.
Improving energy efficiency in handheld biometric applications
NASA Astrophysics Data System (ADS)
Hoyle, David C.; Gale, John W.; Schultz, Robert C.; Rakvic, Ryan N.; Ives, Robert W.
2012-06-01
With improved smartphone and tablet technology, it is becoming increasingly feasible to implement powerful biometric recognition algorithms on portable devices. Typical iris recognition algorithms, such as Ridge Energy Direction (RED), utilize two-dimensional convolution in their implementation. This paper explores the energy consumption implications of 12 different methods of implementing two-dimensional convolution on a portable device. Typically, convolution is implemented using floating-point operations. If a given algorithm implemented integer convolution instead of floating-point convolution, it could drastically reduce the energy consumed by the processor. The 12 methods compared fall into 4 major categories: Integer C, Integer Java, Floating Point C, and Floating Point Java. Each major category is further divided into 3 implementations: variable-size looped convolution, static-size looped convolution, and unrolled looped convolution. All testing was performed using the HTC Thunderbolt, with energy measured directly using a Tektronix TDS5104B Digital Phosphor oscilloscope. Results indicate that energy savings as high as 75% are possible by using Integer C versus Floating Point C. Considering the relative proportion of processing time that convolution is responsible for in a typical algorithm, the savings in energy would likely result in significantly greater time between battery charges.
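The comparison rests on the fact that a convolution with an integer-valued kernel produces identical results in integer and floating-point arithmetic, so the cheaper integer path loses nothing. A direct 2-D convolution illustrating this (the kernel here is an invented example, not the RED filter):

```python
def conv2d(image, kernel):
    """Direct 2-D convolution over the valid region; works for int or float."""
    kh, kw = len(kernel), len(kernel[0])
    H, W = len(image), len(image[0])
    out = []
    for i in range(H - kh + 1):
        row = []
        for j in range(W - kw + 1):
            row.append(sum(image[i + u][j + v] * kernel[u][v]
                           for u in range(kh) for v in range(kw)))
        out.append(row)
    return out

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
k_int = [[1, 0], [0, -1]]           # integer kernel
k_flt = [[1.0, 0.0], [0.0, -1.0]]   # the same kernel in floating point

print(conv2d(img, k_int))           # [[-4, -4], [-4, -4]]
print(conv2d(img, k_int) == [[int(x) for x in r] for r in conv2d(img, k_flt)])
```

Both runs compute the same sums; only the arithmetic unit exercised differs, which is where the measured energy gap comes from.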
Caskey, Rachel N; Abutahoun, Angelos; Polick, Anne; Barnes, Michelle; Srivastava, Pavan; Boyd, Andrew D
2018-05-04
The US health care system uses diagnostic codes for billing and reimbursement as well as for quality assessment and measuring clinical outcomes. The US transitioned to the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) in October 2015. Little is known about the impact of ICD-10-CM on internal medicine and medicine subspecialists. We used a state-wide data set from Illinois Medicaid restricted to internal medicine providers and subspecialists. A total of 3191 ICD-9-CM codes were used for 51,078 patient encounters, for a total cost of US $26,022,022 for all internal medicine. We categorized all of the ICD-9-CM codes based on the complexity of mapping to ICD-10-CM, as codes with complex mapping could result in billing or administrative errors during the transition. Codes found to have complex mapping and frequently used codes (n = 295) were analyzed for clinical accuracy of mapping to ICD-10-CM. Each subspecialty was analyzed for the complexity of the codes used and the proportion of reimbursement associated with complex codes. Twenty-five percent of internal medicine codes have convoluted mapping to ICD-10-CM; these represent 22% of Illinois Medicaid patients and 30% of reimbursements. Rheumatology and Endocrinology had the greatest proportion of visits and reimbursement associated with complex codes. We found that 14.5% of the ICD-9-CM codes used by internists, when mapped to ICD-10-CM, resulted in potential clinical inaccuracies. We also identified that 43% of the diagnostic codes evaluated, which are used by internists and account for 14% of internal medicine reimbursements, are associated with mappings that could result in administrative errors.
Error control for reliable digital data transmission and storage systems
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Deng, R. H.
1985-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple-bit (or byte) per chip basis. For example, some 256K-bit DRAMs are organized as 32K × 8-bit bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient, low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. In this paper we present some special decoding techniques for extended single- and double-error-correcting RS codes which are capable of high-speed operation. These techniques are designed to find the error locations and the error values directly from the syndrome, without having to use the iterative algorithm to find the error locator polynomial. Two codes are considered: (1) a d_min = 4 single-byte-error-correcting (SBEC), double-byte-error-detecting (DBED) RS code; and (2) a d_min = 6 double-byte-error-correcting (DBEC), triple-byte-error-detecting (TBED) RS code.
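The direct-from-syndrome idea can be shown in miniature with a single-symbol-error code over GF(16): with syndromes S0 = Σ rᵢ and S1 = Σ rᵢαⁱ, a lone error of value e at position j gives S0 = e and S1 = eαʲ, so the location is log_α(S1/S0) and the value is S0 itself, with no error-locator polynomial needed. This toy code illustrates the technique; it is not the paper's d_min = 4 or d_min = 6 byte-oriented codes:

```python
# GF(16) arithmetic via exp/log tables (primitive polynomial x^4 + x + 1).
EXP, LOG = [0] * 30, [0] * 16
v = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = v
    LOG[v] = i
    v <<= 1
    if v & 0x10:
        v ^= 0b10011

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def gf_div(a, b):
    return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 15]

def syndromes(r):
    s0 = s1 = 0
    for i, ri in enumerate(r):
        s0 ^= ri
        s1 ^= gf_mul(ri, EXP[i])
    return s0, s1

def correct_single(r):
    """Locate and fix one symbol error directly from the syndromes."""
    s0, s1 = syndromes(r)
    if s0 == 0 and s1 == 0:
        return r                      # no error
    pos = (LOG[s1] - LOG[s0]) % 15    # error position: log_a(S1 / S0)
    fixed = list(r)
    fixed[pos] ^= s0                  # error value is S0 itself
    return fixed

def encode(data13):
    """Append two check symbols so that both syndromes vanish."""
    d0 = d1 = 0
    for i, d in enumerate(data13, start=2):
        d0 ^= d
        d1 ^= gf_mul(d, EXP[i])
    p1 = gf_div(d0 ^ d1, 1 ^ EXP[1])  # solve p0 + p1 = d0, p0 + a*p1 = d1
    return [d0 ^ p1, p1] + list(data13)

cw = encode([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13])
rx = list(cw)
rx[6] ^= 0b1010                       # corrupt one 4-bit symbol
print(correct_single(rx) == cw)       # True
```

The decode is a constant number of table lookups per syndrome pair, which is the property that makes this style of decoder fast enough for memory applications.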
NASA Astrophysics Data System (ADS)
QingJie, Wei; WenBin, Wang
2017-06-01
In this paper, image retrieval using a deep convolutional neural network combined with regularization and the PReLU activation function is studied to improve retrieval accuracy. A deep convolutional neural network can not only simulate the process by which the human brain receives and transmits information, but also contains convolution operations, which makes it well suited to processing images. Using a deep convolutional neural network is better than directly extracting visual image features for retrieval. However, the structure of a deep convolutional neural network is complex, and it is prone to over-fitting, which reduces retrieval accuracy. In this paper, we combine L1 regularization and the PReLU activation function to construct a deep convolutional neural network that prevents over-fitting and improves the accuracy of image retrieval.
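The two ingredients combined here are simple to state: PReLU keeps a small (in practice learned) slope on negative inputs, and L1 regularization adds a weight-magnitude penalty to the loss. A minimal numpy sketch with invented values, not the paper's network:

```python
import numpy as np

def prelu(x, a=0.25):
    """Parametric ReLU: identity for x > 0, slope `a` for x <= 0.
    In a real network `a` is a learned parameter per channel."""
    return np.where(x > 0, x, a * x)

def loss_with_l1(data_loss, weights, lam=1e-4):
    """Total loss = data loss + lambda * sum of absolute weight values.
    The L1 penalty drives weights toward zero, discouraging over-fitting."""
    return data_loss + lam * sum(np.abs(w).sum() for w in weights)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(prelu(x))  # negative inputs scaled by 0.25, positives passed through

w = [np.array([[1.0, -2.0], [0.5, 0.0]])]
print(loss_with_l1(0.3, w, lam=0.01))  # data loss plus 0.01 * 3.5 penalty
```

During training the L1 term is minimized jointly with the data loss, so the gradient step shrinks every weight by a constant amount per update.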
NASA Astrophysics Data System (ADS)
Caporale, F.; Celiberto, F. G.; Chachamis, G.; Gómez, D. Gordo; Vera, A. Sabio
2017-04-01
Recently, a new family of observables consisting of azimuthal-angle generalized ratios was proposed in a kinematical setup that resembles the usual Mueller-Navelet jets but with an additional tagged jet in the central region of rapidity. Nontagged minijet activity between the three jets can significantly affect the azimuthal-angle orientation of the jets and is accounted for by the introduction of two Balitsky-Fadin-Kuraev-Lipatov (BFKL) gluon Green functions. Here, we calculate the presumably most relevant higher-order corrections to the observables by now convoluting the three leading-order jet vertices with two gluon Green functions at next-to-leading logarithmic approximation. The corrections appear to be mostly moderate, giving us confidence that the recently proposed observables are actually an excellent way to probe the BFKL dynamics at the LHC. Furthermore, we allow the jets to take values in different rapidity bins in various configurations, such that a comparison between our predictions and the experimental data is a straightforward task.
Error-correcting codes in computer arithmetic.
NASA Technical Reports Server (NTRS)
Massey, J. L.; Garcia, O. N.
1972-01-01
Summary of the most important results so far obtained in the theory of coding for the correction and detection of errors in computer arithmetic. Attempts to satisfy the stringent reliability demands upon the arithmetic unit are considered, and special attention is given to attempts to incorporate redundancy into the numbers themselves which are being processed so that erroneous results can be detected and corrected.
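A classic example of the redundancy-in-the-numbers idea surveyed here is the AN code: operands are premultiplied by a constant A, sums of codewords remain codewords, and a single-bit arithmetic fault shifts the result by ±2ⁱ, which for A = 3 is never divisible by A and is therefore detected:

```python
A = 3  # AN-code multiplier: the integer N is represented as A*N

def an_encode(n):
    return A * n

def an_check(v):
    """A value is a valid codeword iff it is divisible by A."""
    return v % A == 0

x, y = an_encode(17), an_encode(25)
s = x + y                  # addition carries codewords to codewords
assert an_check(s) and s // A == 42

bad = s ^ (1 << 4)         # a single-bit fault changes s by +/- 2^4
print(an_check(bad))       # False: 2^4 is not a multiple of 3, fault detected
```

The detection works for any single-bit flip because 2ⁱ mod 3 is always 1 or 2, never 0; correction requires additional residue information beyond this sketch.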
Trace-shortened Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Solomon, G.
1994-01-01
Reed-Solomon (RS) codes have been part of standard NASA telecommunications systems for many years. RS codes are character-oriented error-correcting codes, and their principal use in space applications has been as outer codes in concatenated coding systems. However, for a given character size, say m bits, RS codes are limited to a length of, at most, 2^m. It is known in theory that longer character-oriented codes would be superior to RS codes in concatenation applications, but until recently no practical class of 'long' character-oriented codes had been discovered. In 1992, however, Solomon discovered an extensive class of such codes, which are now called trace-shortened Reed-Solomon (TSRS) codes. In this article, we will continue the study of TSRS codes. Our main result is a formula for the dimension of any TSRS code, as a function of its error-correcting power. Using this formula, we will give several examples of TSRS codes, some of which look very promising as candidate outer codes in high-performance coded telecommunications systems.
Klann, Jeffrey G; Phillips, Lori C; Turchin, Alexander; Weiler, Sarah; Mandl, Kenneth D; Murphy, Shawn N
2015-12-11
Interoperable phenotyping algorithms, needed to identify patient cohorts meeting eligibility criteria for observational studies or clinical trials, require medical data in a consistent structured, coded format. Data heterogeneity limits such algorithms' applicability. Existing approaches are often: not widely interoperable; or, have low sensitivity due to reliance on the lowest common denominator (ICD-9 diagnoses). In the Scalable Collaborative Infrastructure for a Learning Healthcare System (SCILHS) we endeavor to use the widely-available Current Procedural Terminology (CPT) procedure codes with ICD-9. Unfortunately, CPT changes drastically year-to-year - codes are retired/replaced. Longitudinal analysis requires grouping retired and current codes. BioPortal provides a navigable CPT hierarchy, which we imported into the Informatics for Integrating Biology and the Bedside (i2b2) data warehouse and analytics platform. However, this hierarchy does not include retired codes. We compared BioPortal's 2014AA CPT hierarchy with Partners Healthcare's SCILHS datamart, comprising three-million patients' data over 15 years. 573 CPT codes were not present in 2014AA (6.5 million occurrences). No existing terminology provided hierarchical linkages for these missing codes, so we developed a method that automatically places missing codes in the most specific "grouper" category, using the numerical similarity of CPT codes. Two informaticians reviewed the results. We incorporated the final table into our i2b2 SCILHS/PCORnet ontology, deployed it at seven sites, and performed a gap analysis and an evaluation against several phenotyping algorithms. The reviewers found the method placed the code correctly with 97 % precision when considering only miscategorizations ("correctness precision") and 52 % precision using a gold-standard of optimal placement ("optimality precision"). 
High correctness precision meant that codes were placed in a reasonable hierarchical position that a reviewer can quickly validate. Lower optimality precision meant that codes were often not placed in the optimal hierarchical subfolder. The seven sites encountered few occurrences of codes outside our ontology, 93 % of which comprised just four codes. Our hierarchical approach correctly grouped retired and non-retired codes in most cases and extended the temporal reach of several important phenotyping algorithms. We developed a simple, easily validated, automated method to place retired CPT codes into the BioPortal CPT hierarchy. This complements existing hierarchical terminologies, which do not include retired codes. The approach's utility is confirmed by the high correctness precision and the successful grouping of retired with non-retired codes.
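The abstract does not reproduce the placement rule in detail; the sketch below illustrates one plausible reading of "numerical similarity", filing a missing code under the grouper that shares the longest leading-digit prefix. The grouper prefixes shown are invented, not actual CPT hierarchy entries:

```python
def place_code(missing, groupers):
    """Assign a retired code to the grouper sharing the longest common
    leading-character prefix (a hypothetical numerical-similarity rule)."""
    def shared_prefix(a, b):
        n = 0
        for x, y in zip(a, b):
            if x != y:
                break
            n += 1
        return n
    return max(groupers, key=lambda g: shared_prefix(missing, g))

groupers = ["9920", "9921", "9928", "7107"]  # invented grouper prefixes
print(place_code("99213", groupers))  # "9921"
```

A rule of this shape explains the reported precision pattern: a prefix match almost always lands in a reasonable branch (high correctness precision) but not necessarily in the single best subfolder (lower optimality precision).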
Jiao, Shuming; Jin, Zhi; Zhou, Changyuan; Zou, Wenbin; Li, Xia
2018-01-01
Quick response (QR) code has been employed as a data carrier for optical cryptosystems in many recent research works, and the error-correction coding mechanism allows the decrypted result to be noise free. However, in this paper, we point out for the first time that the Reed-Solomon coding algorithm in QR code is not a very suitable option for the nonlocally distributed speckle noise in optical cryptosystems from an information coding perspective. The average channel capacity is proposed to measure the data storage capacity and noise-resistant capability of different encoding schemes. We design an alternative 2D barcode scheme based on Bose-Chaudhuri-Hocquenghem (BCH) coding, which demonstrates substantially better average channel capacity than QR code in numerical simulated optical cryptosystems.
Automated Agatston score computation in non-ECG gated CT scans using deep learning
NASA Astrophysics Data System (ADS)
Cano-Espinosa, Carlos; González, Germán.; Washko, George R.; Cazorla, Miguel; San José Estépar, Raúl
2018-03-01
Introduction: The Agatston score is a well-established metric of cardiovascular disease related to clinical outcomes. It is computed from CT scans by a) measuring the volume and intensity of the atherosclerotic plaques and b) aggregating such information in an index. Objective: To generate a convolutional neural network that inputs a non-contrast chest CT scan and outputs the Agatston score associated with it directly, without a prior segmentation of Coronary Artery Calcifications (CAC). Materials and methods: We use a database of 5973 non-contrast, non-ECG gated chest CT scans where the Agatston score has been manually computed. The heart of each scan is cropped automatically using an object detector. The database is split into 4973 cases for training and 1000 for testing. We train a 3D deep convolutional neural network to regress the Agatston score directly from the extracted hearts. Results: The proposed method yields a Pearson correlation coefficient of r = 0.93 (p ≤ 0.0001) against the manual reference standard in the 1000 test cases. It further stratifies 72.6% of the cases correctly with respect to standard risk groups. This compares to more complex state-of-the-art methods based on prior segmentations of the CACs, which achieve r = 0.94 in ECG-gated pulmonary CT. Conclusions: A convolutional neural network can regress the Agatston score from the image of the heart directly, without a prior segmentation of the CACs. This is a new and simpler paradigm in Agatston score computation that yields results similar to the state-of-the-art literature.
High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks.
Rajkomar, Alvin; Lingam, Sneha; Taylor, Andrew G; Blum, Michael; Mongan, John
2017-02-01
The study aimed to determine if computer vision techniques rooted in deep learning can use a small set of radiographs to perform clinically relevant image classification with high fidelity. One thousand eight hundred eighty-five chest radiographs on 909 patients obtained between January 2013 and July 2015 at our institution were retrieved and anonymized. The source images were manually annotated as frontal or lateral and randomly divided into training, validation, and test sets. Training and validation sets were augmented to over 150,000 images using standard image manipulations. We then pre-trained a series of deep convolutional networks based on the open-source GoogLeNet with various transformations of the open-source ImageNet (non-radiology) images. These trained networks were then fine-tuned using the original and augmented radiology images. The model with highest validation accuracy was applied to our institutional test set and a publicly available set. Accuracy was assessed by using the Youden Index to set a binary cutoff for frontal or lateral classification. This retrospective study was IRB approved prior to initiation. A network pre-trained on 1.2 million greyscale ImageNet images and fine-tuned on augmented radiographs was chosen. The binary classification method correctly classified 100 % (95 % CI 99.73-100 %) of both our test set and the publicly available images. Classification was rapid, at 38 images per second. A deep convolutional neural network created using non-radiological images, and an augmented set of radiographs is effective in highly accurate classification of chest radiograph view type and is a feasible, rapid method for high-throughput annotation.
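The Youden index used above to set the binary cutoff is J = sensitivity + specificity − 1, maximized over candidate thresholds. A small sketch with invented scores and labels, not the study's data:

```python
def youden_cutoff(scores, labels, thresholds):
    """Pick the threshold maximising J = sensitivity + specificity - 1."""
    best, best_j = None, -1.0
    for t in thresholds:
        tp = sum(s >= t and l for s, l in zip(scores, labels))
        fn = sum(s < t and l for s, l in zip(scores, labels))
        tn = sum(s < t and not l for s, l in zip(scores, labels))
        fp = sum(s >= t and not l for s, l in zip(scores, labels))
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best, best_j = t, j
    return best, best_j

# Toy "frontal" probabilities: positives score high, negatives low.
scores = [0.9, 0.8, 0.75, 0.4, 0.3, 0.1]
labels = [1, 1, 1, 0, 0, 0]
print(youden_cutoff(scores, labels, [0.2, 0.5, 0.7]))  # (0.5, 1.0)
```

A perfectly separable score distribution, as in this toy example (and effectively in the study's 100 % result), yields J = 1 at the separating threshold.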
Fovea detection in optical coherence tomography using convolutional neural networks
NASA Astrophysics Data System (ADS)
Liefers, Bart; Venhuizen, Freerk G.; Theelen, Thomas; Hoyng, Carel; van Ginneken, Bram; Sánchez, Clara I.
2017-02-01
The fovea is an important clinical landmark that is used as a reference for assessing various quantitative measures, such as central retinal thickness or drusen count. In this paper we propose a novel method for automatic detection of the foveal center in Optical Coherence Tomography (OCT) scans. Although the clinician will generally aim to center the OCT scan on the fovea, post-acquisition image processing will give a more accurate estimate of the true location of the foveal center. A Convolutional Neural Network (CNN) was trained on a set of 781 OCT scans that classifies each pixel in the OCT B-scan with a probability of belonging to the fovea. Dilated convolutions were used to obtain a large receptive field, while maintaining pixel-level accuracy. In order to train the network more effectively, negative patches were sampled selectively after each epoch. After CNN classification of the entire OCT volume, the predicted foveal center was chosen as the voxel with maximum output probability, after applying an optimized three-dimensional Gaussian blurring. We evaluate the performance of our method on a data set of 99 OCT scans presenting different stages of Age-related Macular Degeneration (AMD). The fovea was correctly detected in 96.9% of the cases, with a mean distance error of 73 μm (±112 μm). This result was comparable to the performance of a second human observer, who obtained a mean distance error of 69 μm (±94 μm). Experiments showed that the proposed method is accurate and robust even in retinas heavily affected by pathology.
A bandwidth efficient coding scheme for the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
Pietrobon, Steven S.; Costello, Daniel J., Jr.
1991-01-01
As a demonstration of the performance capabilities of trellis codes using multidimensional signal sets, a Viterbi decoder was designed. The choice of code was based on two factors. The first factor was its application as a possible replacement for the coding scheme currently used on the Hubble Space Telescope (HST). The HST at present uses the rate 1/3, ν = 6 (with 2^ν = 64 states) convolutional code with Binary Phase Shift Keying (BPSK) modulation. With the modulator restricted to 3 Msym/s, this implies a data rate of only 1 Mbit/s, since the bandwidth efficiency is K = 1/3 bit/sym. This is a very bandwidth-inefficient scheme, although the system has the advantage of simplicity and large coding gain. The basic requirement from NASA was for a scheme with as large a K as possible. Since a satellite channel was being used, 8PSK modulation was selected. This allows a K of between 2 and 3 bit/sym. The next influencing factor was INTELSAT's intention of transmitting the SONET 155.52 Mbit/s standard data rate over the 72 MHz transponders on its satellites. This requires a bandwidth efficiency of around 2.5 bit/sym. A Reed-Solomon block code is used as an outer code to give very low bit error rates (BER). A 16-state, rate 5/6, 2.5 bit/sym, 4D-8PSK trellis code was selected. This code has reasonable complexity and a coding gain of 4.8 dB compared to uncoded 8PSK (2). This trellis code also has the advantage that it is 45 deg rotationally invariant. This means that the decoder needs only to synchronize to one of the two naturally mapped 8PSK signals in the signal set.
Deep multi-scale convolutional neural network for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Zhang, Feng-zhe; Yang, Xia
2018-04-01
In this paper, we propose a multi-scale convolutional neural network for the hyperspectral image classification task. Firstly, compared with conventional convolution, we utilize multi-scale convolutions, which possess larger receptive fields, to extract the spectral features of the hyperspectral image. We design a deep neural network with a multi-scale convolution layer that contains 3 different convolution kernel sizes. Secondly, to avoid overfitting of the deep neural network, dropout is utilized, which randomly deactivates neurons and slightly improves classification accuracy. In addition, techniques from deep learning such as ReLU are utilized. We conduct experiments on the University of Pavia and Salinas datasets and obtain better classification accuracy compared with other methods.
Optimizations of a Hardware Decoder for Deep-Space Optical Communications
NASA Technical Reports Server (NTRS)
Cheng, Michael K.; Nakashima, Michael A.; Moision, Bruce E.; Hamkins, Jon
2007-01-01
The National Aeronautics and Space Administration has developed a capacity-approaching modulation and coding scheme that comprises a serial concatenation of an inner accumulate pulse-position modulation (PPM) and an outer convolutional code [or serially concatenated PPM (SCPPM)] for deep-space optical communications. Decoding of this code uses the turbo principle. However, due to the nonbinary property of SCPPM, a straightforward application of classical turbo decoding is very inefficient. Here, we present various optimizations applicable to hardware implementations of the SCPPM decoder. More specifically, we feature a Super Gamma computation to efficiently handle parallel trellis edges, a pipeline-friendly 'maxstar top-2' circuit that reduces the max-only approximation penalty, a low-latency cyclic redundancy check circuit for window-based decoders, and a high-speed algorithmic polynomial interleaver that leads to memory savings. Using the featured optimizations, we implement a 6.72 megabits-per-second (Mbps) SCPPM decoder on a single field-programmable gate array (FPGA). Compared to the current data rate of 256 kilobits per second from Mars, the SCPPM coded scheme represents a throughput increase of more than twenty-six-fold. Extension to a 50-Mbps decoder on a board with multiple FPGAs follows naturally. We show through hardware simulations that the SCPPM coded system can operate within 1 dB of the Shannon capacity at nominal operating conditions.
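The max-star operation underlying turbo decoding is the Jacobian logarithm, max*(a, b) = max(a, b) + ln(1 + e^(-|a-b|)); dropping the correction term gives the max-only approximation. One plausible reading of the 'top-2' idea, sketched here as an assumption rather than the decoder's actual circuit, is to apply the correction using only the two largest inputs:

```python
import math

def max_star(a, b):
    """Jacobian logarithm: ln(e^a + e^b), computed stably."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_star_list(xs):
    """Exact log-sum-exp via chained max* (the operation is associative)."""
    acc = xs[0]
    for x in xs[1:]:
        acc = max_star(acc, x)
    return acc

def max_star_top2(xs):
    """Approximate ln(sum(e^x)) from only the two largest inputs
    (an illustrative reading of the 'maxstar top-2' idea)."""
    a, b = sorted(xs)[-2:]
    return max_star(a, b)

xs = [1.0, 0.5, -2.0, 0.9]
exact = math.log(sum(math.exp(v) for v in xs))
print(abs(max_star_list(xs) - exact) < 1e-12)   # True: chained max* is exact
print(max(xs) <= max_star_top2(xs) <= exact)    # True: top-2 narrows the gap
```

The top-2 value always lies between the max-only answer and the exact log-sum-exp, which is the sense in which such a circuit reduces the approximation penalty at modest hardware cost.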
CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.
Saegusa, Jun
2008-01-01
The representative point method for the efficiency calibration of volume samples has been previously proposed. For smoothly implementing the method, a calculation code named CREPT-MCNP has been developed. The code estimates the position of a representative point which is intrinsic to each shape of volume sample. The self-absorption correction factors are also given to make correction on the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.
Real-time correction of tsunami site effect by frequency-dependent tsunami-amplification factor
NASA Astrophysics Data System (ADS)
Tsushima, H.
2017-12-01
For tsunami early warning, I developed a frequency-dependent tsunami-amplification factor and used it to design a recursive digital filter applicable for real-time correction of the tsunami site response. In this study, I assumed that a tsunami waveform at an observing point could be modeled by the convolution of source, path and site effects in the time domain. Under this assumption, the spectral ratio between offshore and the nearby coast can be regarded as the site response (i.e. a frequency-dependent amplification factor). If the amplification factor can be prepared before tsunamigenic earthquakes, its temporal convolution with the offshore tsunami waveform provides a tsunami prediction at the coast in real time. In this study, tsunami waveforms calculated by tsunami numerical simulations were used to develop the frequency-dependent tsunami-amplification factor. First, I performed numerical tsunami simulations based on nonlinear shallow-water theory for many tsunamigenic earthquake scenarios, varying the seismic magnitudes and locations. The resultant tsunami waveforms at offshore and nearby coastal observing points were then used in a spectral-ratio analysis. The average of the resulting spectral ratios over the tsunamigenic-earthquake scenarios is regarded as the frequency-dependent amplification factor. Finally, the estimated amplification factor is used in the design of a recursive digital filter that is applicable in the time domain. The above procedure is applied to Miyako Bay on the Pacific coast of northeastern Japan. The averaged tsunami-height spectral ratio (i.e. amplification factor) between the location at the center of the bay and the outside shows a peak at a wave period of 20 min. A recursive digital filter based on the estimated amplification factor shows good performance in real-time correction of the tsunami-height amplification due to the site effect. This study is supported by Japan Society for the Promotion of Science (JSPS) KAKENHI grant 15K16309.
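The real-time filtering step can be illustrated with a minimal two-pole recursive filter whose response peaks at a 20-minute wave period, the resonance reported for the bay. The coefficients, pole radius, and gain below are illustrative assumptions, not the calibrated values from the study.

```python
import math

class RecursiveSiteFilter:
    """Two-pole recursive (IIR) filter applied sample by sample, so the
    coastal tsunami height can be predicted as each offshore sample
    arrives. Coefficients here are illustrative, not calibrated."""

    def __init__(self, b0, a1, a2):
        self.b0, self.a1, self.a2 = b0, a1, a2
        self.y1 = self.y2 = 0.0

    def step(self, x):
        y = self.b0 * x - self.a1 * self.y1 - self.a2 * self.y2
        self.y1, self.y2 = y, self.y1
        return y

# Resonator peaked at a 20-minute wave period, sampled once per minute.
dt, period = 60.0, 20 * 60.0           # seconds
w0 = 2 * math.pi * dt / period         # radians per sample
r = 0.95                               # pole radius (controls peak width)
filt = RecursiveSiteFilter(b0=1 - r, a1=-2 * r * math.cos(w0), a2=r * r)

# Amplification of a unit-amplitude offshore sine at the resonant period:
n = 2000
out = [filt.step(math.sin(w0 * i)) for i in range(n)]
gain = max(out[n // 2:])               # steady-state output amplitude
print(round(gain, 2))
```

Because the filter is causal and recursive, it needs only the current offshore sample and two past outputs per step, which is what makes the correction feasible in real time.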
NASA Astrophysics Data System (ADS)
Zhao, Shengmei; Wang, Le; Zou, Li; Gong, Longyan; Cheng, Weiwen; Zheng, Baoyu; Chen, Hanwu
2016-10-01
A free-space optical (FSO) communication link with multiplexed orbital angular momentum (OAM) modes has been demonstrated to largely enhance the system capacity without a corresponding increase in spectral bandwidth, but the performance of the link is unavoidably degraded by atmospheric turbulence (AT). In this paper, we propose a turbulence mitigation scheme to improve AT tolerance of the OAM-multiplexed FSO communication link using both channel coding and wavefront correction. In the scheme, we utilize a wavefront correction method to mitigate the phase distortion first, and then we use a channel code to further correct the errors in each OAM mode. The improvement of AT tolerance is discussed over the performance of the link with or without channel coding/wavefront correction. The results show that the bit error rate performance has been improved greatly. The detrimental effect of AT on the OAM-multiplexed FSO communication link could be removed by the proposed scheme even in the relatively strong turbulence regime, such as C_n^2 = 3.6 × 10^-14 m^-2/3.
Topological quantum error correction in the Kitaev honeycomb model
NASA Astrophysics Data System (ADS)
Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.
2017-08-01
The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.
Communications terminal breadboard
NASA Technical Reports Server (NTRS)
1972-01-01
A baseline design is presented of a digital communications link between an advanced manned spacecraft (AMS) and an earth terminal via an Intelsat 4 type communications satellite used as a geosynchronous orbiting relay station. The fabrication, integration, and testing of terminal elements at each end of the link are discussed. In the baseline link design, the information carrying capacity of the link was estimated for both the forward direction (earth terminal to AMS) and the return direction, based upon orbital geometry, relay satellite characteristics, terminal characteristics, and the improvement that can be achieved by the use of convolutional coding/Viterbi decoding techniques.
The application of LDPC code in MIMO-OFDM system
NASA Astrophysics Data System (ADS)
Liu, Ruian; Zeng, Beibei; Chen, Tingting; Liu, Nan; Yin, Ninghao
2018-03-01
The combination of MIMO and OFDM technology has become one of the key technologies of fourth-generation mobile communication, as it can overcome the frequency-selective fading of the wireless channel, increase the system capacity and improve frequency utilization. Error-correcting coding introduced into the system can further improve its performance. LDPC (low-density parity-check) codes are a kind of error-correcting code that can improve system reliability and anti-interference ability, and their decoding is simple and easy to operate. This paper mainly discusses the application of LDPC codes in the MIMO-OFDM system.
Transversal Clifford gates on folded surface codes
Moussa, Jonathan E.
2016-10-12
Surface and color codes are two forms of topological quantum error correction in two spatial dimensions with complementary properties. Surface codes have lower-depth error detection circuits and well-developed decoders to interpret and correct errors, while color codes have transversal Clifford gates and better code efficiency in the number of physical qubits needed to achieve a given code distance. A formal equivalence exists between color codes and folded surface codes, but it does not guarantee the transferability of any of these favorable properties. However, the equivalence does imply the existence of constant-depth circuit implementations of logical Clifford gates on folded surface codes. We achieve and improve this result by constructing two families of folded surface codes with transversal Clifford gates. This construction is presented generally for qudits of any dimension. Lastly, the specific application of these codes to universal quantum computation based on qubit fusion is also discussed.
A Very Low Cost BCH Decoder for High Immunity of On-Chip Memories
NASA Astrophysics Data System (ADS)
Seo, Haejun; Han, Sehwan; Heo, Yoonseok; Cho, Taewon
BCH (Bose-Chaudhuri-Hocquenghem) codes, a type of cyclic block code, have very strong error-correcting ability, which is vital for performing error protection on a memory system. BCH codes admit many decoding algorithms; among them, the PGZ (Peterson-Gorenstein-Zierler) algorithm is advantageous in that it corrects errors through a simple calculation for a given t value. However, it becomes problematic when a division by zero occurs in the case ν ≠ t. In this paper, the circuit is simplified by a multi-mode hardware architecture that handles ν = 0 to 3. First, production cost is lower thanks to the smaller number of gates. Second, the reduced power consumption lengthens the recharging period. The very low cost and simple datapath make our design a good choice as the ECC (Error Correction Code/Circuit) in the memory system of a small-footprint SoC (System on Chip).
Tutorial on Reed-Solomon error correction coding
NASA Technical Reports Server (NTRS)
Geisel, William A.
1990-01-01
This tutorial attempts to provide a frank, step-by-step approach to Reed-Solomon (RS) error correction coding. RS encoding and RS decoding both with and without erasing code symbols are emphasized. There is no need to present rigorous proofs and extreme mathematical detail. Rather, the simple concepts of groups and fields, specifically Galois fields, are presented with a minimum of complexity. Before RS codes are presented, other block codes are presented as a technical introduction into coding. A primitive (15, 9) RS coding example is then completely developed from start to finish, demonstrating the encoding and decoding calculations and a derivation of the famous error-locator polynomial. The objective is to present practical information about Reed-Solomon coding in a manner such that it can be easily understood.
Fast and Flexible Successive-Cancellation List Decoders for Polar Codes
NASA Astrophysics Data System (ADS)
Hashemi, Seyyed Ali; Condo, Carlo; Gross, Warren J.
2017-11-01
Polar codes have gained a significant amount of attention during the past few years and have been selected as a coding scheme for the next generation of mobile broadband standard. Among decoding schemes, successive-cancellation list (SCL) decoding provides a reasonable trade-off between the error-correction performance and hardware implementation complexity when used to decode polar codes, at the cost of limited throughput. The simplified SCL (SSCL) and its extension SSCL-SPC increase the speed of decoding by removing redundant calculations when encountering particular information and frozen bit patterns (rate one and single parity check codes), while keeping the error-correction performance unaltered. In this paper, we improve SSCL and SSCL-SPC by proving that the list size imposes a specific number of bit estimations required to decode rate one and single parity check codes. Thus, the number of estimations can be limited while guaranteeing exactly the same error-correction performance as if all bits of the code were estimated. We call the new decoding algorithms Fast-SSCL and Fast-SSCL-SPC. Moreover, we show that the number of bit estimations in a practical application can be tuned to achieve desirable speed, while keeping the error-correction performance almost unchanged. Hardware architectures implementing both algorithms are then described and implemented: it is shown that our design can achieve 1.86 Gb/s throughput, higher than the best state-of-the-art decoders.
Aeronautical audio broadcasting via satellite
NASA Technical Reports Server (NTRS)
Tzeng, Forrest F.
1993-01-01
A system design for aeronautical audio broadcasting, with C-band uplink and L-band downlink, via Inmarsat space segments is presented. Near-transparent-quality compression of 5-kHz bandwidth audio at 20.5 kbit/s is achieved based on a hybrid technique employing linear predictive modeling and transform-domain residual quantization. Concatenated Reed-Solomon/convolutional codes with quadrature phase shift keying are selected for bandwidth and power efficiency. An RF bandwidth of 25 kHz per channel and a decoded bit error rate of 10^-6 at an Eb/N0 of 3.75 dB are obtained. An interleaver, scrambler, modem synchronization, and frame format were designed, and frequency-division multiple access was selected over code-division multiple access. A link budget computation based on a worst-case scenario indicates sufficient system power margins. Transponder occupancy analysis for 72 audio channels demonstrates ample remaining capacity to accommodate emerging aeronautical services.
Mission science value-cost savings from the Advanced Imaging Communication System (AICS)
NASA Technical Reports Server (NTRS)
Rice, R. F.
1984-01-01
An Advanced Imaging Communication System (AICS) was proposed in the mid-1970s as an alternative to the Voyager data/communication system architecture. The AICS achieved virtually error free communication with little loss in the downlink data rate by concatenating a powerful Reed-Solomon block code with the Voyager convolutionally coded, Viterbi decoded downlink channel. The clean channel allowed AICS sophisticated adaptive data compression techniques. Both Voyager and the Galileo mission have implemented AICS components, and the concatenated channel itself is heading for international standardization. An analysis that assigns a dollar value/cost savings to AICS mission performance gains is presented. A conservative value or savings of $3 million for Voyager, $4.5 million for Galileo, and as much as $7 to 9.5 million per mission for future projects such as the proposed Mariner Mar 2 series is shown.
Deep-HiTS: Rotation Invariant Convolutional Neural Network for Transient Detection
NASA Astrophysics Data System (ADS)
Cabrera-Vives, Guillermo; Reyes, Ignacio; Förster, Francisco; Estévez, Pablo A.; Maureira, Juan-Carlos
2017-02-01
We introduce Deep-HiTS, a rotation-invariant convolutional neural network (CNN) model for classifying images of transient candidates into artifacts or real sources for the High cadence Transient Survey (HiTS). CNNs have the advantage of learning the features automatically from the data while achieving high performance. We compare our CNN model against a feature engineering approach using random forests (RFs). We show that our CNN significantly outperforms the RF model, reducing the error by almost half. Furthermore, for a fixed number of approximately 2000 allowed false transient candidates per night, we are able to reduce the misclassified real transients by approximately one-fifth. To the best of our knowledge, this is the first time CNNs have been used to detect astronomical transient events. Our approach will be very useful when processing images from next generation instruments such as the Large Synoptic Survey Telescope. We have made all our code and data available to the community for the sake of allowing further developments and comparisons at https://github.com/guille-c/Deep-HiTS. Deep-HiTS is licensed under the terms of the GNU General Public License v3.0.
Xu, W; LeBeau, J M
2018-05-01
We establish a series of deep convolutional neural networks to automatically analyze position averaged convergent beam electron diffraction patterns. The networks first calibrate the zero-order disk size, center position, and rotation without the need for pretreating the data. With the aligned data, additional networks then measure the sample thickness and tilt. The performance of the network is explored as a function of a variety of variables including thickness, tilt, and dose. A methodology to explore the response of the neural network to various pattern features is also presented. Processing patterns at a rate of ∼ 0.1 s/pattern, the network is shown to be orders of magnitude faster than a brute force method while maintaining accuracy. The approach is thus suitable for automatically processing big, 4D STEM data. We also discuss the generality of the method to other materials/orientations as well as a hybrid approach that combines the features of the neural network with least squares fitting for even more robust analysis. The source code is available at https://github.com/subangstrom/DeepDiffraction. Copyright © 2018 Elsevier B.V. All rights reserved.
Image quality of mixed convolution kernel in thoracic computed tomography.
Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar
2016-11-01
The mixed convolution kernel alters its properties locally according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernel. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman Test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium sized pulmonary vessels and abdomen (P < 0.004) but a lower image quality for trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.
Serang, Oliver
2015-08-01
Observations depending on sums of random variables are common throughout many fields; however, no efficient solution is currently known for performing max-product inference on these sums of general discrete distributions (max-product inference can be used to obtain maximum a posteriori estimates). The limiting step to max-product inference is the max-convolution problem (sometimes presented in log-transformed form and denoted as "infimal convolution," "min-convolution," or "convolution on the tropical semiring"), for which no O(k log(k)) method is currently known. Presented here is an O(k log(k)) numerical method for estimating the max-convolution of two nonnegative vectors (e.g., two probability mass functions), where k is the length of the larger vector. This numerical max-convolution method is then demonstrated by performing fast max-product inference on a convolution tree, a data structure for performing fast inference given information on the sum of n discrete random variables in O(nk log(nk)log(n)) steps (where each random variable has an arbitrary prior distribution on k contiguous possible states). The numerical max-convolution method can be applied to specialized classes of hidden Markov models to reduce the runtime of computing the Viterbi path from nk^2 to nk log(k), and has potential application to the all-pairs shortest paths problem.
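For reference, the max-convolution in question is defined by r[k] = max over i+j=k of u[i]·v[j]. A direct sketch of that definition, quadratic in the vector lengths, is the exact baseline that the O(k log(k)) numerical method approximates and accelerates:

```python
import numpy as np

def max_convolve(u, v):
    """Direct O(n*m) max-convolution of two nonnegative vectors:
    r[k] = max_{i+j=k} u[i] * v[j]."""
    r = np.zeros(len(u) + len(v) - 1)
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            r[i + j] = max(r[i + j], ui * vj)
    return r

u = np.array([0.1, 0.6, 0.3])   # toy probability mass functions
v = np.array([0.5, 0.5])
result = max_convolve(u, v)
print(result)  # [0.05 0.3  0.3  0.15]
```

Replacing max with sum recovers the ordinary convolution (which FFTs handle in O(k log(k))); the max makes that trick inapplicable directly, which is why the paper resorts to a numerical approximation.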
Noise Estimation and Adaptive Encoding for Asymmetric Quantum Error Correcting Codes
NASA Astrophysics Data System (ADS)
Florjanczyk, Jan; Brun, Todd; Center for Quantum Information Science and Technology Team
We present a technique that improves the performance of asymmetric quantum error correcting codes in the presence of biased qubit noise channels. Our study is motivated by considering what useful information can be learned from the statistics of syndrome measurements in stabilizer quantum error correcting codes (QECC). We consider the case of a qubit dephasing channel where the dephasing axis is unknown and time-varying. We are able to estimate the dephasing angle from the statistics of the standard syndrome measurements used in stabilizer QECCs. We use this estimate to rotate the computational basis of the code in such a way that the most likely type of error is covered by the highest distance of the asymmetric code. In particular, we use the [[15,1,3]] shortened Reed-Muller code which can correct one phase-flip error but up to three bit-flip errors. In our simulations, we tune the computational basis to match the estimated dephasing axis which in turn leads to a decrease in the probability of a phase-flip error. With a sufficiently accurate estimate of the dephasing axis, our memory's effective error is dominated by the much lower probability of four bit-flips. ARO MURI Grant No. W911NF-11-1-0268.
Group delay variations of GPS transmitting and receiving antennas
NASA Astrophysics Data System (ADS)
Wanninger, Lambert; Sumaya, Hael; Beer, Susanne
2017-09-01
GPS code pseudorange measurements exhibit group delay variations at the transmitting and the receiving antenna. We calibrated C1 and P2 delay variations with respect to dual-frequency carrier phase observations and obtained nadir-dependent corrections for 32 satellites of the GPS constellation in early 2015 as well as elevation-dependent corrections for 13 receiving antenna models. The combined delay variations reach up to 1.0 m (3.3 ns) in the ionosphere-free linear combination for specific pairs of satellite and receiving antennas. Applying these corrections to the code measurements improves code/carrier single-frequency precise point positioning, ambiguity fixing based on the Melbourne-Wübbena linear combination, and determination of ionospheric total electron content. It also affects fractional cycle biases and differential code biases.
ERIC Educational Resources Information Center
Rice, Bart F.; Wilde, Carroll O.
It is noted that with the prominence of computers in today's technological society, digital communication systems have become widely used in a variety of applications. Some of the problems that arise in digital communications systems are described. This unit presents the problem of correcting errors in such systems. Error correcting codes are…
NASA Technical Reports Server (NTRS)
Truong, T. K.; Hsu, I. S.; Eastman, W. L.; Reed, I. S.
1987-01-01
It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation needed to decode a Reed-Solomon (RS) code. A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained, simultaneously and simply, by the Euclidean algorithm only. With this improved technique the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.
Measurement-free implementations of small-scale surface codes for quantum-dot qubits
NASA Astrophysics Data System (ADS)
Ercan, H. Ekmel; Ghosh, Joydip; Crow, Daniel; Premakumar, Vickram N.; Joynt, Robert; Friesen, Mark; Coppersmith, S. N.
2018-01-01
The performance of quantum-error-correction schemes depends sensitively on the physical realizations of the qubits and the implementations of various operations. For example, in quantum-dot spin qubits, readout is typically much slower than gate operations, and conventional surface-code implementations that rely heavily on syndrome measurements could therefore be challenging. However, fast and accurate reset of quantum-dot qubits, without readout, can be achieved via tunneling to a reservoir. Here we propose small-scale surface-code implementations for which syndrome measurements are replaced by a combination of Toffoli gates and qubit reset. For quantum-dot qubits, this enables much faster error correction than measurement-based schemes, but requires additional ancilla qubits and non-nearest-neighbor interactions. We have performed numerical simulations of two different coding schemes, obtaining error thresholds on the orders of 10-2 for a one-dimensional architecture that only corrects bit-flip errors and 10-4 for a two-dimensional architecture that corrects bit- and phase-flip errors.
Han, Sangkwon; Bae, Hyung Jong; Kim, Junhoi; Shin, Sunghwan; Choi, Sung-Eun; Lee, Sung Hoon; Kwon, Sunghoon; Park, Wook
2012-11-20
A QR-coded microtaggant for the anti-counterfeiting of drugs is proposed that can provide high capacity and error-correction capability. It is fabricated lithographically in a microfluidic channel with special consideration of the island patterns in the QR Code. The microtaggant is incorporated in the drug capsule ("on-dose authentication") and can be read by a simple smartphone QR Code reader application when removed from the capsule and washed free of drug. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
DNA Barcoding through Quaternary LDPC Codes
Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar
2015-01-01
For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10^-2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10^-9 at the expense of a rate of read losses just in the order of 10^-6. PMID:26492348
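The demultiplexing step can be illustrated with a minimal bounded-distance assignment of reads to barcode codewords: correct within the distance bound, reject (a read loss) beyond it. The 4-nt codewords and distance bound below are toy assumptions; actual quaternary LDPC decoding is iterative and considerably more involved.

```python
def hamming(a, b):
    """Number of mismatched positions between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def assign_read(read, barcodes, max_dist=1):
    """Assign a read's barcode to the nearest codeword; reject the read
    (read loss) if no codeword lies within max_dist mismatches."""
    best = min(barcodes, key=lambda c: hamming(read, c))
    return best if hamming(read, best) <= max_dist else None

codes = ["ACGT", "TTAA", "GGCC"]          # toy barcode set
print(assign_read("ACGA", codes))  # ACGT (one mismatch corrected)
print(assign_read("ATAA", codes))  # TTAA
print(assign_read("AGCA", codes))  # None (nearest code is 2 mismatches away)
```

A misidentification would occur if errors carried a read within the distance bound of the wrong codeword, which is why the minimum distance of the barcode set governs the trade-off the abstract quantifies.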
Simulation of image formation in x-ray coded aperture microscopy with polycapillary optics.
Korecki, P; Roszczynialski, T P; Sowa, K M
2015-04-06
In x-ray coded aperture microscopy with polycapillary optics (XCAMPO), the microstructure of focusing polycapillary optics is used as a coded aperture and enables depth-resolved x-ray imaging at a resolution better than the focal spot dimensions. Improvements in the resolution and development of 3D encoding procedures require a simulation model that can predict the outcome of XCAMPO experiments. In this work we introduce a model of image formation in XCAMPO which enables calculation of XCAMPO datasets for arbitrary positions of the object relative to the focal plane as well as incorporation of optics imperfections. In the model, the exit surface of the optics is treated as a micro-structured x-ray source that illuminates a periodic object. This makes it possible to express the intensity of XCAMPO images as a convolution series and to perform simulations by means of fast Fourier transforms. For non-periodic objects, the model can be applied by enforcing artificial periodicity and setting the spatial period larger than the field-of-view. Simulations are verified by comparison with experimental data.
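The periodicity trick can be sketched directly: once the object is (artificially) periodic, each term of the convolution series is a circular convolution, which FFTs evaluate in O(N log N). A minimal illustration follows; the delta-function object used as a check is illustrative, not from the paper.

```python
import numpy as np

def periodic_convolve2d(obj, psf):
    """Circular 2D convolution via FFTs: multiply the 2D spectra and
    transform back. Valid when the object is periodic over the grid,
    which XCAMPO enforces artificially for non-periodic objects."""
    return np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)))

# Sanity check: a delta-function object reproduces the illumination
# pattern itself (convolution with a delta is the identity).
psf = np.arange(16, dtype=float).reshape(4, 4)
obj = np.zeros((4, 4))
obj[0, 0] = 1.0
print(np.allclose(periodic_convolve2d(obj, psf), psf))  # True
```

Setting the spatial period larger than the field-of-view, as the abstract notes, keeps the circular wrap-around from contaminating the region of interest.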
Learning Midlevel Auditory Codes from Natural Sound Statistics.
Młynarski, Wiktor; McDermott, Josh H
2018-03-01
Interaction with the world requires an organism to transform sensory signals into representations in which behaviorally meaningful properties of the environment are made explicit. These representations are derived through cascades of neuronal processing stages in which neurons at each stage recode the output of preceding stages. Explanations of sensory coding may thus involve understanding how low-level patterns are combined into more complex structures. To gain insight into such midlevel representations for sound, we designed a hierarchical generative model of natural sounds that learns combinations of spectrotemporal features from natural stimulus statistics. In the first layer, the model forms a sparse convolutional code of spectrograms using a dictionary of learned spectrotemporal kernels. To generalize from specific kernel activation patterns, the second layer encodes patterns of time-varying magnitude of multiple first-layer coefficients. When trained on corpora of speech and environmental sounds, some second-layer units learned to group similar spectrotemporal features. Others instantiate opponency between distinct sets of features. Such groupings might be instantiated by neurons in the auditory cortex, providing a hypothesis for midlevel neuronal computation.
Enhanced online convolutional neural networks for object tracking
NASA Astrophysics Data System (ADS)
Zhang, Dengzhuo; Gao, Yun; Zhou, Hao; Li, Tianwen
2018-04-01
In recent years, object tracking based on convolutional neural networks has gained increasing attention. The initialization and update of the convolution filters directly affect tracking precision. In this paper, a novel object tracker based on an enhanced online convolutional neural network without offline training is proposed, which initializes the convolution filters with a k-means++ algorithm and updates them by error back-propagation. Comparative experiments with 7 trackers on 15 challenging sequences showed that our tracker outperforms the others in terms of AUC and precision.
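The k-means++ filter initialization mentioned above can be sketched as follows; the patch dimensions and data are hypothetical, and this shows only the seeding step, not the full tracker:

```python
import numpy as np

def kmeanspp_filters(patches, k, rng):
    """k-means++ seeding: pick initial filters from image patches,
    each new one chosen with probability proportional to its squared
    distance from the nearest filter chosen so far."""
    filters = [patches[rng.integers(len(patches))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((patches - f) ** 2, axis=1) for f in filters], axis=0)
        probs = d2 / d2.sum()
        filters.append(patches[rng.choice(len(patches), p=probs)])
    return np.stack(filters)

rng = np.random.default_rng(1)
patches = rng.random((500, 25))      # 500 flattened 5x5 patches (illustrative)
filters = kmeanspp_filters(patches, k=8, rng=rng)
```

The distance-weighted sampling spreads the initial filters over the patch distribution, which is why it makes a better starting point for online filter updates than uniform random initialization.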
Zhao, Yu; Ge, Fangfei; Liu, Tianming
2018-07-01
fMRI data decomposition techniques have advanced significantly from shallow models such as Independent Component Analysis (ICA) and Sparse Coding and Dictionary Learning (SCDL) to deep learning models such as Deep Belief Networks (DBN) and Deep Convolutional Autoencoders (DCAE). However, interpretation of the decomposed networks remains an open question due to the lack of functional brain atlases, the absence of correspondence among decomposed or reconstructed networks across different subjects, and significant individual variability. Recent studies showed that deep learning, especially deep convolutional neural networks (CNN), has an extraordinary ability to accommodate spatial object patterns; e.g., our recent work using 3D CNNs for fMRI-derived network classification achieved high accuracy with remarkable tolerance for mislabelled training brain networks. However, training data preparation is one of the biggest obstacles for these supervised deep learning models in functional brain network map recognition, since manual labelling requires tedious and time-consuming labour and sometimes even introduces label mistakes. For mapping functional networks in large-scale datasets, such as the hundreds of thousands of brain networks used in this paper, manual labelling becomes almost infeasible. In response, in this work we tackled both the network recognition and the training data labelling tasks by proposing a new iteratively optimized deep learning CNN (IO-CNN) framework with automatic weak label initialization, which turns functional brain network recognition into a fully automatic large-scale classification procedure. Our extensive experiments based on fMRI data from 1099 brains in ABIDE-II showed the great promise of our IO-CNN framework. Copyright © 2018 Elsevier B.V. All rights reserved.
A method for exponential propagation of large systems of stiff nonlinear differential equations
NASA Technical Reports Server (NTRS)
Friesner, Richard A.; Tuckerman, Laurette S.; Dornblaser, Bright C.; Russo, Thomas V.
1989-01-01
A new time integrator for large, stiff systems of linear and nonlinear coupled differential equations is described. For linear systems, the method consists of forming a small (5-15-term) Krylov space using the Jacobian of the system and carrying out exact exponential propagation within this space. Nonlinear corrections are incorporated via a convolution integral formalism; the integral is evaluated via approximate Krylov methods as well. Gains in efficiency ranging from factors of 2 to 30 are demonstrated for several test problems as compared to a forward Euler scheme and to the integration package LSODE.
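A minimal sketch of the core idea above, exact exponential propagation within a small Krylov space built from the system matrix, might look like this; the test matrix, dimensions, and tolerances are illustrative, not from the paper:

```python
import numpy as np
from scipy.linalg import expm

def krylov_expm(A, v, h, m=15):
    """Approximate exp(h*A) @ v within an m-dimensional Krylov space
    built by Arnoldi iteration, then exponentiate the small projected
    matrix exactly."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # orthogonalize against previous vectors
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:         # happy breakdown: space is invariant
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V[:, :m] @ (expm(h * H[:m, :m]) @ e1)

# Stiff linear test system y' = A y with eigenvalues spread over two decades.
A = np.diag(-np.logspace(0, 2, 50))
y0 = np.ones(50)
y1 = krylov_expm(A, y0, h=0.01, m=15)
```

The expensive n-dimensional exponential is replaced by an exact exponential of a small (here 15x15) Hessenberg matrix, which is the source of the efficiency gains over explicit stepping for stiff systems.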
Real-time dose computation: GPU-accelerated source modeling and superposition/convolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacques, Robert; Wong, John; Taylor, Russell
Purpose: To accelerate dose calculation to interactive rates using highly parallel graphics processing units (GPUs). Methods: The authors have extended their prior work in GPU-accelerated superposition/convolution with a modern dual-source model and have enhanced performance. The primary source algorithm supports both focused leaf ends and asymmetric rounded leaf ends. The extra-focal algorithm uses a discretized, isotropic area source and models multileaf collimator leaf height effects. The spectral and attenuation effects of static beam modifiers were integrated into each source's spectral function. The authors introduce the concepts of arc superposition and delta superposition. Arc superposition utilizes separate angular sampling for the total energy released per unit mass (TERMA) and superposition computations to increase accuracy and performance. Delta superposition allows single beamlet changes to be computed efficiently. The authors extended their concept of multi-resolution superposition to include kernel tilting. Multi-resolution superposition approximates solid angle ray-tracing, improving performance and scalability with a minor loss in accuracy. Superposition/convolution was implemented using the inverse cumulative-cumulative kernel and exact radiological path ray-tracing. The accuracy analyses were performed using multiple kernel ray samplings, both with and without kernel tilting and multi-resolution superposition. Results: Source model performance was <9 ms (data dependent) for a high-resolution (400²) field using an NVIDIA (Santa Clara, CA) GeForce GTX 280. Computation of the physically correct multispectral TERMA attenuation was improved by a material-centric approach, which increased performance by over 80%. Superposition performance was improved by ~24% to 0.058 and 0.94 s for 64³ and 128³ water phantoms; a speed-up of 101-144x over the highly optimized Pinnacle³ (Philips, Madison, WI) implementation. Pinnacle³ times were 8.3 and 94 s, respectively, on an AMD (Sunnyvale, CA) Opteron 254 (two cores, 2.8 GHz). Conclusions: The authors have completed a comprehensive, GPU-accelerated dose engine in order to provide a substantial performance gain over CPU-based implementations. Real-time dose computation is feasible with the accuracy levels of the superposition/convolution algorithm.
Handling the satellite inter-frequency biases in triple-frequency observations
NASA Astrophysics Data System (ADS)
Zhao, Lewen; Ye, Shirong; Song, Jia
2017-04-01
The new generation of GNSS satellites, including BDS, Galileo, modernized GPS, and GLONASS, transmit navigation data at more frequencies. Multi-frequency signals open new prospects for precise positioning, but the satellite code and phase inter-frequency biases (IFB) introduced by the third frequency need to be handled. Satellite code IFB can be corrected using products estimated by different strategies, but the theoretical and numerical compatibility of these methods needs to be proved. Furthermore, a new type of phase IFB, which changes with the relative sun-spacecraft-earth geometry, has been observed. It is necessary to investigate the cause and possible impacts of this time-variant phase IFB (TIFB). We therefore present a systematic analysis of the relationship between satellite clocks and phase TIFB, and compare handling strategies for code and phase IFB in triple-frequency positioning. First, the un-differenced L1/L2 satellite clock corrections are derived with the hardware delays taken into account, and the IFB introduced by applying dual-frequency satellite clocks to the triple-frequency PPP model is detailed. The analysis shows that the estimated satellite clocks actually contain the time-variant phase hardware delays, which are compensated in L1/L2 ionosphere-free combinations but lead to TIFB when the third frequency is used. Then, the methods used to correct the code and phase IFB are discussed. Standard point positioning (SPP) and precise point positioning (PPP) using BDS observations are carried out to validate the improvement from different IFB correction strategies. Experiments show that code IFB derived from DCBs and from the geometry-free, ionosphere-free combination agree to within 0.3 ns for all satellites. Positioning results and error distributions with the two code IFB correction strategies show a similar tendency, demonstrating their interchangeability. The original and wavelet-filtered phase TIFB long-term series show significant periodic characteristics for most GEO and IGSO satellites, with magnitudes varying between −5 cm and 5 cm. Finally, BDS L1/L3 kinematic PPP is conducted with code IFB corrected using DCB combinations and TIFB corrected using the filtered series. Results show that the IFB-corrected L1/L3 PPP can achieve convergence and positioning accuracy comparable to the L1/L2 combination in static and kinematic modes.
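For context, the L1/L2 clock products discussed above are defined with respect to the standard dual-frequency ionosphere-free carrier-phase combination (a textbook relation, not specific to this paper):

```latex
L_{\mathrm{IF}} = \frac{f_1^2\,L_1 - f_2^2\,L_2}{f_1^2 - f_2^2}
```

A clock estimated from this combination absorbs the matching ionosphere-free combination of the phase hardware delays, including their time-variant part; an observation on a third frequency $f_3$ involves a different delay combination, leaving the residual time-variant bias (TIFB) analyzed above.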
NASA Astrophysics Data System (ADS)
El-Shafai, W.; El-Bakary, E. M.; El-Rabaie, S.; Zahran, O.; El-Halawany, M.; Abd El-Samie, F. E.
2017-06-01
Three-Dimensional Multi-View Video (3D-MVV) transmission over wireless networks suffers from macro-block losses due to either packet dropping or fading-motivated bit errors. Thus, the robust performance of 3D-MVV transmission schemes over wireless channels has become a considerable research issue due to restricted resources and the presence of severe channel errors. 3D-MVV is composed of multiple video streams shot simultaneously by several cameras around a single object. Therefore, it is an urgent task to achieve high compression ratios to meet future bandwidth constraints. Unfortunately, highly-compressed 3D-MVV data becomes more sensitive and vulnerable to packet losses, especially in the case of heavy channel faults. Thus, in this paper, we suggest the application of a chaotic Baker interleaving approach with equalization and convolutional coding for efficient Singular Value Decomposition (SVD) watermarked 3D-MVV transmission over an Orthogonal Frequency Division Multiplexing wireless system. Rayleigh fading and Additive White Gaussian Noise are considered in the real scenario of 3D-MVV transmission. The SVD-watermarked 3D-MVV frames are primarily converted to their luminance and chrominance components, which are then converted to binary data format. After that, chaotic interleaving is applied prior to the modulation process. It reduces the channel effects on the transmitted bit streams and also adds a degree of encryption to the transmitted 3D-MVV frames. To test the performance of the proposed framework, several simulation experiments on different SVD-watermarked 3D-MVV frames have been executed. The experimental results show that the received SVD-watermarked 3D-MVV frames still have high Peak Signal-to-Noise Ratios and watermark extraction is possible in the proposed framework.
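The burst-dispersing role of the interleaver described above can be illustrated with a plain block interleaver; the chaotic Baker map additionally makes the permutation key-dependent, so this stand-in (with made-up sizes) shows only the burst-spreading effect:

```python
import numpy as np

def block_interleave(bits, rows, cols):
    """Write row-wise into a rows x cols block, read out column-wise.
    Like the chaotic Baker interleaver, this spreads a channel burst
    into isolated errors that a channel decoder can correct."""
    return bits.reshape(rows, cols).T.ravel()

def block_deinterleave(bits, rows, cols):
    """Inverse permutation of block_interleave."""
    return bits.reshape(cols, rows).T.ravel()

rng = np.random.default_rng(2)
bits = rng.integers(0, 2, 64)
tx = block_interleave(bits, 8, 8)
tx[10:14] ^= 1                       # a 4-bit burst error on the channel
rx = block_deinterleave(tx, 8, 8)
error_positions = np.flatnonzero(rx != bits)   # burst is now dispersed
```

After deinterleaving, the four consecutive channel errors land 8 positions apart in the bit stream, which is exactly the condition under which a convolutional decoder can correct them.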
Huang, Huilin; Weng, Hengyou; Sun, Wenju; Qin, Xi; Shi, Hailing; Wu, Huizhe; Zhao, Boxuan Simen; Mesquita, Ana; Liu, Chang; Yuan, Celvie L; Hu, Yueh-Chiang; Hüttelmaier, Stefan; Skibbe, Jennifer R; Su, Rui; Deng, Xiaolan; Dong, Lei; Sun, Miao; Li, Chenying; Nachtergaele, Sigrid; Wang, Yungui; Hu, Chao; Ferchen, Kyle; Greis, Kenneth D; Jiang, Xi; Wei, Minjie; Qu, Lianghu; Guan, Jun-Lin; He, Chuan; Yang, Jianhua; Chen, Jianjun
2018-06-07
In the version of this Article originally published, the authors incorrectly listed an accession code as GES90642. The correct code is GSE90642. This has now been amended in all online versions of the Article.
Detection and diagnosis of colitis on computed tomography using deep convolutional neural networks.
Liu, Jiamin; Wang, David; Lu, Le; Wei, Zhuoshi; Kim, Lauren; Turkbey, Evrim B; Sahiner, Berkman; Petrick, Nicholas A; Summers, Ronald M
2017-09-01
Colitis refers to inflammation of the inner lining of the colon and is frequently associated with infection and allergic reactions. In this paper, we propose deep convolutional neural network methods for lesion-level colitis detection and a support vector machine (SVM) classifier for patient-level colitis diagnosis on routine abdominal CT scans. The recently developed Faster Region-based Convolutional Neural Network (Faster RCNN) is utilized for lesion-level colitis detection. For each 2D slice, rectangular region proposals are generated by region proposal networks (RPN). Then, each region proposal is jointly classified and refined by a softmax classifier and bounding-box regressor. Two convolutional neural networks, the eight-layer ZF net and the 16-layer VGG net, are compared for colitis detection. Finally, for each patient, the detections on all 2D slices are collected and an SVM classifier is applied to develop a patient-level diagnosis. We trained and evaluated our method with 80 colitis patients and 80 normal cases using 4 × 4-fold cross validation. For lesion-level colitis detection with ZF net, the mean of average precisions (mAP) was 48.7% and 50.9% for RCNN and Faster RCNN, respectively. The detection system achieved sensitivities of 51.4% and 54.0% at two false positives per patient for RCNN and Faster RCNN, respectively. With VGG net, Faster RCNN increased the mAP to 56.9% and the sensitivity to 58.4% at two false positives per patient. For patient-level colitis diagnosis with ZF net, the average areas under the ROC curve (AUC) were 0.978 ± 0.009 and 0.984 ± 0.008 for the RCNN and Faster RCNN methods, respectively. The difference was not statistically significant (P = 0.18). At the optimal operating point, the RCNN method correctly identified 90.4% (72.3/80) of the colitis patients and 94.0% (75.2/80) of normal cases. The sensitivity improved to 91.6% (73.3/80) and the specificity to 95.0% (76.0/80) for the Faster RCNN method. With VGG net, Faster RCNN increased the AUC to 0.986 ± 0.007 and the diagnosis sensitivity to 93.7% (75.0/80), while specificity was unchanged at 95.0% (76.0/80). Colitis detection and diagnosis by deep convolutional neural networks is accurate and promising for future clinical application. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
Asymmetric Memory Circuit Would Resist Soft Errors
NASA Technical Reports Server (NTRS)
Buehler, Martin G.; Perlman, Marvin
1990-01-01
Some nonlinear error-correcting codes more efficient in presence of asymmetry. Combination of circuit-design and coding concepts expected to make integrated-circuit random-access memories more resistant to "soft" errors (temporary bit errors, also called "single-event upsets" due to ionizing radiation). Integrated circuit of new type made deliberately more susceptible to one kind of bit error than to other, and associated error-correcting code adapted to exploit this asymmetry in error probabilities.
Implementation of the WICS Wall Interference Correction System at the National Transonic Facility
NASA Technical Reports Server (NTRS)
Iyer, Venkit; Everhart, Joel L.; Bir, Pamela J.; Ulbrich, Norbert
2000-01-01
The Wall Interference Correction System (WICS) is operational at the National Transonic Facility (NTF) of NASA Langley Research Center (NASA LaRC) for semispan and full span tests in the solid wall (slots covered) configuration. The method is based on the wall pressure signature method for computing corrections to the measured parameters. It is an adaptation of the WICS code operational at the 12 ft pressure wind tunnel (12ft PWT) of NASA Ames Research Center (NASA ARC). This paper discusses the details of implementation of WICS at the NTF including tunnel calibration, code modifications for tunnel and support geometry, changes made for the NTF wall orifices layout, details of interfacing with the tunnel data processing system, and post-processing of results. Example results of applying WICS to a semispan test and a full span test are presented. Comparison with classical correction results and an analysis of uncertainty in the corrections are also given. As a special application of the code, the Mach number calibration data from a centerline pipe test was computed by WICS. Finally, future work for expanding the applicability of the code including online implementation is discussed.
Experimental Investigation of Convoluted Contouring for Aircraft Afterbody Drag Reduction
NASA Technical Reports Server (NTRS)
Deere, Karen A.; Hunter, Craig A.
1999-01-01
An experimental investigation was performed in the NASA Langley 16-Foot Transonic Tunnel to determine the aerodynamic effects of external convolutions, placed on the boattail of a nonaxisymmetric nozzle, for drag reduction. Boattail angles of 15° and 22° were tested with convolutions placed at a forward location upstream of the boattail curvature, at a mid location along the curvature, and at a full location that spanned the entire boattail flap. Each of the baseline nozzle afterbodies (no convolutions) had a parabolic, converging contour with a parabolically decreasing corner radius. Data were obtained at several Mach numbers from static conditions to 1.2 for a range of nozzle pressure ratios (NPR) and angles of attack. An oil paint flow visualization technique was used to qualitatively assess the effect of the convolutions. Results indicate that afterbody drag reduction by convoluted contouring depends on convolution location, Mach number, boattail angle, and NPR. The forward convolution location was the most effective contouring geometry for drag reduction on the 22° afterbody, but was only effective for M < 0.95. At M = 0.8, drag was reduced 20 and 36 percent at NPRs of 5.4 and 7, respectively, but drag was increased 10 percent for M = 0.95 at NPR = 7. Convoluted contouring along the 15° boattail angle afterbody was not effective at reducing drag because the flow was only minimally separated from the baseline afterbody, unlike the massive separation along the 22° boattail angle baseline afterbody.
Experimental study of current loss and plasma formation in the Z machine post-hole convolute
NASA Astrophysics Data System (ADS)
Gomez, M. R.; Gilgenbach, R. M.; Cuneo, M. E.; Jennings, C. A.; McBride, R. D.; Waisman, E. M.; Hutsel, B. T.; Stygar, W. A.; Rose, D. V.; Maron, Y.
2017-01-01
The Z pulsed-power generator at Sandia National Laboratories drives high energy density physics experiments with load currents of up to 26 MA. Z utilizes a double post-hole convolute to combine the current from four parallel magnetically insulated transmission lines into a single transmission line just upstream of the load. Current loss is observed in most experiments and is traditionally attributed to inefficient convolute performance. The apparent loss current varies substantially for z-pinch loads with different inductance histories; however, a similar convolute impedance history is observed for all load types. This paper details direct spectroscopic measurements of plasma density, temperature, and apparent and actual plasma closure velocities within the convolute. Spectral measurements indicate a correlation between impedance collapse and plasma formation in the convolute. Absorption features in the spectra show the convolute plasma consists primarily of hydrogen, which likely forms from desorbed electrode contaminant species such as H₂O, H₂, and hydrocarbons. Plasma densities increase from 1×10¹⁶ cm⁻³ (level of detectability) just before peak current to over 1×10¹⁷ cm⁻³ at stagnation (tens of ns later). The density seems to be highest near the cathode surface, with an apparent cathode-to-anode plasma velocity in the range of 35-50 cm/μs. Similar plasma conditions and convolute impedance histories are observed in experiments with high and low losses, suggesting that losses are driven largely by load dynamics, which determine the voltage on the convolute.
TRIAD IV: Nationwide Survey of Medical Students' Understanding of Living Wills and DNR Orders.
Mirarchi, Ferdinando L; Ray, Matthew; Cooney, Timothy
2016-12-01
Living wills are a form of advance directive that help to protect patient autonomy. They are frequently encountered in the conduct of medicine. Because of their impact on care, it is important to understand the adequacy of current medical school training in preparing physicians to interpret these directives. Between April and August 2011, third- and fourth-year medical students participated in an internet survey involving the interpretation of living wills. The survey presented a standard living will as a stand-alone document, a standard living will with the addition of an emergent clinical scenario, and then variations of the standard living will that included a code status designation ("DNR," "Full Code," or "Comfort Care"). For each version/scenario, respondents were asked to assign a code status and choose interventions based on the cases presented. Four hundred twenty-five students from medical schools throughout the country responded. The majority indicated they had received some form of advance directive training and understood the concept of code status and the term "DNR." Based on the stand-alone document, 15% of respondents correctly denoted "full code" as the appropriate code status; adding a clinical scenario yielded negligible improvement. When a code designation was added to the living will, correct code status responses ranged from 68% to 93%, whereas correct treatment decisions ranged from 18% to 78%. Previous training in advance directives had no impact on these results. Our data indicate that the majority of students failed to understand the key elements of a living will; adding a code status designation improved correct responses, with the exception of the term DNR. Misunderstanding of advance directives is a nationwide problem and jeopardizes patient safety. Medical school ethics curricula need to be improved to ensure competency with respect to understanding advance directives.
Vokhidov, Husan; Hong, Hyung Gil; Kang, Jin Kyu; Hoang, Toan Minh; Park, Kang Ryoung
2016-12-16
Automobile driver information as displayed on marked road signs indicates the state of the road, traffic conditions, proximity to schools, etc. These signs are important to ensure the safety of drivers and pedestrians. They are also important input to the automated advanced driver assistance systems (ADAS) installed in many automobiles. Over time, arrow-road markings may be eroded or otherwise damaged by automobile contact, making it difficult for the driver to correctly identify the marking. Failure to properly identify an arrow-road marker creates a dangerous situation that may result in traffic accidents or pedestrian injury. Very little research exists on the problem of automated identification of damaged arrow-road markings painted on the road. In this study, we propose a method that uses a convolutional neural network (CNN) to recognize six types of arrow-road markings, possibly damaged, captured by a visible-light camera sensor. Experimental results with six databases (the Road marking dataset, KITTI dataset, Málaga dataset 2009, Málaga urban dataset, Naver street view dataset, and Road/Lane detection evaluation 2013 dataset) show that our method outperforms conventional methods.
Fast decoding techniques for extended single-and-double-error-correcting Reed Solomon codes
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Deng, H.; Lin, S.
1984-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K x 8 bit-bytes. Byte-oriented codes such as Reed-Solomon (RS) codes provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special high-speed decoding techniques for extended single- and double-error-correcting RS codes are presented. These techniques are designed to find the error locations and the error values directly from the syndrome, without having to form the error locator polynomial and solve for its roots.
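The "directly from the syndrome" idea can be illustrated with a toy single-error-correcting example. Real memory ECCs work over GF(2^m), but the same syndrome algebra is easier to show over the prime field GF(929) with its primitive element 3 (all values below are illustrative, not from the paper):

```python
# Toy single-error correction directly from the syndromes:
# for one error of value e at position j, S1 = e*ALPHA^j and
# S2 = e*ALPHA^(2j), so S2/S1 locates the error and S1/(S2/S1) values it.
P = 929          # prime field size
ALPHA = 3        # a primitive element of GF(929)

def syndromes(received):
    """S_k = sum_i r_i * ALPHA^(k*i) mod P, for k = 1, 2."""
    return [sum(r * pow(ALPHA, k * i, P) for i, r in enumerate(received)) % P
            for k in (1, 2)]

def correct_single_error(received):
    s1, s2 = syndromes(received)
    if s1 == 0 and s2 == 0:
        return list(received)                 # syndrome zero: no error
    x = (s2 * pow(s1, -1, P)) % P             # ALPHA^j, the error locator
    j = next(i for i in range(len(received)) if pow(ALPHA, i, P) == x)
    e = (s1 * pow(x, -1, P)) % P              # error value at position j
    fixed = list(received)
    fixed[j] = (fixed[j] - e) % P
    return fixed

codeword = [0] * 10              # the all-zero word is always a valid codeword
noisy = list(codeword)
noisy[4] = 7                     # single symbol error at position 4
recovered = correct_single_error(noisy)
```

No error locator polynomial is formed and no root search is needed: one field division locates the error, another values it, which is the speed advantage the abstract refers to.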
2015-12-15
Keypoint Density-based Region Proposal for Fine-Grained Object Detection and Classification using Regions with Convolutional Neural Network ... Convolutional Neural Networks (CNNs) enable them to outperform conventional techniques on standard object detection and classification tasks, their...detection accuracy and speed on the fine-grained Caltech UCSD bird dataset (Wah et al., 2011). Recently, Convolutional Neural Networks (CNNs), a deep
Multichannel error correction code decoder
NASA Technical Reports Server (NTRS)
Wagner, Paul K.; Ivancic, William D.
1993-01-01
A brief overview of a processing satellite for a mesh very-small-aperture (VSAT) communications network is provided. The multichannel error correction code (ECC) decoder system, the uplink signal generation and link simulation equipment, and the time-shared decoder are described. The testing is discussed. Applications of the time-shared decoder are recommended.
Topological order and memory time in marginally-self-correcting quantum memory
NASA Astrophysics Data System (ADS)
Siva, Karthik; Yoshida, Beni
2017-03-01
We examine two proposals for marginally-self-correcting quantum memory: the cubic code by Haah and the welded code by Michnicki. In particular, we prove explicitly that they lack topological order above zero temperature, as their Gibbs ensembles can be prepared via a short-depth quantum circuit from classical ensembles. Our proof technique naturally gives rise to the notion of free energy associated with excitations. Further, we develop a framework for an ergodic decomposition of Davies generators in CSS codes which enables formal reduction to simpler classical memory problems. We then show that the memory time of the welded code is doubly exponential in inverse temperature via the Peierls argument. These results introduce further connections between thermal topological order and self-correction from the viewpoint of free energy and quantum circuit depth.
Lee, Young Han
2018-04-04
The purposes of this study are to evaluate the feasibility of protocol determination with a convolutional neural network (CNN) classifier based on short-text classification, and to evaluate agreement by comparing protocols determined by the CNN with those determined by musculoskeletal radiologists. Following institutional review board approval, the database of a hospital information system (HIS) was queried for lists of MRI examinations, referring department, patient age, and patient gender. These were exported to a local workstation for analysis: 5258 and 1018 consecutive musculoskeletal MRI examinations were used for the training and test datasets, respectively. The subjects for pre-processing were routine or tumor protocols, and the contents were word combinations of the referring department, region, contrast media (or not), gender, and age. The CNN embedded-vector classifier was used with Word2Vec Google News vectors. The test set was tested with each classification model and the results were output as routine or tumor protocols. The CNN determinations were evaluated using receiver operating characteristic (ROC) curves. Accuracy was evaluated using radiologist-confirmed protocols as the reference. The optimal cut-off value for protocol determination between routine and tumor protocols was 0.5067, with a sensitivity of 92.10%, a specificity of 95.76%, and an area under the curve (AUC) of 0.977. The overall accuracy was 94.2% for the ConvNet model. All MRI protocols were correct in the pelvic bone, upper arm, wrist, and lower leg MRIs. Deep-learning-based convolutional neural networks were clinically utilized to determine musculoskeletal MRI protocols. CNN-based text learning and its applications could be extended to other radiologic tasks besides image interpretation, improving the work performance of the radiologist.
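An operating point like the 0.5067 cut-off reported above is typically chosen by scanning thresholds along the ROC curve; a sketch using Youden's J statistic on synthetic classifier scores (hypothetical data, not the paper's):

```python
import numpy as np

def youden_cutoff(scores, labels):
    """Scan candidate thresholds and return the one maximizing
    Youden's J = sensitivity + specificity - 1."""
    best_j, best_t = -1.0, None
    for t in np.unique(scores):
        pred = scores >= t
        tp = np.sum(pred & (labels == 1))
        fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0))
        fp = np.sum(pred & (labels == 0))
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t

# Synthetic classifier scores: higher for the "tumor protocol" class.
rng = np.random.default_rng(3)
labels = np.concatenate([np.ones(100, dtype=int), np.zeros(100, dtype=int)])
scores = np.concatenate([rng.normal(0.7, 0.15, 100),
                         rng.normal(0.3, 0.15, 100)])
cutoff = youden_cutoff(scores, labels)
```

Youden's J is one common criterion; studies may instead weight sensitivity and specificity unequally depending on the clinical cost of each error type.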
Witoonchart, Peerajak; Chongstitvatana, Prabhas
2017-08-01
In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss-augmented inference layer and the bottom layer is a normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss-augmented inference and the backpropagation calculates the gradient from the loss-augmented inference layer to the convolutional layer. Thus, we obtain a new type of convolutional neural network, called a structured SVM convolutional neural network, which we applied to the human pose estimation problem. This new network can be used as the final layers in deep learning. Our method jointly learns the structural model parameters and the appearance model parameters. We implemented our method as a new layer in the existing Caffe library. Copyright © 2017 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
New Mexico Univ., Albuquerque. American Indian Law Center.
The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…
Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes
NASA Astrophysics Data System (ADS)
Harrington, James William
Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g. , factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. 
I also present a local classical processing scheme for correcting errors on toric codes, which demonstrates that quantum information can be maintained in two dimensions by purely local (quantum and classical) resources.
Analysis Of AVIRIS Data From LEO-15 Using Tafkaa Atmospheric Correction
NASA Technical Reports Server (NTRS)
Montes, Marcos J.; Gao, Bo-Cai; Davis, Curtiss O.; Moline, Mark
2004-01-01
We previously developed an algorithm named Tafkaa for atmospheric correction of remote sensing ocean color data from aircraft and satellite platforms. The algorithm allows quick atmospheric correction of hyperspectral data using lookup tables generated with a modified version of Ahmad & Fraser's vector radiative transfer code. During the past few years we have extended the capabilities of the code. Current modifications include the ability to account for within-scene variation in solar geometry (important for very long scenes) and view geometries (important for wide fields of view). Additionally, versions of Tafkaa have been made for a variety of multi-spectral sensors, including SeaWiFS and MODIS. In this proceeding we present some initial results of atmospheric correction of AVIRIS data from the 2001 July Hyperspectral Coastal Ocean Dynamics Experiment (HyCODE) at LEO-15.
Optimal spatial filtering and transfer function for SAR ocean wave spectra
NASA Technical Reports Server (NTRS)
Beal, R. C.; Tilley, D. G.
1981-01-01
The impulse response of the SAR system is not a delta function, so the spectra represent the product of the underlying image spectrum with the transform of the impulse response, which must be removed. A digitally computed spectrum of SEASAT imagery of the Atlantic Ocean east of Cape Hatteras was smoothed with a 5 x 5 convolution filter and the trend was sampled in a direction normal to the predominant wave direction. This yielded a transform of a noise-like process. The smoothed value of this trend is the transform of the impulse response. This trend is fit with either a second- or fourth-order polynomial, which is then used to correct the entire spectrum. A 16 x 16 smoothing of the spectrum shows the presence of two distinct swells. The effects of speckle are corrected by subtracting a bias from the spectrum.
NASA Astrophysics Data System (ADS)
Oh, Seonghyeon; Han, Dandan; Shim, Hyeon Bo; Hahn, Jae W.
2018-01-01
Subwavelength features have been successfully demonstrated in near-field lithography. In this study, the point spread function (PSF) of a near-field beam spot from a plasmonic ridge nanoaperture is discussed with regard to the complex decaying characteristic of a non-propagating wave and the asymmetry of the field distribution for pattern design. We relaxed the shape complexity of the field distribution with pixel-based optical proximity correction (OPC) for simplifying the pattern image distortion. To enhance the pattern fidelity for a variety of arbitrary patterns, field-sectioning structures are formulated via convolutions with a time-modulation function and a transient PSF along the near-field dominant direction. The sharpness of corners and edges, and line shortening can be improved by modifying the original target pattern shape using the proposed approach by considering both the pattern geometry and directionality of the field decay for OPC in near-field lithography.
ERIC Educational Resources Information Center
Umar, A.; Yusau, B.; Ghandi, B. M.
2007-01-01
In this note, we introduce and discuss convolutions of two series. The idea is simple and can be introduced to higher secondary school classes, and has the potential of providing a good background for the well-known convolution of functions.
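As a concrete illustration of the note's idea (not code from the note itself), the convolution of two finite series is the coefficient sequence of the product of the polynomials they represent; a minimal sketch:

```python
def convolve(a, b):
    """Discrete convolution of two finite series:
    c[n] = sum over k of a[k] * b[n - k]."""
    c = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] += x * y
    return c

# Multiplying polynomials is a convolution of their coefficient series:
# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
product = convolve([1, 2], [3, 4])  # → [3, 10, 8]
```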
A fast complex integer convolution using a hybrid transform
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1978-01-01
It is shown that the Winograd transform can be combined with a complex integer transform over the Galois field GF(q-squared) to yield a new algorithm for computing the discrete cyclic convolution of complex number points. By this means a fast method for accurately computing the cyclic convolution of a sequence of complex numbers for long convolution lengths can be obtained. This new hybrid algorithm requires fewer multiplications than previous algorithms.
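The quantity being computed can be stated directly; below is a naive pure-Python reference (no Winograd or Galois-field speedups) against which a fast hybrid algorithm could be checked:

```python
def cyclic_convolution(a, b):
    """Reference cyclic convolution of two equal-length sequences:
    c[m] = sum over k of a[k] * b[(m - k) mod N]. This is O(N^2);
    the hybrid transform computes the same result with fewer
    multiplications."""
    n = len(a)
    return [sum(a[k] * b[(m - k) % n] for k in range(n)) for m in range(n)]

# Works unchanged on complex-number sequences:
c = cyclic_convolution([1 + 1j, 2, 3], [4, 5, 6 - 2j])
```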
Momeni, Ali; Rouhi, Kasra; Rajabalipanah, Hamid; Abdolali, Ali
2018-04-18
Inspired by the information theory, a new concept of re-programmable encrypted graphene-based coding metasurfaces was investigated at terahertz frequencies. A channel-coding function was proposed to convolutionally record an arbitrary information message onto unrecognizable but recoverable parity beams generated by a phase-encrypted coding metasurface. A single graphene-based reflective cell with dual-mode biasing voltages was designed to act as "0" and "1" meta-atoms, providing broadband opposite reflection phases. By exploiting graphene tunability, the proposed scheme enabled an unprecedented degree of freedom in the real-time mapping of information messages onto multiple parity beams which could not be damaged, altered, and reverse-engineered. Various encryption types such as mirroring, anomalous reflection, multi-beam generation, and scattering diffusion can be dynamically attained via our multifunctional metasurface. Besides, contrary to conventional time-consuming and optimization-based methods, this paper convincingly offers a fast, straightforward, and efficient design of diffusion metasurfaces of arbitrarily large size. Rigorous full-wave simulations corroborated the results where the phase-encrypted metasurfaces exhibited a polarization-insensitive reflectivity less than -10 dB over a broadband frequency range from 1 THz to 1.7 THz. This work reveals new opportunities for the extension of re-programmable THz-coding metasurfaces and may be of interest for reflection-type security systems, computational imaging, and camouflage technology.
Compressive Sampling based Image Coding for Resource-deficient Visual Communication.
Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen
2016-04-14
In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements and placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that are otherwise discarded by low-pass filtering; 2) the result remains a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image and therefore the proposed scheme has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from received measurements in a framework of compressive sensing. Experimental results demonstrate that the proposed scheme is competitive compared with existing methods, with a unique strength of recovering fine details and sharp edges at low bit-rates.
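The encoder-side measurement step can be sketched as follows (pure Python, with a hypothetical kernel size and sampling factor; the paper's actual kernels, codec integration, and decoder are not reproduced here):

```python
import random

def local_random_measurements(image, k=2, factor=2, seed=0):
    """Replace the usual low-pass pre-filter with a local random +/-1
    convolution kernel, then polyphase down-sample by `factor`.
    Each output pixel is a local random measurement of the input."""
    rng = random.Random(seed)
    kernel = [[rng.choice((-1, 1)) for _ in range(k)] for _ in range(k)]
    h, w = len(image), len(image[0])
    measurements = [[sum(kernel[i][j] * image[y + i][x + j]
                         for i in range(k) for j in range(k))
                     for x in range(0, w - k + 1, factor)]
                    for y in range(0, h - k + 1, factor)]
    return measurements, kernel

# Toy 8x8 "image"; the output is itself a small image that a standard
# codec could compress, and different seeds give multiple descriptions.
image = [[(x + y) % 7 for x in range(8)] for y in range(8)]
measurements, kernel = local_random_measurements(image)
```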
von Spiczak, Jochen; Mannil, Manoj; Peters, Benjamin; Hickethier, Tilman; Baer, Matthias; Henning, André; Schmidt, Bernhard; Flohr, Thomas; Manka, Robert; Maintz, David; Alkadhi, Hatem
2018-05-23
The aims of this study were to assess the value of a dedicated sharp convolution kernel for photon counting detector (PCD) computed tomography (CT) for coronary stent imaging and to evaluate to which extent iterative reconstructions can compensate for potential increases in image noise. For this in vitro study, a phantom simulating coronary artery stenting was prepared. Eighteen different coronary stents were expanded in plastic tubes of 3 mm diameter. Tubes were filled with diluted contrast agent, sealed, and immersed in oil calibrated to an attenuation of -100 HU simulating epicardial fat. The phantom was scanned in a modified second generation 128-slice dual-source CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Erlangen, Germany) equipped with both a conventional energy integrating detector and PCD. Image data were acquired using the PCD part of the scanner with 48 × 0.25 mm slices, a tube voltage of 100 kVp, and tube current-time product of 100 mAs. Images were reconstructed using a conventional convolution kernel for stent imaging with filtered back-projection (B46) and with sinogram-affirmed iterative reconstruction (SAFIRE) at level 3 (I463). For comparison, a dedicated sharp convolution kernel with filtered back-projection (D70) and SAFIRE level 3 (Q703) and level 5 (Q705) was used. The D70 and Q70 kernels were specifically designed for coronary stent imaging with PCD CT by optimizing the image modulation transfer function and the separation of contrast edges. Two independent, blinded readers evaluated subjective image quality (Likert scale 0-3, where 3 = excellent), in-stent diameter difference, in-stent attenuation difference, mathematically defined image sharpness, and noise of each reconstruction. Interreader reliability was calculated using Goodman and Kruskal's γ and intraclass correlation coefficients (ICCs). Differences in image quality were evaluated using a Wilcoxon signed-rank test. 
Differences in in-stent diameter difference, in-stent attenuation difference, image sharpness, and image noise were tested using a paired-sample t test corrected for multiple comparisons. Interreader and intrareader reliability were excellent (γ = 0.953, ICCs = 0.891-0.999, and γ = 0.996, ICCs = 0.918-0.999, respectively). Reconstructions using the dedicated sharp convolution kernel yielded significantly better results regarding image quality (B46: 0.4 ± 0.5 vs D70: 2.9 ± 0.3; P < 0.001), in-stent diameter difference (1.5 ± 0.3 vs 1.0 ± 0.3 mm; P < 0.001), and image sharpness (728 ± 246 vs 2069 ± 411 CT numbers/voxel; P < 0.001). Regarding in-stent attenuation difference, no significant difference was observed between the 2 kernels (151 ± 76 vs 158 ± 92 CT numbers; P = 0.627). Noise was significantly higher in all sharp convolution kernel images but was reduced by 41% and 59% by applying SAFIRE levels 3 and 5, respectively (B46: 16 ± 1, D70: 111 ± 3, Q703: 65 ± 2, Q705: 46 ± 2 CT numbers; P < 0.001 for all comparisons). A dedicated sharp convolution kernel for PCD CT imaging of coronary stents yields superior qualitative and quantitative image characteristics compared with conventional reconstruction kernels. Resulting higher noise levels in sharp kernel PCD imaging can be partially compensated with iterative image reconstruction techniques.
Superdense coding interleaved with forward error correction
Humble, Travis S.; Sadlier, Ronald J.
2016-05-12
Superdense coding promises increased classical capacity and communication security but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate against quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated against by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve the performance in near-term demonstrations of superdense coding.
ERIC Educational Resources Information Center
Rajagopalan, Kanavillil
2006-01-01
The objective of this response article is to think through some of what I see as the far-reaching implications of a recent paper by Eric Hauser (2005) entitled "Coding 'corrective recasts': the maintenance of meaning and more fundamental problems". Hauser makes a compelling, empirically-backed case for his contention that, contrary to widespread…
77 FR 12764 - POSTNET Barcode Discontinuation
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-02
... routing code appears in the lower right corner. * * * * * [Delete current 5.6, DPBC Numeric Equivalent, in... correct ZIP Code, ZIP+4 code, or numeric equivalent to the delivery point routing code and which meets... equivalent to the delivery point routing code is formed by [[Page 12766
Ancient DNA sequence revealed by error-correcting codes.
Brandão, Marcelo M; Spoladore, Larissa; Faria, Luzinete C B; Rocha, Andréa S L; Silva-Filho, Marcio C; Palazzo, Reginaldo
2015-07-10
A previously described DNA sequence generator algorithm (DNA-SGA) using error-correcting codes has been employed as a computational tool to address the evolutionary pathway of the genetic code. The code-generated sequence alignment demonstrated that a residue mutation revealed by the code can be found in the same position in sequences of distantly related taxa. Furthermore, the code-generated sequences do not promote amino acid changes in the deviant genomes through codon reassignment. A Bayesian evolutionary analysis of both code-generated and homologous sequences of the Arabidopsis thaliana malate dehydrogenase gene indicates an approximately 1 MYA divergence time from the MDH code-generated sequence node to its paralogous sequences. The DNA-SGA helps to determine the plesiomorphic state of DNA sequences because a single nucleotide alteration often occurs in distantly related taxa and can be found in the alternative codon patterns of noncanonical genetic codes. As a consequence, the algorithm may reveal an earlier stage of the evolution of the standard code.
NASA Astrophysics Data System (ADS)
Hamdi, Mazda; Kenari, Masoumeh Nasiri
2013-06-01
We consider a time-hopping based multiple access scheme introduced in [1] for communication over dispersive infrared links, and evaluate its performance for correlator and matched filter receivers. In the investigated time-hopping code division multiple access (TH-CDMA) method, the transmitter employs a low-rate convolutional encoder. In this method, the bit interval is divided into Nc chips, and the output of the encoder along with a PN sequence assigned to the user determines the position of the chip in which the optical pulse is transmitted. We evaluate the multiple access performance of the system for the correlation receiver considering background noise, which is modeled as white Gaussian noise due to its large intensity. For the correlation receiver, the results show that for a fixed processing gain, at high transmit power, where the multiple access interference has the dominant effect, the performance improves with the coding gain. But at low transmit power, where an increase in coding gain leads to a decrease in the chip time, and consequently to more corruption due to channel dispersion, there exists an optimum value for the coding gain. For the matched filter, however, the performance always improves with the coding gain. The results show that the matched filter receiver outperforms the correlation receiver in the considered cases. Our results show that, for the same bandwidth and bit rate, the proposed system outperforms other multiple access techniques, such as conventional CDMA and the time hopping scheme.
Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook
2017-01-01
Medical image collections contain a wealth of information which can assist radiologists and medical experts in diagnosis and disease detection for making well-informed decisions. However, this objective can only be realized if efficient access is provided to semantically relevant cases from the ever-growing medical image repositories. In this paper, we present an efficient method for representing medical images by incorporating visual saliency and deep features obtained from a fine-tuned convolutional neural network (CNN) pre-trained on natural images. Saliency detector is employed to automatically identify regions of interest like tumors, fractures, and calcified spots in images prior to feature extraction. Neuronal activation features termed as neural codes from different CNN layers are comprehensively studied to identify most appropriate features for representing radiographs. This study revealed that neural codes from the last fully connected layer of the fine-tuned CNN are found to be the most suitable for representing medical images. The neural codes extracted from the entire image and salient part of the image are fused to obtain the saliency-injected neural codes (SiNC) descriptor which is used for indexing and retrieval. Finally, locality sensitive hashing techniques are applied on the SiNC descriptor to acquire short binary codes for allowing efficient retrieval in large scale image collections. Comprehensive experimental evaluations on the radiology images dataset reveal that the proposed framework achieves high retrieval accuracy and efficiency for scalable image retrieval applications and compares favorably with existing approaches. PMID:28771497
Inoue, Kentaro; Ito, Hiroshi; Goto, Ryoi; Nakagawa, Manabu; Kinomura, Shigeo; Sato, Tachio; Sato, Kazunori; Fukuda, Hiroshi
2005-06-01
Several studies using single photon emission tomography (SPECT) have shown changes in cerebral blood flow (CBF) with age, which were associated with partial volume effects by some authors. Some studies have also demonstrated gender-related differences in CBF. The present study aimed to examine age and gender effects on CBF SPECT images obtained using the 99mTc-ethyl cysteinate dimer and a SPECT scanner, before and after partial volume correction (PVC) using magnetic resonance (MR) imaging. Forty-four healthy subjects (29 males and 15 females; age range, 27-64 y; mean age, 50.0 +/- 9.8 y) participated. Each MR image was segmented to yield grey and white matter images and coregistered to a corresponding SPECT image, followed by convolution to approximate the SPECT spatial resolution. PVC-SPECT images were produced using the convoluted grey matter MR (GM-MR) and white matter MR images. The age and gender effects were assessed using SPM99. Decreases with age were detected in the anterolateral prefrontal cortex and in areas along the lateral sulcus and the lateral ventricle, bilaterally, in the GM-MR images and the SPECT images. In the PVC-SPECT images, decreases in CBF in the lateral prefrontal cortex lost their statistical significance. Decreases in CBF with age found along the lateral sulcus and the lateral ventricle, on the other hand, remained statistically significant, but observation of the spatially normalized MR images suggests that these findings are associated with the dilatation of the lateral sulcus and lateral ventricle, which was not completely compensated for by the spatial normalization procedure. Our present study demonstrated that age effects on CBF in healthy subjects could reflect morphological differences with age in grey matter.
MUSIC: MUlti-Scale Initial Conditions
NASA Astrophysics Data System (ADS)
Hahn, Oliver; Abel, Tom
2013-11-01
MUSIC generates multi-scale initial conditions with multiple levels of refinements for cosmological ‘zoom-in’ simulations. The code uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). MUSIC achieves rms relative errors of the order of 10^-4 for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier space-induced interference ringing.
Classification of galaxy type from images using Microsoft R Server
NASA Astrophysics Data System (ADS)
de Vries, Andrie
2017-06-01
Many astronomers working in the field of AstroInformatics write code as part of their work. Although the programming language of choice is Python, a small number (8%) use R. R has its specific strengths in the domain of statistics, and is often viewed as limited in the size of data it can handle. However, Microsoft R Server is a product that removes these limitations by being able to process much larger amounts of data. I present some highlights of R Server by illustrating how to fit a convolutional neural network using R. The specific task is to classify galaxies, using only images extracted from the Sloan Digital SkyServer.
A VLSI pipeline design of a fast prime factor DFT on a finite field
NASA Technical Reports Server (NTRS)
Truong, T. K.; Hsu, I. S.; Shao, H. M.; Reed, I. S.; Shyu, H. C.
1986-01-01
A conventional prime factor discrete Fourier transform (DFT) algorithm is used to realize a discrete Fourier-like transform on the finite field GF(q^n). A pipeline structure is used to implement this prime factor DFT over GF(q^n). This algorithm is developed to compute cyclic convolutions of complex numbers and to decode Reed-Solomon codes. Such a pipeline fast prime factor DFT algorithm over GF(q^n) is regular, simple, expandable, and naturally suitable for VLSI implementation. An example illustrating the pipeline aspect of a 30-point transform over GF(q^n) is presented.
Capacity of noncoherent MFSK channels
NASA Technical Reports Server (NTRS)
Bar-David, I.; Butman, S. A.; Klass, M. J.; Levitt, B. K.; Lyon, R. F.
1974-01-01
Performance limits theoretically achievable over noncoherent channels perturbed by additive Gaussian noise in hard decision, optimal, and soft decision receivers are computed as functions of the number of orthogonal signals and the predetection signal-to-noise ratio. Equations are derived for orthogonal signal capacity, the ultimate MFSK capacity, and the convolutional coding and decoding limit. It is shown that performance improves as the signal-to-noise ratio increases, provided the bandwidth can be increased, that the optimum number of signals is not infinite (except for the optimal receiver), and that the optimum number decreases as the signal-to-noise ratio decreases, but is never less than 7 for even the hard decision receiver.
Convolution of large 3D images on GPU and its decomposition
NASA Astrophysics Data System (ADS)
Karas, Pavel; Svoboda, David
2011-12-01
In this article, we propose a method for computing the convolution of large 3D images. The convolution is performed in the frequency domain using the convolution theorem. The algorithm is accelerated on a graphics card by means of the CUDA parallel computing model. The convolution is decomposed in the frequency domain using the decimation-in-frequency algorithm. We pay attention to keeping our approach efficient in terms of both time and memory consumption, and also in terms of memory transfers between CPU and GPU, which have a significant influence on overall computational time. We also study the implementation on multiple GPUs and compare the results between the multi-GPU and multi-CPU implementations.
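The frequency-domain route the article takes can be illustrated in miniature: transform, multiply point-wise, transform back. This 1-D sketch uses a naive DFT in place of the CUDA FFTs, purely to make the convolution theorem concrete:

```python
import cmath

def dft(x, sign=-1):
    """Naive O(N^2) discrete Fourier transform; sign=+1 gives the
    inverse kernel (without the 1/N normalization)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def cyclic_convolve(a, b):
    """Convolution theorem: convolution in the spatial domain equals
    point-wise multiplication in the frequency domain."""
    n = len(a)
    A, B = dft(a), dft(b)
    return [c.real / n for c in dft([x * y for x, y in zip(A, B)], sign=+1)]

result = cyclic_convolve([1, 2, 3, 4], [5, 6, 7, 8])
```

A production implementation would replace `dft` with an O(N log N) FFT and tile the 3D volume so each piece fits in GPU memory, which is the decomposition problem the article studies.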
75 FR 39472 - Airworthiness Directives; Eclipse Aerospace, Inc. Model EA500 Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-09
..., altitude preselect, and/or transponder codes. We are proposing this AD to correct faulty integration of... determined to be a software communication integration issue between the EFIS display interface and associated... transponder codes. We are issuing this AD to correct faulty integration of hardware and software, which could...
76 FR 7864 - Agency Information Collection Activities: Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-11
...: Psychiatric Unit Criteria Work Sheet and Supporting Regulations 412.25 and 412.27; Use: A limited number of... National Correct Coding Initiative in Medicaid, SMD Letter 10-017 dated September 1, 2010. Use; The Patient... State Use of National Correct Coding Initiative. A State Medicaid Director letter, 10-017 dated...
ERIC Educational Resources Information Center
Shumack, Kellie A.; Reilly, Erin; Chamberlain, Nik
2013-01-01
space, has error-correction capacity, and can be read from any direction. These codes are used in manufacturing, shipping, and marketing, as well as in education. QR codes can be created to produce…
Using concatenated quantum codes for universal fault-tolerant quantum gates.
Jochym-O'Connor, Tomas; Laflamme, Raymond
2014-01-10
We propose a method for universal fault-tolerant quantum computation using concatenated quantum error correcting codes. The concatenation scheme exploits the transversal properties of two different codes, combining them to provide a means to protect against low-weight arbitrary errors. We give the required properties of the error correcting codes to ensure universal fault tolerance and discuss a particular example using the 7-qubit Steane and 15-qubit Reed-Muller codes. Namely, other than computational basis state preparation as required by the DiVincenzo criteria, our scheme requires no special ancillary state preparation to achieve universality, as opposed to schemes such as magic state distillation. We believe that optimizing the codes used in such a scheme could provide a useful alternative to state distillation schemes that exhibit high overhead costs.
Bandwidth efficient CCSDS coding standard proposals
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Perez, Lance C.; Wang, Fu-Quan
1992-01-01
The basic concatenated coding system for the space telemetry channel consists of a Reed-Solomon (RS) outer code, a symbol interleaver/deinterleaver, and a bandwidth-efficient trellis inner code. A block diagram of this configuration is shown. The system may operate with or without the outer code and interleaver. In this recommendation, the outer code remains the (255,223) RS code over GF(2^8) with an error-correcting capability of t = 16 eight-bit symbols. This code's excellent performance and the existence of fast, cost-effective decoders justify its continued use. The purpose of the interleaver/deinterleaver is to distribute burst errors out of the inner decoder over multiple codewords of the outer code. This utilizes the error-correcting capability of the outer code more efficiently and reduces the probability of an RS decoder failure. Since the space telemetry channel is not considered bursty, the required interleaving depth is primarily a function of the inner decoding method. A diagram of an interleaver with depth 4 that is compatible with the (255,223) RS code is shown. Specific interleaver requirements are discussed after the inner code recommendations.
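A block interleaver of the kind described above can be sketched in a few lines; the depth-4 layout is from the text, while the toy 8-symbol codeword length stands in for the 255 symbols of a real RS codeword:

```python
def interleave(codewords):
    """Block interleaver: write the codewords as rows of a table, then
    transmit column by column. A burst of up to `depth` consecutive
    channel errors then hits each codeword at most once."""
    n = len(codewords[0])
    return [cw[i] for i in range(n) for cw in codewords]

def deinterleave(stream, depth):
    """Invert the interleaver: symbol j of the stream belongs to
    codeword j mod depth."""
    return [stream[j::depth] for j in range(depth)]

# Depth-4 interleaver over toy 8-symbol codewords.
depth, n = 4, 8
codewords = [[(r, i) for i in range(n)] for r in range(depth)]
stream = interleave(codewords)

# A burst of 4 consecutive channel errors (positions 10..13) corrupts
# exactly one symbol per codeword, well within t = 16 per codeword.
burst = set(range(10, 14))
hits = [sum(1 for pos in burst if pos % depth == r) for r in range(depth)]
```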
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Newman, Perry A.
1991-01-01
A nonlinear, four wall, post-test wall interference assessment/correction (WIAC) code was developed for transonic airfoil data from solid wall wind tunnels with flexibly adaptable top and bottom walls. The WIAC code was applied over a broad range of test conditions to four sets of NACA 0012 airfoil data, from two different adaptive wall wind tunnels. The data include many test points for fully adapted walls, as well as numerous partially adapted and unadapted test points, which together represent many different model/tunnel configurations and possible wall interference effects. Small corrections to the measured Mach numbers and angles of attack were obtained from the WIAC code even for fully adapted data; these corrections generally improve the correlation among the various sets of airfoil data and simultaneously improve the correlation of the data with calculations for a 2-D, free air, Navier-Stokes code. The WIAC corrections for airfoil data taken in fully adapted wall test sections are shown to be significantly smaller than those for comparable airfoil data from straight, slotted wall test sections. This indicates, as expected, a lesser degree of wall interference in the adapted wall tunnels relative to the slotted wall tunnels. Application of the WIAC code to this data was, however, somewhat more difficult and time consuming than initially expected from similar previous experience with WIAC applications to slotted wall data.
Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.
Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk
2018-07-01
Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements, because of severely increasing computation complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. The previous JNDs were modeled by adding white Gaussian noise or specific signal patterns into the original images, which were not appropriate in finding JND thresholds due to distortion with energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. Then, the proposed ERJND model is extended to two learning-based just-noticeable-quantization-distortion (JNQD) models as preprocessing that can be applied for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameter for JNQD based on extracted handcrafted features. The other JNQD model is based on a convolutional neural network (CNN), called CNN-JNQD. To the best of our knowledge, our paper is the first approach to automatically adjust JND levels according to quantization step sizes for preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing applied.
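The regression step behind an LR-JNQD-style model can be illustrated with a toy fit: predict a distortion threshold from a single handcrafted feature by ordinary least squares. The feature ("block variance") and all numbers below are invented for illustration; the paper's actual feature set and training procedure are richer.

```python
# Toy sketch of the linear-regression idea: fit y = a*x + b mapping a
# hypothetical handcrafted feature (block variance) to a hypothetical
# JND-like threshold, then predict a threshold for an unseen block.

def fit_line(xs, ys):
    """Ordinary least squares fit y = a*x + b for one feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

if __name__ == "__main__":
    variances = [1.0, 2.0, 3.0, 4.0]      # hypothetical features
    thresholds = [2.1, 3.9, 6.1, 8.0]     # hypothetical JND targets
    a, b = fit_line(variances, thresholds)
    pred = a * 2.5 + b                     # threshold for an unseen block
    assert 4.5 < pred < 5.5
```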
Wall interference assessment and corrections
NASA Technical Reports Server (NTRS)
Newman, P. A.; Kemp, W. B., Jr.; Garriz, J. A.
1989-01-01
Wind tunnel wall interference assessment and correction (WIAC) concepts, applications, and typical results are discussed in terms of several nonlinear transonic codes and one panel method code developed for and being implemented at NASA-Langley. Contrasts between 2-D and 3-D transonic testing factors which affect WIAC procedures are illustrated using airfoil data from the 0.3 m Transonic Cryogenic Tunnel and Pathfinder 1 data from the National Transonic Facility. Initial results from the 3-D WIAC codes are encouraging; research on and implementation of WIAC concepts continue.
Córcoles, A.D.; Magesan, Easwar; Srinivasan, Srikanth J.; Cross, Andrew W.; Steffen, M.; Gambetta, Jay M.; Chow, Jerry M.
2015-01-01
The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code. PMID:25923200
Quantum Error Correction Protects Quantum Search Algorithms Against Decoherence
Botsinis, Panagiotis; Babar, Zunaira; Alanis, Dimitrios; Chandra, Daryus; Nguyen, Hung; Ng, Soon Xin; Hanzo, Lajos
2016-01-01
When quantum computing becomes a widespread commercial reality, Quantum Search Algorithms (QSA) and especially Grover's QSA will inevitably be one of their main applications, constituting their cornerstone. Most of the literature assumes that the quantum circuits are free from decoherence. Practically, decoherence will remain unavoidable, as is the Gaussian noise of classical circuits imposed by the Brownian motion of electrons, hence it may have to be mitigated. In this contribution, we investigate the effect of quantum noise on the performance of QSAs, in terms of their success probability as a function of the database size to be searched, when decoherence is modelled by depolarizing channels' deleterious effects imposed on the quantum gates. Moreover, we employ quantum error correction codes for limiting the effects of quantum noise and for correcting quantum flips. More specifically, we demonstrate that, when we search for a single solution in a database having 4096 entries using Grover's QSA at an aggressive depolarizing probability of 10^-3, the success probability of the search is 0.22 when no quantum coding is used, which is improved to 0.96 when Steane's quantum error correction code is employed. Finally, apart from Steane's code, the employment of Quantum Bose-Chaudhuri-Hocquenghem (QBCH) codes is also considered. PMID:27924865
Feasibility of self-correcting quantum memory and thermal stability of topological order
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoshida, Beni, E-mail: rouge@mit.edu
2011-10-15
Recently, it has become apparent that the thermal stability of topologically ordered systems at finite temperature, as discussed in condensed matter physics, can be studied by addressing the feasibility of self-correcting quantum memory, as discussed in quantum information science. Here, with this correspondence in mind, we propose a model of quantum codes that may cover a large class of physically realizable quantum memory. The model is supported by a certain class of gapped spin Hamiltonians, called stabilizer Hamiltonians, with translation symmetries and a small number of ground states that does not grow with the system size. We show that the model does not work as self-correcting quantum memory due to a certain topological constraint on geometric shapes of its logical operators. This quantum coding theoretical result implies that systems covered or approximated by the model cannot have thermally stable topological order, meaning that systems cannot be stable against both thermal fluctuations and local perturbations simultaneously in two and three spatial dimensions. Highlights: We define a class of physically realizable quantum codes. We determine their coding and physical properties completely. We establish the connection between topological order and self-correcting memory. We find they do not work as self-correcting quantum memory. We find they do not have thermally stable topological order.
Ciliates learn to diagnose and correct classical error syndromes in mating strategies
Clark, Kevin B.
2013-01-01
Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by “rivals” and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell–cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via “power” or “refrigeration” cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social contexts. 
PMID:23966987
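The "three-bit repetition error-correction sequences" the study attributes to the ciliates' mating replies are the classical repetition code, which can be sketched directly: each bit is sent three times and decoded by majority vote, so any single bit-flip per triplet is corrected.

```python
# Minimal three-bit repetition error-correction code: triplicate each
# bit on encode, take a majority vote per triplet on decode. A single
# bit-flip within any triplet is corrected.

def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(coded):
    return [int(sum(coded[i:i + 3]) >= 2) for i in range(0, len(coded), 3)]

if __name__ == "__main__":
    msg = [1, 0, 1]
    sent = encode(msg)            # [1,1,1, 0,0,0, 1,1,1]
    sent[1] ^= 1                  # single flip in the first triplet
    sent[5] ^= 1                  # single flip in the second triplet
    assert decode(sent) == msg    # both flips are corrected
```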
Least-Squares Neutron Spectral Adjustment with STAYSL PNNL
NASA Astrophysics Data System (ADS)
Greenwood, L. R.; Johnson, C. D.
2016-02-01
The STAYSL PNNL computer code, a descendant of the STAY'SL code [1], performs neutron spectral adjustment of a starting neutron spectrum, applying a least squares method to determine adjustments based on saturated activation rates, neutron cross sections from evaluated nuclear data libraries, and all associated covariances. STAYSL PNNL is provided as part of a comprehensive suite of programs [2], where additional tools in the suite are used for assembling a set of nuclear data libraries and determining all required corrections to the measured data to determine saturated activation rates. Neutron cross section and covariance data are taken from the International Reactor Dosimetry File (IRDF-2002) [3], which was sponsored by the International Atomic Energy Agency (IAEA), though work is planned to update to data from the IAEA's International Reactor Dosimetry and Fusion File (IRDFF) [4]. The nuclear data and associated covariances are extracted from IRDF-2002 using the third-party NJOY99 computer code [5]. The NJpp translation code converts the extracted data into a library data array format suitable for use as input to STAYSL PNNL. The software suite also includes three utilities to calculate corrections to measured activation rates. Neutron self-shielding corrections are calculated as a function of neutron energy with the SHIELD code and are applied to the group cross sections prior to spectral adjustment, thus making the corrections independent of the neutron spectrum. The SigPhi Calculator is a Microsoft Excel spreadsheet used for calculating saturated activation rates from raw gamma activities by applying corrections for gamma self-absorption, neutron burn-up, and the irradiation history. Gamma self-absorption and neutron burn-up corrections are calculated (iteratively in the case of the burn-up) within the SigPhi Calculator spreadsheet. 
The irradiation history corrections are calculated using the BCF computer code and are inserted into the SigPhi Calculator workbook for use in correcting the measured activities. Output from the SigPhi Calculator is automatically produced, and consists of a portion of the STAYSL PNNL input file data that is required to run the spectral adjustment calculations. Within STAYSL PNNL, the least-squares process is performed in one step, without iteration, and provides rapid results on PC platforms. STAYSL PNNL creates multiple output files with tabulated results, data suitable for plotting, and data formatted for use in subsequent radiation damage calculations using the SPECTER computer code (which is not included in the STAYSL PNNL suite). All components of the software suite have undergone extensive testing and validation prior to release and test cases are provided with the package.
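The one-step least-squares update at the heart of a STAY'SL-style adjustment can be reduced to a toy case: a two-group spectrum and a single activation reaction, where the generalized-least-squares formula collapses to a scalar gain. All numbers below are invented for illustration; the real code handles many reactions, full covariance matrices, and hundreds of energy groups.

```python
# Toy one-reaction, two-group spectral adjustment in the spirit of
# STAY'SL. With prior group fluxes phi0 (variances p0), group cross
# sections s, and a measured saturated rate m (variance vm), the
# single-measurement least-squares update is
#   phi = phi0 + p0*s * (m - s.phi0) / (s.p0.s + vm),
# the scalar case of the usual generalized-least-squares formula.

def adjust(phi0, p0, s, m, vm):
    r = m - sum(si * fi for si, fi in zip(s, phi0))    # rate residual
    denom = sum(si * si * pi for si, pi in zip(s, p0)) + vm
    gain = [pi * si / denom for pi, si in zip(p0, s)]  # Kalman-like gain
    return [fi + g * r for fi, g in zip(phi0, gain)]

if __name__ == "__main__":
    phi0 = [1.0, 2.0]     # prior group fluxes (arbitrary units)
    p0 = [0.04, 0.16]     # prior flux variances
    s = [0.5, 0.25]       # group cross sections for the reaction
    m, vm = 1.2, 0.01     # measured rate and its variance
    phi = adjust(phi0, p0, s, m, vm)
    prior_rate = sum(si * fi for si, fi in zip(s, phi0))
    post_rate = sum(si * fi for si, fi in zip(s, phi))
    # the adjusted spectrum reproduces the measured rate more closely
    assert abs(post_rate - m) < abs(prior_rate - m)
```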
Development and application of deep convolutional neural network in target detection
NASA Astrophysics Data System (ADS)
Jiang, Xiaowei; Wang, Chunping; Fu, Qiang
2018-04-01
With the development of big data and algorithms, deep convolutional neural networks with more hidden layers have more powerful feature learning and feature expression ability than traditional machine learning methods, allowing artificial intelligence to surpass human-level performance in many fields. This paper first reviews the development and application of deep convolutional neural networks in the field of object detection in recent years, then briefly summarizes some existing problems in the current research, and finally discusses prospects for the future development of deep convolutional neural networks.
Construction of type-II QC-LDPC codes with fast encoding based on perfect cyclic difference sets
NASA Astrophysics Data System (ADS)
Li, Ling-xiang; Li, Hai-bing; Li, Ji-bi; Jiang, Hua
2017-09-01
In view of the problems that the encoding complexity of quasi-cyclic low-density parity-check (QC-LDPC) codes is high and the minimum distance is not large enough, which leads to the degradation of the error-correction performance, new irregular type-II QC-LDPC codes based on perfect cyclic difference sets (CDSs) are constructed. The parity check matrices of these type-II QC-LDPC codes consist of the zero matrices with weight of 0, the circulant permutation matrices (CPMs) with weight of 1, and the circulant matrices with weight of 2 (W2CMs). The introduction of W2CMs in parity check matrices makes it possible to achieve a larger minimum distance, which can improve the error-correction performance of the codes. The Tanner graphs of these codes have no girth-4 cycles, thus they have excellent decoding convergence characteristics. In addition, because the parity check matrices have the quasi-dual diagonal structure, the fast encoding algorithm can reduce the encoding complexity effectively. Simulation results show that the new type-II QC-LDPC codes can achieve a more excellent error-correction performance and have no error floor phenomenon over the additive white Gaussian noise (AWGN) channel with sum-product algorithm (SPA) iterative decoding.
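The building blocks named in the abstract are easy to construct explicitly. As a sketch (illustrative only, not the paper's CDS-based construction): an L-by-L circulant permutation matrix (CPM, row weight 1) is the identity cyclically shifted by some offset, and a weight-2 circulant (W2CM) is the superposition of two CPMs with distinct offsets; a QC-LDPC parity-check matrix is then tiled from such blocks.

```python
# Build L x L binary circulants from a set of cyclic shift offsets:
# one offset gives a circulant permutation matrix (CPM, weight 1),
# two distinct offsets give a weight-2 circulant matrix (W2CM).

def circulant(L, shifts):
    """Row r has a 1 in column (r + s) % L for each shift s."""
    return [[1 if any((r + s) % L == c for s in shifts) else 0
             for c in range(L)] for r in range(L)]

if __name__ == "__main__":
    cpm = circulant(5, [2])       # weight-1 CPM (shifted identity)
    w2cm = circulant(5, [1, 3])   # weight-2 circulant
    assert all(sum(row) == 1 for row in cpm)
    assert all(sum(row) == 2 for row in w2cm)
    # in a circulant, every column weight equals the row weight
    assert all(sum(col) == 2 for col in zip(*w2cm))
```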
NASA Technical Reports Server (NTRS)
1975-01-01
A system is presented which processes FORTRAN based software systems to surface potential problems before they become execution malfunctions. The system complements the diagnostic capabilities of compilers, loaders, and execution monitors rather than duplicating these functions. Also, it emphasizes frequent sources of FORTRAN problems which require inordinate manual effort to identify. The principal value of the system is extracting small sections of unusual code from the bulk of normal sequences. Code structures likely to cause immediate or future problems are brought to the user's attention. These messages stimulate timely corrective action of solid errors and promote identification of 'tricky' code. Corrective action may require recoding or simply extending software documentation to explain the unusual technique.
Coded Modulation in C and MATLAB
NASA Technical Reports Server (NTRS)
Hamkins, Jon; Andrews, Kenneth S.
2011-01-01
This software, written separately in C and MATLAB as stand-alone packages with equivalent functionality, implements encoders and decoders for a set of nine error-correcting codes and modulators and demodulators for five modulation types. The software can be used as a single program to simulate the performance of such coded modulation. The error-correcting codes implemented are the nine accumulate repeat-4 jagged accumulate (AR4JA) low-density parity-check (LDPC) codes, which have been approved for international standardization by the Consultative Committee for Space Data Systems, and which are scheduled to fly on a series of NASA missions in the Constellation Program. The software implements the encoder and decoder functions, and contains compressed versions of generator and parity-check matrices used in these operations.
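The core encoder operation such a package performs is a matrix-vector product over GF(2): codeword = message x generator matrix. The real AR4JA generator matrices are large and stored compressed; the tiny systematic example below uses the (7,4) Hamming code instead, purely to show the shape of the operation.

```python
# Generic block-code encoding over GF(2): each codeword bit is the
# mod-2 inner product of the message with a column of the generator
# matrix G. Illustrative only; not the AR4JA matrices themselves.

def encode(bits, G):
    """Codeword = bits * G over GF(2)."""
    return [sum(b & g for b, g in zip(bits, col)) % 2
            for col in zip(*G)]

if __name__ == "__main__":
    # Systematic generator matrix of the (7,4) Hamming code.
    G = [[1, 0, 0, 0, 1, 1, 0],
         [0, 1, 0, 0, 1, 0, 1],
         [0, 0, 1, 0, 0, 1, 1],
         [0, 0, 0, 1, 1, 1, 1]]
    cw = encode([1, 0, 1, 1], G)
    assert cw[:4] == [1, 0, 1, 1]   # systematic part is the message
    assert len(cw) == 7             # 3 parity bits appended
```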
ERIC Educational Resources Information Center
Miellet, Sebastien; Sparrow, Laurent
2004-01-01
This experiment employed the boundary paradigm during sentence reading to explore the nature of early phonological coding in reading. Fixation durations were shorter when the parafoveal preview was the correct word than when it was a spelling control pseudoword. In contrast, there was no significant difference between correct word and…
77 FR 72913 - Defining Larger Participants of the Consumer Debt Collection Market; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-07
...to and codified in the Code of Federal Regulations, which is published under 50 titles pursuant to 44 U.S.C. 1510. The Code of Federal Regulations is sold by the Superintendent of Documents. ... collection. The final rule contained four typographical errors, which this document corrects. Three of these...
Local non-Calderbank-Shor-Steane quantum error-correcting code on a three-dimensional lattice
NASA Astrophysics Data System (ADS)
Kim, Isaac H.
2011-05-01
We present a family of non-Calderbank-Shor-Steane quantum error-correcting code consisting of geometrically local stabilizer generators on a 3D lattice. We study the Hamiltonian constructed from ferromagnetic interaction of overcomplete set of local stabilizer generators. The degenerate ground state of the system is characterized by a quantum error-correcting code whose number of encoded qubits are equal to the second Betti number of the manifold. These models (i) have solely local interactions; (ii) admit a strong-weak duality relation with an Ising model on a dual lattice; (iii) have topological order in the ground state, some of which survive at finite temperature; and (iv) behave as classical memory at finite temperature.
A spectral nudging method for the ACCESS1.3 atmospheric model
NASA Astrophysics Data System (ADS)
Uhe, P.; Thatcher, M.
2015-06-01
A convolution-based method of spectral nudging of atmospheric fields is developed in the Australian Community Climate and Earth Systems Simulator (ACCESS) version 1.3, which uses the UK Met Office Unified Model version 7.3 as its atmospheric component. The use of convolutions allows for flexibility in application to different atmospheric grids. An approximation using one-dimensional convolutions is applied, improving the time taken by the nudging scheme by 10-30 times compared with a version using a two-dimensional convolution, without measurably degrading its performance. Care needs to be taken in the order of the convolutions and the frequency of nudging to obtain the best outcome. The spectral nudging scheme is benchmarked against a Newtonian relaxation method, nudging winds and air temperature towards ERA-Interim reanalyses. We find that the convolution approach can produce results that are competitive with Newtonian relaxation in both the effectiveness and efficiency of the scheme, while giving the added flexibility of choosing which length scales to nudge.
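The separability trick behind the 10-30x speedup can be demonstrated in miniature: when a 2-D smoothing kernel factors as an outer product of two 1-D kernels, applying the two 1-D convolutions in sequence gives the same field as a single 2-D convolution at far lower cost. The kernel and field sizes below are invented; the ACCESS scheme operates on full model grids.

```python
# Separable 2-D convolution via two 1-D passes: convolve every row
# with kx, then every column of the result with ky. For an impulse
# input, the output is the outer product of the two kernels.

def conv1d(row, k):
    """'Same'-size 1-D convolution with zero padding (symmetric kernel)."""
    h = len(k) // 2
    pad = [0.0] * h + row + [0.0] * h
    return [sum(k[j] * pad[i + j] for j in range(len(k)))
            for i in range(len(row))]

def conv2d_separable(field, kx, ky):
    rows = [conv1d(r, kx) for r in field]               # row pass
    cols = [conv1d(list(c), ky) for c in zip(*rows)]    # column pass
    return [list(r) for r in zip(*cols)]                # transpose back

if __name__ == "__main__":
    field = [[0.0] * 5 for _ in range(5)]
    field[2][2] = 1.0                    # unit impulse
    k = [0.25, 0.5, 0.25]                # 1-D smoothing kernel
    out = conv2d_separable(field, k, k)
    # impulse response = outer product of the kernels
    assert abs(out[2][2] - 0.25) < 1e-12
    assert abs(out[1][2] - 0.125) < 1e-12
```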
A spectral nudging method for the ACCESS1.3 atmospheric model
NASA Astrophysics Data System (ADS)
Uhe, P.; Thatcher, M.
2014-10-01
A convolution-based method of spectral nudging of atmospheric fields is developed in the Australian Community Climate and Earth Systems Simulator (ACCESS) version 1.3, which uses the UK Met Office Unified Model version 7.3 as its atmospheric component. The use of convolutions allows flexibility in application to different atmospheric grids. An approximation using one-dimensional convolutions is applied, improving the time taken by the nudging scheme by 10 to 30 times compared with a version using a two-dimensional convolution, without measurably degrading its performance. Care needs to be taken in the order of the convolutions and the frequency of nudging to obtain the best outcome. The spectral nudging scheme is benchmarked against a Newtonian relaxation method, nudging winds and air temperature towards ERA-Interim reanalyses. We find that the convolution approach can produce results that are competitive with Newtonian relaxation in both the effectiveness and efficiency of the scheme, while giving the added flexibility of choosing which length scales to nudge.