Sample records for reduced ML-decoding complexity

  1. Soft-Decision Decoding of Binary Linear Block Codes Based on an Iterative Search Algorithm

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Moorthy, H. T.

    1997-01-01

    This correspondence presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords one at a time using a set of test error patterns that are constructed based on the reliability information of the received symbols. When a candidate codeword is generated, it is tested based on an optimality condition. If it satisfies the optimality condition, then it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and search continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from the optimal maximum-likelihood decoding with a significant reduction in decoding complexity compared with the Viterbi decoding based on the full trellis diagram of the codes.
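
    The candidate-generation idea can be illustrated with a minimal Chase-style sketch in Python: test error patterns are applied to the least reliable positions, each pattern is handed to a bounded-distance algebraic decoder, and the candidate with the best correlation metric is kept. The (7,4) Hamming code, the correlation metric, and all names are illustrative assumptions, not the paper's exact algorithm (which adds the optimality test and the purged-trellis search).

    ```python
    import itertools
    import numpy as np

    # Systematic (7,4) Hamming code: G = [I | P], H = [P^T | I].
    P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
    G = np.hstack([np.eye(4, dtype=int), P])
    H = np.hstack([P.T, np.eye(3, dtype=int)])
    SYN = {tuple(H[:, i]): i for i in range(7)}     # single-error syndromes

    def bd_decode(hard):
        """Bounded-distance decoder: flip at most one bit to zero the syndrome."""
        s = tuple(H.dot(hard) % 2)
        if any(s):
            if s not in SYN:
                return None                         # decoding failure
            hard = hard.copy()
            hard[SYN[s]] ^= 1
        return hard

    def chase_decode(r, num_lrp=3):
        """Generate candidates from test error patterns on the least
        reliable positions; keep the most correlated candidate."""
        hard = (r < 0).astype(int)                  # BPSK: bit 0 -> +1
        lrp = np.argsort(np.abs(r))[:num_lrp]       # least reliable positions
        best, best_metric = None, -np.inf
        for flips in itertools.product([0, 1], repeat=num_lrp):
            test = hard.copy()
            test[lrp] ^= flips
            cand = bd_decode(test)
            if cand is not None:
                metric = np.dot(1 - 2 * cand, r)    # correlation with r
                if metric > best_metric:
                    best, best_metric = cand, metric
        return best

    c = np.array([1, 0, 1, 1]).dot(G) % 2
    r = (1 - 2 * c) + np.random.default_rng(0).normal(0, 0.6, 7)
    print(chase_decode(r), c)                       # usually identical
    ```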

  2. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; An Iterative Decoding Algorithm for Linear Block Codes Based on a Low-Weight Trellis Search

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    For long linear block codes, maximum likelihood decoding based on full code trellises would be very hard, if not impossible, to implement. In this case, we may wish to trade error performance for a reduction in decoding complexity. Suboptimum soft-decision decoding of a linear block code based on a low-weight sub-trellis can be devised to provide an effective trade-off between error performance and decoding complexity. This chapter presents such a suboptimum decoding algorithm for linear block codes. This decoding algorithm is iterative in nature and based on an optimality test. It has the following important features: (1) a simple method to generate a sequence of candidate code-words, one at a time, for test; (2) a sufficient condition for testing a candidate code-word for optimality; and (3) a low-weight sub-trellis search for finding the most likely (ML) code-word.

  3. FPGA implementation of low complexity LDPC iterative decoder

    NASA Astrophysics Data System (ADS)

    Verma, Shivani; Sharma, Sanjay

    2016-07-01

    Low-density parity-check (LDPC) codes, proposed by Gallager, emerged as a class of codes which can yield very good performance on the additive white Gaussian noise channel as well as on the binary symmetric channel. LDPC codes have gained importance due to their capacity-achieving property and excellent performance in noisy channels. The belief propagation (BP) algorithm and its approximations, most notably min-sum, are popular iterative decoding algorithms used for LDPC and turbo codes. The trade-off between hardware complexity and decoding throughput is a critical factor in the implementation of a practical decoder. This article presents an introduction to LDPC codes and their various decoding algorithms, followed by the realisation of an LDPC decoder using a simplified message passing algorithm and a partially parallel decoder architecture. The simplified message passing algorithm is proposed as a trade-off between low decoding complexity and decoder performance; it greatly reduces the routing and check node complexity of the decoder. The partially parallel decoder architecture offers high speed and reduced complexity. The improved design of the decoder achieves a maximum symbol throughput of 92.95 Mbps with a maximum of 18 decoding iterations. The article presents the implementation of a 9216-bit, rate-1/2, (3, 6) LDPC decoder on the Xilinx XC3SD3400A device from the Spartan-3A DSP family.
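
    As a concrete reference point for the min-sum approximation mentioned above, here is a compact flooding min-sum decoder in Python. It is a dense, illustrative sketch: the toy parity-check matrix and LLR values are assumptions for demonstration and bear no relation to the 9216-bit code or the partially parallel architecture of the article.

    ```python
    import numpy as np

    def min_sum_decode(H, llr, max_iter=18):
        """Flooding min-sum decoding. H: (m, n) binary parity-check matrix;
        llr: length-n channel LLRs, positive values favouring bit 0."""
        m, n = H.shape
        Q = H * llr                        # variable-to-check messages
        for _ in range(max_iter):
            # Check-node update: extrinsic sign product and minimum magnitude.
            mag = np.where(H == 1, np.abs(Q), np.inf)
            sgn = np.where(H == 1, np.where(Q < 0, -1.0, 1.0), 1.0)
            row_sign = sgn.prod(axis=1, keepdims=True)
            first = mag.min(axis=1, keepdims=True)
            idx = mag.argmin(axis=1)
            mag[np.arange(m), idx] = np.inf
            second = mag.min(axis=1, keepdims=True)
            ext_min = np.where(np.arange(n) == idx[:, None], second, first)
            R = H * (row_sign * sgn) * ext_min      # sgn**-1 == sgn for +/-1
            # Variable-node update and tentative hard decision.
            total = llr + R.sum(axis=0)
            Q = H * (total - R)
            x = (total < 0).astype(int)
            if not (H.dot(x) % 2).any():            # all parity checks satisfied
                return x
        return x

    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])
    llr = np.array([2.5, -0.4, 3.1, 1.8, 2.2, 2.7])  # bit 1 received unreliably
    print(min_sum_decode(H, llr))                    # -> [0 0 0 0 0 0]
    ```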

  4. Adaptive decoding of convolutional codes

    NASA Astrophysics Data System (ADS)

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi decoder. On the one hand, the Viterbi decoder is an optimum maximum likelihood decoder, i.e. it obtains the most probable transmitted code sequence. On the other hand, the computational complexity of the algorithm depends only on the code used, not on the number of transmission errors. To reduce the complexity of the decoding process under good transmission conditions, an alternative syndrome-based decoder is presented. The reduction of complexity is realized by two different approaches, syndrome zero sequence deactivation and path metric equalization. The two approaches enable an easy adaptation of the decoding complexity to different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
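
    The syndrome idea is easy to make concrete: for a rate-1/2 code with generator polynomials g1 and g2, the hard-decision syndrome s = r1*g2 + r2*g1 (mod 2) is zero for error-free received sequences, so long zero runs in the syndrome allow cheap pass-through decoding. A sketch with the standard (7,5) octal example code; the polynomials and the zero-run reading are illustrative, not the paper's exact deactivation rule.

    ```python
    import numpy as np

    def conv_encode(u, g1, g2):
        """Rate-1/2 convolutional encoder (polynomial multiplication mod 2)."""
        return np.convolve(u, g1) % 2, np.convolve(u, g2) % 2

    def syndrome(r1, r2, g1, g2):
        """Syndrome former: s = r1*g2 + r2*g1 (mod 2).
        Zero wherever the hard decisions look locally error-free."""
        return (np.convolve(r1, g2) + np.convolve(r2, g1)) % 2

    g1 = np.array([1, 1, 1])   # 7 (octal), the standard K=3 example code
    g2 = np.array([1, 0, 1])   # 5 (octal)

    u = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    v1, v2 = conv_encode(u, g1, g2)

    r1, r2 = v1.copy(), v2.copy()
    r2[4] ^= 1                          # single transmission error
    print(syndrome(r1, r2, g1, g2))     # nonzero only near the error position
    ```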

  5. Low complexity Reed-Solomon-based low-density parity-check design for software defined optical transmission system based on adaptive puncturing decoding algorithm

    NASA Astrophysics Data System (ADS)

    Pan, Xiaolong; Liu, Bo; Zheng, Jianglong; Tian, Qinghua

    2016-08-01

    We propose and demonstrate a low complexity Reed-Solomon-based low-density parity-check (RS-LDPC) code with an adaptive puncturing decoding algorithm for elastic optical transmission systems. Part of the received codes and the relevant columns of the parity-check matrix can be punctured to reduce the computational complexity via an adaptive parity-check matrix during the decoding process. The results show that the complexity of the proposed decoding algorithm is reduced by 30% compared with the regular RS-LDPC system. The optimized code rate of the RS-LDPC code can be obtained after five iterations.

  6. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1998-01-01

    Decoding algorithms based on the trellis representation of a code (block or convolutional) drastically reduce decoding complexity. The best known and most commonly used trellis-based decoding algorithm is the Viterbi algorithm, a maximum likelihood decoding algorithm. Convolutional codes with Viterbi decoding have been widely used for error control in digital communications over the last two decades. This chapter is concerned with the application of the Viterbi decoding algorithm to linear block codes. First, the Viterbi algorithm is presented. Then, optimum sectionalization of a trellis to minimize the computational complexity of a Viterbi decoder is discussed and an algorithm is presented. Some design issues for IC (integrated circuit) implementation of a Viterbi decoder are considered and discussed. Finally, a new decoding algorithm based on the principle of compare-select-add is presented. This new algorithm can be applied to both block and convolutional codes and is more efficient than the conventional Viterbi algorithm based on the add-compare-select principle. It is particularly efficient for rate-1/n antipodal convolutional codes and their high-rate punctured codes, reducing computational complexity by one-third compared with the Viterbi algorithm.
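
    For reference, the add-compare-select recursion that the chapter's compare-select-add variant reorders can be sketched generically in Python; the trellis data structure and the two-section toy code are illustrative assumptions.

    ```python
    def viterbi(sections, received):
        """Generic add-compare-select over a trellis. sections[t] is a list
        of (prev_state, next_state, label) branches; received[t] is the
        hard-decision tuple observed in section t."""
        metric = {0: 0}             # start in state 0 with metric 0
        survivor = {0: []}
        for branches, r in zip(sections, received):
            new_metric, new_surv = {}, {}
            for prev, nxt, label in branches:
                if prev not in metric:
                    continue
                # add: path metric + branch metric (Hamming distance)
                m = metric[prev] + sum(a != b for a, b in zip(label, r))
                # compare/select: keep the better path into each state
                if nxt not in new_metric or m < new_metric[nxt]:
                    new_metric[nxt] = m
                    new_surv[nxt] = survivor[prev] + [label]
            metric, survivor = new_metric, new_surv
        best = min(metric, key=metric.get)
        return survivor[best], metric[best]

    # Two-section toy trellis for the length-4 code {0000, 1111}:
    sections = [
        [(0, 0, (0, 0)), (0, 1, (1, 1))],
        [(0, 0, (0, 0)), (1, 0, (1, 1))],
    ]
    path, cost = viterbi(sections, [(1, 0), (1, 1)])
    print(path, cost)   # -> [(1, 1), (1, 1)], 1  (closest codeword is 1111)
    ```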

  7. On decoding of multi-level MPSK modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Gupta, Alok Kumar

    1990-01-01

    The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder, by reducing the branch and path metrics using a non-uniform floating-point-to-integer mapping scheme, is proposed and discussed. Simulation results for the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that soft-decision MSD reduces the decoding complexity drastically, at the cost of being suboptimum. Hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard-decision multistage decoding.

  8. The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates

    NASA Technical Reports Server (NTRS)

    Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2008-01-01

    We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.

  9. Direct migration motion estimation and mode decision to decoder for a low-complexity decoder Wyner-Ziv video coding

    NASA Astrophysics Data System (ADS)

    Lei, Ted Chih-Wei; Tseng, Fan-Shuo

    2017-07-01

    This paper addresses the problem of the high computational complexity of decoding in traditional Wyner-Ziv video coding (WZVC). The key focus is the migration of two traditionally computationally complex encoder algorithms, namely motion estimation and mode decision. In order to reduce the computational burden in this process, the proposed architecture adopts the partial boundary matching algorithm and four flexible types of block mode decision at the decoder. This approach does away with the need for motion estimation and mode decision at the encoder. The experimental results show that the proposed padding block-based WZVC not only decreases decoder complexity to approximately one hundredth that of state-of-the-art DISCOVER decoding but also outperforms the DISCOVER codec by up to 3 to 4 dB.

  10. A low-complexity Reed-Solomon decoder using new key equation solver

    NASA Astrophysics Data System (ADS)

    Xie, Jun; Yuan, Songxin; Tu, Xiaodong; Zhang, Chongfu

    2006-09-01

    This paper presents a low-complexity parallel Reed-Solomon (RS) (255,239) decoder architecture using a novel pipelined variable-stage recursive Modified Euclidean (ME) algorithm for optical communication. A pipelined four-parallel syndrome generator is proposed. Time multiplexing and resource sharing schemes are used in the novel recursive ME algorithm to reduce the logic gate count. The new key equation solver can be shared by two decoder macros. A new Chien search cell which does not need initialization is also proposed. The proposed decoder can be used in devices with data rates of 2.5 Gb/s. The decoder is implemented on an Altera Stratix II device. Resource utilization is reduced by about 40% compared with the conventional method.

  11. On the reduced-complexity of LDPC decoders for ultra-high-speed optical transmission.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2010-10-25

    We propose two reduced-complexity (RC) LDPC decoders, which can be used in combination with large-girth LDPC codes to enable ultra-high-speed serial optical transmission. We show that the optimally attenuated RC min-sum algorithm performs only 0.46 dB (at a BER of 10^-9) worse than the conventional sum-product algorithm, while having lower storage memory requirements and much lower latency. We further study the use of RC LDPC decoding algorithms in multilevel coded modulation with coherent detection and show that with RC decoding algorithms we can achieve a net coding gain larger than 11 dB at BERs below 10^-9.
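
    The attenuation in question scales the min-sum check-node magnitude by a factor alpha < 1 (often called normalized min-sum) to correct min-sum's overestimate of the sum-product message; a one-function sketch, with alpha = 0.8 as an illustrative value rather than the paper's optimum.

    ```python
    import numpy as np

    def check_update_attenuated(msgs, alpha=0.8):
        """Attenuated (normalized) min-sum check-node update for one check.
        msgs: incoming variable-to-check LLRs; alpha: attenuation factor
        (0.8 is a typical illustrative value, not the paper's optimum)."""
        msgs = np.asarray(msgs, dtype=float)
        out = np.empty_like(msgs)
        for k in range(len(msgs)):
            others = np.delete(msgs, k)
            out[k] = alpha * np.prod(np.sign(others)) * np.min(np.abs(others))
        return out

    print(check_update_attenuated([2.0, -1.5, 0.7, 3.2]))
    ```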

  12. Complementary Reliability-Based Decodings of Binary Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1997-01-01

    This correspondence presents a hybrid reliability-based decoding algorithm which combines the reprocessing method based on the most reliable basis and a generalized Chase-type algebraic decoder based on the least reliable positions. It is shown that reprocessing with a simple additional algebraic decoding effort achieves significant coding gain. For long codes, the order of reprocessing required to achieve asymptotically optimum error performance is reduced by approximately 1/3. This significantly reduces the computational complexity, especially for long codes. Also, a more efficient criterion for stopping the decoding process is derived based on the knowledge of the algebraic decoding solution.

  13. Multi-stage decoding for multi-level block modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao

    1991-01-01

    Various types of multistage decoding for multilevel block modulation codes, in which the decoding of a component code at each stage can be either soft decision or hard decision, maximum likelihood or bounded distance, are discussed. Error performance is analyzed for a memoryless additive channel under the various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. It was found that, if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. It was also found that the difference in performance between suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multi-stage decoding of multi-level modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.

  14. Deep Learning Methods for Improved Decoding of Linear Codes

    NASA Astrophysics Data System (ADS)

    Nachmani, Eliya; Marciano, Elad; Lugosch, Loren; Gross, Warren J.; Burshtein, David; Be'ery, Yair

    2018-02-01

    The problem of low complexity, close to optimal, channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly fewer parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close to optimal decoder of short BCH codes.

  15. Multi-stage decoding for multi-level block modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1991-01-01

    In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance. Error performance of codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computation results, we find that, if component codes of a multi-level modulation code and types of decoding at various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multi-stage decoding of multi-level modulation codes really offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.

  16. Bandwidth efficient coding for satellite communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Costello, Daniel J., Jr.; Miller, Warner H.; Morakis, James C.; Poland, William B., Jr.

    1992-01-01

    An error control coding scheme was devised to achieve large coding gain and high reliability by using coded modulation with reduced decoding complexity. To achieve a 3 to 5 dB coding gain and moderate reliability, the decoding complexity is quite modest. In fact, to achieve a 3 dB coding gain, the decoding complexity is quite simple, no matter whether trellis coded modulation or block coded modulation is used. However, to achieve coding gains exceeding 5 dB, the decoding complexity increases drastically, and the implementation of the decoder becomes very expensive and impractical. The use of coded modulation in conjunction with concatenated (or cascaded) coding is proposed. A good short bandwidth-efficient modulation code is used as the inner code and a relatively powerful Reed-Solomon code is used as the outer code. With properly chosen inner and outer codes, a concatenated coded modulation scheme not only can achieve large coding gains and high reliability with good bandwidth efficiency but also can be practically implemented. This combination of coded modulation and concatenated coding offers a way of achieving the best of three worlds: reliability and coding gain, bandwidth efficiency, and decoding complexity.

  17. Distributed Coding/Decoding Complexity in Video Sensor Networks

    PubMed Central

    Cordeiro, Paulo J.; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality. PMID:22736972

  18. Distributed coding/decoding complexity in video sensor networks.

    PubMed

    Cordeiro, Paulo J; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality.

  19. Robust pattern decoding in shape-coded structured light

    NASA Astrophysics Data System (ADS)

    Tang, Suming; Zhang, Xu; Song, Zhan; Song, Lifang; Zeng, Hai

    2017-09-01

    Decoding is a challenging and complex problem in a coded structured light system. In this paper, a robust pattern decoding method is proposed for shape-coded structured light in which the pattern is designed as a grid with embedded geometrical shapes. In our decoding method, advancements are made in three steps. First, a multi-template feature detection algorithm is introduced to detect the feature points at the intersections of pairs of orthogonal grid lines. Second, pattern element identification is modelled as a supervised classification problem and a deep neural network is applied for the accurate classification of pattern elements; beforehand, a training dataset is established which contains a large number of pattern elements with various blurrings and distortions. Third, an error correction mechanism based on epipolar, coplanarity and topological constraints is presented to reduce false matches. In the experiments, several complex objects including a human hand are chosen to test the accuracy and robustness of the proposed method. The experimental results show that our decoding method not only has high decoding accuracy but also strong robustness to surface color and complex textures.

  20. Multi-stage decoding of multi-level modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Costello, Daniel J., Jr.

    1991-01-01

    Various types of multi-stage decoding for multi-level modulation codes are investigated. It is shown that if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, it is shown that the difference in performance between suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and single-stage optimum soft-decision decoding of the code is very small: only a fraction of a dB loss in signal-to-noise ratio at a bit error rate (BER) of 10^-6.
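
    The multi-stage principle can be seen in miniature with two levels, 4-PSK, and repetition codes standing in for real component codes: stage 1 decodes the level-1 bit while marginalizing the undecided level-2 bit, and stage 2 decodes level 2 conditioned on that decision. Everything here (mapping, component codes, noise level) is an illustrative assumption, not a construction from these papers.

    ```python
    import numpy as np

    def psk_point(b0, b1):
        """Natural 4-PSK mapping; b0 selects the set partition."""
        return np.exp(1j * (np.pi / 2) * (2 * b0 + b1))

    def multistage_decode(r):
        """Two-level multistage decoding with length-n repetition codes as
        component codes (a deliberately tiny illustration of the principle)."""
        # Stage 1: decode the level-1 bit, minimizing over the unknown level-2 bit.
        m0 = [sum(min(abs(rt - psk_point(u0, b1)) ** 2 for b1 in (0, 1))
                  for rt in r) for u0 in (0, 1)]
        u0 = int(np.argmin(m0))
        # Stage 2: level-2 decision conditioned on the stage-1 output.
        m1 = [sum(abs(rt - psk_point(u0, u1)) ** 2 for rt in r) for u1 in (0, 1)]
        return u0, int(np.argmin(m1))

    rng = np.random.default_rng(1)
    n, (u0, u1) = 5, (1, 0)
    r = psk_point(u0, u1) * np.ones(n) + 0.3 * (rng.normal(size=n)
                                                + 1j * rng.normal(size=n))
    print(multistage_decode(r))   # -> (1, 0) with high probability
    ```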

  1. Maximum likelihood decoding analysis of accumulate-repeat-accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, A.; Divsalar, D.; Yao, K.

    2004-01-01

    In this paper, the performance of repeat-accumulate codes under maximum-likelihood (ML) decoding is analyzed and compared to that of random codes by means of very tight bounds. Some simple codes are shown to perform very close to the Shannon limit with maximum-likelihood decoding.

  2. A reduced complexity highly power/bandwidth efficient coded FQPSK system with iterative decoding

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Divsalar, D.

    2001-01-01

    Based on a representation of FQPSK as a trellis-coded modulation, this paper investigates the potential improvement in power efficiency obtained from the application of simple outer codes to form a concatenated coding arrangement with iterative decoding.

  3. PSEUDO-CODEWORD LANDSCAPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CHERTKOV, MICHAEL; STEPANOV, MIKHAIL

    2007-01-10

    The authors discuss the performance of Low-Density Parity-Check (LDPC) codes decoded by Linear Programming (LP) decoding at moderate and large Signal-to-Noise Ratios (SNR). The Frame-Error-Rate (FER) dependence on SNR and the noise-space landscape of the coding/decoding scheme are analyzed by a combination of the previously introduced instanton/pseudo-codeword-search method and a new 'dendro' trick. To reduce the complexity of LP decoding for a code with high-degree checks (degree ≥ 5), they introduce its dendro-LDPC counterpart, a code performing identically to the original one under Maximum-A-Posteriori (MAP) decoding but having reduced (down to three) check connectivity degree. Analyzing a number of popular LDPC codes and their dendro versions performing over the Additive-White-Gaussian-Noise (AWGN) channel, they observed two qualitatively different regimes: (i) the error floor sets in early, at relatively low SNR, and (ii) FER decays faster with increasing SNR at moderate SNR than at the largest SNR. They explain these regimes in terms of the pseudo-codeword spectra of the codes.

  4. Joint carrier phase and frequency-offset estimation with parallel implementation for dual-polarization coherent receiver.

    PubMed

    Lu, Jianing; Li, Xiang; Fu, Songnian; Luo, Ming; Xiang, Meng; Zhou, Huibin; Tang, Ming; Liu, Deming

    2017-03-06

    We present a dual-polarization complex-weighted, decision-aided, maximum-likelihood algorithm with superscalar parallelization (SSP-DP-CW-DA-ML) for joint carrier phase and frequency-offset estimation (FOE) in coherent optical receivers. By pre-compensating the phase offset between the signals in the two polarizations, the performance can be substantially improved. Meanwhile, with the help of a modified SSP-based parallel implementation, the acquisition time of the frequency offset and the required number of training symbols are reduced by transferring the complex weights of the filters between adjacent buffers, and differential coding/decoding is not required. Simulation results show that the laser linewidth tolerance of our proposed algorithm is comparable to traditional blind phase search (BPS), while a complete FOE range of ± symbol rate/2 can be achieved. Finally, the performance of our proposed algorithm is experimentally verified in back-to-back (B2B) transmission using 10 Gbaud DP-16/32-QAM formats.

  5. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; The MAP and Related Decoding Algorithms

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability, so MLD is suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes and multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
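
    The ML/MAP distinction is easy to demonstrate by brute force on a tiny codebook: bitwise MAP sums codeword likelihoods to get a posterior LLR per bit, whereas ML picks the single closest codeword. This enumeration sketch (BPSK over AWGN, a (3,2) single-parity-check code) illustrates the definitions, not the trellis-based MAP algorithm of the chapter.

    ```python
    import numpy as np

    def map_bitwise(codebook, r, sigma=1.0):
        """Brute-force bitwise MAP for BPSK over AWGN with equiprobable
        codewords: posterior LLR of each bit from summed likelihoods."""
        X = 1 - 2 * codebook                          # bit 0 -> +1, bit 1 -> -1
        like = np.exp(-np.sum((r - X) ** 2, axis=1) / (2 * sigma ** 2))
        p1 = (like[:, None] * codebook).sum(axis=0) / like.sum()
        llr = np.log((1 - p1) / p1)                   # > 0 favours bit 0
        return (llr < 0).astype(int), llr

    codebook = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]])  # (3,2) SPC
    r = np.array([0.9, -0.1, 0.2])
    bits, llr = map_bitwise(codebook, r)
    ml = codebook[np.argmin(np.sum((r - (1 - 2 * codebook)) ** 2, axis=1))]
    # The bitwise decisions minimize bit error probability and, via the LLRs,
    # retain soft information; in general they need not even form a codeword.
    print(bits, llr, ml)
    ```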

  6. A high data rate universal lattice decoder on FPGA

    NASA Astrophysics Data System (ADS)

    Ma, Jing; Huang, Xinming; Kura, Swapna

    2005-06-01

    This paper presents the architecture design of a high data rate universal lattice decoder for MIMO channels on an FPGA platform. A Pohst-strategy-based lattice decoding algorithm is modified to reduce the complexity of the closest lattice point search. The data dependency of the improved algorithm is examined, and a parallel and pipelined architecture is developed, with the iterative decoding function on the FPGA and the division-intensive channel matrix preprocessing on a DSP. Simulation results demonstrate that the improved lattice decoding algorithm provides a better bit error rate and fewer iterations than the original algorithm. The system prototype of the decoder supports data rates up to 7 Mbit/s on a Virtex2-1000 FPGA, which is about 8 times faster than the original algorithm on an FPGA platform and two orders of magnitude better than its implementation on a DSP platform.
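
    The core task is the closest lattice point search. A brute-force sketch with a shrinking search radius conveys the idea; practical sphere decoders instead enumerate cleverly over a tree after a QR decomposition, and the basis and ranges below are illustrative assumptions.

    ```python
    import itertools
    import numpy as np

    def closest_lattice_point(B, y, zrange=range(-3, 4)):
        """Naive closest-point search in the lattice {B z : z integer},
        with the radius-shrinking idea of sphere decoding but none of its
        efficient tree enumeration. Illustration only."""
        best_z, radius2 = None, np.inf
        for z in itertools.product(zrange, repeat=B.shape[1]):
            d2 = np.sum((y - B.dot(z)) ** 2)
            if d2 < radius2:                 # shrink the sphere on each hit
                best_z, radius2 = np.array(z), d2
        return best_z, B.dot(best_z)

    B = np.array([[1.0, 0.4], [0.0, 1.1]])   # illustrative channel/lattice basis
    y = np.array([1.35, 1.2])
    z, x = closest_lattice_point(B, y)
    print(z, x)                              # -> [1 1], the nearest lattice point
    ```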

  7. On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, H.M.; Reed, I.S.

    A new VLSI design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous paper is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and significant reduction in silicon area, therefore making it possible to build a pipeline Reed-Solomon decoder on a single VLSI chip.

  8. On the reduced-complexity of LDPC decoders for beyond 400 Gb/s serial optical transmission

    NASA Astrophysics Data System (ADS)

    Djordjevic, Ivan B.; Xu, Lei; Wang, Ting

    2010-12-01

    Two reduced-complexity (RC) LDPC decoders are proposed, which can be used in combination with large-girth LDPC codes to enable beyond 400 Gb/s serial optical transmission. We show that optimally attenuated RC min-sum sum algorithm performs only 0.45 dB worse than conventional sum-product algorithm, while having lower storage memory requirements and much lower latency. We further evaluate the proposed algorithms for use in beyond 400 Gb/s serial optical transmission in combination with PolMUX 32-IPQ-based signal constellation and show that low BERs can be achieved for medium optical SNRs, while achieving the net coding gain above 11.4 dB.

  9. From classic motor imagery to complex movement intention decoding: The noninvasive Graz-BCI approach.

    PubMed

    Müller-Putz, G R; Schwarz, A; Pereira, J; Ofner, P

    2016-01-01

    In this chapter, we give an overview of the Graz-BCI research, from classic motor imagery detection to complex movement intention decoding. We start by describing the classic motor imagery approach, its application in tetraplegic end users, and the significant improvements achieved using coadaptive brain-computer interfaces (BCIs). These strategies have the drawback of not mirroring the way one plans a movement. To achieve a more natural control, and to reduce the training time, the movements decoded by the BCI need to be closely related to the user's intention. Within this natural control, we focus on the kinematic level, where movement direction and hand position or velocity can be decoded from noninvasive recordings. First, we review movement execution decoding studies, where we describe the decoding algorithms, their performance, and associated features. Second, we describe the major findings in movement imagination decoding, where we emphasize the importance of estimating the sources of the discriminative features. Third, we introduce movement target decoding, which could allow the determination of the target without knowing the exact movement-by-movement details. Aside from the kinematic level, we also address the goal level, which contains relevant information on the upcoming action. Focusing on hand-object interaction and action context dependency, we discuss the possible impact of some recent neurophysiological findings on the future of BCI control. Ideally, goal and kinematic decoding would allow an appropriate matching of the BCI to the end users' needs, overcoming the limitations of the classic motor imagery approach.

  10. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1998-01-01

    A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. Research on trellis representations of block codes, by contrast, long remained inactive. There are two major reasons for this inactive period of research. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes, and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two beliefs seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications, and led to a general view that block codes are inferior to convolutional codes and hence not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well known methods for constructing long powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises, which include Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination and tail-biting. Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computational complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder. Then it presents a new decoding algorithm for convolutional codes, named the Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword. This decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder. The decoding algorithms presented are the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA). Finally, the minimization of bit error probability in trellis-based MLD is discussed.

  11. On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays

    NASA Technical Reports Server (NTRS)

    Shao, H. M.; Deutsch, L. J.; Reed, I. S.

    1987-01-01

    A new very large scale integration (VLSI) design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous article is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and significant reduction in silicon area.

  12. On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays

    NASA Technical Reports Server (NTRS)

    Shao, Howard M.; Reed, Irving S.

    1988-01-01

    A new very large scale integration (VLSI) design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous article is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and significant reduction in silicon area.

  13. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.

    PubMed

    Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang

    2018-01-15

    In order to improve the performance of the non-binary low-density parity-check (LDPC) hard decision decoding algorithm and to reduce the complexity of decoding, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed. This will also help ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitude is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bit corresponding to the erroneous code word is flipped multiple times, searched in the order of most likely error probability, to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB, respectively, at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel. Furthermore, the average number of decoding iterations is significantly reduced.

  14. Hardware Implementation of a MIMO Decoder Using Matrix Factorization Based Channel Estimation

    NASA Astrophysics Data System (ADS)

    Islam, Mohammad Tariqul; Numan, Mostafa Wasiuddin; Misran, Norbahiah; Ali, Mohd Alauddin Mohd; Singh, Mandeep

    2011-05-01

    This paper presents an efficient hardware realization of a multiple-input multiple-output (MIMO) wireless communication decoder that utilizes the available resources by adopting the technique of parallelism. The hardware is designed and implemented on a Xilinx Virtex-4 XC4VLX60 field-programmable gate array (FPGA) device in a modular approach which simplifies and eases hardware updates, and facilitates testing of the various modules independently. The decoder involves a proficient channel estimation module that employs matrix factorization on least squares (LS) estimation to reduce a full-rank matrix into a simpler form in order to eliminate matrix inversion. This results in performance improvement and complexity reduction of the MIMO system. Performance evaluation of the proposed method is validated through MATLAB simulations, which indicate a 2 dB improvement in terms of SNR compared to LS estimation. Moreover, a complexity comparison is performed in terms of mathematical operations, which shows that the proposed approach appreciably outperforms LS estimation at a lower complexity and represents a good solution for channel estimation.
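
    One standard way to "reduce a full-rank matrix into a simpler form" and avoid the inversion in LS channel estimation is a QR factorization followed by a triangular solve; the paper's exact factorization may differ, so treat this numpy sketch and its dimensions as assumptions.

    ```python
    import numpy as np

    def ls_channel_qr(X, Y):
        """LS MIMO channel estimate without forming an explicit inverse.
        Y = H X + N, pilots X (n_t x T), received Y (n_r x T).
        Factor X^H = Q R, then solve the triangular system R Z = Q^H Y^H;
        in hardware the triangular solve replaces the matrix inversion."""
        Q, R = np.linalg.qr(X.conj().T)                  # reduced QR of X^H
        Z = np.linalg.solve(R, Q.conj().T @ Y.conj().T)  # R Z = Q^H Y^H
        return Z.conj().T                                # H estimate, n_r x n_t

    rng = np.random.default_rng(2)
    n_t, n_r, T = 2, 2, 8
    X = (rng.normal(size=(n_t, T)) + 1j * rng.normal(size=(n_t, T))) / np.sqrt(2)
    H = rng.normal(size=(n_r, n_t)) + 1j * rng.normal(size=(n_r, n_t))
    Y = H @ X + 0.05 * (rng.normal(size=(n_r, T)) + 1j * rng.normal(size=(n_r, T)))
    print(np.round(np.abs(H - ls_channel_qr(X, Y)), 2))  # small estimation error
    ```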

  15. A family of chaotic pure analog coding schemes based on baker's map function

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Li, Jing; Lu, Xuanxuan; Yuen, Chau; Wu, Jun

    2015-12-01

    This paper considers a family of pure analog coding schemes constructed from dynamic systems governed by chaotic functions: the baker's map function and its variants. Various decoding methods, including maximum likelihood (ML), minimum mean square error (MMSE), and mixed ML-MMSE decoding algorithms, are developed for these novel encoding schemes. The proposed mirrored baker's and single-input baker's analog codes provide balanced protection against fold error (large distortion) and weak distortion, and outperform the classical chaotic analog coding and analog joint source-channel coding schemes in the literature. Compared to a conventional digital communication system, where quantization and digital error correction codes are used, the proposed analog coding system has graceful performance evolution, low decoding latency, and no quantization noise. Numerical results show that under the same bandwidth expansion, the proposed analog system outperforms the digital ones over a wide signal-to-noise ratio (SNR) range.
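
    The flavour of such schemes: an expanding, baker's-map-style iteration spreads one analog source sample over several channel uses, and ML decoding searches the chaotic codebook. This is a conceptual single-input sketch with a grid-search decoder, an assumption for illustration rather than the paper's mirrored/variant constructions or its MMSE decoders.

    ```python
    import numpy as np

    def baker_encode(u, n):
        """Spread a source sample u in [0,1) over n channel uses by
        iterating the expanding (baker's-map-style) map x -> 2x mod 1."""
        x, out = u, []
        for _ in range(n):
            out.append(x - 0.5)        # centre each output for transmission
            x = (2 * x) % 1
        return np.array(out)

    def baker_decode_ml(r, n, grid=4096):
        """Approximate ML decoding by a dense grid search over the source."""
        cands = np.arange(grid) / grid
        codes = np.stack([baker_encode(u, n) for u in cands])
        return cands[np.argmin(np.sum((codes - r) ** 2, axis=1))]

    u = 0.6180339887
    r = baker_encode(u, 5) + 0.05 * np.random.default_rng(3).normal(size=5)
    print(baker_decode_ml(r, 5))       # close to u
    ```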

  16. A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications.

    PubMed

    Revathy, M; Saravanan, R

    2015-01-01

    Low-density parity-check (LDPC) codes have been adopted in the latest digital video broadcasting, broadband wireless access (WiMAX), and fourth-generation wireless standards. In this paper, we propose a highly efficient low-density parity-check (LDPC) decoder architecture for low power applications. This study also considers the design and analysis of the check node and variable node units and the Euclidean orthogonal generator in the LDPC decoder architecture. The Euclidean orthogonal generator is used to reduce the error rate of the proposed LDPC architecture and can be incorporated between the check and variable node architectures. The proposed decoder design is synthesized on the Xilinx 9.2i platform and simulated using ModelSim, targeting 45 nm devices. The synthesis report shows that the proposed architecture greatly reduces power consumption and hardware utilization in comparison with different conventional architectures.

  17. Sparsity-aware multiple relay selection in large multi-hop decode-and-forward relay networks

    NASA Astrophysics Data System (ADS)

    Gouissem, A.; Hamila, R.; Al-Dhahir, N.; Foufou, S.

    2016-12-01

    In this paper, we propose and investigate two novel techniques to perform multiple relay selection in large multi-hop decode-and-forward relay networks. The two proposed techniques exploit sparse signal recovery theory to select multiple relays using the orthogonal matching pursuit algorithm and outperform state-of-the-art techniques in terms of outage probability and computation complexity. To reduce the amount of collected channel state information (CSI), we propose a limited-feedback scheme where only a limited number of relays feedback their CSI. Furthermore, a detailed performance-complexity tradeoff investigation is conducted for the different studied techniques and verified by Monte Carlo simulations.
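
    The sparse-recovery workhorse referred to above, orthogonal matching pursuit, fits in a few lines; the relay-selection framing (columns as candidate relays) and all dimensions here are illustrative assumptions, not the paper's exact formulation.

    ```python
    import numpy as np

    def omp(A, y, k):
        """Orthogonal matching pursuit: greedily pick the k columns of A
        (here: candidate relays) that best explain y, re-fitting by least
        squares after every selection."""
        residual, support = y.astype(float), []
        for _ in range(k):
            corr = np.abs(A.T @ residual)
            corr[support] = -np.inf              # don't pick a column twice
            support.append(int(np.argmax(corr)))
            x, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ x
        return sorted(support)

    rng = np.random.default_rng(4)
    A = rng.normal(size=(20, 50))                # 50 candidate relays
    x_true = np.zeros(50)
    x_true[[7, 23, 41]] = [1.0, -1.2, 0.8]       # 3 relays actually useful
    y = A @ x_true + 0.01 * rng.normal(size=20)
    print(omp(A, y, 3))                          # -> [7, 23, 41] (typically)
    ```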

  18. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection

    PubMed Central

    Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang

    2018-01-01

    In order to improve the performance of the non-binary low-density parity-check (LDPC) hard decision decoding algorithm and to reduce the complexity of decoding, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed. This will also help ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitude is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bit corresponding to the erroneous code word is flipped multiple times, searched in the order of most likely error probability, to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB, respectively, at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel. Furthermore, the average number of decoding iterations is significantly reduced. PMID:29342963

  19. A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications

    PubMed Central

    Revathy, M.; Saravanan, R.

    2015-01-01

    Low-density parity-check (LDPC) codes have been adopted in the latest digital video broadcasting, broadband wireless access (WiMAX), and fourth-generation wireless standards. In this paper, we propose a highly efficient low-density parity-check (LDPC) decoder architecture for low power applications. This study also considers the design and analysis of the check node and variable node units and the Euclidean orthogonal generator in the LDPC decoder architecture. The Euclidean orthogonal generator is used to reduce the error rate of the proposed LDPC architecture and can be incorporated between the check and variable node architectures. The proposed decoder design is synthesized on the Xilinx 9.2i platform and simulated using ModelSim, targeting 45 nm devices. The synthesis report shows that the proposed architecture greatly reduces power consumption and hardware utilization in comparison with different conventional architectures. PMID:26065017

  20. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation

    PubMed Central

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2013-01-01

    Distributed video coding (DVC) is rapidly increasing in popularity by way of shifting complexity from the encoder to the decoder with, at least in theory, no degradation in compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistic between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance but with significantly lower complexity compared with sampling methods. PMID:23750314

  1. Adaptive distributed video coding with correlation estimation using expectation propagation

    NASA Astrophysics Data System (ADS)

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-01

    Distributed video coding (DVC) is rapidly increasing in popularity by way of shifting complexity from the encoder to the decoder with, at least in theory, no degradation in compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistic between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance but with significantly lower complexity compared with sampling methods.

  2. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation.

    PubMed

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-15

    Distributed video coding (DVC) is rapidly increasing in popularity by way of shifting complexity from the encoder to the decoder with, at least in theory, no degradation in compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistic between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance but with significantly lower complexity compared with sampling methods.

  3. Novel Quaternary Quantum Decoder, Multiplexer and Demultiplexer Circuits

    NASA Astrophysics Data System (ADS)

    Haghparast, Majid; Monfared, Asma Taheri

    2017-05-01

    Multiple-valued logic is a promising approach to reduce the width of reversible or quantum circuits. Moreover, quaternary logic is considered a good choice for future quantum computing technology, since it is well suited to the encoded realization of binary logic functions through the grouping of 2 bits into quaternary values. Quaternary decoders, multiplexers, and demultiplexers are essential units of quaternary digital systems. In this paper, we first design a quantum realization of the quaternary decoder circuit using quaternary 1-qudit gates and quaternary Muthukrishnan-Stroud gates. We then present quantum realizations of quaternary multiplexer and demultiplexer circuits using the constructed quaternary decoder circuit and quaternary controlled Feynman gates. The suggested circuits have a lower quantum cost and hardware complexity than the existing designs currently used in quaternary digital systems. All the measures applied in this paper are based on nanometric area.

  4. Preserved Affective Sharing But Impaired Decoding of Contextual Complex Emotions in Alcohol Dependence.

    PubMed

    Grynberg, Delphine; Maurage, Pierre; Nandrino, Jean-Louis

    2017-04-01

    Prior research has repeatedly shown that alcohol dependence is associated with a large range of impairments in psychological processes, which could lead to interpersonal deficits. Specifically, it has been suggested that these interpersonal difficulties are underpinned by reduced recognition and sharing of others' emotional states. However, this pattern of deficits remains to be clarified. This study thus aimed to investigate whether alcohol dependence is associated with impaired abilities in decoding contextual complex emotions and with altered sharing of others' emotions. Forty-one alcohol-dependent individuals (ADI) and 37 matched healthy individuals completed the Multifaceted Empathy Test, in which they were instructed to identify complex emotional states expressed by individuals in contextual scenes and to state to what extent they shared them. Compared to healthy individuals, ADI were impaired in identifying negative (Cohen's d = 0.75) and positive (Cohen's d = 0.46) emotional states but, conversely, presented preserved abilities in sharing others' emotional states. This study shows that alcohol dependence is characterized by an impaired ability to decode complex emotional states (both positive and negative), despite the presence of complementary contextual cues, but by preserved emotion-sharing. Therefore, these results extend earlier data describing an impaired ability to decode noncontextualized emotions toward contextualized and ecologically valid emotional states. They also indicate that some essential emotional competences such as emotion-sharing are preserved in alcohol dependence, thereby offering potential therapeutic levers.

  5. Maximum likelihood decoding analysis of Accumulate-Repeat-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    Repeat-Accumulate (RA) codes are the simplest turbo-like codes that achieve good performance. However, they cannot compete with turbo codes or low-density parity check (LDPC) codes as far as performance is concerned. Accumulate-Repeat-Accumulate (ARA) codes, a subclass of LDPC codes, are obtained by adding a precoder in front of punctured RA codes, where an accumulator is chosen as the precoder. These codes are not only very simple but also achieve excellent performance with iterative decoding. In this paper, the performance of these codes under maximum-likelihood (ML) decoding is analyzed and compared to random codes by means of very tight bounds. The weight distribution of some simple ARA codes is obtained, and through the tightest existing bounds we show that the ML SNR threshold of ARA codes approaches the performance of random codes very closely. We also show that the use of the precoder improves the SNR threshold, while the interleaving gain remains unchanged with respect to the punctured RA code.
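
    For reference, the simplest member of the family of bounds used in such analyses is the union bound, which needs only the weight distribution; the tight bounds in the paper refine this. A minimal sketch for BPSK over AWGN:

```python
import math

def union_bound_wer(weight_dist, n, k, ebno_db):
    """Union bound on the ML word error rate, BPSK over AWGN:
    P_w <= sum_d A_d * Q(sqrt(2 * d * R * Eb/N0)), with R = k/n.

    weight_dist maps Hamming weight d (d >= 1) to multiplicity A_d.
    """
    rate = k / n
    ebno = 10.0 ** (ebno_db / 10.0)
    q = lambda x: 0.5 * math.erfc(x / math.sqrt(2.0))
    return sum(a_d * q(math.sqrt(2.0 * d * rate * ebno))
               for d, a_d in weight_dist.items())

# Toy example with a hypothetical weight distribution:
print(union_bound_wer({6: 4, 8: 11}, n=16, k=5, ebno_db=4.0))
```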

  6. Achievable Information Rates for Coded Modulation With Hard Decision Decoding for Coherent Fiber-Optic Systems

    NASA Astrophysics Data System (ADS)

    Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi

    2017-12-01

    We analyze the achievable information rates (AIRs) for coded modulation schemes with QAM constellations with both bit-wise and symbol-wise decoders, corresponding to the case where a binary code is used in combination with a higher-order modulation under the bit-interleaved coded modulation (BICM) paradigm and to the case where a nonbinary code over a field matched to the constellation size is used, respectively. In particular, we consider hard decision decoding, which is the preferable option for fiber-optic communication systems where decoding complexity is a concern. Recently, Liga et al. analyzed the AIRs for bit-wise and symbol-wise decoders considering what the authors called a hard decision decoder which, however, exploits soft information from the transition probabilities of the discrete-input discrete-output channel resulting from hard detection. As such, the complexity of that decoder is essentially the same as that of a soft decision decoder. In this paper, we instead analyze the AIRs for the standard hard decision decoder, commonly used in practice, where decoding is based on the Hamming distance metric. We show that if standard hard decision decoding is used, bit-wise decoders yield significantly higher AIRs than symbol-wise decoders. As a result, contrary to the conclusion by Liga et al., binary decoders together with the BICM paradigm are preferable for spectrally-efficient fiber-optic systems. We also design binary and nonbinary staircase codes and show that, in agreement with the AIRs, binary codes yield better performance.
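
    The distinction can be reproduced numerically. The sketch below estimates a bit-wise hard-decision AIR for one Gray-mapped 4-PAM dimension of 16-QAM as the sum over bit levels of 1 - h2(p_i), with the per-level crossover probabilities p_i measured by Monte Carlo after minimum-distance hard detection; the SNR normalization used here is an assumption of this illustration, not the paper's setup.

```python
import numpy as np

def h2(p):
    """Binary entropy function, elementwise."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def hard_bicm_air_4pam(snr_db, n=200_000, seed=0):
    """Monte Carlo bit-wise hard-decision AIR for Gray-mapped 4-PAM."""
    rng = np.random.default_rng(seed)
    levels = np.array([-3.0, -1.0, 1.0, 3.0])
    gray = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])  # label per level
    sigma = np.sqrt(np.mean(levels ** 2) / 10 ** (snr_db / 10))
    idx = rng.integers(0, 4, n)
    y = levels[idx] + sigma * rng.standard_normal(n)
    det = np.argmin(np.abs(y[:, None] - levels[None, :]), axis=1)
    p = (gray[idx] != gray[det]).mean(axis=0)   # crossover per bit level
    return float(np.sum(1.0 - h2(p)))           # bits per PAM symbol
```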

  7. Good Trellises for IC Implementation of Viterbi Decoders for Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Moorthy, Hari T.; Lin, Shu; Uehara, Gregory T.

    1997-01-01

    This paper investigates trellis structures of linear block codes for the integrated circuit (IC) implementation of Viterbi decoders capable of achieving high decoding speed while satisfying a constraint on the structural complexity of the trellis in terms of the maximum number of states at any particular depth. Only uniform sectionalizations of the code trellis diagram are considered. An upper bound on the number of parallel and structurally identical (or isomorphic) subtrellises in a proper trellis for a code without exceeding the maximum state complexity of the minimal trellis of the code is first derived. Parallel structures of trellises with various section lengths for binary BCH and Reed-Muller (RM) codes of lengths 32 and 64 are analyzed. Next, the complexity of IC implementation of a Viterbi decoder based on an L-section trellis diagram for a code is investigated. A structural property of a Viterbi decoder called add-compare-select (ACS)-connectivity, which is related to state connectivity, is introduced. This parameter affects the complexity of wire-routing (interconnections within the IC). The effect of five parameters, namely (1) effective computational complexity; (2) complexity of the ACS-circuit; (3) traceback complexity; (4) ACS-connectivity; and (5) branch complexity of a trellis diagram, on the very large scale integration (VLSI) complexity of a Viterbi decoder is investigated. It is shown that an IC implementation of a Viterbi decoder based on a nonminimal trellis requires less area and is capable of operation at higher speed than one based on the minimal trellis when the commonly used ACS-array architecture is considered.

  8. Good trellises for IC implementation of viterbi decoders for linear block codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Moorthy, Hari T.; Uehara, Gregory T.

    1996-01-01

    This paper investigates trellis structures of linear block codes for the IC (integrated circuit) implementation of Viterbi decoders capable of achieving high decoding speed while satisfying a constraint on the structural complexity of the trellis in terms of the maximum number of states at any particular depth. Only uniform sectionalizations of the code trellis diagram are considered. An upper bound on the number of parallel and structurally identical (or isomorphic) subtrellises in a proper trellis for a code without exceeding the maximum state complexity of the minimal trellis of the code is first derived. Parallel structures of trellises with various section lengths for binary BCH and Reed-Muller (RM) codes of lengths 32 and 64 are analyzed. Next, the complexity of IC implementation of a Viterbi decoder based on an L-section trellis diagram for a code is investigated. A structural property of a Viterbi decoder called ACS-connectivity which is related to state connectivity is introduced. This parameter affects the complexity of wire-routing (interconnections within the IC). The effect of five parameters namely: (1) effective computational complexity; (2) complexity of the ACS-circuit; (3) traceback complexity; (4) ACS-connectivity; and (5) branch complexity of a trellis diagram on the VLSI complexity of a Viterbi decoder is investigated. It is shown that an IC implementation of a Viterbi decoder based on a non-minimal trellis requires less area and is capable of operation at higher speed than one based on the minimal trellis when the commonly used ACS-array architecture is considered.

  9. A four-dimensional virtual hand brain-machine interface using active dimension selection.

    PubMed

    Rouse, Adam G

    2016-06-01

    Brain-machine interfaces (BMI) traditionally rely on a fixed, linear transformation from neural signals to an output state-space. In this study, the assumption that a BMI must control a fixed, orthogonal basis set was challenged and a novel active dimension selection (ADS) decoder was explored. ADS utilizes a two stage decoder by using neural signals to both (i) select an active dimension being controlled and (ii) control the velocity along the selected dimension. ADS decoding was tested in a monkey using 16 single units from premotor and primary motor cortex to successfully control a virtual hand avatar to move to eight different postures. Following training with the ADS decoder to control 2, 3, and then 4 dimensions, each emulating a grasp shape of the hand, performance reached 93% correct with a bit rate of 2.4 bits s^-1 for eight targets. Selection of eight targets using ADS control was more efficient, as measured by bit rate, than either full four-dimensional control or computer assisted one-dimensional control. ADS decoding allows a user to quickly and efficiently select different hand postures. This novel decoding scheme represents a potential method to reduce the complexity of high-dimension BMI control of the hand.
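
    The reported information rate can be checked against the standard Wolpaw formula for N equiprobable targets at accuracy P; whether this is the exact definition used in the study is an assumption, and converting bits per selection to bits per second additionally requires the selection time.

```python
import math

def wolpaw_bits_per_selection(n_targets, p_correct):
    """Wolpaw ITR: B = log2 N + P log2 P + (1-P) log2((1-P)/(N-1))."""
    n, p = n_targets, p_correct
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

# Eight targets at 93% correct gives about 2.44 bits per selection:
print(wolpaw_bits_per_selection(8, 0.93))
```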

  10. Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique

    NASA Astrophysics Data System (ADS)

    Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi

    Reducing the power dissipation of LDPC code decoders is a major challenge in applying them to practical digital communication systems. In this paper, we propose a low power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) the intermediate message compression enables the decoder to reduce the required memory capacity and write power dissipation; (ii) a clock-gated, shift-register-based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of these two techniques enables the decoder to reduce power dissipation while maintaining decoding throughput. Simulation results show that the proposed architecture improves power efficiency by up to 52% and 18% compared to decoders based on the overlapped schedule and the rapid convergence schedule without the proposed techniques, respectively.

  11. Hardware Implementation of Serially Concatenated PPM Decoder

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon; Barsoum, Maged; Cheng, Michael; Nakashima, Michael

    2009-01-01

    A prototype decoder for a serially concatenated pulse position modulation (SCPPM) code has been implemented in a field-programmable gate array (FPGA). At the time of this reporting, this is the first known hardware SCPPM decoder. The SCPPM coding scheme, conceived for free-space optical communications with both deep-space and terrestrial applications in mind, is an improvement of several dB over the conventional Reed-Solomon PPM scheme. The design of the FPGA SCPPM decoder is based on a turbo decoding algorithm that requires relatively low computational complexity while delivering error-rate performance within approximately 1 dB of channel capacity. The SCPPM encoder consists of an outer convolutional encoder, an interleaver, an accumulator, and an inner modulation encoder (more precisely, a mapping of bits to PPM symbols). Each code is describable by a trellis (a finite directed graph). The SCPPM decoder consists of an inner soft-in-soft-out (SISO) module, a de-interleaver, an outer SISO module, and an interleaver connected in a loop (see figure). Each SISO module applies the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm to compute a-posteriori bit log-likelihood ratios (LLRs) from a-priori LLRs by traversing the code trellis in forward and backward directions. The SISO modules iteratively refine the LLRs by passing the estimates between one another much like the working of a turbine engine. Extrinsic information (the difference between the a-posteriori and a-priori LLRs) is exchanged rather than the a-posteriori LLRs to minimize undesired feedback. All computations are performed in the logarithmic domain, wherein multiplications are translated into additions, thereby reducing complexity and sensitivity to fixed-point implementation roundoff errors. To lower the required memory for storing channel likelihood data and the amounts of data transfer between the decoder and the receiver, one can discard the majority of channel likelihoods, using only the remainder in the operation of the decoder. This is accomplished in the receiver by transmitting only a subset consisting of the likelihoods that correspond to time slots containing the largest numbers of observed photons during each PPM symbol period. The assumed number of observed photons in the remaining time slots is set to the mean of a noise slot. In low background noise, the selection of a small subset in this manner results in only negligible loss. Other features of the decoder design to reduce complexity and increase speed include (1) quantization of metrics in an efficient procedure chosen to incur no more than a small performance loss and (2) the use of the max-star function that allows sums of exponentials to be computed by simple operations that involve only an addition, a subtraction, and a table lookup. Another prominent feature of the design is a provision for access to interleaver and de-interleaver memory in a single clock cycle, eliminating the multiple clock-cycle latency characteristic of prior interleaver and de-interleaver designs.
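
    The max-star operation mentioned above is the Jacobian logarithm. A minimal sketch of the exact form and of a lookup-table variant of the kind used in hardware follows; the table size and quantization step here are illustrative choices, not the prototype's.

```python
import math

def max_star(a, b):
    """log(exp(a) + exp(b)) = max(a, b) + log(1 + exp(-|a - b|))."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

# Hardware-style variant: the correction term decays quickly with
# |a - b|, so a small table indexed by the quantized difference suffices.
TABLE = [math.log1p(math.exp(-d / 4.0)) for d in range(32)]

def max_star_lut(a, b):
    d = min(int(abs(a - b) * 4), 31)   # quantize |a - b| in steps of 1/4
    return max(a, b) + TABLE[d]
```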

  12. Hybrid and concatenated coding applications.

    NASA Technical Reports Server (NTRS)

    Hofman, L. B.; Odenwalder, J. P.

    1972-01-01

    Results are presented of a study to evaluate the performance and implementation complexity of concatenated and hybrid coding systems for moderate-speed deep-space applications. It is shown that with a total complexity of less than three times that of the basic Viterbi decoder, concatenated coding improves a constraint length 8, rate 1/3 Viterbi decoding system by 1.1 and 2.6 dB at bit error probabilities of 10^-4 and 10^-8, respectively. With a somewhat greater total complexity, the hybrid coding system is shown to obtain a 0.9-dB computational performance improvement over the basic rate 1/3 sequential decoding system. Although substantial, these complexities are much less than those required to achieve the same performance with more complex Viterbi or sequential decoder systems.

  13. High-Speed Soft-Decision Decoding of Two Reed-Muller Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Uehara, Gregory T.

    1996-01-01

    In this research, we have proposed the (64, 40, 8) subcode of the third-order Reed-Muller (RM) code to NASA for high-speed satellite communications. This RM subcode can be used either alone or as an inner code of a concatenated coding system with the NASA standard (255, 233, 33) Reed-Solomon (RS) code as the outer code to achieve high performance (or low bit-error rate) with reduced decoding complexity. It can also be used as a component code in a multilevel bandwidth efficient coded modulation system to achieve reliable bandwidth efficient data transmission. This report will summarize the key progress we have made toward achieving our eventual goal of implementing a decoder system based upon this code. In the first phase of study, we investigated the complexities of various sectionalized trellis diagrams for the proposed (64, 40, 8) RM subcode. We found a specific 8-trellis diagram for this code which requires the least decoding complexity with a high possibility of achieving a decoding speed of 600 M bits per second (Mbps). The combination of a large number of states and a high data rate will be made possible due to the utilization of a high degree of parallelism throughout the architecture. This trellis diagram will be presented and briefly described. In the second phase of study which was carried out through the past year, we investigated circuit architectures to determine the feasibility of VLSI implementation of a high-speed Viterbi decoder based on this 8-section trellis diagram. We began to examine specific design and implementation approaches to implement a fully custom integrated circuit (IC) which will be a key building block for a decoder system implementation. The key results will be presented in this report. This report will be divided into three primary sections. First, we will briefly describe the system block diagram in which the proposed decoder is assumed to be operating and present some of the key architectural approaches being used to implement the system at high speed. Second, we will describe details of the 8-trellis diagram we found to best meet the trade-offs between chip and overall system complexity. The chosen approach implements the trellis for the (64, 40, 8) RM subcode with 32 independent sub-trellises. And third, we will describe results of our feasibility study on the implementation of such an IC chip in CMOS technology to implement one of these sub-trellises.

  14. High-Speed Soft-Decision Decoding of Two Reed-Muller Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Uehara, Gregory T.

    1996-01-01

    In this research, we have proposed the (64, 40, 8) subcode of the third-order Reed-Muller (RM) code to NASA for high-speed satellite communications. This RM subcode can be used either alone or as an inner code of a concatenated coding system with the NASA standard (255, 233, 33) Reed-Solomon (RS) code as the outer code to achieve high performance (or low bit-error rate) with reduced decoding complexity. It can also be used as a component code in a multilevel bandwidth efficient coded modulation system to achieve reliable bandwidth efficient data transmission. This report will summarize the key progress we have made toward achieving our eventual goal of implementing a decoder system based upon this code. In the first phase of study, we investigated the complexities of various sectionalized trellis diagrams for the proposed (64, 40, 8) RM subcode. We found a specific 8-trellis diagram for this code which requires the least decoding complexity with a high possibility of achieving a decoding speed of 600 M bits per second (Mbps). The combination of a large number of states and a high data rate will be made possible due to the utilization of a high degree of parallelism throughout the architecture. This trellis diagram will be presented and briefly described. In the second phase of study, which was carried out through the past year, we investigated circuit architectures to determine the feasibility of VLSI implementation of a high-speed Viterbi decoder based on this 8-section trellis diagram. We began to examine specific design and implementation approaches to implement a fully custom integrated circuit (IC) which will be a key building block for a decoder system implementation. The key results will be presented in this report. This report will be divided into three primary sections. First, we will briefly describe the system block diagram in which the proposed decoder is assumed to be operating, and present some of the key architectural approaches being used to implement the system at high speed. Second, we will describe details of the 8-trellis diagram we found to best meet the trade-offs between chip and overall system complexity. The chosen approach implements the trellis for the (64, 40, 8) RM subcode with 32 independent sub-trellises. And third, we will describe results of our feasibility study on the implementation of such an IC chip in CMOS technology to implement one of these sub-trellises.

  15. Architecture and implementation considerations of a high-speed Viterbi decoder for a Reed-Muller subcode

    NASA Technical Reports Server (NTRS)

    Lin, Shu (Principal Investigator); Uehara, Gregory T.; Nakamura, Eric; Chu, Cecilia W. P.

    1996-01-01

    The (64, 40, 8) subcode of the third-order Reed-Muller (RM) code for high-speed satellite communications is proposed. The RM subcode can be used either alone or as an inner code of a concatenated coding system with the NASA standard (255, 233, 33) Reed-Solomon (RS) code as the outer code to achieve high performance (or low bit-error rate) with reduced decoding complexity. It can also be used as a component code in a multilevel bandwidth efficient coded modulation system to achieve reliable bandwidth efficient data transmission. The progress made toward achieving the goal of implementing a decoder system based upon this code is summarized. The development of the integrated circuit prototype sub-trellis IC, particularly focusing on the design methodology, is addressed.

  16. Efficient Decoding With Steady-State Kalman Filter in Neural Interface Systems

    PubMed Central

    Malik, Wasim Q.; Truccolo, Wilson; Brown, Emery N.; Hochberg, Leigh R.

    2011-01-01

    The Kalman filter is commonly used in neural interface systems to decode neural activity and estimate the desired movement kinematics. We analyze a low-complexity Kalman filter implementation in which the filter gain is approximated by its steady-state form, computed offline before real-time decoding commences. We evaluate its performance using human motor cortical spike train data obtained from an intracortical recording array as part of an ongoing pilot clinical trial. We demonstrate that the standard Kalman filter gain converges to within 95% of the steady-state filter gain in 1.5 ± 0.5 s (mean ± s.d.). The difference in the intended movement velocity decoded by the two filters vanishes within 5 s, with a correlation coefficient of 0.99 between the two decoded velocities over the session length. We also find that the steady-state Kalman filter reduces the computational load (algorithm execution time) for decoding the firing rates of 25 ± 3 single units by a factor of 7.0 ± 0.9. We expect that the gain in computational efficiency will be much higher in systems with larger neural ensembles. The steady-state filter can thus provide substantial runtime efficiency at little cost in terms of estimation accuracy. This far more efficient neural decoding approach will facilitate the practical implementation of future large-dimensional, multisignal neural interface systems. PMID:21078582
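
    A minimal sketch of the offline computation: iterate the discrete Riccati recursion to convergence and keep the limiting gain, after which each decode step is a single fixed-gain update. The matrix names follow the usual state-space convention and are not taken from the paper.

```python
import numpy as np

def steady_state_kalman_gain(A, C, Q, R, tol=1e-9, max_iter=10_000):
    """Return the steady-state gain K = P C' (C P C' + R)^-1 for the
    model x_k = A x_{k-1} + w (cov Q), y_k = C x_k + v (cov R)."""
    P = Q.copy()
    for _ in range(max_iter):
        P_pred = A @ P @ A.T + Q                       # predict
        S = C @ P_pred @ C.T + R                       # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)            # Kalman gain
        P_new = (np.eye(A.shape[0]) - K @ C) @ P_pred  # update
        if np.max(np.abs(P_new - P)) < tol:
            break
        P = P_new
    return K

# Real-time decoding then costs one fixed-gain update per time step:
#   x = A @ x;  x = x + K @ (y - C @ x)
```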

  17. Investigation of Different Constituent Encoders in a Turbo-code Scheme for Reduced Decoder Complexity

    NASA Technical Reports Server (NTRS)

    Kwatra, S. C.

    1998-01-01

    A large number of papers have been published attempting to give some analytical basis for the performance of Turbo-codes. It has been shown that performance improves with increased interleaver length. Procedures have also been given to pick the best constituent recursive systematic convolutional codes (RSCC's). However, testing by computer simulation is still required to verify these results. This thesis begins by describing the encoding and decoding schemes used. Next, simulation results for several memory-4 RSCC's are shown. It is found that the best BER performance at low Eb/No is not given by the RSCC's that were found using the analytic techniques given so far. Next, the results are given from simulations using a smaller memory RSCC for one of the constituent encoders. A significant reduction in decoding complexity is obtained with minimal loss in performance. Simulation results are then given for a rate 1/3 Turbo-code, with the result that this code performed as well as a rate 1/2 Turbo-code as measured by the distance from their respective Shannon limits. Finally, the results of simulations where an inaccurate noise variance measurement was used are given. From this it was observed that Turbo-decoding is fairly stable with regard to noise variance measurement.
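
    For readers unfamiliar with the encoder structure under study, a sketch of a rate-1/2 recursive systematic convolutional encoder follows; the generator polynomials here are illustrative, not the specific memory-4 codes compared in the thesis.

```python
def rsc_encode(bits, g_fb=0o23, g_ff=0o35, memory=4):
    """Rate-1/2 RSC encoder. g_fb is the feedback polynomial (MSB is the
    input tap), g_ff the feedforward polynomial producing the parity bit.
    Returns (systematic bits, parity bits)."""
    state = 0
    mask = (1 << memory) - 1
    sys_out, par_out = [], []
    for u in bits:
        # recursive bit: input XOR feedback taps applied to the state
        a = u ^ (bin(state & (g_fb & mask)).count("1") & 1)
        reg = (a << memory) | state           # full register incl. new bit
        par_out.append(bin(reg & g_ff).count("1") & 1)
        sys_out.append(u)
        state = (reg >> 1) & mask             # shift the recursive bit in
    return sys_out, par_out
```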

  18. A simplified procedure for correcting both errors and erasures of a Reed-Solomon code using the Euclidean algorithm

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Hsu, I. S.; Eastman, W. L.; Reed, I. S.

    1987-01-01

    It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation needed to decode a Reed-Solomon (RS) code. A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained, simultaneously and simply, by the Euclidean algorithm only. With this improved technique the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.
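
    The core of the procedure is the extended Euclidean iteration on x^(2t) and the syndrome polynomial, stopped when the remainder degree drops below t. The sketch below runs over a prime field for readability (a real RS decoder works over GF(2^8)); per the abstract, replacing the initial polynomials with the Forney syndrome and the erasure locator yields the errata version.

```python
P = 929  # prime field stands in for GF(2^8); coefficients lowest degree first

def deg(a):
    return max((i for i, c in enumerate(a) if c), default=-1)

def poly_sub(a, b):
    n = max(len(a), len(b))
    return [((a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0)) % P
            for i in range(n)]

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def poly_divmod(a, b):
    a, db = a[:], deg(b)
    inv = pow(b[db], P - 2, P)                 # inverse of leading coefficient
    q = [0] * max(deg(a) - db + 1, 1)
    while deg(a) >= db:
        s, c = deg(a) - db, (a[deg(a)] * inv) % P
        q[s] = c
        for i in range(db + 1):
            a[i + s] = (a[i + s] - c * b[i]) % P
    return q, a

def solve_key_equation(syndrome, t):
    """Euclidean algorithm on (x^(2t), S(x)); returns (sigma, omega)."""
    r_prev, r_cur = [0] * (2 * t) + [1], syndrome[:]
    u_prev, u_cur = [0], [1]        # multiplier of S(x) in each remainder
    while deg(r_cur) >= t:
        q, rem = poly_divmod(r_prev, r_cur)
        r_prev, r_cur = r_cur, rem
        u_prev, u_cur = u_cur, poly_sub(u_prev, poly_mul(q, u_cur))
    return u_cur, r_cur             # error locator, error evaluator
```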

  19. A four-dimensional virtual hand brain-machine interface using active dimension selection

    NASA Astrophysics Data System (ADS)

    Rouse, Adam G.

    2016-06-01

    Objective. Brain-machine interfaces (BMI) traditionally rely on a fixed, linear transformation from neural signals to an output state-space. In this study, the assumption that a BMI must control a fixed, orthogonal basis set was challenged and a novel active dimension selection (ADS) decoder was explored. Approach. ADS utilizes a two stage decoder by using neural signals to both (i) select an active dimension being controlled and (ii) control the velocity along the selected dimension. ADS decoding was tested in a monkey using 16 single units from premotor and primary motor cortex to successfully control a virtual hand avatar to move to eight different postures. Main results. Following training with the ADS decoder to control 2, 3, and then 4 dimensions, each emulating a grasp shape of the hand, performance reached 93% correct with a bit rate of 2.4 bits s^-1 for eight targets. Selection of eight targets using ADS control was more efficient, as measured by bit rate, than either full four-dimensional control or computer assisted one-dimensional control. Significance. ADS decoding allows a user to quickly and efficiently select different hand postures. This novel decoding scheme represents a potential method to reduce the complexity of high-dimension BMI control of the hand.

  20. A four-dimensional virtual hand brain-machine interface using active dimension selection

    PubMed Central

    Rouse, Adam G.

    2018-01-01

    Objective Brain-machine interfaces (BMI) traditionally rely on a fixed, linear transformation from neural signals to an output state-space. In this study, the assumption that a BMI must control a fixed, orthogonal basis set was challenged and a novel active dimension selection (ADS) decoder was explored. Approach ADS utilizes a two stage decoder by using neural signals to both i) select an active dimension being controlled and ii) control the velocity along the selected dimension. ADS decoding was tested in a monkey using 16 single units from premotor and primary motor cortex to successfully control a virtual hand avatar to move to eight different postures. Main Results Following training with the ADS decoder to control 2, 3, and then 4 dimensions, each emulating a grasp shape of the hand, performance reached 93% correct with a bit rate of 2.4 bits/s for eight targets. Selection of eight targets using ADS control was more efficient, as measured by bit rate, than either full four-dimensional control or computer assisted one-dimensional control. Significance ADS decoding allows a user to quickly and efficiently select different hand postures. This novel decoding scheme represents a potential method to reduce the complexity of high-dimension BMI control of the hand. PMID:27171896

  1. Self-configurable radio receiver system and method for use with signals without prior knowledge of signal defining characteristics

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon (Inventor); Simon, Marvin K. (Inventor); Divsalar, Dariush (Inventor); Dolinar, Samuel J. (Inventor); Tkacenko, Andre (Inventor)

    2013-01-01

    A method, radio receiver, and system to autonomously receive and decode a plurality of signals having a variety of signal types without a priori knowledge of the defining characteristics of the signals are disclosed. The radio receiver is capable of receiving a signal of an unknown signal type and, by estimating one or more defining characteristics of the signal, determining the type of signal. The estimated defining characteristic(s) is/are utilized to enable the receiver to determine other defining characteristics. This, in turn, enables the receiver, through multiple iterations, to make a maximum-likelihood (ML) estimate for each of the defining characteristics. After the type of signal is determined by its defining characteristics, the receiver selects an appropriate decoder from a plurality of decoders to decode the signal.

  2. Viterbi decoding for satellite and space communication.

    NASA Technical Reports Server (NTRS)

    Heller, J. A.; Jacobs, I. M.

    1971-01-01

    Convolutional coding and Viterbi decoding, along with binary phase-shift keyed modulation, are presented as an efficient system for reliable communication on power-limited satellite and space channels. Performance results, obtained theoretically and through computer simulation, are given for optimum short constraint length codes for a range of code constraint lengths and code rates. System efficiency is compared for hard receiver quantization and 4- and 8-level soft quantization. The effects on performance of varying certain parameters relevant to decoder complexity and cost are examined. Quantitative performance degradation due to imperfect carrier phase coherence is evaluated and compared to that of an uncoded system. As an example of decoder performance versus complexity, a recently implemented 2-Mbit/sec constraint length 7 Viterbi decoder is discussed. Finally, a comparison is made between Viterbi and sequential decoding in terms of suitability to various system requirements.
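
    As a concrete miniature of the decoder under discussion, the following sketch implements hard-decision Viterbi decoding for a toy constraint length 3, rate-1/2 code (generators 7 and 5 octal), not the constraint length 7 decoder described above.

```python
import math

G = (0b111, 0b101)     # generators (octal 7, 5), constraint length K = 3
N_STATES = 4           # 2^(K-1)

def branch(u, s):
    """Output bits and next state for input bit u from state s."""
    reg = (u << 2) | s
    return [bin(reg & g).count("1") & 1 for g in G], reg >> 1

def conv_encode(bits):
    s, out = 0, []
    for u in bits:
        o, s = branch(u, s)
        out += o
    return out

def viterbi_decode(rx, n_bits):
    """Hard-decision Viterbi with Hamming branch metrics."""
    metric = [0.0] + [math.inf] * (N_STATES - 1)
    paths = [[] for _ in range(N_STATES)]
    for k in range(n_bits):
        new_m = [math.inf] * N_STATES
        new_p = [None] * N_STATES
        for s in range(N_STATES):
            if metric[s] == math.inf:
                continue
            for u in (0, 1):
                o, ns = branch(u, s)
                m = metric[s] + sum(a != b for a, b in zip(o, rx[2*k:2*k+2]))
                if m < new_m[ns]:                 # add-compare-select
                    new_m[ns], new_p[ns] = m, paths[s] + [u]
        metric, paths = new_m, new_p
    return paths[min(range(N_STATES), key=lambda s: metric[s])]

msg = [1, 0, 1, 1, 0, 0]
rx = conv_encode(msg)
rx[3] ^= 1                                        # inject one channel error
assert viterbi_decode(rx, len(msg)) == msg
```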

  3. Brain-state classification and a dual-state decoder dramatically improve the control of cursor movement through a brain-machine interface

    NASA Astrophysics Data System (ADS)

    Sachs, Nicholas A.; Ruiz-Torres, Ricardo; Perreault, Eric J.; Miller, Lee E.

    2016-02-01

    Objective. It is quite remarkable that brain machine interfaces (BMIs) can be used to control complex movements with fewer than 100 neurons. Success may be due in part to the limited range of dynamical conditions under which most BMIs are tested. Achieving high-quality control that spans these conditions with a single linear mapping will be more challenging. Even for simple reaching movements, existing BMIs must reduce the stochastic noise of neurons by averaging the control signals over time, instead of over the many neurons that normally control movement. This forces a compromise between a decoder with dynamics allowing rapid movement and one that allows postures to be maintained with little jitter. Our current work presents a method for addressing this compromise, which may also generalize to more highly varied dynamical situations, including movements with more greatly varying speed. Approach. We have developed a system that uses two independent Wiener filters as individual components in a single decoder, one optimized for movement, and the other for postural control. We computed an LDA classifier using the same neural inputs. The decoder combined the outputs of the two filters in proportion to the likelihood assigned by the classifier to each state. Main results. We have performed online experiments with two monkeys using this neural-classifier, dual-state decoder, comparing it to a standard, single-state decoder as well as to a dual-state decoder that switched states automatically based on the cursor’s proximity to a target. The performance of both monkeys using the classifier decoder was markedly better than that of the single-state decoder and comparable to the proximity decoder. Significance. We have demonstrated a novel strategy for dealing with the need to make rapid movements while also maintaining precise cursor control when approaching and stabilizing within targets. Further gains can undoubtedly be realized by optimizing the performance of the individual movement and posture decoders.
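
    A minimal sketch of the combination rule follows; variable names are illustrative, with the two weight matrices standing for the trained movement- and posture-optimized Wiener filters and the state probability coming from the LDA classifier on the same neural features.

```python
import numpy as np

def dual_state_velocity(features, w_move, w_post, p_move):
    """Blend movement- and posture-optimized linear decodes in
    proportion to the classifier's likelihood of the movement state."""
    v_move = w_move @ features       # movement-optimized Wiener decode
    v_post = w_post @ features       # posture-optimized Wiener decode
    return p_move * v_move + (1.0 - p_move) * v_post
```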

  4. Brain-state classification and a dual-state decoder dramatically improve the control of cursor movement through a brain-machine interface.

    PubMed

    Sachs, Nicholas A; Ruiz-Torres, Ricardo; Perreault, Eric J; Miller, Lee E

    2016-02-01

    It is quite remarkable that brain machine interfaces (BMIs) can be used to control complex movements with fewer than 100 neurons. Success may be due in part to the limited range of dynamical conditions under which most BMIs are tested. Achieving high-quality control that spans these conditions with a single linear mapping will be more challenging. Even for simple reaching movements, existing BMIs must reduce the stochastic noise of neurons by averaging the control signals over time, instead of over the many neurons that normally control movement. This forces a compromise between a decoder with dynamics allowing rapid movement and one that allows postures to be maintained with little jitter. Our current work presents a method for addressing this compromise, which may also generalize to more highly varied dynamical situations, including movements with more greatly varying speed. We have developed a system that uses two independent Wiener filters as individual components in a single decoder, one optimized for movement, and the other for postural control. We computed an LDA classifier using the same neural inputs. The decoder combined the outputs of the two filters in proportion to the likelihood assigned by the classifier to each state. We have performed online experiments with two monkeys using this neural-classifier, dual-state decoder, comparing it to a standard, single-state decoder as well as to a dual-state decoder that switched states automatically based on the cursor's proximity to a target. The performance of both monkeys using the classifier decoder was markedly better than that of the single-state decoder and comparable to the proximity decoder. We have demonstrated a novel strategy for dealing with the need to make rapid movements while also maintaining precise cursor control when approaching and stabilizing within targets. Further gains can undoubtedly be realized by optimizing the performance of the individual movement and posture decoders.

  5. Circuit Design Approaches for Implementation of a Subtrellis IC for a Reed-Muller Subcode

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Uehara, Gregory T.; Nakamura, Eric B.; Chu, Cecilia W. P.

    1996-01-01

    In this research, we have proposed the (64, 40, 8) subcode of the third-order Reed-Muller (RM) code to NASA for high-speed satellite communications. This RM subcode can be used either alone or as an inner code of a concatenated coding system with the NASA standard (255, 233, 33) Reed-Solomon (RS) code as the outer code to achieve high performance (or low bit-error rate) with reduced decoding complexity. It can also be used as a component code in a multilevel bandwidth efficient coded modulation system to achieve reliable bandwidth efficient data transmission. This report will summarize the key progress we have made toward achieving our eventual goal of implementing a decoder system based upon this code. In the first phase of study, we investigated the complexities of various sectionalized trellis diagrams for the proposed (64, 40, 8) RM subcode. We found a specific 8-trellis diagram for this code which requires the least decoding complexity with a high possibility of achieving a decoding speed of 600 M bits per second (Mbps). The combination of a large number of states and a high data rate will be made possible due to the utilization of a high degree of parallelism throughout the architecture. This trellis diagram will be presented and briefly described. In the second phase of study which was carried out through the past year, we investigated circuit architectures to determine the feasibility of VLSI implementation of a high-speed Viterbi decoder based on this 8-section trellis diagram. We began to examine specific design and implementation approaches to implement a fully custom integrated circuit (IC) which will be a key building block for a decoder system implementation. The key results will be presented in this report. This report will be divided into three primary sections. First, we will briefly describe the system block diagram in which the proposed decoder is assumed to be operating and present some of the key architectural approaches being used to implement the system at high speed. Second, we will describe details of the 8-trellis diagram we found to best meet the trade-offs between chip and overall system complexity. The chosen approach implements the trellis for the (64, 40, 8) RM subcode with 32 independent sub-trellises. And third, we will describe results of our feasibility study on the implementation of such an IC chip in CMOS technology to implement one of these subtrellises.

  6. Circuit Design Approaches for Implementation of a Subtrellis IC for a Reed-Muller Subcode

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Uehara, Gregory T.; Nakamura, Eric B.; Chu, Cecilia W. P.

    1996-01-01

    In this research, we have proposed the (64, 40, 8) subcode of the third-order Reed-Muller (RM) code to NASA for high-speed satellite communications. This RM subcode can be used either alone or as an inner code of a concatenated coding system with the NASA standard (255, 233, 33) Reed-Solomon (RS) code as the outer code to achieve high performance (or low bit-error rate) with reduced decoding complexity. It can also be used as a component code in a multilevel bandwidth efficient coded modulation system to achieve reliable bandwidth efficient data transmission. This report will summarize the key progress we have made toward achieving our eventual goal of implementing a decoder system based upon this code. In the first phase of study, we investigated the complexities of various sectionalized trellis diagrams for the proposed (64, 40, 8) RM subcode. We found a specific 8-trellis diagram for this code which requires the least decoding complexity with a high possibility of achieving a decoding speed of 600 M bits per second (Mbps). The combination of a large number of states and a high data rate will be made possible due to the utilization of a high degree of parallelism throughout the architecture. This trellis diagram will be presented and briefly described. In the second phase of study which was carried out through the past year, we investigated circuit architectures to determine the feasibility of VLSI implementation of a high-speed Viterbi decoder based on this 8-section trellis diagram. We began to examine specific design and implementation approaches to implement a fully custom integrated circuit (IC) which will be a key building block for a decoder system implementation. The key results will be presented in this report. This report will be divided into three primary sections. First, we will briefly describe the system block diagram in which the proposed decoder is assumed to be operating and present some of the key architectural approaches being used to implement the system at high speed. Second, we will describe details of the 8-trellis diagram we found to best meet the trade-offs between chip and overall system complexity. The chosen approach implements the trellis for the (64, 40, 8) RM subcode with 32 independent sub-trellises. And third, we will describe results of our feasibility study on the implementation of such an IC chip in CMOS technology to implement one of these subtrellises.

  7. Toward enhancing the distributed video coder under a multiview video codec framework

    NASA Astrophysics Data System (ADS)

    Lee, Shih-Chieh; Chen, Jiann-Jone; Tsai, Yao-Hong; Chen, Chin-Hua

    2016-11-01

    The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be utilized to shift the encoder complexity to the decoder under the MVV coding framework, denoted multiview distributed video coding (MDVC). We proposed to exploit both inter- and intraview video correlations to enhance the side information (SI) and improve the MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) was proposed to yield a high-quality SI frame for better DVC reconstructed images; (2) the block transform coefficient properties, i.e., DCs and ACs, were exploited to design a priority rate control for the turbo code, such that DVC decoding can be carried out with the fewest parity bits. In comparison, the proposed COMPETE method demonstrated lower time complexity while presenting better reconstructed video quality. Simulations show that the proposed COMPETE reduces the time complexity of MVME by a factor of 1.29 to 2.56 compared to previous hybrid MVME methods, while the peak signal-to-noise ratios (PSNRs) of decoded video improve by 0.2 to 3.5 dB compared to H.264/AVC intracoding.

  8. Iterative Demodulation and Decoding of Non-Square QAM

    NASA Technical Reports Server (NTRS)

    Li, Lifang; Divsalar, Dariush; Dolinar, Samuel

    2004-01-01

    It has been shown that a non-square (NS) 2^(2n+1)-ary (where n is a positive integer) quadrature amplitude modulation [(NS) 2^(2n+1)-QAM] has inherent memory that can be exploited to obtain coding gains. Moreover, it should not be necessary to build new hardware to realize these gains. The present scheme is a product of theoretical calculations directed toward reducing the computational complexity of decoding coded 2^(2n+1)-QAM. In the general case of 2^(2n+1)-QAM, the signal constellation is not square and it is impossible to have independent in-phase (I) and quadrature-phase (Q) mapping and demapping. However, independent I and Q mapping and demapping are desirable for reducing the complexity of computing the log likelihood ratio (LLR) between a bit and a received symbol (such computations are essential operations in iterative decoding). This is because in modulation schemes that include independent I and Q mapping and demapping, each bit of a signal point is involved in only one-dimensional mapping and demapping. As a result, the computation of the LLR is equivalent to that of a one-dimensional pulse amplitude modulation (PAM) system. Therefore, it is desirable to find a signal constellation that enables independent I and Q mapping and demapping for 2^(2n+1)-QAM.
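
    With independent I and Q demapping, each bit LLR reduces to a one-dimensional PAM computation; a minimal sketch of that computation follows, where the levels, labels, and Gaussian noise model are the usual textbook assumptions rather than details from this scheme.

```python
import numpy as np

def pam_bit_llr(y, levels, labels, bit, sigma2):
    """Exact LLR of one bit of a PAM symbol from a real observation y.

    levels : PAM amplitudes, e.g. [-3, -1, 1, 3]
    labels : bit label per level, e.g. Gray [[0,0],[0,1],[1,1],[1,0]]
    bit    : bit position within the label
    """
    metrics = np.exp(-(y - np.asarray(levels, float)) ** 2 / (2 * sigma2))
    num = sum(m for m, lab in zip(metrics, labels) if lab[bit] == 0)
    den = sum(m for m, lab in zip(metrics, labels) if lab[bit] == 1)
    return float(np.log(num / den))
```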

  9. MIMO transmit scheme based on morphological perceptron with competitive learning.

    PubMed

    Valente, Raul Ambrozio; Abrão, Taufik

    2016-08-01

    This paper proposes a new multi-input multi-output (MIMO) transmit scheme aided by an artificial neural network (ANN). The morphological perceptron with competitive learning (MP/CL) concept is deployed as the decision rule in the MIMO detection stage. The proposed MIMO transmission scheme is able to achieve double spectral efficiency; hence, in each time-slot the receiver decodes two symbols at a time instead of one as in the Alamouti scheme. Another advantage of the proposed transmit scheme with the MP/CL-aided detector is its complexity, which is polynomial in the modulation order and becomes linear when the data stream length is greater than the modulation order. The performance of the proposed scheme is compared to traditional MIMO schemes, namely the Alamouti scheme and the maximum-likelihood MIMO (ML-MIMO) detector. The proposed scheme is also evaluated in a scenario with variable channel information along the frame. Numerical results show that the diversity gain of the space-time-coded Alamouti scheme is partially lost, which slightly reduces the bit-error-rate (BER) performance of the proposed MP/CL-NN MIMO scheme. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Buffer management for sequential decoding. [block erasure probability reduction

    NASA Technical Reports Server (NTRS)

    Layland, J. W.

    1974-01-01

    Sequential decoding has been found to be an efficient means of communicating at low undetected error rates from deep space probes, but erasure or computational overflow remains a significant problem. Erasure of a block occurs when the decoder has not finished decoding that block at the time that it must be output. By drawing upon analogies in computer time sharing, this paper develops a buffer-management strategy which reduces the decoder idle time to a negligible level, and therefore improves the erasure probability of a sequential decoder. For a decoder with a speed advantage of ten and a buffer size of ten blocks, operating at an erasure rate of .01, use of this buffer-management strategy reduces the erasure rate to less than .0001.

  11. Simplified microprocessor design for VLSI control applications

    NASA Technical Reports Server (NTRS)

    Cameron, K.

    1991-01-01

    A design technique for microprocessors that combines the simplicity of reduced instruction set computers (RISC's) with the richer instruction sets of complex instruction set computers (CISC's) is presented. These processors utilize the pipelined instruction decode and datapaths common to RISC's. Instruction-invariant data processing sequences, which transparently support complex addressing modes, permit the formulation of simple control circuitry. Compact implementations are possible since neither complicated controllers nor large register sets are required.

  12. Real-time minimal-bit-error probability decoding of convolutional codes

    NASA Technical Reports Server (NTRS)

    Lee, L.-N.

    1974-01-01

    A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.

  13. Real-time minimal bit error probability decoding of convolutional codes

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1973-01-01

    A recursive procedure is derived for decoding of rate R=1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e. fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as in the inner coding system for concatenated coding.

  14. Decoding a wide range of hand configurations from macaque motor, premotor, and parietal cortices.

    PubMed

    Schaffelhofer, Stefan; Agudelo-Toro, Andres; Scherberger, Hansjörg

    2015-01-21

    Despite recent advances in decoding cortical activity for motor control, the development of hand prosthetics remains a major challenge. To reduce the complexity of such applications, higher cortical areas that also represent motor plans rather than just the individual movements might be advantageous. We investigated the decoding of many grip types using spiking activity from the anterior intraparietal (AIP), ventral premotor (F5), and primary motor (M1) cortices. Two rhesus monkeys were trained to grasp 50 objects in a delayed task while hand kinematics and spiking activity from six implanted electrode arrays (total of 192 electrodes) were recorded. Offline, we determined 20 grip types from the kinematic data and decoded these hand configurations and the grasped objects with a simple Bayesian classifier. When decoding from AIP, F5, and M1 combined, the mean accuracy was 50% (using planning activity) and 62% (during motor execution) for predicting the 50 objects (chance level, 2%) and substantially larger when predicting the 20 grip types (planning, 74%; execution, 86%; chance level, 5%). When decoding from individual arrays, objects and grip types could be predicted well during movement planning from AIP (medial array) and F5 (lateral array), whereas M1 predictions were poor. In contrast, predictions during movement execution were best from M1, whereas F5 performed only slightly worse. These results demonstrate for the first time that a large number of grip types can be decoded from higher cortical areas during movement preparation and execution, which could be relevant for future neuroprosthetic devices that decode motor plans. Copyright © 2015 the authors.
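
    A simple stand-in for such a classifier is a Poisson naive-Bayes rule on per-unit spike counts. This sketch illustrates the decoding principle only; all shapes and names are assumptions rather than the paper's exact model.

```python
import numpy as np
from scipy.special import gammaln

def classify_grip(counts, rates, log_priors):
    """counts: (n_units,) spike counts for one trial.
    rates: (n_classes, n_units) mean counts per grip type, from training
    data (assumed strictly positive).
    log_priors: (n_classes,) log prior probability per grip type."""
    counts = np.asarray(counts, float)
    log_lik = (counts * np.log(rates) - rates
               - gammaln(counts + 1.0)).sum(axis=1)   # Poisson log-likelihood
    return int(np.argmax(log_priors + log_lik))
```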

  15. Iterative Code-Aided ML Phase Estimation and Phase Ambiguity Resolution

    NASA Astrophysics Data System (ADS)

    Wymeersch, Henk; Moeneclaey, Marc

    2005-12-01

    As many coded systems operate at very low signal-to-noise ratios, synchronization becomes a very difficult task. In many cases, conventional algorithms will either require long training sequences or result in large BER degradations. By exploiting code properties, these problems can be avoided. In this contribution, we present several iterative maximum-likelihood (ML) algorithms for joint carrier phase estimation and ambiguity resolution. These algorithms operate on coded signals by accepting soft information from the MAP decoder. Issues of convergence and initialization are addressed in detail. Simulation results are presented for turbo codes, and are compared to performance results of conventional algorithms. Performance comparisons are carried out in terms of BER performance and mean square estimation error (MSEE). We show that the proposed algorithm reduces the MSEE and, more importantly, the BER degradation. Additionally, phase ambiguity resolution can be performed without resorting to a pilot sequence, thus improving the spectral efficiency.
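
    The core estimator admits a compact form: given soft symbol expectations from the MAP decoder, the phase update is the argument of the correlation between the received samples and those expectations. A minimal sketch follows; the decoder call in the loop is a hypothetical placeholder.

```python
import numpy as np

def code_aided_phase(rx, soft_symbols):
    """One code-aided phase update: theta = arg(sum_k r_k * conj(E[s_k]))."""
    return float(np.angle(np.sum(rx * np.conj(soft_symbols))))

# Iterated estimation/decoding (sketch; decode() is hypothetical):
#   theta = 0.0
#   for _ in range(n_iter):
#       soft = decode(rx * np.exp(-1j * theta))  # soft symbol expectations
#       theta = code_aided_phase(rx, soft)
```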

  16. The Differential Contributions of Auditory-Verbal and Visuospatial Working Memory on Decoding Skills in Children Who Are Poor Decoders

    ERIC Educational Resources Information Center

    Squires, Katie Ellen

    2013-01-01

    This study investigated the differential contribution of auditory-verbal and visuospatial working memory (WM) on decoding skills in second- and fifth-grade children identified with poor decoding. Thirty-two second-grade students and 22 fifth-grade students completed measures that assessed simple and complex auditory-verbal and visuospatial memory,…

  17. Image transmission system using adaptive joint source and channel decoding

    NASA Astrophysics Data System (ADS)

    Liu, Weiliang; Daut, David G.

    2005-03-01

    In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are then sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder. The log-likelihood ratios (LLR) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. That is, for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding methods can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the non-source-controlled decoding method by up to 5 dB in terms of PSNR for various reconstructed images.
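
    One illustrative reading of the LLR adjustment is sketched below: bits flagged by the JPEG2000 error-resilience checks as correct are reinforced, bits flagged as erroneous are attenuated, and the weighting factor grows as the channel SNR drops. The functional form of the weight here is an assumption, not the paper's design.

```python
import numpy as np

def reweight_llrs(llr, known_ok, known_bad, snr_db, w0=2.0):
    """Adjust channel LLRs using source-decoder feedback (illustrative)."""
    w = w0 / (1.0 + 10 ** (snr_db / 10.0))   # assumed form: larger at low SNR
    out = np.asarray(llr, float).copy()
    out[known_ok] *= (1.0 + w)               # reinforce bits known correct
    out[known_bad] *= max(1.0 - w, 0.0)      # attenuate bits known in error
    return out
```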

  18. Decoding the time-course of object recognition in the human brain: From visual features to categorical decisions.

    PubMed

    Contini, Erika W; Wardle, Susan G; Carlson, Thomas A

    2017-10-01

    Visual object recognition is a complex, dynamic process. Multivariate pattern analysis methods, such as decoding, have begun to reveal how the brain processes complex visual information. Recently, temporal decoding methods for EEG and MEG have offered the potential to evaluate the temporal dynamics of object recognition. Here we review the contribution of M/EEG time-series decoding methods to understanding visual object recognition in the human brain. Consistent with the current understanding of the visual processing hierarchy, low-level visual features dominate decodable object representations early in the time-course, with more abstract representations related to object category emerging later. A key finding is that the time-course of object processing is highly dynamic and rapidly evolving, with limited temporal generalisation of decodable information. Several studies have examined the emergence of object category structure, and we consider to what degree category decoding can be explained by sensitivity to low-level visual features. Finally, we evaluate recent work attempting to link human behaviour to the neural time-course of object processing. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Error Control Coding Techniques for Space and Satellite Communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    2000-01-01

    This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.

  20. An Interactive Concatenated Turbo Coding System

    NASA Technical Reports Server (NTRS)

    Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc

    1999-01-01

    This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.

  1. A low-complexity and high performance concatenated coding scheme for high-speed satellite communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Rhee, Dojun; Rajpal, Sandeep

    1993-01-01

    This report presents a low-complexity and high performance concatenated coding scheme for high-speed satellite communications. In this proposed scheme, the NASA Standard Reed-Solomon (RS) code over GF(2^8) is used as the outer code and the second-order Reed-Muller (RM) code of Hamming distance 8 is used as the inner code. The RM inner code has a very simple trellis structure and is decoded with the soft-decision Viterbi decoding algorithm. It is shown that the proposed concatenated coding scheme achieves an error performance which is comparable to that of the NASA TDRS concatenated coding scheme in which the NASA Standard rate-1/2 convolutional code of constraint length 7 and d_free = 10 is used as the inner code. However, the proposed RM inner code has much smaller decoding complexity, less decoding delay, and much higher decoding speed. Consequently, the proposed concatenated coding scheme is suitable for reliable high-speed satellite communications, and it may be considered as an alternate coding scheme for the NASA TDRS system.

  2. Bounds on Block Error Probability for Multilevel Concatenated Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Moorthy, Hari T.; Stojanovic, Diana

    1996-01-01

    Maximum likelihood decoding of long block codes is not feasible due to its large complexity. Some classes of codes are shown to be decomposable into multilevel concatenated codes (MLCC). For these codes, multistage decoding provides a good trade-off between performance and complexity. In this paper, we derive an upper bound on the probability of block error for MLCC. We use this bound to evaluate the difference in performance for different decompositions of some codes. The examples given show that a significant reduction in complexity can be achieved by increasing the number of decoding stages. The resulting performance degradation varies for different decompositions. A guideline is given for finding good m-level decompositions.

  3. Application of source biasing technique for energy efficient DECODER circuit design: memory array application

    NASA Astrophysics Data System (ADS)

    Gupta, Neha; Parihar, Priyanka; Neema, Vaibhav

    2018-04-01

    Researchers have proposed many circuit techniques to reduce leakage power dissipation in memory cells. To reduce the overall power of a memory system, however, the input circuitry of the memory architecture, i.e. the row and column decoders, must also be addressed. In this research work, a low-leakage, high-speed row and column decoder for memory array application is designed and four new techniques are proposed. Cluster DECODER, body-bias DECODER, source-bias DECODER, and source-coupling DECODER designs are implemented and analyzed for memory array application. Simulations for the comparative analysis of the different DECODER design parameters are performed in 180 nm GPDK technology using the CADENCE tool. Simulation results show that the proposed source-bias DECODER circuit technique decreases leakage current by 99.92% and static energy by 99.92% at a supply voltage of 1.2 V. The proposed circuit also improves dynamic power dissipation by 5.69%, dynamic PDP/EDP by 65.03%, and delay by 57.25% at a 1.2 V supply voltage.

  4. The serial message-passing schedule for LDPC decoding algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Mingshan; Liu, Shanshan; Zhou, Yuan; Jiang, Xue

    2015-12-01

    The conventional message-passing schedule for LDPC decoding algorithms is the so-called flooding schedule. Its disadvantage is that updated messages cannot be used until the next iteration, which reduces the convergence speed. To address this, the layered belief-propagation (LBP) algorithm, based on a serial message-passing schedule, was proposed. This paper briefly introduces the decoding principle of the LBP algorithm and then proposes two improved algorithms: the grouped serial decoding algorithm (Grouped LBP) and the semi-serial decoding algorithm. They improve the decoding speed of LBP while maintaining good decoding performance.
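
    As a sketch of the serial schedule discussed above, the toy layered min-sum decoder below processes one check row at a time and writes its messages back into the running LLRs immediately, so later rows in the same sweep already see updated values (unlike flooding). The parity-check matrix and channel LLRs are invented for illustration.

```python
import numpy as np

# Layered (serial) min-sum sketch: each check row is processed in turn and
# immediately updates the bit LLRs it touches. H and the LLRs are toy values.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])          # tiny parity-check matrix

def layered_min_sum(llr_ch, H, n_iters=5):
    llr = llr_ch.astype(float).copy()        # running posterior LLRs
    msgs = np.zeros(H.shape)                 # check-to-bit messages
    for _ in range(n_iters):
        for row in range(H.shape[0]):        # serial: one layer at a time
            cols = np.flatnonzero(H[row])
            ext = llr[cols] - msgs[row, cols]        # subtract old message
            for j, c in enumerate(cols):             # min-sum row update
                others = np.delete(ext, j)
                msgs[row, c] = np.prod(np.sign(others)) * np.min(np.abs(others))
            llr[cols] = ext + msgs[row, cols]        # immediate write-back
        hard = (llr < 0).astype(int)
        if not np.any(H @ hard % 2):         # all checks satisfied: stop
            return hard
    return (llr < 0).astype(int)

# All-zero codeword sent; the one unreliable bit (negative LLR) is corrected.
print(layered_min_sum(np.array([2.0, 1.5, 2.0, 1.8, -0.4, 2.2]), H))
```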

  5. Extracting duration information in a picture category decoding task using hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Pfeiffer, Tim; Heinze, Nicolai; Frysch, Robert; Deouell, Leon Y.; Schoenfeld, Mircea A.; Knight, Robert T.; Rose, Georg

    2016-04-01

    Objective. Adapting classifiers for the purpose of brain signal decoding is a major challenge in brain-computer-interface (BCI) research. In a previous study we showed in principle that hidden Markov models (HMM) are a suitable alternative to the well-studied static classifiers. However, since we investigated a rather straightforward task, advantages from modeling of the signal could not be assessed. Approach. Here, we investigate a more complex data set in order to find out to what extent HMMs, as a dynamic classifier, can provide useful additional information. We show for a visual decoding problem that besides category information, HMMs can simultaneously decode picture duration without additional training being required. This decoding is based on a strong correlation that we found between picture duration and the behavior of the Viterbi paths. Main results. Decoding accuracies of up to 80% could be obtained for category and duration decoding with a single classifier trained on category information only. Significance. The extraction of multiple types of information using a single classifier enables the processing of more complex problems, while preserving good training results even on small databases. Therefore, it provides a convenient framework for online real-life BCI applications.
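
    The duration readout described above can be pictured with a toy two-state HMM: decode the Viterbi state path, then count how long the path stays in the "stimulus" state. The sketch below uses invented Gaussian emission parameters and is only a schematic of the idea, not the study's classifier.

```python
import numpy as np

# Toy Viterbi decode of a two-state HMM ("baseline" vs "picture on screen");
# the decoded stimulus duration is the length of the stay in state 1.
np.random.seed(0)
A = np.log(np.array([[0.95, 0.05],        # sticky state transitions
                     [0.05, 0.95]]))
means, std = np.array([0.0, 1.0]), 0.5    # Gaussian emissions per state

def viterbi(obs):
    T, S = len(obs), 2
    logp = -0.5 * ((obs[:, None] - means) / std) ** 2   # emission log-likes
    delta = np.full((T, S), -np.inf); psi = np.zeros((T, S), int)
    delta[0] = np.log([0.9, 0.1]) + logp[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + A               # (prev, next) scores
        psi[t] = scores.argmax(0)                        # best predecessor
        delta[t] = scores.max(0) + logp[t]
    path = [delta[-1].argmax()]
    for t in range(T - 1, 0, -1):                        # backtrack
        path.append(psi[t][path[-1]])
    return np.array(path[::-1])

# Signal jumps to ~1.0 while the picture is shown (samples 10..29).
obs = np.r_[np.random.normal(0, .5, 10), np.random.normal(1, .5, 20),
            np.random.normal(0, .5, 10)]
path = viterbi(obs)
print("decoded duration (samples):", int((path == 1).sum()))   # approx. 20
```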

  6. NP-hardness of decoding quantum error-correction codes

    NASA Astrophysics Data System (ADS)

    Hsieh, Min-Hsiu; Le Gall, François

    2011-05-01

    Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy to simplify decoding, since two different errors might not, and need not, be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum code is degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problem, and it suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

  7. A power-efficient communication system between brain-implantable devices and external computers.

    PubMed

    Yao, Ning; Lee, Heung-No; Chang, Cheng-Chun; Sclabassi, Robert J; Sun, Mingui

    2007-01-01

    In this paper, we propose a power-efficient communication system for linking a brain-implantable device to an external system. For battery-powered implantable devices, the processor and transmitter power should be reduced in order to both conserve battery power and reduce the health risks associated with transmission. To accomplish this, a joint source-channel coding/decoding system is devised. Low-density generator matrix (LDGM) codes are used in our system due to their low encoding complexity. The power cost of signal processing within the implantable device is greatly reduced by avoiding explicit source encoding: raw data, which is highly correlated, is transmitted. At the receiver, a Markov chain source correlation model is utilized to approximate and capture the correlation of the raw data. A turbo iterative receiver algorithm is designed which connects the Markov chain source model to the LDGM decoder in a turbo-iterative way. Simulation results show that the proposed system can save 1 to 2.5 dB of transmission power.

  8. Image acquisition system using on sensor compressed sampling technique

    NASA Astrophysics Data System (ADS)

    Gupta, Pravir Singh; Choi, Gwan Seong

    2018-01-01

    Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system and a simplified image sensor pixel design to be used in the system so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings as it not only cuts the raw data rate but also reduces transistor count per pixel; decreases pixel size; increases fill factor; simplifies analog-to-digital converter, JPEG encoder, and JPEG decoder design; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23% to 65%.
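
    The sensor-level compressed sensing step amounts to replacing each pixel block by a handful of random projections. A minimal sketch, with illustrative block size, measurement count, and ±1 sensing matrix (all assumptions, not the paper's design):

```python
import numpy as np

# Block-based CS measurement "on sensor": each BxB block is reduced to
# m < B*B random projections y = Phi @ x before any JPEG-style processing.
rng = np.random.default_rng(0)
B, m = 8, 16                                 # 64 pixels -> 16 measurements
Phi = rng.choice([-1.0, 1.0], size=(m, B * B)) / np.sqrt(m)  # ±1 sensing matrix

def measure_block(block):
    return Phi @ block.reshape(-1)           # compressed samples, 4x fewer

block = rng.integers(0, 256, size=(B, B)).astype(float)
y = measure_block(block)
print(block.size, "pixels ->", y.size, "measurements")
```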

  9. Neural population encoding and decoding of sound source location across sound level in the rabbit inferior colliculus

    PubMed Central

    Delgutte, Bertrand

    2015-01-01

    At lower levels of sensory processing, the representation of a stimulus feature in the response of a neural population can vary in complex ways across different stimulus intensities, potentially changing the amount of feature-relevant information in the response. How higher-level neural circuits could implement feature decoding computations that compensate for these intensity-dependent variations remains unclear. Here we focused on neurons in the inferior colliculus (IC) of unanesthetized rabbits, whose firing rates are sensitive to both the azimuthal position of a sound source and its sound level. We found that the azimuth tuning curves of an IC neuron at different sound levels tend to be linear transformations of each other. These transformations could either increase or decrease the mutual information between source azimuth and spike count with increasing level for individual neurons, yet population azimuthal information remained constant across the absolute sound levels tested (35, 50, and 65 dB SPL), as inferred from the performance of a maximum-likelihood neural population decoder. We harnessed evidence of level-dependent linear transformations to reduce the number of free parameters in the creation of an accurate cross-level population decoder of azimuth. Interestingly, this decoder predicts monotonic azimuth tuning curves, broadly sensitive to contralateral azimuths, in neurons at higher levels in the auditory pathway. PMID:26490292
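
    The maximum-likelihood population decoder referenced above can be illustrated with Poisson spike counts and invented tuning curves: pick the azimuth that maximizes the joint log-likelihood sum_i (k_i log r_i(θ) − r_i(θ)). A toy sketch:

```python
import numpy as np

# Toy ML population decoder: Poisson spike counts with azimuth-dependent
# mean rates; tuning curves and counts are invented for illustration.
azimuths = np.linspace(-90, 90, 37)                   # candidate azimuths (deg)
rng = np.random.default_rng(1)
prefs = rng.uniform(-90, 90, size=20)                 # 20 neurons' preferences
tuning = 2 + 18 * np.exp(-((azimuths[None] - prefs[:, None]) / 40) ** 2)

true_idx = 25                                         # actual source azimuth
counts = rng.poisson(tuning[:, true_idx])             # observed spike counts

# log P(counts | azimuth) up to a constant: sum_i (k_i log r_i - r_i)
loglik = (counts[:, None] * np.log(tuning) - tuning).sum(axis=0)
print("true:", azimuths[true_idx], " decoded:", azimuths[loglik.argmax()])
```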

  10. Design of convolutional tornado code

    NASA Astrophysics Data System (ADS)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environment and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and reduce computational complexity by abrogating the multi-level structure. The simulation results show that cTN code can provide a better packet loss protection performance with lower computation complexity than tTN code.

  11. Optimal decoding in fading channels - A combined envelope, multiple differential and coherent detection approach

    NASA Astrophysics Data System (ADS)

    Makrakis, Dimitrios; Mathiopoulos, P. Takis

    A maximum likelihood sequential decoder for the reception of digitally modulated signals with single or multiamplitude constellations transmitted over a multiplicative, nonselective fading channel is derived. It is shown that its structure consists of a combination of envelope, multiple differential, and coherent detectors. The outputs of each of these detectors are jointly processed by means of an algorithm. This algorithm is presented in a recursive form. The derivation of the new receiver is general enough to accommodate uncoded as well as coded (e.g., trellis-coded) schemes. Performance evaluation results for a reduced-complexity trellis-coded QPSK system have demonstrated that the proposed receiver dramatically reduces the error floors caused by fading. At Eb/N0 = 20 dB the new receiver structure results in bit-error-rate reductions of more than three orders of magnitude compared to a conventional Viterbi receiver, while being reasonably simple to implement.

  12. Word and Person Effects on Decoding Accuracy: A New Look at an Old Question

    PubMed Central

    Gilbert, Jennifer K.; Compton, Donald L.; Kearns, Devin M.

    2011-01-01

    The purpose of this study was to extend the literature on decoding by bringing together two lines of research, namely person and word factors that affect decoding, using a crossed random-effects model. The sample was comprised of 196 English-speaking grade 1 students. A researcher-developed pseudoword list was used as the primary outcome measure. Because grapheme-phoneme correspondence (GPC) knowledge was treated as person and word specific, we are able to conclude that it is neither necessary nor sufficient for a student to know all GPCs in a word before accurately decoding the word. And controlling for word-specific GPC knowledge, students with lower phonemic awareness and slower rapid naming skill have lower predicted probabilities of correct decoding than counterparts with superior skills. By assessing a person-by-word interaction, we found that students with lower phonemic awareness have more difficulty applying knowledge of complex vowel graphemes compared to complex consonant graphemes when decoding unfamiliar words. Implications of the methodology and results are discussed in light of future research. PMID:21743750

  13. Decoding small surface codes with feedforward neural networks

    NASA Astrophysics Data System (ADS)

    Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen

    2018-01-01

    Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that it can reach similar or better decoding performance compared to previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.
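
    The reduction of decoding to classification can be illustrated on a much smaller code than a surface code. The sketch below uses the 3-bit repetition code as a stand-in: a tiny single-layer softmax network, trained in plain numpy, maps each 2-bit syndrome to the error class to correct. This is a hedged toy, not the paper's decoder or code.

```python
import numpy as np

# Decoding as classification: enumerate (syndrome, error-class) pairs for the
# 3-bit repetition code and fit a minimal feedforward (softmax) classifier.
errors = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
def syndrome(e):                 # parity checks q0^q1 and q1^q2
    return np.array([e[0] ^ e[1], e[1] ^ e[2]], float)

X = np.array([syndrome(e) for e in errors])   # 4 distinct syndromes
y = np.arange(4)                              # class = which qubit to flip

W, b = np.zeros((2, 4)), np.zeros(4)
for _ in range(2000):                         # plain gradient descent
    logits = X @ W + b
    p = np.exp(logits); p /= p.sum(1, keepdims=True)
    grad = p.copy(); grad[np.arange(4), y] -= 1
    W -= 0.5 * X.T @ grad; b -= 0.5 * grad.sum(0)

print((X @ W + b).argmax(1))     # expected: [0 1 2 3], one class per syndrome
```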

  14. Bounded-Angle Iterative Decoding of LDPC Codes

    NASA Technical Reports Server (NTRS)

    Dolinar, Samuel; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2009-01-01

    Bounded-angle iterative decoding is a modified version of conventional iterative decoding, conceived as a means of reducing undetected-error rates for short low-density parity-check (LDPC) codes. For a given code, bounded-angle iterative decoding can be implemented by means of a simple modification of the decoder algorithm, without redesigning the code. Bounded-angle iterative decoding is based on a representation of received words and code words as vectors in an n-dimensional Euclidean space (where n is an integer).
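
    The geometric picture suggests a simple acceptance test: map the decoded word to a ±1 signal vector and accept it only when its angle to the received vector is within a bound. A sketch with an arbitrary example bound (the article's actual bound selection is not reproduced here):

```python
import numpy as np

# Bounded-angle acceptance test: reject a decoded word whose angle to the
# received vector exceeds theta_max (value here is an arbitrary example).
def angle_deg(received, codeword_bits):
    s = 1.0 - 2.0 * np.asarray(codeword_bits)        # bit 0 -> +1, 1 -> -1
    cosang = received @ s / (np.linalg.norm(received) * np.linalg.norm(s))
    return np.degrees(np.arccos(np.clip(cosang, -1, 1)))

received = np.array([0.9, 1.2, -0.8, 1.1, 0.7])      # noisy BPSK samples
decoded = [0, 0, 1, 0, 0]                            # iterative-decoder output
theta_max = 30.0                                     # example angle bound
theta = angle_deg(received, decoded)
print(f"angle = {theta:.1f} deg ->", "accept" if theta <= theta_max else "reject")
```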

  15. Decoding-Accuracy-Based Sequential Dimensionality Reduction of Spatio-Temporal Neural Activities

    NASA Astrophysics Data System (ADS)

    Funamizu, Akihiro; Kanzaki, Ryohei; Takahashi, Hirokazu

    Performance of a brain machine interface (BMI) critically depends on the selection of input data because the information embedded in neural activities is highly redundant. In addition, properly selected input data with a reduced dimension lead to improved decoding generalization ability and decreased computational effort, both of which are significant advantages for clinical applications. In the present paper, we propose an algorithm of sequential dimensionality reduction (SDR) that effectively extracts motor/sensory-related spatio-temporal neural activities. The algorithm gradually reduces the input data dimension by dropping neural data spatio-temporally while degrading the decoding accuracy as little as possible. A support vector machine (SVM) was used as the decoder, and tone-induced neural activities in rat auditory cortices were decoded into the test tone frequencies. SDR reduced the input data dimension to a quarter and significantly improved the accuracy of decoding of novel data. Moreover, spatio-temporal neural activity patterns selected by SDR resulted in significantly higher accuracy than high spike rate patterns or conventionally used spatial patterns. These results suggest that the proposed algorithm can improve the generalization ability and decrease the computational effort of decoding.
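
    SDR is, in essence, greedy backward elimination driven by decoding accuracy. The sketch below mimics it with a nearest-centroid classifier standing in for the paper's SVM, on synthetic data in which only one dimension is informative; the tolerance and data are assumptions for illustration.

```python
import numpy as np

# Greedy backward elimination: repeatedly drop the dimension whose removal
# hurts accuracy least, stopping when accuracy would fall below baseline.
rng = np.random.default_rng(2)
n, d = 200, 12
X = rng.normal(size=(n, d)); y = rng.integers(0, 2, n)
X[:, 3] += 2.0 * y                      # only feature 3 is informative

def accuracy(cols):                     # simple train/test split for brevity
    tr, te = slice(0, 150), slice(150, None)
    c0 = X[tr][y[tr] == 0][:, cols].mean(0)
    c1 = X[tr][y[tr] == 1][:, cols].mean(0)
    pred = (np.linalg.norm(X[te][:, cols] - c1, axis=1)
            < np.linalg.norm(X[te][:, cols] - c0, axis=1)).astype(int)
    return (pred == y[te]).mean()

cols, tol = list(range(d)), 0.02
base = accuracy(cols)
while len(cols) > 1:
    scores = [(accuracy([c for c in cols if c != r]), r) for r in cols]
    best_acc, drop = max(scores)
    if best_acc < base - tol:
        break                           # further reduction hurts accuracy
    cols.remove(drop)
print("kept dimensions:", cols)         # typically keeps the informative one
```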

  16. Activity of Tobramycin against Cystic Fibrosis Isolates of Burkholderia cepacia Complex Grown as Biofilms.

    PubMed

    Kennedy, Sarah; Beaudoin, Trevor; Yau, Yvonne C W; Caraher, Emma; Zlosnik, James E A; Speert, David P; LiPuma, John J; Tullis, Elizabeth; Waters, Valerie

    2016-01-01

    Pulmonary infection with Burkholderia cepacia complex in cystic fibrosis (CF) patients is associated with more-rapid lung function decline and earlier death than in CF patients without this infection. In this study, we used confocal microscopy to visualize the effects of various concentrations of tobramycin, achievable with systemic and aerosolized drug administration, on mature B. cepacia complex biofilms, both in the presence and absence of CF sputum. After 24 h of growth, biofilm thickness was significantly reduced by exposure to 2,000 μg/ml of tobramycin for Burkholderia cepacia, Burkholderia multivorans, and Burkholderia vietnamiensis; 200 μg/ml of tobramycin was sufficient to reduce the thickness of Burkholderia dolosa biofilm. With a more mature 48-h biofilm, significant reductions in thickness were seen with tobramycin at concentrations of ≥100 μg/ml for all Burkholderia species. In addition, an increased ratio of dead to live cells was observed in comparison to control with tobramycin concentrations of ≥200 μg/ml for B. cepacia and B. dolosa (24 h) and ≥100 μg/ml for Burkholderia cenocepacia and B. dolosa (48 h). Although sputum significantly increased biofilm thickness, tobramycin concentrations of 1,000 μg/ml were still able to significantly reduce biofilm thickness of all B. cepacia complex species with the exception of B. vietnamiensis. In the presence of sputum, 1,000 μg/ml of tobramycin significantly increased the dead-to-live ratio only for B. multivorans compared to control. In summary, although killing is attenuated, high-dose tobramycin can effectively decrease the thickness of B. cepacia complex biofilms, even in the presence of sputum, suggesting a possible role as a suppressive therapy in CF. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  17. A soft decoding algorithm and hardware implementation for the visual prosthesis based on high order soft demodulation.

    PubMed

    Yang, Yuan; Quan, Nannan; Bu, Jingjing; Li, Xueping; Yu, Ningmei

    2016-09-26

    High-order modulation and demodulation technology can resolve the conflicting frequency requirements of wireless energy transmission and data communication. In order to achieve reliable wireless data communication based on high-order modulation for a visual prosthesis, this work proposes a Reed-Solomon (RS) error-correcting code (ECC) circuit built on differential amplitude and phase shift keying (DAPSK) soft demodulation. First, recognizing that the traditional division-based DAPSK soft demodulation algorithm is too complex for hardware implementation, an improved phase soft demodulation algorithm is put forward to reduce the hardware complexity for the visual prosthesis. Based on this new algorithm, an improved RS soft decoding method is then proposed, in which the combination of the Chase algorithm and hard decoding achieves soft decoding. To meet the requirements of an implantable visual prosthesis, a method is derived for calculating symbol-level reliability as the product of bit reliabilities, which reduces the number of test vectors of the Chase algorithm. The proposed algorithms are verified by MATLAB simulation and FPGA experiments. In the MATLAB simulation, a biological channel attenuation model is added to the ECC circuit. The data rate is 8 Mbps in both the MATLAB simulation and the FPGA experiments. MATLAB simulation results show that the improved phase soft demodulation algorithm proposed in this paper saves hardware resources without losing bit error rate (BER) performance. Compared with the traditional demodulation circuit, the coding gain of the ECC circuit is improved by about 3 dB at the same BER of [Formula: see text]. The FPGA experimental results show that the system can correct data demodulation errors with the wireless coils 3 cm apart; the greater the distance, the higher the BER. A bit error rate analyzer was then used to measure the BER of the demodulation circuit and the RS ECC circuit at different coil distances, and the results show that the RS ECC circuit has about an order of magnitude lower BER than the demodulation circuit alone at the same coil distance. The RS ECC circuit therefore gives the system considerably more reliable communication. The improved phase soft demodulation and soft decoding algorithms proposed in this paper enable data communication that is more reliable than other demodulation systems and provide a useful reference for further study of visual prosthesis systems.
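
    The Chase-plus-hard-decoding structure mentioned above can be sketched generically: flip every combination of the t least-reliable positions, hard-decode each test vector, and keep the candidate with the best soft metric. The `toy_decode` stand-in below is a trivial parity fixer, not an RS decoder.

```python
import itertools
import numpy as np

# Chase-style soft decoding: build test vectors from the t least-reliable
# positions, run an algebraic hard decoder on each, keep the best candidate.
def chase_decode(soft, hard_decode, t=3):
    hard = (soft < 0).astype(int)                  # hard decisions
    weak = np.argsort(np.abs(soft))[:t]            # t least-reliable positions
    best, best_metric = None, np.inf
    for flips in itertools.product([0, 1], repeat=t):
        test = hard.copy()
        test[weak] ^= np.array(flips)              # one test vector
        cand = hard_decode(test)                   # algebraic decoding attempt
        if cand is None:
            continue
        metric = np.sum(np.abs(soft) * (cand != hard))  # analog discrepancy
        if metric < best_metric:
            best, best_metric = cand, metric
    return best

# Toy stand-in decoder: the "code" is the set of even-weight words; fix odd
# parity by flipping the last bit (illustration only, not RS decoding).
def toy_decode(v):
    v = v.copy()
    if v.sum() % 2:
        v[-1] ^= 1
    return v

soft = np.array([2.1, -1.9, 0.2, 1.5, -0.1])
print(chase_decode(soft, toy_decode))
```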

  18. A high speed sequential decoder

    NASA Technical Reports Server (NTRS)

    Lum, H., Jr.

    1972-01-01

    The performance and theory of operation for the High Speed Hard Decision Sequential Decoder are delineated. The decoder is a forward error correction system which is capable of accepting data from binary-phase-shift-keyed and quadriphase-shift-keyed modems at input data rates up to 30 megabits per second. Test results show that the decoder is capable of maintaining a composite error rate of 0.00001 at an input Eb/N0 of 5.6 dB. This performance has been obtained with minimum circuit complexity.

  19. Architecture for time or transform domain decoding of reed-solomon codes

    NASA Technical Reports Server (NTRS)

    Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor); Deutsch, Leslie J. (Inventor); Shao, Howard M. (Inventor)

    1989-01-01

    Two pipeline (255,223) RS decoders, one a time domain decoder and the other a transform domain decoder, use the same first part to develop an errata locator polynomial τ(x) and an errata evaluator polynomial A(x). Both the time domain decoder and the transform domain decoder have a modified GCD that uses an input multiplexer and an output demultiplexer to reduce the number of GCD cells required. The time domain decoder uses a Chien search and polynomial evaluator on the GCD outputs τ(x) and A(x) for the final decoding steps, while the transform domain decoder uses a transform error pattern algorithm operating on τ(x) and the initial syndrome computation S(x), followed by an inverse transform algorithm in sequence for the final decoding steps prior to adding the received RS coded message to produce a decoded output message.

  20. "ON ALGEBRAIC DECODING OF Q-ARY REED-MULLER AND PRODUCT REED-SOLOMON CODES"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SANTHI, NANDAKISHORE

    We consider a list decoding algorithm recently proposed by Pellikaan-Wu for q-ary Reed-Muller codes RM_q(ℓ, m, n) of length n ≤ q^m when ℓ ≤ q. A simple and easily accessible correctness proof is given which shows that this algorithm achieves a relative error-correction radius of τ ≤ 1 − √(ℓq^(m−1)/n). This is an improvement over the earlier proof based on the one-point Algebraic-Geometric decoding method. The described algorithm can be adapted to decode product Reed-Solomon codes. We then propose a new low-complexity recursive algebraic decoding algorithm for product Reed-Solomon codes and Reed-Muller codes. This algorithm achieves a relative error-correction radius of τ ≤ ∏_{i=1}^{m} (1 − √(k_i/q)). This algorithm is then proved to outperform the Pellikaan-Wu algorithm in both complexity and error-correction radius over a wide range of code rates.

  1. Tail Biting Trellis Representation of Codes: Decoding and Construction

    NASA Technical Reports Server (NTRS)

    Shao, Rose Y.; Lin, Shu; Fossorier, Marc

    1999-01-01

    This paper presents two new iterative algorithms for decoding linear codes based on their tail biting trellises, one unidirectional and the other bidirectional. Both algorithms are computationally efficient and achieve virtually optimum error performance with a small number of decoding iterations. They outperform all previous suboptimal decoding algorithms. The bidirectional algorithm also reduces decoding delay. Also presented in the paper is a method for constructing tail biting trellises for linear block codes.

  2. Decoding DNA labels by melting curve analysis using real-time PCR.

    PubMed

    Balog, József A; Fehér, Liliána Z; Puskás, László G

    2017-12-01

    Synthetic DNA has been used as an authentication code for a diverse number of applications. However, existing decoding approaches are based on either DNA sequencing or the determination of DNA length variations. Here, we present a simple alternative protocol for labeling different objects using a small number of short DNA sequences that differ in their melting points. Code amplification and decoding can be done in two steps using quantitative PCR (qPCR). To obtain a DNA barcode with high complexity, we defined 8 template groups, each having 4 different DNA templates, yielding 15^8 (>2.5 billion) combinations of different individual melting temperature (Tm) values and corresponding ID codes. The reproducibility and specificity of the decoding was confirmed by using the most complex template mixture, which had 32 different products in 8 groups with different Tm values. The industrial applicability of our protocol was also demonstrated by labeling a drone with an oil-based paint containing a predefined DNA code, which was then successfully decoded. The method presented here consists of a simple code system based on a small number of synthetic DNA sequences and a cost-effective, rapid decoding protocol using a few qPCR reactions, enabling a wide range of authentication applications.
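
    The size of this code space follows from elementary counting: each group contributes any non-empty subset of its 4 templates, i.e. 2^4 − 1 = 15 distinguishable melting signatures, and the 8 groups are independent. A one-line check of the arithmetic (assuming, as the abstract implies, that an empty group is not used as a code state):

```python
# Each group: 4 templates, any non-empty subset -> 2**4 - 1 = 15 signatures.
per_group = 2**4 - 1
total = per_group**8          # 8 independent groups
print(per_group, total)       # 15 2562890625  (> 2.5 billion)
```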

  3. The ribosome as an optimal decoder: a lesson in molecular recognition.

    PubMed

    Savir, Yonatan; Tlusty, Tsvi

    2013-04-11

    The ribosome is a complex molecular machine that, in order to synthesize proteins, has to decode mRNAs by pairing their codons with matching tRNAs. Decoding is a major determinant of fitness and requires accurate and fast selection of correct tRNAs among many similar competitors. However, it is unclear whether the modern ribosome, and in particular its large conformational changes during decoding, are the outcome of adaptation to its task as a decoder or the result of other constraints. Here, we derive the energy landscape that provides optimal discrimination between competing substrates and thereby optimal tRNA decoding. We show that the measured landscape of the prokaryotic ribosome is sculpted in this way. This model suggests that conformational changes of the ribosome and tRNA during decoding are means to obtain an optimal decoder. Our analysis puts forward a generic mechanism that may be utilized broadly by molecular recognition systems. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. Research on lossless compression of true color RGB image with low time and space complexity

    NASA Astrophysics Data System (ADS)

    Pan, ShuLin; Xie, ChengJun; Xu, Lin

    2008-12-01

    This paper eliminates correlated redundancy in space and energy by using a DWT lifting scheme, and reduces image complexity by using an algebraic transform among the RGB components. An improved Rice coding algorithm is proposed, which presents an enumerating DWT lifting scheme that fits images of any size through image renormalization. The algorithm codes and decodes the pixels of an image without backtracking. It supports LOCO-I and can also be applied to a coder/decoder. Simulation analysis indicates that the proposed method achieves high image compression. Compared with Lossless-JPG, PNG(Microsoft), PNG(Rene), PNG(Photoshop), PNG(Anix PicViewer), PNG(ACDSee), PNG(Ulead Photo Explorer), JPEG2000, PNG(KoDa Inc), SPIHT, and JPEG-LS, the lossless image compression ratio improved by 45%, 29%, 25%, 21%, 19%, 17%, 16%, 15%, 11%, 10.5%, and 10%, respectively, on 24 RGB images provided by KoDa Inc. On a Pentium IV with a 2.20 GHz CPU and 256 MB RAM, the coding speed of the proposed coder is about 21 times that of SPIHT, with an efficiency gain of about 166%; the decoding speed is about 17 times that of SPIHT, with an efficiency gain of about 128%.

  5. Method and system for efficient video compression with low-complexity encoder

    NASA Technical Reports Server (NTRS)

    Chen, Jun (Inventor); He, Dake (Inventor); Sheinin, Vadim (Inventor); Jagmohan, Ashish (Inventor); Lu, Ligang (Inventor)

    2012-01-01

    Disclosed are a method and system for video compression, wherein the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a video decoder, wherein the method for encoding includes the steps of converting a source frame into a space-frequency representation; estimating conditional statistics of at least one vector of space-frequency coefficients; estimating encoding rates based on the said conditional statistics; and applying Slepian-Wolf codes with the said computed encoding rates. The preferred method for decoding includes the steps of: generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector; and performing Slepian-Wolf decoding of at least one source frequency vector based on the generated side-information, the Slepian-Wolf code bits and the encoder statistics.

  6. A Randomized Controlled Trial of Low-Dose Tranexamic Acid versus Placebo to Reduce Red Blood Cell Transfusion During Complex Multilevel Spine Fusion Surgery.

    PubMed

    Carabini, Louanne M; Moreland, Natalie C; Vealey, Ryan J; Bebawy, John F; Koski, Tyler R; Koht, Antoun; Gupta, Dhanesh K; Avram, Michael J

    2018-02-01

    Multilevel spine fusion surgery for adult deformity correction is associated with significant blood loss and coagulopathy. Tranexamic acid reduces blood loss in high-risk surgery, but the efficacy of a low-dose regimen is unknown. Sixty-one patients undergoing multilevel complex spinal fusion with and without osteotomies were randomly assigned to receive low-dose tranexamic acid (10 mg/kg loading dose, then 1 mg·kg⁻¹·hr⁻¹ throughout surgery) or placebo. The primary outcome was the total volume of red blood cells transfused intraoperatively. Thirty-one patients received tranexamic acid, and 30 patients received placebo. Patient demographics, risk of major transfusion, preoperative hemoglobin, and surgical risk of the 2 groups were similar. There was a significant decrease in total volume of red blood cells transfused (placebo group median 1460 mL vs. tranexamic acid group 1140 mL; median difference 463 mL, 95% confidence interval 15 to 914 mL, P = 0.034), with a decrease in cell saver transfusion (placebo group median 490 mL vs. tranexamic acid group 256 mL; median difference 166 mL, 95% confidence interval 0 to 368 mL, P = 0.042). The decrease in packed red blood cell transfusion did not reach statistical significance (placebo group median 1050 mL vs. tranexamic acid group 600 mL; median difference 300 mL, 95% confidence interval 0 to 600 mL, P = 0.097). Our results support the use of low-dose tranexamic acid during complex multilevel spine fusion surgery to decrease total red blood cell transfusion. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. A new LDPC decoding scheme for PDM-8QAM BICM coherent optical communication system

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Zhang, Wen-bo; Xi, Li-xia; Tang, Xian-feng; Zhang, Xiao-guang

    2015-11-01

    A new log-likelihood ratio (LLR) message estimation method is proposed for the polarization-division multiplexing eight quadrature amplitude modulation (PDM-8QAM) bit-interleaved coded modulation (BICM) optical communication system. The formulation of the posterior probability is theoretically analyzed, and a way to reduce the pre-decoding bit error rate (BER) of the low density parity check (LDPC) decoder for PDM-8QAM constellations is presented. Simulation results show that it outperforms the traditional scheme: the new post-decoding BER is reduced to 50% of that of the traditional post-decoding algorithm.
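
    Bit-LLR estimation for a BICM receiver of this kind reduces to summing channel likelihoods over the constellation points whose labels carry a 0 versus a 1 in each bit position. The sketch below does this exactly (via log-sum-exp) for an illustrative 8-point two-ring constellation; the constellation, labeling, and noise level are assumptions, not the paper's PDM-8QAM setup.

```python
import numpy as np

# Exact bit-LLRs for a toy 8-point constellation under AWGN: for each label
# bit, compare log-sum-exp of point likelihoods with that bit = 0 vs = 1.
pts = np.array([a * np.exp(1j * np.pi * k / 2)        # 2 rings x 4 phases
                for a in (1.0, 2.0) for k in range(4)])
labels = np.arange(8)                                  # 3-bit labels 000..111
sigma2 = 0.2

def bit_llrs(y):
    logp = -np.abs(y - pts) ** 2 / sigma2              # per-symbol log-likes
    llrs = []
    for b in range(3):                                 # MSB..LSB of the label
        mask = (labels >> (2 - b)) & 1
        num = np.logaddexp.reduce(logp[mask == 0])     # bit = 0 hypothesis
        den = np.logaddexp.reduce(logp[mask == 1])     # bit = 1 hypothesis
        llrs.append(num - den)
    return llrs                                        # feed these to LDPC BP

print(np.round(bit_llrs(1.8 + 0.1j), 2))               # sample near 2*e^{j0}
```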

  8. Real-time SHVC software decoding with multi-threaded parallel processing

    NASA Astrophysics Data System (ADS)

    Gudumasu, Srinivas; He, Yuwen; Ye, Yan; He, Yong; Ryu, Eun-Seok; Dong, Jie; Xiu, Xiaoyu

    2014-09-01

    This paper proposes a parallel decoding framework for scalable HEVC (SHVC). Various optimization technologies are implemented on the basis of the SHVC reference software SHM-2.0 to achieve real-time decoding speed for the two-layer spatial scalability configuration. SHVC decoder complexity is analyzed with profiling information. The decoding process at each layer and the up-sampling process are designed in parallel and scheduled by a high-level application task manager. Within each layer, multi-threaded decoding is applied to accelerate the layer decoding speed. Entropy decoding, reconstruction, and in-loop processing are pipeline-designed with multiple threads based on groups of coding tree units (CTU). A group of CTUs is treated as a processing unit in each pipeline stage to achieve a better trade-off between parallelism and synchronization. Motion compensation, inverse quantization, and inverse transform modules are further optimized with SSE4 SIMD instructions. Simulations on a desktop with an Intel i7-2600 processor running at 3.4 GHz show that the parallel SHVC software decoder is able to decode 1080p spatial 2x at up to 60 fps (frames per second) and 1080p spatial 1.5x at up to 50 fps for bitstreams generated with the SHVC common test conditions in the JCT-VC standardization group. The decoding performance at various bitrates with different optimization technologies and different numbers of threads is compared in terms of decoding speed and resource usage, including processor and memory.

  9. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).

    PubMed

    Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling

    2018-04-17

    Aimed at a low-energy consumption of Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides compressive encoder and real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so that they only consume a small amount of energy. Experimental results show that the proposed scheme not only has a low encoding and decoding complexity when compared with traditional methods, but it also provides good objective and subjective reconstruction qualities. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.
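
    The real-time linear decoder described above can be learned in closed form: given training pairs of original blocks X and measurements Y, the MMSE projection is W = XYᵀ(YYᵀ)⁻¹, and reconstruction is a single matrix multiply. A sketch with illustrative dimensions and a random sensing model (assumptions, not the paper's learned matrix):

```python
import numpy as np

# Learn a linear MMSE decoder W from (block, measurement) training pairs,
# then reconstruct a new block with one matrix multiply.
rng = np.random.default_rng(3)
n, m, N = 64, 16, 5000                    # block pixels, measurements, samples
Phi = rng.normal(size=(m, n)) / np.sqrt(m)

X = rng.normal(size=(n, N))               # training blocks (columns)
X[8:] *= 0.1                              # crude "sparsity": few strong coeffs
Y = Phi @ X                               # their CS measurements

# MMSE/least-squares solution: W = X Y^T (Y Y^T)^-1  (small ridge for safety)
W = X @ Y.T @ np.linalg.inv(Y @ Y.T + 1e-6 * np.eye(m))

x_new = rng.normal(size=n); x_new[8:] *= 0.1
x_hat = W @ (Phi @ x_new)                 # real-time linear reconstruction
print("relative error:", np.linalg.norm(x_hat - x_new) / np.linalg.norm(x_new))
```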

  10. Nonlinear decoding of a complex movie from the mammalian retina

    PubMed Central

    Deny, Stéphane; Martius, Georg

    2018-01-01

    Retina is a paradigmatic system for studying sensory encoding: the transformation of light into spiking activity of ganglion cells. The inverse problem, where stimulus is reconstructed from spikes, has received less attention, especially for complex stimuli that should be reconstructed “pixel-by-pixel”. We recorded around a hundred neurons from a dense patch in a rat retina and decoded movies of multiple small randomly-moving discs. We constructed nonlinear (kernelized and neural network) decoders that improved significantly over linear results. An important contribution to this was the ability of nonlinear decoders to reliably separate between neural responses driven by locally fluctuating light signals, and responses at locally constant light driven by spontaneous-like activity. This improvement crucially depended on the precise, non-Poisson temporal structure of individual spike trains, which originated in the spike-history dependence of neural responses. We propose a general principle by which downstream circuitry could discriminate between spontaneous and stimulus-driven activity based solely on higher-order statistical structure in the incoming spike trains. PMID:29746463

  11. Multi-level trellis coded modulation and multi-stage decoding

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Wu, Jiantian; Lin, Shu

    1990-01-01

    Several constructions for multi-level trellis codes are presented and many codes with better performance than previously known codes are found. These codes provide a flexible trade-off between coding gain, decoding complexity, and decoding delay. New multi-level trellis coded modulation schemes using generalized set partitioning methods are developed for Quadrature Amplitude Modulation (QAM) and Phase Shift Keying (PSK) signal sets. New rotationally invariant multi-level trellis codes which can be combined with differential encoding to resolve phase ambiguity are presented.

  12. Testing interconnected VLSI circuits in the Big Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Onyszchuk, I. M.

    1991-01-01

    The Big Viterbi Decoder (BVD) is a powerful error-correcting hardware device for the Deep Space Network (DSN), in support of the Galileo and Comet Rendezvous Asteroid Flyby (CRAF)/Cassini Missions. Recently, a prototype was completed and run successfully at 400,000 or more decoded bits per second. This prototype is a complex digital system whose core arithmetic unit consists of 256 identical very large scale integration (VLSI) gate-array chips, 16 on each of 16 identical boards which are connected through a 28-layer, printed-circuit backplane using 4416 wires. Special techniques were developed for debugging, testing, and locating faults inside individual chips, on boards, and within the entire decoder. The methods are based upon hierarchical structure in the decoder, and require that chips or boards be wired themselves as Viterbi decoders. The basic procedure consists of sending a small set of known, very noisy channel symbols through a decoder, and matching observables against values computed by a software simulation. Also, tests were devised for finding open and short-circuited wires which connect VLSI chips on the boards and through the backplane.

  13. Assessing Decoding Ability: The Role of Speed and Accuracy and a New Composite Indicator to Measure Decoding Skill in Elementary Grades

    ERIC Educational Resources Information Center

    Morlini, Isabella; Stella, Giacomo; Scorza, Maristella

    2015-01-01

    Tools for assessing decoding skill in students attending elementary grades are of fundamental importance for guaranteeing an early identification of reading disabled students and reducing both the primary negative effects (on learning) and the secondary negative effects (on the development of the personality) of this disability. This article…

  14. Decoding Ca2+ signals in plants

    NASA Technical Reports Server (NTRS)

    Sathyanarayanan, P. V.; Poovaiah, B. W.

    2004-01-01

    Different input signals create their own characteristic Ca2+ fingerprints. These fingerprints are distinguished by frequency, amplitude, duration, and number of Ca2+ oscillations. Ca2+-binding proteins and protein kinases decode these complex Ca2+ fingerprints through conformational coupling and covalent modifications of proteins. This decoding of signals can lead to a physiological response with or without changes in gene expression. In plants, Ca2+-dependent protein kinases and Ca2+/calmodulin-dependent protein kinases are involved in decoding Ca2+ signals into phosphorylation signals. This review summarizes the elements of conformational coupling and molecular mechanisms of regulation of the two groups of protein kinases by Ca2+ and Ca2+/calmodulin in plants.

  15. An Optimized Three-Level Design of Decoder Based on Nanoscale Quantum-Dot Cellular Automata

    NASA Astrophysics Data System (ADS)

    Seyedi, Saeid; Navimipour, Nima Jafari

    2018-03-01

    Quantum-dot Cellular Automata (QCA) has been considered a potential successor to Complementary Metal-Oxide-Semiconductor (CMOS) because of its inherent advantages. Many QCA-based logic circuits with smaller feature size, improved operating frequency, and lower power consumption than CMOS have been offered. This technology works on the basis of electron interactions inside quantum dots. Given the importance of an optimized decoder in any digital circuit, in this paper we design, implement, and simulate a new 2-to-4 decoder based on QCA with low delay, area, and complexity. The logic functionality of the 2-to-4 decoder is verified using the QCADesigner tool. The results show that the proposed QCA-based decoder has high performance in terms of cell count, covered area, and time delay. Due to the lower clock pulse frequency, the proposed 2-to-4 decoder is helpful for building QCA-based sequential digital circuits with high performance.
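
    For reference, the Boolean function the QCA layout implements is the ordinary 2-to-4 decoder: a one-hot output selected by two address bits.

```python
# Reference model of the 2-to-4 decoder logic (not the QCA cell layout):
# the two address bits select exactly one of four one-hot outputs.
def decoder_2to4(a1: int, a0: int) -> list:
    index = (a1 << 1) | a0
    return [1 if i == index else 0 for i in range(4)]

for a1 in (0, 1):
    for a0 in (0, 1):
        print(a1, a0, "->", decoder_2to4(a1, a0))
```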

  16. A Novel Nonparametric Approach for Neural Encoding and Decoding Models of Multimodal Receptive Fields.

    PubMed

    Agarwal, Rahul; Chen, Zhe; Kloosterman, Fabian; Wilson, Matthew A; Sarma, Sridevi V

    2016-07-01

    Pyramidal neurons recorded from the rat hippocampus and entorhinal cortex, such as place and grid cells, have diverse receptive fields, which are either unimodal or multimodal. Spiking activity from these cells encodes information about the spatial position of a freely foraging rat. At fine timescales, a neuron's spike activity also depends significantly on its own spike history. However, due to limitations of current parametric modeling approaches, it remains a challenge to estimate complex, multimodal neuronal receptive fields while incorporating spike history dependence. Furthermore, efforts to decode the rat's trajectory in one- or two-dimensional space from hippocampal ensemble spiking activity have mainly focused on spike history-independent neuronal encoding models. In this letter, we address these two important issues by extending a recently introduced nonparametric neural encoding framework that allows modeling both complex spatial receptive fields and spike history dependencies. Using this extended nonparametric approach, we develop novel algorithms for decoding a rat's trajectory based on recordings of hippocampal place cells and entorhinal grid cells. Results show that both encoding and decoding models derived from our new method performed significantly better than state-of-the-art encoding and decoding models on 6 minutes of test data. In addition, our model's performance remains invariant to the apparent modality of the neuron's receptive field.

  17. HEVC real-time decoding

    NASA Astrophysics Data System (ADS)

    Bross, Benjamin; Alvarez-Mesa, Mauricio; George, Valeri; Chi, Chi Ching; Mayer, Tobias; Juurlink, Ben; Schierl, Thomas

    2013-09-01

    The new High Efficiency Video Coding Standard (HEVC) was finalized in January 2013. Compared to its predecessor H.264 / MPEG4-AVC, this new international standard is able to reduce the bitrate by 50% for the same subjective video quality. This paper investigates decoder optimizations that are needed to achieve HEVC real-time software decoding on a mobile processor. It is shown that HEVC real-time decoding up to high definition video is feasible using instruction extensions of the processor while decoding 4K ultra high definition video in real-time requires additional parallel processing. For parallel processing, a picture-level parallel approach has been chosen because it is generic and does not require bitstreams with special indication.

  18. Parietal neural prosthetic control of a computer cursor in a graphical-user-interface task

    NASA Astrophysics Data System (ADS)

    Revechkis, Boris; Aflalo, Tyson NS; Kellis, Spencer; Pouratian, Nader; Andersen, Richard A.

    2014-12-01

    Objective. To date, the majority of Brain-Machine Interfaces have been used to perform simple tasks with sequences of individual targets in otherwise blank environments. In this study we developed a more practical and clinically relevant task that approximated modern computers and graphical user interfaces (GUIs). This task could be problematic given the known sensitivity of areas typically used for BMIs to visual stimuli, eye movements, decision-making, and attentional control. Consequently, we sought to assess the effect of a complex, GUI-like task on the quality of neural decoding. Approach. A male rhesus macaque monkey was implanted with two 96-channel electrode arrays in area 5d of the superior parietal lobule. The animal was trained to perform a GUI-like ‘Face in a Crowd’ task on a computer screen that required selecting one cued, icon-like, face image from a group of alternatives (the ‘Crowd’) using a neurally controlled cursor. We assessed whether the crowd affected decodes of intended cursor movements by comparing it to a ‘Crowd Off’ condition in which only the matching target appeared without alternatives. We also examined if training a neural decoder with the Crowd On rather than Off had any effect on subsequent decode quality. Main results. Despite the additional demands of working with the Crowd On, the animal was able to robustly perform the task under Brain Control. The presence of the crowd did not itself affect decode quality. Training the decoder with the Crowd On relative to Off had no negative influence on subsequent decoding performance. Additionally, the subject was able to gaze around freely without influencing cursor position. Significance. Our results demonstrate that area 5d recordings can be used for decoding in a complex, GUI-like task with free gaze. Thus, this area is a promising source of signals for neural prosthetics that utilize computing devices with GUI interfaces, e.g. personal computers, mobile devices, and tablet computers.

  19. Parietal neural prosthetic control of a computer cursor in a graphical-user-interface task.

    PubMed

    Revechkis, Boris; Aflalo, Tyson N S; Kellis, Spencer; Pouratian, Nader; Andersen, Richard A

    2014-12-01

    To date, the majority of Brain-Machine Interfaces have been used to perform simple tasks with sequences of individual targets in otherwise blank environments. In this study we developed a more practical and clinically relevant task that approximated modern computers and graphical user interfaces (GUIs). This task could be problematic given the known sensitivity of areas typically used for BMIs to visual stimuli, eye movements, decision-making, and attentional control. Consequently, we sought to assess the effect of a complex, GUI-like task on the quality of neural decoding. A male rhesus macaque monkey was implanted with two 96-channel electrode arrays in area 5d of the superior parietal lobule. The animal was trained to perform a GUI-like 'Face in a Crowd' task on a computer screen that required selecting one cued, icon-like, face image from a group of alternatives (the 'Crowd') using a neurally controlled cursor. We assessed whether the crowd affected decodes of intended cursor movements by comparing it to a 'Crowd Off' condition in which only the matching target appeared without alternatives. We also examined if training a neural decoder with the Crowd On rather than Off had any effect on subsequent decode quality. Despite the additional demands of working with the Crowd On, the animal was able to robustly perform the task under Brain Control. The presence of the crowd did not itself affect decode quality. Training the decoder with the Crowd On relative to Off had no negative influence on subsequent decoding performance. Additionally, the subject was able to gaze around freely without influencing cursor position. Our results demonstrate that area 5d recordings can be used for decoding in a complex, GUI-like task with free gaze. Thus, this area is a promising source of signals for neural prosthetics that utilize computing devices with GUI interfaces, e.g. personal computers, mobile devices, and tablet computers.

  20. Universal Decoder for PPM of any Order

    NASA Technical Reports Server (NTRS)

    Moision, Bruce E.

    2010-01-01

    A recently developed algorithm for demodulation and decoding of a pulse-position-modulation (PPM) signal is suitable as a basis for designing a single hardware decoding apparatus capable of handling any PPM order. Hence, this algorithm offers advantages of greater flexibility and lower cost, in comparison with prior such algorithms, which necessitate the use of a distinct hardware implementation for each PPM order. In addition, in comparison with the prior algorithms, the present algorithm entails less complexity in decoding at large orders. An unavoidably lengthy presentation of background information, including definitions of terms, is prerequisite to a meaningful summary of this development. As an aid to understanding, the figure illustrates the relevant processes of coding, modulation, propagation, demodulation, and decoding. An M-ary PPM signal has M time slots per symbol period. A pulse (signifying 1) is transmitted during one of the time slots; no pulse (signifying 0) is transmitted during the other time slots. The information intended to be conveyed from the transmitting end to the receiving end of a radio or optical communication channel is a K-bit vector u. This vector is encoded by an (N,K) binary error-correcting code, producing an N-bit vector a. In turn, the vector a is subdivided into blocks of m = log2(M) bits and each such block is mapped to an M-ary PPM symbol. The resultant coding/modulation scheme can be regarded as equivalent to a nonlinear binary code. The binary vector of PPM symbols, x, is transmitted over a Poisson channel, such that there is obtained, at the receiver, a Poisson-distributed photon count characterized by a mean background count nb during no-pulse time slots and a mean signal-plus-background count of ns+nb during a pulse time slot. In the receiver, demodulation of the signal is effected in an iterative soft decoding process that involves consideration of relationships among photon counts and conditional likelihoods of m-bit vectors of coded bits. Inasmuch as the likelihoods of all the m-bit vectors of coded bits mapping to the same PPM symbol are correlated, the best performance is obtained when the joint m-bit conditional likelihoods are utilized. Unfortunately, the complexity of decoding, measured in the number of operations per bit, grows exponentially with m, and can thus become prohibitively expensive for large PPM orders. For a system required to handle multiple PPM orders, the cost is even higher because it is necessary to have separate decoding hardware for each order. This concludes the prerequisite background information. In the present algorithm, the decoding process as described above is modified by, among other things, introduction of an l-bit marginalizer sub-algorithm. The term "l-bit marginalizer" signifies that instead of m-bit conditional likelihoods, the decoder computes l-bit conditional likelihoods, where l is fixed. Fixing l, regardless of the value of m, makes it possible to use a single hardware implementation for any PPM order. One could minimize the decoding complexity and obtain an especially simple design by fixing l at 1, but this would entail some loss of performance. An intermediate solution is to fix l at some value, greater than 1, that may be less than or greater than m. This solution makes it possible to obtain the desired flexibility to handle any PPM order while compromising between complexity and loss of performance.
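
    The slot statistics underlying this demodulation are worth making concrete: for Poisson counts with means nb (no pulse) and ns + nb (pulse), the per-slot log-likelihood ratio is linear in the photon count, k·log(1 + ns/nb) − ns, so the uncoded ML symbol decision is simply the slot with the largest count. A toy numeric sketch with invented parameters:

```python
import numpy as np
from math import log

# Poisson-channel PPM slot statistics: per-slot LLR is linear in the count,
# so the uncoded ML decision is the slot with the maximum photon count.
M, ns, nb = 16, 5.0, 1.0
rng = np.random.default_rng(4)

sent = 9                                       # pulse in slot 9
means = np.full(M, nb); means[sent] += ns
counts = rng.poisson(means)                    # photon counts per slot

# log [P(k | pulse) / P(k | no pulse)] = k*log(1 + ns/nb) - ns
llr_per_slot = counts * log(1 + ns / nb) - ns
print("sent:", sent, " decided:", int(np.argmax(llr_per_slot)))
```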

  1. The Influence of Formulaic Language on L2 Listener Decoding in Extended Discourse

    ERIC Educational Resources Information Center

    Yeldham, Michael

    2018-01-01

    This study investigated the effect of formulaic language on L2 learners' ability to decode words in listening texts. One possibility was that formulas would facilitate listening by reducing the need to process every word of the sequences. However, a contrasting possibility was that the commonly reduced nature of formulaic words would hinder…

  2. Maximum-likelihood soft-decision decoding of block codes using the A* algorithm

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.

    1994-01-01

    The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.
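
    A miniature version of the search is easy to state for a systematic (7,4) Hamming code: assign information bits one at a time, bound each partial assignment by the best possible correlation on the positions not yet fixed (an admissible heuristic), and the first completed codeword popped from the priority queue is maximum-likelihood. This sketch is illustrative, not the article's implementation:

```python
import heapq
import numpy as np

# A* ML decoding of a systematic (7,4) Hamming code: maximize correlation
# sum(y * s) where s = 1 - 2*codeword. The heuristic for unfixed positions is
# the optimistic per-position bound |y_i|, which is admissible.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])              # systematic generator

def astar_ml(y):
    best_possible = np.abs(y)                      # per-position upper bound
    heap = [(-best_possible.sum(), ())]            # node: (-bound, info bits)
    while heap:
        neg_ub, info = heapq.heappop(heap)
        if len(info) == 4:                         # first complete pop is ML
            return np.array(info) @ G % 2
        for b in (0, 1):
            new = info + (b,)
            if len(new) == 4:                      # leaf: exact metric
                cw = np.array(new) @ G % 2
                ub = (y * (1 - 2 * cw)).sum()
            else:                                  # partial: exact + optimistic
                k = len(new)
                s = 1 - 2 * np.array(new)
                ub = (y[:k] * s).sum() + best_possible[k:].sum()
            heapq.heappush(heap, (-ub, new))
    return None

y = np.array([0.8, -1.1, 0.3, 0.9, -0.2, 1.0, -0.7])   # noisy BPSK samples
print(astar_ml(y))   # here the hard-decision word is already a codeword
```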

  3. Highly efficient simulation environment for HDTV video decoder in VLSI design

    NASA Astrophysics Data System (ADS)

    Mao, Xun; Wang, Wei; Gong, Huimin; He, Yan L.; Lou, Jian; Yu, Lu; Yao, Qingdong; Pirsch, Peter

    2002-01-01

    With the increasing complexity of VLSI designs, especially SoC (System on Chip) implementations of MPEG-2 video decoders with HDTV scalability, simulation and verification of the full design, even at the behavioral level in HDL, often prove very slow and costly, and full verification is difficult to perform until late in the design process. Simulation and verification therefore become the bottleneck of HDTV video decoder design and strongly influence its time-to-market. In this paper, the hardware/software interface architecture of an HDTV video decoder is studied, and a Hardware-Software Mixed Simulation (HSMS) platform, based on the MPEG-2 video decoding algorithm, is proposed to detect and correct errors at an early design stage. Several approaches are introduced for applying HSMS to the target system; they speed up the simulation and verification task without decreasing performance.

  4. Scalable printed electronics: an organic decoder addressing ferroelectric non-volatile memory.

    PubMed

    Ng, Tse Nga; Schwartz, David E; Lavery, Leah L; Whiting, Gregory L; Russo, Beverly; Krusor, Brent; Veres, Janos; Bröms, Per; Herlogsson, Lars; Alam, Naveed; Hagel, Olle; Nilsson, Jakob; Karlsson, Christer

    2012-01-01

    Scalable circuits of organic logic and memory are realized using all-additive printing processes. A 3-bit organic complementary decoder is fabricated and used to read and write non-volatile, rewritable ferroelectric memory. The decoder-memory array is patterned by inkjet and gravure printing on flexible plastics. Simulation models for the organic transistors are developed, enabling circuit designs tolerant of the variations in printed devices. We explain the key design rules in fabrication of complex printed circuits and elucidate the performance requirements of materials and devices for reliable organic digital logic.

  5. Markov source model for printed music decoding

    NASA Astrophysics Data System (ADS)

    Kopec, Gary E.; Chou, Philip A.; Maltz, David A.

    1995-03-01

    This paper describes a Markov source model for a simple subset of printed music notation. The model is based on the Adobe Sonata music symbol set and a message language of our own design. Chord imaging is the most complex part of the model. Much of the complexity follows from a rule of music typography that requires the noteheads for adjacent pitches to be placed on opposite sides of the chord stem. This rule leads to a proliferation of cases for other typographic details such as dot placement. We describe the language of message strings accepted by the model and discuss some of the imaging issues associated with various aspects of the message language. We also point out some aspects of music notation that appear problematic for a finite-state representation. Development of the model was greatly facilitated by the duality between image synthesis and image decoding. Although our ultimate objective was a music image model for use in decoding, most of the development proceeded by using the evolving model for image synthesis, since it is computationally far less costly to image a message than to decode an image.

  6. Iterative deep convolutional encoder-decoder network for medical image segmentation.

    PubMed

    Jung Uk Kim; Hak Gu Kim; Yong Man Ro

    2017-07-01

    In this paper, we propose a novel medical image segmentation method using an iterative deep learning framework. We combine an iterative learning approach with an encoder-decoder network to improve segmentation results, which enables precise localization of regions of interest (ROIs), including complex shapes or detailed textures in medical images, in an iterative manner. The proposed iterative deep convolutional encoder-decoder network consists of two main paths: a convolutional encoder path and a convolutional decoder path with iterative learning. Experimental results show that the proposed iterative deep learning framework yields excellent segmentation performance on a variety of medical images. The effectiveness of the proposed method is demonstrated by comparison with other state-of-the-art medical image segmentation methods.

  7. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L.-N.

    1977-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  8. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1976-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  9. Low Cost SoC Design of H.264/AVC Decoder for Handheld Video Player

    NASA Astrophysics Data System (ADS)

    Wisayataksin, Sumek; Li, Dongju; Isshiki, Tsuyoshi; Kunieda, Hiroaki

    We propose a low-cost, stand-alone, platform-based SoC for an H.264/AVC decoder targeting practical mobile applications such as handheld video players. Both low cost and stand-alone operation are particularly emphasized. The SoC, consisting of a RISC core and a decoder core, has advantages in flexibility, testability and the variety of its I/O interfaces. For the decoder core, the proposed H.264/AVC coprocessor employs a new block pipelining scheme instead of a conventional macroblock or hybrid one, which drastically reduces the size of the core and its pipelining buffer. In addition, the decoder schedule is optimized at the block level, which is easy to program. The core size is reduced to 138 KGates with 3.5 kbytes of memory. In our practical development, a single external SDRAM is sufficient for both the reference frame buffer and the display buffer. Various peripheral interfaces, such as compact flash, a digital broadcast receiver and an LCD driver, are also provided on chip.

  10. Low-power hardware implementation of movement decoding for brain computer interface with reduced-resolution discrete cosine transform.

    PubMed

    Minho Won; Albalawi, Hassan; Xin Li; Thomas, Donald E

    2014-01-01

    This paper describes a low-power hardware implementation for movement decoding of brain computer interface. Our proposed hardware design is facilitated by two novel ideas: (i) an efficient feature extraction method based on reduced-resolution discrete cosine transform (DCT), and (ii) a new hardware architecture of dual look-up table to perform discrete cosine transform without explicit multiplication. The proposed hardware implementation has been validated for movement decoding of electrocorticography (ECoG) signal by using a Xilinx FPGA Zynq-7000 board. It achieves more than 56× energy reduction over a reference design using band-pass filters for feature extraction.
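
    As a rough illustration of the multiplication-free idea (not the paper's actual dual look-up-table architecture), the sketch below precomputes every product of a DCT basis coefficient with each of the 16 possible 4-bit input levels, so the transform reduces to lookups and additions; the sizes and names are hypothetical:

    ```python
    import numpy as np

    N, BITS = 8, 4                      # 8-point DCT, 4-bit (reduced-resolution) inputs
    LEVELS = 2 ** BITS
    # Product table: basis coefficient cos(pi*(n + 0.5)*u/N) times each input level.
    uu = np.arange(N)[:, None]
    nn = np.arange(N)[None, :]
    COS = np.cos(np.pi * (nn + 0.5) * uu / N)
    LUT = COS[:, :, None] * np.arange(LEVELS)[None, None, :]   # shape (N, N, LEVELS)

    def dct_lut(x):
        """Unnormalized DCT-II of x (values pre-scaled to 0..15): lookups and adds only."""
        q = np.clip(np.round(x).astype(int), 0, LEVELS - 1)
        return np.array([sum(LUT[k, i, q[i]] for i in range(N)) for k in range(N)])

    print(np.round(dct_lut(np.array([3, 7, 2, 15, 9, 0, 4, 11])), 2))
    ```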

  11. Simultaneous real-time monitoring of multiple cortical systems.

    PubMed

    Gupta, Disha; Jeremy Hill, N; Brunner, Peter; Gunduz, Aysegul; Ritaccio, Anthony L; Schalk, Gerwin

    2014-10-01

    Real-time monitoring of the brain is potentially valuable for performance monitoring, communication, training or rehabilitation. In natural situations, the brain performs a complex mix of various sensory, motor or cognitive functions. Thus, real-time brain monitoring would be most valuable if (a) it could decode information from multiple brain systems simultaneously, and (b) this decoding of each brain system were robust to variations in the activity of other (unrelated) brain systems. Previous studies showed that it is possible to decode some information from different brain systems in retrospect and/or in isolation. In our study, we set out to determine whether it is possible to simultaneously decode important information about a user from different brain systems in real time, and to evaluate the impact of concurrent activity in different brain systems on decoding performance. We study these questions using electrocorticographic signals recorded in humans. We first document procedures for generating stable decoding models given little training data, and then report their use for offline and for real-time decoding from 12 subjects (six for offline parameter optimization, six for online experimentation). The subjects engage in tasks that involve movement intention, movement execution and auditory functions, separately, and then simultaneously. Main Results: Our real-time results demonstrate that our system can identify intention and movement periods in single trials with an accuracy of 80.4% and 86.8%, respectively (where 50% would be expected by chance). Simultaneously, the decoding of the power envelope of an auditory stimulus resulted in an average correlation coefficient of 0.37 between the actual and decoded power envelopes. These decoders were trained separately and executed simultaneously in real time. This study yielded the first demonstration that it is possible to decode simultaneously the functional activity of multiple independent brain systems. Our comparison of univariate and multivariate decoding strategies, and our analysis of the influence of their decoding parameters, provides benchmarks and guidelines for future research on this topic.

  12. Simultaneous Real-Time Monitoring of Multiple Cortical Systems

    PubMed Central

    Gupta, Disha; Hill, N. Jeremy; Brunner, Peter; Gunduz, Aysegul; Ritaccio, Anthony L.; Schalk, Gerwin

    2014-01-01

    Objective Real-time monitoring of the brain is potentially valuable for performance monitoring, communication, training or rehabilitation. In natural situations, the brain performs a complex mix of various sensory, motor, or cognitive functions. Thus, real-time brain monitoring would be most valuable if (a) it could decode information from multiple brain systems simultaneously, and (b) this decoding of each brain system were robust to variations in the activity of other (unrelated) brain systems. Previous studies showed that it is possible to decode some information from different brain systems in retrospect and/or in isolation. In our study, we set out to determine whether it is possible to simultaneously decode important information about a user from different brain systems in real time, and to evaluate the impact of concurrent activity in different brain systems on decoding performance. Approach We study these questions using electrocorticographic (ECoG) signals recorded in humans. We first document procedures for generating stable decoding models given little training data, and then report their use for offline and for real-time decoding from 12 subjects (6 for offline parameter optimization, 6 for online experimentation). The subjects engage in tasks that involve movement intention, movement execution and auditory functions, separately, and then simultaneously. Main results Our real-time results demonstrate that our system can identify intention and movement periods in single trials with an accuracy of 80.4% and 86.8%, respectively (where 50% would be expected by chance). Simultaneously, the decoding of the power envelope of an auditory stimulus resulted in an average correlation coefficient of 0.37 between the actual and decoded power envelope. These decoders were trained separately and executed simultaneously in real time. Significance This study yielded the first demonstration that it is possible to decode simultaneously the functional activity of multiple independent brain systems. Our comparison of univariate and multivariate decoding strategies, and our analysis of the influence of their decoding parameters, provides benchmarks and guidelines for future research on this topic. PMID:25080161

  13. A novel decoding algorithm based on the hierarchical reliable strategy for SCG-LDPC codes in optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Tong, Qing-zhen; Huang, Sheng; Wang, Yong

    2013-11-01

    An effective hierarchical reliable belief propagation (HRBP) decoding algorithm is proposed according to the structural characteristics of systematically constructed Gallager low-density parity-check (SCG-LDPC) codes. The novel decoding algorithm combines layered iteration with a reliability judgment, which greatly reduces the number of variable nodes involved in subsequent iterations and accelerates the convergence rate. Simulation results for the SCG-LDPC(3969,3720) code show that the HRBP decoding algorithm greatly reduces the computational load while maintaining performance compared with the traditional belief propagation (BP) algorithm. The bit error rate (BER) of the HRBP algorithm is acceptable at a threshold value of 15, and in the subsequent iterations the number of variable nodes can be reduced by about 70% at high signal-to-noise ratio (SNR) compared with the BP algorithm. When the threshold value is increased further, the HRBP algorithm gradually degenerates into the layered-BP algorithm, but at a BER of 10^-7 and a maximum of 30 iterations, the net coding gain (NCG) of the HRBP algorithm is 0.2 dB higher than that of the BP algorithm, and the average number of iterations can be reduced by about 40% at high SNR. The novel HRBP decoding algorithm is therefore well suited to optical communication systems.
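
    A loose sketch of the reliability-judgment idea on a toy parity-check matrix: a flooding min-sum decoder that freezes variable nodes whose posterior |LLR| exceeds a threshold and excludes them from later updates. The layered scheduling and the actual SCG-LDPC(3969,3720) structure are not reproduced, and all names are our own:

    ```python
    import numpy as np

    H = np.array([[1, 1, 0, 1, 0, 0],      # toy parity-check matrix, a stand-in
                  [0, 1, 1, 0, 1, 0],      # for the real SCG-LDPC code
                  [1, 0, 0, 0, 1, 1],
                  [0, 0, 1, 1, 0, 1]])

    def min_sum_freeze(llr, H, threshold=15.0, max_iter=30):
        m, n = H.shape
        edges = [(c, v) for c in range(m) for v in range(n) if H[c, v]]
        V = {(c, v): float(llr[v]) for (c, v) in edges}  # variable-to-check messages
        C = {e: 0.0 for e in edges}                      # check-to-variable messages
        frozen = np.zeros(n, dtype=bool)
        hard = (np.asarray(llr) < 0).astype(int)
        for _ in range(max_iter):
            for c in range(m):                           # check-node update (min-sum)
                vs = [v for v in range(n) if H[c, v]]
                for v in vs:
                    others = [V[(c, w)] for w in vs if w != v]
                    C[(c, v)] = np.prod(np.sign(others)) * min(abs(x) for x in others)
            post = np.asarray(llr, dtype=float).copy()   # posterior LLRs
            for (c, v) in edges:
                post[v] += C[(c, v)]
            hard = (post < 0).astype(int)
            if not (H @ hard % 2).any():                 # all parity checks satisfied
                break
            frozen |= np.abs(post) > threshold           # reliable nodes leave the iteration
            for (c, v) in edges:                         # variable update, skipping frozen
                if not frozen[v]:
                    V[(c, v)] = post[v] - C[(c, v)]
        return hard

    print(min_sum_freeze(np.array([2.5, -0.8, 3.1, 1.2, -2.0, 0.5]), H))
    ```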

  14. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; A Recursive Maximum Likelihood Decoding

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    The Viterbi algorithm is indeed a very simple and efficient method of implementing the maximum likelihood decoding. However, if we take advantage of the structural properties in a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis, only some special one-section trellises of relatively small state and branch complexities are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table which contains only the most likely code-word and its metric for a given received sequence r = (r(sub 1), r(sub 2),...,r(sub n)). This algorithm basically uses the divide and conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.

  15. Defining the Role of Autophagy Kinase ULK1 Signaling in Therapeutic Response of Tuberous Sclerosis Complex to mTOR Inhibitors

    DTIC Science & Technology

    2015-04-01

    We and others have recently decoded a major conserved route that mTORC1 uses to control autophagy: these studies demonstrate that mTORC1 inactivates another kinase complex, the autophagy-initiating ULK1 complex. The aims of this project are 1) to define the role of ULK1 signaling in the therapeutic response of tuberous sclerosis complex to mTOR inhibition, and 2) to further explore the use of novel small-molecule inhibitors of ULK1 to synergize with mTOR inhibitors to induce cell death.

  16. On accuracy, privacy, and complexity in the identification problem

    NASA Astrophysics Data System (ADS)

    Beekhof, F.; Voloshynovskiy, S.; Koval, O.; Holotyak, T.

    2010-02-01

    This paper presents recent advances in the identification problem taking into account the accuracy, complexity and privacy leak of different decoding algorithms. Using a model of different actors from literature, we show that it is possible to use more accurate decoding algorithms using reliability information without increasing the privacy leak relative to algorithms that only use binary information. Existing algorithms from literature have been modified to take advantage of reliability information, and we show that a proposed branch-and-bound algorithm can outperform existing work, including the enhanced variants.

  17. A 1-1/2-level on-chip-decoding bubble memory chip design

    NASA Technical Reports Server (NTRS)

    Chen, T. T.

    1975-01-01

    The design includes a multi-channel replicator that reduces the chip-writing requirement, a selective annihilating switch that effectively annihilates bubbles with minimum delay, and a modified transfer switch that can be used as a selective steering-type decoder.

  18. Fast and Flexible Successive-Cancellation List Decoders for Polar Codes

    NASA Astrophysics Data System (ADS)

    Hashemi, Seyyed Ali; Condo, Carlo; Gross, Warren J.

    2017-11-01

    Polar codes have gained a significant amount of attention during the past few years and have been selected as a coding scheme for the next generation of mobile broadband standards. Among decoding schemes, successive-cancellation list (SCL) decoding provides a reasonable trade-off between error-correction performance and hardware implementation complexity when used to decode polar codes, at the cost of limited throughput. The simplified SCL (SSCL) and its extension SSCL-SPC increase the speed of decoding by removing redundant calculations when encountering particular information and frozen bit patterns (rate-one and single parity check codes), while keeping the error-correction performance unaltered. In this paper, we improve SSCL and SSCL-SPC by proving that the list size imposes a specific number of bit estimations required to decode rate-one and single parity check codes. Thus, the number of estimations can be limited while guaranteeing exactly the same error-correction performance as if all bits of the code were estimated. We call the new decoding algorithms Fast-SSCL and Fast-SSCL-SPC. Moreover, we show that the number of bit estimations in a practical application can be tuned to achieve a desirable speed, while keeping the error-correction performance almost unchanged. Hardware architectures implementing both algorithms are then described and implemented: we show that our design can achieve a throughput of 1.86 Gb/s, higher than the best state-of-the-art decoders.

  19. Scalable printed electronics: an organic decoder addressing ferroelectric non-volatile memory

    PubMed Central

    Ng, Tse Nga; Schwartz, David E.; Lavery, Leah L.; Whiting, Gregory L.; Russo, Beverly; Krusor, Brent; Veres, Janos; Bröms, Per; Herlogsson, Lars; Alam, Naveed; Hagel, Olle; Nilsson, Jakob; Karlsson, Christer

    2012-01-01

    Scalable circuits of organic logic and memory are realized using all-additive printing processes. A 3-bit organic complementary decoder is fabricated and used to read and write non-volatile, rewritable ferroelectric memory. The decoder-memory array is patterned by inkjet and gravure printing on flexible plastics. Simulation models for the organic transistors are developed, enabling circuit designs tolerant of the variations in printed devices. We explain the key design rules in fabrication of complex printed circuits and elucidate the performance requirements of materials and devices for reliable organic digital logic. PMID:22900143

  20. An architecture of entropy decoder, inverse quantiser and predictor for multi-standard video decoding

    NASA Astrophysics Data System (ADS)

    Liu, Leibo; Chen, Yingjie; Yin, Shouyi; Lei, Hao; He, Guanghui; Wei, Shaojun

    2014-07-01

    A VLSI architecture for an entropy decoder, inverse quantiser and predictor is proposed in this article. The architecture decodes video streams of three standards on a single chip: H.264/AVC, AVS (China National Audio Video coding Standard) and MPEG2. The proposed scheme, called MPMP (Macro-block-Parallel based Multilevel Pipeline), is intended to improve decoding performance to satisfy real-time requirements while maintaining reasonable area and power consumption. Several techniques, such as a slice-level pipeline, an MB (Macro-Block) level pipeline and MB-level parallelism, are adopted. Input and output buffers for the inverse quantiser and predictor are shared by the decoding engines for H.264, AVS and MPEG2, effectively reducing the implementation overhead. Simulation shows that the decoding process consumes 512, 435 and 438 clock cycles per MB in H.264, AVS and MPEG2, respectively. Owing to the proposed techniques, the video decoder can support H.264 HP (High Profile) 1920 × 1088@30fps (frames per second) streams, AVS JP (Jizhun Profile) 1920 × 1088@41fps streams and MPEG2 MP (Main Profile) 1920 × 1088@39fps streams at a 200 MHz working frequency.

  1. Decoder calibration with ultra small current sample set for intracortical brain-machine interface

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Ma, Xuan; Chen, Luyao; Zhou, Jin; Wang, Changyong; Li, Wei; He, Jiping

    2018-04-01

    Objective. Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability for paralyzed patients. However, frequent recalibration is required for consistency and reliability, and each recalibration requires a relatively large current sample set. The aim of this study is to develop an effective decoder calibration method that achieves good performance while minimizing recalibration time. Approach. Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on a movement and a sensory paradigm. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra-small current sample set by taking advantage of large historical data, and the decoding performance was compared with three other calibration methods for evaluation. Main results. The PDA method closed the gap between historical and current data effectively, making it possible to exploit large historical data for decoder recalibration when decoding current data. Using only an ultra-small current sample set (five trials of each category), the decoder calibrated with the PDA method achieved much better and more robust performance in all sessions than the other three calibration methods in both monkeys. Significance. (1) This study brings transfer learning theory into iBMI decoder calibration for the first time. (2) Unlike most transfer learning studies, the target data in this study were an ultra-small sample set and were transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to be effective in reducing recalibration time for both a movement paradigm and a sensory paradigm, indicating viable generalization. By reducing the demand for large current training data, this new method may facilitate the application of intracortical brain-machine interfaces in clinical practice.
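
    The abstract does not spell out the PDA algorithm, so the following is only a hedged illustration of the general recipe: learn a PCA subspace from abundant historical data, project the ultra-small current sample set into it, crudely re-center it onto the historical cloud, and train the decoder on the pooled data. All shapes and names below are hypothetical:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def calibrate(hist_X, hist_y, cur_X, cur_y, n_components=20):
        pca = PCA(n_components=n_components).fit(hist_X)   # subspace from history
        Z_hist, Z_cur = pca.transform(hist_X), pca.transform(cur_X)
        # Re-center the current trials onto the historical cloud to reduce the
        # session-to-session shift before pooling (a crude domain adaptation,
        # not the paper's actual PDA procedure).
        Z_cur = Z_cur - Z_cur.mean(axis=0) + Z_hist.mean(axis=0)
        clf = LinearDiscriminantAnalysis().fit(
            np.vstack([Z_hist, Z_cur]), np.concatenate([hist_y, cur_y]))
        return pca, clf

    rng = np.random.default_rng(0)
    hist_X, hist_y = rng.normal(size=(400, 96)), rng.integers(0, 4, 400)
    cur_X, cur_y = rng.normal(size=(20, 96)), rng.integers(0, 4, 20)  # 5 trials/class
    pca, clf = calibrate(hist_X, hist_y, cur_X, cur_y)
    ```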

  2. Distributed Coding of Compressively Sensed Sources

    NASA Astrophysics Data System (ADS)

    Goukhshtein, Maxim

    In this work we propose a new method for compressing multiple correlated sources with a very low-complexity encoder in the presence of side information. Our approach uses ideas from compressed sensing and distributed source coding. At the encoder, syndromes of the quantized compressively sensed sources are generated and transmitted. The decoder uses side information to predict the compressed sources. The predictions are then used to recover the quantized measurements via a two-stage decoding process consisting of bitplane prediction and syndrome decoding. Finally, guided by the structure of the sources and the side information, the sources are reconstructed from the recovered measurements. As a motivating example, we consider the compression of multispectral images acquired on board satellites, where resources, such as computational power and memory, are scarce. Our experimental results exhibit a significant improvement in the rate-distortion trade-off when compared against approaches with similar encoder complexity.
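
    The syndrome step can be illustrated with a toy (7,4) Hamming code, assuming the decoder's side-information prediction differs from the source word in at most one bit; the real scheme uses bitplane prediction and stronger codes, so this is only a sketch of the coset idea:

    ```python
    import numpy as np

    H = np.array([[1, 1, 0, 1, 1, 0, 0],   # parity-check matrix; columns distinct
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])

    def encode(x):
        """Transmit 3 syndrome bits instead of the 7 source bits."""
        return H @ x % 2

    def decode(syndrome, prediction):
        """Correct the side-information prediction to the coset of the syndrome."""
        diff = (H @ prediction + syndrome) % 2       # syndrome of the error pattern
        if diff.any():                               # locate the single differing bit
            err = int(np.where((H.T == diff).all(axis=1))[0][0])
            prediction = prediction.copy()
            prediction[err] ^= 1
        return prediction

    x = np.array([1, 0, 1, 1, 0, 0, 1])
    p = x.copy(); p[2] ^= 1                          # prediction wrong in one bit
    assert (decode(encode(x), p) == x).all()
    ```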

  3. Word Decoding Development during Phonics Instruction in Children at Risk for Dyslexia.

    PubMed

    Schaars, Moniek M H; Segers, Eliane; Verhoeven, Ludo

    2017-05-01

    In the present study, we examined the early word decoding development of 73 children at genetic risk of dyslexia and 73 matched controls. We conducted monthly curriculum-embedded word decoding measures during the first 5 months of phonics-based reading instruction followed by standardized word decoding measures halfway and by the end of first grade. In kindergarten, vocabulary, phonological awareness, lexical retrieval, and verbal and visual short-term memory were assessed. The results showed that the children at risk were less skilled in phonemic awareness in kindergarten. During the first 5 months of reading instruction, children at risk were less efficient in word decoding and the discrepancy increased over the months. In subsequent months, the discrepancy prevailed for simple words but increased for more complex words. Phonemic awareness and lexical retrieval predicted the reading development in children at risk and controls to the same extent. It is concluded that children at risk are behind their typical peers in word decoding development starting from the very beginning. Furthermore, it is concluded that the disadvantage increased during phonics instruction and that the same predictors underlie the development of word decoding in the two groups of children. Copyright © 2017 John Wiley & Sons, Ltd.

  4. LDPC decoder with a limited-precision FPGA-based floating-point multiplication coprocessor

    NASA Astrophysics Data System (ADS)

    Moberly, Raymond; O'Sullivan, Michael; Waheed, Khurram

    2007-09-01

    Implementing the sum-product algorithm in an FPGA with an embedded processor invites us to consider a tradeoff between computational precision and computational speed. The algorithm, known outside of the signal processing community as Pearl's belief propagation, is used for iterative soft-decision decoding of LDPC codes. We determined the feasibility of a coprocessor that will perform product computations. Our FPGA-based coprocessor design performs computer algebra with significantly less precision than the standard (e.g. integer, floating-point) operations of general-purpose processors. Using synthesis, targeting a 3,168-LUT Xilinx FPGA, we show that key components of a decoder are feasible and that the full single-precision decoder could be constructed using a larger part. Soft-decision decoding by the iterative belief propagation algorithm is impacted both positively and negatively by a reduction in the precision of the computation. Reducing precision reduces the coding gain, but the limited-precision computation can operate faster. The proposed solution offers custom logic to perform computations with less precision, yet uses the floating-point format to interface with the software. Simulation results show the achievable coding gain. Synthesis results help estimate the full capacity and performance of an FPGA-based coprocessor.

  5. Performance comparison of machine learning algorithms and number of independent components used in fMRI decoding of belief vs. disbelief.

    PubMed

    Douglas, P K; Harris, Sam; Yuille, Alan; Cohen, Mark S

    2011-05-15

    Machine learning (ML) has become a popular tool for mining functional neuroimaging data, and there are now hopes of performing such analyses efficiently in real-time. Towards this goal, we compared the accuracy of six different ML algorithms applied to neuroimaging data of persons engaged in a bivariate task, asserting their belief or disbelief of a variety of propositional statements. We performed unsupervised dimension reduction and automated feature extraction using independent component (IC) analysis and extracted IC time courses. Optimization of classification hyperparameters across each classifier occurred prior to assessment. Maximum accuracy was achieved at 92% for Random Forest, followed by 91% for AdaBoost, 89% for Naïve Bayes, 87% for a J48 decision tree, 86% for K*, and 84% for support vector machine. For real-time decoding applications, finding a parsimonious subset of diagnostic ICs might be useful. We used a forward search technique to sequentially add ranked ICs to the feature subspace. For the current data set, we determined that approximately six ICs represented a meaningful basis set for classification. We then projected these six IC spatial maps forward onto a later scanning session within subject. We then applied the optimized ML algorithms to these new data instances, and found that classification accuracy results were reproducible. Additionally, we compared our classification method to our previously published general linear model results on this same data set. The highest ranked IC spatial maps show similarity to brain regions associated with contrasts for belief > disbelief, and disbelief > belief. Copyright © 2010 Elsevier Inc. All rights reserved.
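
    A compact sketch of this pipeline with scikit-learn on synthetic stand-in data (the study extracted IC time courses from fMRI and used a larger classifier set; the array shapes, random data, and the three classifiers below are our own choices):

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 500))          # trials x voxels (synthetic stand-in)
    y = rng.integers(0, 2, size=120)         # belief vs. disbelief labels

    ica = FastICA(n_components=6, random_state=0)   # ~6 ICs, as in the paper
    feats = ica.fit_transform(X)
    for clf in (RandomForestClassifier(), AdaBoostClassifier(), SVC()):
        acc = cross_val_score(clf, feats, y, cv=5).mean()
        print(type(clf).__name__, round(acc, 2))    # near chance on random data
    ```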

  6. High-speed architecture for the decoding of trellis-coded modulation

    NASA Technical Reports Server (NTRS)

    Osborne, William P.

    1992-01-01

    Since 1971, when the Viterbi algorithm was introduced as the optimal method of decoding convolutional codes, improvements in circuit technology, especially VLSI, have steadily increased its speed and practicality. Trellis-Coded Modulation (TCM) combines convolutional coding with higher-level modulation (a non-binary source alphabet) to provide forward error correction and spectral efficiency. For binary codes, the current state-of-the-art is a 64-state Viterbi decoder on a single CMOS chip, operating at a data rate of 25 Mbps. Recently, there has been interest in increasing the speed of the Viterbi algorithm by improving the decoder architecture, or by reducing the algorithm itself. Designs employing new architectural techniques now exist; however, these techniques are currently applied to simpler binary codes, not to TCM. The purpose of this report is to discuss TCM architectural considerations in general, and to present the design, at the logic gate level, of a specific TCM decoder which applies these considerations to achieve high-speed decoding.
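
    The gate-level TCM design is beyond a short example, but the underlying Viterbi recursion is easy to state in software. Below is a hard-decision Viterbi decoder for the classic rate-1/2, constraint-length-3 convolutional code with generators (7, 5) octal, as an illustrative stand-in rather than the report's decoder:

    ```python
    G_TAPS = [(1, 1, 1), (1, 0, 1)]            # generators 7 and 5 (octal), K = 3

    def conv_encode(msg):
        state, out = (0, 0), []
        for b in list(msg) + [0, 0]:           # two flush bits return to state (0,0)
            reg = (b,) + state
            out += [sum(g * x for g, x in zip(taps, reg)) % 2 for taps in G_TAPS]
            state = (b, state[0])
        return out

    def viterbi_decode(bits):
        states = [(0, 0), (0, 1), (1, 0), (1, 1)]
        INF = float("inf")
        metric = {s: (0.0 if s == (0, 0) else INF) for s in states}
        paths = {s: [] for s in states}
        for i in range(0, len(bits), 2):       # one trellis step per symbol pair
            r = bits[i:i + 2]
            nm = {s: INF for s in states}
            np_ = {s: [] for s in states}
            for prev in states:
                if metric[prev] == INF:
                    continue
                for b in (0, 1):
                    reg = (b,) + prev
                    out = [sum(g * x for g, x in zip(t, reg)) % 2 for t in G_TAPS]
                    nxt = (b, prev[0])
                    d = metric[prev] + (out[0] != r[0]) + (out[1] != r[1])
                    if d < nm[nxt]:            # keep the survivor path
                        nm[nxt], np_[nxt] = d, paths[prev] + [b]
            metric, paths = nm, np_
        return paths[min(states, key=lambda s: metric[s])]

    msg = [1, 0, 1, 1]
    coded = conv_encode(msg)
    coded[3] ^= 1                              # inject a single channel error
    print(viterbi_decode(coded)[:len(msg)])    # -> [1, 0, 1, 1]
    ```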

  7. Modified Dynamic Decode-and-Forward Relaying Protocol for Type II Relay in LTE-Advanced and Beyond

    PubMed Central

    Nam, Sung Sik; Alouini, Mohamed-Slim; Choi, Seyeong

    2016-01-01

    In this paper, we propose a modified dynamic decode-and-forward (MoDDF) relaying protocol to meet the critical requirements for user equipment (UE) relays in next-generation cellular systems (e.g., LTE-Advanced and beyond). The proposed MoDDF realizes the fast jump-in relaying and the sequential decoding with an application of random codeset to encoding and re-encoding process at the source and the multiple UE relays, respectively. A subframe-by-subframe decoding based on the accumulated (or buffered) messages is employed to achieve energy, information, or mixed combining. Finally, possible early termination of decoding at the end user can lead to the higher spectral efficiency and more energy saving by reducing the frequency of redundant subframe transmission and decoding. These attractive features eliminate the need of directly exchanging control messages between multiple UE relays and the end user, which is an important prerequisite for the practical UE relay deployment. PMID:27898712

  8. Modified Dynamic Decode-and-Forward Relaying Protocol for Type II Relay in LTE-Advanced and Beyond.

    PubMed

    Nam, Sung Sik; Alouini, Mohamed-Slim; Choi, Seyeong

    2016-01-01

    In this paper, we propose a modified dynamic decode-and-forward (MoDDF) relaying protocol to meet the critical requirements for user equipment (UE) relays in next-generation cellular systems (e.g., LTE-Advanced and beyond). The proposed MoDDF realizes the fast jump-in relaying and the sequential decoding with an application of random codeset to encoding and re-encoding process at the source and the multiple UE relays, respectively. A subframe-by-subframe decoding based on the accumulated (or buffered) messages is employed to achieve energy, information, or mixed combining. Finally, possible early termination of decoding at the end user can lead to the higher spectral efficiency and more energy saving by reducing the frequency of redundant subframe transmission and decoding. These attractive features eliminate the need of directly exchanging control messages between multiple UE relays and the end user, which is an important prerequisite for the practical UE relay deployment.

  9. Concatenated coding for low data rate space communications.

    NASA Technical Reports Server (NTRS)

    Chen, C. H.

    1972-01-01

    In deep space communications with distant planets, the data rate as well as the operating SNR may be very low. To maintain the error rate at a very low level as well, it is necessary to use a sophisticated coding system (a longer code) without excessive decoding complexity. Concatenated coding has been shown to meet such requirements, in that the error rate decreases exponentially with the overall length of the code while the decoder complexity increases only algebraically. Three methods of concatenating an inner code with an outer code are considered, and a performance comparison of the three concatenated codes is made.

  10. Decoding power-spectral profiles from FMRI brain activities during naturalistic auditory experience.

    PubMed

    Hu, Xintao; Guo, Lei; Han, Junwei; Liu, Tianming

    2017-02-01

    Recent studies have demonstrated a close relationship between computational acoustic features and neural brain activities, and have largely advanced our understanding of auditory information processing in the human brain. Along this line, we proposed a multidisciplinary study to examine whether power spectral density (PSD) profiles can be decoded from brain activities during naturalistic auditory experience. The study was performed on a high resolution functional magnetic resonance imaging (fMRI) dataset acquired when participants freely listened to the audio-description of the movie "Forrest Gump". Representative PSD profiles existing in the audio-movie were identified by clustering the audio samples according to their PSD descriptors. Support vector machine (SVM) classifiers were trained to differentiate the representative PSD profiles using corresponding fMRI brain activities. Based on PSD profile decoding, we explored how the neural decodability correlated to power intensity and frequency deviants. Our experimental results demonstrated that PSD profiles can be reliably decoded from brain activities. We also suggested a sigmoidal relationship between the neural decodability and power intensity deviants of PSD profiles. Our study in addition substantiates the feasibility and advantage of naturalistic paradigm for studying neural encoding of complex auditory information.
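
    A schematic rendition of the analysis chain on synthetic stand-in data: Welch PSDs of audio windows, k-means clustering into representative profiles, and an SVM trained to predict the profile labels. The random "fMRI" features below are placeholders, so accuracy will sit near chance; only the pipeline shape is illustrated:

    ```python
    import numpy as np
    from scipy.signal import welch
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    audio = rng.normal(size=(200, 2048))          # 200 audio windows (synthetic)
    _, psd = welch(audio, fs=16000, nperseg=256, axis=1)
    labels = KMeans(n_clusters=4, n_init=10).fit_predict(np.log(psd))  # PSD profiles
    fmri = rng.normal(size=(200, 300))            # matching brain activity patterns
    print(cross_val_score(SVC(), fmri, labels, cv=5).mean())
    ```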

  11. The decoding of majority-multiplexed signals by means of dyadic convolution

    NASA Astrophysics Data System (ADS)

    Losev, V. V.

    1980-09-01

    The maximum likelihood method often cannot be used for the decoding of majority-multiplexed signals because of the large number of computations required. This paper describes a fast dyadic convolution transform which can be used to reduce the number of computations.
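
    Fast dyadic convolution is the XOR-domain analogue of FFT-based convolution: transform with a fast Walsh-Hadamard transform (WHT), multiply pointwise, and transform back. A minimal sketch for length-2^m sequences (our own illustration of the standard technique, not the paper's exact formulation):

    ```python
    def fwht(a):
        """In-place-style fast Walsh-Hadamard transform of a length-2^m list."""
        a = list(a)
        h = 1
        while h < len(a):
            for i in range(0, len(a), 2 * h):
                for j in range(i, i + h):
                    a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
            h *= 2
        return a

    def dyadic_convolution(x, y):
        """(x * y)[k] = sum_i x[i] * y[i XOR k], computed in O(N log N)."""
        X, Y = fwht(x), fwht(y)
        out = fwht([u * v for u, v in zip(X, Y)])
        return [v / len(x) for v in out]      # inverse WHT = WHT scaled by 1/N

    print(dyadic_convolution([1.0, 2.0, 0.0, 1.0], [0.5, 0.5, 1.0, 0.0]))
    ```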

  12. Modeling vocalization with ECoG cortical activity recorded during vocal production in the macaque monkey.

    PubMed

    Fukushima, Makoto; Saunders, Richard C; Fujii, Naotaka; Averbeck, Bruno B; Mishkin, Mortimer

    2014-01-01

    Vocal production is an example of controlled motor behavior with high temporal precision. Previous studies have decoded auditory-evoked cortical activity while monkeys listened to vocalization sounds. On the other hand, there have been few attempts at decoding motor cortical activity during vocal production. Here we recorded cortical activity during vocal production in the macaque with a chronically implanted electrocorticographic (ECoG) electrode array. The array detected robust activity in motor cortex during vocal production. We used a nonlinear dynamical model of the vocal organ to reduce the dimensionality of "Coo" calls produced by the monkey. We then used linear regression to evaluate the information in motor cortical activity for this reduced representation of calls. This simple linear model accounted for circa 65% of the variance in the reduced sound representations, supporting the feasibility of using the dynamical model of the vocal organ for decoding motor cortical activity during vocal production.

  13. Influence of incident angle on the decoding in laser polarization encoding guidance

    NASA Astrophysics Data System (ADS)

    Zhou, Muchun; Chen, Yanru; Zhao, Qi; Xin, Yu; Wen, Hongyuan

    2009-07-01

    Dynamic detection of polarization states is very important for laser polarization coding guidance systems. In this paper, a dynamic polarization decoding and detection system for laser polarization coding guidance was designed. The detection process for normally incident polarized light is analyzed with Jones matrices; the system can effectively detect changes in polarization. The influence of non-normally incident light on the performance of the polarization decoding and detection system is then studied; the analysis shows that changes in incident angle have a negative impact on measurement results, mainly because of second-order birefringence and polarization-sensitivity effects generated in the phase-delay plate and the beam-splitter prism. Combined with the Fresnel formulas, the decoding errors of linearly, elliptically and circularly polarized light entering the detector at different incident angles are calculated; the results show that the decoding errors increase with the incident angle. The decoding errors depend on the geometry and refractive indices of the wave plate and the polarizing beam-splitter prism, and can be reduced by using a thin, low-order wave plate. Simulations of the detection of polarized light at different incident angles confirm these conclusions.
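
    A Jones-calculus sketch of one plausible detection path for normally incident light, assuming a quarter-wave plate followed by a polarizing beam splitter; the paper's actual optical layout may differ, and the components here are idealized:

    ```python
    import numpy as np

    QWP = np.array([[1, 0], [0, 1j]])     # quarter-wave plate, fast axis horizontal
    P_X = np.array([[1, 0], [0, 0]])      # polarizing beam splitter, transmitted port
    P_Y = np.array([[0, 0], [0, 1]])      # polarizing beam splitter, reflected port

    def detector_powers(jones_in):
        e = QWP @ jones_in
        return (np.sum(np.abs(P_X @ e) ** 2),   # power on the transmitted detector
                np.sum(np.abs(P_Y @ e) ** 2))   # power on the reflected detector

    states = {
        "linear 0 deg":  np.array([1, 0], dtype=complex),
        "linear 45 deg": np.array([1, 1], dtype=complex) / np.sqrt(2),
        "circular":      np.array([1, 1j], dtype=complex) / np.sqrt(2),
    }
    for name, s in states.items():
        print(name, detector_powers(s))
    ```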

  14. A novel design of optical CDMA system based on TCM and FFH

    NASA Astrophysics Data System (ADS)

    Fang, Jun-Bin; Xu, Zhi-Hai; Huang, Hong-bin; Zheng, Liming; Chen, Shun-er; Liu, Wei-ping

    2005-02-01

    For application in Passive Optical Networks (PONs), a novel OCDMA system design is proposed in this paper. The scheme has two key components: a new OCDMA encoder/decoder system based on TCM and FFH, and an improved Optical Line Terminal (OLT) receiving system whose anti-interference performance is improved by the use of a Long Period Fiber Grating (LPFG). In the encoder/decoder system, a Trellis Coded Modulation (TCM) encoder is applied in front of the FFH modulator. The original signal is first encoded by the TCM encoder, and the redundant code at the TCM encoder output is then mapped onto one of the FFH modulation signal subsets for transmission. On the receiver (decoder) side, the transmitted signal is demodulated by FFH and decoded by a trellis decoder. Because TCM achieves high coding gain without increasing the transmission bandwidth or reducing the transmission speed, it is used to improve bit error performance and reduce multi-user interference. In the OLT receiving system, an EDFA and the LPFG are placed in front of the decoder to obtain excellent gain flatness over a large bandwidth, and an Optical Hard Limiter (OHL) is deployed to improve detection performance, greatly enhancing the anti-interference performance of the receiving system. Software simulations of the system performance are used for further analysis and verification. This work provides a valuable reference for related research.

  15. Construction of type-II QC-LDPC codes with fast encoding based on perfect cyclic difference sets

    NASA Astrophysics Data System (ADS)

    Li, Ling-xiang; Li, Hai-bing; Li, Ji-bi; Jiang, Hua

    2017-09-01

    In view of the problems that the encoding complexity of quasi-cyclic low-density parity-check (QC-LDPC) codes is high and the minimum distance is not large enough which leads to the degradation of the error-correction performance, the new irregular type-II QC-LDPC codes based on perfect cyclic difference sets (CDSs) are constructed. The parity check matrices of these type-II QC-LDPC codes consist of the zero matrices with weight of 0, the circulant permutation matrices (CPMs) with weight of 1 and the circulant matrices with weight of 2 (W2CMs). The introduction of W2CMs in parity check matrices makes it possible to achieve the larger minimum distance which can improve the error- correction performance of the codes. The Tanner graphs of these codes have no girth-4, thus they have the excellent decoding convergence characteristics. In addition, because the parity check matrices have the quasi-dual diagonal structure, the fast encoding algorithm can reduce the encoding complexity effectively. Simulation results show that the new type-II QC-LDPC codes can achieve a more excellent error-correction performance and have no error floor phenomenon over the additive white Gaussian noise (AWGN) channel with sum-product algorithm (SPA) iterative decoding.
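
    A construction sketch of the building blocks named above: circulant permutation matrices (CPMs) and weight-2 circulants (W2CMs), with shift exponents taken from the perfect cyclic difference set {1, 2, 4} mod 7 purely as a small illustration. The paper's actual block arrangement for girth and fast encoding is not reproduced:

    ```python
    import numpy as np

    def cpm(p, shift):
        """p x p identity cyclically shifted by `shift` columns (weight-1 circulant)."""
        return np.roll(np.eye(p, dtype=int), shift, axis=1)

    def w2cm(p, s1, s2):
        """Weight-2 circulant: the sum of two distinct CPMs (column weight 2)."""
        return (cpm(p, s1) + cpm(p, s2)) % 2

    p, D = 7, [1, 2, 4]                    # perfect cyclic difference set mod 7
    H = np.hstack([cpm(p, D[0]), cpm(p, D[1]), w2cm(p, D[2], 0)])
    print(H.shape)                         # (7, 21): one block row of a toy H
    ```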

  16. Compression of Encrypted Images Using Set Partitioning In Hierarchical Trees Algorithm

    NASA Astrophysics Data System (ADS)

    Sarika, G.; Unnithan, Harikuttan; Peter, Smitha

    2011-10-01

    When it is desired to transmit redundant data over an insecure channel, it is customary to encrypt the data. For encrypted real-world sources such as images, the use of Markov properties in the Slepian-Wolf decoder does not work well for gray-scale images. In this paper we propose a method for compressing an encrypted image. In the encoder section, the image is first encrypted and then undergoes compression in resolution. The cipher function scrambles only the pixel values; it does not shuffle the pixel locations. After downsampling, each sub-image is encoded independently and the resulting syndrome bits are transmitted. The received image undergoes joint decryption and decompression in the decoder section and is recovered using local statistics of the image. Here the decoder obtains only a lower-resolution version of the image. In addition, this method provides partial access to the current source at the decoder side, which improves the decoder's learning of the source statistics, and the source dependency is exploited to improve compression efficiency. The scheme provides better coding efficiency and lower computational complexity.

  17. Ensemble cryo-EM elucidates the mechanism of translation fidelity

    PubMed Central

    Loveland, Anna B.; Demo, Gabriel; Grigorieff, Nikolaus; Korostelev, Andrei A.

    2017-01-01

    SUMMARY Faithful gene translation depends on accurate decoding, whose structural mechanism remains a matter of debate. Ribosomes decode mRNA codons by selecting cognate aminoacyl-tRNAs delivered by EF-Tu. We present high-resolution structural ensembles of ribosomes with cognate or near-cognate aminoacyl-tRNAs delivered by EF-Tu. Both cognate and near-cognate tRNA anticodons explore the A site of an open 30S subunit, while inactive EF-Tu is separated from the 50S subunit. A transient conformation of decoding-center nucleotide G530 stabilizes the cognate codon-anticodon helix, initiating step-wise “latching” of the decoding center. The resulting 30S domain closure docks EF-Tu at the sarcin-ricin loop of the 50S subunit, activating EF-Tu for GTP hydrolysis and ensuing aminoacyl-tRNA accommodation. By contrast, near-cognate complexes fail to induce the G530 latch, thus favoring open 30S pre-accommodation intermediates with inactive EF-Tu. This work unveils long-sought structural differences between the pre-accommodation of cognate and near-cognate tRNA that elucidate the mechanism of accurate decoding. PMID:28538735

  18. High performance MPEG-audio decoder IC

    NASA Technical Reports Server (NTRS)

    Thorn, M.; Benbassat, G.; Cyr, K.; Li, S.; Gill, M.; Kam, D.; Walker, K.; Look, P.; Eldridge, C.; Ng, P.

    1993-01-01

    The emerging digital audio and video compression technology brings both an opportunity and a new challenge to IC design. The pervasive application of compression technology to consumer electronics will require high-volume, low-cost ICs and fast time to market for prototypes and production units. At the same time, the algorithms used in compression technology result in complex VLSI ICs. The conflicting challenges of algorithm complexity, low cost, and fast time to market have an impact on device architecture and design methodology. This paper presents the design of a dedicated, high-precision Moving Picture Experts Group (MPEG) audio decoder.

  19. Linear methods for reducing EMG contamination in peripheral nerve motor decodes.

    PubMed

    Kagan, Zachary B; Wendelken, Suzanne; Page, David M; Davis, Tyler; Hutchinson, Douglas T; Clark, Gregory A; Warren, David J

    2016-08-01

    Signals recorded from the peripheral nervous system (PNS) with high channel count penetrating microelectrode arrays, such as the Utah Slanted Electrode Array (USEA), often have electromyographic (EMG) signals contaminating the neural signal. This common-mode signal source may prevent single neural units from successfully being detected, thus hindering motor decode algorithms. Reducing this EMG contamination may lead to more accurate motor decode performance. A virtual reference (VR), created by a weighted linear combination of signals from a subset of all available channels, can be used to reduce this EMG contamination. Four methods of determining individual channel weights and six different methods of selecting subsets of channels were investigated (24 different VR types in total). The methods of determining individual channel weights were equal weighting, regression-based weighting, and two different proximity-based weightings. The subsets of channels were selected by a radius-based criteria, such that a channel was included if it was within a particular radius of inclusion from the target channel. These six radii of inclusion were 1.5, 2.9, 3.2, 5, 8.4, and 12.8 electrode-distances; the 12.8 electrode radius includes all USEA electrodes. We found that application of a VR improves the detectability of neural events via increasing the SNR, but we found no statistically meaningful difference amongst the VR types we examined. The computational complexity of implementation varies with respect to the method of determining channel weights and the number of channels in a subset, but does not correlate with VR performance. Hence, we examined the computational costs of calculating and applying the VR and based on these criteria, we recommend an equal weighting method of assigning weights with a 3.2 electrode-distance radius of inclusion. Further, we found empirically that application of the recommended VR will require less than 1 ms for 33.3 ms of data from one USEA.
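
    The recommended configuration is easy to state in code: an equal-weight virtual reference over all channels within 3.2 electrode-distances of the target on a 10 x 10 grid, subtracted from the target channel. Whether the target channel itself is included in the reference, and the exact grid geometry, are assumptions here, not details from the paper:

    ```python
    import numpy as np

    def virtual_reference(data, target, radius=3.2, grid=10):
        """data: (channels, samples); channel i sits at grid cell (i // grid, i % grid)."""
        coords = np.array([(i // grid, i % grid) for i in range(data.shape[0])])
        dist = np.linalg.norm(coords - coords[target], axis=1)
        neighbors = np.where((dist > 0) & (dist <= radius))[0]  # exclude the target
        return data[target] - data[neighbors].mean(axis=0)      # equal weighting

    # Example: clean one channel of simulated 100-channel array data.
    rec = np.random.default_rng(2).normal(size=(100, 3333))
    clean = virtual_reference(rec, target=45)
    ```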

  20. Bitstream decoding processor for fast entropy decoding of variable length coding-based multiformat videos

    NASA Astrophysics Data System (ADS)

    Jo, Hyunho; Sim, Donggyu

    2014-06-01

    We present a bitstream decoding processor for entropy decoding of variable length coding-based multiformat videos. Since most of the computational complexity of entropy decoders comes from bitstream accesses and table look-up process, the developed bitstream processing unit (BsPU) has several designated instructions to access bitstreams and to minimize branch operations in the table look-up process. In addition, the instruction for bitstream access has the capability to remove emulation prevention bytes (EPBs) of H.264/AVC without initial delay, repeated memory accesses, and additional buffer. Experimental results show that the proposed method for EPB removal achieves a speed-up of 1.23 times compared to the conventional EPB removal method. In addition, the BsPU achieves speed-ups of 5.6 and 3.5 times in entropy decoding of H.264/AVC and MPEG-4 Visual bitstreams, respectively, compared to an existing processor without designated instructions and a new table mapping algorithm. The BsPU is implemented on a Xilinx Virtex5 LX330 field-programmable gate array. The MPEG-4 Visual (ASP, Level 5) and H.264/AVC (Main Profile, Level 4) are processed using the developed BsPU with a core clock speed of under 250 MHz in real time.
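
    The emulation-prevention-byte rule that the BsPU handles in hardware is simple to state in software: within an H.264/AVC NAL unit, the encoder inserts 0x03 after each 0x00 0x00 pair, and the decoder must strip it. A straightforward software reference version follows; the paper's contribution is doing this in hardware without initial delay, repeated memory accesses, or an extra buffer:

    ```python
    def remove_epbs(nal: bytes) -> bytes:
        """Strip H.264 emulation prevention bytes (0x03 following 0x00 0x00)."""
        out = bytearray()
        zeros = 0
        for b in nal:
            if zeros >= 2 and b == 0x03:
                zeros = 0                  # drop the emulation_prevention_three_byte
                continue
            out.append(b)
            zeros = zeros + 1 if b == 0x00 else 0
        return bytes(out)

    print(remove_epbs(bytes([0x00, 0x00, 0x03, 0x01])).hex())   # -> '000001'
    ```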

  1. Optimal patch code design via device characterization

    NASA Astrophysics Data System (ADS)

    Wu, Wencheng; Dalal, Edul N.

    2012-01-01

    In many color measurement applications, such as those for color calibration and profiling, "patch code" has been used successfully for job identification and automation to reduce operator errors. A patch code is similar to a barcode, but is intended primarily for use in measurement devices that cannot read barcodes due to limited spatial resolution, such as spectrophotometers. There is an inherent tradeoff between decoding robustness and the number of code levels available for encoding. Previous methods have attempted to address this tradeoff, but those solutions have been sub-optimal. In this paper, we propose a method to design optimal patch codes via device characterization. The tradeoff between decoding robustness and the number of available code levels is optimized in terms of printing and measurement efforts, and decoding robustness against noises from the printing and measurement devices. Effort is drastically reduced relative to previous methods because print-and-measure is minimized through modeling and the use of existing printer profiles. Decoding robustness is improved by distributing the code levels in CIE Lab space rather than in CMYK space.

  2. Optimizations of a Hardware Decoder for Deep-Space Optical Communications

    NASA Technical Reports Server (NTRS)

    Cheng, Michael K.; Nakashima, Michael A.; Moision, Bruce E.; Hamkins, Jon

    2007-01-01

    The National Aeronautics and Space Administration has developed a capacity approaching modulation and coding scheme that comprises a serial concatenation of an inner accumulate pulse-position modulation (PPM) and an outer convolutional code [or serially concatenated PPM (SCPPM)] for deep-space optical communications. Decoding of this code uses the turbo principle. However, due to the nonbinary property of SCPPM, a straightforward application of classical turbo decoding is very inefficient. Here, we present various optimizations applicable in hardware implementation of the SCPPM decoder. More specifically, we feature a Super Gamma computation to efficiently handle parallel trellis edges, a pipeline-friendly 'maxstar top-2' circuit that reduces the max-only approximation penalty, a low-latency cyclic redundancy check circuit for window-based decoders, and a high-speed algorithmic polynomial interleaver that leads to memory savings. Using the featured optimizations, we implement a 6.72 megabits-per-second (Mbps) SCPPM decoder on a single field-programmable gate array (FPGA). Compared to the current data rate of 256 kilobits per second from Mars, the SCPPM coded scheme represents a throughput increase of more than twenty-six fold. Extension to a 50-Mbps decoder on a board with multiple FPGAs follows naturally. We show through hardware simulations that the SCPPM coded system can operate within 1 dB of the Shannon capacity at nominal operating conditions.
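
    The max-star operation at the heart of this turbo-style decoding, together with the max-only approximation whose penalty the featured "maxstar top-2" circuit reduces, can be written in a few lines; the numeric example below is ours, not from the article:

    ```python
    import math

    def maxstar(a, b):
        """Exact: log(exp(a) + exp(b)) = max(a, b) + log(1 + exp(-|a - b|))."""
        return max(a, b) + math.log1p(math.exp(-abs(a - b)))

    def maxstar_list(xs, exact=True):
        acc = xs[0]
        for x in xs[1:]:
            acc = maxstar(acc, x) if exact else max(acc, x)  # max-only approximation
        return acc

    print(maxstar_list([0.1, 0.5, 0.2]), maxstar_list([0.1, 0.5, 0.2], exact=False))
    ```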

  3. Cortical processing of pitch: Model-based encoding and decoding of auditory fMRI responses to real-life sounds.

    PubMed

    De Angelis, Vittoria; De Martino, Federico; Moerel, Michelle; Santoro, Roberta; Hausfeld, Lars; Formisano, Elia

    2017-11-13

    Pitch is a perceptual attribute related to the fundamental frequency (or periodicity) of a sound. So far, the cortical processing of pitch has been investigated mostly using synthetic sounds. However, the complex harmonic structure of natural sounds may require different mechanisms for the extraction and analysis of pitch. This study investigated the neural representation of pitch in human auditory cortex using model-based encoding and decoding analyses of high field (7 T) functional magnetic resonance imaging (fMRI) data collected while participants listened to a wide range of real-life sounds. Specifically, we modeled the fMRI responses as a function of the sounds' perceived pitch height and salience (related to the fundamental frequency and the harmonic structure respectively), which we estimated with a computational algorithm of pitch extraction (de Cheveigné and Kawahara, 2002). First, using single-voxel fMRI encoding, we identified a pitch-coding region in the antero-lateral Heschl's gyrus (HG) and adjacent superior temporal gyrus (STG). In these regions, the pitch representation model combining height and salience predicted the fMRI responses comparatively better than other models of acoustic processing and, in the right hemisphere, better than pitch representations based on height/salience alone. Second, we assessed with model-based decoding that multi-voxel response patterns of the identified regions are more informative of perceived pitch than the remainder of the auditory cortex. Further multivariate analyses showed that complementing a multi-resolution spectro-temporal sound representation with pitch produces a small but significant improvement to the decoding of complex sounds from fMRI response patterns. In sum, this work extends model-based fMRI encoding and decoding methods - previously employed to examine the representation and processing of acoustic sound features in the human auditory system - to the representation and processing of a relevant perceptual attribute such as pitch. Taken together, the results of our model-based encoding and decoding analyses indicated that the pitch of complex real life sounds is extracted and processed in lateral HG/STG regions, at locations consistent with those indicated in several previous fMRI studies using synthetic sounds. Within these regions, pitch-related sound representations reflect the modulatory combination of height and the salience of the pitch percept. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun

    1996-01-01

    In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N) P_s, where P_s represents the block error probability, holds for systematic encoding only. Systematic encoding also provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.

  5. Spatial attention and reading ability: ERP correlates of flanker and cue-size effects in good and poor adult phonological decoders.

    PubMed

    Matthews, Allison Jane; Martin, Frances Heritage

    2015-12-01

    To investigate facilitatory and inhibitory processes during selective attention among adults with good (n=17) and poor (n=14) phonological decoding skills, a go/nogo flanker task was completed while EEG was recorded. Participants responded to a middle target letter flanked by compatible or incompatible flankers. The target was surrounded by a small or large circular cue which was presented simultaneously or 500ms prior. Poor decoders showed a greater RT cost for incompatible stimuli preceded by large cues and less RT benefit for compatible stimuli. Poor decoders also showed reduced modulation of ERPs by cue-size at left hemisphere posterior sites (N1) and by flanker compatibility at right hemisphere posterior sites (N1) and frontal sites (N2), consistent with processing differences in fronto-parietal attention networks. These findings have potential implications for understanding the relationship between spatial attention and phonological decoding in dyslexia. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Hard decoding algorithm for optimizing thresholds under general Markovian noise

    NASA Astrophysics Data System (ADS)

    Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond

    2017-04-01

    Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.

  7. Memory-efficient decoding of LDPC codes

    NASA Technical Reports Server (NTRS)

    Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon

    2005-01-01

    We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, simulations show that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer with less than 0.1 dB quantization loss.
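
    A minimal Python sketch of non-uniform message quantization in an iterative decoder (the threshold and level values below are illustrative placeholders, not the optimized values from the paper, and a min-sum check-node update stands in for full belief propagation):

        import numpy as np

        # Illustrative 3-bit quantizer: 7 thresholds, 8 reconstruction levels
        # (the paper optimizes these to maximize mutual information)
        THRESHOLDS = np.array([-4.0, -2.0, -0.8, 0.0, 0.8, 2.0, 4.0])
        LEVELS = np.array([-6.0, -3.0, -1.4, -0.4, 0.4, 1.4, 3.0, 6.0])

        def quantize_llr(llr):
            """Map each LLR to one of 2^3 = 8 non-uniform reconstruction levels."""
            return LEVELS[np.searchsorted(THRESHOLDS, llr)]

        def check_node_min_sum(msgs):
            """Min-sum check-node update on quantized messages (a common
            low-complexity stand-in for the full belief-propagation update)."""
            signs = np.where(msgs >= 0, 1.0, -1.0)
            prod_sign = signs.prod()
            mags = np.abs(msgs)
            out = np.empty_like(msgs)
            for i in range(len(msgs)):
                out[i] = prod_sign * signs[i] * np.delete(mags, i).min()
            return quantize_llr(out)

        print(check_node_min_sum(quantize_llr(np.array([1.3, -0.5, 2.7, -3.2]))))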

  8. Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields.

    PubMed

    Yildiz, Izzet B; Mesgarani, Nima; Deneve, Sophie

    2016-12-07

    A primary goal of auditory neuroscience is to identify the sound features extracted and represented by auditory neurons. Linear encoding models, which describe neural responses as a function of the stimulus, have been primarily used for this purpose. Here, we provide theoretical arguments and experimental evidence in support of an alternative approach, based on decoding the stimulus from the neural response. We used a Bayesian normative approach to predict the responses of neurons detecting relevant auditory features, despite ambiguities and noise. We compared the model predictions to recordings from the primary auditory cortex of ferrets and found that: (1) the decoding filters of auditory neurons resemble the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynamics of responses better than a linear encoding model of similar complexity; and (3) the decoding model accounts for the accuracy with which the stimulus is represented in neural activity, whereas the linear encoding model performs very poorly. Most importantly, our model predicts that neuronal responses are fundamentally shaped by "explaining away," a divisive competition between alternative interpretations of the auditory scene. Neural responses in the auditory cortex are dynamic, nonlinear, and hard to predict. Traditionally, encoding models have been used to describe neural responses as a function of the stimulus. However, in addition to external stimulation, neural activity is strongly modulated by the responses of other neurons in the network. We hypothesized that auditory neurons aim to collectively decode their stimulus. In particular, a stimulus feature that is decoded (or explained away) by one neuron is not explained by another. We demonstrated that this novel Bayesian decoding model is better at capturing the dynamic responses of cortical neurons in ferrets. Whereas the linear encoding model poorly reflects selectivity of neurons, the decoding model can account for the strong nonlinearities observed in neural data. Copyright © 2016 Yildiz et al.

  9. Decoding of motor intentions from epidural ECoG recordings in severely paralyzed chronic stroke patients

    NASA Astrophysics Data System (ADS)

    Spüler, M.; Walter, A.; Ramos-Murguialday, A.; Naros, G.; Birbaumer, N.; Gharabaghi, A.; Rosenstiel, W.; Bogdan, M.

    2014-12-01

    Objective. Recently, there have been several approaches to utilize a brain-computer interface (BCI) for rehabilitation with stroke patients or as an assistive device for the paralyzed. In this study we investigated whether up to seven different hand movement intentions can be decoded from epidural electrocorticography (ECoG) in chronic stroke patients. Approach. In a screening session we recorded epidural ECoG data over the ipsilesional motor cortex from four chronic stroke patients who had no residual hand movement. Data were analyzed offline using a support vector machine (SVM) to decode different movement intentions. Main results. We showed that up to seven hand movement intentions can be decoded with an average accuracy of 61% (chance level 15.6%). When reducing the number of classes, average accuracies up to 88% can be achieved for decoding three different movement intentions. Significance. The findings suggest that ipsilesional epidural ECoG can be used as a viable control signal for BCI-driven neuroprostheses. Although the patients showed no sign of residual hand movement, the ipsilesional motor cortex still shows enough intention-related activity to decode different movement intentions with sufficient accuracy.
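
    Offline decoding of this kind can be prototyped in a few lines. A hedged Python sketch with synthetic data standing in for ECoG features (feature dimensions, kernel choice and cross-validation settings are assumptions, not the paper's exact pipeline):

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Hypothetical trial-by-feature matrix standing in for ECoG features
        rng = np.random.default_rng(0)
        X = rng.standard_normal((140, 64))    # 140 trials, 64 spectral features
        y = rng.integers(0, 7, size=140)      # 7 movement-intention classes

        clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"mean decoding accuracy: {acc:.2f} (chance here = {1/7:.3f})")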

  10. More than meets the eye: the role of self-identity in decoding complex emotional states.

    PubMed

    Stevenson, Michael T; Soto, José A; Adams, Reginald B

    2012-10-01

    Folk wisdom asserts that "the eyes are the window to the soul," and empirical science corroborates a prominent role for the eyes in the communication of emotion. Herein we examine variation in the ability to "read" the eyes of others as a function of social group membership, employing a widely used emotional state decoding task: "Reading the Mind in Eyes." This task has documented impaired emotional state decoding across racial groups, with cross-race performance on par with that previously reported as a function of autism spectrum disorders. The present study extended this work by examining the moderating role of social identity in such impairments. For college students more highly identified with their university, cross-race performance differences were not found for judgments of "same-school" eyes but remained for "rival-school" eyes. These findings suggest that impaired emotional state decoding across groups may thus be more amenable to remediation than previously realized.

  11. Energy-efficient constellations design and fast decoding for space-collaborative MIMO visible light communications

    NASA Astrophysics Data System (ADS)

    Zhu, Yi-Jun; Liang, Wang-Feng; Wang, Chao; Wang, Wen-Ya

    2017-01-01

    In this paper, space-collaborative constellations (SCCs) for indoor multiple-input multiple-output (MIMO) visible light communication (VLC) systems are considered. Compared with traditional VLC MIMO techniques, such as repetition coding (RC), spatial modulation (SM) and spatial multiplexing (SMP), SCC achieves the minimum average optical power for a fixed minimum Euclidean distance. We present a unified SCC structure for 2×2 MIMO VLC systems and extend it to larger MIMO VLC systems with more transceivers. Specifically, for 2×2 MIMO VLC, a fast decoding algorithm is developed whose complexity is almost linear in the square root of the cardinality of the SCC, and expressions for the symbol error rate of the SCC are presented. In addition, bit mappings similar to Gray mapping are proposed for the SCC. Computer simulations are performed to verify the fast decoding algorithm and the performance of the SCC; the results demonstrate that the performance of the SCC is better than that of RC, SM and SMP for indoor channels in general.
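
    For contrast, the baseline that such fast algorithms improve upon is exhaustive maximum-likelihood detection over the whole constellation. A minimal generic Python sketch (the paper's actual fast decoder, with cost near the square root of the SCC cardinality, is not reproduced here; the example numbers are illustrative):

        import numpy as np

        def ml_detect(y, candidates, H):
            """Exhaustive ML detection: pick the candidate transmit vector s
            minimizing ||y - H s||. Cost grows with the constellation size;
            a fast decoder aims to avoid this full search."""
            best, best_metric = None, np.inf
            for s in candidates:
                m = np.linalg.norm(y - H @ s)
                if m < best_metric:
                    best, best_metric = s, m
            return best

        # Toy 2x2 example with a handful of illustrative candidate vectors
        H = np.array([[1.0, 0.2], [0.3, 1.0]])
        candidates = [np.array(v, float) for v in [(0, 1), (1, 0), (1, 1), (2, 1)]]
        print(ml_detect(H @ np.array([1.0, 1.0]) + 0.05, candidates, H))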

  12. Kernel Temporal Differences for Neural Decoding

    PubMed Central

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatiotemporal information at reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural-state-to-action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain-machine interfaces. PMID:25866504
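
    A heavily simplified Python sketch of the kernel TD idea (TD(0) without eligibility traces, a Gaussian kernel, and unbounded dictionary growth are simplifying assumptions; KTD(λ) as studied in the paper is more elaborate):

        import numpy as np

        def gaussian_kernel(x, y, sigma=1.0):
            return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

        class KernelTD:
            """Value function represented as V(x) = sum_i alpha_i k(c_i, x)."""
            def __init__(self, eta=0.1, gamma=0.9, sigma=1.0):
                self.centers, self.alphas = [], []
                self.eta, self.gamma, self.sigma = eta, gamma, sigma

            def value(self, x):
                return sum(a * gaussian_kernel(c, x, self.sigma)
                           for c, a in zip(self.centers, self.alphas))

            def update(self, x, reward, x_next):
                """TD update: add the current state as a new kernel center,
                weighted by the temporal-difference error."""
                td_error = reward + self.gamma * self.value(x_next) - self.value(x)
                self.centers.append(x)
                self.alphas.append(self.eta * td_error)

        ktd = KernelTD()
        ktd.update(np.array([0.1, 0.2]), reward=1.0, x_next=np.array([0.2, 0.3]))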

  13. Multiple description distributed image coding with side information for mobile wireless transmission

    NASA Astrophysics Data System (ADS)

    Wu, Min; Song, Daewon; Chen, Chang Wen

    2005-03-01

    Multiple description coding (MDC) is a source coding technique that codes the source information into multiple descriptions and transmits them over different channels in a packet network or error-prone wireless environment, to achieve graceful degradation if some descriptions are lost at the receiver. In this paper, we propose a multiple description distributed wavelet zero-tree image coding system for mobile wireless transmission. We provide two innovations to achieve excellent error resilience. First, when MDC is applied to wavelet subband-based image coding, it is possible to introduce correlation between the descriptions in each subband. We use this correlation, together with a potentially error-corrupted description, as side information in the decoding, which formulates MDC decoding as a Wyner-Ziv decoding problem. If only some descriptions are lost, their correlation information is still available, and the proposed Wyner-Ziv decoder can recover a lost description by using the correlation information and the error-corrupted description as side information. Second, within each description, single-bitstream wavelet zero-tree coding is very vulnerable to channel errors: the first bit error may cause the decoder to discard all subsequent bits, whether or not they are correctly received. Therefore, we integrate multiple description scalar quantization (MDSQ) with a multiple wavelet-tree image coding method to reduce error propagation. We first group wavelet coefficients into multiple trees according to parent-child relationships and then code them separately with the SPIHT algorithm to form multiple bitstreams. Such decomposition reduces error propagation and therefore improves the error-correcting capability of the Wyner-Ziv decoder. Experimental results show that the proposed scheme not only exhibits excellent error-resilient performance but also demonstrates graceful degradation over the packet loss rate.

  14. Evaluation of selective control information detection scheme in orthogonal frequency division multiplexing-based radio-over-fiber and visible light communication links

    NASA Astrophysics Data System (ADS)

    Dalarmelina, Carlos A.; Adegbite, Saheed A.; Pereira, Esequiel da V.; Nunes, Reginaldo B.; Rocha, Helder R. O.; Segatto, Marcelo E. V.; Silva, Jair A. L.

    2017-05-01

    Block-level detection is required to decode what may be classified as selective control information (SCI), such as the control format indicator in 4G long-term evolution (LTE) systems. Using optical orthogonal frequency division multiplexing over radio-over-fiber (RoF) links, we report the experimental evaluation of an SCI detection scheme based on a time-domain correlation (TDC) technique in comparison with the conventional maximum likelihood (ML) approach. Compared with the ML method, the TDC method is shown to improve detection performance over both 20 and 40 km of standard single-mode fiber (SSMF) links. We also report a performance analysis of the TDC scheme in noisy visible light communication channel models after propagation through 40 km of SSMF. Experimental and simulation results confirm that the TDC method is attractive for practical orthogonal frequency division multiplexing-based RoF and fiber-wireless systems. Unlike the ML method, the TDC requires no channel estimation, which is another key benefit.

  15. The trellis complexity of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Lin, W.

    1995-01-01

    It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.

  16. The effect of artichoke (Cynara scolymus L.) extract on respiratory chain system activity in rat liver mitochondria.

    PubMed

    Juzyszyn, Z; Czerny, B; Myśliwiec, Z; Pawlik, A; Droździk, M

    2010-06-01

    The effect of artichoke extract on mitochondrial respiratory chain (MRC) activity in isolated rat liver mitochondria (including reaction kinetics) was studied. The effect of the extract on the activity of isolated cytochrome oxidase was also studied. Extract in the range of 0.68-2.72 microg/ml demonstrated potent and concentration-dependent inhibitory activity. Concentrations > or =5.4 microg/ml entirely inhibited MRC activity. The succinate oxidase system (MRC complexes II-IV) was the most potently inhibited, its activity at an extract concentration of 1.36 microg/ml being reduced by 63.3% compared with the control (p < 0.05). The results suggest a complex inhibitory mechanism of the extract. Inhibition of the succinate oxidase system was competitive (K(i) = 0.23 microg/ml), whereas isolated cytochrome oxidase was inhibited noncompetitively (K(i) = 126 microg/ml). The results of this study suggest that the salubrious effects of artichoke extracts may rely in part on the effects of their active compounds on the activity of the mitochondrial respiratory chain system.

  17. Accelerating Chemical Discovery with Machine Learning: Simulated Evolution of Spin Crossover Complexes with an Artificial Neural Network.

    PubMed

    Janet, Jon Paul; Chan, Lydia; Kulik, Heather J

    2018-03-01

    Machine learning (ML) has emerged as a powerful complement to simulation for materials discovery by reducing time for evaluation of energies and properties at accuracy competitive with first-principles methods. We use genetic algorithm (GA) optimization to discover unconventional spin-crossover complexes in combination with efficient scoring from an artificial neural network (ANN) that predicts spin-state splitting of inorganic complexes. We explore a compound space of over 5600 candidate materials derived from eight metal/oxidation state combinations and a 32-ligand pool. We introduce a strategy for error-aware ML-driven discovery by limiting how far the GA travels away from the nearest ANN training points while maximizing property (i.e., spin-splitting) fitness, leading to discovery of 80% of the leads from full chemical space enumeration. Over a 51-complex subset, average unsigned errors (4.5 kcal/mol) are close to the ANN's baseline 3 kcal/mol error. By obtaining leads from the trained ANN within seconds rather than days from a DFT-driven GA, this strategy demonstrates the power of ML for accelerating inorganic material discovery.
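
    The error-aware scoring can be sketched as a fitness function that rejects candidates lying too far from the ANN's training data. A minimal Python illustration (the function names and the Euclidean distance criterion are assumptions; the paper's actual descriptor space and cutoff differ):

        import numpy as np

        def error_aware_fitness(candidate, predict_property, train_X, max_dist):
            """Return the ANN-predicted property (e.g., spin splitting) only if
            the candidate is close enough to the training set to be trusted."""
            d_nearest = np.min(np.linalg.norm(train_X - candidate, axis=1))
            if d_nearest > max_dist:
                return -np.inf            # outside the model's trusted domain
            return predict_property(candidate)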

  18. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

    Table look-up plays a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up results in heavy table memory access, which in turn leads to high table power consumption. To reduce the heavy table memory access and high power consumption of current methods, a memory-efficient table look-up optimized algorithm is presented for CAVLD. The contribution of this paper is the introduction of an index search technique that reduces memory access for table look-up, and thereby table power consumption. Specifically, our scheme uses index search to reduce memory access by reducing the searching and matching operations for code_word, exploiting the internal relationship among the number of zeros in code_prefix, the value of code_suffix and code_length, thus saving the power consumed by table look-up. The experimental results show that our proposed index-search-based table look-up algorithm lowers memory access by about 60% compared with a sequential-search table look-up scheme, thereby saving considerable power for CAVLD in H.264/AVC.
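
    The index-search idea can be illustrated with a toy variable-length code: count the prefix zeros once, then jump directly into a sub-table instead of sequentially matching codewords. A hedged Python sketch (the bit reader, sub-tables and symbols are all illustrative, not the actual H.264 CAVLC tables):

        class BitReader:
            """Tiny MSB-first bit reader over a bit string (sketch helper)."""
            def __init__(self, bits):
                self.bits, self.pos = bits, 0
            def read_bit(self):
                b = int(self.bits[self.pos]); self.pos += 1
                return b
            def read_bits(self, n):
                v = int(self.bits[self.pos:self.pos + n], 2); self.pos += n
                return v

        # Hypothetical sub-tables keyed by the number of leading zeros in the
        # prefix; each maps a fixed-length suffix value to a decoded symbol.
        SUBTABLE = {
            0: {0: "sym_A"},
            1: {0: "sym_B", 1: "sym_C"},
            2: {0: "sym_D", 1: "sym_E", 2: "sym_F", 3: "sym_G"},
        }
        SUFFIX_BITS = {0: 0, 1: 1, 2: 2}

        def decode_vlc(reader):
            """Index search: count leading zeros once, then one direct table
            hit, instead of sequentially matching every candidate codeword."""
            zeros = 0
            while reader.read_bit() == 0:
                zeros += 1
            n = SUFFIX_BITS[zeros]
            suffix = reader.read_bits(n) if n else 0
            return SUBTABLE[zeros][suffix]

        print(decode_vlc(BitReader("0110")))   # prefix "01", suffix "1" -> sym_C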

  19. Multicenter Study of Epidemiological Cutoff Values and Detection of Resistance in Candida spp. to Anidulafungin, Caspofungin, and Micafungin Using the Sensititre YeastOne Colorimetric Method

    PubMed Central

    Alvarez-Fernandez, M.; Cantón, E.; Carver, P. L.; Chen, S. C.-A.; Eschenauer, G.; Getsinger, D. L.; Gonzalez, G. M.; Grancini, A.; Hanson, K. E.; Kidd, S. E.; Klinker, K.; Kubin, C. J.; Kus, J. V.; Lockhart, S. R.; Meletiadis, J.; Morris, A. J.; Pelaez, T.; Rodriguez-Iglesias, M.; Sánchez-Reus, F.; Shoham, S.; Wengenack, N. L.; Borrell Solé, N.; Echeverria, J.; Esperalba, J.; Gómez-G. de la Pedrosa, E.; García García, I.; Linares, M. J.; Marco, F.; Merino, P.; Pemán, J.; Pérez del Molino, L.; Roselló Mayans, E.; Rubio Calvo, C.; Ruiz Pérez de Pipaon, M.; Yagüe, G.; Garcia-Effron, G.; Perlin, D. S.; Sanguinetti, M.; Shields, R.; Turnidge, J.

    2015-01-01

    Neither breakpoints (BPs) nor epidemiological cutoff values (ECVs) have been established for Candida spp. with anidulafungin, caspofungin, and micafungin when using the Sensititre YeastOne (SYO) broth dilution colorimetric method. In addition, reference caspofungin MICs have so far proven to be unreliable. Candida species wild-type (WT) MIC distributions (for microorganisms in a species/drug combination with no detectable phenotypic resistance) were established for 6,007 Candida albicans, 186 C. dubliniensis, 3,188 C. glabrata complex, 119 C. guilliermondii, 493 C. krusei, 205 C. lusitaniae, 3,136 C. parapsilosis complex, and 1,016 C. tropicalis isolates. SYO MIC data gathered from 38 laboratories in Australia, Canada, Europe, Mexico, New Zealand, South Africa, and the United States were pooled to statistically define SYO ECVs. ECVs for anidulafungin, caspofungin, and micafungin encompassing ≥97.5% of the statistically modeled population were, respectively, 0.12, 0.25, and 0.06 μg/ml for C. albicans, 0.12, 0.25, and 0.03 μg/ml for C. glabrata complex, 4, 2, and 4 μg/ml for C. parapsilosis complex, 0.5, 0.25, and 0.06 μg/ml for C. tropicalis, 0.25, 1, and 0.25 μg/ml for C. krusei, 0.25, 1, and 0.12 μg/ml for C. lusitaniae, 4, 2, and 2 μg/ml for C. guilliermondii, and 0.25, 0.25, and 0.12 μg/ml for C. dubliniensis. Species-specific SYO ECVs for anidulafungin, caspofungin, and micafungin correctly classified 72 (88.9%), 74 (91.4%), and 76 (93.8%), respectively, of 81 Candida isolates with identified fks mutations. SYO ECVs may aid in detecting non-WT isolates with reduced susceptibility to anidulafungin, micafungin, and especially caspofungin, since testing the susceptibilities of Candida spp. to caspofungin by reference methodologies is not recommended. PMID:26282428

  20. Deep hierarchical attention network for video description

    NASA Astrophysics Data System (ADS)

    Li, Shuohao; Tang, Min; Zhang, Jun

    2018-03-01

    Pairing video to natural language description remains a challenge in computer vision and machine translation. Inspired by image description, which uses an encoder-decoder model for reducing a visual scene into a single sentence, we propose a deep hierarchical attention network for video description. The proposed model uses a convolutional neural network (CNN) and a bidirectional LSTM network as encoders, while a hierarchical attention network is used as the decoder. Compared to encoder-decoder models used in video description, the bidirectional LSTM network can capture the temporal structure among video frames. Moreover, the hierarchical attention network has an advantage over a single-layer attention network in global context modeling. To make a fair comparison with other methods, we evaluate the proposed architecture with different types of CNN structures and decoders. Experimental results on standard datasets show that our model outperforms state-of-the-art techniques.
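
    The encoder side can be sketched briefly. A minimal PyTorch example of a bidirectional LSTM over per-frame CNN features (dimensions are illustrative assumptions; the hierarchical attention decoder is omitted):

        import torch
        import torch.nn as nn

        class VideoEncoder(nn.Module):
            """Per-frame CNN features -> bidirectional LSTM over time."""
            def __init__(self, feat_dim=2048, hidden=512):
                super().__init__()
                self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                                    bidirectional=True)

            def forward(self, frame_feats):       # (batch, T, feat_dim)
                out, _ = self.lstm(frame_feats)   # (batch, T, 2 * hidden)
                return out

        enc = VideoEncoder()
        video_feats = torch.randn(2, 16, 2048)    # 2 clips, 16 frames each
        print(enc(video_feats).shape)             # torch.Size([2, 16, 1024])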

  1. Coset Codes Viewed as Terminated Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1996-01-01

    In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.

  2. Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex

    PubMed Central

    Logiaco, Laureline; Quilodran, René; Procyk, Emmanuel; Arleo, Angelo

    2015-01-01

    The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70–200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys’ behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. All together, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators. PMID:26266537

  3. Reading Comprehension in a Large Cohort of French First Graders from Low Socio-Economic Status Families: A 7-Month Longitudinal Study

    PubMed Central

    Gentaz, Edouard; Sprenger-Charolles, Liliane; Theurel, Anne; Colé, Pascale

    2013-01-01

    Background: The literature suggests that a complex relationship exists between the three main skills involved in reading comprehension (decoding, listening comprehension and vocabulary) and that this relationship depends on at least three other factors: orthographic transparency, children’s grade level and socioeconomic status (SES). This study investigated the relative contribution of the predictors of reading comprehension in a longitudinal design (from beginning to end of the first grade) in 394 French children from low-SES families. Methodology/Principal findings: Reading comprehension was measured at the end of the first grade using two tasks: one with short utterances and one with a medium-length narrative text. Accuracy in listening comprehension and vocabulary, and fluency of decoding skills, were measured at the beginning and end of the first grade. Accuracy in decoding skills was measured only at the beginning. Regression analyses showed that listening comprehension and decoding skills (accuracy and fluency) always significantly predicted reading comprehension. The contribution of decoding was greater when reading comprehension was assessed via the task using short utterances. Between the two assessments, the contribution of vocabulary, and especially of decoding skills, increased, while that of listening comprehension remained unchanged. Conclusion/Significance: These results challenge the ‘simple view of reading’. They also have educational implications, since they show that it is possible to assess decoding and reading comprehension very early on, even in low-SES children, in an orthography (i.e., French) that is less deep than English. These assessments, together with those of listening comprehension and vocabulary, may allow early identification of children at risk for reading difficulty and the early start of remedial training, which is when such training is most effective. PMID:24250802

  4. The role of ECoG magnitude and phase in decoding position, velocity, and acceleration during continuous motor behavior

    PubMed Central

    Hammer, Jiri; Fischer, Jörg; Ruescher, Johanna; Schulze-Bonhage, Andreas; Aertsen, Ad; Ball, Tonio

    2013-01-01

    In neuronal population signals, including the electroencephalogram (EEG) and electrocorticogram (ECoG), the low-frequency component (LFC) is particularly informative about motor behavior and can be used for decoding movement parameters for brain-machine interface (BMI) applications. An idea previously expressed, but as of yet not quantitatively tested, is that it is the LFC phase that is the main source of decodable information. To test this issue, we analyzed human ECoG recorded during a game-like, one-dimensional, continuous motor task with a novel decoding method suitable for unfolding magnitude and phase explicitly into a complex-valued, time-frequency signal representation, enabling quantification of the decodable information within the temporal, spatial and frequency domains and allowing disambiguation of the phase contribution from that of the spectral magnitude. The decoding accuracy based only on phase information was substantially (at least 2 fold) and significantly higher than that based only on magnitudes for position, velocity and acceleration. The frequency profile of movement-related information in the ECoG data matched well with the frequency profile expected when assuming a close time-domain correlate of movement velocity in the ECoG, e.g., a (noisy) “copy” of hand velocity. No such match was observed with the frequency profiles expected when assuming a copy of either hand position or acceleration. There was also no indication of additional magnitude-based mechanisms encoding movement information in the LFC range. Thus, our study contributes to elucidating the nature of the informative LFC of motor cortical population activity and may hence contribute to improve decoding strategies and BMI performance. PMID:24198757
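
    Separating magnitude from phase in a complex time-frequency representation is straightforward to prototype. A minimal Python sketch with synthetic data (the STFT parameters are illustrative; the paper applies its own decoding method to real ECoG):

        import numpy as np
        from scipy.signal import stft

        # Hypothetical single-channel ECoG trace sampled at 1 kHz
        fs = 1000
        x = np.random.default_rng(0).standard_normal(10 * fs)

        # Complex-valued time-frequency representation
        f, t, Zxx = stft(x, fs=fs, nperseg=256)
        magnitude = np.abs(Zxx)    # spectral amplitude features
        phase = np.angle(Zxx)      # phase features
        # Training decoders on np.exp(1j * phase) alone versus magnitude alone
        # lets one quantify which component carries the movement information.
        print(magnitude.shape, phase.shape)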

  5. Robust decoding of selective auditory attention from MEG in a competing-speaker environment via state-space modeling

    PubMed Central

    Akram, Sahar; Presacco, Alessandro; Simon, Jonathan Z.; Shamma, Shihab A.; Babadi, Behtash

    2015-01-01

    The underlying mechanism of how the human brain solves the cocktail party problem is largely unknown. Recent neuroimaging studies, however, suggest salient temporal correlations between the auditory neural response and the attended auditory object. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects, we propose a decoding approach for tracking the attentional state while subjects are selectively listening to one of the two speech streams embedded in a competing-speaker environment. We develop a biophysically-inspired state-space model to account for the modulation of the neural response with respect to the attentional state of the listener. The constructed decoder is based on a maximum a posteriori (MAP) estimate of the state parameters via the Expectation Maximization (EM) algorithm. Using only the envelope of the two speech streams as covariates, the proposed decoder enables us to track the attentional state of the listener with a temporal resolution of the order of seconds, together with statistical confidence intervals. We evaluate the performance of the proposed model using numerical simulations and experimentally measured evoked MEG responses from the human brain. Our analysis reveals considerable performance gains provided by the state-space model in terms of temporal resolution, computational complexity and decoding accuracy. PMID:26436490

  6. Surveying multidisciplinary aspects in real-time distributed coding for Wireless Sensor Networks.

    PubMed

    Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio

    2015-01-27

    Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, "real-time" coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide the taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the inter-play of the different theories.

  7. Music models aberrant rule decoding and reward valuation in dementia

    PubMed Central

    Clark, Camilla N; Golden, Hannah L; McCallion, Oliver; Nicholas, Jennifer M; Cohen, Miriam H; Slattery, Catherine F; Paterson, Ross W; Fletcher, Phillip D; Mummery, Catherine J; Rohrer, Jonathan D; Crutch, Sebastian J; Warren, Jason D

    2018-01-01

    Abstract Aberrant rule- and reward-based processes underpin abnormalities of socio-emotional behaviour in major dementias. However, these processes remain poorly characterized. Here we used music to probe rule decoding and reward valuation in patients with frontotemporal dementia (FTD) syndromes and Alzheimer’s disease (AD) relative to healthy age-matched individuals. We created short melodies that were either harmonically resolved (‘finished’) or unresolved (‘unfinished’); the task was to classify each melody as finished or unfinished (rule processing) and rate its subjective pleasantness (reward valuation). Results were adjusted for elementary pitch and executive processing; neuroanatomical correlates were assessed using voxel-based morphometry. Relative to healthy older controls, patients with behavioural variant FTD showed impairments of both musical rule decoding and reward valuation, while patients with semantic dementia showed impaired reward valuation but intact rule decoding, patients with AD showed impaired rule decoding but intact reward valuation and patients with progressive non-fluent aphasia performed comparably to healthy controls. Grey matter associations with task performance were identified in anterior temporal, medial and lateral orbitofrontal cortices, previously implicated in computing diverse biological and non-biological rules and rewards. The processing of musical rules and reward distils cognitive and neuroanatomical mechanisms relevant to complex socio-emotional dysfunction in major dementias. PMID:29186630

  8. Coarse-Scale Biases for Spirals and Orientation in Human Visual Cortex

    PubMed Central

    Heeger, David J.

    2013-01-01

    Multivariate decoding analyses are widely applied to functional magnetic resonance imaging (fMRI) data, but there is controversy over their interpretation. Orientation decoding in primary visual cortex (V1) reflects coarse-scale biases, including an over-representation of radial orientations. But fMRI responses to clockwise and counter-clockwise spirals can also be decoded. Because these stimuli are matched for radial orientation, while differing in local orientation, it has been argued that fine-scale columnar selectivity for orientation contributes to orientation decoding. We measured fMRI responses in human V1 to both oriented gratings and spirals. Responses to oriented gratings exhibited a complex topography, including a radial bias that was most pronounced in the peripheral representation, and a near-vertical bias that was most pronounced near the foveal representation. Responses to clockwise and counter-clockwise spirals also exhibited coarse-scale organization, at the scale of entire visual quadrants. The preference of each voxel for clockwise or counter-clockwise spirals was predicted from the preferences of that voxel for orientation and spatial position (i.e., within the retinotopic map). Our results demonstrate a bias for local stimulus orientation that has a coarse spatial scale, is robust across stimulus classes (spirals and gratings), and suffices to explain decoding from fMRI responses in V1. PMID:24336733

  9. Fast attainment of computer cursor control with noninvasively acquired brain signals

    NASA Astrophysics Data System (ADS)

    Bradberry, Trent J.; Gentili, Rodolphe J.; Contreras-Vidal, José L.

    2011-06-01

    Brain-computer interface (BCI) systems are allowing humans and non-human primates to drive prosthetic devices such as computer cursors and artificial arms with just their thoughts. Invasive BCI systems acquire neural signals with intracranial or subdural electrodes, while noninvasive BCI systems typically acquire neural signals with scalp electroencephalography (EEG). Some drawbacks of invasive BCI systems are the inherent risks of surgery and gradual degradation of signal integrity. A limitation of noninvasive BCI systems for two-dimensional control of a cursor, in particular those based on sensorimotor rhythms, is the lengthy training time required by users to achieve satisfactory performance. Here we describe a novel approach to continuously decoding imagined movements from EEG signals in a BCI experiment with reduced training time. We demonstrate that, using our noninvasive BCI system and observational learning, subjects were able to accomplish two-dimensional control of a cursor with performance levels comparable to those of invasive BCI systems. Compared to other studies of noninvasive BCI systems, training time was substantially reduced, requiring only a single session of decoder calibration (~20 min) and subject practice (~20 min). In addition, we used standardized low-resolution brain electromagnetic tomography to reveal that the neural sources that encoded observed cursor movement may implicate a human mirror neuron system. These findings offer the potential to continuously control complex devices such as robotic arms with one's mind without lengthy training or surgery.

  10. Leveled Reading and Engagement with Complex Texts

    ERIC Educational Resources Information Center

    Hastings, Kathryn

    2016-01-01

    The benefits of engaging with age-appropriate reading materials in classroom settings are numerous. For example, students' comprehension is developed as they acquire new vocabulary and concepts. The Common Core requires all students have daily opportunities to engage with "complex text" regardless of students' decoding levels. However,…

  11. Accelerating a MPEG-4 video decoder through custom software/hardware co-design

    NASA Astrophysics Data System (ADS)

    Díaz, Jorge L.; Barreto, Dacil; García, Luz; Marrero, Gustavo; Carballo, Pedro P.; Núñez, Antonio

    2007-05-01

    In this paper we present a novel methodology to accelerate an MPEG-4 video decoder using software/hardware co-design for wireless DAB/DMB networks. Software support includes the services provided by the embedded kernel μC/OS-II and the application tasks mapped to software. Hardware support includes several custom co-processors and a communication architecture with bridges to the main system bus and to a dual-port SRAM. Synchronization among tasks is achieved at two levels, by a hardware protocol and by kernel-level scheduling services. Our reference application is an MPEG-4 video decoder composed of several software functions and written using a special C++ library named CASSE. Profiling and design-space exploration techniques were previously applied to the Advanced Simple Profile (ASP) MPEG-4 decoder to determine the best HW/SW partition, which is developed here. This research is part of the ARTEMI project, whose main goal is the establishment of methodologies for the design of real-time complex digital systems, using programmable logic devices with embedded microprocessors as the target technology and the design of multimedia systems for broadcasting networks as the reference application.

  12. A novel depth-of-interaction block detector for positron emission tomography using a dichotomous orthogonal symmetry decoding concept.

    PubMed

    Zhang, Yuxuan; Yan, Han; Baghaei, Hossain; Wong, Wai-Hoi

    2016-02-21

    Conventionally, a dual-end depth-of-interaction (DOI) block detector readout requires two two-dimensional silicon photomultiplier (SiPM) arrays, one on top and one on the bottom, to define the XYZ positions. However, because both the top and bottom SiPM arrays are reading the same pixels, this creates information redundancy. We propose a dichotomous orthogonal symmetric (DOS) dual-end readout block detector design, which removes this redundancy by reducing the number of SiPMs while still achieving XY and DOI (Z) decoding for a positron emission tomography (PET) block detector. Reflecting films are used within the block detector to channel photons going to the top of the block to travel only in the X direction, while photons going to the bottom are channeled along the Y direction. Despite the unidirectional channeling on each end, the top readout provides both X and Y information using two one-dimensional SiPM arrays instead of a two-dimensional SiPM array; similarly, the bottom readout also provides both X and Y information with just two one-dimensional SiPM arrays. Thus, a total of four one-dimensional SiPM arrays (4 × N SiPMs) are used to decode the XYZ positions of the firing pixels instead of two two-dimensional SiPM arrays (2 × N × N SiPMs), reducing the number of SiPMs per block from 2N² to 4N for PET/MR or PET/CT systems. Moreover, the SiPM arrays on one end can be replaced by two regular photomultiplier tubes (PMTs), so that a block needs only 2N SiPMs + 2 half-PMTs; this hybrid-DOS DOI block detector can be used in PET/CT systems. Monte Carlo simulations were carried out to study the performance of our DOS DOI block detector design, including the XY-decoding quality, energy resolution, and DOI resolution. Both BGO and LSO scintillators were studied. We found that 4 mm pixels were well decoded for 5 × 5 BGO and 9 × 9 LSO arrays with 4 to 5 mm DOI resolution and 16-20% energy resolution. By adding light-channel decoding, we modified the DOS design to a high-resolution design, which resolved scintillator pixels smaller than the SiPM dimensions. Detector pixels of 2.4 mm were decoded for 8 × 8 BGO and 15 × 15 LSO arrays with 5 mm DOI resolution and 20-23% energy resolution. Time performance was also studied for the 8 × 8 BGO and 15 × 15 LSO HR-DOS arrays. The timing resolution for the corner and central crystals is 986 ± 122 ps and 1.89 ± 0.17 μs respectively with BGO, and 137 ± 42 ps and 458 ± 67 ps respectively with LSO. Monte Carlo simulations with GATE/Geant4 demonstrated the feasibility of our DOS DOI block detector design. In conclusion, our novel design achieved good performance, except for time performance, while using fewer SiPMs and supporting electronic channels than current non-DOI PET detectors. This novel design can significantly reduce the cost, heat, and readout complexity of DOI block detectors for PET/MR/CT systems that do not require time-of-flight capability.

  13. A high throughput architecture for a low complexity soft-output demapping algorithm

    NASA Astrophysics Data System (ADS)

    Ali, I.; Wasenmüller, U.; Wehn, N.

    2015-11-01

    Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and are therefore part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits, with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high-order modulation systems, and therefore low-complexity demapping algorithms are indispensable in low-power receivers. In the presence of multiple wireless communication standards, where each standard defines multiple modulation schemes, there is a need for an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is to achieve a very high throughput in doubly iterative systems, for instance, MIMO and code-aided synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low-complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high-throughput, flexible, and area-efficient architecture. We describe architectures to execute the investigated algorithms and implement them on an FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture, based on the best low-complexity algorithm identified, that delivers a high throughput of 166 Msymbols/second for Gray-mapped 16-QAM modulation on a Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs and 2 DSP48Es.
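
    As an example of the class of algorithms evaluated, the max-log approximation for a Gray-mapped 16-QAM axis reduces the per-bit search from 16 symbols to 4 levels. A minimal Python sketch (the bit-to-level assignment is one illustrative Gray mapping, not necessarily the one used in the paper):

        import numpy as np

        LEVELS = np.array([-3, -1, 1, 3]) / np.sqrt(10)   # unit-energy 16-QAM axis
        # One illustrative Gray assignment of the two bits carried per axis
        BITS = {-3: (0, 0), -1: (0, 1), 1: (1, 1), 3: (1, 0)}

        def maxlog_llrs_axis(y, noise_var):
            """Max-log LLRs for the two bits on one I/Q axis: thanks to Gray
            mapping, each axis is demapped over 4 levels instead of 16 symbols."""
            metrics = -((y - LEVELS) ** 2) / noise_var
            llrs = []
            for b in range(2):
                m0 = max(m for lvl, m in zip((-3, -1, 1, 3), metrics) if BITS[lvl][b] == 0)
                m1 = max(m for lvl, m in zip((-3, -1, 1, 3), metrics) if BITS[lvl][b] == 1)
                llrs.append(m0 - m1)   # positive favors bit value 0
            return llrs

        print(maxlog_llrs_axis(0.35, noise_var=0.1))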

  14. Evolution of complex fruiting-body morphologies in homobasidiomycetes.

    PubMed Central

    Hibbett, David S; Binder, Manfred

    2002-01-01

    The fruiting bodies of homobasidiomycetes include some of the most complex forms that have evolved in the fungi, such as gilled mushrooms, bracket fungi and puffballs ('pileate-erect') forms. Homobasidiomycetes also include relatively simple crust-like 'resupinate' forms, however, which account for ca. 13-15% of the described species in the group. Resupinate homobasidiomycetes have been interpreted either as a paraphyletic grade of plesiomorphic forms or a polyphyletic assemblage of reduced forms. The former view suggests that morphological evolution in homobasidiomycetes has been marked by independent elaboration in many clades, whereas the latter view suggests that parallel simplification has been a common mode of evolution. To infer patterns of morphological evolution in homobasidiomycetes, we constructed phylogenetic trees from a dataset of 481 species and performed ancestral state reconstruction (ASR) using parsimony and maximum likelihood (ML) methods. ASR with both parsimony and ML implies that the ancestor of the homobasidiomycetes was resupinate, and that there have been multiple gains and losses of complex forms in the homobasidiomycetes. We also used ML to address whether there is an asymmetry in the rate of transformations between simple and complex forms. Models of morphological evolution inferred with ML indicate that the rate of transformations from simple to complex forms is about three to six times greater than the rate of transformations in the reverse direction. A null model of morphological evolution, in which there is no asymmetry in transformation rates, was rejected. These results suggest that there is a 'driven' trend towards the evolution of complex forms in homobasidiomycetes. PMID:12396494

  15. Spatial domain entertainment audio decompression/compression

    NASA Astrophysics Data System (ADS)

    Chan, Y. K.; Tam, Ka Him K.

    2014-02-01

    The ARM7 NEON processor with a 128-bit SIMD hardware accelerator requires a peak performance of 13.99 mega cycles per second for MP3 stereo entertainment-quality decoding. For similar compression bit rates, OGG and AAC are preferred over MP3. The Patent Cooperation Treaty application dated 28 August 2012 describes an audio decompression scheme producing a sequence of interleaving "min to Max" and "Max to min" rising and falling segments. The number of interior audio samples bound by "min to Max" or "Max to min" can be {0|1|…|N} audio samples. The magnitudes of samples, including the bounding min and Max, are distributed as constants normalized to [0, 1] between the bounding magnitudes. The decompressed audio is then a "sequence of static segments" on a frame-by-frame basis. Some of these frames need to be post-processed to elevate high frequencies. The post-processing is compression-efficiency neutral, and the additional decoding complexity is only a small fraction of the overall decoding complexity, without the need for extra hardware. Compression efficiency can be expected to be very high, since the source audio has been decimated and converted to a set of data with only "segment length and corresponding segment magnitude" attributes. The PCT describes how these two attributes are efficiently coded by the PCT's innovative coding scheme. The PCT decoding efficiency is obviously very high, and decoding latency is essentially zero. Both the hardware requirement and run time are at least an order of magnitude better than MP3 variants. A side benefit is ultra-low power consumption on mobile devices. The acid test of whether such a simplistic waveform representation can indeed reproduce authentic decompressed quality is benchmarked against OGG (aoTuv Beta 6.03) using three pairs of stereo audio frames and one broadcast-like voice audio frame, with each frame consisting of 2,028 samples at a 44.1 kHz sampling frequency.
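
    One plausible reading of the segment representation is that each interior sample is stored as a normalized value in [0, 1] between the bounding magnitudes. A hedged Python sketch of reconstruction under that assumption (not taken from the PCT text):

        import numpy as np

        def reconstruct_segment(v_min, v_max, norm_positions):
            """Rebuild interior samples of a rising 'min to Max' segment: each
            stored value in [0, 1] interpolates between the bounding magnitudes."""
            return v_min + np.asarray(norm_positions) * (v_max - v_min)

        # A segment rising from -0.2 to 0.5 with three interior samples:
        print(reconstruct_segment(-0.2, 0.5, [0.25, 0.6, 0.9]))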

  16. Nonlinear detection for a high rate extended binary phase shift keying system.

    PubMed

    Chen, Xian-Qing; Wu, Le-Nan

    2013-03-28

    The algorithm and results of a nonlinear detector using a machine learning technique called the support vector machine (SVM) on an efficient modulation system with high data rate and low energy consumption are presented in this paper. Simulation results show that the performance achieved by the SVM detector is comparable to that of a conventional threshold decision (TD) detector. The two detectors detect the received signals together with the special impacting filter (SIF), which can improve energy utilization efficiency. However, unlike the TD detector, the SVM detector concentrates not only on reducing the BER of the detector, but also on providing accurate posterior probability estimates (PPEs), which can be used as soft inputs to the LDPC decoder. The complexity of this detector is addressed by using four features and simplifying the decision function. In addition, a bandwidth-efficient transmission is analyzed with both the SVM and TD detectors. The SVM detector is more robust to the sampling rate than the TD detector. We find that the SVM is suitable for extended binary phase shift keying (EBPSK) signal detection and can provide accurate posterior probabilities for LDPC decoding.

  17. Automatic detection and decoding of honey bee waggle dances.

    PubMed

    Wario, Fernando; Wild, Benjamin; Rojas, Raúl; Landgraf, Tim

    2017-01-01

    The waggle dance is one of the most popular examples of animal communication. Forager bees direct their nestmates to profitable resources via a complex motor display. Essentially, the dance encodes the polar coordinates to the resource in the field. Unemployed foragers follow the dancer's movements and then search for the advertised spots in the field. Throughout the last decades, biologists have employed different techniques to measure key characteristics of the waggle dance and decode the information it conveys. Early techniques involved the use of protractors and stopwatches to measure the dance orientation and duration directly from the observation hive. Recent approaches employ digital video recordings and manual measurements on screen. However, manual approaches are very time-consuming. Most studies, therefore, regard only small numbers of animals in short periods of time. We have developed a system capable of automatically detecting, decoding and mapping communication dances in real-time. In this paper, we describe our recording setup, the image processing steps performed for dance detection and decoding and an algorithm to map dances to the field. The proposed system performs with a detection accuracy of 90.07%. The decoded waggle orientation has an average error of -2.92° (± 7.37°), well within the range of human error. To evaluate and exemplify the system's performance, a group of bees was trained to an artificial feeder, and all dances in the colony were automatically detected, decoded and mapped. The system presented here is the first of this kind made publicly available, including source code and hardware specifications. We hope this will foster quantitative analyses of the honey bee waggle dance.

  18. Adaptive Offset Correction for Intracortical Brain Computer Interfaces

    PubMed Central

    Homer, Mark L.; Perge, János A.; Black, Michael J.; Harrison, Matthew T.; Cash, Sydney S.; Hochberg, Leigh R.

    2014-01-01

    Intracortical brain computer interfaces (iBCIs) decode intended movement from neural activity for the control of external devices such as a robotic arm. Standard approaches include a calibration phase to estimate decoding parameters. During iBCI operation, the statistical properties of the neural activity can depart from those observed during calibration, sometimes hindering a user’s ability to control the iBCI. To address this problem, we adaptively correct the offset terms within a Kalman filter decoder via penalized maximum likelihood estimation. The approach can handle rapid shifts in neural signal behavior (on the order of seconds) and requires no knowledge of the intended movement. The algorithm, called the multiple offset correction algorithm (MOCA), was tested using simulated neural activity and evaluated retrospectively using data collected from two people with tetraplegia operating an iBCI. In 19 clinical research test cases, where a nonadaptive Kalman filter yielded relatively high decoding errors, MOCA significantly reduced these errors (10.6 ± 10.1%; p < 0.05, pairwise t-test). MOCA did not significantly change the error in the remaining 23 cases where a nonadaptive Kalman filter already performed well. These results suggest that MOCA provides more robust decoding than the standard Kalman filter for iBCIs. PMID:24196868

  1. Linear feature projection-based real-time decoding of limb state from dorsal root ganglion recordings.

    PubMed

    Han, Sungmin; Chu, Jun-Uk; Park, Jong Woong; Youn, Inchan

    2018-05-15

    Proprioceptive afferent activities recorded by a multichannel microelectrode have been used to decode limb movements to provide sensory feedback signals for closed-loop control in a functional electrical stimulation (FES) system. However, analyzing the high dimensionality of neural activity is one of the major challenges in real-time applications. This paper proposes a linear feature projection method for the real-time decoding of ankle and knee joint angles. Single-unit activity was extracted as a feature vector from proprioceptive afferent signals that were recorded from the L7 dorsal root ganglion during passive movements of ankle and knee joints. The dimensionality of this feature vector was then reduced using a linear feature projection composed of projection pursuit and negentropy maximization (PP/NEM). Finally, a time-delayed Kalman filter was used to estimate the ankle and knee joint angles. The PP/NEM approach had a better decoding performance than did other feature projection methods, and all processes were completed within the real-time constraints. These results suggested that the proposed method could be a useful decoding method to provide real-time feedback signals in closed-loop FES systems.
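
    scikit-learn's FastICA also seeks maximally non-Gaussian (high-negentropy) projections, so it can stand in for the paper's PP/NEM step in a quick sketch; the feature matrix, component count, and synthetic data here are purely illustrative.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)
      X = rng.laplace(size=(1000, 32))      # stand-in for single-unit features

      # FastICA maximizes an approximation of negentropy, mirroring the
      # projection pursuit / negentropy maximization (PP/NEM) idea.
      ica = FastICA(n_components=8, random_state=0)
      Z = ica.fit_transform(X)              # low-dimensional features for the
                                            # time-delayed Kalman decoder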

  2. D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things

    PubMed Central

    Akan, Ozgur B.

    2018-01-01

    Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce the power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose the Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of the classical DSC by employing the decoding delay concept, which enables the use of the maximally correlated portion of sensor samples during the event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of correlation among sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications with a massive number of sensors, moving toward the realization of the Internet of Sensing Things (IoST). PMID:29538405

  3. D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things.

    PubMed

    Aktas, Metin; Kuscu, Murat; Dinc, Ergin; Akan, Ozgur B

    2018-01-01

    Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce the power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose the Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of the classical DSC by employing the decoding delay concept, which enables the use of the maximally correlated portion of sensor samples during the event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of correlation among sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications with a massive number of sensors, moving toward the realization of the Internet of Sensing Things (IoST).
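
    The classical practical trick behind DSC-style compression is to send only the least-significant bits of a sample and let the sink resolve the ambiguity with correlated side information. A minimal sketch of that generic coset idea follows (it is not the paper's delay-based protocol):

      def dsc_encode(sample, n_bits):
          # A cluster member sends only the n least-significant bits.
          return sample & ((1 << n_bits) - 1)

      def dsc_decode(code, side_info, n_bits):
          # The sink picks the integer with those LSBs closest to the side
          # information (the clusterhead's uncompressed sample); this is
          # exact whenever |sample - side_info| < 2**(n_bits - 1).
          step = 1 << n_bits
          base = side_info - (side_info & (step - 1)) + code
          return min((base - step, base, base + step),
                     key=lambda v: abs(v - side_info))

      assert dsc_decode(dsc_encode(100, 4), side_info=103, n_bits=4) == 100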

  4. Pulse transmission receiver with higher-order time derivative pulse correlator

    DOEpatents

    Dress, Jr., William B.; Smith, Stephen F.

    2003-09-16

    Systems and methods for pulse-transmission low-power communication modes are disclosed. A pulse transmission receiver includes: a higher-order time derivative pulse correlator; a demodulation decoder coupled to the higher-order time derivative pulse correlator; a clock coupled to the demodulation decoder; and a pseudorandom polynomial generator coupled to both the higher-order time derivative pulse correlator and the clock. The systems and methods significantly reduce lower-frequency emissions from pulse transmission spread-spectrum communication modes, which reduces potentially harmful interference to existing radio frequency services and users and also simultaneously permit transmission of multiple data bits by utilizing specific pulse shapes.

  5. The Brain Is Faster than the Hand in Split-Second Intentions to Respond to an Impending Hazard: A Simulation of Neuroadaptive Automation to Speed Recovery to Perturbation in Flight Attitude.

    PubMed

    Callan, Daniel E; Terzibas, Cengiz; Cassel, Daniel B; Sato, Masa-Aki; Parasuraman, Raja

    2016-01-01

    The goal of this research is to test the potential for neuroadaptive automation to improve response speed to a hazardous event by using a brain-computer interface (BCI) to decode perceptual-motor intention. Seven participants underwent four experimental sessions while brain activity was measured with magnetoencephalography. The first three sessions used a simple constrained task in which the participant was to pull back on the control stick to recover from a perturbation in attitude in one condition and to passively observe the perturbation in the other condition. The fourth session consisted of having to recover from a perturbation in attitude while piloting the plane through the Grand Canyon, constantly maneuvering to track over the river below. Independent component analysis was used on the first two sessions to extract artifacts and find an event-related component associated with the onset of the perturbation. These two sessions were used to train a decoder to classify trials in which the participant recovered from the perturbation (motor intention) vs. just passively viewing the perturbation. The BCI-decoder was tested on the third session of the same simple task and was found to significantly distinguish motor intention trials from passive viewing trials (mean = 69.8%). The same BCI-decoder was then used to test the fourth session on the complex task. The BCI-decoder significantly classified perturbation from no-perturbation trials (73.3%), with a significant time savings of 72.3 ms (original response time of 425.0 ms vs. 352.7 ms with the BCI-decoder). The BCI-decoder model of the best subject was shown to generalize, in both performance and time savings, to the other subjects. The results of our off-line open-loop simulation demonstrate that BCI-based neuroadaptive automation has the potential to decode motor intention faster than manual control in response to a hazardous perturbation in flight attitude, while ignoring ongoing motor and visual induced activity related to piloting the airplane.

  6. The Brain Is Faster than the Hand in Split-Second Intentions to Respond to an Impending Hazard: A Simulation of Neuroadaptive Automation to Speed Recovery to Perturbation in Flight Attitude

    PubMed Central

    Callan, Daniel E.; Terzibas, Cengiz; Cassel, Daniel B.; Sato, Masa-aki; Parasuraman, Raja

    2016-01-01

    The goal of this research is to test the potential for neuroadaptive automation to improve response speed to a hazardous event by using a brain-computer interface (BCI) to decode perceptual-motor intention. Seven participants underwent four experimental sessions while brain activity was measured with magnetoencephalography. The first three sessions used a simple constrained task in which the participant was to pull back on the control stick to recover from a perturbation in attitude in one condition and to passively observe the perturbation in the other condition. The fourth session consisted of having to recover from a perturbation in attitude while piloting the plane through the Grand Canyon, constantly maneuvering to track over the river below. Independent component analysis was used on the first two sessions to extract artifacts and find an event-related component associated with the onset of the perturbation. These two sessions were used to train a decoder to classify trials in which the participant recovered from the perturbation (motor intention) vs. just passively viewing the perturbation. The BCI-decoder was tested on the third session of the same simple task and was found to significantly distinguish motor intention trials from passive viewing trials (mean = 69.8%). The same BCI-decoder was then used to test the fourth session on the complex task. The BCI-decoder significantly classified perturbation from no-perturbation trials (73.3%), with a significant time savings of 72.3 ms (original response time of 425.0 ms vs. 352.7 ms with the BCI-decoder). The BCI-decoder model of the best subject was shown to generalize, in both performance and time savings, to the other subjects. The results of our off-line open-loop simulation demonstrate that BCI-based neuroadaptive automation has the potential to decode motor intention faster than manual control in response to a hazardous perturbation in flight attitude, while ignoring ongoing motor and visual induced activity related to piloting the airplane. PMID:27199710

  7. Interactive machine learning for health informatics: when do we need the human-in-the-loop?

    PubMed

    Holzinger, Andreas

    2016-06-01

    Machine learning (ML) is the fastest growing field in computer science, and health informatics is among its greatest challenges. The goal of ML is to develop algorithms which can learn and improve over time and can be used for predictions. Most ML researchers concentrate on automatic machine learning (aML), where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from big data with many training sets. However, in the health domain we are sometimes confronted with a small number of data sets or rare events, where aML approaches suffer from insufficient training samples. Here interactive machine learning (iML) may be of help, having its roots in reinforcement learning, preference learning, and active learning. The term iML is not yet in wide use, so we define it as "algorithms that can interact with agents and can optimize their learning behavior through these interactions, where the agents can also be human." This "human-in-the-loop" can be beneficial in solving computationally hard problems, e.g., subspace clustering, protein folding, or k-anonymization of health data, where human expertise can help to reduce an exponential search space through heuristic selection of samples. Therefore, what would otherwise be an NP-hard problem reduces greatly in complexity through the input and the assistance of a human agent involved in the learning phase.
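
    A toy sketch of the human-in-the-loop idea in its simplest, active-learning form: the learner queries the "human" (here a simulated oracle function) for the label it is least certain about, shrinking the search space with expert knowledge. All data and names are synthetic.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      X = rng.standard_normal((500, 4))
      true_w = np.array([2.0, -1.0, 0.5, 0.0])
      oracle = lambda x: int(x @ true_w > 0)       # stand-in for the human expert

      # Seed with one clear example of each class, then query interactively.
      labels = {int(np.argmax(X @ true_w)): 1, int(np.argmin(X @ true_w)): 0}
      for _ in range(20):
          idx = list(labels)
          clf = LogisticRegression().fit(X[idx], [labels[i] for i in idx])
          p = clf.predict_proba(X)[:, 1]
          query = min((i for i in range(len(X)) if i not in labels),
                      key=lambda i: abs(p[i] - 0.5))   # most uncertain sample
          labels[query] = oracle(X[query])             # ask the human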

  8. High-speed wavelength-division multiplexing quantum key distribution system.

    PubMed

    Yoshino, Ken-ichiro; Fujiwara, Mikio; Tanaka, Akihiro; Takahashi, Seigo; Nambu, Yoshihiro; Tomita, Akihisa; Miki, Shigehito; Yamashita, Taro; Wang, Zhen; Sasaki, Masahide; Tajima, Akio

    2012-01-15

    A high-speed quantum key distribution system was developed with the wavelength-division multiplexing (WDM) technique and dedicated key distillation hardware engines. Two interferometers for encoding and decoding are shared over eight wavelengths to reduce the system's size, cost, and control complexity. The key distillation engines can process a huge amount of data from the WDM channels by using a 1 Mbit block in real time. We demonstrated a three-channel WDM system that simultaneously uses avalanche photodiodes and superconducting single-photon detectors. We achieved 12 h continuous key generation with a secure key rate of 208 kilobits per second through a 45 km field fiber with 14.5 dB loss.

  9. LDPC-coded orbital angular momentum (OAM) modulation for free-space optical communication.

    PubMed

    Djordjevic, Ivan B; Arabaci, Murat

    2010-11-22

    An orbital angular momentum (OAM) based LDPC-coded modulation scheme suitable for use in FSO communication is proposed. We demonstrate that the proposed scheme can operate under the strong atmospheric turbulence regime and enable 100 Gb/s optical transmission while employing 10 Gb/s components. Both binary and nonbinary LDPC-coded OAM modulations are studied. In addition to providing better BER performance, the nonbinary LDPC-coded modulation reduces overall decoder complexity and latency. The nonbinary LDPC-coded OAM modulation provides a net coding gain of 9.3 dB at a BER of 10^-8. The maximum-ratio combining scheme outperforms the corresponding equal-gain combining scheme by almost 2.5 dB.

  10. Automatic detection and decoding of honey bee waggle dances

    PubMed Central

    Wild, Benjamin; Rojas, Raúl; Landgraf, Tim

    2017-01-01

    The waggle dance is one of the most popular examples of animal communication. Forager bees direct their nestmates to profitable resources via a complex motor display. Essentially, the dance encodes the polar coordinates to the resource in the field. Unemployed foragers follow the dancer’s movements and then search for the advertised spots in the field. Throughout the last decades, biologists have employed different techniques to measure key characteristics of the waggle dance and decode the information it conveys. Early techniques involved the use of protractors and stopwatches to measure the dance orientation and duration directly from the observation hive. Recent approaches employ digital video recordings and manual measurements on screen. However, manual approaches are very time-consuming. Most studies, therefore, regard only small numbers of animals in short periods of time. We have developed a system capable of automatically detecting, decoding and mapping communication dances in real-time. In this paper, we describe our recording setup, the image processing steps performed for dance detection and decoding and an algorithm to map dances to the field. The proposed system performs with a detection accuracy of 90.07%. The decoded waggle orientation has an average error of -2.92° (± 7.37°), well within the range of human error. To evaluate and exemplify the system’s performance, a group of bees was trained to an artificial feeder, and all dances in the colony were automatically detected, decoded and mapped. The system presented here is the first of this kind made publicly available, including source code and hardware specifications. We hope this will foster quantitative analyses of the honey bee waggle dance. PMID:29236712

  11. Improving brain-machine interface performance by decoding intended future movements

    NASA Astrophysics Data System (ADS)

    Willett, Francis R.; Suminski, Aaron J.; Fagg, Andrew H.; Hatsopoulos, Nicholas G.

    2013-04-01

    Objective. A brain-machine interface (BMI) records neural signals in real time from a subject's brain, interprets them as motor commands, and reroutes them to a device such as a robotic arm, so as to restore lost motor function. Our objective here is to improve BMI performance by minimizing the deleterious effects of delay in the BMI control loop. We mitigate the effects of delay by decoding the subject's intended movements a short time into the future. Approach. We use the decoded, intended future movements of the subject as the control signal that drives the movement of our BMI. This should allow the user's intended trajectory to be implemented more quickly by the BMI, reducing the amount of delay in the system. In our experiment, a monkey (Macaca mulatta) uses a future-prediction BMI to control a simulated arm to hit targets on a screen. Main results. Results from experiments with BMIs possessing different system delays (100, 200 and 300 ms) show that the monkey can make significantly straighter, faster and smoother movements when the decoder predicts the user's future intent. We also characterize how BMI performance changes as a function of delay, and explore offline how the accuracy of future-prediction decoders varies at different time leads. Significance. This study is the first to characterize the effects of control delays in a BMI and to show that decoding the user's future intent can compensate for the negative effect of control delay on BMI performance.

  12. New architecture for dynamic frame-skipping transcoder.

    PubMed

    Fung, Kai-Tat; Chan, Yui-Lam; Siu, Wan-Chi

    2002-01-01

    Transcoding is a key technique for reducing the bit rate of a previously compressed video signal. A high transcoding ratio may result in an unacceptable picture quality when the full frame rate of the incoming video bitstream is used. Frame skipping is often used as an efficient scheme to allocate more bits to the representative frames, so that an acceptable quality for each frame can be maintained. However, a skipped frame must still be decompressed completely, since it may act as a reference frame for the reconstruction of nonskipped frames. The newly quantized discrete cosine transform (DCT) coefficients of the prediction errors need to be re-computed for the nonskipped frame with reference to the previous nonskipped frame; this can create undesirable complexity as well as introduce re-encoding errors. In this paper, we propose new algorithms and a novel architecture for frame-rate reduction to improve picture quality and to reduce complexity. The proposed architecture operates mainly in the DCT domain to achieve a transcoder with low complexity. With the direct addition of DCT coefficients and an error compensation feedback loop, re-encoding errors are reduced significantly. Furthermore, we propose a frame-rate control scheme which can dynamically adjust the number of skipped frames according to the incoming motion vectors and the re-encoding errors due to transcoding, such that the decoded sequence has smooth motion as well as better transcoded pictures. Experimental results show that, compared to the conventional transcoder, the new frame-skipping transcoder architecture is more robust, produces fewer requantization errors, and has reduced computational complexity.
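
    A schematic of the two key ideas, direct addition of DCT coefficients and a re-encoding error feedback loop, in a few lines. A toy scalar quantizer stands in for real rate control, and motion-vector composition is omitted; all names are illustrative.

      import numpy as np

      def requantize(coeffs, q):
          return np.round(coeffs / q) * q            # toy scalar quantizer

      def transcode_kept_frame(dct_kept, dct_skipped_sum, err_feedback, q=16.0):
          # Direct addition in the DCT domain: residual coefficients of the
          # skipped frames are folded into the next kept frame, and the
          # requantization error is fed back so it cannot accumulate.
          combined = dct_kept + dct_skipped_sum + err_feedback
          out = requantize(combined, q)
          return out, combined - out                 # (output, new feedback)

      blk = np.ones((8, 8)) * 7.0
      out, err = transcode_kept_frame(blk, blk * 0.5, np.zeros((8, 8)))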

  13. Face processing in chronic alcoholism: a specific deficit for emotional features.

    PubMed

    Maurage, P; Campanella, S; Philippot, P; Martin, S; de Timary, P

    2008-04-01

    It is well established that chronic alcoholism is associated with a deficit in the decoding of emotional facial expression (EFE). Nevertheless, it is still unclear whether this deficit is specifically for emotions or due to a more general impairment in visual or facial processing. This study was designed to clarify this issue using multiple control tasks and the subtraction method. Eighteen patients suffering from chronic alcoholism and 18 matched healthy control subjects were asked to perform several tasks evaluating (1) Basic visuo-spatial and facial identity processing; (2) Simple reaction times; (3) Complex facial features identification (namely age, emotion, gender, and race). Accuracy and reaction times were recorded. Alcoholic patients had a preserved performance for visuo-spatial and facial identity processing, but their performance was impaired for visuo-motor abilities and for the detection of complex facial aspects. More importantly, the subtraction method showed that alcoholism is associated with a specific EFE decoding deficit, still present when visuo-motor slowing down is controlled for. These results offer a post hoc confirmation of earlier data showing an EFE decoding deficit in alcoholism by strongly suggesting a specificity of this deficit for emotions. This may have implications for clinical situations, where emotional impairments are frequently observed among alcoholic subjects.

  14. Mapping of MPEG-4 decoding on a flexible architecture platform

    NASA Astrophysics Data System (ADS)

    van der Tol, Erik B.; Jaspers, Egbert G.

    2001-12-01

    In the field of consumer electronics, the advent of new features such as Internet, games, video conferencing, and mobile communication has triggered the convergence of television and computer technologies. This requires a generic media-processing platform that enables simultaneous execution of very diverse tasks such as high-throughput stream-oriented data processing and highly data-dependent irregular processing with complex control flows. As a representative application, this paper presents the mapping of a Main Visual profile MPEG-4 decoder for High-Definition (HD) video onto a flexible architecture platform. A stepwise approach is taken, going from the decoder application toward an implementation proposal. First, the application is decomposed into separate tasks with self-contained functionality, clear interfaces, and distinct characteristics. Next, a hardware-software partitioning is derived by analyzing the characteristics of each task, such as the amount of inherent parallelism, the throughput requirements, the complexity of control processing, and the reuse potential over different applications and different systems. Finally, a feasible implementation is proposed that includes, amongst others, a very-long-instruction-word (VLIW) media processor, one or more RISC processors, and some dedicated processors. The mapping study of the MPEG-4 decoder proves the flexibility and extensibility of the media-processing platform. This platform enables an effective HW/SW co-design yielding a high performance density.

  15. Triage and the Lost Art of Decoding Vital Signs: Restoring Physiologically Based Triage Skills in Complex Humanitarian Emergencies.

    PubMed

    Burkle, Frederick M

    2018-02-01

    Triage management remains a major challenge, especially in resource-poor settings such as war, complex humanitarian emergencies, and public health emergencies in developing countries. In triage it is often the disruption of physiology, not anatomy, that is critical, supporting triage methodology based on clinician-assessed physiological parameters as well as anatomy and mechanism of injury. In recent times, too many clinicians from developed countries have deployed to humanitarian emergencies without the physical exam skills needed to assess patients without the benefit of remotely fed electronic monitoring, laboratory, and imaging studies. In triage, inclusion of the once-widely accepted and collectively taught "art of decoding vital signs" with attention to their character and meaning may provide clues to a patient's physiological state, improving triage sensitivity. Attention to decoding vital signs is not a triage methodology of its own or a scoring system, but rather a skill set that supports existing triage methodologies. With unique triage management challenges being raised by an ever-changing variety of humanitarian crises, these once useful skill sets need to be revisited, understood, taught, and utilized by triage planners, triage officers, and teams as a necessary adjunct to physiologically based triage decision-making. (Disaster Med Public Health Preparedness. 2018;12:76-85).

  16. Planning Ahead: Object-Directed Sequential Actions Decoded from Human Frontoparietal and Occipitotemporal Networks

    PubMed Central

    Gallivan, Jason P.; Johnsrude, Ingrid S.; Randall Flanagan, J.

    2016-01-01

    Object-manipulation tasks (e.g., drinking from a cup) typically involve sequencing together a series of distinct motor acts (e.g., reaching toward, grasping, lifting, and transporting the cup) in order to accomplish some overarching goal (e.g., quenching thirst). Although several studies in humans have investigated the neural mechanisms supporting the planning of visually guided movements directed toward objects (such as reaching or pointing), only a handful have examined how manipulatory sequences of actions—those that occur after an object has been grasped—are planned and represented in the brain. Here, using event-related functional MRI and pattern decoding methods, we investigated the neural basis of real-object manipulation using a delayed-movement task in which participants first prepared and then executed different object-directed action sequences that varied either in their complexity or final spatial goals. Consistent with previous reports of preparatory brain activity in non-human primates, we found that activity patterns in several frontoparietal areas reliably predicted entire action sequences in advance of movement. Notably, we found that similar sequence-related information could also be decoded from pre-movement signals in object- and body-selective occipitotemporal cortex (OTC). These findings suggest that both frontoparietal and occipitotemporal circuits are engaged in transforming object-related information into complex, goal-directed movements. PMID:25576538

  17. Study of the OCDMA Transmission Characteristics in FSO-FTTH at Various Distances, Outdoor

    NASA Astrophysics Data System (ADS)

    Aldouri, Muthana Y.; Aljunid, S. A.; Fadhil, Hilal A.

    2013-06-01

    It is important to apply field-programmable gate array (FPGA) and optical switch technology as the encoder and decoder in a Spectral Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA) Free Space Optics Fiber-to-the-Home (FSO-FTTH) transmitter and receiver system design. The encoder and decoder module uses an FPGA as a code generator and an optical switch to encode and decode the optical source. This module was tested using the Modified Double Weight (MDW) code, which was selected as an excellent candidate because it has shown superior performance whereby the total noise is reduced. It is also easy to construct and can reduce the number of filters required at the receiver through a newly proposed detection scheme known as the AND subtraction technique. The MDW code is presented here to support Fiber-to-the-Home (FTTH) access networks in Point-to-Multi-Point (P2MP) applications. The conversion used a Mach-Zehnder interferometer (MZI) wavelength converter. The performance is characterized through the BER and bit rate (BR), as well as the received power at a variety of bit rates.

  18. Prioritized LT Codes

    NASA Technical Reports Server (NTRS)

    Woo, Simon S.; Cheng, Michael K.

    2011-01-01

    The original Luby Transform (LT) coding scheme is extended to account for data transmissions where some information symbols in a message block are more important than others. Prioritized LT codes provide unequal error protection (UEP) of data on an erasure channel by modifying the original LT encoder. The prioritized algorithm improves high-priority data protection without penalizing low-priority data recovery. Moreover, low-latency decoding is also obtained for high-priority data due to fast encoding. Prioritized LT codes require only a slight change in the original encoding algorithm, and no changes at all at the decoder. Hence, with a small complexity increase in the LT encoder, improved UEP and low decoding latency for high-priority data can be achieved. LT encoding partitions a data stream into fixed-size message blocks, each with a constant number of information symbols. To generate a code symbol from the information symbols in a message, the Robust-Soliton probability distribution is first applied in order to determine the number of information symbols to be used to compute the code symbol. Then, the specific information symbols are chosen uniformly at random from the message block. Finally, the selected information symbols are XORed to form the code symbol. The Prioritized LT code construction includes an additional restriction that code symbols formed by a relatively small number of XORed information symbols select some of these information symbols from the pool of high-priority data. Once the high-priority data are fully covered, encoding continues with the conventional LT approach, where code symbols are generated by selecting information symbols from the entire message block across all priorities. Therefore, if code symbols derived from high-priority data experience an unusually high number of erasures, Prioritized LT codes can still reliably recover both high- and low-priority data. This hybrid approach decides not only "how to encode" but also "what to encode" to achieve UEP. Another advantage of the priority encoding process is that the majority of high-priority data can be decoded sooner, since only a small number of code symbols are required to reconstruct high-priority data. This approach increases the likelihood that high-priority data are decoded before low-priority data. The Prioritized LT code scheme achieves an improvement in high-priority data decoding performance, as well as overall information recovery, without penalizing the decoding of low-priority data, assuming high-priority data make up no more than half of a message block. The cost is the additional complexity required in the encoder. If extra computational resources are available at the transmitter, image, voice, and video transmission quality in terrestrial and space communications can benefit from accurate use of redundancy in protecting data with varying priorities.
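
    A compact sketch of the encoding rule described above. The Robust-Soliton sampler and the priority restriction (low-degree code symbols must touch the high-priority pool) follow the description in the record; the degree cutoff and distribution parameters are illustrative choices, not the scheme's published values.

      import math, random

      def robust_soliton(k, c=0.1, delta=0.5):
          # Ideal-Soliton term plus the standard spike/tail correction.
          s = c * math.log(k / delta) * math.sqrt(k)
          p = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
          for d in range(1, int(k / s)):
              p[d - 1] += s / (d * k)
          p[int(k / s) - 1] += s * math.log(s / delta) / k
          z = sum(p)
          return [x / z for x in p]

      def lt_encode_symbol(message, dist, high_priority, degree_cutoff=3):
          k = len(message)
          d = random.choices(range(1, k + 1), weights=dist)[0]
          idx = random.sample(range(k), d)
          # Priority restriction: small-degree code symbols must draw at
          # least one information symbol from the high-priority pool.
          if d <= degree_cutoff and not set(idx) & high_priority:
              idx[0] = random.choice(sorted(high_priority))
          sym = 0
          for i in idx:
              sym ^= message[i]                      # XOR the chosen symbols
          return idx, sym

      msg = [random.randrange(256) for _ in range(64)]
      dist = robust_soliton(len(msg))
      print(lt_encode_symbol(msg, dist, high_priority=set(range(16))))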

  19. Efficient coding and detection of ultra-long IDs for visible light positioning systems.

    PubMed

    Zhang, Hualong; Yang, Chuanchuan

    2018-05-14

    Visible light positioning (VLP) is a promising technique to complement Global Navigation Satellite Systems (GNSS) such as the Global Positioning System (GPS) and the BeiDou Navigation Satellite System (BDS), with the advantages of low cost and high accuracy. The case becomes even more compelling for indoor environments, where satellite signals are weak or even unavailable. For large-scale application of VLP, there would be a considerable number of light-emitting diode (LED) IDs, which creates a demand for long LED ID detection. In particular, to provision indoor localization globally, a convenient way is to program a unique ID into each LED during manufacture. This poses a big challenge for image sensors, such as the CMOS cameras in everybody's hands, since a long ID spans multiple frames. In this paper, we investigate the detection of ultra-long IDs using rolling-shutter cameras. By analyzing the pattern of data loss in each frame, we propose a novel coding technique to improve the efficiency of LED ID detection. We study the performance of Reed-Solomon (RS) codes in this system and design a new coding method which considers the trade-off between performance and decoding complexity. The coding technique decreases the number of frames needed in data processing, significantly reduces the detection time, and improves the accuracy of detection. Numerical and experimental results show that the detected LED ID can be much longer with the coding technique. Moreover, our proposed coding method is shown to achieve a performance close to that of the RS code while its decoding complexity is much lower.
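
    As an illustration of why a block code helps here, the sketch below protects a 64-bit LED ID with Reed-Solomon parity and recovers it despite whole-frame losses treated as erasures. It assumes the third-party reedsolo package (any RS library with erasure decoding would do) and is not the paper's custom code construction.

      from reedsolo import RSCodec

      rsc = RSCodec(4)                        # 4 parity bytes: up to 4 erasures
      led_id = b'\x12\x34\x56\x78\x9a\xbc\xde\xf0'
      codeword = bytearray(rsc.encode(led_id))

      lost = [2, 5, 9, 11]                    # byte positions lost with frames
      for i in lost:
          codeword[i] = 0                     # received as unknown bytes

      # Recent reedsolo versions return (message, message+ecc, errata positions).
      decoded = rsc.decode(bytes(codeword), erase_pos=lost)[0]
      assert bytes(decoded) == led_id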

  20. Formal Verification of Complex Systems based on SysML Functional Requirements

    DTIC Science & Technology

    2014-12-23

    Mehrpouyan, Hoda; Tumer, Irem Y.; Hoyle, Chris; Giannakopoulou, Dimitra ... requirements for design of complex engineered systems. The proposed approach combines a SysML modeling approach to document and structure safety requirements ... methods and tools to support the integration of safety into the design solution ... SysML for Complex Engineered Systems: traditional methods and tools

  1. Bandwidth efficient CCSDS coding standard proposals

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Perez, Lance C.; Wang, Fu-Quan

    1992-01-01

    The basic concatenated coding system for the space telemetry channel consists of a Reed-Solomon (RS) outer code, a symbol interleaver/deinterleaver, and a bandwidth-efficient trellis inner code. A block diagram of this configuration is shown. The system may operate with or without the outer code and interleaver. In this recommendation, the outer code remains the (255,223) RS code over GF(2^8) with an error-correcting capability of t = 16 eight-bit symbols. This code's excellent performance and the existence of fast, cost-effective decoders justify its continued use. The purpose of the interleaver/deinterleaver is to distribute burst errors out of the inner decoder over multiple codewords of the outer code. This utilizes the error-correcting capability of the outer code more efficiently and reduces the probability of an RS decoder failure. Since the space telemetry channel is not considered bursty, the required interleaving depth is primarily a function of the inner decoding method. A diagram of an interleaver with depth 4 that is compatible with the (255,223) RS code is shown. Specific interleaver requirements are discussed after the inner code recommendations.
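
    The interleaver itself is a small amount of code. A depth-4 block interleaver writes four RS codewords row-wise and reads them out column-wise, so a burst of b consecutive channel symbols lands at most ceil(b/4) deep in any one codeword. This is a generic sketch; the symbol framing details of the CCSDS recommendation are omitted.

      def interleave(symbols, depth=4, n=255):
          # Rows are whole (255,223) RS codewords; read out column by column.
          assert len(symbols) == depth * n
          rows = [symbols[r * n:(r + 1) * n] for r in range(depth)]
          return [rows[r][c] for c in range(n) for r in range(depth)]

      def deinterleave(symbols, depth=4, n=255):
          rows = [[None] * n for _ in range(depth)]
          it = iter(symbols)
          for c in range(n):
              for r in range(depth):
                  rows[r][c] = next(it)
          return [s for row in rows for s in row]

      data = list(range(4 * 255))
      assert deinterleave(interleave(data)) == data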

  2. Robust inter-subject audiovisual decoding in functional magnetic resonance imaging using high-dimensional regression.

    PubMed

    Raz, Gal; Svanera, Michele; Singer, Neomi; Gilam, Gadi; Cohen, Maya Bleich; Lin, Tamar; Admon, Roee; Gonen, Tal; Thaler, Avner; Granot, Roni Y; Goebel, Rainer; Benini, Sergio; Valente, Giancarlo

    2017-12-01

    Major methodological advancements have been recently made in the field of neural decoding, which is concerned with the reconstruction of mental content from neuroimaging measures. However, in the absence of a large-scale examination of the validity of the decoding models across subjects and content, the extent to which these models can be generalized is not clear. This study addresses the challenge of producing generalizable decoding models, which allow the reconstruction of perceived audiovisual features from human functional magnetic resonance imaging (fMRI) data without prior training of the algorithm on the decoded content. We applied an adapted version of kernel ridge regression combined with temporal optimization on data acquired during film viewing (234 runs) to generate standardized brain models for sound loudness, speech presence, perceived motion, face-to-frame ratio, lightness, and color brightness. The prediction accuracies were tested on data collected from different subjects watching other movies, mainly in another scanner. Substantial and significant (Q_FDR < 0.05) correlations between the reconstructed and the original descriptors were found for the first three features (loudness, speech, and motion) in all of the 9 test movies (mean R = 0.62, 0.60, and 0.60, respectively), with high reproducibility of the predictors across subjects. The face-ratio model produced significant correlations in 7 out of 8 movies (mean R = 0.56). The lightness and brightness models did not show robustness (mean R = 0.23 and 0, respectively). Further analysis of additional data (95 runs) indicated that loudness reconstruction veridicality can consistently reveal relevant group differences in musical experience. The findings point to the validity and generalizability of our loudness, speech, motion, and face-ratio models for complex cinematic stimuli (as well as for music in the case of loudness). While future research should further validate these models using controlled stimuli and explore the feasibility of extracting more complex models via this method, the reliability of our results indicates the potential usefulness of the approach and the resulting models in basic scientific and diagnostic contexts. Copyright © 2017 Elsevier Inc. All rights reserved.
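
    Kernel ridge regression is the core of the approach, and scikit-learn provides it directly. In the sketch below, the paper's temporal optimization is reduced to a single fixed hemodynamic lag; the data, lag value, and variable names are illustrative, not the study's.

      import numpy as np
      from sklearn.kernel_ridge import KernelRidge

      rng = np.random.default_rng(0)
      X = rng.standard_normal((200, 500))     # fMRI patterns (volumes x voxels)
      loudness = rng.standard_normal(200)     # stimulus feature per volume

      lag = 3                                 # volumes; an illustrative delay
      model = KernelRidge(kernel='linear', alpha=1.0)
      model.fit(X[lag:], loudness[:-lag])     # brain at t predicts feature at t-lag
      predicted = model.predict(rng.standard_normal((50, 500)))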

  3. More About Vector Adaptive/Predictive Coding Of Speech

    NASA Technical Reports Server (NTRS)

    Jedrey, Thomas C.; Gersho, Allen

    1992-01-01

    Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.

  4. [Efficacy and safety of heptral, vitamin B6 and folic acid during toxic hepatitis induced by CCL4].

    PubMed

    Antelava, N A; Gogoluari, M I; Gogoluari, L I; Pirtskhalaĭshvili, N N; Okudzhava, M V

    2007-09-01

    The aim of this work was to evaluate the efficacy and safety of the complex of Heptral, vitamin B6 and folic acid in experimental hepatitis therapy, compared with monotherapy. Experiments were carried out on pubertal rats. The experimental hepatitis model was induced by tetrachloromethane (CCl4): intoxication was produced by subcutaneous injection of CCl4, 1 ml/kg, dissolved in 1 ml of olive oil. Cytochrome P450, cytochrome b5, reduced glutathione, the activity of glutathione transferase and the content of ATP in hepatocytes were measured by spectrophotometric techniques, and the content of homocysteine by chromatographic techniques. Under CCl4 intoxication, disturbance of the liver's detoxication function, an energy deficit and a surplus of homocysteine were observed. Treatment of the toxic hepatitis with Heptral increased the levels of cytochrome P450, cytochrome b5 and reduced glutathione, increased the activity of glutathione transferase, and reduced the content of homocysteine. Complex therapy with Heptral, vitamin B6 and folic acid revealed a more pronounced hepatoprotective effect and better safety than monotherapy with Heptral. The complex therapy not only improves the parameters of biotransformation (metabolic and conjugation phases), but also normalizes the levels of ATP and homocysteine. Vitamins B6 and folic acid increase the efficacy and safety of Heptral. This complex is recommended for the treatment of hepatitis.

  5. Synthesis of biocompatible nanoparticle drug complexes for inhibition of mycobacteria

    NASA Astrophysics Data System (ADS)

    Bhave, Tejashree; Ghoderao, Prachi; Sanghavi, Sonali; Babrekar, Harshada; Bhoraskar, S. V.; Ganesan, V.; Kulkarni, Anjali

    2013-12-01

    Tuberculosis (TB) is one of the most critical infectious diseases affecting the world today. Current TB treatment involves six months long daily administration of four oral doses of antibiotics. Due to severe side effects and the long treatment, a patient's adherence is low and this results in relapse of symptoms causing an alarming increase in the prevalence of multi-drug resistant (MDR) TB. Hence, it is imperative to develop a new drug delivery technology wherein these effects can be reduced. Rifampicin (RIF) is one of the widely used anti-tubercular drugs (ATD). The present study discusses the development of biocompatible nanoparticle-RIF complexes with superior inhibitory activity against both Mycobacterium smegmatis (M. smegmatis) and Mycobacterium tuberculosis (M. tuberculosis). Iron oxide nanoparticles (NPs) synthesized by gas phase condensation and NP-RIF complexes were tested against M. smegmatis SN2 strain as well as M. tuberculosis H37Rv laboratory strain. These complexes showed significantly better inhibition of M. smegmatis SN2 strain at a much lower effective concentration (27.5 μg ml-1) as compared to neat RIF (125 μg ml-1). Similarly M. tuberculosis H37Rv laboratory strain was susceptible to both nanoparticle-RIF complex and neat RIF at a minimum inhibitory concentration of 0.22 and 1 μg ml-1, respectively. Further studies are underway to determine the efficacy of NPs-RIF complexes in clinical isolates of M. tuberculosis as well as MDR isolates.

  6. An efficient decoding for low density parity check codes

    NASA Astrophysics Data System (ADS)

    Zhao, Ling; Zhang, Xiaolin; Zhu, Manjie

    2009-12-01

    Low-density parity-check (LDPC) codes are a class of forward-error-correction codes. They are among the best-known codes capable of achieving low bit error rates (BER) approaching Shannon's capacity limit. Recently, LDPC codes have been adopted by the European Digital Video Broadcasting (DVB-S2) standard, and have also been proposed for the emerging IEEE 802.16 fixed and mobile broadband wireless-access standard. The Consultative Committee for Space Data Systems (CCSDS) has also recommended using LDPC codes in deep-space and near-Earth communications. It is clear that LDPC codes will be widely used in wired and wireless communication, magnetic recording, optical networking, DVB, and other fields in the near future. Efficient hardware implementation of LDPC codes is therefore of great interest, since they are being considered for a wide range of applications. This paper presents an efficient partially parallel decoder architecture suited for quasi-cyclic (QC) LDPC codes, using the belief propagation algorithm for decoding. Algorithmic transformation and architectural-level optimization are incorporated to reduce the critical path. First, the parity-check matrix of the LDPC code is analyzed to find the relationship between the row weight and the column weight. The sharing level of the check node updating units (CNU) and the variable node updating units (VNU) is then determined according to this relationship. After that, the CNU and the VNU are rearranged and divided into several smaller parts; with the help of some assistant logic circuitry, these smaller parts can be grouped into CNUs during check node updating and into VNUs during variable node updating. These smaller parts are called node update kernel units (NKU), and the assistant logic circuits are called node update auxiliary units (NAU). With the NAUs' help, the two steps of the iterative operation are completed by the NKUs, which brings a great reduction in hardware resources. Meanwhile, efficient techniques have been developed to reduce the computation delay of the node processing units and to minimize the hardware overhead of parallel processing. This method may be applied not only to regular LDPC codes, but also to irregular ones. Based on the proposed architecture, a (7493, 6096) irregular QC-LDPC code decoder is described in the Verilog hardware description language and implemented on an Altera Stratix II EP2S130 field-programmable gate array (FPGA). The implementation results show that over 20% of the logic core size can be saved compared with conventional partially parallel decoder architectures, without any performance degradation. With a 100 MHz decoding clock, the proposed decoder achieves a maximum (source data) decoding throughput of 133 Mb/s at 18 iterations.
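
    The check-node update is the kernel that such architectures replicate and share. Below is a min-sum version, the low-complexity approximation of the belief-propagation update commonly used in hardware; it needs only the two smallest input magnitudes, which is what makes CNU sharing cheap. (The record describes a belief-propagation decoder; min-sum is a stand-in here.)

      import numpy as np

      def check_node_update(msgs):
          # For each edge, the output is the sign of the product of the
          # *other* incoming messages times the minimum of their magnitudes;
          # min-sum needs only the two smallest magnitudes overall.
          msgs = np.asarray(msgs, dtype=float)
          signs = np.sign(msgs)
          order = np.argsort(np.abs(msgs))
          min1, min2 = np.abs(msgs)[order[0]], np.abs(msgs)[order[1]]
          mags = np.where(np.arange(len(msgs)) == order[0], min2, min1)
          return np.prod(signs) * signs * mags

      print(check_node_update([1.5, -0.4, 2.0]))   # -> [-0.4  1.5 -0.4]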

  7. A learning scheme for reach to grasp movements: on EMG-based interfaces using task specific motion decoding models.

    PubMed

    Liarokapis, Minas V; Artemiadis, Panagiotis K; Kyriakopoulos, Kostas J; Manolakos, Elias S

    2013-09-01

    A learning scheme based on random forests is used to discriminate between different reach-to-grasp movements in 3-D space, based on the myoelectric activity of human muscles of the upper arm and the forearm. Task specificity for motion decoding is introduced at two different levels: the subspace to move toward and the object to be grasped. The discrimination between the different reach-to-grasp strategies is accomplished with machine learning techniques for classification. The classification decision is then used to trigger an EMG-based, task-specific motion decoding model. Task-specific models manage to outperform "general" models, providing better estimation accuracy. Thus, the proposed scheme takes advantage of a framework incorporating both a classifier and a regressor that cooperate advantageously in order to split the task space. The proposed learning scheme can easily be used in a series of EMG-based interfaces that must operate in real time, providing data-driven capabilities for the multiclass problems that occur in complex everyday-life environments.
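
    A compact sketch of the classify-then-decode split described above: a random forest picks the reach-to-grasp task, and the prediction triggers a per-task regression model. Ridge regression stands in for the paper's motion decoding model, and all data are synthetic.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import Ridge

      rng = np.random.default_rng(0)
      X = rng.standard_normal((300, 16))      # EMG features (hypothetical)
      task = rng.integers(0, 3, 300)          # subspace/object class labels
      y = rng.standard_normal((300, 5))       # motion to decode (e.g., joints)

      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, task)
      decoders = {t: Ridge().fit(X[task == t], y[task == t])
                  for t in np.unique(task)}

      x_new = X[:1]
      t_hat = clf.predict(x_new)[0]           # 1) classify the task
      motion = decoders[t_hat].predict(x_new) # 2) task-specific motion decoding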

  8. Resting-state brain activity in the motor cortex reflects task-induced activity: A multi-voxel pattern analysis.

    PubMed

    Kusano, Toshiki; Kurashige, Hiroki; Nambu, Isao; Moriguchi, Yoshiya; Hanakawa, Takashi; Wada, Yasuhiro; Osu, Rieko

    2015-08-01

    It has been suggested that resting-state brain activity reflects task-induced brain activity patterns. In this study, we examined whether neural representations of specific movements can be observed in the resting-state brain activity patterns of motor areas. First, we defined two regions of interest (ROIs) to examine brain activity associated with two different behavioral tasks. Using multi-voxel pattern analysis with regularized logistic regression, we designed a decoder to detect voxel-level neural representations corresponding to the tasks in each ROI. Next, we applied the decoder to resting-state brain activity. We found that the decoder discriminated resting-state neural activity with accuracy comparable to that associated with task-induced neural activity. The distribution of learned weighted parameters for each ROI was similar for resting-state and task-induced activities. Large weighted parameters were mainly located on conjunctive areas. Moreover, the accuracy of detection was higher than that for a decoder whose weights were randomly shuffled, indicating that the resting-state brain activity includes multi-voxel patterns similar to the neural representation for the tasks. Therefore, these results suggest that the neural representation of resting-state brain activity is more finely organized and more complex than conventionally considered.

  9. Unified Theory for Decoding the Signals from X-Ray Fluorescence and X-Ray Diffraction of Mixtures.

    PubMed

    Chung, Frank H

    2017-05-01

    For research and development or for solving technical problems, we often need to know the chemical composition of an unknown mixture, which is coded and stored in the signals of its X-ray fluorescence (XRF) and X-ray diffraction (XRD). X-ray fluorescence gives chemical elements, whereas XRD gives chemical compounds. The major problem in XRF and XRD analyses is the complex matrix effect. The conventional technique to deal with the matrix effect is to construct empirical calibration lines with standards for each element or compound sought, which is tedious and time-consuming. A unified theory of quantitative XRF analysis is presented here. The idea is to cancel the matrix effect mathematically. It turns out that the decoding equation for quantitative XRF analysis is identical to that for quantitative XRD analysis although the physics of XRD and XRF are fundamentally different. The XRD work has been published and practiced worldwide. The unified theory derives a new intensity-concentration equation of XRF, which is free from the matrix effect and valid for a wide range of concentrations. The linear decoding equation establishes a constant slope for each element sought, hence eliminating the work on calibration lines. The simple linear decoding equation has been verified by 18 experiments.
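
    In schematic form (illustrative notation, not the paper's), the cancellation works because the unknown matrix absorption enters every fluorescent intensity in the same way:

      \[
        I_i \;=\; \frac{k_i\, c_i}{\mu_m}
        \qquad\Longrightarrow\qquad
        \frac{I_i}{I_r} \;=\; \frac{k_i}{k_r}\cdot\frac{c_i}{c_r}
        \qquad\Longrightarrow\qquad
        c_i \;=\; \frac{k_r}{k_i}\,\frac{I_i}{I_r}\, c_r ,
      \]

    where \(\mu_m\) is the specimen's mass absorption coefficient, \(k_i\) a pure-element sensitivity, and \(r\) a reference component measured in the same specimen. Taking the ratio removes \(\mu_m\), so a single constant slope \(k_r/k_i\) per element replaces a family of empirical calibration lines.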

  10. Low RNA translation activity limits the efficacy of hydrodynamic gene transfer to pig liver in vivo.

    PubMed

    Sendra, Luis; Carreño, Omar; Miguel, Antonio; Montalvá, Eva; Herrero, María José; Orbis, Francisco; Noguera, Inmaculada; Barettino, Domingo; López-Andújar, Rafael; Aliño, Salvador F

    2014-01-01

    Hydrodynamic gene delivery has proved an efficient strategy for nonviral gene therapy in the murine liver but it has been less efficient in pigs. The reason for such inefficiency remains unclear. The present study used a surgical strategy to seal the whole pig liver in vivo. A solution of enhanced green fluorescent protein (eGFP) DNA was injected under two different venous injection conditions (anterograde and retrograde), employing flow rates of 10 and 20 ml/s in each case, with the aim of identifying the best gene transfer conditions. The gene delivery and information decoding steps were evaluated by measuring the eGFP DNA, mRNA and protein copy number 24 h after transfection. In addition, gold nanoparticles (diameters of 4 and 15 nm) were retrogradely injected (10 ml/s) to observe, by electron microscopy, the ability of the particle to access the hepatocyte. The gene delivery level was higher with anterograde injection, whereas the efficacy of gene expression was better with retrograde injection, suggesting differences in the decoding processes. Thus, retrograde injection mediates gene transcription (mRNA copy/cell) equivalent to that of intermediate expression proteins but the mRNA translation was lower than that of rare proteins. Electron microscopy showed that nanoparticles within the hepatocyte were almost exclusively 4 nm in diameter. The results suggest that the low activity of mRNA translation limits the final efficacy of the gene transfer procedure. On the other hand, the gold nanoparticles study suggests that elongated DNA conformation could offer advantages in that the access of 15-nm particles is very limited. Copyright © 2014 John Wiley & Sons, Ltd.

  11. Adiponectin complexes composition in Japanese-Brazilians regarding their glucose tolerance status.

    PubMed

    Crispim, Felipe; Vendramini, Marcio F; Moisés, Regina S

    2013-04-09

    Adiponectin circulates in different multimer complexes comprised of a low molecular weight trimeric form (LMW), a hexamer of middle molecular weight (MMW) and high molecular weight multimers (HMW). In Japanese-Brazilians, a population with a high prevalence of glucose metabolism disturbances, we examined the associations of total adiponectin and its multimers with diabetes mellitus. Two study groups were examined: 26 patients with diabetes mellitus (DM; 14 women and 12 men, aged 55.3 ± 8.6 years) and 27 age-matched control subjects with normal glucose tolerance (NGT; 12 women and 15 men, aged 54.0 ± 9.2 years). We found no significant differences in total [NGT: 6.90 ug/ml (4.38-13.43); DM: 5.38 ug/ml (3.76-8.56), p = 0.35], MMW [NGT: 2.34 ug/ml (1.38-3.25); DM: 1.80 ug/ml (1.18-2.84), p = 0.48] or LMW adiponectin [NGT: 2.07 ug/ml (1.45-3.48); DM: 2.93 ug/ml (1.78-3.99), p = 0.32] between groups. In contrast, HMW adiponectin levels were significantly lower in patients with DM [NGT: 2.39 ug/ml (1.20-4.75); DM: 1.04 ug/ml (0.42-1.60), p = 0.001]. A logistic regression analysis was done to identify independent associations with diabetes mellitus. The results showed that HOMA-IR and HMW adiponectin in women were independently associated with diabetes mellitus. The current investigation demonstrates that in Japanese-Brazilians HMW adiponectin is selectively reduced in individuals with type 2 diabetes, while no differences were found in the MMW and LMW adiponectin isoforms.

  12. Adiponectin complexes composition in Japanese-Brazilians regarding their glucose tolerance status

    PubMed Central

    2013-01-01

    Background Adiponectin circulates in different multimer complexes comprised of a low molecular weight trimeric form (LMW), a hexamer of middle molecular weight (MMW) and high molecular weight multimers (HMW). In Japanese-Brazilians, a population with a high prevalence of glucose metabolism disturbances, we examined the associations of total adiponectin and its multimers with diabetes mellitus. Methods Two study groups were examined: 26 patients with diabetes mellitus (DM; 14 women and 12 men, aged 55.3 ± 8.6 years) and 27 age-matched control subjects with normal glucose tolerance (NGT; 12 women and 15 men, aged 54.0 ± 9.2 years). Results We found no significant differences in total [NGT: 6.90 ug/ml (4.38-13.43); DM: 5.38 ug/ml (3.76-8.56), p = 0.35], MMW [NGT: 2.34 ug/ml (1.38-3.25); DM: 1.80 ug/ml (1.18-2.84), p = 0.48] or LMW adiponectin [NGT: 2.07 ug/ml (1.45-3.48); DM: 2.93 ug/ml (1.78-3.99), p = 0.32] between groups. In contrast, HMW adiponectin levels were significantly lower in patients with DM [NGT: 2.39 ug/ml (1.20-4.75); DM: 1.04 ug/ml (0.42-1.60), p = 0.001]. A logistic regression analysis was done to identify independent associations with diabetes mellitus. The results showed that HOMA-IR and HMW adiponectin in women were independently associated with diabetes mellitus. Conclusion The current investigation demonstrates that in Japanese-Brazilians HMW adiponectin is selectively reduced in individuals with type 2 diabetes, while no differences were found in the MMW and LMW adiponectin isoforms. PMID:23570346

  13. Population decoding of motor cortical activity using a generalized linear model with hidden states.

    PubMed

    Lawhern, Vernon; Wu, Wei; Hatsopoulos, Nicholas; Paninski, Liam

    2010-06-15

    Generalized linear models (GLMs) have been developed for modeling and decoding population neuronal spiking activity in the motor cortex. These models provide reasonable characterizations between neural activity and motor behavior. However, they lack a description of movement-related terms which are not observed directly in these experiments, such as muscular activation, the subject's level of attention, and other internal or external states. Here we propose to include a multi-dimensional hidden state to address these states in a GLM framework where the spike count at each time is described as a function of the hand state (position, velocity, and acceleration), truncated spike history, and the hidden state. The model can be identified by an Expectation-Maximization algorithm. We tested this new method in two datasets where spikes were simultaneously recorded using a multi-electrode array in the primary motor cortex of two monkeys. It was found that this method significantly improves the model-fitting over the classical GLM, for hidden dimensions varying from 1 to 4. This method also provides more accurate decoding of hand state (reducing the mean square error by up to 29% in some cases), while retaining real-time computational efficiency. These improvements on representation and decoding over the classical GLM model suggest that this new approach could contribute as a useful tool to motor cortical decoding and prosthetic applications. Copyright (c) 2010 Elsevier B.V. All rights reserved.
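
    The observation model is easy to state in code. Below is one neuron's conditional intensity under the hidden-state GLM; parameter names and values are illustrative, and the Expectation-Maximization identification of the hidden state is omitted.

      import numpy as np

      def conditional_intensity(hand, history, hidden, b0, beta, gamma, eta):
          # log lambda_t = b0 + beta.x_t + gamma.(spike history) + eta.q_t,
          # where q_t is the unobserved, multi-dimensional hidden state.
          return np.exp(b0 + beta @ hand + gamma @ history + eta @ hidden)

      rng = np.random.default_rng(0)
      hand = rng.standard_normal(6)                # position, velocity, acceleration
      history = rng.poisson(1.0, 5).astype(float)  # truncated spike history
      hidden = rng.standard_normal(2)              # hidden state (dim 1..4 in the paper)
      rate = conditional_intensity(hand, history, hidden,
                                   b0=-1.0,
                                   beta=rng.standard_normal(6) * 0.1,
                                   gamma=rng.standard_normal(5) * 0.1,
                                   eta=rng.standard_normal(2) * 0.1)
      spikes = rng.poisson(rate * 0.01)            # counts in a 10 ms bin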

  14. Enhanced decoding for the Galileo low-gain antenna mission: Viterbi redecoding with four decoding stages

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Belongie, M.

    1995-01-01

    The Galileo low-gain antenna mission will be supported by a coding system that uses a (14,1/4) inner convolutional code concatenated with Reed-Solomon codes of four different redundancies. Decoding for this code is designed to proceed in four distinct stages of Viterbi decoding followed by Reed-Solomon decoding. In each successive stage, the Reed-Solomon decoder only tries to decode the highest redundancy codewords not yet decoded in previous stages, and the Viterbi decoder redecodes its data utilizing the known symbols from previously decoded Reed-Solomon codewords. A previous article analyzed a two-stage decoding option that was not selected by Galileo. The present article analyzes the four-stage decoding scheme and derives the near-optimum set of redundancies selected for use by Galileo. The performance improvements relative to one- and two-stage decoding systems are evaluated.

  15. An efficient HW and SW design of H.264 video compression, storage and playback on FPGA devices for handheld thermal imaging systems

    NASA Astrophysics Data System (ADS)

    Gunay, Omer; Ozsarac, Ismail; Kamisli, Fatih

    2017-05-01

    Video recording is an essential feature of new-generation military imaging systems. Playback of the stored video on the same device is also desirable, as it provides several operational benefits to end users. Two very important constraints for many military imaging systems, especially for hand-held devices and thermal weapon sights, are power consumption and size. To meet these constraints, it is essential to perform most of the processing applied to the video signal, such as preprocessing, compression, storage, decoding, playback and other system functions, on a single programmable chip, such as an FPGA, DSP, GPU or ASIC. In this work, H.264/AVC (Advanced Video Coding) compatible video compression, storage, decoding and playback blocks are efficiently designed and implemented on FPGA platforms using the FPGA fabric and the Altera NIOS II soft processor. Many subblocks used in video encoding are reused during video decoding in order to save FPGA resources and power. Computationally complex blocks are designed in the FPGA fabric, while blocks such as SD card write/read, H.264 syntax decoding and CAVLC decoding are handled by the NIOS processor to benefit from software flexibility. In addition, to keep power consumption low, the system was designed to require limited external memory access. The design was tested with a 640x480, 25 fps thermal camera on a CYCLONE V FPGA, which is ALTERA's lowest-power FPGA family, and consumes less than 40% of the CYCLONE V 5CEFA7 FPGA resources on average.

  16. Radial Bias Is Not Necessary For Orientation Decoding

    PubMed Central

    Pratte, Michael S.; Sy, Jocelyn L.; Swisher, Jascha D.; Tong, Frank

    2015-01-01

    Multivariate pattern analysis can be used to decode the orientation of a viewed grating from fMRI signals in early visual areas. Although some studies have reported identifying multiple sources of the orientation information that make decoding possible, a recent study argued that orientation decoding is only possible because of a single source: a coarse-scale retinotopically organized preference for radial orientations. Here we aim to resolve these discrepant findings. We show that there were subtle, but critical, experimental design choices that led to the erroneous conclusion that a radial bias is the only source of orientation information in fMRI signals. In particular, we show that the reliance on a fast temporal-encoding paradigm for spatial mapping can be problematic, as effects of space and time become conflated and lead to distorted estimates of a voxel’s orientation or retinotopic preference. When we implement minor changes to the temporal paradigm or to the visual stimulus itself, by slowing the periodic rotation of the stimulus or by smoothing its contrast-energy profile, we find significant evidence of orientation information that does not originate from radial bias. In an additional block-paradigm experiment where space and time were not conflated, we apply a formal model comparison approach and find that many voxels exhibit more complex tuning properties than predicted by radial bias alone or in combination with other known coarse-scale biases. Our findings support the conclusion that radial bias is not necessary for orientation decoding. In addition, our study highlights potential limitations of using temporal phase-encoded fMRI designs for characterizing voxel tuning properties. PMID:26666900

  17. Less-Complex Method of Classifying MPSK

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2006-01-01

    An alternative to an optimal method of automated classification of signals modulated with M-ary phase-shift keying (M-ary PSK or MPSK) has been derived. The alternative method is approximate, but it offers nearly optimal performance and entails much less complexity, which translates to much less computation time. Modulation classification is becoming increasingly important in radio-communication systems that utilize multiple data modulation schemes and include software-defined or software-controlled receivers. Such a receiver may "know" little a priori about an incoming signal but may be required to correctly classify its data rate, modulation type, and forward error-correction code before properly configuring itself to acquire and track the symbol timing, carrier frequency, and phase, and ultimately produce decoded bits. Modulation classification has long been an important component of military interception of initially unknown radio signals transmitted by adversaries. Modulation classification may also be useful for enabling cellular telephones to automatically recognize different signal types and configure themselves accordingly. The concept of modulation classification as outlined in the preceding paragraph is quite general. However, at the present early stage of development, and for the purpose of describing the present alternative method, the term "modulation classification" or simply "classification" signifies, more specifically, a distinction between M-ary and M'-ary PSK, where M and M' represent two different integer powers of 2. Both the prior optimal method and the present alternative method require the acquisition of magnitude and phase values of a number (N) of consecutive baseband samples of the incoming signal plus noise. The prior optimal method is based on a maximum-likelihood (ML) classification rule that requires a calculation of likelihood functions for the M and M' hypotheses: each likelihood function is an integral, over a full cycle of carrier phase, of a complicated sum of functions of the baseband sample values, the carrier phase, the carrier-signal and noise magnitudes, and M or M'. Then the likelihood ratio, defined as the ratio between the likelihood functions, is computed, leading to the choice of whichever hypothesis, M or M', is more likely. In the alternative method, the integral in each likelihood function is approximated by a sum over values of the integrand sampled at a number, L, of equally spaced values of carrier phase. Used in this way, L is a parameter that can be adjusted to trade computational complexity against the probability of misclassification. In the limit as L approaches infinity, one obtains the integral form of the likelihood function and thus recovers the ML classification. The present approximate method has been tested in comparison with the ML method by means of computational simulations. The results of the simulations have shown that the performance (as quantified by probability of misclassification) of the approximate method is nearly indistinguishable from that of the ML method.
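
    The quadrature-style approximation is easy to prototype: replace the carrier-phase integral in each likelihood with an average over L equally spaced phases and compare hypotheses by the resulting log-likelihood ratio. A minimal sketch for BPSK-vs-QPSK at known SNR (all parameters illustrative; log-sum-exp is used for numerical stability):

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(1)

def log_lik(r, M, sigma, L=64):
    """Approximate log p(r | M-PSK) by averaging over L carrier phases."""
    symbols = np.exp(1j * 2 * np.pi * np.arange(M) / M)   # unit-energy MPSK
    ll = np.empty(L)
    for i, th in enumerate(2 * np.pi * np.arange(L) / L):
        d2 = np.abs(r[:, None] - symbols[None, :] * np.exp(1j * th)) ** 2
        ll[i] = logsumexp(-d2 / (2 * sigma**2), axis=1).sum() - len(r) * np.log(M)
    return logsumexp(ll) - np.log(L)                      # phase average

# N QPSK samples with a random carrier phase; per-component noise std = sigma.
N, sigma = 200, 0.35
phase = rng.uniform(0, 2 * np.pi)
tx = np.exp(1j * (np.pi / 2 * rng.integers(0, 4, N) + phase))
r = tx + sigma * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

llr = log_lik(r, 4, sigma) - log_lik(r, 2, sigma)
print("decide:", "QPSK" if llr > 0 else "BPSK", f"(LLR = {llr:.1f})")
```

    Increasing L drives the sum toward the integral, and the decision toward the exact ML rule, at proportionally higher cost.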

  18. Error-Rate Bounds for Coded PPM on a Poisson Channel

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.

  19. Multiclass fMRI data decoding and visualization using supervised self-organizing maps.

    PubMed

    Hausfeld, Lars; Valente, Giancarlo; Formisano, Elia

    2014-08-01

    When multivariate pattern decoding is applied to fMRI studies entailing more than two experimental conditions, the most common approach is to transform the multiclass classification problem into a series of binary problems. Furthermore, for decoding analyses, classification accuracy is often the only outcome reported, although the topology of activation patterns in the high-dimensional feature space may provide additional insights into underlying brain representations. Here we propose to decode and visualize voxel patterns of fMRI datasets consisting of multiple conditions with a supervised variant of self-organizing maps (SSOMs). Using simulations and real fMRI data, we evaluated the performance of our SSOM-based approach. Specifically, the analysis of simulated fMRI data with varying signal-to-noise and contrast-to-noise ratios suggested that SSOMs perform better than a k-nearest-neighbor classifier for medium and large numbers of features (i.e., 250 to 1000 or more voxels) and similarly to support vector machines (SVMs) for small and medium numbers of features (i.e., 100 to 600 voxels). However, for a larger number of features (>800 voxels), SSOMs performed worse than SVMs. When applied to a challenging 3-class fMRI classification problem with datasets collected to examine the neural representation of three human voices at the individual-speaker level, the SSOM-based algorithm was able to decode speaker identity from auditory cortical activation patterns. Classification performances were similar between SSOMs and other decoding algorithms; however, the ability to visualize decoding models and the underlying data topology of SSOMs promotes a more comprehensive understanding of classification outcomes. We further illustrated this visualization ability of SSOMs with a re-analysis of a dataset examining the representation of visual categories in the ventral visual cortex (Haxby et al., 2001). This analysis showed that SSOMs could retrieve and visualize the topography and neighborhood relations of the brain representation of eight visual categories. We conclude that SSOMs are particularly suited for decoding datasets consisting of more than two classes and are optimally combined with approaches that reduce the number of voxels used for classification (e.g., region-of-interest or searchlight approaches).
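
    The paper's exact SSOM is not reproduced here, but a common way to supervise a SOM is to append scaled one-hot class labels to the feature vectors during training and ignore them at test time. The numpy-only toy below follows that recipe on synthetic "conditions"; grid size, learning schedule, and data are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def train_ssom(X, y, n_classes, grid=(8, 8), alpha=0.5, epochs=20):
    """Toy supervised SOM: scaled one-hot labels are appended to the features
    during training so the map organizes by class as well as by topology."""
    n, d = X.shape
    Z = np.hstack([X, alpha * np.eye(n_classes)[y]])
    W = 0.1 * rng.standard_normal((grid[0] * grid[1], Z.shape[1]))
    gy, gx = np.divmod(np.arange(len(W)), grid[1])
    coords = np.stack([gy, gx], axis=1).astype(float)    # unit positions on map
    for t in range(epochs):
        lr = 0.5 * (1 - t / epochs)                      # decaying learning rate
        radius = 0.5 + max(grid) / 2 * (1 - t / epochs)  # shrinking neighborhood
        for z in Z[rng.permutation(n)]:
            bmu = np.argmin(((W - z) ** 2).sum(axis=1))  # best-matching unit
            h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1)
                       / (2 * radius ** 2))
            W += lr * h[:, None] * (z - W)
    return W[:, :d], W[:, d:].argmax(axis=1)             # feature part, unit class

def predict(Wfeat, unit_class, X):
    d2 = ((X[:, None, :] - Wfeat[None, :, :]) ** 2).sum(axis=2)
    return unit_class[d2.argmin(axis=1)]

# Three synthetic "conditions" in a 50-dimensional feature space.
X = np.vstack([rng.normal(m, 1.0, (60, 50)) for m in (-1.0, 0.0, 1.0)])
y = np.repeat([0, 1, 2], 60)
Wfeat, unit_class = train_ssom(X, y, 3)
print("training accuracy:", (predict(Wfeat, unit_class, X) == y).mean())
```

    Inspecting which map units respond to which class is what provides the visualization benefit the authors emphasize.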

  20. A piecewise probabilistic regression model to decode hand movement trajectories from epidural and subdural ECoG signals

    NASA Astrophysics Data System (ADS)

    Farrokhi, Behraz; Erfanian, Abbas

    2018-06-01

    Objective. The primary concern of this study is to develop a probabilistic regression method that would improve the decoding of hand movement trajectories from epidural as well as subdural ECoG signals. Approach. The model is characterized by the conditional expectation of the hand position given the ECoG signals. The conditional expectation of the hand position is modeled by a linear combination of conditional probability density functions defined for each segment of the movement. Moreover, a spatial linear filter is proposed for reducing the dimension of the feature space. The spatial linear filter is applied to each frequency band of the ECoG signals and extracts the features with the highest decoding performance. Main results. For evaluating the proposed method, a dataset including 28 ECoG recordings from four adult Japanese macaques is used. The results show that the proposed decoding method outperforms state-of-the-art methods on this dataset. The relative kinematic information of each frequency band is also investigated using mutual information and decoding performance. The best decoding performance was obtained for the high gamma band (50-200 Hz) and the high-frequency ECoG band (200-400 Hz) with subdural recordings; for these bands, performance decreased with epidural recordings. The mutual information shows that, on average, the high gamma band and the high-frequency ECoG band contain significantly more information than the average of the remaining frequency bands (p < 0.001) for both subdural and epidural recordings. The results of high-resolution time-frequency analysis show that ERD/ERS patterns in all frequency bands reveal the dynamics of the ECoG responses during movement; the onset and offset of the movement can be clearly identified from these patterns. Significance. Reliable decoding of kinematic information from brain signals paves the way for robust control of external devices.
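
    The piecewise conditional-expectation model is closely related to a mixture of linear regressions: a gate assigns each sample soft segment responsibilities, and per-segment regressors are mixed accordingly. A sketch under those assumptions (synthetic data; names and sizes illustrative, not the paper's exact estimator):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Synthetic stand-in: two movement "segments" whose feature-to-position
# mappings differ, mimicking a piecewise relationship between ECoG band
# power and hand position.
T, D = 3000, 10
seg = rng.integers(0, 2, T)                         # hidden segment label
X = rng.standard_normal((T, D)) + 2.5 * seg[:, None]
W = rng.normal(0, 1, (2, D))
y = np.einsum('td,td->t', W[seg], X) + 0.1 * rng.standard_normal(T)

# Gate: soft segment responsibilities p(k | features) from a Gaussian mixture.
resp = GaussianMixture(n_components=2, random_state=0).fit(X).predict_proba(X)

# Experts: one ridge regression per segment, weighted by responsibility.
experts = [Ridge(alpha=1.0).fit(X, y, sample_weight=resp[:, k]) for k in range(2)]

# Decoder output: the responsibility-weighted mixture of expert predictions,
# a piecewise model of the conditional expectation E[y | x].
y_hat = sum(resp[:, k] * experts[k].predict(X) for k in range(2))
y_single = Ridge(alpha=1.0).fit(X, y).predict(X)
print(f"piecewise MSE: {np.mean((y - y_hat)**2):.3f}   "
      f"single-model MSE: {np.mean((y - y_single)**2):.3f}")
```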

  1. Quantum cryptographic system with reduced data loss

    DOEpatents

    Lo, H.K.; Chau, H.F.

    1998-03-24

    A secure method for distributing a random cryptographic key with reduced data loss is disclosed. Traditional quantum key distribution systems employ similar probabilities for the different communication modes and thus reject at least half of the transmitted data. The invention substantially reduces the amount of discarded data (data encoded and decoded in different communication modes, e.g., using different operators) in quantum key distribution without compromising security, by using significantly different probabilities for the different communication modes. Data are separated into sets according to the actual operators used in the encoding and decoding process, and the error rate for each set is determined individually. The invention increases the key distribution rate of the BB84 key distribution scheme proposed by Bennett and Brassard in 1984. Using the invention, the key distribution rate increases with the number of quantum signals transmitted and can asymptotically be doubled. 23 figs.

  2. Quantum cryptographic system with reduced data loss

    DOEpatents

    Lo, Hoi-Kwong; Chau, Hoi Fung

    1998-01-01

    A secure method for distributing a random cryptographic key with reduced data loss. Traditional quantum key distribution systems employ similar probabilities for the different communication modes and thus reject at least half of the transmitted data. The invention substantially reduces the amount of discarded data (data encoded and decoded in different communication modes, e.g., using different operators) in quantum key distribution without compromising security, by using significantly different probabilities for the different communication modes. Data are separated into sets according to the actual operators used in the encoding and decoding process, and the error rate for each set is determined individually. The invention increases the key distribution rate of the BB84 key distribution scheme proposed by Bennett and Brassard in 1984. Using the invention, the key distribution rate increases with the number of quantum signals transmitted and can asymptotically be doubled.
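
    The efficiency gain is easy to check in simulation: if both parties choose the rectilinear basis with probability p instead of 1/2, the fraction of signals surviving basis sifting is p^2 + (1-p)^2, which approaches 1 (double the standard BB84 yield) as p -> 1. A toy illustration of the sifting step only; the security analysis via per-basis error rates is not modeled:

```python
import numpy as np

rng = np.random.default_rng(4)

def sifted_fraction(n_signals, p_rect):
    """Fraction of signals kept after sifting when both parties pick the
    rectilinear basis with probability p_rect (standard BB84: 0.5)."""
    alice = rng.random(n_signals) < p_rect
    bob = rng.random(n_signals) < p_rect
    return np.mean(alice == bob)           # kept iff bases match

for p in (0.5, 0.9, 0.99):
    print(f"p = {p:<5} sifted fraction ~ {sifted_fraction(100_000, p):.3f}")
# p -> 1 gives p^2 + (1-p)^2 -> 1: asymptotically double the standard yield.
# Security is retained by estimating the error rate separately within each
# basis-matched subset, as the patent describes.
```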

  3. Validation of Automated Scoring of Science Assessments

    ERIC Educational Resources Information Center

    Liu, Ou Lydia; Rios, Joseph A.; Heilman, Michael; Gerard, Libby; Linn, Marcia C.

    2016-01-01

    Constructed response items can both measure the coherence of student ideas and serve as reflective experiences to strengthen instruction. We report on new automated scoring technologies that can reduce the cost and complexity of scoring constructed-response items. This study explored the accuracy of c-rater-ML, an automated scoring engine…

  4. Optimum coding techniques for MST radars

    NASA Technical Reports Server (NTRS)

    Sulzer, M. P.; Woodman, R. F.

    1986-01-01

    The optimum coding technique for MST (mesosphere stratosphere troposphere) radars is that which gives the lowest possible sidelobes in practice and can be implemented without too much computing power. Coding techniques are described in Farley (1985). A technique mentioned briefly there but not fully developed and not in general use is discussed here. This is decoding by means of a filter which is not matched to the transmitted waveform, in order to reduce sidelobes below the level obtained with a matched filter. This is the first part of the technique discussed here; the second part consists of measuring the transmitted waveform and using it as the basis for the decoding filter, thus reducing errors due to imperfections in the transmitter. There are two limitations to this technique. The first is a small loss in signal to noise ratio (SNR), which usually is not significant. The second problem is related to incomplete information received at the lowest ranges. An appendix shows a technique for handling this problem. Finally, it is shown that the use of complementary codes on transmission and nonmatched decoding gives the lowest possible sidelobe level and the minimum loss in SNR due to mismatch.
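
    The core idea, decoding with a filter deliberately mismatched to the transmitted code so that sidelobes drop below the matched-filter level at a small SNR cost, can be illustrated with a least-squares inverse filter for a Barker-13 code. This is a generic sketch of the principle, not the authors' filter design:

```python
import numpy as np

code = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], float)  # Barker-13

def peak_sidelobe_db(h):
    y = np.convolve(code, h)
    y_sl = np.abs(y).copy()
    y_sl[np.argmax(y_sl)] = 0                   # remove the main peak
    return 20 * np.log10(y_sl.max() / np.abs(y).max())

matched = code[::-1]                            # matched filter

# Mismatched (least-squares inverse) filter: a longer filter h chosen so that
# code * h approximates a delta, trading a little SNR for lower sidelobes.
L = 39
A = np.zeros((len(code) + L - 1, L))
for k in range(L):
    A[k:k + len(code), k] = code                # convolution matrix
d = np.zeros(len(A)); d[len(A) // 2] = 1.0      # desired delta response
h, *_ = np.linalg.lstsq(A, d, rcond=None)

peak = np.convolve(code, h)[len(A) // 2]
snr_loss_db = 10 * np.log10((code @ code) * (h @ h) / peak ** 2)
print(f"matched PSL:    {peak_sidelobe_db(matched):6.1f} dB")
print(f"mismatched PSL: {peak_sidelobe_db(h):6.1f} dB  "
      f"(SNR loss ~ {snr_loss_db:.2f} dB)")
```

    Lengthening the filter pushes sidelobes further down while the SNR loss stays fractions of a decibel, which is the trade-off the abstract describes.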

  5. Scalable SCPPM Decoder

    NASA Technical Reports Server (NTRS)

    Quir, Kevin J.; Gin, Jonathan W.; Nguyen, Danh H.; Nguyen, Huy; Nakashima, Michael A.; Moision, Bruce E.

    2012-01-01

    A decoder was developed that decodes a serial concatenated pulse position modulation (SCPPM) encoded information sequence. The decoder takes as input a sequence of four-bit log-likelihood ratios (LLRs) for each PPM slot in a codeword via a XAUI 10-Gb/s quad optical fiber interface. If the decoder is unavailable, it passes the LLRs on to the next decoder via a XAUI 10-Gb/s quad optical fiber interface; otherwise, it decodes the sequence and outputs information bits through a 1-Gb/s Ethernet UDP/IP (User Datagram Protocol/Internet Protocol) interface. The throughput for a single decoder unit is 150 Mb/s at an average of four decoding iterations; by connecting a number of decoder units in series, a decoding rate equal to the aggregate data rate is achieved. The unit is controlled through a 1-Gb/s Ethernet UDP/IP interface. This ground station decoder was developed to demonstrate a deep-space optical communication link capability, and is unique in its scalable design, which achieves real-time SCPPM decoding at the aggregate data rate.

  6. Information hiding techniques for infrared images: exploring the state-of-the art and challenges

    NASA Astrophysics Data System (ADS)

    Pomponiu, Victor; Cavagnino, Davide; Botta, Marco; Nejati, Hossein

    2015-10-01

    The proliferation of infrared technology and imaging systems enables a different perspective for tackling many computer vision problems in defense and security applications. Infrared images are widely used by law enforcement, Homeland Security and military organizations to achieve a significant advantage in situational awareness, and it is thus vital to protect these data against malicious attacks. Concurrently, sophisticated malware is being developed that is able to disrupt the security and integrity of these digital media; illegal distribution and manipulation, for instance, are possible malicious attacks on digital objects. In this paper we explore a new layer of defense for the integrity of infrared images with the aid of information hiding techniques such as watermarking. In this context, we analyze the efficiency of several optimal decoding schemes for a watermark inserted into the Singular Value Decomposition (SVD) domain of IR images using an additive spread spectrum (SS) embedding framework. In order to use the singular values (SVs) of the IR images with SS embedding, we adopt several restrictions that ensure the SVs maintain their statistics. For both the optimal maximum-likelihood decoder and sub-optimal decoders, we assume that the PDF of the SVs can be modeled by the Weibull distribution. Furthermore, we investigate the challenges involved in protecting and assuring the integrity of IR images, such as data complexity and the error-probability behavior, i.e., the probability of detection and the probability of false detection, for the applied optimal decoders. By taking into account the efficiency and the auxiliary information needed to decode the watermark, we discuss the suitable decoder for various operating situations. Experimental results are reported on a large dataset of IR images to show the imperceptibility and efficiency of the proposed scheme against various attack scenarios.
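
    A minimal sketch of the embedding side: a bipolar SS watermark added to the singular values with small strength so their statistics are approximately preserved, detected here with a simple informed correlation statistic rather than the paper's Weibull-based ML decoder. All parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

img = rng.random((128, 128))                     # host-image stand-in
U, s, Vt = np.linalg.svd(img, full_matrices=False)
w = rng.choice([-1.0, 1.0], size=s.shape)        # bipolar SS watermark

gamma = 0.02                                     # small, keeps SVs positive
marked = (U * (s * (1.0 + gamma * w))) @ Vt      # embed in the SVD domain

# Informed detection: project the received image back onto (U, V) to read
# out the per-SV perturbation, then correlate with candidate watermarks.
R = marked + 1e-3 * rng.standard_normal(img.shape)
s_rx = np.einsum('ji,jk,ik->i', U, R, Vt)        # diag(U^T R V)
stat = (s_rx - s) / (gamma * s)
print("corr, embedded mark:", round(float(np.corrcoef(stat, w)[0, 1]), 3))
print("corr, random mark:  ",
      round(float(np.corrcoef(stat, rng.choice([-1., 1.], s.shape))[0, 1]), 3))
```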

  7. Random Walk Graph Laplacian-Based Smoothness Prior for Soft Decoding of JPEG Images.

    PubMed

    Liu, Xianming; Cheung, Gene; Wu, Xiaolin; Zhao, Debin

    2017-02-01

    Given the prevalence of joint photographic experts group (JPEG) compressed images, optimizing image reconstruction from the compressed format remains an important problem. Instead of simply reconstructing a pixel block from the centers of indexed discrete cosine transform (DCT) coefficient quantization bins (hard decoding), soft decoding reconstructs a block by selecting appropriate coefficient values within the indexed bins with the help of signal priors. The challenge thus lies in how to define suitable priors and apply them effectively. In this paper, we combine three image priors - a Laplacian prior for DCT coefficients, a sparsity prior, and a graph-signal smoothness prior for image patches - to construct an efficient JPEG soft decoding algorithm. Specifically, we first use the Laplacian prior to compute a minimum mean square error initial solution for each code block. Next, we show that while the sparsity prior can reduce block artifacts, limiting the size of the overcomplete dictionary (to lower computation) leads to poor recovery of high DCT frequencies. To alleviate this problem, we design a new graph-signal smoothness prior (the desired signal has mainly low graph frequencies) based on the left eigenvectors of the random walk graph Laplacian matrix (LERaG). Compared with previous graph-signal smoothness priors, LERaG has desirable image filtering properties with low computation overhead. We demonstrate how LERaG can facilitate recovery of high DCT frequencies of a piecewise smooth signal via an interpretation of low-graph-frequency components as relaxed solutions to the normalized cut in spectral clustering. Finally, we construct a soft decoding algorithm using the three signal priors with appropriate prior weights. Experimental results show that our proposal noticeably outperforms state-of-the-art soft decoding algorithms in both objective and subjective evaluations.
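
    To make the smoothness prior concrete, the sketch below denoises a piecewise-smooth 1D "patch" by solving the MAP problem min ||x - y||^2 + lam * x^T L x, i.e. (I + lam L) x = y. For simplicity it uses the combinatorial Laplacian of a similarity-weighted chain graph rather than the paper's random-walk variant, so it only illustrates the general mechanism:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

rng = np.random.default_rng(6)

# Piecewise-smooth signal corrupted by noise (a stand-in for quantization error).
x = np.concatenate([np.full(32, 0.2), np.full(32, 0.8)])
y = x + 0.08 * rng.standard_normal(x.size)

# Chain-graph edge weights from signal similarity, so the edge spanning the
# discontinuity gets a near-zero weight and the step is not smoothed over.
wgt = np.exp(-np.diff(y) ** 2 / (2 * 0.15 ** 2))
Wm = sp.diags(wgt, 1, shape=(x.size, x.size))
Wm = Wm + Wm.T
L = sp.diags(np.asarray(Wm.sum(axis=1)).ravel()) - Wm   # graph Laplacian

lam = 4.0
x_hat = spsolve((sp.eye(x.size) + lam * L).tocsc(), y)  # (I + lam L) x = y
print(f"noisy MSE: {np.mean((y - x) ** 2):.4f}   "
      f"smoothed MSE: {np.mean((x_hat - x) ** 2):.4f}")
```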

  8. Low-density parity-check codes for volume holographic memory systems.

    PubMed

    Pishro-Nik, Hossein; Rahnavard, Nazanin; Ha, Jeongseok; Fekri, Faramarz; Adibi, Ali

    2003-02-10

    We investigate the application of low-density parity-check (LDPC) codes in volume holographic memory (VHM) systems. We show that a carefully designed irregular LDPC code performs very well in VHM systems. We optimize high-rate LDPC codes for the nonuniform error pattern in holographic memories to reduce the bit error rate substantially. Prior knowledge of the noise distribution is used for designing as well as decoding the LDPC codes. We show that these codes outperform Reed-Solomon (RS) codes and regular LDPC counterparts. Our simulations show that the maximum storage capacity of holographic memories can be increased by more than 50 percent if irregular LDPC codes with soft-decision decoding are used instead of the conventionally employed RS codes with hard-decision decoding. The performance of these LDPC codes is close to the information-theoretic capacity.
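
    Full belief-propagation decoding takes some machinery, but Gallager's hard-decision bit-flipping variant conveys the iterative parity-check idea in a few lines. A toy sketch using the (7,4) Hamming parity-check matrix as a stand-in for a long irregular LDPC matrix:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code, a toy stand-in for a long
# irregular LDPC code, but enough to show bit-flipping decoding.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(r, H, max_iter=10):
    r = r.copy()
    for _ in range(max_iter):
        syndrome = H @ r % 2
        if not syndrome.any():
            return r, True                       # all checks satisfied
        unsat = syndrome @ H                     # per-bit count of failed checks
        r[np.argmax(unsat)] ^= 1                 # flip the most-suspect bit
    return r, False

codeword = np.array([1, 0, 1, 1, 0, 1, 0])       # satisfies H c = 0 (mod 2)
assert not (H @ codeword % 2).any()
received = codeword.copy(); received[2] ^= 1     # single bit error
decoded, ok = bit_flip_decode(received, H)
print("success:", ok, " matches:", np.array_equal(decoded, codeword))
```

    Soft-decision decoding replaces the flip rule with message passing on the same H, which is where the capacity-approaching gains reported above come from.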

  9. Affective Brain-Computer Interfaces As Enabling Technology for Responsive Psychiatric Stimulation

    PubMed Central

    Widge, Alik S.; Dougherty, Darin D.; Moritz, Chet T.

    2014-01-01

    There is a pressing clinical need for responsive neurostimulators, which sense a patient’s brain activity and deliver targeted electrical stimulation to suppress unwanted symptoms. This is particularly true in psychiatric illness, where symptoms can fluctuate throughout the day. Affective BCIs, which decode emotional experience from neural activity, are a candidate control signal for responsive stimulators targeting the limbic circuit. Present affective decoders, however, cannot yet distinguish pathologic from healthy emotional extremes. Indiscriminate stimulus delivery would reduce quality of life and may be actively harmful. We argue that the key to overcoming this limitation is to specifically decode volition, in particular the patient’s intention to experience emotional regulation. Those emotion-regulation signals already exist in prefrontal cortex (PFC), and could be extracted with relatively simple BCI algorithms. We describe preliminary data from an animal model of PFC-controlled limbic brain stimulation and discuss next steps for pre-clinical testing and possible translation. PMID:25580443

  10. An integrated approach to improving noisy speech perception

    NASA Astrophysics Data System (ADS)

    Koval, Serguei; Stolbov, Mikhail; Smirnova, Natalia; Khitrov, Mikhail

    2002-05-01

    For a number of practical purposes and tasks, experts have to decode speech recordings of very poor quality. A combination of techniques is proposed to improve the intelligibility and quality of distorted speech messages and thus facilitate their comprehension. Along with the application of noise cancellation and speech signal enhancement techniques removing and/or reducing various kinds of distortion and interference (primarily unmasking and normalization in the time and frequency domains), the approach incorporates optimal expert listening tactics based on selective listening, nonstandard binaural listening, accounting for short-term and long-term human ear adaptation to noisy speech, as well as some methods of speech signal enhancement to support speech decoding during listening. The approach integrating the suggested techniques ensures high-quality results and has successfully been applied by Speech Technology Center experts and by numerous other users, mainly forensic institutions, to decode noisy speech records for courts, law enforcement and emergency services, accident investigation bodies, etc.

  11. A novel parallel pipeline structure of VP9 decoder

    NASA Astrophysics Data System (ADS)

    Qin, Huabiao; Chen, Wu; Yi, Sijun; Tan, Yunfei; Yi, Huan

    2018-04-01

    To improve the efficiency of the VP9 decoder, a novel parallel pipeline structure is presented in this paper. According to the decoding workflow, the VP9 decoder can be divided into sub-modules, which include entropy decoding, inverse quantization, inverse transform, intra prediction, inter prediction, deblocking and pixel adaptive compensation. By analyzing the computing time of each module, hotspot modules are located and the causes of low decoding efficiency can be identified. A novel pipelined decoder structure is then designed using mixed parallel decoding methods of data division and function division. The experimental results show that this structure can greatly improve the decoding efficiency of VP9.
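
    Function division maps naturally onto a chain of worker threads connected by bounded queues, so frame n+1 can be entropy-decoded while frame n is being predicted and filtered. A schematic Python sketch (stage names mirror the VP9 workflow; the per-stage "work" is a placeholder):

```python
import threading, queue

def stage(fn, q_in, q_out):
    while True:
        item = q_in.get()
        if item is None:                  # poison pill: propagate and stop
            if q_out is not None:
                q_out.put(None)
            break
        out = fn(item)
        if q_out is not None:
            q_out.put(out)

entropy_q = queue.Queue(maxsize=4)        # bounded queues give back-pressure
predict_q = queue.Queue(maxsize=4)
filter_q = queue.Queue(maxsize=4)

stages = [
    (lambda f: f + " ->residual", entropy_q, predict_q),  # entropy/iquant/itransform
    (lambda f: f + " ->pixels", predict_q, filter_q),     # intra/inter prediction
    (lambda f: print("out:", f), filter_q, None),         # deblock + output
]
threads = [threading.Thread(target=stage, args=s) for s in stages]
for t in threads:
    t.start()
for n in range(5):
    entropy_q.put(f"frame{n}")
entropy_q.put(None)                       # flush the pipeline
for t in threads:
    t.join()
```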

  12. Singer product apertures-A coded aperture system with a fast decoding algorithm

    NASA Astrophysics Data System (ADS)

    Byard, Kevin; Shutler, Paul M. E.

    2017-06-01

    A new type of coded aperture configuration that enables fast decoding of coded aperture shadowgram data is presented. Based on the products of incidence vectors generated from Singer difference sets, we call these Singer product apertures. For a range of aperture dimensions, we compare experimentally the performance of three decoding methods: standard decoding, induction decoding and direct vector decoding. In all cases the induction and direct vector methods are several orders of magnitude faster than the standard method, with direct vector decoding significantly faster than induction decoding. For apertures of the same dimensions, the speedup of direct vector decoding over induction decoding is greater for lower-throughput apertures.

  13. Decoding molecular interactions in microbial communities

    PubMed Central

    Abreu, Nicole A.; Taga, Michiko E.

    2016-01-01

    Microbial communities govern numerous fundamental processes on earth. Discovering and tracking molecular interactions among microbes is critical for understanding how single species and complex communities impact their associated host or natural environment. While recent technological developments in DNA sequencing and functional imaging have led to new and deeper levels of understanding, we are limited now by our inability to predict and interpret the intricate relationships and interspecies dependencies within these communities. In this review, we highlight the multifaceted approaches investigators have taken within their areas of research to decode interspecies molecular interactions that occur between microbes. Understanding these principles can give us greater insight into ecological interactions in natural environments and within synthetic consortia. PMID:27417261

  14. Detection of Error Related Neuronal Responses Recorded by Electrocorticography in Humans during Continuous Movements

    PubMed Central

    Milekovic, Tomislav; Ball, Tonio; Schulze-Bonhage, Andreas; Aertsen, Ad; Mehring, Carsten

    2013-01-01

    Background Brain-machine interfaces (BMIs) can translate the neuronal activity underlying a user’s movement intention into movements of an artificial effector. In spite of continuous improvements, errors in movement decoding are still a major problem of current BMI systems. If the difference between the decoded and intended movements becomes noticeable, it may lead to an execution error. Outcome errors, where subjects fail to reach a certain movement goal, are also present during online BMI operation. Detecting such errors can be beneficial for BMI operation: (i) errors can be corrected online after being detected and (ii) the adaptive BMI decoding algorithm can be updated to make fewer errors in the future. Methodology/Principal Findings Here, we show that error events can be detected from human electrocorticography (ECoG) during a continuous task with high precision, given a temporal tolerance of 300-400 milliseconds. We quantified the error detection accuracy and showed that, using only a small subset of 2×2 ECoG electrodes, 82% of the detection information for outcome errors and 74% of the detection information for execution errors available from all ECoG electrodes could be retained. Conclusions/Significance The error detection method presented here could be used to correct errors made during BMI operation or to adapt a BMI algorithm to make fewer errors in the future. Furthermore, our results indicate that a smaller ECoG implant could suffice for error detection. Reducing the size of an ECoG electrode implant used for BMI decoding and error detection could significantly reduce the medical risk of implantation. PMID:23383315

  15. Contemporary bloodletting in cardiac surgical care.

    PubMed

    Koch, Colleen G; Reineks, Edmunds Z; Tang, Anne S; Hixson, Eric D; Phillips, Shannon; Sabik, Joseph F; Henderson, J Michael; Blackstone, Eugene H

    2015-03-01

    Health care providers are seldom aware of the frequency and volume of phlebotomy for laboratory testing, bloodletting that often leads to hospital-acquired anemia. Our objectives were to examine the frequency of laboratory testing in patients undergoing cardiac surgery, calculate cumulative phlebotomy volume from the time of initial surgical consultation to hospital discharge, and propose strategies to reduce phlebotomy volume. From January 1, 2012 to June 30, 2012, 1,894 patients underwent cardiac surgery at Cleveland Clinic; 1,867 had 1 hospitalization and 27 had 2. Each laboratory test was associated with a test name and blood volume. Phlebotomy volume was estimated separately for the intensive care unit (ICU), hospital floors, and cumulatively. A total of 221,498 laboratory tests were performed, averaging 115 tests per patient. The most frequently performed tests were 88,068 blood gas analyses, 39,535 coagulation tests, 30,421 complete blood counts, and 29,374 metabolic panels. Phlebotomy volume differed between the ICU and hospital floors, with median volumes of 332 mL and 118 mL, respectively. The cumulative median volume for the entire hospital stay was 454 mL. More complex procedures were associated with higher overall phlebotomy volume than isolated procedures; e.g., the median volume for combined coronary artery bypass grafting (CABG) and valve procedures was 653 mL (25th/75th percentiles, 428/1,065 mL) versus 448 mL (284/658 mL) for isolated CABG and 338 mL (237/619 mL) for isolated valve procedures. We were astonished by the extent of bloodletting, with total phlebotomy volumes approaching amounts equivalent to 1 to 2 red blood cell units. Implementation of process improvement initiatives can potentially reduce phlebotomy volumes and resource utilization.

  16. Rapid colorimetric assay for gentamicin injection.

    PubMed

    Tarbutton, P

    1987-01-01

    A rapid colorimetric method for determining gentamicin concentration in commercial preparations of gentamicin sulfate injection was developed. Methods currently available for measuring gentamicin concentration via its colored complex with cupric ions in alkaline solution were modified to reduce the time required for a single analysis. The alkaline copper tartrate (ACT) reagent solution was prepared such that each milliliter contained 100 µmol cupric sulfate, 210 µmol potassium sodium tartrate, and 1.25 mmol sodium hydroxide. The assay involves mixing 0.3 mL gentamicin sulfate injection 40 mg/mL (of gentamicin), 1.0 mL ACT reagent, and 0.7 mL water; the absorbance of the resulting solution at 560 nm was used to calculate the gentamicin concentration in the sample. For injections containing 10 mg/mL of gentamicin, the amount of the injection was increased to 0.5 mL and water decreased to 0.5 mL. The concentration of gentamicin in samples representing 11 lots of gentamicin sulfate injection 40 mg/mL and 8 lots of gentamicin sulfate injection 10 mg/mL was determined. The specificity, reproducibility, and accuracy of the assay were assessed. The colored complex was stable for at least two hours. Gentamicin concentration ranged from 93.7 to 108% and from 95 to 109% of the stated label value of the 40 mg/mL and the 10 mg/mL injections, respectively. No components of the preservative system present in the injections interfered with the assay. Since other aminoglycosides produced a colored complex, the assay is not specific for gentamicin. The assay was accurate and reproducible over the range of 4-20 mg of gentamicin. This rapid and accurate assay can be easily applied in the hospital pharmacy setting.

  17. Differences in the Predictors of Reading Comprehension in First Graders from Low Socio-Economic Status Families with Either Good or Poor Decoding Skills

    PubMed Central

    Gentaz, Edouard; Sprenger-Charolles, Liliane; Theurel, Anne

    2015-01-01

    Based on the assumption that good decoding skills constitute a bootstrapping mechanism for reading comprehension, the present study investigated the relative contribution of the former skill to the latter compared to that of three other predictors of reading comprehension (listening comprehension, vocabulary and phonemic awareness) in 392 French-speaking first graders from low SES families. This large sample was split into three groups according to their level of decoding skills assessed by pseudoword reading. Using a cutoff of 1 SD above or below the mean of the entire population, there were 63 good decoders, 267 average decoders and 62 poor decoders. 58% of the variance in reading comprehension was explained by our four predictors, with decoding skills proving to be the best predictor (12.1%, 7.3% for listening comprehension, 4.6% for vocabulary and 3.3% for phonemic awareness). Interaction between group versus decoding skills, listening comprehension and phonemic awareness accounted for significant additional variance (3.6%, 1.1% and 1.0%, respectively). The effects on reading comprehension of decoding skills and phonemic awareness were higher in poor and average decoders than in good decoders whereas listening comprehension accounted for more variance in good and average decoders than in poor decoders. Furthermore, the percentage of children with impaired reading comprehension skills was higher in the group of poor decoders (55%) than in the two other groups (average decoders: 7%; good decoders: 0%) and only 6 children (1.5%) had impaired reading comprehension skills with unimpaired decoding skills, listening comprehension or vocabulary. These results challenge the outcomes of studies on “poor comprehenders” by showing that, at least in first grade, poor reading comprehension is strongly linked to the level of decoding skills. PMID:25793519

  18. Differences in the predictors of reading comprehension in first graders from low socio-economic status families with either good or poor decoding skills.

    PubMed

    Gentaz, Edouard; Sprenger-Charolles, Liliane; Theurel, Anne

    2015-01-01

    Based on the assumption that good decoding skills constitute a bootstrapping mechanism for reading comprehension, the present study investigated the relative contribution of the former skill to the latter compared to that of three other predictors of reading comprehension (listening comprehension, vocabulary and phonemic awareness) in 392 French-speaking first graders from low SES families. This large sample was split into three groups according to their level of decoding skills assessed by pseudoword reading. Using a cutoff of 1 SD above or below the mean of the entire population, there were 63 good decoders, 267 average decoders and 62 poor decoders. 58% of the variance in reading comprehension was explained by our four predictors, with decoding skills proving to be the best predictor (12.1%, 7.3% for listening comprehension, 4.6% for vocabulary and 3.3% for phonemic awareness). Interaction between group versus decoding skills, listening comprehension and phonemic awareness accounted for significant additional variance (3.6%, 1.1% and 1.0%, respectively). The effects on reading comprehension of decoding skills and phonemic awareness were higher in poor and average decoders than in good decoders whereas listening comprehension accounted for more variance in good and average decoders than in poor decoders. Furthermore, the percentage of children with impaired reading comprehension skills was higher in the group of poor decoders (55%) than in the two other groups (average decoders: 7%; good decoders: 0%) and only 6 children (1.5%) had impaired reading comprehension skills with unimpaired decoding skills, listening comprehension or vocabulary. These results challenge the outcomes of studies on "poor comprehenders" by showing that, at least in first grade, poor reading comprehension is strongly linked to the level of decoding skills.

  19. Learning a common dictionary for subject-transfer decoding with resting calibration.

    PubMed

    Morioka, Hiroshi; Kanemura, Atsunori; Hirayama, Jun-ichiro; Shikauchi, Manabu; Ogawa, Takeshi; Ikeda, Shigeyuki; Kawanabe, Motoaki; Ishii, Shin

    2015-05-01

    Brain signals measured over a series of experiments have inherent variability because of different physical and mental conditions among multiple subjects and sessions. Such variability complicates the analysis of data from multiple subjects and sessions in a consistent way, and degrades the performance of subject-transfer decoding in a brain-machine interface (BMI). To accommodate the variability in brain signals, we propose 1) a method for extracting spatial bases (or a dictionary) shared by multiple subjects, by employing a signal-processing technique of dictionary learning modified to compensate for variations between subjects and sessions, and 2) an approach to subject-transfer decoding that uses the resting-state activity of a previously unseen target subject as calibration data for compensating for variations, eliminating the need for a standard calibration based on task sessions. Applying our methodology to a dataset of electroencephalography (EEG) recordings during a selective visual-spatial attention task from multiple subjects and sessions, where the variability compensation was essential for reducing the redundancy of the dictionary, we found that the extracted common brain activities were reasonable in the light of neuroscience knowledge. The applicability to subject-transfer decoding was confirmed by improved performance over existing decoding methods. These results suggest that analyzing multisubject brain activities on common bases by the proposed method enables information sharing across subjects with low-burden resting calibration, and is effective for practical use of BMI in variable environments.
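
    The shared-dictionary idea can be prototyped with off-the-shelf sparse coding; the sketch below learns one dictionary across subjects whose data differ by per-subject gains, standing in for inter-subject variability. The paper's explicit variation compensation and resting-state calibration are not modeled here:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(7)

# Stand-in for multi-subject EEG features: shared latent spatial bases mixed
# with per-subject channel gains (a crude model of inter-subject variability).
n_subjects, n_trials, n_channels, n_atoms = 4, 60, 32, 6
true_bases = rng.standard_normal((n_atoms, n_channels))
X = np.vstack([
    (rng.laplace(size=(n_trials, n_atoms)) @ true_bases)
    * (1.0 + 0.3 * rng.standard_normal(n_channels))      # subject-specific gain
    for _ in range(n_subjects)
])

# One dictionary learned across all subjects; the sparse codes absorb
# trial-to-trial variability.
dl = DictionaryLearning(n_components=n_atoms, alpha=1.0,
                        transform_algorithm="lasso_lars", random_state=0)
codes = dl.fit_transform(X)
print("dictionary shape:", dl.components_.shape)          # (n_atoms, n_channels)
print("mean reconstruction error:",
      round(float(np.mean((X - codes @ dl.components_) ** 2)), 3))
```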

  20. Global synchronization of complex dynamical networks through digital communication with limited data rate.

    PubMed

    Wang, Yan-Wu; Bian, Tao; Xiao, Jiang-Wen; Wen, Changyun

    2015-10-01

    This paper studies the global synchronization of complex dynamical networks (CDNs) under digital communication with limited bandwidth. To realize the digital communication, so-called uniform-quantizer-sets are introduced to quantize the states of nodes, which are then encoded and decoded by newly designed encoders and decoders. To meet the bandwidth constraint, a scaling function is utilized to guarantee that the quantizers have bounded inputs, and thus bounded real-time quantization levels. Moreover, a new type of vector norm is introduced to simplify the expression of the bandwidth limit. Through mathematical induction, a sufficient condition is derived to ensure global synchronization of the CDNs. The lower bound on the sum of the real-time quantization levels is analyzed for different cases. An optimization method is employed to relax the requirements on the network topology and to determine the minimum of this lower bound for each case. Simulation examples are presented to illustrate the established results.

  1. Feasibility of video codec algorithms for software-only playback

    NASA Astrophysics Data System (ADS)

    Rodriguez, Arturo A.; Morse, Ken

    1994-05-01

    Software-only video codecs can provide good playback performance on desktop computers with a 486 or 68040 CPU running at 33 MHz without special hardware assistance. Typically, playback of compressed video can be divided into three tasks: the actual decoding of the video stream, color conversion, and the transfer of decoded video data from system RAM to video RAM. By current standards, good playback performance is the decoding and display of video streams of 320 by 240 (or larger) compressed frames at 15 (or greater) frames per second. Software-only video codecs have evolved by modifying and tailoring existing compression methodologies to suit video playback on desktop computers. In this paper we examine the characteristics used to evaluate software-only video codec algorithms, namely: image fidelity (i.e., image quality), bandwidth (i.e., compression), ease of decoding (i.e., playback performance), memory consumption, compression-to-decompression asymmetry, scalability, and delay. We discuss the tradeoffs among these variables and the compromises that can be made to achieve low numerical complexity for software-only playback. Frame-differencing approaches are described, since software-only video codecs typically employ them to enhance playback performance. To complement other papers that appear in this session of the Proceedings, we review methods derived from binary pattern image coding, since these methods are amenable to software-only playback. In particular, we introduce a novel approach called pixel distribution image coding.
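
    Frame differencing earns its keep because most blocks of most frames are unchanged; a conditional-replenishment toy codec shows the mechanics (all sizes and thresholds illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)

H, W, B = 48, 64, 8                       # frame size and block size

def encode(prev, cur, thresh=4.0):
    """Emit only blocks whose mean change exceeds the threshold."""
    updates = []
    for by in range(0, H, B):
        for bx in range(0, W, B):
            blk = cur[by:by + B, bx:bx + B]
            if np.abs(blk - prev[by:by + B, bx:bx + B]).mean() > thresh:
                updates.append((by, bx, blk.copy()))
    return updates                        # only changed blocks are "sent"

def decode(prev, updates):
    """Playback just patches the previous frame with the received blocks."""
    out = prev.copy()
    for by, bx, blk in updates:
        out[by:by + B, bx:bx + B] = blk
    return out

frame0 = rng.integers(0, 256, (H, W)).astype(float)
frame1 = frame0.copy()
frame1[8:24, 16:40] += 30.0               # a moving object changes one region

updates = encode(frame0, frame1)
recon = decode(frame0, updates)
total = (H // B) * (W // B)
print(f"blocks sent: {len(updates)}/{total}; "
      f"lossless: {np.array_equal(recon, frame1)}")
```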

  2. An information-theoretic approach to motor action decoding with a reconfigurable parallel architecture.

    PubMed

    Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C

    2011-01-01

    Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian error distributions, this approach is not optimal. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts), we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters necessary in motor decoding is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA), and it scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture achieves sublinear increases in execution time with respect to both window size and filter order.
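
    The MEE criterion replaces squared error with an estimate of Renyi's quadratic entropy of the errors, optimized through the Parzen-window "information potential" V = mean G_sigma(e_i - e_j); the O(N^2) pairwise interactions visible below are exactly the computations the FPGA design parallelizes. A toy batch-gradient comparison against the Wiener solution under impulsive noise (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(9)

N, D = 400, 8
X = rng.standard_normal((N, D))                  # stand-in for spike counts
w_true = rng.standard_normal(D)
noise = rng.standard_normal(N) * np.where(rng.random(N) < 0.1, 8.0, 0.3)
y = X @ w_true + noise                           # 10% gross outliers

w_wiener, *_ = np.linalg.lstsq(X, y, rcond=None) # MSE-optimal solution

def mee_step(w, sigma=1.0, lr=2.0):
    e = y - X @ w
    de = e[:, None] - e[None, :]                 # pairwise error differences
    G = np.exp(-de ** 2 / (2 * sigma ** 2))      # Gaussian Parzen kernel
    dX = X[:, None, :] - X[None, :, :]           # x_i - x_j, shape (N, N, D)
    grad = ((G * de)[:, :, None] * dX).mean(axis=(0, 1)) / sigma ** 2
    return w + lr * grad                         # ascend the information potential

w_mee = w_wiener.copy()                          # MEE refines the Wiener start
for _ in range(200):
    w_mee = mee_step(w_mee)

def angle(w):
    c = w @ w_true / (np.linalg.norm(w) * np.linalg.norm(w_true))
    return np.degrees(np.arccos(np.clip(c, -1, 1)))

print(f"angle to true weights: wiener {angle(w_wiener):.2f} deg, "
      f"MEE {angle(w_mee):.2f} deg")
```

    The kernel downweights error pairs inflated by outliers, which is why the entropy criterion tends to recover the true weights more faithfully than least squares here.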

  3. Facial decoding in schizophrenia is underpinned by basic visual processing impairments.

    PubMed

    Belge, Jan-Baptist; Maurage, Pierre; Mangelinckx, Camille; Leleux, Dominique; Delatte, Benoît; Constant, Eric

    2017-09-01

    Schizophrenia is associated with a strong deficit in the decoding of emotional facial expressions (EFE). Nevertheless, it is still unclear whether this deficit is specific to emotions or due to a more general impairment in facial processing. This study was designed to clarify this issue. Thirty patients suffering from schizophrenia and 30 matched healthy controls performed several tasks evaluating the recognition of both changeable (i.e., eye orientation and emotions) and stable (i.e., gender, age) facial characteristics. Accuracy and reaction times were recorded. Patients with schizophrenia showed a performance deficit (in both accuracy and reaction times) in the perception of changeable as well as stable aspects of faces, without any specific deficit for emotional decoding. Our results demonstrate a generalized face recognition deficit in schizophrenia, probably caused by an impairment in basic visual processing. The EFE decoding deficit thus appears not to be specific to emotion processing but to be at least partly rooted in a generalized deficit in lower-level perceptual processing, occurring before the stage of emotion processing and underlying more complex cognitive dysfunctions. These findings should encourage future investigations to explore the neurophysiological background of these generalized perceptual deficits, and stimulate a clinical approach focusing on more basic visual processing.

  4. Decoding Grasping Movements from the Parieto-Frontal Reaching Circuit in the Nonhuman Primate.

    PubMed

    Nelissen, Koen; Fiave, Prosper Agbesi; Vanduffel, Wim

    2018-04-01

    Prehension movements typically include a reaching phase, guiding the hand toward the object, and a grip phase, shaping the hand around it. The dominant view posits that these components rely upon largely independent parieto-frontal circuits: a dorso-medial circuit involved in reaching and a dorso-lateral circuit involved in grasping. However, mounting evidence suggests a more complex arrangement, with dorso-medial areas contributing to both reaching and grasping. To investigate the role of the dorso-medial reaching circuit in grasping, we trained monkeys to reach-and-grasp different objects in the dark and determined whether hand configurations could be decoded from functional magnetic resonance imaging (fMRI) responses obtained from the reaching and grasping circuits. Indicative of their established role in grasping, object-specific grasp decoding was found in the anterior intraparietal (AIP) area, inferior parietal lobule area PFG and ventral premotor region F5 of the lateral grasping circuit, as well as in primary motor cortex. Importantly, the medial reaching circuit also conveyed robust grasp-specific information, as evidenced by significant decoding in parietal reach regions (particularly V6A) and dorsal premotor region F2. These data support the proposed role of dorso-medial "reach" regions in controlling aspects of grasping and demonstrate the value of complementing univariate with more sensitive multivariate analyses of fMRI data in uncovering information coding in the brain.

  5. Translating the "Banana Genome" to Delineate Stress Resistance, Dwarfing, Parthenocarpy and Mechanisms of Fruit Ripening.

    PubMed

    Dash, Prasanta K; Rai, Rhitu

    2016-01-01

    Evolutionarily frozen, genetically sterile, and globally iconic, the banana remained untouched by the green revolution, and researchers still face intrinsic impediments to its varietal improvement. Recently, this wonder crop entered the genomics era with the decoding of the structural genome of the double-haploid Pahang genotype (AA genome constitution) of Musa acuminata. Its complex genome, decoded by hybrid sequencing strategies, revealed a panoply of genes and transcription factors involved in the process of sucrose conversion that imparts sweetness to its fruit. Historically, banana has faced the wrath of pandemic bacterial, fungal, and viral diseases and a multitude of abiotic stresses that have ruined the livelihoods of small and marginal farmers and destroyed commercial plantations. Decoding the structural genome of this climacteric fruit has given impetus to a deeper understanding of the repertoire of genes involved in disease resistance, understanding the mechanism of dwarfing to develop an ideal plant type, unraveling the process of parthenocarpy, and studying fruit ripening for better fruit quality. Further, comparative genomics will usher in the integration of information from its decoded genome and those of other monocots into field applications in banana, including but not limited to yield enhancement, food security, livelihood assurance, and energy sustainability. In this mini review, we discuss pre- and post-genomic discoveries and highlight accomplishments in structural genomics, genetic engineering, and forward genetics, with an aim to target genes and transcription factors for translational research in banana.

  6. Adaptive Chroma Subsampling-binding and Luma-guided Chroma Reconstruction Method for Screen Content Images.

    PubMed

    Chung, Kuo-Liang; Huang, Chi-Chao; Hsu, Tsu-Chun

    2017-09-04

    In this paper, we propose a novel adaptive chroma subsampling-binding and luma-guided (ASBLG) chroma reconstruction method for screen content images (SCIs). After receiving the decoded luma and subsampled chroma image from the decoder, a fast winner-first voting strategy is proposed to identify the chroma subsampling scheme used prior to compression. Then, the decoded luma image is subsampled in the same way as the identified scheme was applied to the chroma image, so that an accurate correlation between the subsampled decoded luma image and the decoded subsampled chroma image can be established. Accordingly, an adaptive sliding window-based and luma-guided chroma reconstruction method is proposed. The related computational complexity analysis is also provided. We use two quality metrics, the color peak signal-to-noise ratio (CPSNR) of the reconstructed chroma images and SCIs, and the gradient-based structure similarity index (CGSS) of the reconstructed SCIs, to evaluate quality performance. Based on 26 typical test SCIs and 6 JCT-VC test screen content video sequences (SCVs), several experiments show that on average, the CPSNR gains of the reconstructed UV images by 4:2:0(A)-ASBLG, SCIs by 4:2:0(MPEG-B)-ASBLG, and SCVs by 4:2:0(A)-ASBLG are 2.1 dB, 1.87 dB, and 1.87 dB, respectively, when compared with those of the other combinations. Specifically, in terms of CPSNR and CGSS, CSBILINEAR-ASBLG for the test SCIs and CSBICUBIC-ASBLG for the test SCVs outperform the existing state-of-the-art combinations, where CSBILINEAR and CSBICUBIC denote the luma-aware chroma subsampling schemes of Wang et al.

  7. Different spatio-temporal electroencephalography features drive the successful decoding of binaural and monaural cues for sound localization.

    PubMed

    Bednar, Adam; Boland, Francis M; Lalor, Edmund C

    2017-03-01

    The human ability to localize sound is essential for monitoring our environment and helps us to analyse complex auditory scenes. Although the acoustic cues mediating sound localization have been established, it remains unknown how these cues are represented in human cortex. In particular, it is still a point of contention whether binaural and monaural cues are processed by the same or distinct cortical networks. In this study, participants listened to a sequence of auditory stimuli from different spatial locations while we recorded their neural activity using electroencephalography (EEG). The stimuli were presented over a loudspeaker array, which allowed us to deliver realistic, free-field stimuli in both the horizontal and vertical planes. Using a multivariate classification approach, we showed that it is possible to decode sound source location from scalp-recorded EEG. Robust and consistent decoding was shown for stimuli that provide binaural cues (i.e. Left vs. Right stimuli). Decoding location when only monaural cues were available (i.e. Front vs. Rear and elevational stimuli) was successful for a subset of subjects and showed less consistency. Notably, the spatio-temporal pattern of EEG features that facilitated decoding differed based on the availability of binaural and monaural cues. In particular, we identified neural processing of binaural cues at around 120 ms post-stimulus and found that monaural cues are processed later between 150 and 200 ms. Furthermore, different spatial activation patterns emerged for binaural and monaural cue processing. These spatio-temporal dissimilarities suggest the involvement of separate cortical mechanisms in monaural and binaural acoustic cue processing.

  8. Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth

    ERIC Educational Resources Information Center

    Jeon, Minjeong

    2012-01-01

    Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…

  9. The VLSI design of the sub-band filterbank in MP3 decoding

    NASA Astrophysics Data System (ADS)

    Liu, Jia-Xin; Luo, Li

    2018-03-01

    The sub-band filterbank is one of the most important, and the most computationally demanding, modules in MP3 decoding. In order to save CPU resources and integrate the sub-band filterbank into an MP3 IP core, a hardware circuit for the sub-band filterbank module is designed in this paper. A fast algorithm suited to hardware implementation is proposed and realized on an FPGA development board. The results show that the sub-band filterbank functions correctly while using very few registers, and that the amount of computation and the ROM resources required are greatly reduced.

  10. Methods of alleviation of ionospheric scintillation effects on digital communications

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1974-01-01

    The degradation of the performance of digital communication systems because of ionospheric scintillation effects can be reduced either by diversity techniques or by coding. The effectiveness of traditional space-diversity, frequency-diversity and time-diversity techniques is reviewed and design considerations isolated. Time-diversity signaling is then treated as an extremely simple form of coding. More advanced coding methods, such as diffuse threshold decoding and burst-trapping decoding, which appear attractive in combatting scintillation effects are discussed and design considerations noted. Finally, adaptive coding techniques appropriate when the general state of the channel is known are discussed.

  11. The design plan of a VLSI single chip (255, 223) Reed-Solomon decoder

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.; Shao, H. M.; Deutsch, L. J.

    1987-01-01

The very large-scale integration (VLSI) architecture of a single-chip (255, 223) Reed-Solomon decoder for decoding both errors and erasures is described. A decoding failure detection capability is also included in this system, so that the decoder will recognize a failure to decode instead of introducing additional errors; this can happen whenever the received word contains too many errors and erasures for the code to correct. The number of transistors needed to implement this decoder is estimated at about 75,000 if the delay for the received message is not included. This is in contrast to the older transform decoding algorithm, which needs about 100,000 transistors, although the transform decoder is simpler in architecture than the time-domain decoder. It is therefore possible to implement a single-chip (255, 223) Reed-Solomon decoder with today's VLSI technology. An implementation strategy for the decoder system is presented. This represents the first step in a plan to take advantage of advanced coding techniques to realize a 2.0 dB coding gain for future space missions.

  12. Highly Decorated Lignins in Leaf Tissues of the Canary Island Date Palm Phoenix canariensis

    PubMed Central

    Bartuce, Allison; Free, Heather C.A.; Smith, Bronwen G.

    2017-01-01

    The cell walls of leaf base tissues of the Canary Island date palm (Phoenix canariensis) contain lignins with the most complex compositions described to date. The lignin composition varies by tissue region and is derived from traditional monolignols (ML) along with an unprecedented range of ML conjugates: ML-acetate, ML-benzoate, ML-p-hydroxybenzoate, ML-vanillate, ML-p-coumarate, and ML-ferulate. The specific functions of such complex lignin compositions are unknown. However, the distribution of the ML conjugates varies depending on the tissue region, indicating that they may play specific roles in the cell walls of these tissues and/or in the plant’s defense system. PMID:28894022

  13. Acoustic landmarks drive delta-theta oscillations to enable speech comprehension by facilitating perceptual parsing

    PubMed Central

    Doelling, Keith; Arnal, Luc; Ghitza, Oded; Poeppel, David

    2013-01-01

    A growing body of research suggests that intrinsic neuronal slow (< 10 Hz) oscillations in auditory cortex appear to track incoming speech and other spectro-temporally complex auditory signals. Within this framework, several recent studies have identified critical-band temporal envelopes as the specific acoustic feature being reflected by the phase of these oscillations. However, how this alignment between speech acoustics and neural oscillations might underpin intelligibility is unclear. Here we test the hypothesis that the ‘sharpness’ of temporal fluctuations in the critical band envelope acts as a temporal cue to speech syllabic rate, driving delta-theta rhythms to track the stimulus and facilitate intelligibility. We interpret our findings as evidence that sharp events in the stimulus cause cortical rhythms to re-align and parse the stimulus into syllable-sized chunks for further decoding. Using magnetoencephalographic recordings, we show that by removing temporal fluctuations that occur at the syllabic rate, envelope-tracking activity is reduced. By artificially reinstating these temporal fluctuations, envelope-tracking activity is regained. These changes in tracking correlate with intelligibility of the stimulus. Together, the results suggest that the sharpness of fluctuations in the stimulus, as reflected in the cochlear output, drive oscillatory activity to track and entrain to the stimulus, at its syllabic rate. This process likely facilitates parsing of the stimulus into meaningful chunks appropriate for subsequent decoding, enhancing perception and intelligibility. PMID:23791839

  14. Mapping visual stimuli to perceptual decisions via sparse decoding of mesoscopic neural activity.

    PubMed

    Sajda, Paul

    2010-01-01

    In this talk I will describe our work investigating sparse decoding of neural activity, given a realistic mapping of the visual scene to neuronal spike trains generated by a model of primary visual cortex (V1). We use a linear decoder which imposes sparsity via an L1 norm. The decoder can be viewed as a decoding neuron (linear summation followed by a sigmoidal nonlinearity) in which there are relatively few non-zero synaptic weights. We find: (1) the best decoding performance is for a representation that is sparse in both space and time, (2) decoding of a temporal code results in better performance than a rate code and is also a better fit to the psychophysical data, (3) the number of neurons required for decoding increases monotonically as signal-to-noise in the stimulus decreases, with as little as 1% of the neurons required for decoding at the highest signal-to-noise levels, and (4) sparse decoding results in a more accurate decoding of the stimulus and is a better fit to psychophysical performance than a distributed decoding, for example one imposed by an L2 norm. We conclude that sparse coding is well-justified from a decoding perspective in that it results in a minimum number of neurons and maximum accuracy when sparse representations can be decoded from the neural dynamics.
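
    A small sketch of the L1-versus-L2 contrast on synthetic data (not the talk's V1 model): an L1-penalized linear decoder concentrates its weight on the few informative units, while the L2-penalized decoder spreads weight over all of them. Feature counts and penalty strength are illustrative assumptions:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(3)
      n_trials, n_units = 400, 500
      y = rng.integers(0, 2, n_trials)
      X = rng.standard_normal((n_trials, n_units))
      X[:, :5] += y[:, None] * 1.0       # only ~1% of units carry the signal

      for penalty in ("l1", "l2"):
          clf = LogisticRegression(penalty=penalty, solver="liblinear", C=0.1)
          clf.fit(X, y)
          nz = int((clf.coef_ != 0).sum())
          print(penalty, "accuracy %.2f" % clf.score(X, y), "nonzero weights:", nz)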

  15. Lossless compression algorithm for REBL direct-write e-beam lithography system

    NASA Astrophysics Data System (ADS)

    Cramer, George; Liu, Hsin-I.; Zakhor, Avideh

    2010-03-01

Future lithography systems must produce microchips with smaller feature sizes, while maintaining throughputs comparable to those of today's optical lithography systems. This places stringent constraints on the effective data throughput of any maskless lithography system. In recent years, we have developed a datapath architecture for direct-write lithography systems, and have shown that compression plays a key role in reducing the throughput requirements of such systems. Our approach integrates a low-complexity hardware-based decoder with the writers, in order to decompress a compressed data layer in real time on the fly. In doing so, we have developed a spectrum of lossless compression algorithms for integrated circuit layout data that trade off compression efficiency against hardware complexity, the latest of which is Block Golomb Context Copy Coding (Block GC3). In this paper, we present a modified version of Block GC3 called Block RGC3, specifically tailored to the REBL direct-write E-beam lithography system. Two characteristic features of the REBL system are a rotary stage, resulting in arbitrarily rotated layout imagery, and E-beam corrections applied prior to writing the data, both of which present significant challenges to lossless compression algorithms. Together, these effects reduce the effectiveness of both the copy and predict compression methods within Block GC3. Like Block GC3, the newly proposed Block RGC3 divides the image into a grid of two-dimensional "blocks" of pixels, each of which copies from a specified location in a history buffer of recently decoded pixels. However, in Block RGC3 the number of possible copy locations is significantly increased, so as to allow repetition to be discovered along any angle of orientation rather than only horizontally or vertically. Also, by copying smaller groups of pixels at a time, repetition in layout patterns is easier to find and exploit. As a side effect, this increases the total number of copy locations to transmit; this is countered with an extra region-growing step, which enforces spatial coherence among neighboring copy locations, thereby improving compression efficiency. We characterize the performance of Block RGC3 in terms of compression efficiency and encoding complexity on a number of rotated Metal 1, Poly, and Via layouts at various angles, and show that Block RGC3 provides higher compression efficiency than existing lossless compression algorithms, including JPEG-LS, ZIP, BZIP2, and Block GC3.
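
    The copy primitive shared by Block GC3 and Block RGC3 can be sketched as follows (a toy version: real candidate sets, block sizes, and the region-growing step are far more elaborate):

      import numpy as np

      def best_copy(block, history, candidates):
          """Return the candidate offset whose history block mismatches least."""
          h, w = block.shape
          best, best_err = None, None
          for (dy, dx) in candidates:
              ref = history[dy:dy + h, dx:dx + w]
              err = int((ref != block).sum())     # residual pixels to transmit
              if best_err is None or err < best_err:
                  best, best_err = (dy, dx), err
          return best, best_err

      rng = np.random.default_rng(4)
      history = (rng.random((64, 64)) > 0.5).astype(np.uint8)
      block = history[10:14, 20:24].copy()        # a block that repeats exactly...
      block[0, 0] ^= 1                            # ...except for one pixel
      cands = [(y, x) for y in range(0, 60, 2) for x in range(0, 60, 2)]
      print(best_copy(block, history, cands))     # likely ((10, 20), 1)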

  16. Efficient and flexible memory architecture to alleviate data and context bandwidth bottlenecks of coarse-grained reconfigurable arrays

    NASA Astrophysics Data System (ADS)

    Yang, Chen; Liu, LeiBo; Yin, ShouYi; Wei, ShaoJun

    2014-12-01

The computational capability of a coarse-grained reconfigurable array (CGRA) can be significantly restrained by data and context memory bandwidth bottlenecks. Traditionally, two methods have been used to resolve this problem. One method loads the context into the CGRA at run time. This method occupies very little on-chip memory but induces very large latency, which leads to low computational efficiency. The other method adopts a multi-context structure. This method loads the context into the on-chip context memory at the boot phase. Broadcasting the pointer of a set of contexts changes the hardware configuration on a cycle-by-cycle basis. The size of the context memory induces a large area overhead in multi-context structures, which places major restrictions on application complexity. This paper proposes a Predictable Context Cache (PCC) architecture to address the above context issues by buffering the context inside the CGRA. In this architecture, context is dynamically transferred into the CGRA. Utilizing a PCC significantly reduces the on-chip context memory, and the complexity of the applications running on the CGRA is no longer restricted by its size. For the data bandwidth issue, data preloading is the most frequently used approach to hide input-data latency and speed up data transmission. Rather than fundamentally reducing the amount of input data, this approach overlaps data transfer with computation. However, data preloading cannot work efficiently because data transmission becomes the critical path as the reconfigurable array scales up. This paper also presents a Hierarchical Data Memory (HDM) architecture as a solution to this efficiency problem. In this architecture, high internal bandwidth is provided to buffer both reused input data and intermediate data. The HDM architecture relieves the external memory of the data transfer burden, so performance is significantly improved. Using PCC and HDM, experiments running mainstream video decoding programs achieved performance improvements of 13.57%-19.48% with a reasonable memory size, and H.264 high-profile video decoding at 1080p@35.7 fps can be achieved on the PCC and HDM architecture at a 200 MHz working frequency. Further, the size of the on-chip context memory no longer restricts complex applications, which execute efficiently on the PCC and HDM architecture.

  17. The "periodic table" of the genetic code: A new way to look at the code and the decoding process.

    PubMed

    Komar, Anton A

    2016-01-01

    Henri Grosjean and Eric Westhof recently presented an information-rich, alternative view of the genetic code, which takes into account current knowledge of the decoding process, including the complex nature of interactions between mRNA, tRNA and rRNA that take place during protein synthesis on the ribosome, and it also better reflects the evolution of the code. The new asymmetrical circular genetic code has a number of advantages over the traditional codon table and the previous circular diagrams (with a symmetrical/clockwise arrangement of the U, C, A, G bases). Most importantly, all sequence co-variances can be visualized and explained based on the internal logic of the thermodynamics of codon-anticodon interactions.

  18. Force spectroscopy of biomolecular folding and binding: theory meets experiment

    NASA Astrophysics Data System (ADS)

    Dudko, Olga

    2015-03-01

Conformational transitions in biological macromolecules usually serve as the mechanism that brings biomolecules into their working shape and enables their biological function. Single-molecule force spectroscopy probes conformational transitions by applying force to individual macromolecules and recording their response, or "mechanical fingerprints," in the form of force-extension curves. However, how can we decode these fingerprints so that they reveal the kinetic barriers and the associated timescales of a biological process? I will present an analytical theory of the mechanical fingerprints of macromolecules. The theory is suitable for decoding such fingerprints to extract the barriers and timescales. The application of the theory will be illustrated through recent studies on protein-DNA interactions and the receptor-ligand complexes involved in blood clot formation.
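
    For background, one widely used result from this line of theory is the Dudko-Hummer-Szabo expression for the force-dependent rupture rate, whose parameters (barrier location x‡, barrier height ΔG‡, intrinsic rate k0, shape parameter ν) are what fitting "decodes" from rupture-force data. This is standard background, not necessarily the exact form presented in the talk; in LaTeX:

      k(F) = k_0 \left( 1 - \frac{\nu F x^{\ddagger}}{\Delta G^{\ddagger}} \right)^{1/\nu - 1}
             \exp\left\{ \beta \Delta G^{\ddagger} \left[ 1 -
             \left( 1 - \frac{\nu F x^{\ddagger}}{\Delta G^{\ddagger}} \right)^{1/\nu} \right] \right\}

    Here ν = 1/2, 2/3, and 1 correspond to the cusp, linear-cubic, and Bell-like barrier shapes, respectively.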

  19. Coding/decoding and reversibility of droplet trains in microfluidic networks.

    PubMed

    Fuerstman, Michael J; Garstecki, Piotr; Whitesides, George M

    2007-02-09

    Droplets of one liquid suspended in a second, immiscible liquid move through a microfluidic device in which a channel splits into two branches that reconnect downstream. The droplets choose a path based on the number of droplets that occupy each branch. The interaction among droplets in the channels results in complex sequences of path selection. The linearity of the flow through the microchannels, however, ensures that the behavior of the system can be reversed. This reversibility makes it possible to encrypt and decrypt signals coded in the intervals between droplets. The encoding/decoding device is a functional microfluidic system that requires droplets to navigate a network in a precise manner without the use of valves, switches, or other means of external control.

  20. A long constraint length VLSI Viterbi decoder for the DSN

    NASA Technical Reports Server (NTRS)

    Statman, J. I.; Zimmerman, G.; Pollara, F.; Collins, O.

    1988-01-01

    A Viterbi decoder, capable of decoding convolutional codes with constraint lengths up to 15, is under development for the Deep Space Network (DSN). The objective is to complete a prototype of this decoder by late 1990, and demonstrate its performance using the (15, 1/4) encoder in Galileo. The decoder is expected to provide 1 to 2 dB improvement in bit SNR, compared to the present (7, 1/2) code and existing Maximum Likelihood Convolutional Decoder (MCD). The decoder will be fully programmable for any code up to constraint length 15, and code rate 1/2 to 1/6. The decoder architecture and top-level design are described.
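
    The algorithm itself scales exponentially with constraint length, which is what makes K = 15 hard: below is a minimal sketch for a toy K = 3, rate-1/2 code (generators 7, 5 octal); the same recursion applies at K = 15, only with 2^14 states. Message and error positions are arbitrary illustrative choices:

      G_TAPS = (0b111, 0b101)           # rate-1/2 generators (7, 5 octal)
      K = 3                             # toy constraint length; DSN target is 15
      N_STATES = 1 << (K - 1)           # 4 states here; 16384 for K = 15

      def branch_bits(state, b):
          reg = (b << (K - 1)) | state  # shift register, newest bit on top
          return [bin(reg & g).count("1") & 1 for g in G_TAPS], reg >> 1

      def encode(bits):
          state, out = 0, []
          for b in bits:
              o, state = branch_bits(state, b)
              out += o
          return out

      def viterbi(rx):
          INF = float("inf")
          metric = [0.0] + [INF] * (N_STATES - 1)
          paths = [[] for _ in range(N_STATES)]
          for t in range(0, len(rx), 2):
              nm, npth = [INF] * N_STATES, [None] * N_STATES
              for s in range(N_STATES):
                  if metric[s] == INF:
                      continue
                  for b in (0, 1):                  # extend both branches
                      o, ns = branch_bits(s, b)
                      m = metric[s] + (o[0] != rx[t]) + (o[1] != rx[t + 1])
                      if m < nm[ns]:                # keep the survivor path
                          nm[ns], npth[ns] = m, paths[s] + [b]
              metric, paths = nm, npth
          best = min(range(N_STATES), key=lambda s: metric[s])
          return paths[best]

      msg = [1, 0, 1, 1, 0, 0, 1, 0, 0]     # message ending in flush zeros
      rx = encode(msg)
      rx[3] ^= 1; rx[10] ^= 1               # inject two channel errors
      print(viterbi(rx) == msg)             # the errors are corrected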

  1. A continuous time-resolved measure decoded from EEG oscillatory activity predicts working memory task performance.

    PubMed

    Astrand, Elaine

    2018-06-01

Working memory (WM), crucial for successful behavioral performance in most of our everyday activities, holds a central role in goal-directed behavior. As task demands increase, inducing higher WM load, maintaining successful behavioral performance requires the brain to work at the higher end of its capacity. Because it depends on both external and internal factors, individual WM load likely varies in a continuous fashion. Whether a continuous, time-resolved measure that correlates with behavioral performance during a working memory task can be extracted has remained an open question. Multivariate pattern decoding was used to test whether a decoder constructed from two discrete levels of WM load can generalize to produce a continuous measure that predicts task performance. Specifically, a linear regression with L2-regularization was chosen with input features from EEG oscillatory activity recorded from healthy participants while performing the n-back task, n ∈ {1, 2}. The feasibility of extracting a continuous time-resolved measure that correlates positively with trial-by-trial working memory task performance is demonstrated (r = 0.47, p < 0.05). It is furthermore shown that this measure allows task performance to be predicted before action (r = 0.49, p < 0.05). We show that the extracted continuous measure makes it possible to study the temporal dynamics of the complex activation pattern of WM encoding during the n-back task. Specifically, temporally precise contributions of different spectral features are observed, which extends previous findings from traditional univariate approaches. These results constitute an important contribution towards a wide range of applications in the field of cognitive brain-machine interfaces. Monitoring mental processes related to attention and WM load could reduce the risk of errors in high-risk environments and thereby prevent devastating consequences, and using the continuous measure as neurofeedback opens up new possibilities for developing rehabilitation techniques for individuals with degraded WM capacity.

  2. A continuous time-resolved measure decoded from EEG oscillatory activity predicts working memory task performance

    NASA Astrophysics Data System (ADS)

    Astrand, Elaine

    2018-06-01

Objective. Working memory (WM), crucial for successful behavioral performance in most of our everyday activities, holds a central role in goal-directed behavior. As task demands increase, inducing higher WM load, maintaining successful behavioral performance requires the brain to work at the higher end of its capacity. Because it depends on both external and internal factors, individual WM load likely varies in a continuous fashion. Whether a continuous, time-resolved measure that correlates with behavioral performance during a working memory task can be extracted has remained an open question. Approach. Multivariate pattern decoding was used to test whether a decoder constructed from two discrete levels of WM load can generalize to produce a continuous measure that predicts task performance. Specifically, a linear regression with L2-regularization was chosen with input features from EEG oscillatory activity recorded from healthy participants while performing the n-back task, n ∈ {1, 2}. Main results. The feasibility of extracting a continuous time-resolved measure that correlates positively with trial-by-trial working memory task performance is demonstrated (r = 0.47, p < 0.05). It is furthermore shown that this measure allows task performance to be predicted before action (r = 0.49, p < 0.05). We show that the extracted continuous measure makes it possible to study the temporal dynamics of the complex activation pattern of WM encoding during the n-back task. Specifically, temporally precise contributions of different spectral features are observed, which extends previous findings from traditional univariate approaches. Significance. These results constitute an important contribution towards a wide range of applications in the field of cognitive brain–machine interfaces. Monitoring mental processes related to attention and WM load could reduce the risk of errors in high-risk environments and thereby prevent devastating consequences, and using the continuous measure as neurofeedback opens up new possibilities for developing rehabilitation techniques for individuals with degraded WM capacity.
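
    A minimal sketch of the decoding idea on synthetic data (placeholder features and labels, not the paper's EEG pipeline): train an L2-regularized linear model on the two discrete load levels, then read out graded predictions on held-out trials:

      import numpy as np
      from sklearn.linear_model import Ridge

      rng = np.random.default_rng(5)
      n_trials, n_feat = 300, 40                 # e.g. channels x frequency bands
      load = rng.integers(1, 3, n_trials)        # discrete training labels: 1 or 2
      X = rng.standard_normal((n_trials, n_feat))
      X[:, :8] += 0.7 * (load[:, None] - 1.5)    # load-sensitive features

      model = Ridge(alpha=10.0).fit(X[:200], load[:200])
      continuous_load = model.predict(X[200:])   # graded values, not just {1, 2}
      print(continuous_load[:5].round(2))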

  3. Supramolecular interaction of methotrexate with cucurbit[7]uril and analytical application

    NASA Astrophysics Data System (ADS)

    Chang, Yin-Xia; Zhang, Xiang-Mei; Duan, Xue-Chao; Liu, Fan; Du, Li-Ming

    2017-08-01

The supramolecular interaction between cucurbit[7]uril (CB[7]) as the host and the anti-cancer drug methotrexate (MTX) as the guest was studied using fluorescence spectroscopy, UV-visible absorption spectroscopy, 1H NMR, 2D NOESY, and theoretical calculations. The experimental results confirmed the formation of a 1:2 inclusion complex with CB[7] and indicated a simple and sensitive competitive method for the fluorescence detection of MTX. It was found that the fluorescence intensities of CB[7]-palmatine, CB[7]-berberine and CB[7]-coptisine were quenched linearly upon the addition of MTX. The linear ranges obtained in the detection of MTX were 0.1-15 μg mL-1, 0.2-15 μg mL-1, and 0.4-15 μg mL-1, with detection limits of 0.03 μg mL-1, 0.06 μg mL-1, and 0.13 μg mL-1, respectively. This method can be used for the determination of MTX in biological fluids. These results suggested that cucurbit[7]uril is a promising drug carrier for targeted MTX delivery and monitoring, with improved efficacy and reduced toxicity in normal tissues.

  4. Stimulation of muscle protein synthesis by somatotropin in pigs is independent of the somatotropin-induced increase in circulating insulin.

    PubMed

    Wilson, Fiona A; Orellana, Renán A; Suryawan, Agus; Nguyen, Hanh V; Jeyapalan, Asumthia S; Frank, Jason; Davis, Teresa A

    2008-07-01

Chronic treatment of growing pigs with porcine somatotropin (pST) promotes protein synthesis and doubles postprandial levels of insulin, a hormone that stimulates translation initiation. This study aimed to determine whether the pST-induced increase in skeletal muscle protein synthesis was mediated through an insulin-induced stimulation of translation initiation. After 7-10 days of pST (150 μg·kg−1·day−1) or control saline treatment, pancreatic glucose-amino acid clamps were performed in overnight-fasted pigs to reproduce 1) fasted (5 μU/ml), 2) fed control (25 μU/ml), and 3) fed pST-treated (50 μU/ml) insulin levels while glucose and amino acids were maintained at baseline fasting levels. Fractional protein synthesis rates and indexes of translation initiation were examined in skeletal muscle. Effectiveness of pST treatment was confirmed by reduced urea nitrogen and elevated insulin-like growth factor I levels in plasma. Skeletal muscle protein synthesis was independently increased by both insulin and pST. Insulin increased the phosphorylation of protein kinase B and the downstream effectors of the mammalian target of rapamycin, ribosomal protein S6 kinase, and eukaryotic initiation factor (eIF)4E-binding protein-1 (4E-BP1). Furthermore, insulin reduced inactive 4E-BP1·eIF4E complex association and increased active eIF4E·eIF4G complex formation, indicating enhanced eIF4F complex assembly. However, pST treatment did not alter translation initiation factor activation. We conclude that the pST-induced stimulation of skeletal muscle protein synthesis in growing pigs is independent of the insulin-associated activation of translation initiation.

  5. Stimulation of muscle protein synthesis by somatotropin in pigs is independent of the somatotropin-induced increase in circulating insulin

    PubMed Central

    Wilson, Fiona A.; Orellana, Renán A.; Suryawan, Agus; Nguyen, Hanh V.; Jeyapalan, Asumthia S.; Frank, Jason; Davis, Teresa A.

    2008-01-01

    Chronic treatment of growing pigs with porcine somatotropin (pST) promotes protein synthesis and doubles postprandial levels of insulin, a hormone that stimulates translation initiation. This study aimed to determine whether the pST-induced increase in skeletal muscle protein synthesis was mediated through an insulin-induced stimulation of translation initiation. After 7–10 days of pST (150 μg·kg−1·day−1) or control saline treatment, pancreatic glucose-amino acid clamps were performed in overnight-fasted pigs to reproduce 1) fasted (5 μU/ml), 2) fed control (25 μU/ml), and 3) fed pST-treated (50 μU/ml) insulin levels while glucose and amino acids were maintained at baseline fasting levels. Fractional protein synthesis rates and indexes of translation initiation were examined in skeletal muscle. Effectiveness of pST treatment was confirmed by reduced urea nitrogen and elevated insulin-like growth factor I levels in plasma. Skeletal muscle protein synthesis was independently increased by both insulin and pST. Insulin increased the phosphorylation of protein kinase B and the downstream effectors of the mammalian target of rapamycin, ribosomal protein S6 kinase, and eukaryotic initiation factor (eIF)4E-binding protein-1 (4E-BP1). Furthermore, insulin reduced inactive 4E-BP1·eIF4E complex association and increased active eIF4E·eIF4G complex formation, indicating enhanced eIF4F complex assembly. However, pST treatment did not alter translation initiation factor activation. We conclude that the pST-induced stimulation of skeletal muscle protein synthesis in growing pigs is independent of the insulin-associated activation of translation initiation. PMID:18460595

  6. Image interpolation used in three-dimensional range data compression.

    PubMed

    Zhang, Shaoze; Zhang, Jianqi; Huang, Xi; Liu, Delian

    2016-05-20

    Advances in the field of three-dimensional (3D) scanning have made the acquisition of 3D range data easier and easier. However, with the large size of 3D range data comes the challenge of storing and transmitting it. To address this challenge, this paper presents a framework to further compress 3D range data using image interpolation. We first use a virtual fringe-projection system to store 3D range data as images, and then apply the interpolation algorithm to the images to reduce their resolution to further reduce the data size. When the 3D range data are needed, the low-resolution image is scaled up to its original resolution by applying the interpolation algorithm, and then the scaled-up image is decoded and the 3D range data are recovered according to the decoded result. Experimental results show that the proposed method could further reduce the data size while maintaining a low rate of error.
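
    The pipeline reduces to downsample, store, and interpolate back; a sketch with scipy's spline zoom as the interpolator (the paper's specific interpolation algorithm and fringe-projection encoding are not reproduced here):

      import numpy as np
      from scipy.ndimage import zoom

      yy, xx = np.mgrid[0:128, 0:128]
      depth = np.sin(xx / 20.0) + np.cos(yy / 15.0)    # smooth synthetic range map

      small = zoom(depth, 0.5, order=3)                # "compressed" low-res image
      recon = zoom(small, 2.0, order=3)[:128, :128]    # interpolate back up

      rmse = np.sqrt(np.mean((recon - depth) ** 2))
      print("4x fewer samples, RMSE = %.4f" % rmse)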

  7. Acquiring the Complex English Orthography: A Triliteracy Advantage?

    ERIC Educational Resources Information Center

    Kahn-Horwitz, Janina; Schwartz, Mila; Share, David

    2011-01-01

    The "script-dependence hypothesis" was tested through the examination of the impact of Russian and Hebrew literacy on English orthographic knowledge needed for spelling and decoding among fifth graders. We compared the performance of three groups: Russian-Hebrew-speaking emerging triliterates, Russian-Hebrew-speaking emerging biliterates who were…

  8. High-throughput illumina strand-specific RNA sequencing library preparation

    USDA-ARS?s Scientific Manuscript database

    Conventional Illumina RNA-Seq does not have the resolution to decode the complex eukaryote transcriptome due to the lack of RNA polarity information. Strand-specific RNA sequencing (ssRNA-Seq) can overcome these limitations and as such is better suited for genome annotation, de novo transcriptome as...

  9. How Can I Help My Struggling Readers?

    ERIC Educational Resources Information Center

    Duke, Nell K.; Pressley, Michael

    2005-01-01

    The reasons some children struggle with reading are as varied as the children themselves. From trouble decoding words to problems retaining information, reading difficulties are complex. All kids, says the International Reading Association, "have a right to instruction designed with their specific needs in mind." The question is how to identify…

  10. Field Day: A Case Study examining scientists’ oral performance skills

    USDA-ARS?s Scientific Manuscript database

    Communication is a complex cyclic process wherein senders and receivers encode and decode information in an effort to reach a state of mutuality or mutual understanding. When the communication of scientific or technical information occurs in a public space, effective speakers follow a formula for co...

  11. Eliminating ambiguity in digital signals

    NASA Technical Reports Server (NTRS)

    Weber, W. J., III

    1979-01-01

A multiamplitude minimum shift keying (MAMSK) transmission system using a method of differential encoding overcomes the ambiguity problem associated with advanced digital-transmission techniques, with little or no penalty in transmission rate, error rate, or system complexity. The principle of the method is that, if the signal points are properly encoded and decoded, the bits are detected correctly regardless of phase ambiguities.
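
    In its simplest binary (DPSK-like) form, the principle can be sketched directly: information rides on phase transitions, so a constant phase rotation introduced by carrier recovery cancels in the decoder. MAMSK itself involves multiple amplitudes and finer ambiguities; this sketch shows only the binary core of the idea:

      import numpy as np

      bits = np.array([1, 0, 1, 1, 0, 0, 1])
      d = np.zeros(len(bits) + 1, dtype=int)
      for i, b in enumerate(bits):            # differential encoding
          d[i + 1] = d[i] ^ b

      symbols = 1 - 2 * d                     # BPSK mapping: 0 -> +1, 1 -> -1
      received = -symbols                     # 180-degree phase ambiguity

      rx = (received < 0).astype(int)         # hard decisions (all inverted)
      decoded = rx[1:] ^ rx[:-1]              # differential decoding
      print(np.array_equal(decoded, bits))    # True: the ambiguity cancels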

  12. Decoding the ecological function of accessory genome

    USDA-ARS?s Scientific Manuscript database

    Shiga toxin-producing Escherichia coli O157:H7 primarily resides in cattle asymptomatically, and can be transmitted to humans through food. A study by Lupolova et al applied a machine-learning approach to complex pan-genome information and predicted that only a small subset of bovine isolates have t...

  13. Evaluation of radioprotective activities Rhodiola imbricata Edgew--a high altitude plant.

    PubMed

    Arora, Rajesh; Chawla, Raman; Sagar, Ravinder; Prasad, Jagdish; Singh, Surendar; Kumar, Raj; Sharma, Ashok; Singh, Shikha; Sharma, Rakesh Kumar

    2005-05-01

The present study reports the radioprotective properties of a hydro-alcoholic rhizome extract of Rhodiola imbricata (code named REC-7004), a plant native to the high-altitude Himalayas. The radioprotective effect, along with its relevant superoxide ion scavenging, metal chelation, antioxidant, anti-lipid peroxidation and anti-hemolytic activities, was evaluated under both in vitro and in vivo conditions. Chemical analysis showed a high content of polyphenolics (0.971 +/- 0.01 mg% of quercetin). Absorption spectra analysis revealed constituents that absorb in the range of 220-290 nm, while high-performance liquid chromatography (HPLC) analysis confirmed the presence of four major peaks with retention times of 4.780, 5.767, 6.397 and 7.577 min. REC-7004 was found to lower lipid oxidation significantly (p < 0.05) at concentrations of 8 and 80 μg/ml as compared to reduced glutathione; the optimally protective dose was 80 μg/ml, which showed 59.5% inhibition of the induction of linoleic acid degradation within the first 24 h. The metal chelation activity of REC-7004 increased concomitantly from 1 to 50 μg/ml. REC-7004 (10-50 μg/ml) exhibited significant metal chelation activity (p < 0.05) as compared to control, and the maximum percentage inhibition (30%) of formation of the iron-2,2'-bipyridyl complex was observed at 50 μg/ml, which correlated well with quercetin (34.9%), taken as the standard. The reducing power of REC-7004 increased in a dose-dependent manner. The absorption unit value of REC-7004 was significantly lower (0.0183 +/- 0.0033) than that of butylated hydroxytoluene, a standard antioxidant (0.230 +/- 0.091), confirming its high reducing ability. The superoxide ion scavenging ability of REC-7004 exhibited a dose-dependent increase (1-100 μg/ml) and was significantly higher (p < 0.05) than that of quercetin at lower concentrations (1-10 μg/ml), while at 100 μg/ml both quercetin and REC-7004 scavenged over 90% of superoxide anions. An MTT assay in the U87 cell line revealed an increase in percent survival of cells at doses between 25 and 125 μg/ml in the drug + radiation group. In vivo evaluation of radioprotective efficacy in mice revealed that intraperitoneal administration of REC-7004 (maximally effective dose: 400 mg/kg b.w.) 30 min prior to lethal (10 Gy) total-body gamma-irradiation rendered 83.3% survival. The ability of REC-7004 to inhibit lipid peroxidation induced by iron/ascorbate, radiation (250 Gy) and their combination was also investigated and was found to decrease in a dose-dependent manner (0.05-2 mg/ml). The maximum percent inhibition of formation of the MDA-TBA complex at 2 mg/ml for iron/ascorbate, radiation (250 Gy) and their combination was 53.78, 63.07, and 51.76%, respectively, comparable to that of quercetin. REC-7004 (1 μg/ml) also exhibited significant anti-hemolytic capacity by preventing radiation-induced membrane degeneration of human erythrocytes. In conclusion, Rhodiola renders in vitro and in vivo radioprotection via multifarious mechanisms that act in a synergistic manner.

  14. State-based decoding of hand and finger kinematics using neuronal ensemble and LFP activity during dexterous reach-to-grasp movements

    PubMed Central

    Mollazadeh, Mohsen; Davidson, Adam G.; Schieber, Marc H.; Thakor, Nitish V.

    2013-01-01

    The performance of brain-machine interfaces (BMIs) that continuously control upper limb neuroprostheses may benefit from distinguishing periods of posture and movement so as to prevent inappropriate movement of the prosthesis. Few studies, however, have investigated how decoding behavioral states and detecting the transitions between posture and movement could be used autonomously to trigger a kinematic decoder. We recorded simultaneous neuronal ensemble and local field potential (LFP) activity from microelectrode arrays in primary motor cortex (M1) and dorsal (PMd) and ventral (PMv) premotor areas of two male rhesus monkeys performing a center-out reach-and-grasp task, while upper limb kinematics were tracked with a motion capture system with markers on the dorsal aspect of the forearm, hand, and fingers. A state decoder was trained to distinguish four behavioral states (baseline, reaction, movement, hold), while a kinematic decoder was trained to continuously decode hand end point position and 18 joint angles of the wrist and fingers. LFP amplitude most accurately predicted transition into the reaction (62%) and movement (73%) states, while spikes most accurately decoded arm, hand, and finger kinematics during movement. Using an LFP-based state decoder to trigger a spike-based kinematic decoder [r = 0.72, root mean squared error (RMSE) = 0.15] significantly improved decoding of reach-to-grasp movements from baseline to final hold, compared with either a spike-based state decoder combined with a spike-based kinematic decoder (r = 0.70, RMSE = 0.17) or a spike-based kinematic decoder alone (r = 0.67, RMSE = 0.17). Combining LFP-based state decoding with spike-based kinematic decoding may be a valuable step toward the realization of BMI control of a multifingered neuroprosthesis performing dexterous manipulation. PMID:23536714
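
    A toy sketch of the two-stage architecture (simulated signals; LDA and least squares as stand-ins for the paper's actual decoders): a state classifier on LFP-like features gates a continuous decoder on spike-like features, so output is only produced in the decoded movement state:

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(6)
      T, n_lfp, n_units = 500, 16, 40
      state = (np.sin(np.arange(T) / 25.0) > 0).astype(int)   # 0 = hold, 1 = move

      lfp = rng.standard_normal((T, n_lfp)) + 0.8 * state[:, None]
      spikes = rng.poisson(2.0, (T, n_units)).astype(float)
      velocity = state * np.sin(np.arange(T) / 10.0)
      spikes[:, :10] += 1.5 * velocity[:, None]               # velocity-tuned units

      gate = LinearDiscriminantAnalysis().fit(lfp[:300], state[:300])
      kin = LinearRegression().fit(spikes[:300], velocity[:300])

      is_moving = gate.predict(lfp[300:])                     # state decoder...
      v_hat = np.where(is_moving == 1, kin.predict(spikes[300:]), 0.0)  # ...gates
      print("corr:", np.corrcoef(v_hat, velocity[300:])[0, 1].round(2))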

  15. Alexithymia and the Processing of Emotional Facial Expressions (EFEs): Systematic Review, Unanswered Questions and Further Perspectives

    PubMed Central

    Grynberg, Delphine; Chang, Betty; Corneille, Olivier; Maurage, Pierre; Vermeulen, Nicolas

    2012-01-01

Alexithymia is characterized by difficulties in identifying, differentiating and describing feelings. A high prevalence of alexithymia has often been observed in clinical disorders characterized by low social functioning. This review aims to assess the association between alexithymia and the ability to decode emotional facial expressions (EFEs) within clinical and healthy populations. More precisely, this review has four main objectives: (1) to assess whether alexithymia is a better predictor of the ability to decode EFEs than the diagnosis of clinical disorder; (2) to assess the influence of comorbid factors (depression and anxiety disorder) on the ability to decode EFEs; (3) to investigate whether deficits in decoding EFEs are specific to some levels of processing or task types; and (4) to investigate whether the deficits are specific to particular EFEs. Twenty-four studies (behavioural and neuroimaging) were identified through a computerized literature search of the PsycINFO, PubMed, and Web of Science databases from 1990 to 2010. Data on methodology, clinical characteristics, and possible confounds were analyzed. The review revealed that: (1) alexithymia is associated with deficits in labelling EFEs among clinical disorders; (2) the levels of depression and anxiety partially account for the decoding deficits; (3) alexithymia is associated with reduced perceptual abilities, and is likely to be associated with impaired semantic representations of emotional concepts; and (4) alexithymia is associated with neither specific EFEs nor a specific valence. These studies are discussed with respect to the processes involved in the recognition of EFEs. Future directions for research on emotion perception are also discussed. PMID:22927931

  16. Differential patterns of 2D location versus depth decoding along the visual hierarchy.

    PubMed

    Finlayson, Nonie J; Zhang, Xiaoli; Golomb, Julie D

    2017-02-15

    Visual information is initially represented as 2D images on the retina, but our brains are able to transform this input to perceive our rich 3D environment. While many studies have explored 2D spatial representations or depth perception in isolation, it remains unknown if or how these processes interact in human visual cortex. Here we used functional MRI and multi-voxel pattern analysis to investigate the relationship between 2D location and position-in-depth information. We stimulated different 3D locations in a blocked design: each location was defined by horizontal, vertical, and depth position. Participants remained fixated at the center of the screen while passively viewing the peripheral stimuli with red/green anaglyph glasses. Our results revealed a widespread, systematic transition throughout visual cortex. As expected, 2D location information (horizontal and vertical) could be strongly decoded in early visual areas, with reduced decoding higher along the visual hierarchy, consistent with known changes in receptive field sizes. Critically, we found that the decoding of position-in-depth information tracked inversely with the 2D location pattern, with the magnitude of depth decoding gradually increasing from intermediate to higher visual and category regions. Representations of 2D location information became increasingly location-tolerant in later areas, where depth information was also tolerant to changes in 2D location. We propose that spatial representations gradually transition from 2D-dominant to balanced 3D (2D and depth) along the visual hierarchy. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Bayesian decoding using unsorted spikes in the rat hippocampus

    PubMed Central

    Layton, Stuart P.; Chen, Zhe; Wilson, Matthew A.

    2013-01-01

    A fundamental task in neuroscience is to understand how neural ensembles represent information. Population decoding is a useful tool to extract information from neuronal populations based on the ensemble spiking activity. We propose a novel Bayesian decoding paradigm to decode unsorted spikes in the rat hippocampus. Our approach uses a direct mapping between spike waveform features and covariates of interest and avoids accumulation of spike sorting errors. Our decoding paradigm is nonparametric, encoding model-free for representing stimuli, and extracts information from all available spikes and their waveform features. We apply the proposed Bayesian decoding algorithm to a position reconstruction task for freely behaving rats based on tetrode recordings of rat hippocampal neuronal activity. Our detailed decoding analyses demonstrate that our approach is efficient and better utilizes the available information in the nonsortable hash than the standard sorting-based decoding algorithm. Our approach can be adapted to an online encoding/decoding framework for applications that require real-time decoding, such as brain-machine interfaces. PMID:24089403
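
    For contrast with the clusterless approach, the standard sorted-spike Bayesian decoder that such methods improve on can be sketched in a few lines: Poisson spike counts, Gaussian place fields, and a flat prior over position (all synthetic placeholders here):

      import numpy as np

      rng = np.random.default_rng(7)
      n_pos, n_cells = 50, 30
      positions = np.linspace(0, 1, n_pos)
      centers = rng.uniform(0, 1, n_cells)
      # Gaussian place fields: expected spike counts per time bin
      tuning = 0.1 + 5.0 * np.exp(-((positions[:, None] - centers) ** 2) / 0.01)

      true_pos = 17
      counts = rng.poisson(tuning[true_pos])        # observed spike counts

      # log P(pos | counts): Poisson log-likelihood plus a flat prior
      loglik = (counts * np.log(tuning) - tuning).sum(axis=1)
      posterior = np.exp(loglik - loglik.max())
      posterior /= posterior.sum()
      print("decoded bin:", int(posterior.argmax()), "true bin:", true_pos)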

  18. Study report recommendations for the next generation Range Safety System (RSS) Integrated Receiver/Decoder (IRD)

    NASA Technical Reports Server (NTRS)

    Crosby, Robert H.

    1992-01-01

The Integrated Receiver/Decoder (IRD) currently used on the Space Shuttle was designed in 1980 and earlier. Over the past 12 years, several parts have become obsolete or difficult to obtain. As directed by the Marshall Space Flight Center, a primary objective is to investigate updating the IRD design using the latest technology subsystems. To take advantage of experience with the current designs, an analysis of failures and a review of discrepancy reports, material review board actions, scrap, etc., are given. A recommended new design, designated the Advanced Receiver/Decoder (ARD), is presented. This design uses the latest technology components to simplify circuits, improve performance, reduce size and cost, and improve reliability. A self-test command is recommended that can improve and simplify operational procedures. Here, the new design is contrasted with the old. Possible simplification of the total Range Safety System is discussed, as is a single-step crypto technique that can improve and simplify operational procedures.

  19. Rate adaptive multilevel coded modulation with high coding gain in intensity modulation direct detection optical communication

    NASA Astrophysics Data System (ADS)

    Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao

    2018-02-01

A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on fixed code length, together with a corresponding decoding scheme, is proposed. The RA-MLC scheme combines multilevel coding and modulation technology with binary linear block codes at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to a preset rule, and the signal is then transmitted through a standard single-mode fiber span of 100 km. The receiver improves decoding accuracy by passing soft information through the different layers, which enhances performance. Simulations of an intensity-modulation direct-detection optical communication system are carried out in MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 at an optical signal-to-noise ratio of 20.7 dB. It also reduced the number of decoders by 72% and realized 22 rate adaptations without significantly increasing computing time. The coding gain is increased by 7.3 dB at BER = 1E-3.

  20. Decoding Lower Limb Muscle Activity and Kinematics from Cortical Neural Spike Trains during Monkey Performing Stand and Squat Movements

    PubMed Central

    Ma, Xuan; Ma, Chaolin; Huang, Jian; Zhang, Peng; Xu, Jiang; He, Jiping

    2017-01-01

An extensive literature has described approaches for decoding upper limb kinematics or muscle activity from multichannel cortical spike recordings for brain-machine interface (BMI) applications. However, similar work on the lower limb remains relatively scarce. We previously reported a system for training monkeys to perform visually guided stand and squat tasks. The current study, as a follow-up extension, investigates whether lower limb kinematics and muscle activity, characterized by electromyography (EMG) signals, during stand/squat movements can be accurately decoded from neural spike trains in primary motor cortex (M1). Two monkeys were used in this study. Subdermal intramuscular EMG electrodes were implanted in 8 right leg/thigh muscles. With ample data collected from neurons over a large brain area, we performed a spike-triggered average (SpTA) analysis and obtained a series of density contours which revealed the spatial distributions of the muscle-innervating neurons corresponding to each given muscle. Guided by these results, we identified the locations optimal for chronic electrode implantation and subsequently carried out chronic neural recordings. A recursive Bayesian estimation framework was proposed for decoding EMG signals together with kinematics from M1 spike trains. Two specific algorithms were implemented: a standard Kalman filter and an unscented Kalman filter. For the latter, an artificial neural network was incorporated to deal with the nonlinearity in neural tuning. High correlation coefficients and signal-to-noise ratios between the predicted and the actual data were achieved for both EMG signals and kinematics in both monkeys. Higher decoding accuracy and a faster convergence rate were achieved with the unscented Kalman filter. These results demonstrate that lower limb EMG signals and kinematics during monkey stand/squat can be accurately decoded from a group of M1 neurons with the proposed algorithms. Our findings provide new insights for extending current BMI design concepts and techniques for the upper limbs to lower limb settings. Brain-controlled exoskeletons, prostheses and neuromuscular electrical stimulators for the lower limbs are expected to be developed, enabling subjects to manipulate complex biomechatronic devices with the mind in a more harmonized manner. PMID:28223914
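
    A minimal sketch of the standard (linear) Kalman decoder used as the baseline above, with a one-dimensional state standing in for EMG or kinematics and least-squares fits for the model parameters (all signals synthetic placeholders):

      import numpy as np

      rng = np.random.default_rng(8)
      T, n_units = 400, 25
      x = np.sin(np.arange(T) / 15.0)                  # 1-D "EMG/kinematics"
      C_true = rng.standard_normal(n_units)
      Y = np.outer(x, C_true) + 0.5 * rng.standard_normal((T, n_units))

      # Fit the state-space model by least squares on "training" data
      A = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])         # state transition
      Q = np.var(x[1:] - A * x[:-1])                   # process noise
      C = np.linalg.lstsq(x[:, None], Y, rcond=None)[0][0]   # observation model
      R = np.cov((Y - np.outer(x, C)).T)               # observation noise

      xh, P, est = 0.0, 1.0, []
      for t in range(T):                               # Kalman recursion
          xp, Pp = A * xh, A * P * A + Q               # predict
          S = Pp * np.outer(C, C) + R                  # innovation covariance
          K = Pp * np.linalg.solve(S, C)               # Kalman gain
          xh = xp + K @ (Y[t] - C * xp)                # update with spikes Y[t]
          P = (1.0 - K @ C) * Pp
          est.append(xh)

      print("corr: %.2f" % np.corrcoef(est, x)[0, 1])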

  1. Translating the “Banana Genome” to Delineate Stress Resistance, Dwarfing, Parthenocarpy and Mechanisms of Fruit Ripening

    PubMed Central

    Dash, Prasanta K.; Rai, Rhitu

    2016-01-01

Evolutionarily frozen, genetically sterile and globally iconic, the fruit "banana" remained untouched by the green revolution and, to this day, researchers face intrinsic impediments to its varietal improvement. Recently, this wonder crop entered the genomics era with the decoding of the structural genome of the double-haploid Pahang genotype (AA genome constitution) of Musa acuminata. Its complex genome, decoded by hybrid sequencing strategies, revealed a panoply of genes and transcription factors involved in the process of sucrose conversion that imparts sweetness to its fruit. Historically, banana has faced the wrath of pandemic bacterial, fungal, and viral diseases and a multitude of abiotic stresses that have ruined the livelihoods of small and marginal farmers and destroyed commercial plantations. Decoding the structural genome of this climacteric fruit has given impetus to a deeper understanding of the repertoire of genes involved in disease resistance, to understanding the mechanism of dwarfing in order to develop an ideal plant type, and to unraveling the processes of parthenocarpy and fruit ripening for better fruit quality. Further, the addition of comparative genomics will usher in the integration of information from its decoded genome and other monocots into field applications in banana related, but not limited, to yield enhancement, food security, livelihood assurance, and energy sustainability. In this mini review, we discuss pre- and post-genomic discoveries and highlight accomplishments in structural genomics, genetic engineering and forward genetics, with an aim to target genes and transcription factors for translational research in banana. PMID:27833619

  2. An LDPC Decoder Architecture for Wireless Sensor Network Applications

    PubMed Central

    Giancarlo Biroli, Andrea Dario; Martina, Maurizio; Masera, Guido

    2012-01-01

The pervasive use of wireless sensors in a growing spectrum of human activities reinforces the need for devices with low energy dissipation. In this work, coded communication between a pair of wireless sensor devices is considered as a method to reduce the dissipated energy per transmitted bit with respect to uncoded communication. Different Low Density Parity Check (LDPC) codes are considered for this purpose, and post-layout results are shown for a low-area, low-energy decoder, which offers energy savings with respect to the uncoded solution in the range of 40%–80%, depending on the considered environment, distance and bit error rate. PMID:22438724

  3. An LDPC decoder architecture for wireless sensor network applications.

    PubMed

    Biroli, Andrea Dario Giancarlo; Martina, Maurizio; Masera, Guido

    2012-01-01

The pervasive use of wireless sensors in a growing spectrum of human activities reinforces the need for devices with low energy dissipation. In this work, coded communication between a pair of wireless sensor devices is considered as a method to reduce the dissipated energy per transmitted bit with respect to uncoded communication. Different Low Density Parity Check (LDPC) codes are considered for this purpose, and post-layout results are shown for a low-area, low-energy decoder, which offers energy savings with respect to the uncoded solution in the range of 40%-80%, depending on the considered environment, distance and bit error rate.

  4. The impact of command signal power distribution, processing delays, and speed scaling on neurally-controlled devices.

    PubMed

    Marathe, A R; Taylor, D M

    2015-08-01

    Decoding algorithms for brain-machine interfacing (BMI) are typically only optimized to reduce the magnitude of decoding errors. Our goal was to systematically quantify how four characteristics of BMI command signals impact closed-loop performance: (1) error magnitude, (2) distribution of different frequency components in the decoding errors, (3) processing delays, and (4) command gain. To systematically evaluate these different command features and their interactions, we used a closed-loop BMI simulator where human subjects used their own wrist movements to command the motion of a cursor to targets on a computer screen. Random noise with three different power distributions and four different relative magnitudes was added to the ongoing cursor motion in real time to simulate imperfect decoding. These error characteristics were tested with four different visual feedback delays and two velocity gains. Participants had significantly more trouble correcting for errors with a larger proportion of low-frequency, slow-time-varying components than they did with jittery, higher-frequency errors, even when the error magnitudes were equivalent. When errors were present, a movement delay often increased the time needed to complete the movement by an order of magnitude more than the delay itself. Scaling down the overall speed of the velocity command can actually speed up target acquisition time when low-frequency errors and delays are present. This study is the first to systematically evaluate how the combination of these four key command signal features (including the relatively-unexplored error power distribution) and their interactions impact closed-loop performance independent of any specific decoding method. The equations we derive relating closed-loop movement performance to these command characteristics can provide guidance on how best to balance these different factors when designing BMI systems. The equations reported here also provide an efficient way to compare a diverse range of decoding options offline.

  5. The impact of command signal power distribution, processing delays, and speed scaling on neurally-controlled devices

    NASA Astrophysics Data System (ADS)

    Marathe, A. R.; Taylor, D. M.

    2015-08-01

    Objective. Decoding algorithms for brain-machine interfacing (BMI) are typically only optimized to reduce the magnitude of decoding errors. Our goal was to systematically quantify how four characteristics of BMI command signals impact closed-loop performance: (1) error magnitude, (2) distribution of different frequency components in the decoding errors, (3) processing delays, and (4) command gain. Approach. To systematically evaluate these different command features and their interactions, we used a closed-loop BMI simulator where human subjects used their own wrist movements to command the motion of a cursor to targets on a computer screen. Random noise with three different power distributions and four different relative magnitudes was added to the ongoing cursor motion in real time to simulate imperfect decoding. These error characteristics were tested with four different visual feedback delays and two velocity gains. Main results. Participants had significantly more trouble correcting for errors with a larger proportion of low-frequency, slow-time-varying components than they did with jittery, higher-frequency errors, even when the error magnitudes were equivalent. When errors were present, a movement delay often increased the time needed to complete the movement by an order of magnitude more than the delay itself. Scaling down the overall speed of the velocity command can actually speed up target acquisition time when low-frequency errors and delays are present. Significance. This study is the first to systematically evaluate how the combination of these four key command signal features (including the relatively-unexplored error power distribution) and their interactions impact closed-loop performance independent of any specific decoding method. The equations we derive relating closed-loop movement performance to these command characteristics can provide guidance on how best to balance these different factors when designing BMI systems. The equations reported here also provide an efficient way to compare a diverse range of decoding options offline.
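
    The central manipulation, equal-magnitude errors with different spectral content, can be sketched as follows (filter orders, cutoffs, and the hold-still command are arbitrary illustrative choices): low-frequency error of the same RMS integrates into much larger cursor excursions than high-frequency jitter.

      import numpy as np
      from scipy.signal import butter, filtfilt

      rng = np.random.default_rng(9)
      fs, T = 60.0, 600                      # 60 Hz update rate, 10 s
      white = rng.standard_normal(T)

      b_lo, a_lo = butter(2, 1.0 / (fs / 2))                 # < 1 Hz errors
      b_hi, a_hi = butter(2, 8.0 / (fs / 2), btype="high")   # > 8 Hz errors
      low = filtfilt(b_lo, a_lo, white)
      high = filtfilt(b_hi, a_hi, white)
      low *= white.std() / low.std()         # equalize error magnitude (RMS)
      high *= white.std() / high.std()

      command = np.zeros(T)                  # intended velocity: hold still
      for name, err in [("low-frequency", low), ("high-frequency", high)]:
          drift = np.cumsum(command + err) / fs              # cursor position
          print(name, "peak position excursion: %.1f" % np.abs(drift).max())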

  6. 47 CFR 11.33 - EAS Decoder.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... decoders manufactured after August 1, 2003 must provide a means to permit the selective display and logging... upgrade their decoders on an optional basis to include a selective display and logging capability for EAS... decoders after February 1, 2004 must install decoders that provide a means to permit the selective display...

  7. A real-time MPEG software decoder using a portable message-passing library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwong, Man Kam; Tang, P.T. Peter; Lin, Biquan

    1995-12-31

We present a real-time MPEG software decoder that uses message-passing libraries such as MPL, p4 and MPI. The parallel MPEG decoder currently runs on the IBM SP system but can be easily ported to other parallel machines. This paper discusses our parallel MPEG decoding algorithm as well as the parallel programming environment under which it runs. Several technical issues are discussed, including balancing of decoding speed, memory limitations, I/O capacities, and optimization of MPEG decoding components. This project shows that a real-time portable software MPEG decoder is feasible on a general-purpose parallel machine.
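
    The coarse-grained parallelization idea can be sketched with a modern message-passing binding (mpi4py here; the paper used MPL/p4/MPI from compiled code). decode_gop is a hypothetical placeholder for the real MPEG work; run the script under mpiexec with several ranks:

      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_gops = 32                              # groups of pictures in the stream
      my_gops = range(rank, n_gops, size)      # static round-robin assignment

      def decode_gop(i):                       # placeholder for real decoding
          return f"gop {i} decoded by rank {rank}"

      results = [decode_gop(i) for i in my_gops]
      all_results = comm.gather(results, root=0)   # collect at the master rank
      if rank == 0:
          print(sum(all_results, []))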

  8. On complexity of trellis structure of linear block codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1990-01-01

The trellis structure of linear block codes (LBCs) is discussed. The state and branch complexities of a trellis diagram (TD) for an LBC are investigated. The TD with the minimum number of states is said to be minimal. The branch complexity of a minimal TD for an LBC is expressed in terms of the dimensions of specific subcodes of the given code. Then upper and lower bounds are derived on the number of states of a minimal TD for an LBC, and it is shown that a cyclic (or shortened cyclic) code is the worst in terms of state complexity among the LBCs of the same length and dimension. Furthermore, it is shown that the structural complexity of a minimal TD for an LBC depends on the order of its bit positions. This fact suggests that an appropriate permutation of the bit positions of a code may result in an equivalent code with a much simpler minimal TD. Boolean polynomial representation of the codewords of an LBC is also considered. This representation helps in the study of the trellis structure of the code. Boolean polynomial representation of a code is applied to construct its minimal TD. Particular emphasis is given to the construction of minimal trellises for Reed-Muller codes and for the extended and permuted binary primitive BCH codes which contain Reed-Muller codes as subcodes. Finally, the structural complexity of minimal trellises for the extended and permuted double-error-correcting BCH codes is analyzed and presented. It is shown that these codes have relatively simple trellis structure and hence can be decoded with the Viterbi decoding algorithm.
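
    The state-complexity notion above has a compact computational form: at boundary i, the minimal trellis state-space dimension is s_i = k - k_i^p - k_i^f, where k_i^p and k_i^f are the dimensions of the past and future subcodes (codewords vanishing on positions >= i, respectively < i). A sketch for the (7, 4) Hamming code, with ranks computed over GF(2):

      import numpy as np

      def gf2_rank(M):
          M = M.copy() % 2
          r = 0
          for c in range(M.shape[1]):
              rows = np.nonzero(M[r:, c])[0]
              if len(rows) == 0:
                  continue
              M[[r, r + rows[0]]] = M[[r + rows[0], r]]        # pivot swap
              M[(M[:, c] == 1) & (np.arange(len(M)) != r)] ^= M[r]
              r += 1
              if r == M.shape[0]:
                  break
          return r

      G = np.array([[1, 0, 0, 0, 1, 1, 0],
                    [0, 1, 0, 0, 0, 1, 1],
                    [0, 0, 1, 0, 1, 1, 1],
                    [0, 0, 0, 1, 1, 0, 1]], dtype=int)
      k, n = G.shape
      for i in range(n + 1):
          k_past = k - gf2_rank(G[:, i:])    # codewords zero on positions >= i
          k_fut = k - gf2_rank(G[:, :i])     # codewords zero on positions < i
          print("boundary", i, "state dimension", k - k_past - k_fut)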

  9. Toward a New Method of Decoding Algebraic Codes Using Groebner Bases

    DTIC Science & Technology

    1993-10-01

variables over GF(2^m). A celebrated algorithm by Buchberger produces a reduced Groebner basis of that ideal. It turns out that, since the common roots of all the polynomials in the ideal are a set of isolated points, this reduced Groebner basis is in triangular form, and the univariate polynomial in that
  10. Tutorial on Reed-Solomon error correction coding

    NASA Technical Reports Server (NTRS)

    Geisel, William A.

    1990-01-01

    This tutorial attempts to provide a frank, step-by-step approach to Reed-Solomon (RS) error correction coding. RS encoding and RS decoding both with and without erasing code symbols are emphasized. There is no need to present rigorous proofs and extreme mathematical detail. Rather, the simple concepts of groups and fields, specifically Galois fields, are presented with a minimum of complexity. Before RS codes are presented, other block codes are presented as a technical introduction into coding. A primitive (15, 9) RS coding example is then completely developed from start to finish, demonstrating the encoding and decoding calculations and a derivation of the famous error-locator polynomial. The objective is to present practical information about Reed-Solomon coding in a manner such that it can be easily understood.
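
    In the same spirit, the encoding half of a (15, 9) RS example over GF(16) (primitive polynomial x^4 + x + 1) fits in a few lines; root conventions for the generator polynomial vary between texts, and this sketch follows one common choice (roots a^1 through a^6):

      # Build GF(16) log/antilog tables with primitive element a = x
      exp, log = [0] * 30, [0] * 16
      v = 1
      for i in range(15):
          exp[i] = v
          log[v] = i
          v <<= 1
          if v & 0x10:
              v ^= 0b10011                  # reduce modulo x^4 + x + 1
      for i in range(15, 30):
          exp[i] = exp[i - 15]

      def gmul(a, b):                       # multiplication in GF(16)
          return 0 if a == 0 or b == 0 else exp[log[a] + log[b]]

      # g(x) = (x - a)(x - a^2)...(x - a^6), coefficients highest degree first
      g = [1]
      for i in range(1, 7):
          root = exp[i]
          shifted = g + [0]                             # x * g(x)
          scaled = [0] + [gmul(root, c) for c in g]     # root * g(x)
          g = [a ^ b for a, b in zip(shifted, scaled)]

      def rs_encode(msg):                   # msg: 9 symbols, values 0..15
          rem = list(msg) + [0] * 6         # x^6 * m(x)
          for i in range(9):                # long division by monic g(x)
              coef = rem[i]
              if coef:
                  for j in range(1, 7):
                      rem[i + j] ^= gmul(coef, g[j])
          return list(msg) + rem[9:]        # systematic: message + 6 parities

      def poly_eval(p, x):                  # Horner's rule, highest degree first
          y = 0
          for c in p:
              y = gmul(y, x) ^ c
          return y

      cw = rs_encode([1, 2, 3, 4, 5, 6, 7, 8, 9])
      # All six generator roots are roots of the codeword polynomial
      print(all(poly_eval(cw, exp[i]) == 0 for i in range(1, 7)))   # True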

  11. Evolution of Protein Synthesis from an RNA World

    PubMed Central

    Noller, Harry F.

    2012-01-01

    SUMMARY Because of the molecular complexity of the ribosome and protein synthesis, it is a challenge to imagine how translation could have evolved from a primitive RNA World. Two specific suggestions are made here to help to address this, involving separate evolution of the peptidyl transferase and decoding functions. First, it is proposed that translation originally arose not to synthesize functional proteins, but to provide simple (perhaps random) peptides that bound to RNA, increasing its available structure space, and therefore its functional capabilities. Second, it is proposed that the decoding site of the ribosome evolved from a mechanism for duplication of RNA. This process involved homodimeric “duplicator RNAs,” resembling the anticodon arms of tRNAs, which directed ligation of trinucleotides in response to an RNA template. PMID:20610545

  12. FPGA implementation of high-performance QC-LDPC decoder for optical communications

    NASA Astrophysics Data System (ADS)

    Zou, Ding; Djordjevic, Ivan B.

    2015-01-01

    Forward error correction is one of the key technologies enabling next-generation high-speed fiber-optic communications. Quasi-cyclic (QC) low-density parity-check (LDPC) codes have been considered one of the most promising candidates due to their large coding gain and low implementation complexity. In this paper, we present our designed QC-LDPC code with girth 10 and 25% overhead based on pairwise balanced design. By FPGA-based emulation, we demonstrate that the 5-bit soft-decision LDPC decoder can achieve an 11.8 dB net coding gain with no error floor down to a BER of 10^-15, without using any outer code or post-processing method. We believe that the proposed single QC-LDPC code is a promising solution for 400 Gb/s optical communication systems and beyond.
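
    For orientation, a floating-point min-sum decoder of the kind such designs quantize is sketched below. This is only an illustrative toy: the paper's 5-bit fixed-point FPGA datapath and its girth-10 QC code are not modeled, and the tiny parity-check matrix is a stand-in.

    ```python
    # Toy min-sum LDPC-style decoder (floating point, flooding schedule).
    import numpy as np

    def min_sum_decode(H, llr, iters=20):
        m, n = H.shape
        msgs = np.zeros((m, n))                        # check-to-variable messages
        for _ in range(iters):
            total = llr + msgs.sum(axis=0)
            v2c = np.where(H, total - msgs, 0.0)       # extrinsic variable-to-check
            for i in range(m):
                idx = np.flatnonzero(H[i])
                mags = np.abs(v2c[i, idx])
                signs = np.sign(v2c[i, idx])
                signs[signs == 0] = 1.0
                prod = np.prod(signs)
                m1_pos = int(np.argmin(mags))
                m1 = mags[m1_pos]
                m2 = np.min(np.delete(mags, m1_pos)) if len(idx) > 1 else m1
                for t, j in enumerate(idx):
                    # product of the *other* signs, min of the *other* magnitudes
                    msgs[i, j] = prod * signs[t] * (m2 if t == m1_pos else m1)
            hard = ((llr + msgs.sum(axis=0)) < 0).astype(int)
            if not ((H @ hard) % 2).any():             # syndrome check
                break
        return hard

    # (7,4) Hamming parity-check matrix as a stand-in for a real LDPC code
    H = np.array([[1,1,0,1,1,0,0],
                  [1,0,1,1,0,1,0],
                  [0,1,1,1,0,0,1]])
    llr = np.array([2.1, -0.8, 1.5, 0.3, 1.9, 1.2, 0.7])  # positive LLR = bit 0
    print(min_sum_decode(H, llr))
    ```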

  13. Optimal decoding and information transmission in Hodgkin-Huxley neurons under metabolic cost constraints.

    PubMed

    Kostal, Lubomir; Kobayashi, Ryota

    2015-10-01

    Information theory quantifies the ultimate limits on reliable information transfer by means of the channel capacity. However, the channel capacity is known to be an asymptotic quantity, assuming unlimited metabolic cost and computational power. We investigate a single-compartment Hodgkin-Huxley type neuronal model under the spike-rate coding scheme and address how the metabolic cost and the decoding complexity affect the optimal information transmission. We find that the sub-threshold stimulation regime, although attaining the smallest capacity, allows for the most efficient balance between the information transmission and the metabolic cost. Furthermore, we determine post-synaptic firing rate histograms that are optimal from the information-theoretic point of view, which enables the comparison of our results with experimental data.

  14. Form + Theme + Context: Balancing Considerations for Meaningful Art Learning

    ERIC Educational Resources Information Center

    Sandell, Renee

    2006-01-01

    Today's students need visual literacy skills and knowledge that enable them to encode concepts as well as decode the meaning of society's images, ideas, and media of the past as well as the increasingly complex visual world. In this article, the author discusses how art teachers can help students understand the increasingly visual/material…

  15. Efficient Bit-to-Symbol Likelihood Mappings

    NASA Technical Reports Server (NTRS)

    Moision, Bruce E.; Nakashima, Michael A.

    2010-01-01

    This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings that represent a significant portion of the complexity of an error-correction code decoder for high-order constellations. A recent implementation of the algorithm in hardware has yielded an 8-percent reduction in overall area relative to the prior design.
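
    A common form of symbol-to-bit likelihood mapping is the max-log approximation, sketched below for a toy Gray-labeled 4-PAM constellation. This illustrates the operation being accelerated, not the specific NASA algorithm or constellation.

    ```python
    # Max-log bit LLRs from symbol likelihoods for a Gray-labeled 4-PAM constellation.
    import numpy as np

    points = np.array([-3.0, -1.0, 1.0, 3.0])
    labels = [(0, 0), (0, 1), (1, 1), (1, 0)]   # 2-bit Gray labeling

    def bit_llrs(y, sigma2):
        d2 = (y - points) ** 2                  # squared distances to each symbol
        llrs = []
        for b in range(2):
            d0 = min(d2[i] for i, lab in enumerate(labels) if lab[b] == 0)
            d1 = min(d2[i] for i, lab in enumerate(labels) if lab[b] == 1)
            # max-log approximation of log P(b=0|y) - log P(b=1|y)
            llrs.append((d1 - d0) / (2 * sigma2))
        return llrs

    print(bit_llrs(y=0.7, sigma2=0.5))
    ```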

  16. Maya: A Simulation of Mayan Civilization during the Seventh Century.

    ERIC Educational Resources Information Center

    Roth, Peter

    This simulation allows students to explore the lives of the great rulers of the Mayan culture. Students learn the mysterious history of the Maya by decoding glyphs, investigating the unusual religion of the Maya, unraveling the complex Mayan calendar, and discovering the Mayan number system's secret meanings. Specific cooperation skills are taught…

  17. The Most Frequent Metacognitive Strategies Used in Reading Comprehension among ESP Learners

    ERIC Educational Resources Information Center

    Khoshsima, Hooshang; Samani, Elham Amiri

    2015-01-01

    Reading strategies are plans for solving problems encountered during reading while learners are deeply engaged with the text. Thus, comprehension is not a simple decoding of symbols, but a complex multidimensional process in which the learner draws on previous schemata, applying strategies consciously. In fact, metacognitive strategies are accessible…

  18. Learner Variables Associated with Reading and Learning in a Hypertext Environment.

    ERIC Educational Resources Information Center

    Niederhauser, Dale S.; Shapiro, Amy

    While many elements, such as character decoding, word recognition, and comprehension, remain the same as in learning from traditional text, a number of features unique to reading hypertext produce added complexity. It is these features that drive research on hypertext in education. There is a greater…

  19. Neural Strategies for Reading Japanese and Chinese Sentences: A Cross-Linguistic fMRI Study of Character-Decoding and Morphosyntax

    ERIC Educational Resources Information Center

    Huang, Koongliang; Itoh, Kosuke; Kwee, Ingrid L.; Nakada, Tsutomu

    2012-01-01

    Japanese and Chinese share virtually identical morphographic characters invented in ancient China. Whereas modern Chinese retained the original morphographic functionality of these characters (hanzi), modern Japanese utilizes these characters (kanji) as complex syllabograms. This divergence provides a unique opportunity to systematically…

  20. Maximum Likelihood Detection of Low Rate Repeat Codes in Frequency Hopped Systems

    DTIC Science & Technology

    2013-04-01

    Communications, vol. 33, pp. 385 – 393, May 1985. [4] W. Sweldens, “Fast block noncoherent decoding,” IEEE Commun. Lett., vol. 5, pp. 132–134, Apr...2001. [5] I. Motedayen-Aval and A. Anastasopoulos, “Polynomial- complexity noncoherent symbol-by-symbol detection with application to adaptive

  1. Decoding the dynamics of cellular metabolism and the action of 3-bromopyruvate and 2-deoxyglucose using pulsed stable isotope-resolved metabolomics.

    PubMed

    Pietzke, Matthias; Zasada, Christin; Mudrich, Susann; Kempa, Stefan

    2014-01-01

    Cellular metabolism is highly dynamic and continuously adjusts to the physiological program of the cell. The regulation of metabolism appears at all biological levels: (post-) transcriptional, (post-) translational, and allosteric. This regulatory information is expressed in the metabolome, but in a complex manner. To decode such complex information, new methods are needed in order to facilitate dynamic metabolic characterization at high resolution. Here, we describe pulsed stable isotope-resolved metabolomics (pSIRM) as a tool for the dynamic metabolic characterization of cellular metabolism. We have adapted gas chromatography-coupled mass spectrometric methods for metabolomic profiling and stable isotope-resolved metabolomics. In addition, we have improved robustness and reproducibility and implemented a strategy for the absolute quantification of metabolites. By way of examples, we have applied this methodology to characterize the central carbon metabolism of a panel of cancer cell lines and to determine the mode of metabolic inhibition of glycolytic inhibitors on time scales ranging from minutes to hours. Using pSIRM, we observed that 2-deoxyglucose is a metabolic inhibitor, but does not directly act on the glycolytic cascade.

  2. Energetics of codon-anticodon recognition on the small ribosomal subunit.

    PubMed

    Almlöf, Martin; Andér, Martin; Aqvist, Johan

    2007-01-09

    Recent crystal structures of the small ribosomal subunit have made it possible to examine the detailed energetics of codon recognition on the ribosome by computational methods. The binding of cognate and near-cognate anticodon stem loops to the ribosome decoding center, with mRNA containing the Phe UUU and UUC codons, is analyzed here using explicit solvent molecular dynamics simulations together with the linear interaction energy (LIE) method. The calculated binding free energies are in excellent agreement with experimental binding constants and reproduce the relative effects of mismatches in the first and second codon position versus a mismatch at the wobble position. The simulations further predict that the Leu2 anticodon stem loop is about 10 times more stable than the Ser stem loop in complex with the Phe UUU codon. It is also found that the ribosome significantly enhances the intrinsic stability differences of codon-anticodon complexes in aqueous solution. Structural analysis of the simulations confirms the previously suggested importance of the universally conserved nucleotides A1492, A1493, and G530 in the decoding process.

  3. High performance reconciliation for continuous-variable quantum key distribution with LDPC code

    NASA Astrophysics Data System (ADS)

    Lin, Dakai; Huang, Duan; Huang, Peng; Peng, Jinye; Zeng, Guihua

    2015-03-01

    Reconciliation is a significant procedure in a continuous-variable quantum key distribution (CV-QKD) system. It is employed to extract a secure secret key from the string resulting from transmission through the quantum channel between two users. However, the efficiency and speed of previous reconciliation algorithms are low. These problems limit the secure communication distance and the secure key rate of CV-QKD systems. In this paper, we propose a high-speed reconciliation algorithm employing a well-structured decoding scheme based on low-density parity-check (LDPC) codes. The complexity of the proposed algorithm is substantially reduced. By using a graphics processing unit (GPU) device, our method can reach a reconciliation speed of 25 Mb/s for a CV-QKD system, which is currently the highest level and paves the way to high-speed CV-QKD.

  4. Pyrus pashia: A persuasive source of natural antioxidants.

    PubMed

    Siddiqui, Sabahat Zahra; Ali, Saima; Rehman, Azizur; Rubab, Kaniz; Abbasi, Muhammad Athar; Ajaib, Muhammad; Rasool, Zahid Ghulam

    2015-09-01

    Pyrus pashia Buch. & Ham. was subjected to extraction with methanol. Methanolic extracts of fruit, bark and leaf were partitioned separately with four organic solvents in order of increasing polarity, as n-hexane, chloroform, ethyl acetate and n-butanol, after dissolving in distilled water. Phytochemical screening revealed the presence of phenolics, flavonoids, alkaloids and cardiac glycosides in large amounts in the chloroform, ethyl acetate and n-butanol soluble fractions. The antioxidant activity of the crude methanolic extracts, all four obtained organic fractions and the remaining aqueous fractions was evaluated by different methods, such as 1,1-diphenyl-2-picrylhydrazyl (DPPH) free radical scavenging activity, the ferric reducing antioxidant power (FRAP) assay and total antioxidant activity by the phosphomolybdenum complex method, as well as determination of total phenolics. The results of the antioxidant activity exhibited that the chloroform soluble fraction of fruit showed the highest value of percent inhibition of DPPH (48.16 ± 0.21 μg/ml) at the concentration of 10 μg/ml. The ethyl acetate soluble fraction displayed the lowest antioxidant activity, having an IC50 value of bark of 8.64 ± 0.32 μg/ml relative to butylated hydroxytoluene (BHT), having an IC50 of 12.1 ± 0.92 μg/ml. The ethyl acetate soluble fraction of bark revealed the highest FRAP value (174.618 ± 0.11 TE µM/ml) among all the three parts. This fraction also showed the highest value of total antioxidant activity (1.499 ± 0.90), determined by the phosphomolybdenum complex method. Moreover, this fraction also conferred the highest phenolic content (393.19 ± 0.72) as compared to the other studied fractions of fruit and leaf.

  5. Lossless data compression for improving the performance of a GPU-based beamformer.

    PubMed

    Lok, U-Wai; Fan, Gang-Wei; Li, Pai-Chi

    2015-04-01

    The powerful parallel computation ability of a graphics processing unit (GPU) makes it feasible to perform dynamic receive beamforming. However, a real-time GPU-based beamformer requires a high data rate to transfer radio-frequency (RF) data from hardware to software memory, as well as from central processing unit (CPU) to GPU memory. There are data compression methods (e.g., Joint Photographic Experts Group (JPEG)) available for the hardware front end to reduce data size, alleviating the data transfer requirement of the hardware interface. Nevertheless, the required decoding time may even be larger than the transmission time of the original data, in turn degrading the overall performance of the GPU-based beamformer. This article proposes and implements a lossless compression-decompression algorithm, which enables parallel compression and decompression of data. By this means, the data transfer requirement of the hardware interface and the transmission time of CPU-to-GPU data transfers are reduced, without sacrificing image quality. In simulation results, the compression ratio reached around 1.7. The encoder design of our lossless compression approach requires low hardware resources and reasonable latency in a field-programmable gate array. In addition, the transmission time of transferring data from CPU to GPU with the parallel decoding process improved threefold, as compared with transferring the original uncompressed data. These results show that our proposed lossless compression plus parallel decoder approach not only mitigates the transmission bandwidth requirement to transfer data from the hardware front end to the software system but also reduces the transmission time for CPU-to-GPU data transfer.

  6. Iterative channel decoding of FEC-based multiple-description codes.

    PubMed

    Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B

    2012-03-01

    Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.

  7. High rate concatenated coding systems using bandwidth efficient trellis inner codes

    NASA Technical Reports Server (NTRS)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1989-01-01

    High-rate concatenated coding systems with bandwidth-efficient trellis inner codes and Reed-Solomon (RS) outer codes are investigated for application in high-speed satellite communication systems. Two concatenated coding schemes are proposed. In one the inner code is decoded with soft-decision Viterbi decoding, and the outer RS code performs error-correction-only decoding (decoding without side information). In the other, the inner code is decoded with a modified Viterbi algorithm, which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, whereas branch metrics are used to provide reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. The two schemes have been proposed for high-speed data communication on NASA satellite channels. The rates considered are at least double those used in current NASA systems, and the results indicate that high system reliability can still be achieved.

  8. Efficient Decoding of Compressed Data.

    ERIC Educational Resources Information Center

    Bassiouni, Mostafa A.; Mukherjee, Amar

    1995-01-01

    Discusses the problem of enhancing the speed of Huffman decoding of compressed data. Topics addressed include the Huffman decoding tree; multibit decoding; binary string mapping problems; and algorithms for solving mapping problems. (22 references) (LRW)
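
    The multibit idea can be illustrated with a lookup table indexed by fixed-width bit windows, as in the hedged sketch below (a toy prefix code; a real decoder must also handle codewords longer than the window and track exact stream lengths, and the paper's own algorithms are not reproduced).

    ```python
    # Table-lookup ("multibit") Huffman decoding sketch: for every K-bit window,
    # precompute which symbol it starts with and how many bits that symbol uses.
    # Assumes all code lengths are <= K.
    code = {"a": "0", "b": "10", "c": "110", "d": "111"}  # toy Huffman code

    K = 3
    table = {}
    for pattern in range(2 ** K):
        bits = format(pattern, f"0{K}b")
        for sym, cw in code.items():          # prefix-free: at most one match
            if bits.startswith(cw):
                table[bits] = (sym, len(cw))
                break

    def decode(bitstring):
        out, pos = [], 0
        while pos < len(bitstring):
            window = bitstring[pos:pos + K].ljust(K, "0")  # pad the tail
            sym, used = table[window]
            out.append(sym)
            pos += used
        return "".join(out)

    print(decode("0101101110"))  # -> "abcda"
    ```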

  9. A new VLSI architecture for a single-chip-type Reed-Solomon decoder

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.; Truong, T. K.

    1989-01-01

    A new very large scale integration (VLSI) architecture for implementing Reed-Solomon (RS) decoders that can correct both errors and erasures is described. This new architecture implements a Reed-Solomon decoder through replication of a single VLSI chip. It is anticipated that this single-chip-type RS decoder approach will save substantial development and production costs; it is estimated that a cost reduction by a factor of four is possible with this new architecture. Furthermore, this Reed-Solomon decoder is programmable between 8-bit and 10-bit symbol sizes. Therefore, both an 8-bit Consultative Committee for Space Data Systems (CCSDS) RS decoder and a 10-bit decoder are obtained at the same time, and when concatenated with a (15,1/6) Viterbi decoder, they provide an additional 2.1-dB coding gain.

  10. Deconstructing multivariate decoding for the study of brain function.

    PubMed

    Hebart, Martin N; Baker, Chris I

    2017-08-04

    Multivariate decoding methods were developed originally as tools to enable accurate predictions in real-world applications. The realization that these methods can also be employed to study brain function has led to their widespread adoption in the neurosciences. However, prior to the rise of multivariate decoding, the study of brain function was firmly embedded in a statistical philosophy grounded on univariate methods of data analysis. In this way, multivariate decoding for brain interpretation grew out of two established frameworks: multivariate decoding for predictions in real-world applications, and classical univariate analysis based on the study and interpretation of brain activation. We argue that this led to two confusions, one reflecting a mixture of multivariate decoding for prediction or interpretation, and the other a mixture of the conceptual and statistical philosophies underlying multivariate decoding and classical univariate analysis. Here we attempt to systematically disambiguate multivariate decoding for the study of brain function from the frameworks it grew out of. After elaborating these confusions and their consequences, we describe six, often unappreciated, differences between classical univariate analysis and multivariate decoding. We then focus on how the common interpretation of what is signal and noise changes in multivariate decoding. Finally, we use four examples to illustrate where these confusions may impact the interpretation of neuroimaging data. We conclude with a discussion of potential strategies to help resolve these confusions in interpreting multivariate decoding results, including the potential departure from multivariate decoding methods for the study of brain function.

  11. Blocking reduction of Landsat Thematic Mapper JPEG browse images using optimal PSNR estimated spectra adaptive postfiltering

    NASA Technical Reports Server (NTRS)

    Linares, Irving; Mersereau, Russell M.; Smith, Mark J. T.

    1994-01-01

    Two representative sample images of Band 4 of the Landsat Thematic Mapper are compressed with the JPEG algorithm at 8:1, 16:1 and 24:1 compression ratios for experimental browsing purposes. We then apply the Optimal PSNR Estimated Spectra Adaptive Postfiltering (ESAP) algorithm to reduce the DCT blocking distortion. ESAP reduces the blocking distortion while preserving most of the image's edge information by adaptively postfiltering the decoded image using the block spectral information already obtainable from each block's DCT coefficients. The algorithm iteratively applies a one-dimensional log-sigmoid weighting function to the separable interpolated local block estimated spectra of the decoded image until it converges to the optimal PSNR with respect to the original, using a 2-D steepest-ascent search. Convergence is obtained in a few iterations for integer parameters. The optimal logsig parameters are transmitted to the decoder as a negligible byte of overhead data. A unique maximum is guaranteed due to the 2-D asymptotic exponential overshoot shape of the surface generated by the algorithm. ESAP is based on a DFT analysis of the DCT basis functions. It is implemented with pixel-by-pixel spatially adaptive separable FIR postfilters. PSNR objective improvements between 0.4 and 0.8 dB are shown together with the corresponding optimal PSNR adaptive postfiltered images.
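
    For reference, the objective being maximized is the standard peak signal-to-noise ratio for 8-bit images:

    ```latex
    \mathrm{PSNR} = 10\log_{10}\!\frac{255^{2}}{\mathrm{MSE}}, \qquad
    \mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(x_{ij}-\hat{x}_{ij}\bigr)^{2},
    ```

    where x is the original image and x-hat the postfiltered decoded image.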

  12. Error-trellis Syndrome Decoding Techniques for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  13. New spectrophotometric methods for the determinations of hydrogen sulfide present in the samples of lake water, industrial effluents, tender coconut, sugarcane juice and egg

    NASA Astrophysics Data System (ADS)

    Shyla, B.; Nagendrappa, G.

    2012-10-01

    The new methods work on the principle that iron(III) is reduced to iron(II) by hydrogen sulfide with catechol and p-toluidine (system 1) or by hydrogen sulfide alone (system 2) in acidic medium, after which the reduced iron forms a complex with 1,10-phenanthroline with λmax 510 nm. The other two methods are based on redox reactions between electrolytically generated manganese(III) sulfate, taken in excess, and hydrogen sulfide, followed by the unreacted oxidant oxidizing diphenylamine (λmax 570 nm, system 3) or barium diphenylamine sulphonate (λmax 540 nm, system 4). The increase or decrease in the color intensity of the dye products of systems 1 and 2 or systems 3 and 4 is proportional to the concentration of hydrogen sulfide, with quantification ranges of 0.035-1.40 μg ml^-1 and 0.14-1.40 μg ml^-1, respectively.

  14. The VLSI design of an error-trellis syndrome decoder for certain convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Jensen, J. M.; Hsu, I.-S.; Truong, T. K.

    1986-01-01

    A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example illustrating the very large scale integration (VLSI) architecture of such a decoder is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.

  15. Systolic VLSI Reed-Solomon Decoder

    NASA Technical Reports Server (NTRS)

    Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.

    1986-01-01

    Decoder for digital communications provides high-speed, pipelined Reed-Solomon (RS) error-correction decoding of data streams. Principal new feature of proposed decoder is modification of Euclid greatest-common-divisor algorithm to avoid need for time-consuming computations of inverses of certain Galois-field quantities. Decoder architecture suitable for implementation on very-large-scale integrated (VLSI) chips with negative-channel metal-oxide/silicon (NMOS) circuitry.

  16. The VLSI design of error-trellis syndrome decoding for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Jensen, J. M.; Truong, T. K.; Hsu, I. S.

    1985-01-01

    A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example illustrating the very large scale integration (VLSI) architecture of such a decoder is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.

  17. A test of the role of the medial temporal lobe in single-word decoding.

    PubMed

    Osipowicz, Karol; Rickards, Tyler; Shah, Atif; Sharan, Ashwini; Sperling, Michael; Kahn, Waseem; Tracy, Joseph

    2011-01-15

    The degree to which the MTL system contributes to effective language skills is not well delineated. We sought to determine if the MTL plays a role in single-word decoding in healthy, normal skilled readers. The experiment follows from the implications of the dual-process model of single-word decoding, which provides distinct predictions about the nature of MTL involvement. The paradigm utilized word (regular and irregularly spelled words) and pseudoword (phonetically regular) stimuli that differed in their demand for non-lexical as opposed to lexical decoding. The data clearly showed that the MTL system was not involved in single word decoding in skilled, native English readers. Neither the hippocampus nor the MTL system as a whole showed significant activation during lexical or non-lexical based decoding. The results provide evidence that lexical and non-lexical decoding are implemented by distinct but overlapping neuroanatomical networks. Non-lexical decoding appeared most uniquely associated with cuneus and fusiform gyrus activation biased toward the left hemisphere. In contrast, lexical decoding appeared associated with right middle frontal and supramarginal, and bilateral cerebellar activation. Both these decoding operations appeared in the context of a shared widespread network of activations including bilateral occipital cortex and superior frontal regions. These activations suggest that the absence of MTL involvement in either lexical or non-lexical decoding appears likely a function of the skilled reading ability of our sample such that whole-word recognition and retrieval processes do not utilize the declarative memory system, in the case of lexical decoding, and require only minimal analysis and recombination of the phonetic elements of a word, in the case of non-lexical decoding.

  18. A Test of the Role of the Medial Temporal Lobe in Single-Word Decoding

    PubMed Central

    Osipowicz, Karol; Rickards, Tyler; Shah, Atif; Sharan, Ashwini; Sperling, Michael; Kahn, Waseem; Tracy, Joseph

    2012-01-01

    The degree to which the MTL system contributes to effective language skills is not well delineated. We sought to determine if the MTL plays a role in single-word decoding in healthy, normal skilled readers. The experiment follows from the implications of the dual-process model of single-word decoding, which provides distinct predictions about the nature of MTL involvement. The paradigm utilized word (regular and irregularly spelled words) and pseudoword (phonetically regular) stimuli that differed in their demand for non-lexical as opposed to lexical decoding. The data clearly showed that the MTL system was not involved in single word decoding in skilled, native English readers. Neither the hippocampus nor the MTL system as a whole showed significant activation during lexical or non-lexical based decoding. The results provide evidence that lexical and non-lexical decoding are implemented by distinct but overlapping neuroanatomical networks. Non-lexical decoding appeared most uniquely associated with cuneus and fusiform gyrus activation biased toward the left hemisphere. In contrast, lexical decoding appeared associated with right middle frontal and supramarginal, and bilateral cerebellar activation. Both these decoding operations appeared in the context of a shared widespread network of activations including bilateral occipital cortex and superior frontal regions. These activations suggest that the absence of MTL involvement in either lexical or non-lexical decoding appears likely a function of the skilled reading ability of our sample such that whole-word recognition and retrieval processes do not utilize the declarative memory system, in the case of lexical decoding, and require only minimal analysis and recombination of the phonetic elements of a word, in the case of non-lexical decoding. PMID:20884357

  19. Advanced modulation technology development for earth station demodulator applications. Coded modulation system development

    NASA Technical Reports Server (NTRS)

    Miller, Susan P.; Kappes, J. Mark; Layer, David H.; Johnson, Peter N.

    1990-01-01

    A jointly optimized coded modulation system is described which was designed, built, and tested by COMSAT Laboratories for NASA LeRC and which provides a bandwidth efficiency of 2 bits/s/Hz at an information rate of 160 Mbit/s. A high-speed rate 8/9 encoder with a Viterbi decoder and an octal PSK modem are used to achieve this. The BER performance is approximately 1 dB from the theoretically calculated value for this system at a BER of 5 x 10^-7 under nominal conditions. The system operates in burst mode for downlink applications, and tests have demonstrated very little degradation in performance with frequency and level offset. Unique word miss rate measurements were conducted which demonstrate reliable acquisition at low values of Eb/N0. Codec self-tests have verified the performance of this subsystem in a stand-alone mode. The codec is capable of operation at a 200 Mbit/s information rate, as demonstrated using a codec test set which introduces noise digitally. The measured performance is within 0.2 dB of the computer-simulated predictions. A gate-array implementation of the most time-critical element of the high-speed Viterbi decoder was completed. This gate-array add-compare-select chip significantly reduces the power consumption and improves the manufacturability of the decoder. The chip has general application in the implementation of high-speed Viterbi decoders.

  20. LDPC-based iterative joint source-channel decoding for JPEG2000.

    PubMed

    Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane

    2007-02-01

    A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.

  1. Comparative analysis of toxin detection in biological and environmental samples

    NASA Astrophysics Data System (ADS)

    Ogert, Robert A.; Burans, James; O'Brien, Tom; Ligler, Frances S.

    1994-03-01

    The basic recognition schemes underlying the principles of standard enzyme-linked immunosorbent assay (ELISA) and radioimmunoassay (RIA) protocols are increasingly being adapted for use with new detection devices. A direct comparison was made using a fiber optic biosensor that employs evanescent wave detection and an ELISA using avidin-biotin. The assays were developed for the detection of Ricinus communis agglutinin II, also known as ricin or RCA60. Detection limits of the two methods were comparable for ricin in phosphate buffered saline (PBS); however, results in complex samples differed slightly. In PBS, sensitivity for ricin was 1 ng/ml using the fiber optic device and 500 pg/ml using the ELISA. The fiber optic sensor could not detect ricin directly in urine or serum spiked with 5 ng/ml ricin; the ELISA, however, showed detection, but at levels reduced relative to the PBS control.

  2. Belief propagation decoding of quantum channels by passing quantum messages

    NASA Astrophysics Data System (ADS)

    Renes, Joseph M.

    2017-07-01

    The belief propagation (BP) algorithm is a powerful tool in a wide range of disciplines from statistical physics to machine learning to computational biology, and is ubiquitous in decoding classical error-correcting codes. The algorithm works by passing messages between nodes of the factor graph associated with the code and enables efficient decoding of the channel, in some cases even up to the Shannon capacity. Here we construct the first BP algorithm which passes quantum messages on the factor graph and is capable of decoding the classical-quantum channel with pure state outputs. This gives explicit decoding circuits whose number of gates is quadratic in the code length. We also show that this decoder can be modified to work with polar codes for the pure state channel and as part of a decoder for transmitting quantum information over the amplitude damping channel. These represent the first explicit capacity-achieving decoders for non-Pauli channels.

  3. Perfect quantum multiple-unicast network coding protocol

    NASA Astrophysics Data System (ADS)

    Li, Dan-Dan; Gao, Fei; Qin, Su-Juan; Wen, Qiao-Yan

    2018-01-01

    In order to realize long-distance and large-scale quantum communication, it is natural to utilize quantum repeaters. For a general quantum multiple-unicast network, it remains unclear how to complete communication tasks perfectly with fewer resources, such as registers. In this paper, we solve this problem. By applying quantum repeaters to the multiple-unicast communication problem, we give encoding-decoding schemes for source nodes, internal nodes, and target nodes, respectively. Source-target node pairs share EPR pairs by using our encoding-decoding schemes over the quantum multiple-unicast network. Furthermore, quantum communication can be accomplished perfectly via teleportation. Compared with existing schemes, our schemes can reduce resource consumption and realize long-distance transmission of quantum information.

  4. Modulation of neural activity by reward in medial intraparietal cortex is sensitive to temporal sequence of reward

    PubMed Central

    Rajalingham, Rishi; Stacey, Richard Greg; Tsoulfas, Georgios

    2014-01-01

    To restore movements to paralyzed patients, neural prosthetic systems must accurately decode patients' intentions from neural signals. Despite significant advancements, current systems are unable to restore complex movements. Decoding reward-related signals from the medial intraparietal area (MIP) could enhance prosthetic performance. However, the dynamics of reward sensitivity in MIP is not known. Furthermore, reward-related modulation in premotor areas has been attributed to behavioral confounds. Here we investigated the stability of reward encoding in MIP by assessing the effect of reward history on reward sensitivity. We recorded from neurons in MIP while monkeys performed a delayed-reach task under two reward schedules. In the variable schedule, an equal number of small- and large-rewards trials were randomly interleaved. In the constant schedule, one reward size was delivered for a block of trials. The memory period firing rate of most neurons in response to identical rewards varied according to schedule. Using systems identification tools, we attributed the schedule sensitivity to the dependence of neural activity on the history of reward. We did not find schedule-dependent behavioral changes, suggesting that reward modulates neural activity in MIP. Neural discrimination between rewards was less in the variable than in the constant schedule, degrading our ability to decode reach target and reward simultaneously. The effect of schedule was mitigated by adding Haar wavelet coefficients to the decoding model. This raises the possibility of multiple encoding schemes at different timescales and reinforces the potential utility of reward information for prosthetic performance. PMID:25008408

  5. Modulation of neural activity by reward in medial intraparietal cortex is sensitive to temporal sequence of reward.

    PubMed

    Rajalingham, Rishi; Stacey, Richard Greg; Tsoulfas, Georgios; Musallam, Sam

    2014-10-01

    To restore movements to paralyzed patients, neural prosthetic systems must accurately decode patients' intentions from neural signals. Despite significant advancements, current systems are unable to restore complex movements. Decoding reward-related signals from the medial intraparietal area (MIP) could enhance prosthetic performance. However, the dynamics of reward sensitivity in MIP is not known. Furthermore, reward-related modulation in premotor areas has been attributed to behavioral confounds. Here we investigated the stability of reward encoding in MIP by assessing the effect of reward history on reward sensitivity. We recorded from neurons in MIP while monkeys performed a delayed-reach task under two reward schedules. In the variable schedule, an equal number of small- and large-rewards trials were randomly interleaved. In the constant schedule, one reward size was delivered for a block of trials. The memory period firing rate of most neurons in response to identical rewards varied according to schedule. Using systems identification tools, we attributed the schedule sensitivity to the dependence of neural activity on the history of reward. We did not find schedule-dependent behavioral changes, suggesting that reward modulates neural activity in MIP. Neural discrimination between rewards was less in the variable than in the constant schedule, degrading our ability to decode reach target and reward simultaneously. The effect of schedule was mitigated by adding Haar wavelet coefficients to the decoding model. This raises the possibility of multiple encoding schemes at different timescales and reinforces the potential utility of reward information for prosthetic performance.

  6. A Scalable Architecture of a Structured LDPC Decoder

    NASA Technical Reports Server (NTRS)

    Lee, Jason Kwok-San; Lee, Benjamin; Thorpe, Jeremy; Andrews, Kenneth; Dolinar, Sam; Hamkins, Jon

    2004-01-01

    We present a scalable decoding architecture for a certain class of structured LDPC codes. The codes are designed using a small (n,r) protograph that is replicated Z times to produce a decoding graph for a (Z x n, Z x r) code. Using this architecture, we have implemented a decoder for a (4096,2048) LDPC code on a Xilinx Virtex-II 2000 FPGA, and achieved decoding speeds of 31 Mbps with 10 fixed iterations. The implemented message-passing algorithm uses an optimized 3-bit non-uniform quantizer that operates with 0.2 dB implementation loss relative to a floating-point decoder.
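
    The protograph replication described above can be made concrete with a small lifting sketch: each nonzero base-matrix entry becomes a Z x Z circulant permutation, each zero a Z x Z zero block. The base matrix and shift values below are arbitrary toy choices, not the code from the paper.

    ```python
    # Protograph lifting: base (proto) matrix -> quasi-cyclic parity-check matrix.
    import numpy as np

    def lift(base, shifts, Z):
        r, n = base.shape
        H = np.zeros((r * Z, n * Z), dtype=int)
        I = np.eye(Z, dtype=int)
        for i in range(r):
            for j in range(n):
                if base[i, j]:
                    # circulant permutation: identity rolled by the shift value
                    H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, shifts[i, j], axis=1)
        return H

    base   = np.array([[1, 1, 1, 0],
                       [0, 1, 1, 1]])
    shifts = np.array([[0, 1, 2, 0],
                       [0, 3, 1, 2]])
    H = lift(base, shifts, Z=4)
    print(H.shape)  # (8, 16): a (Z x n, Z x r) = (16, 8) structure with Z = 4
    ```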

  7. Multiuser signal detection using sequential decoding

    NASA Astrophysics Data System (ADS)

    Xie, Zhenhua; Rushforth, Craig K.; Short, Robert T.

    1990-05-01

    The application of sequential decoding to the detection of data transmitted over the additive white Gaussian noise channel by K asynchronous transmitters using direct-sequence spread-spectrum multiple access is considered. A modification of Fano's (1963) sequential-decoding metric, allowing the messages from a given user to be safely decoded if its Eb/N0 exceeds -1.6 dB, is presented. Computer simulation is used to evaluate the performance of a sequential decoder that uses this metric in conjunction with the stack algorithm. In many circumstances, the sequential decoder achieves results comparable to those obtained using the much more complicated optimal receiver.
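
    For reference, the single-user Fano metric that such modifications start from has the standard form

    ```latex
    \mu(\mathbf{x}) = \sum_{i} \left[ \log_2 \frac{p(y_i \mid x_i)}{p(y_i)} - R \right],
    ```

    where R is the code rate in bits per channel use; the multiuser modification introduced in the paper is not reproduced here.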

  8. Visual perception as retrospective Bayesian decoding from high- to low-level features

    PubMed Central

    Ding, Stephanie; Cueva, Christopher J.; Tsodyks, Misha; Qian, Ning

    2017-01-01

    When a stimulus is presented, its encoding is known to progress from low- to high-level features. How these features are decoded to produce perception is less clear, and most models assume that decoding follows the same low- to high-level hierarchy of encoding. There are also theories arguing for global precedence, reversed hierarchy, or bidirectional processing, but they are descriptive without quantitative comparison with human perception. Moreover, observers often inspect different parts of a scene sequentially to form overall perception, suggesting that perceptual decoding requires working memory, yet few models consider how working-memory properties may affect decoding hierarchy. We probed decoding hierarchy by comparing absolute judgments of single orientations and relative/ordinal judgments between two sequentially presented orientations. We found that lower-level, absolute judgments failed to account for higher-level, relative/ordinal judgments. However, when ordinal judgment was used to retrospectively decode memory representations of absolute orientations, striking aspects of absolute judgments, including the correlation and forward/backward aftereffects between two reported orientations in a trial, were explained. We propose that the brain prioritizes decoding of higher-level features because they are more behaviorally relevant, and more invariant and categorical, and thus easier to specify and maintain in noisy working memory, and that more reliable higher-level decoding constrains less reliable lower-level decoding. PMID:29073108

  9. A Scope of Learner Behaviors in Reading.

    ERIC Educational Resources Information Center

    Lake Washington School District 414, Kirkland, WA.

    This guide reflects the definition of reading as a complex intellectual act involving a variety of behaviors to decode and comprehend printed symbols and is intended to help the teacher be aware of all the skills within each dimension of the reading act. The reading skills are described in terms of learner behavior and a criterion reference is…

  10. Machine Learning-based discovery of closures for reduced models of dynamical systems

    NASA Astrophysics Data System (ADS)

    Pan, Shaowu; Duraisamy, Karthik

    2017-11-01

    Despite the successful application of machine learning (ML) in fields such as image processing and speech recognition, only a few attempts have been made toward employing ML to represent the dynamics of complex physical systems. Previous attempts mostly focus on parameter calibration or data-driven augmentation of existing models. In this work we present an ML framework to discover closure terms in reduced models of dynamical systems and provide insights into potential problems associated with data-driven modeling. Based on exact closure models for linear systems, we propose a general linear closure framework from the viewpoint of optimization. The framework is based on a trapezoidal approximation of the convolution term. Hyperparameters that need to be determined include the temporal length of the memory effect, the number of sampling points, and the dimensions of the hidden states. To circumvent the explicit specification of the memory effect, a general framework inspired by neural networks is also proposed. We conduct both a priori and a posteriori evaluations of the resulting model on a number of nonlinear dynamical systems. This work was supported in part by AFOSR under the project ``LES Modeling of Non-local effects using Statistical Coarse-graining'' with Dr. Jean-Luc Cambier as the technical monitor.
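
    The trapezoidal treatment of the convolution (memory) term can be sketched as below. The kernel and resolved-variable history here are arbitrary stand-ins; in the framework above, the kernel would be learned from data and truncated to a finite memory length.

    ```python
    # Trapezoid-rule approximation of a memory (convolution) closure term,
    #   c(t) = integral_0^t K(t - s) z(s) ds.
    import numpy as np

    def memory_term(K, z, dt):
        # K, z: samples K(i*dt), z(i*dt) for i = 0..N; returns c(N*dt)
        N = len(z) - 1
        integrand = np.array([K[N - i] * z[i] for i in range(N + 1)])
        return np.trapz(integrand, dx=dt)

    dt = 0.01
    t = np.arange(0, 1 + dt, dt)
    K = np.exp(-5 * t)          # toy exponentially decaying memory kernel
    z = np.sin(2 * np.pi * t)   # toy resolved-variable history
    print(memory_term(K, z, dt))
    ```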

  11. Backwards compatible high dynamic range video compression

    NASA Astrophysics Data System (ADS)

    Dolzhenko, Vladimir; Chesnokov, Vyacheslav; Edirisinghe, Eran A.

    2014-02-01

    This paper presents a two-layer CODEC architecture for high dynamic range video compression. The base layer contains the tone-mapped video stream encoded with 8 bits per component, which can be decoded using conventional equipment. The base layer content is optimized for rendering on low dynamic range displays. The enhancement layer contains the image difference, in a perceptually uniform color space, between the result of the inverse tone-mapped base layer content and the original video stream. Prediction of the high dynamic range content reduces the redundancy in the transmitted data while still preserving highlights and out-of-gamut colors. The perceptually uniform color space enables the use of standard rate-distortion optimization algorithms. We present techniques for efficient implementation and encoding of non-uniform tone mapping operators with low overhead in terms of bitstream size and number of operations. The transform representation is based on a human visual system model and is suitable for global and local tone mapping operators. The compression techniques include predicting the transform parameters from previously decoded frames and from already decoded data for the current frame. Different video compression techniques are compared: backwards compatible and non-backwards compatible, using AVC and HEVC codecs.

  12. Modeling of catalytically active metal complex species and intermediates in reactions of organic halides electroreduction.

    PubMed

    Lytvynenko, Anton S; Kolotilov, Sergey V; Kiskin, Mikhail A; Eremenko, Igor L; Novotortsev, Vladimir M

    2015-02-28

    The results of quantum chemical modeling of organic and metal-containing intermediates that occur in electrocatalytic dehalogenation reactions of organic chlorides are presented. Modeling of processes that take place in successive steps of the electrochemical reduction of representative C1 and C2 chlorides - CHCl3 and Freon R113 (1,1,2-trifluoro-1,2,2-trichloroethane) - was carried out by density functional theory (DFT) and second-order Møller-Plesset perturbation theory (MP2). It was found that taking solvation into account using an implicit solvent model (conductor-like screening model, COSMO) or considering explicit solvent molecules gave similar results. In addition to modeling of simple non-catalytic dehalogenation, processes with a number of complexes and their reduced forms, some of which were catalytically active, were investigated by DFT. Complexes M(L1)2 (M = Fe, Co, Ni, Cu, Zn, L1H = Schiff base from 2-pyridinecarbaldehyde and the hydrazide of 4-pyridinecarboxylic acid), Ni(L2) (H2L2 is the Schiff base from salicylaldehyde and 1,2-ethylenediamine, known as salen) and Co(L3)2Cl2, representing a fragment of a redox-active coordination polymer [Co(L3)Cl2]n (L3 is the dithioamide of 1,3-benzenedicarboxylic acid), were considered. Gradual changes in electronic structure in a series of compounds M(L1)2 were observed, and correlations between [M(L1)2](0) spin-up and spin-down LUMO energies and the relative energies of the corresponding high-spin and low-spin reduced forms, as well as the shape of the orbitals, were proposed. These results can be helpful for determination of the nature of redox-processes in similar systems by DFT. No specific covalent interactions between [M(L1)2](-) and the R113 molecule (M = Fe, Co, Ni, Zn) were found, which indicates that M(L1)2 electrocatalysts act rather like electron transfer mediators via outer-shell electron transfer. A relaxed surface scan of the adducts {M(L1)2·R113}(-) (M = Ni or Co) versus the distance between the chlorine atom leaving during reduction and the corresponding carbon atom showed an energy barrier to electron transfer (the first stage of R113 catalytic reduction), while DFT optimization of the {Ni(L2)·R113}(-) adduct showed barrier-free decomposition. The difference between the stabilities of the {Ni(L1)2·R113}(-) and {Ni(L2)·R113}(-) adducts correlates with the difference between the catalytic activities of Ni(L1)2 and Ni(L2) in the electrochemical reduction of R113.

  13. Structured Light Based 3d Scanning for Specular Surface by the Combination of Gray Code and Phase Shifting

    NASA Astrophysics Data System (ADS)

    Zhang, Yujia; Yilmaz, Alper

    2016-06-01

    Surface reconstruction using coded structured light is considered one of the most reliable techniques for high-quality 3D scanning. With a calibrated projector-camera stereo system, a light pattern is projected onto the scene and imaged by the camera. Correspondences between projected and recovered patterns are computed in the decoding process, which is used to generate a 3D point cloud of the surface. However, indirect illumination effects on the surface, such as subsurface scattering and interreflections, raise difficulties in reconstruction. In this paper, we apply the maximum min-SW gray code to reduce the indirect illumination effects on specular surfaces. We also analyze the errors when comparing the maximum min-SW gray code and the conventional gray code, which shows that the maximum min-SW gray code is significantly superior in reducing indirect illumination effects. To achieve sub-pixel accuracy, we simultaneously project high-frequency sinusoidal patterns onto the scene. For specular surfaces, however, the high-frequency patterns are susceptible to decoding errors, and incorrect decoding of high-frequency patterns results in a loss of depth resolution. Our method resolves this problem by combining the low-frequency maximum min-SW gray code and the high-frequency phase shifting code, which achieves dense 3D reconstruction for specular surfaces. Our contributions include: (i) a complete setup of the structured light based 3D scanning system; (ii) a novel combination of the maximum min-SW gray code and phase shifting code, in which phase shifting decoding provides sub-pixel accuracy and the maximum min-SW gray code is then used to resolve the phase ambiguity. According to the experimental results and data analysis, our structured light based 3D scanning system enables high-quality dense reconstruction of scenes from a small number of images. Qualitative and quantitative comparisons are performed to demonstrate the advantages of our new combined coding method.
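
    For orientation, the conventional reflected-gray-code conversions are sketched below. The maximum min-SW gray code used in the paper is a different labeling optimized for minimum stripe width, but the decoding role of the pattern index is the same.

    ```python
    # Conventional reflected binary <-> Gray code conversion.
    def binary_to_gray(b: int) -> int:
        return b ^ (b >> 1)

    def gray_to_binary(g: int) -> int:
        # XOR-accumulate successively shifted copies: b = g ^ (g>>1) ^ (g>>2) ^ ...
        b = 0
        while g:
            b ^= g
            g >>= 1
        return b

    for v in range(8):
        g = binary_to_gray(v)
        assert gray_to_binary(g) == v       # round-trip check
        print(f"{v:03b} -> {g:03b}")
    ```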

  14. Enhanced decoding for the Galileo S-band mission

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Belongie, M.

    1993-01-01

    A coding system under consideration for the Galileo S-band low-gain antenna mission is a concatenated system using a variable-redundancy Reed-Solomon outer code and a (14,1/4) convolutional inner code. The 8-bit Reed-Solomon symbols are interleaved to depth 8, and the eight 255-symbol codewords in each interleaved block have redundancies 64, 20, 20, 20, 64, 20, 20, and 20, respectively (or equivalently, the codewords have 191, 235, 235, 235, 191, 235, 235, and 235 8-bit information symbols, respectively). This concatenated code is to be decoded by an enhanced decoder that utilizes a maximum-likelihood (Viterbi) convolutional decoder; a Reed-Solomon decoder capable of processing erasures; an algorithm for declaring erasures in undecoded codewords based on known erroneous symbols in neighboring decodable words; a second Viterbi decoding operation (redecoding) constrained to follow only paths consistent with the known symbols from previously decodable Reed-Solomon codewords; and a second Reed-Solomon decoding operation using the output from the Viterbi redecoder and additional erasure declarations to the extent possible. It is estimated that this code and decoder can achieve a decoded bit error rate of 1 x 10^-7 at a concatenated code signal-to-noise ratio of 0.76 dB. By comparison, a threshold of 1.17 dB is required for a baseline coding system consisting of the same (14,1/4) convolutional code, a (255,223) Reed-Solomon code with constant redundancy 32 also interleaved to depth 8, a one-pass Viterbi decoder, and a Reed-Solomon decoder incapable of declaring or utilizing erasures. The relative gain of the enhanced system is thus 0.41 dB. It is predicted from analysis based on an assumption of infinite interleaving that the coding gain could be further improved by approximately 0.2 dB if four stages of Viterbi decoding and four levels of Reed-Solomon redundancy are permitted. Confirmation of this effect and specification of the optimum four-level redundancy profile for depth-8 interleaving are currently in progress.

  15. Modulation Depth Estimation and Variable Selection in State-Space Models for Neural Interfaces

    PubMed Central

    Hochberg, Leigh R.; Donoghue, John P.; Brown, Emery N.

    2015-01-01

    Rapid developments in neural interface technology are making it possible to record increasingly large signal sets of neural activity. Various factors such as asymmetrical information distribution and across-channel redundancy may, however, limit the benefit of high-dimensional signal sets, and the increased computational complexity may not yield corresponding improvement in system performance. High-dimensional system models may also lead to overfitting and lack of generalizability. To address these issues, we present a generalized modulation depth measure using the state-space framework that quantifies the tuning of a neural signal channel to relevant behavioral covariates. For a dynamical system, we develop computationally efficient procedures for estimating modulation depth from multivariate data. We show that this measure can be used to rank neural signals and select an optimal channel subset for inclusion in the neural decoding algorithm. We present a scheme for choosing the optimal subset based on model order selection criteria. We apply this method to neuronal ensemble spike-rate decoding in neural interfaces, using our framework to relate motor cortical activity with intended movement kinematics. With offline analysis of intracortical motor imagery data obtained from individuals with tetraplegia using the BrainGate neural interface, we demonstrate that our variable selection scheme is useful for identifying and ranking the most information-rich neural signals. We demonstrate that our approach offers several orders of magnitude lower complexity but virtually identical decoding performance compared to greedy search and other selection schemes. Our statistical analysis shows that the modulation depth of human motor cortical single-unit signals is well characterized by the generalized Pareto distribution. Our variable selection scheme has wide applicability in problems involving multisensor signal modeling and estimation in biomedical engineering systems. PMID:25265627

  16. Encoding and Decoding Models in Cognitive Electrophysiology

    PubMed Central

    Holdgraf, Christopher R.; Rieger, Jochem W.; Micheli, Cristiano; Martin, Stephanie; Knight, Robert T.; Theunissen, Frederic E.

    2017-01-01

    Cognitive neuroscience has seen rapid growth in the size and complexity of data recorded from the human brain as well as in the computational tools available to analyze this data. This data explosion has resulted in an increased use of multivariate, model-based methods for asking neuroscience questions, allowing scientists to investigate multiple hypotheses with a single dataset, to use complex, time-varying stimuli, and to study the human brain under more naturalistic conditions. These tools come in the form of “Encoding” models, in which stimulus features are used to model brain activity, and “Decoding” models, in which neural features are used to generate a stimulus output. Here we review the current state of encoding and decoding models in cognitive electrophysiology and provide a practical guide toward conducting experiments and analyses in this emerging field. Our examples focus on using linear models in the study of human language and audition. We show how to calculate auditory receptive fields from natural sounds as well as how to decode neural recordings to predict speech. The paper aims to be a useful tutorial to these approaches, and a practical introduction to using machine learning and applied statistics to build models of neural activity. The data analytic approaches we discuss may also be applied to other sensory modalities, motor systems, and cognitive systems, and we cover some examples in these areas. In addition, a collection of Jupyter notebooks is publicly available as a complement to the material covered in this paper, providing code examples and tutorials for predictive modeling in python. The aim is to provide a practical understanding of predictive modeling of human brain data and to propose best-practices in conducting these analyses. PMID:29018336

  17. Decoding Dynamic Brain Patterns from Evoked Responses: A Tutorial on Multivariate Pattern Analysis Applied to Time Series Neuroimaging Data.

    PubMed

    Grootswagers, Tijl; Wardle, Susan G; Carlson, Thomas A

    2017-04-01

    Multivariate pattern analysis (MVPA) or brain decoding methods have become standard practice in analyzing fMRI data. Although decoding methods have been extensively applied in brain-computer interfaces, these methods have only recently been applied to time series neuroimaging data such as MEG and EEG to address experimental questions in cognitive neuroscience. In a tutorial style review, we describe a broad set of options to inform future time series decoding studies from a cognitive neuroscience perspective. Using example MEG data, we illustrate the effects that different options in the decoding analysis pipeline can have on experimental results where the aim is to "decode" different perceptual stimuli or cognitive states over time from dynamic brain activation patterns. We show that decisions made at both preprocessing (e.g., dimensionality reduction, subsampling, trial averaging) and decoding (e.g., classifier selection, cross-validation design) stages of the analysis can significantly affect the results. In addition to standard decoding, we describe extensions to MVPA for time-varying neuroimaging data including representational similarity analysis, temporal generalization, and the interpretation of classifier weight maps. Finally, we outline important caveats in the design and interpretation of time series decoding experiments.
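
    The core time-resolved decoding analysis described here can be sketched in a few lines: train and cross-validate a classifier independently at each time point of the epoched data. The sketch below uses scikit-learn with logistic regression as one arbitrary choice among the classifiers the review compares; toolboxes such as MNE-Python offer equivalent machinery.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def decode_over_time(epochs, labels, cv=5):
    """Train/test a classifier independently at each time point
    (epochs: trials x channels x times) and return mean cross-validated
    accuracy per time point -- the basic time-resolved decoding analysis."""
    n_trials, n_ch, n_times = epochs.shape
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return np.array([
        cross_val_score(clf, epochs[:, :, t], labels, cv=cv).mean()
        for t in range(n_times)
    ])

# toy usage: condition 1 carries signal on 10 channels after "stimulus onset"
rng = np.random.default_rng(2)
X = rng.standard_normal((80, 32, 50))
y = np.repeat([0, 1], 40)
X[y == 1, :10, 25:] += 0.8          # effect appears from sample 25 onward
acc = decode_over_time(X, y)
print("pre-onset acc ~%.2f, post-onset acc ~%.2f" % (acc[:25].mean(), acc[25:].mean()))
```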

  18. Decoding the attended speech stream with multi-channel EEG: implications for online, daily-life applications

    NASA Astrophysics Data System (ADS)

    Mirkovic, Bojana; Debener, Stefan; Jaeger, Manuela; De Vos, Maarten

    2015-08-01

    Objective. Recent studies have provided evidence that temporal envelope driven speech decoding from high-density electroencephalography (EEG) and magnetoencephalography recordings can identify the attended speech stream in a multi-speaker scenario. The present work replicated the previous high density EEG study and investigated the necessary technical requirements for practical attended speech decoding with EEG. Approach. Twelve normal hearing participants attended to one out of two simultaneously presented audiobook stories, while high density EEG was recorded. An offline iterative procedure eliminating those channels contributing the least to decoding provided insight into the necessary channel number and optimal cross-subject channel configuration. Aiming towards the future goal of near real-time classification with an individually trained decoder, the minimum duration of training data necessary for successful classification was determined by using a chronological cross-validation approach. Main results. Close replication of the previously reported results confirmed the method robustness. Decoder performance remained stable from 96 channels down to 25. Furthermore, for less than 15 min of training data, the subject-independent (pre-trained) decoder performed better than an individually trained decoder did. Significance. Our study complements previous research and provides information suggesting that efficient low-density EEG online decoding is within reach.
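
    The channel-elimination procedure can be approximated with a generic greedy backward search, sketched below. Note that the study's decoder reconstructs the attended speech envelope; for brevity this sketch scores channel subsets by cross-validated classification accuracy instead, so it illustrates the elimination loop rather than the exact decoder.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def eliminate_channels(X, y, min_channels=4, cv=5):
    """Greedy backward elimination in the spirit of the study's offline
    procedure: repeatedly drop the channel whose removal hurts
    cross-validated accuracy the least, recording accuracy along the way."""
    kept = list(range(X.shape[1]))
    history = []
    while len(kept) > min_channels:
        base = cross_val_score(LinearDiscriminantAnalysis(),
                               X[:, kept], y, cv=cv).mean()
        history.append((len(kept), base))
        scores = []
        for ch in kept:                      # score each candidate removal
            trial = [c for c in kept if c != ch]
            acc = cross_val_score(LinearDiscriminantAnalysis(),
                                  X[:, trial], y, cv=cv).mean()
            scores.append((acc, ch))
        _, worst = max(scores)               # removal keeping accuracy highest
        kept.remove(worst)
    return kept, history

# toy usage: 20 channels, only the first 4 informative
rng = np.random.default_rng(3)
X = rng.standard_normal((120, 20))
y = rng.integers(0, 2, 120)
X[y == 1, :4] += 1.0
kept, hist = eliminate_channels(X, y)
print("channels kept:", sorted(kept))
```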

  19. A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting

    PubMed Central

    Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao

    2014-01-01

    We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of short polar codes. To enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then introduce a method for checking errors in the input nodes of the decoder from the solutions of these equations. To further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Because the error-checking equations admit multiple solutions, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. In addition, to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate the probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than some existing decoding algorithms with the same code length. PMID:25540813
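
    One property behind such frozen-node checks is that the polar transform x = u F^{⊗n} is its own inverse over GF(2), so re-applying it to a hard-decision word exposes the frozen positions, which must be zero in any valid codeword. The sketch below shows only this basic check, not the paper's parallel algorithm or its CRC-aided selection; the frozen set used is a typical choice for N = 8.

```python
def polar_transform(u):
    """Apply the polar transform x = u F^{(x)n} (F = [[1,0],[1,1]])
    recursively.  Over GF(2) this transform is its own inverse."""
    N = len(u)
    if N == 1:
        return list(u)
    a = [x ^ y for x, y in zip(u[:N // 2], u[N // 2:])]
    return polar_transform(a) + polar_transform(u[N // 2:])

def frozen_checks(received, frozen):
    """Error check from the frozen nodes: transform the hard-decision
    word back to the u-domain; any nonzero frozen position signals
    that the received word is not a valid codeword."""
    u_hat = polar_transform(received)
    return [i for i in frozen if u_hat[i] != 0]

# toy usage: N = 8 with a typical frozen set; encode, then flip one bit
frozen = [0, 1, 2, 4]                        # least reliable channels for N = 8
u = [0] * 8
u[3], u[5], u[6], u[7] = 1, 0, 1, 1          # information bits
x = polar_transform(u)
x[2] ^= 1                                    # channel flips one bit
print("violated frozen checks:", frozen_checks(x, frozen))
```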

  20. Decoding Facial Expressions: A New Test with Decoding Norms.

    ERIC Educational Resources Information Center

    Leathers, Dale G.; Emigh, Ted H.

    1980-01-01

    Describes the development and testing of a new facial meaning sensitivity test designed to determine how specialized are the meanings that can be decoded from facial expressions. Demonstrates the use of the test to measure a receiver's current level of skill in decoding facial expressions. (JMF)

  1. Feedback for reinforcement learning based brain-machine interfaces using confidence metrics.

    PubMed

    Prins, Noeline W; Sanchez, Justin C; Prasad, Abhishek

    2017-06-01

    For brain-machine interfaces (BMI) to be used in activities of daily living by paralyzed individuals, the BMI should be as autonomous as possible. One of the challenges is how feedback is extracted and utilized in the BMI. Our long-term goal is to create autonomous BMIs that can utilize evaluative feedback from the brain to update the decoding algorithm and use it intelligently in order to adapt the decoder. In this study, we show how to extract the necessary evaluative feedback from a biologically realistic (synthetic) source, how to use both the quantity and the quality of the feedback, and how that feedback information can be incorporated into a reinforcement learning (RL) controller architecture to maximize its performance. Motivated by the perception-action-reward cycle (PARC) in the brain, which links reward to cognitive decision making and goal-directed behavior, we used a reward-based RL architecture, Actor-Critic RL, as the model. Instead of using an error signal, we envision using a reward signal from the nucleus accumbens (NAcc), which plays a key role in linking reward to motor behaviors, toward building an autonomous BMI. To deal with the complexity and non-stationarity of biological reward signals, we used a confidence metric to indicate the degree of feedback accuracy. This confidence was added to the Actor's weight update equation in the RL controller architecture. If the confidence was high (>0.2), the BMI decoder used this feedback to update its parameters. However, when the confidence was low, the BMI decoder ignored the feedback and did not update its parameters. The range between high confidence and low confidence was termed the 'ambiguous' region. When the feedback fell within this region, the BMI decoder updated its weights at a lower rate than when fully confident, as determined by the confidence. We used two biologically realistic models to generate synthetic data for MI (Izhikevich model) and NAcc (Humphries model) to validate the proposed controller architecture. In this work, we show how the overall performance of the BMI was improved by using a threshold close to the decision boundary to reject erroneous feedback. Additionally, we show that the stability of the system improved when the feedback was used with a threshold. The result of this study is a step towards making BMIs autonomous. While our method is not fully autonomous, the results demonstrate that the extensive training times necessary at the beginning of each BMI session can be significantly decreased. In our approach, decoder training time was limited to only 10 trials in the first BMI session; subsequent sessions used the previous session's weights to initialize the decoder. We also present a method whereby a threshold can be applied to any decoder with a less-than-perfect feedback signal, so that erroneous feedback can be avoided and the stability of the system increased.
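
    The gating rule described in the abstract is straightforward to sketch: apply the full actor update above the high-confidence threshold (0.2 in the paper), scale the update by the confidence inside the ambiguous region, and reject feedback below a low cutoff. The linear actor, learning rate, and the low-threshold value here are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def gated_actor_update(w, x, action, reward, confidence,
                       lr=0.05, hi=0.2, lo=0.05):
    """Confidence-gated actor weight update (sketch of the paper's rule):
    above `hi` the evaluative feedback is trusted and applied at the full
    learning rate; below `lo` (a hypothetical cutoff) it is ignored; in
    the ambiguous region between them, the update is scaled down by the
    confidence."""
    if confidence < lo:
        return w                               # reject feedback outright
    scale = 1.0 if confidence >= hi else confidence / hi
    # simple policy-gradient-style update on a linear actor: nudge the
    # taken action's weights along the neural state, signed by reward
    w = w.copy()
    w[action] += lr * scale * reward * x
    return w

# toy usage: 2 actions, 8-dim neural state, one ambiguous feedback sample
rng = np.random.default_rng(4)
w = np.zeros((2, 8))
x = rng.standard_normal(8)
w = gated_actor_update(w, x, action=1, reward=1.0, confidence=0.12)
print(np.abs(w).sum() > 0)    # True: partially applied (ambiguous region)
```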

  2. Feedback for reinforcement learning based brain-machine interfaces using confidence metrics

    NASA Astrophysics Data System (ADS)

    Prins, Noeline W.; Sanchez, Justin C.; Prasad, Abhishek

    2017-06-01

    Objective. For brain-machine interfaces (BMI) to be used in activities of daily living by paralyzed individuals, the BMI should be as autonomous as possible. One of the challenges is how feedback is extracted and utilized in the BMI. Our long-term goal is to create autonomous BMIs that can utilize evaluative feedback from the brain to update the decoding algorithm and use it intelligently in order to adapt the decoder. In this study, we show how to extract the necessary evaluative feedback from a biologically realistic (synthetic) source, how to use both the quantity and the quality of the feedback, and how that feedback information can be incorporated into a reinforcement learning (RL) controller architecture to maximize its performance. Approach. Motivated by the perception-action-reward cycle (PARC) in the brain, which links reward to cognitive decision making and goal-directed behavior, we used a reward-based RL architecture, Actor-Critic RL, as the model. Instead of using an error signal, we envision using a reward signal from the nucleus accumbens (NAcc), which plays a key role in linking reward to motor behaviors, toward building an autonomous BMI. To deal with the complexity and non-stationarity of biological reward signals, we used a confidence metric to indicate the degree of feedback accuracy. This confidence was added to the Actor’s weight update equation in the RL controller architecture. If the confidence was high (>0.2), the BMI decoder used this feedback to update its parameters. However, when the confidence was low, the BMI decoder ignored the feedback and did not update its parameters. The range between high confidence and low confidence was termed the ‘ambiguous’ region. When the feedback fell within this region, the BMI decoder updated its weights at a lower rate than when fully confident, as determined by the confidence. We used two biologically realistic models to generate synthetic data for MI (Izhikevich model) and NAcc (Humphries model) to validate the proposed controller architecture. Main results. In this work, we show how the overall performance of the BMI was improved by using a threshold close to the decision boundary to reject erroneous feedback. Additionally, we show that the stability of the system improved when the feedback was used with a threshold. Significance. The result of this study is a step towards making BMIs autonomous. While our method is not fully autonomous, the results demonstrate that the extensive training times necessary at the beginning of each BMI session can be significantly decreased. In our approach, decoder training time was limited to only 10 trials in the first BMI session; subsequent sessions used the previous session's weights to initialize the decoder. We also present a method whereby a threshold can be applied to any decoder with a less-than-perfect feedback signal, so that erroneous feedback can be avoided and the stability of the system increased.

  3. Edge-Related Activity Is Not Necessary to Explain Orientation Decoding in Human Visual Cortex.

    PubMed

    Wardle, Susan G; Ritchie, J Brendan; Seymour, Kiley; Carlson, Thomas A

    2017-02-01

    Multivariate pattern analysis is a powerful technique; however, a significant theoretical limitation in neuroscience is the ambiguity in interpreting the source of decodable information used by classifiers. This is exemplified by the continued controversy over the source of orientation decoding from fMRI responses in human V1. Recently Carlson (2014) identified a potential source of decodable information by modeling voxel responses based on the Hubel and Wiesel (1972) ice-cube model of visual cortex. The model revealed that activity associated with the edges of gratings covaries with orientation and could potentially be used to discriminate orientation. Here we empirically evaluate whether "edge-related activity" underlies orientation decoding from patterns of BOLD response in human V1. First, we systematically mapped classifier performance as a function of stimulus location using population receptive field modeling to isolate each voxel's overlap with a large annular grating stimulus. Orientation was decodable across the stimulus; however, peak decoding performance occurred for voxels with receptive fields closer to the fovea and overlapping with the inner edge. Critically, we did not observe the expected second peak in decoding performance at the outer stimulus edge as predicted by the edge account. Second, we evaluated whether voxels that contribute most to classifier performance have receptive fields that cluster in cortical regions corresponding to the retinotopic location of the stimulus edge. Instead, we find the distribution of highly weighted voxels to be approximately random, with a modest bias toward more foveal voxels. Our results demonstrate that edge-related activity is likely not necessary for orientation decoding. A significant theoretical limitation of multivariate pattern analysis in neuroscience is the ambiguity in interpreting the source of decodable information used by classifiers. For example, orientation can be decoded from BOLD activation patterns in human V1, even though orientation columns are at a finer spatial scale than 3T fMRI. Consequently, the source of decodable information remains controversial. Here we test the proposal that information related to the stimulus edges underlies orientation decoding. We map voxel population receptive fields in V1 and evaluate orientation decoding performance as a function of stimulus location in retinotopic cortex. We find orientation is decodable from voxels whose receptive fields do not overlap with the stimulus edges, suggesting edge-related activity does not substantially drive orientation decoding. Copyright © 2017 the authors.

  4. Visual perception as retrospective Bayesian decoding from high- to low-level features.

    PubMed

    Ding, Stephanie; Cueva, Christopher J; Tsodyks, Misha; Qian, Ning

    2017-10-24

    When a stimulus is presented, its encoding is known to progress from low- to high-level features. How these features are decoded to produce perception is less clear, and most models assume that decoding follows the same low- to high-level hierarchy of encoding. There are also theories arguing for global precedence, reversed hierarchy, or bidirectional processing, but they are descriptive without quantitative comparison with human perception. Moreover, observers often inspect different parts of a scene sequentially to form overall perception, suggesting that perceptual decoding requires working memory, yet few models consider how working-memory properties may affect decoding hierarchy. We probed decoding hierarchy by comparing absolute judgments of single orientations and relative/ordinal judgments between two sequentially presented orientations. We found that lower-level, absolute judgments failed to account for higher-level, relative/ordinal judgments. However, when ordinal judgment was used to retrospectively decode memory representations of absolute orientations, striking aspects of absolute judgments, including the correlation and forward/backward aftereffects between two reported orientations in a trial, were explained. We propose that the brain prioritizes decoding of higher-level features because they are more behaviorally relevant, and more invariant and categorical, and thus easier to specify and maintain in noisy working memory, and that more reliable higher-level decoding constrains less reliable lower-level decoding. Published under the PNAS license.

  5. Pattern-oriented modeling of agent-based complex systems: Lessons from ecology

    USGS Publications Warehouse

    Grimm, Volker; Revilla, Eloy; Berger, Uta; Jeltsch, Florian; Mooij, Wolf M.; Railsback, Steven F.; Thulke, Hans-Hermann; Weiner, Jacob; Wiegand, Thorsten; DeAngelis, Donald L.

    2005-01-01

    Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.

  6. Pattern-Oriented Modeling of Agent-Based Complex Systems: Lessons from Ecology

    NASA Astrophysics Data System (ADS)

    Grimm, Volker; Revilla, Eloy; Berger, Uta; Jeltsch, Florian; Mooij, Wolf M.; Railsback, Steven F.; Thulke, Hans-Hermann; Weiner, Jacob; Wiegand, Thorsten; DeAngelis, Donald L.

    2005-11-01

    Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.

  7. A prospective randomised study of alginate-drenched low stretch bandages as an alternative to conventional lymphologic compression bandaging.

    PubMed

    Kasseroller, Renato G; Brenner, Erich

    2010-03-01

    Breast-cancer-related lymphoedema, caused either by the tumour itself or by its therapy, is found in approximately 24% of all patients. It results in disability, psychological distress and reduced quality of life, so proper therapy for this entity is very important. Guidelines recommend therapy in two phases: an intensive phase I of 3 weeks for volume reduction and, between the cycles of phase I, a reduced phase II to maintain the result. During phase I therapy, manual lymphatic drainage often cannot be administered on weekends or holidays; only a reduced therapy, mainly the application of a more or less passive compression by bandaging, is administered. For this, conventional low-stretch bandages have hitherto been used. Several attempts have been made to overcome this disadvantage by impregnating or covering the bandage with sticky or adhesive substances such as India rubber, elastomers, polyacrylates, etc. Recently, new bandages have become available that are drenched with alginate, which becomes semi-rigid after drying for approximately 6 h. The aim of this study was to compare alginate bandaging with conventional lymphologic multilayered low-stretch bandaging with individual supportive lining, with respect to their capacity to limit re-congestion during exactly delimited periods of reduced decongestive therapy, as well as patient tolerance. From December 2007 until May 2008, 61 female patients with one-sided lymphoedema of the axillary tributary region after axillary dissection who underwent phase I complex decongestive therapy were prospectively enrolled. On weekends, group A received conventional low-stretch compressive bandaging, whereas group B received an alginate semi-rigid bandage. Arm volumes were measured before and after these bandages were applied. Additionally, the subjective skin sensations caused by the compression were measured by means of a five-level Likert scale. The initial volumes (V0) of the two groups (A, 2,939.0 ml ± 569.182; B, 3,062.6 ml ± 539.161) were of the same magnitude, with somewhat smaller values in group A. The same was true for the final volumes (V6), measured at day 22 (A, 2,674.5 ml ± 480.427; B, 2,740.1 ml ± 503.593). During the weekends, the arm volumes re-increased (first weekend: A, 16.4 ml vs. B, 4.7 ml; second weekend: A, 14.2 ml vs. B, 2.7 ml; third weekend: A, 7.5 ml vs. B, 1.1 ml), with a significantly smaller volume increase in the alginate group. There were no serious side effects in either group. Concerning patient comfort, the ratings of the alginate group were clearly better than those of the conventionally bandaged group, and the volume changes in the alginate group showed fewer fluctuations. In summary, alginate bandages offer a good alternative to conventional bandaging, with distinct advantages for patients when administered properly.

  8. [EFFICACY OF CYTOFLAVIN IN COMPLEX TREATMENT OF DIABETIC FOOT SYNDROME].

    PubMed

    Skrypko, V; Kovalenko, A; Zaplutanov, V; Kharitonova, T; Myhaloyko, I

    2017-04-01

    The study involved 97 patients with severe diabetic foot syndrome (DFS) and subcompensated type 2 diabetes. All patients had medial calcification of the foot and lower-leg arteries of varying severity. Patients were divided into two groups by stratified randomization according to treatment. Group I received the standard therapy indicated for DFS. Group II additionally received Cytoflavin, 10 ml in 200 ml of 0.9% NaCl for 10 days, followed by transfer to the tablet form (2 tablets twice daily, orally, for one month). We noted a positive treatment trend in patients who received Cytoflavin in addition to standard therapy. Thus, including Cytoflavin in the complex surgical treatment of patients with the mixed form of DFS reduces the severity of distal polyneuropathy, improves tissue oxygenation and restores the enzymatic activity of the antioxidant system, manifesting the neuroprotective, antioxidant and antihypoxic effects of the drug and substantiating the indications for its use in this pathology.

  9. Decoding and Encoding Facial Expressions in Preschool-Age Children.

    ERIC Educational Resources Information Center

    Zuckerman, Miron; Przewuzman, Sylvia J.

    1979-01-01

    Preschool-age children drew, decoded, and encoded facial expressions depicting five different emotions. Accuracy of drawing, decoding and encoding each of the five emotions was consistent across the three tasks; decoding ability was correlated with drawing ability among female subjects, but neither of these abilities was correlated with encoding…

  10. Multichannel error correction code decoder

    NASA Technical Reports Server (NTRS)

    Wagner, Paul K.; Ivancic, William D.

    1993-01-01

    A brief overview of a processing satellite for a mesh very-small-aperture (VSAT) communications network is provided. The multichannel error correction code (ECC) decoder system, the uplink signal generation and link simulation equipment, and the time-shared decoder are described. The testing is discussed. Applications of the time-shared decoder are recommended.

  11. Adaptive bit plane quadtree-based block truncation coding for image compression

    NASA Astrophysics Data System (ADS)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

    Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low-bit-rate compression, at the cost of lower decoded-image quality, especially for images with rich texture. To solve this problem, in this paper a quadtree-based block truncation coding algorithm combined with adaptive bit-plane transmission is proposed. First, the direction of the edge in each block is detected using the Sobel operator. For blocks of minimal size, an adaptive bit plane is utilized to optimize the BTC, depending on the MSE loss when encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared to other state-of-the-art BTC variants, making it desirable for real-time image compression applications.
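
    For readers unfamiliar with the base coder being optimized, here is a minimal AMBTC encode/decode of a single block: a 1-bit plane split at the block mean, plus the means of the two resulting groups as reconstruction levels. The quadtree splitting and adaptive bit-plane transmission of the paper are omitted.

```python
import numpy as np

def ambtc_encode(block):
    """Absolute moment BTC for one block: a 1-bit plane (pixel >= mean)
    plus low/high reconstruction levels, which are the means of the
    below-mean and above-mean pixel groups."""
    m = block.mean()
    plane = block >= m
    hi = block[plane].mean() if plane.any() else m
    lo = block[~plane].mean() if (~plane).any() else m
    return lo, hi, plane

def ambtc_decode(lo, hi, plane):
    """Reconstruct the block from the bit plane and the two levels."""
    return np.where(plane, hi, lo)

# toy usage on a random 4x4 block
rng = np.random.default_rng(5)
blk = rng.integers(0, 256, (4, 4)).astype(float)
lo, hi, plane = ambtc_encode(blk)
rec = ambtc_decode(lo, hi, plane)
print("MSE:", ((blk - rec) ** 2).mean())
```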

  12. Preparation of Radiolabeled Compounds for the U.S. Army Drug Development Program.

    DTIC Science & Technology

    1996-12-01

    with borane-THF complex gave an 83% radiochemical yield of [14C]-19 after several recrystallizations from ethanol. In the master synthesis, alcohol... amide [14C]-15 in 88% radiochemical yield and 99% radiochemical purity after chromatography. Reduction of [14C]-15 with borane-THF complex afforded a... 163 mCi) and tetrahydrofuran (freshly distilled) (28.6 mL) in a 100-mL RBF was cooled to 0 °C by an ice-bath. Borane-tetrahydrofuran complex (10.8 mL

  13. A software simulation study of a (255,223) Reed-Solomon encoder-decoder

    NASA Technical Reports Server (NTRS)

    Pollara, F.

    1985-01-01

    A set of software programs which simulates a (255,223) Reed-Solomon encoder/decoder pair is described. The transform decoder algorithm uses a modified Euclid algorithm, and closely follows the pipeline architecture proposed for the hardware decoder. Uncorrectable error patterns are detected by a simple test, and the inverse transform is computed by a finite field FFT. Numerical examples of the decoder operation are given for some test codewords, with and without errors. The use of the software package is briefly described.
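
    The heart of any software RS codec is GF(2^8) arithmetic. Below is a minimal sketch of field multiplication and Horner-rule polynomial evaluation (the operation behind syndrome computation); the field polynomial 0x11D is a common choice and an assumption here, since the abstract does not state the one used.

```python
def gf_mul(a, b, poly=0x11D):
    """Multiply in GF(2^8): carry-less multiply with reduction modulo
    the field polynomial (0x11D here; the paper's polynomial is not
    given in the abstract)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= poly
    return r

def poly_eval(coeffs, x):
    """Evaluate a polynomial over GF(2^8) at x by Horner's rule --
    syndromes of a received word r(x) are r(alpha^i), and with no
    errors every syndrome is zero."""
    acc = 0
    for c in coeffs:
        acc = gf_mul(acc, x) ^ c
    return acc

# toy usage: a field multiply and one "syndrome"-style evaluation
alpha = 2                       # a generator of the field for this polynomial
print(hex(gf_mul(0x57, 0x83)))
print(hex(poly_eval([1, 0, 0, 1], alpha)))
```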

  14. Enhanced performance for differential detection in coherent Brillouin optical time-domain analysis sensors

    NASA Astrophysics Data System (ADS)

    Shao, Liyang; Zhang, Yunpeng; Li, Zonglei; Zhang, Zhiyong; Zou, Xihua; Luo, Bin; Pan, Wei; Yan, Lianshan

    2016-11-01

    Logarithmic detectors (LogDs) have been used in coherent Brillouin optical time-domain analysis (BOTDA) sensors to reduce the effect of phase fluctuation, demodulation complexity, and measurement time. However, because of the inherent properties of LogDs, a DC component at the level of hundreds of millivolts can be generated that prohibits high-gain signal amplification (SA), resulting in unacceptable data acquisition (DAQ) inaccuracies and decoding errors during prototype integration. By generating a reference light at a level similar to the probe light, differential detection can be applied to remove the DC component automatically, using a differential amplifier before the DAQ process. Therefore, high-gain SA can be employed to reduce quantization errors. The signal-to-noise ratio of the weak Brillouin gain signal is improved from ~11.5 to ~21.8 dB. A BOTDA prototype is implemented based on the proposed scheme. The experimental results show that the measurement accuracy of the Brillouin frequency shift (BFS) is improved from ±1.9 to ±0.8 MHz at the end of a 40-km sensing fiber.

  15. Taking Attention Away from the Auditory Modality: Context-dependent Effects on Early Sensory Encoding of Speech.

    PubMed

    Xie, Zilong; Reetzke, Rachel; Chandrasekaran, Bharath

    2018-05-24

    Increasing visual perceptual load can reduce pre-attentive auditory cortical activity to sounds, a reflection of the limited and shared attentional resources for sensory processing across modalities. Here, we demonstrate that modulating visual perceptual load can impact the early sensory encoding of speech sounds, and that the impact of visual load is highly dependent on the predictability of the incoming speech stream. Participants (n = 20, 9 females) performed a visual search task of high (target similar to distractors) and low (target dissimilar to distractors) perceptual load, while early auditory electrophysiological responses were recorded to native speech sounds. Speech sounds were presented either in a 'repetitive context', or a less predictable 'variable context'. Independent of auditory stimulus context, pre-attentive auditory cortical activity was reduced during high visual load, relative to low visual load. We applied a data-driven machine learning approach to decode speech sounds from the early auditory electrophysiological responses. Decoding performance was found to be poorer under conditions of high (relative to low) visual load, when the incoming acoustic stream was predictable. When the auditory stimulus context was less predictable, decoding performance was substantially greater for the high (relative to low) visual load conditions. Our results provide support for shared attentional resources between visual and auditory modalities that substantially influence the early sensory encoding of speech signals in a context-dependent manner. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  16. Low-power priority Address-Encoder and Reset-Decoder data-driven readout for Monolithic Active Pixel Sensors for tracker system

    NASA Astrophysics Data System (ADS)

    Yang, P.; Aglieri, G.; Cavicchioli, C.; Chalmet, P. L.; Chanlek, N.; Collu, A.; Gao, C.; Hillemanns, H.; Junique, A.; Kofarago, M.; Keil, M.; Kugathasan, T.; Kim, D.; Kim, J.; Lattuca, A.; Marin Tobon, C. A.; Marras, D.; Mager, M.; Martinengo, P.; Mazza, G.; Mugnier, H.; Musa, L.; Puggioni, C.; Rousset, J.; Reidt, F.; Riedler, P.; Snoeys, W.; Siddhanta, S.; Usai, G.; van Hoorne, J. W.; Yi, J.

    2015-06-01

    Active Pixel Sensors used in high-energy particle physics require low power consumption to reduce the detector material budget, low integration time to reduce the possibility of pile-up, and fast readout to improve the detector data capability. To satisfy these requirements, a novel Address-Encoder and Reset-Decoder (AERD) asynchronous circuit for fast readout of a pixel matrix has been developed. The AERD data-driven readout architecture performs address encoding and reset decoding based on an arbitration tree, and allows only the hit pixels to be read out. Compared to the traditional rolling-shutter readout structure in Monolithic Active Pixel Sensors (MAPS), AERD achieves a low readout time and a low power consumption, especially at low hit occupancies. The readout is controlled at the chip periphery with a signal synchronous with the clock, which allows good separation of the digital and analogue signals in the matrix and a reduction of the power consumption. The AERD circuit has been implemented in the TowerJazz 180 nm CMOS Imaging Sensor (CIS) process with full complementary CMOS logic in the pixel. It works at 10 MHz with a matrix height of 15 mm. The energy consumed to read out one pixel is around 72 pJ. A scheme to boost the readout speed to 40 MHz is also discussed. The sensor chip equipped with AERD has been produced and characterised. Test results including electrical beam measurement are presented.

  17. Error-trellis syndrome decoding techniques for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1985-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding when applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  18. Decoding the Dopamine Signal in Macaque Prefrontal Cortex: A Simulation Study Using the Cx3Dp Simulator

    PubMed Central

    Spühler, Isabelle Ayumi; Hauri, Andreas

    2013-01-01

    Dopamine transmission in the prefrontal cortex plays an important role in reward based learning, working memory and attention. Dopamine is thought to be released non-synaptically into the extracellular space and to reach distant receptors through diffusion. This simulation study examines how the dopamine signal might be decoded by the recipient neuron. The simulation was based on parameters from the literature and on our own quantified, structural data from macaque prefrontal area 10. The change in extracellular dopamine concentration was estimated at different distances from release sites and related to the affinity of the dopamine receptors. Due to the sparse and random distribution of release sites, a transient heterogeneous pattern of dopamine concentration emerges. Our simulation predicts, however, that at any point in the simulation volume there is sufficient dopamine to bind and activate high-affinity dopamine receptors. We propose that dopamine is broadcast to its distant receptors and any change from the local baseline concentration might be decoded by a transient change in the binding probability of dopamine receptors. Dopamine could thus provide a graduated ‘teaching’ signal to reinforce concurrently active synapses and cell assemblies. In conditions of highly reduced or highly elevated dopamine levels the simulations predict that relative changes in the dopamine signal can no longer be decoded, which might explain why cognitive deficits are observed in patients with Parkinson’s disease, or induced through drugs blocking dopamine reuptake. PMID:23951205
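
    The decoding notion in this abstract, a change in receptor binding probability with concentration, follows the standard one-site occupancy formula p = C/(C + Kd). The tiny sketch below illustrates it; the Kd and concentration values are illustrative only, not taken from the simulation study.

```python
def occupancy(conc_nM, kd_nM):
    """Equilibrium binding probability of a receptor at ligand
    concentration conc_nM (simple one-site model, p = C / (C + Kd))."""
    return conc_nM / (conc_nM + kd_nM)

# illustrative numbers only: a high-affinity receptor (Kd ~ 10 nM)
# sampled at several local dopamine concentrations -- a change in
# concentration maps onto a change in binding probability
for c in (2.0, 5.0, 20.0, 100.0):
    print(f"{c:6.1f} nM -> occupancy {occupancy(c, 10.0):.2f}")
```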

  19. HCV IRES domain IIb affects the configuration of coding RNA in the 40S subunit's decoding groove

    PubMed Central

    Filbin, Megan E.; Kieft, Jeffrey S.

    2011-01-01

    Hepatitis C virus (HCV) uses a structured internal ribosome entry site (IRES) RNA to recruit the translation machinery to the viral RNA and begin protein synthesis without the ribosomal scanning process required for canonical translation initiation. Different IRES structural domains are used in this process, which begins with direct binding of the 40S ribosomal subunit to the IRES RNA and involves specific manipulation of the translational machinery. We have found that upon initial 40S subunit binding, the stem–loop domain of the IRES that contains the start codon unwinds and adopts a stable configuration within the subunit's decoding groove. This configuration depends on the sequence and structure of a different stem–loop domain (domain IIb) located far from the start codon in sequence, but spatially proximal in the IRES•40S complex. Mutation of domain IIb results in misconfiguration of the HCV RNA in the decoding groove that includes changes in the placement of the AUG start codon, and a substantial decrease in the ability of the IRES to initiate translation. Our results show that two distal regions of the IRES are structurally communicating at the initial step of 40S subunit binding and suggest that this is an important step in driving protein synthesis. PMID:21606179

  20. HCV IRES domain IIb affects the configuration of coding RNA in the 40S subunit's decoding groove.

    PubMed

    Filbin, Megan E; Kieft, Jeffrey S

    2011-07-01

    Hepatitis C virus (HCV) uses a structured internal ribosome entry site (IRES) RNA to recruit the translation machinery to the viral RNA and begin protein synthesis without the ribosomal scanning process required for canonical translation initiation. Different IRES structural domains are used in this process, which begins with direct binding of the 40S ribosomal subunit to the IRES RNA and involves specific manipulation of the translational machinery. We have found that upon initial 40S subunit binding, the stem-loop domain of the IRES that contains the start codon unwinds and adopts a stable configuration within the subunit's decoding groove. This configuration depends on the sequence and structure of a different stem-loop domain (domain IIb) located far from the start codon in sequence, but spatially proximal in the IRES•40S complex. Mutation of domain IIb results in misconfiguration of the HCV RNA in the decoding groove that includes changes in the placement of the AUG start codon, and a substantial decrease in the ability of the IRES to initiate translation. Our results show that two distal regions of the IRES are structurally communicating at the initial step of 40S subunit binding and suggest that this is an important step in driving protein synthesis.

  1. Software package for performing experiments about the convolutionally encoded Voyager 1 link

    NASA Technical Reports Server (NTRS)

    Cheng, U.

    1989-01-01

    A software package enabling engineers to conduct experiments to determine the actual performance of long constraint-length convolutional codes over the Voyager 1 communication link directly from the Jet Propulsion Laboratory (JPL) has been developed. Using this software, engineers are able to enter test data from the Laboratory in Pasadena, California. The software encodes the data and then sends the encoded data to a personal computer (PC) at the Goldstone Deep Space Complex (GDSC) over telephone lines. The encoded data are sent to the transmitter by the PC at GDSC. The received data, after being echoed back by Voyager 1, are first sent to the PC at GDSC, and then are sent back to the PC at the Laboratory over telephone lines for decoding and further analysis. All of these operations are fully integrated and are completely automatic. Engineers can control the entire software system from the Laboratory. The software encoder and the hardware decoder interface were developed for other applications, and have been modified appropriately for integration into the system so that their existence is transparent to the users. This software provides: (1) data entry facilities, (2) communication protocol for telephone links, (3) data displaying facilities, (4) integration with the software encoder and the hardware decoder, and (5) control functions.

  2. A comparative study of prebiotic and present day translational models

    NASA Technical Reports Server (NTRS)

    Rein, R.; Raghunathan, G.; Mcdonald, J.; Shibata, M.; Srinivasan, S.

    1986-01-01

    It is generally recognized that the understanding of the molecular basis of primitive translation is a fundamental step in developing a theory of the origin of life. However, even in modern molecular biology, the mechanism for the decoding of messenger RNA triplet codons into an amino acid sequence of a protein on the ribosome is understood incompletely. Most of the proposed models for prebiotic translation lack, not only experimental support, but also a careful theoretical scrutiny of their compatibility with well understood stereochemical and energetic principles of nucleic acid structure, molecular recognition principles, and the chemistry of peptide bond formation. Present studies are concerned with comparative structural modelling and mechanistic simulation of the decoding apparatus ranging from those proposed for prebiotic conditions to the ones involved in modern biology. Any primitive decoding machinery based on nucleic acids and proteins, and most likely the modern day system, has to satisfy certain geometrical constraints. The charged amino acyl and the peptidyl termini of successive adaptors have to be adjacent in space in order to satisfy the stereochemical requirements for amide bond formation. Simultaneously, the same adaptors have to recognize successive codons on the messenger. This translational complex has to be realized by components that obey nucleic acid conformational principles, stabilities, and specificities. This generalized condition greatly restricts the number of acceptable adaptor structures.

  3. Decoding the genome with an integrative analysis tool: combinatorial CRM Decoder.

    PubMed

    Kang, Keunsoo; Kim, Joomyeong; Chung, Jae Hoon; Lee, Daeyoup

    2011-09-01

    The identification of genome-wide cis-regulatory modules (CRMs) and characterization of their associated epigenetic features are fundamental steps toward the understanding of gene regulatory networks. Although integrative analysis of available genome-wide information can provide new biological insights, the lack of novel methodologies has become a major bottleneck. Here, we present a comprehensive analysis tool called combinatorial CRM decoder (CCD), which utilizes publicly available information to identify and characterize genome-wide CRMs in a species of interest. CCD first defines the set of epigenetic features significantly associated with a set of known CRMs as a code called the 'trace code', and subsequently uses the trace code to pinpoint putative CRMs throughout the genome. Using 61 genome-wide data sets obtained from 17 independent mouse studies, CCD successfully catalogued ∼12 600 CRMs (five distinct classes) including polycomb repressive complex 2 target sites as well as imprinting control regions. Interestingly, we discovered that ∼4% of the identified CRMs belong to at least two different classes; these 'multi-functional CRMs' suggest functional importance for regulating spatiotemporal gene expression. From these examples, we show that CCD can be applied to any potential genome-wide datasets and therefore will shed light on unveiling genome-wide CRMs in various species.

  4. Capacities for theory of mind, metacognition, and neurocognitive function are independently related to emotional recognition in schizophrenia.

    PubMed

    Lysaker, Paul H; Leonhardt, Bethany L; Brüne, Martin; Buck, Kelly D; James, Alison; Vohs, Jenifer; Francis, Michael; Hamm, Jay A; Salvatore, Giampaolo; Ringer, Jamie M; Dimaggio, Giancarlo

    2014-09-30

    While many with schizophrenia spectrum disorders experience difficulties understanding the feelings of others, little is known about the psychological antecedents of these deficits. To explore these issues we examined whether deficits in mental state decoding, mental state reasoning and metacognitive capacity predict performance on an emotion recognition task. Participants were 115 adults with a schizophrenia spectrum disorder and 58 adults with substance use disorders but no history of a diagnosis of psychosis who completed the Eyes and Hinting Test. Metacognitive capacity was assessed using the Metacognitive Assessment Scale Abbreviated and emotion recognition was assessed using the Bell Lysaker Emotion Recognition Test. Results revealed that the schizophrenia patients performed more poorly than controls on tests of emotion recognition, mental state decoding, mental state reasoning and metacognition. Lesser capacities for mental state decoding, mental state reasoning and metacognition were all uniquely related to emotion recognition within the schizophrenia group, even after controlling for neurocognition and symptoms in a stepwise multiple regression. Results suggest that deficits in emotion recognition in schizophrenia may partly result from a combination of impairments in the ability to judge the cognitive and affective states of others and difficulties forming complex representations of self and others. Published by Elsevier Ireland Ltd.

  5. Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation

    PubMed Central

    Oliva, Aude

    2017-01-01

    Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals. PMID:28451630

  6. [C-terminal fragment of ribosomal protein S15 is located at the decoding site of the human ribosome].

    PubMed

    Khaĭrulina, Iu S; Molotkov, M V; Bulygin, K N; Graĭfer, D M; Ven'iaminova, A G; Karpova, G G

    2008-01-01

    Protein S15 is a characteristic component of the mammalian 80S ribosome that neighbors the mRNA codon at the decoding site and the downstream triplets. In this study we determined the S15 protein fragments located close to mRNA positions +4 to +12 with respect to the first nucleotide of the P-site codon on the human ribosome. For cross-linking to ribosomal protein S15, a set of mRNA analogues was used that contained the triplet UUU/UUC at the 5'-terminus and a perfluorophenyl azide-modified uridine 3' of this triplet. The locations of the mRNA analogues on the ribosome were governed by tRNAPhe, cognate to the UUU/UUC triplet, targeted to the P site. Cross-linked S15 protein was isolated from complexes of 80S ribosomes with tRNAPhe and mRNA analogues that had been irradiated with mild UV light, and was subsequently cleaved with CNBr, which splits the polypeptide chain after methionine residues. Analysis of the modified oligopeptides resulting from the cleavage revealed that in all cases the cross-linking site was located in the C-terminal fragment 111-145 of protein S15, indicating that this fragment is involved in the formation of the decoding site of the eukaryotic ribosome.

  7. Information-reduced Carrier Synchronization of Iterative Decoded BPSK and QPSK using Soft Decision (Extrinsic) Feedback

    NASA Technical Reports Server (NTRS)

    Simon, Marvin; Valles, Esteban; Jones, Christopher

    2008-01-01

    This paper addresses the carrier-phase estimation problem under low SNR conditions as are typical of turbo- and LDPC-coded applications. In previous publications by the first author, closed-loop carrier synchronization schemes for error-correction coded BPSK and QPSK modulation were proposed that were based on feeding back hard data decisions at the input of the loop, the purpose being to remove the modulation prior to attempting to track the carrier phase as opposed to the more conventional decision-feedback schemes that incorporate such feedback inside the loop. In this paper, we consider an alternative approach wherein the extrinsic soft information from the iterative decoder of turbo or LDPC codes is instead used as the feedback.
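
    The modulation-wiping idea generalizes naturally to a soft-decision loop. The sketch below shows a generic first-order BPSK phase tracker that multiplies the phase error by a soft symbol estimate tanh(LLR/2), the kind of extrinsic quantity an iterative decoder would feed back. It is not the paper's exact loop, and the stand-in LLRs in the usage example are idealized.

```python
import numpy as np

def soft_decision_pll(r, llr, mu=0.05):
    """First-order phase tracker for BPSK that wipes the modulation
    with soft symbol estimates s_hat = tanh(LLR / 2), e.g. extrinsic
    LLRs fed back from an iterative decoder, instead of hard
    decisions.  Generic sketch, not the paper's scheme."""
    theta = 0.0
    est = np.empty(len(r))
    for k, (rk, Lk) in enumerate(zip(r, llr)):
        s_hat = np.tanh(Lk / 2.0)                        # soft symbol in [-1, 1]
        e = np.imag(rk * np.exp(-1j * theta)) * s_hat    # modulation-wiped error
        theta += mu * e
        est[k] = theta
    return est

# toy usage: BPSK at low SNR with a 0.3 rad phase offset; confident
# stand-in LLRs derived from the true symbols play the decoder's role
rng = np.random.default_rng(6)
sym = rng.choice([-1.0, 1.0], 2000)
r = sym * np.exp(1j * 0.3) + 0.5 * (rng.standard_normal(2000)
                                    + 1j * rng.standard_normal(2000))
llr = 4.0 * sym
print("final phase estimate: %.2f rad" % soft_decision_pll(r, llr)[-1])
```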

  8. High data rate Reed-Solomon encoding and decoding using VLSI technology

    NASA Technical Reports Server (NTRS)

    Miller, Warner; Morakis, James

    1987-01-01

    Presented is an implementation of a Reed-Solomon encoder and decoder that corrects up to 16 symbol errors, with each symbol being 8 bits. This Reed-Solomon (RS) code is an efficient error-correcting code that the National Aeronautics and Space Administration (NASA) will use in future space communications missions. A Very Large Scale Integration (VLSI) implementation of the encoder and decoder accepts data rates up to 80 Mbps. A total of seven chips are needed for the decoder (four of the seven decoding chips are customized using 3-micron Complementary Metal Oxide Semiconductor (CMOS) technology) and one chip is required for the encoder. The decoder operates with the symbol clock as the system clock for the chip set. Approximately 1.65 billion Galois field (GF) operations per second are achieved with the decoder chip set, and 640 MOPS are achieved with the encoder chip.

  9. The basis of orientation decoding in human primary visual cortex: fine- or coarse-scale biases?

    PubMed

    Maloney, Ryan T

    2015-01-01

    Orientation signals in human primary visual cortex (V1) can be reliably decoded from the multivariate pattern of activity as measured with functional magnetic resonance imaging (fMRI). The precise underlying source of these decoded signals (whether by orientation biases at a fine or coarse scale in cortex) remains a matter of some controversy, however. Freeman and colleagues (J Neurosci 33: 19695-19703, 2013) recently showed that the accuracy of decoding of spiral patterns in V1 can be predicted by a voxel's preferred spatial position (the population receptive field) and its coarse orientation preference, suggesting that coarse-scale biases are sufficient for orientation decoding. Whether they are also necessary for decoding remains an open question, and one with implications for the broader interpretation of multivariate decoding results in fMRI studies. Copyright © 2015 the American Physiological Society.

  10. Emotion Decoding and Incidental Processing Fluency as Antecedents of Attitude Certainty.

    PubMed

    Petrocelli, John V; Whitmire, Melanie B

    2017-07-01

    Previous research demonstrates that attitude certainty influences the degree to which an attitude changes in response to persuasive appeals. In the current research, decoding emotions from facial expressions and incidental processing fluency, during attitude formation, are examined as antecedents of both attitude certainty and attitude change. In Experiment 1, participants who decoded anger or happiness during attitude formation expressed greater attitude certainty and showed more resistance to persuasion than participants who decoded sadness. By manipulating the emotion decoded, the diagnosticity of processing fluency experienced during emotion decoding, and the gaze direction of the social targets, Experiment 2 suggests that the link between emotion decoding and attitude certainty results from incidental processing fluency. Experiment 3 demonstrated that fluency in processing irrelevant stimuli influences attitude certainty, which in turn influences resistance to persuasion. Implications for appraisal-based accounts of attitude formation and attitude change are discussed.

  11. Social Media Use, Friendship Quality, and the Moderating Role of Anxiety in Adolescents with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    van Schalkwyk, Gerrit I.; Marin, Carla E.; Ortiz, Mayra; Rolison, Max; Qayyum, Zheala; McPartland, James C.; Lebowitz, Eli R.; Volkmar, Fred R.; Silverman, Wendy K.

    2017-01-01

    Social media holds promise as a technology to facilitate social engagement, but may displace offline social activities. Adolescents with ASD are well suited to capitalize on the unique features of social media, which requires less decoding of complex social information. In this cross-sectional study, we assessed social media use, anxiety and…

  12. Design Experiments: Developing and Testing an Intervention for Elementary School-Age Students Who Use Non-Mainstream American English Dialects

    ERIC Educational Resources Information Center

    Thomas-Tate, Shurita; Connor, Carol McDonald; Johnson, Lakeisha

    2013-01-01

    Reading comprehension, defined as the active extraction and construction of meaning from all kinds of text, requires children to fluently decode and understand what they are reading. Basic processes underlying reading comprehension are complex and call on the oral language system and a conscious understanding of this system, i.e., metalinguistic…

  13. Reading Strategies in a L2: A Study on Machine Translation

    ERIC Educational Resources Information Center

    Karnal, Adriana Riess; Pereira, Vera Vanmacher

    2015-01-01

    This article aims at understanding cognitive strategies which are involved in reading academic texts in English as a L2/FL. Specifically, we focus on reading comprehension when a text is read either using Google translator or not. From this perspective we must consider the reading process in its complexity not only as a decoding process. We follow…

  14. Decoding Children's Expressions of Affect.

    ERIC Educational Resources Information Center

    Feinman, Joel A.; Feldman, Robert S.

    Mothers' ability to decode the emotional expressions of their male and female children was compared to the decoding ability of non-mothers. Happiness, sadness, fear and anger were induced in children in situations that varied in terms of spontaneous and role-played encoding modes. It was hypothesized that mothers would be more accurate decoders of…

  15. Decoding Area Studies and Interdisciplinary Majors: Building a Framework for Entry-Level Students

    ERIC Educational Resources Information Center

    MacPherson, Kristina Ruth

    2015-01-01

    Decoding disciplinary expertise for novices is increasingly part of the undergraduate curriculum. But how might area studies and other interdisciplinary programs, which require integration of courses from multiple disciplines, decode expertise in a similar fashion? Additionally, as a part of decoding area studies and interdisciplines, how might a…

  16. 47 CFR 11.12 - Two-tone Attention Signal encoder and decoder.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    47 CFR § 11.12 (Title 47—Telecommunication, Emergency Alert System (EAS), General): Two-tone Attention Signal encoder and decoder. Existing two-tone Attention Signal encoder and decoder equipment type accepted for use as Emergency Broadcast System equipment under...

  17. 47 CFR 11.12 - Two-tone Attention Signal encoder and decoder.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    47 CFR § 11.12 (Title 47—Telecommunication, Emergency Alert System (EAS), General): Two-tone Attention Signal encoder and decoder. Existing two-tone Attention Signal encoder and decoder equipment type accepted for use as Emergency Broadcast System equipment under...

  18. Sequential Syndrome Decoding of Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    The algebraic structure of convolutional codes is reviewed, and sequential syndrome decoding is applied to these codes. These concepts are then used to demonstrate, by example, actual sequential decoding using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be utilized to sequentially find the minimum-weight error sequence.
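
    A working toy version of the stack algorithm is compact enough to show in full: a priority stack of partial paths through a rate-1/2, K = 3 convolutional code, ordered by the Fano metric for a binary symmetric channel. The code, crossover probability, and message below are illustrative choices, not taken from the memorandum.

```python
import heapq
from math import log2

G = (0b111, 0b101)          # rate-1/2, K = 3 convolutional code (7, 5 octal)

def branch_bits(state, bit):
    """Encoder output for one input bit from a given 2-bit state."""
    reg = (bit << 2) | state                 # newest bit in the MSB
    out = tuple(bin(reg & g).count("1") & 1 for g in G)
    return out, reg >> 1                     # next state drops the oldest bit

def stack_decode(received, p=0.05):
    """Stack (ZJ) sequential decoder with the Fano metric for a BSC:
    keep a priority stack of partial paths, always extend the best one,
    and stop when a path spans the whole received sequence."""
    R = 0.5
    m_ok, m_err = log2(2 * (1 - p)) - R, log2(2 * p) - R
    stack = [(0.0, 0, 0, ())]                # (-metric, depth, state, bits)
    while stack:
        neg_m, depth, state, bits = heapq.heappop(stack)
        if depth == len(received):
            return bits                      # first full-length path wins
        for b in (0, 1):
            out, nxt = branch_bits(state, b)
            dm = sum(m_ok if o == r else m_err
                     for o, r in zip(out, received[depth]))
            heapq.heappush(stack, (neg_m - dm, depth + 1, nxt, bits + (b,)))

# toy usage: encode 01101 (plus two flushing zeros), flip one channel bit
msg = (0, 1, 1, 0, 1, 0, 0)
state, chan = 0, []
for b in msg:
    out, state = branch_bits(state, b)
    chan.append(list(out))
chan[2][0] ^= 1                              # one channel error
print("decoded:", stack_decode([tuple(c) for c in chan])[:5])
```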

  19. Contributions of phonological awareness, phonological short-term memory, and rapid automated naming, toward decoding ability in students with mild intellectual disability.

    PubMed

    Soltani, Amanallah; Roslan, Samsilah

    2013-03-01

    Reading decoding ability is a fundamental skill for acquiring the word-specific orthographic information necessary for skilled reading. Decoding ability and its underlying phonological processing skills have been heavily investigated among typically developing students. However, the issue has rarely been examined among students with intellectual disability, who commonly suffer from reading decoding problems. This study is aimed at determining the contributions of phonological awareness, phonological short-term memory, and rapid automated naming, as three well-known phonological processing skills, to decoding ability among 60 participants with mild intellectual disability of unspecified origin, ranging from 15 to 23 years old. The results of the correlation analysis revealed that all three aspects of phonological processing are significantly correlated with decoding ability. Furthermore, a series of hierarchical regression analyses indicated that after controlling for the effect of IQ, phonological awareness and rapid automated naming are two distinct sources of decoding ability, but phonological short-term memory contributes significantly to decoding ability under the realm of phonological awareness. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. 45 Gb/s low complexity optical front-end for soft-decision LDPC decoders.

    PubMed

    Sakib, Meer Nazmus; Moayedi, Monireh; Gross, Warren J; Liboiron-Ladouceur, Odile

    2012-07-30

    In this paper a low-complexity and energy-efficient 45 Gb/s soft-decision optical front-end to be used with soft-decision low-density parity-check (LDPC) decoders is demonstrated. The results show that the optical front-end exhibits a net coding gain of 7.06 and 9.62 dB at post-forward-error-correction bit error rates of 10⁻⁷ and 10⁻¹², respectively, for the long-block-length LDPC(32768,26803) code. The gain over a hard-decision front-end is 1.9 dB for this code. It is shown that the soft-decision circuit can also be used as a 2-bit flash-type analog-to-digital converter (ADC) in conjunction with equalization schemes. At a bit rate of 15 Gb/s, using RS(255,239), LDPC(672,336), (672,504), (672,588), and (1440,1344) codes with a 6-tap finite impulse response (FIR) equalizer results in optical power savings of 3, 5, 7, 9.5 and 10.5 dB, respectively. The 2-bit flash ADC consumes only 2.71 W at 32 GSamples/s. At 45 GSamples/s the power consumption is estimated to be 4.95 W.
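
    To make the 2-bit soft-decision idea concrete: a flash converter of this kind partitions the input range into four regions, and each region is mapped to a log-likelihood ratio (LLR) for the LDPC decoder. The thresholds and LLR values below are placeholder assumptions (in practice they would be calibrated to the measured channel), with the BPSK mapping bit 0 -> +1 assumed.

        import numpy as np

        def quantize_2bit(samples, t=0.5):
            """4-level flash quantization: level 0 = strong '1', level 3 =
            strong '0' under the assumed mapping bit 0 -> +1."""
            levels = np.zeros_like(samples, dtype=int)
            levels[samples > -t] = 1
            levels[samples > 0.0] = 2
            levels[samples > t] = 3
            return levels

        # Assumed LLR per level: sign = hard decision, magnitude = confidence.
        LLR_TABLE = np.array([-4.0, -1.5, +1.5, +4.0])

        def soft_decisions(samples):
            return LLR_TABLE[quantize_2bit(samples)]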

  1. Mapping the coupled role of structure and materials in mechanics of platelet-matrix composites

    NASA Astrophysics Data System (ADS)

    Farzanian, Shafee; Shahsavari, Rouzbeh

    2018-03-01

    Despite significant progress in understanding and mimicking the delicate nano/microstructure of biomaterials such as nacre, decoding the indistinguishable merger of materials and structures in controlling the tradeoff in mechanical properties has long been an engineering pursuit. Herein, we focus on an archetype platelet-matrix composite and perform ∼400 nonlinear finite element simulations to decode the complex interplay between various structural features and material characteristics in conferring the balance of mechanical properties. We study various combinatorial models expressed by four key dimensionless parameters, i.e. characteristic platelet length, matrix plasticity, platelet dissimilarity, and overlap offset, whose effects are all condensed in a new unifying parameter, defined as the product of strength, toughness, and stiffness over composite volume. This parameter, which is maximized at a critical characteristic length, controls the transition from intrinsic toughening (driven by matrix plasticity, without crack growth) to extrinsic toughening phenomena involving progressive crack propagation. This finding, combined with various abstract volumetric and radar plots, not only sheds light on decoupling the complex roles of structure and materials in mechanical performance, but also provides important guidelines for designing lightweight staggered platelet-matrix composites while ensuring the best balance of their mechanical properties.

  2. Grasp movement decoding from premotor and parietal cortex.

    PubMed

    Townsend, Benjamin R; Subasi, Erk; Scherberger, Hansjörg

    2011-10-05

    Despite recent advances in harnessing cortical motor-related activity to control computer cursors and robotic devices, the ability to decode and execute different grasping patterns remains a major obstacle. Here we demonstrate a simple Bayesian decoder for real-time classification of grip type and wrist orientation in macaque monkeys that uses higher-order planning signals from anterior intraparietal cortex (AIP) and ventral premotor cortex (area F5). Real-time decoding was based on multiunit signals, which had similar tuning properties to cells in previous single-unit recording studies. Maximum decoding accuracy for two grasp types (power and precision grip) and five wrist orientations was 63% (chance level, 10%). Analysis of decoder performance showed that grip type decoding was highly accurate (90.6%), with most errors occurring during orientation classification. In a subsequent off-line analysis, we found small but significant performance improvements (mean, 6.25 percentage points) when using an optimized spike-sorting method (superparamagnetic clustering). Furthermore, we observed significant differences in the contributions of F5 and AIP for grasp decoding, with F5 being better suited for classification of the grip type and AIP contributing more toward decoding of object orientation. However, optimum decoding performance was maximal when using neural activity simultaneously from both areas. Overall, these results highlight quantitative differences in the functional representation of grasp movements in AIP and F5 and represent a first step toward using these signals for developing functional neural interfaces for hand grasping.

  3. A Low-Complexity and High-Performance 2D Look-Up Table for LDPC Hardware Implementation

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh; Yang, Po-Hui; Lain, Jenn-Kaie; Chung, Tzu-Wen

    In this paper, we propose a low-complexity, high-efficiency two-dimensional look-up table (2D LUT) for carrying out the sum-product algorithm in the decoding of low-density parity-check (LDPC) codes. Instead of employing adders for the core operation when updating check node messages, in the proposed scheme, the main term and correction factor of the core operation are successfully merged into a compact 2D LUT. Simulation results indicate that the proposed 2D LUT not only attains close-to-optimal bit error rate performance but also enjoys a low complexity advantage that is suitable for hardware implementation.
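
    As a sketch of the operation being tabulated (assuming the standard pairwise "box-plus" decomposition of the sum-product check-node update; the paper's exact table contents are not given here): the min-sum main term and the log(1 + e^(-x)) correction can be merged into a single 2D table over the two input magnitudes, with the sign handled separately.

        import numpy as np

        def correction(x):
            """log(1 + e^(-x)): the correction term a hardware LUT would store."""
            return np.log1p(np.exp(-x))

        def box_plus(a, b):
            """Exact pairwise check-node update: min-sum main term + corrections."""
            main = np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))
            return main + correction(np.abs(a + b)) - correction(np.abs(a - b))

        def build_2d_lut(step=0.25, max_llr=8.0):
            """Tabulate the whole magnitude-domain operation in one 2D LUT,
            merging main term and correction (signs are applied separately)."""
            grid = np.arange(0.0, max_llr + step, step)
            A, B = np.meshgrid(grid, grid, indexing="ij")
            return grid, np.minimum(A, B) + correction(A + B) - correction(np.abs(A - B))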

  4. Protecting the proteome: Eukaryotic cotranslational quality control pathways

    PubMed Central

    2014-01-01

    The correct decoding of messenger RNAs (mRNAs) into proteins is an essential cellular task. The translational process is monitored by several quality control (QC) mechanisms that recognize defective translation complexes in which ribosomes are stalled on substrate mRNAs. Stalled translation complexes occur when defects in the mRNA template, the translation machinery, or the nascent polypeptide arrest the ribosome during translation elongation or termination. These QC events promote the disassembly of the stalled translation complex and the recycling and/or degradation of the individual mRNA, ribosomal, and/or nascent polypeptide components, thereby clearing the cell of improper translation products and defective components of the translation machinery. PMID:24535822

  5. An Improved Unscented Kalman Filter Based Decoder for Cortical Brain-Machine Interfaces.

    PubMed

    Li, Simin; Li, Jie; Li, Zheng

    2016-01-01

    Brain-machine interfaces (BMIs) seek to connect brains with machines or computers directly, for application in areas such as prosthesis control. For this application, the accuracy of the decoding of movement intentions is crucial. We aim to improve accuracy by designing a better encoding model of primary motor cortical activity during hand movements and combining this with decoder engineering refinements, resulting in a new unscented Kalman filter based decoder, UKF2, which improves upon our previous unscented Kalman filter decoder, UKF1. The new encoding model includes novel acceleration magnitude, position-velocity interaction, and target-cursor-distance features (the decoder does not require target position as input, it is decoded). We add a novel probabilistic velocity threshold to better determine the user's intent to move. We combine these improvements with several other refinements suggested by others in the field. Data from two Rhesus monkeys indicate that the UKF2 generates offline reconstructions of hand movements (mean CC 0.851) significantly more accurately than the UKF1 (0.833) and the popular position-velocity Kalman filter (0.812). The encoding model of the UKF2 could predict the instantaneous firing rate of neurons (mean CC 0.210), given kinematic variables and past spiking, better than the encoding models of these two decoders (UKF1: 0.138, p-v Kalman: 0.098). In closed-loop experiments where each monkey controlled a computer cursor with each decoder in turn, the UKF2 facilitated faster task completion (mean 1.56 s vs. 2.05 s) and higher Fitts's Law bit rate (mean 0.738 bit/s vs. 0.584 bit/s) than the UKF1. These results suggest that the modeling and decoder engineering refinements of the UKF2 improve decoding performance. We believe they can be used to enhance other decoders as well.
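
    For orientation, the position-velocity Kalman filter that serves as the comparison baseline above follows the textbook predict/update recursion on a kinematic state observed through binned firing rates. The sketch below shows only that generic baseline under assumed matrix shapes; the UKF2's unscented propagation and richer encoding model are not reproduced here.

        import numpy as np

        class PVKalmanDecoder:
            """Kinematic state x = [px, py, vx, vy]; observation y = H x + noise,
            where y is the vector of binned firing rates (an assumed setup)."""

            def __init__(self, A, W, H, Q, x0, P0):
                self.A, self.W, self.H, self.Q = A, W, H, Q   # fit on training data
                self.x, self.P = x0, P0

            def step(self, y):
                # Predict through the kinematic model.
                x_pred = self.A @ self.x
                P_pred = self.A @ self.P @ self.A.T + self.W
                # Update with this time bin's firing rates.
                S = self.H @ P_pred @ self.H.T + self.Q
                K = P_pred @ self.H.T @ np.linalg.inv(S)
                self.x = x_pred + K @ (y - self.H @ x_pred)
                self.P = (np.eye(len(self.x)) - K @ self.H) @ P_pred
                return self.x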

  6. An Improved Unscented Kalman Filter Based Decoder for Cortical Brain-Machine Interfaces

    PubMed Central

    Li, Simin; Li, Jie; Li, Zheng

    2016-01-01

    Brain-machine interfaces (BMIs) seek to connect brains with machines or computers directly, for application in areas such as prosthesis control. For this application, the accuracy of the decoding of movement intentions is crucial. We aim to improve accuracy by designing a better encoding model of primary motor cortical activity during hand movements and combining this with decoder engineering refinements, resulting in a new unscented Kalman filter based decoder, UKF2, which improves upon our previous unscented Kalman filter decoder, UKF1. The new encoding model includes novel acceleration magnitude, position-velocity interaction, and target-cursor-distance features (the decoder does not require target position as input, it is decoded). We add a novel probabilistic velocity threshold to better determine the user's intent to move. We combine these improvements with several other refinements suggested by others in the field. Data from two Rhesus monkeys indicate that the UKF2 generates offline reconstructions of hand movements (mean CC 0.851) significantly more accurately than the UKF1 (0.833) and the popular position-velocity Kalman filter (0.812). The encoding model of the UKF2 could predict the instantaneous firing rate of neurons (mean CC 0.210), given kinematic variables and past spiking, better than the encoding models of these two decoders (UKF1: 0.138, p-v Kalman: 0.098). In closed-loop experiments where each monkey controlled a computer cursor with each decoder in turn, the UKF2 facilitated faster task completion (mean 1.56 s vs. 2.05 s) and higher Fitts's Law bit rate (mean 0.738 bit/s vs. 0.584 bit/s) than the UKF1. These results suggest that the modeling and decoder engineering refinements of the UKF2 improve decoding performance. We believe they can be used to enhance other decoders as well. PMID:28066170

  7. Synthetic bioactive novel ether based Schiff bases and their copper(II) complexes

    NASA Astrophysics Data System (ADS)

    Shabbir, Muhammad; Akhter, Zareen; Ismail, Hammad; Mirza, Bushra

    2017-10-01

    Novel ether-based Schiff bases (HL1–HL4) were synthesized from 5-chloro-2-hydroxy benzaldehyde and primary amines (1-amino-4-phenoxybenzene, 4-(4-aminophenyloxy) biphenyl, 1-(4-aminophenoxy) naphthalene and 2-(4-aminophenoxy) naphthalene). From these Schiff bases, copper(II) complexes (Cu(L1)2–Cu(L4)2) were synthesized and characterized by elemental analysis and spectroscopic (FTIR, NMR) techniques. The synthesized Schiff bases and copper(II) complexes were further assessed in various biological studies. In the brine shrimp assay the copper(II) complexes revealed 4-fold higher activity (LD50 3.8 μg/ml) compared with the simple ligands (LD50 12.4 μg/ml). Similar findings were observed in the potato disc antitumor assay, with higher activities for the copper(II) complexes (IC50 range 20.4-24.1 μg/ml) than the ligands (IC50 range 40.5-48.3 μg/ml). A DPPH assay was performed to determine the antioxidant potential of the compounds. Significant antioxidant activity was shown by the copper(II) complexes, whereas the simple ligands showed no activity. In the DNA protection assay, significant protective behavior was exhibited by the simple ligand molecules, while the copper(II) complexes showed neutral behavior (neither protective nor damaging).

  8. Complexity-Based Measures Inform Effects of Tai Chi Training on Standing Postural Control: Cross-Sectional and Randomized Trial Studies.

    PubMed

    Wayne, Peter M; Gow, Brian J; Costa, Madalena D; Peng, C-K; Lipsitz, Lewis A; Hausdorff, Jeffrey M; Davis, Roger B; Walsh, Jacquelyn N; Lough, Matthew; Novak, Vera; Yeh, Gloria Y; Ahn, Andrew C; Macklin, Eric A; Manor, Brad

    2014-01-01

    Diminished control of standing balance, traditionally indicated by greater postural sway magnitude and speed, is associated with falls in older adults. Tai Chi (TC) is a multisystem intervention that reduces fall risk, yet its impact on sway measures varies considerably. We hypothesized that TC improves the integrated function of multiple control systems influencing balance, quantifiable by the multi-scale "complexity" of postural sway fluctuations. Our aim was to evaluate both traditional and complexity-based measures of sway to characterize the short- and potential long-term effects of TC training on postural control, and the relationships between sway measures and physical function in healthy older adults. We performed a cross-sectional comparison of standing postural sway in healthy TC-naïve and TC-expert (24.5±12 yrs experience) adults. TC-naïve participants then completed a 6-month, two-arm, wait-list randomized clinical trial of TC training. Postural sway was assessed before and after the training during standing on a force-plate with eyes-open (EO) and eyes-closed (EC). Anterior-posterior (AP) and medio-lateral (ML) sway speed, magnitude, and complexity (quantified by multiscale entropy) were calculated. Single-legged standing time and Timed-Up-and-Go tests characterized physical function. At baseline, compared to TC-naïve adults (n = 60, age 64.5±7.5 yrs), TC-experts (n = 27, age 62.8±7.5 yrs) exhibited greater complexity of sway in the AP EC (P = 0.023), ML EO (P<0.001), and ML EC (P<0.001) conditions. Traditional measures of sway speed and magnitude were not significantly lower among TC-experts. Intention-to-treat analyses indicated no significant effects of short-term TC training; however, increases in AP EC and ML EC complexity amongst those randomized to TC were positively correlated with practice hours (P = 0.044, P = 0.018). Long- and short-term TC training were positively associated with physical function. Multiscale entropy offers a complementary approach to traditional COP measures for characterizing sway during quiet standing, and may be more sensitive to the effects of TC in healthy adults. ClinicalTrials.gov NCT01340365.
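
    For readers unfamiliar with the sway metric, multiscale entropy coarse-grains the center-of-pressure series at increasing scales and computes sample entropy at each one. The sketch below is a naive O(N²)-per-scale illustration with the common defaults m = 2 and r = 0.15·SD assumed; it is not the study's implementation.

        import numpy as np

        def sample_entropy(x, m=2, r_frac=0.15):
            """Naive SampEn(m, r): -ln(A/B), where A and B count template
            matches of length m+1 and m (Chebyshev distance, tolerance r)."""
            r = r_frac * np.std(x)

            def match_count(mm):
                templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
                d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
                n = len(templ)
                return (np.sum(d <= r) - n) / 2          # drop self-matches

            B, A = match_count(m), match_count(m + 1)
            return -np.log(A / B) if A > 0 and B > 0 else np.inf

        def multiscale_entropy(x, max_scale=10):
            """SampEn of the coarse-grained series at scales 1..max_scale."""
            out = []
            for tau in range(1, max_scale + 1):
                n = len(x) // tau
                coarse = x[: n * tau].reshape(n, tau).mean(axis=1)
                out.append(sample_entropy(coarse))
            return np.array(out)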

  9. Crystal Structure of Mistletoe Lectin I (ML-I) from Viscum album in Complex with 4-N-Furfurylcytosine at 2.85 Å Resolution.

    PubMed

    Ahmad, Malik Shoaib; Rasheed, Saima; Falke, Sven; Khaliq, Binish; Perbandt, Markus; Choudhary, M Iqbal; Markiewicz, Wojciech T; Barciszewski, Jan; Betzel, Christian

    2018-05-23

    Viscum album (the European mistletoe) is a semi-parasitic plant of high medical interest. It is widely found in Europe, Asia and North America. It contains at least three distinct lectins (i.e. ML-I, II, and III), varying in molecular mass and specificity. Among them, ML-I is the focus of medical research for various activities, including anti-cancer activities. To understand the molecular basis for such medical applications, a few studies have already addressed the structural and functional analysis of ML-I in complex with ligands. In continuation of these efforts, we report the crystal structure of ML from Viscum album in complex with the nucleic acid oxidation product 4-N-furfurylcytosine (FC), refined to 2.85 Å resolution. FC is known to be involved in different metabolic pathways related to oxidative stress and DNA modification. Hexagonal crystals of the ML-I/FC complex suitable for X-ray diffraction were grown within four days at 294 K using the hanging-drop vapor diffusion method. Diffraction data were collected up to a resolution of 2.85 Å. The ligand affinity was verified via in-silico docking. The high-resolution structure was subsequently refined to analyze in particular the active-site conformation and the binding epitope of 4-N-furfurylcytosine. A distinct 2Fo-Fc electron density at the active site was interpreted as a single FC molecule. The specific binding of FC is achieved also through hydrophobic interactions involving Tyr76A, Tyr115A, Glu165A, and Leu157A of the ML-I A-chain. The binding energy of FC at the active site of ML-I was calculated to be -6.03 kcal mol⁻¹. In comparison to other reported ML-I complexes, we observed distinct differences in the vicinity of the nucleic acid base binding site upon interaction with FC. Therefore, the data obtained will provide new insights into understanding the specificity, inhibition and cytotoxicity of the ML-I A-chain and related RIPs. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  10. Pseudoinverse Decoding Process in Delay-Encoded Synthetic Transmit Aperture Imaging.

    PubMed

    Gong, Ping; Kolios, Michael C; Xu, Yuan

    2016-09-01

    Recently, we proposed a new method to improve the signal-to-noise ratio of the prebeamformed radio-frequency data in synthetic transmit aperture (STA) imaging: the delay-encoded STA (DE-STA) imaging. In the decoding process of DE-STA, the equivalent STA data were obtained by directly inverting the coding matrix. This is usually regarded as an ill-posed problem, especially under high noise levels. Pseudoinverse (PI) is usually used instead for seeking a more stable inversion process. In this paper, we apply singular value decomposition to the coding matrix to conduct the PI. Our numerical studies demonstrate that the singular values of the coding matrix have a special distribution, i.e., all the values are the same except for the first and last ones. We compare the PI in two cases: complete PI (CPI), where all the singular values are kept, and truncated PI (TPI), where the last and smallest singular value is ignored. The PI (both CPI and TPI) DE-STA processes are tested against noise with both numerical simulations and experiments. The CPI and TPI can restore the signals stably, and the noise mainly affects the prebeamformed signals corresponding to the first transmit channel. The difference in the overall enveloped beamformed image qualities between the CPI and TPI is negligible. Thus, it demonstrates that DE-STA is a relatively stable encoding and decoding technique. Also, according to the special distribution of the singular values of the coding matrix, we propose a new efficient decoding formula that is based on the conjugate transpose of the coding matrix. We also compare the computational complexity of the direct inverse and the new formula.
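
    The CPI/TPI comparison above reduces to how many singular values are kept when inverting the coding matrix. A minimal numpy sketch (array shapes are illustrative assumptions) follows.

        import numpy as np

        def decode_de_sta(H, Y, truncate=True):
            """Recover STA-equivalent data X from delay-encoded data Y = H @ X.

            H: (N, N) coding matrix; Y: (N, M) prebeamformed RF data.
            truncate=False keeps every singular value (complete PI, CPI);
            truncate=True drops the smallest one (truncated PI, TPI)."""
            U, s, Vt = np.linalg.svd(H)
            k = len(s) - 1 if truncate else len(s)
            s_inv = np.zeros_like(s)
            s_inv[:k] = 1.0 / s[:k]
            return (Vt.T * s_inv) @ (U.T @ Y)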

  11. In-vitro activity of taurolidine on single species and a multispecies population associated with periodontitis.

    PubMed

    Zollinger, Lilly; Schnyder, Simone; Nietzsche, Sandor; Sculean, Anton; Eick, Sigrun

    2015-04-01

    The antimicrobial activity of taurolidine was compared with minocycline against microbial species associated with periodontitis (four single strains and a 12-species mixture). Minimal inhibitory concentrations (MICs) and minimal bactericidal concentrations (MBCs), killing, as well as activities on established and forming single-species biofilms and a 12-species biofilm were determined. The MICs of taurolidine against single species were always 0.31 mg/ml, and the MBCs were 0.64 mg/ml. The mixed microbiota used was less sensitive to taurolidine; both the MIC and the MBC were 2.5 mg/ml. The strains and the mixture were completely killed by 2.5 mg/ml taurolidine, whereas 256 μg/ml minocycline reduced the bacterial counts of the mixture by 5 log10 colony forming units (cfu). Coating the surface with 10 mg/ml taurolidine or 256 μg/ml minocycline completely prevented biofilm formation by Porphyromonas gingivalis ATCC 33277 but not by Aggregatibacter actinomycetemcomitans Y4 or the mixture. On 4.5-day-old biofilms, taurolidine acted in a concentration-dependent manner, with a reduction of 5 log10 cfu (P. gingivalis ATCC 33277) and 7 log10 cfu (A. actinomycetemcomitans Y4) when applying 10 mg/ml. Minocycline decreased the cfu counts by 1-2 log10 cfu regardless of the concentration used. The reduction of the cfu counts in the 4.5-day-old multi-species biofilms was about 3 log10 cfu after application of any minocycline concentration and after using 10 mg/ml taurolidine. Taurolidine is active against species associated with periodontitis, even within biofilms. Nevertheless, a complete elimination of complex biofilms by taurolidine seems to be impossible, which underlines the importance of mechanical removal of biofilms prior to the application of taurolidine. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Reed-Solomon decoder

    NASA Technical Reports Server (NTRS)

    Lahmeyer, Charles R. (Inventor)

    1987-01-01

    A Reed-Solomon decoder with dedicated hardware for five sequential algorithms was designed with overall pipelining by memory swapping between input, processing and output memories, and internal pipelining through the five algorithms. The code definition used in decoding is specified by a keyword received with each block of data so that a number of different code formats may be decoded by the same hardware.

  13. A study of digital holographic filters generation. Phase 2: Digital data communication system, volume 1

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Mo, C. D.

    1978-01-01

    An empirical study of the performance of Viterbi decoders in bursty channels was carried out, and an improved algebraic decoder for nonsystematic codes was developed. The hybrid algorithm was simulated for the (2,1), K = 7 code on a computer using 20 channels having various error statistics, ranging from purely random errors to purely bursty channels. The hybrid system outperformed both the algebraic and the Viterbi decoders in every case, except the 1% random-error channel, where the Viterbi decoder made one fewer bit error.

  14. Large-Constraint-Length, Fast Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Collins, O.; Dolinar, S.; Hsu, In-Shek; Pollara, F.; Olson, E.; Statman, J.; Zimmerman, G.

    1990-01-01

    Scheme for efficient interconnection makes VLSI design feasible. Concept for fast Viterbi decoder provides for processing of convolutional codes of constraint length K up to 15 and rates of 1/2 to 1/6. Fully parallel (but bit-serial) architecture developed for decoder of K = 7 implemented in single dedicated VLSI circuit chip. Contains six major functional blocks. VLSI circuits perform branch metric computations, add-compare-select operations, and then store decisions in traceback memory. Traceback processor reads appropriate memory locations and puts out decoded bits. Used as building block for decoders of larger K.

  15. Complex sparse spatial filter for decoding mixed frequency and phase coded steady-state visually evoked potentials.

    PubMed

    Morikawa, Naoki; Tanaka, Toshihisa; Islam, Md Rabiul

    2018-07-01

    Mixed frequency and phase coding (FPC) can achieve a significant increase in the number of commands in a steady-state visual evoked potential-based brain-computer interface (SSVEP-BCI). However, the inconsistent phases of the SSVEP over channels in a trial and the existence of non-contributing channels due to noise effects can decrease accurate detection of the stimulus frequency. We propose a novel command detection method based on a complex sparse spatial filter (CSSF) obtained by solving ℓ1- and ℓ2,1-regularization problems for a mixed-coded SSVEP-BCI. In particular, ℓ2,1-regularization (aka group sparsification) can lead to the rejection of electrodes that are not contributing to the SSVEP detection. A calibration-data-based canonical correlation analysis (CCA) and the CSSF with ℓ1- and ℓ2,1-regularization were demonstrated for 16-target stimuli with eleven subjects. The results of statistical testing suggest that the proposed method with ℓ1- and ℓ2,1-regularization achieved the significantly highest ITR. The proposed approaches do not need any reference signals, automatically select prominent channels, and reduce the computational cost compared to other mixed frequency-phase coding (FPC)-based BCIs. The experimental results suggest that the proposed method can be used to implement a BCI effectively with reduced visual fatigue. Copyright © 2018 Elsevier B.V. All rights reserved.
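
    The electrode-rejection effect of the ℓ2,1 penalty comes from its proximal operator, which shrinks whole rows of the spatial-filter matrix at once and zeroes the weak ones. A minimal sketch, assuming rows index channels, is shown below.

        import numpy as np

        def prox_l21(W, tau):
            """Proximal operator of tau * sum_c ||W[c, :]||_2 (row-wise group
            soft-thresholding). Rows whose l2 norm falls below tau are zeroed,
            which is how non-contributing channels get rejected."""
            norms = np.linalg.norm(W, axis=1, keepdims=True)
            scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
            return W * scale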

  16. Locating and decoding barcodes in fuzzy images captured by smart phones

    NASA Astrophysics Data System (ADS)

    Deng, Wupeng; Hu, Jiwei; Liu, Quan; Lou, Ping

    2017-07-01

    With the development of barcodes for commercial use, the demand for detecting barcodes with smart phones has become increasingly pressing. The low quality of barcode images captured by mobile phones often degrades the decoding and recognition rates. This paper focuses on locating and decoding EAN-13 barcodes in fuzzy images. We present a more accurate locating algorithm based on segment length and a highly fault-tolerant algorithm for decoding barcodes. Unlike existing approaches, the location algorithm is based on the edge segment length of EAN-13 barcodes, while our decoding algorithm allows the appearance of fuzzy regions in the barcode image. Experiments were performed on damaged, contaminated and scratched digital images, and provide quite promising results for EAN-13 barcode location and decoding.

  17. Data Networks Reliability

    DTIC Science & Technology

    1988-10-03

    ...full achievable region is achievable if there is only a bounded degree of asynchronism. E. Arikan, in a Ph.D. thesis [Ari85], extended sequential... real co-operation is required to reduce the number of transmissions to O(log log N). REFERENCES: [Ari85] E. Arikan, "Sequential Decoding for Multiple...

  18. Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More

    NASA Technical Reports Server (NTRS)

    Kou, Yu; Lin, Shu; Fossorier, Marc

    1999-01-01

    Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported, and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high-rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit-flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
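
    As a reminder of how the hard-decision option works, Gallager's bit-flipping decoder repeatedly flips the bits that participate in the most unsatisfied parity checks. The dense-matrix sketch below is illustrative only; a practical decoder would exploit the sparsity of H.

        import numpy as np

        def bit_flip_decode(H, y, max_iters=50):
            """Gallager bit flipping. H: (m, n) parity-check matrix over GF(2)
            as a 0/1 int array; y: length-n vector of hard decisions (0/1)."""
            x = y.copy()
            for _ in range(max_iters):
                syndrome = H @ x % 2
                if not syndrome.any():
                    return x, True                   # every check satisfied
                counts = H.T @ syndrome              # unsatisfied checks per bit
                x[counts == counts.max()] ^= 1       # flip the worst offenders
            return x, False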

  19. Using an Artificial Neural Bypass to Restore Cortical Control of Rhythmic Movements in a Human with Quadriplegia

    NASA Astrophysics Data System (ADS)

    Sharma, Gaurav; Friedenberg, David A.; Annetta, Nicholas; Glenn, Bradley; Bockbrader, Marcie; Majstorovic, Connor; Domas, Stephanie; Mysiw, W. Jerry; Rezai, Ali; Bouton, Chad

    2016-09-01

    Neuroprosthetic technology has been used to restore cortical control of discrete (non-rhythmic) hand movements in a paralyzed person. However, cortical control of rhythmic movements, which originate in the brain but are coordinated by Central Pattern Generator (CPG) neural networks in the spinal cord, has not been demonstrated previously. Here we demonstrate an artificial neural bypass technology that decodes cortical activity and emulates spinal cord CPG function, allowing volitional rhythmic hand movement. The technology uses a combination of signals recorded from the brain, machine-learning algorithms to decode the signals, a numerical model of the CPG network, and a neuromuscular electrical stimulation system to evoke rhythmic movements. Using the neural bypass, a quadriplegic participant was able to initiate, sustain, and switch between rhythmic and discrete finger movements, using his thoughts alone. These results have implications for advancing neuroprosthetic technology to restore complex movements in people living with paralysis.

  20. Frame Synchronization Without Attached Sync Markers

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2011-01-01

    We describe a method to synchronize codeword frames without making use of attached synchronization markers (ASMs). Instead, the synchronizer identifies the code structure present in the received symbols by operating the decoder for a handful of iterations at each possible symbol offset and forming an appropriate metric. This method is computationally more complex and does not perform as well as frame synchronizers that utilize an ASM; nevertheless, the new synchronizer acquires frame synchronization in about two seconds when using a 600 kbps software decoder, and would take about 15 milliseconds on prototype hardware. It also eliminates the need for the ASMs, which is an attractive feature for short uplink codes whose coding gain would be diminished by the overhead of ASM bits. The lack of ASMs also would simplify clock distribution for the AR4JA low-density parity-check (LDPC) codes and adds a small amount to the coding gain as well (up to 0.2 dB).
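
    A toy version of the search loop, with the few-iteration decoder metric replaced by a cheaper stand-in (the fraction of parity checks satisfied by a hard-decision window), conveys the structure; the offset sweep is the same as described above.

        import numpy as np

        def find_frame_offset(H, bits, frame_len):
            """Slide a frame-length window over the hard decisions and score
            each candidate offset by parity-check agreement (a stand-in for
            running the iterative decoder briefly and forming its metric)."""
            best_offset, best_score = 0, -1.0
            for off in range(frame_len):
                window = bits[off:off + frame_len]
                if len(window) < frame_len:
                    break
                score = 1.0 - np.mean(H @ window % 2)   # 1.0 = all checks pass
                if score > best_score:
                    best_offset, best_score = off, score
            return best_offset, best_score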

  1. CBL-CIPK network for calcium signaling in higher plants

    NASA Astrophysics Data System (ADS)

    Luan, Sheng

    Plants sense their environment by signaling mechanisms involving calcium. Calcium signals are encoded by a complex set of parameters and decoded by a large number of proteins including the more recently discovered CBL-CIPK network. The calcium-binding CBL proteins specifically interact with a family of protein kinases CIPKs and regulate the activity and subcellular localization of these kinases, leading to the modification of kinase substrates. This represents a paradigm shift as compared to a calcium signaling mechanism from yeast and animals. One example of CBL-CIPK signaling pathways is the low-potassium response of Arabidopsis roots. When grown in low-K medium, plants develop stronger K-uptake capacity adapting to the low-K condition. Recent studies show that the increased K-uptake is caused by activation of a specific K-channel by the CBL-CIPK network. A working model for this regulatory pathway will be discussed in the context of calcium coding and decoding processes.

  2. A flood map based DOI decoding method for block detector: a GATE simulation study.

    PubMed

    Shi, Han; Du, Dong; Su, Zhihong; Peng, Qiyu

    2014-01-01

    Positron Emission Tomography (PET) systems using detectors with Depth of Interaction (DOI) capabilities can achieve higher spatial resolution and better image quality than those without DOI. To date, however, most DOI methods developed are not cost-efficient for a whole-body PET system. In this paper, we present a DOI decoding method based on the flood map for a low-cost conventional block detector with four-PMT readout. Using this method, the DOI information can be directly extracted from the DOI-related crystal spot deformation in the flood map. GATE simulations are then carried out to validate the method, confirming a DOI sorting accuracy of 85.27%. Therefore, we conclude that this method has the potential to be applied in conventional detectors to achieve a reasonable DOI measurement without dramatically increasing their complexity or the cost of an entire PET system.

  3. 3D scene reconstruction based on multi-view distributed video coding in the Zernike domain for mobile applications

    NASA Astrophysics Data System (ADS)

    Palma, V.; Carli, M.; Neri, A.

    2011-02-01

    In this paper a Multi-view Distributed Video Coding scheme for mobile applications is presented. Specifically, a new fusion technique between temporal and spatial side information in the Zernike moment domain is proposed. Distributed video coding introduces a flexible architecture that enables the design of very low-complexity video encoders compared to their traditional counterparts. The main goal of our work is to generate at the decoder the side information that optimally blends temporal and interview data. Multi-view distributed coding performance strongly depends on the quality of the side information built at the decoder. To improve this quality, a spatial view compensation/prediction in the Zernike moment domain is applied. Spatial and temporal motion activity have been fused together to obtain the overall side information. The proposed method has been evaluated by rate-distortion performance for different inter-view and temporal estimation quality conditions.

  4. Direct Position Determination of Unknown Signals in the Presence of Multipath Propagation

    PubMed Central

    Yu, Hongyi

    2018-01-01

    A novel geolocation architecture, termed “Multiple Transponders and Multiple Receivers for Multiple Emitters Positioning System (MTRE)”, is proposed in this paper. Existing Direct Position Determination (DPD) methods take advantage of a rather simple channel assumption (line-of-sight channels with complex path attenuations) and a simplified MUltiple SIgnal Classification (MUSIC) algorithm cost function to avoid high-dimensional searching. We point out that the simplified assumption and cost function reduce the positioning accuracy because of the singularity of the array manifold in a multi-path environment. We present a DPD model for unknown signals in the presence of Multi-path Propagation (MP-DPD) in this paper. MP-DPD adds non-negative real path attenuation constraints to avoid the errors caused by the singularity of the array manifold. The Multi-path Propagation MUSIC (MP-MUSIC) method and the Active Set Algorithm (ASA) are designed to reduce the dimension of the search. A Multi-path Propagation Maximum Likelihood (MP-ML) method is proposed in addition, to overcome the limitation of MP-MUSIC for time-sensitive applications. An iterative algorithm and an initial-value-setting approach are given to make the MP-ML time consumption acceptable. Numerical results validate the performance improvements of MP-MUSIC and MP-ML. A closed form of the Cramér–Rao Lower Bound (CRLB) is derived as a benchmark to evaluate the performances of MP-MUSIC and MP-ML. PMID:29562601

  5. Direct Position Determination of Unknown Signals in the Presence of Multipath Propagation.

    PubMed

    Du, Jianping; Wang, Ding; Yu, Wanting; Yu, Hongyi

    2018-03-17

    A novel geolocation architecture, termed "Multiple Transponders and Multiple Receivers for Multiple Emitters Positioning System (MTRE)", is proposed in this paper. Existing Direct Position Determination (DPD) methods take advantage of a rather simple channel assumption (line-of-sight channels with complex path attenuations) and a simplified MUltiple SIgnal Classification (MUSIC) algorithm cost function to avoid high-dimensional searching. We point out that the simplified assumption and cost function reduce the positioning accuracy because of the singularity of the array manifold in a multi-path environment. We present a DPD model for unknown signals in the presence of Multi-path Propagation (MP-DPD) in this paper. MP-DPD adds non-negative real path attenuation constraints to avoid the errors caused by the singularity of the array manifold. The Multi-path Propagation MUSIC (MP-MUSIC) method and the Active Set Algorithm (ASA) are designed to reduce the dimension of the search. A Multi-path Propagation Maximum Likelihood (MP-ML) method is proposed in addition, to overcome the limitation of MP-MUSIC for time-sensitive applications. An iterative algorithm and an initial-value-setting approach are given to make the MP-ML time consumption acceptable. Numerical results validate the performance improvements of MP-MUSIC and MP-ML. A closed form of the Cramér-Rao Lower Bound (CRLB) is derived as a benchmark to evaluate the performances of MP-MUSIC and MP-ML.
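
    For readers who want the flavor of the MUSIC cost function referenced above: candidate steering vectors are scored by how little they project onto the noise subspace of the sample covariance. The sketch below is the classical narrowband direction-finding analogue for a half-wavelength uniform linear array, not the paper's multipath-constrained DPD estimator.

        import numpy as np

        def music_spectrum(X, n_sources, n_grid=360):
            """X: (n_antennas, n_snapshots) complex snapshots. Returns the
            MUSIC pseudospectrum over a grid of arrival angles."""
            n = X.shape[0]
            R = X @ X.conj().T / X.shape[1]        # sample covariance
            w, V = np.linalg.eigh(R)               # eigenvalues ascending
            En = V[:, : n - n_sources]             # noise subspace
            thetas = np.linspace(-np.pi / 2, np.pi / 2, n_grid)
            spec = np.empty(n_grid)
            for i, th in enumerate(thetas):
                a = np.exp(1j * np.pi * np.arange(n) * np.sin(th))
                spec[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
            return thetas, spec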

  6. Validity of the two-level model for Viterbi decoder gap-cycle performance

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Arnold, S.

    1990-01-01

    A two-level model has previously been proposed for approximating the performance of a Viterbi decoder which encounters data received with periodically varying signal-to-noise ratio. Such cyclically gapped data is obtained from the Very Large Array (VLA), either operating as a stand-alone system or arrayed with Goldstone. This approximate model predicts that the decoder error rate will vary periodically between two discrete levels with the same period as the gap cycle. It further predicts that the length of the gapped portion of the decoder error cycle for a constraint length K decoder will be about K-1 bits shorter than the actual duration of the gap. The two-level model for Viterbi decoder performance with gapped data is subjected to detailed validation tests. Curves showing the cyclical behavior of the decoder error burst statistics are compared with the simple square-wave cycles predicted by the model. The validity of the model depends on a parameter often considered irrelevant in the analysis of Viterbi decoder performance, the overall scaling of the received signal or the decoder's branch-metrics. Three scaling alternatives are examined: optimum branch-metric scaling and constant branch-metric scaling combined with either constant noise-level scaling or constant signal-level scaling. The simulated decoder error cycle curves roughly verify the accuracy of the two-level model for both the case of optimum branch-metric scaling and the case of constant branch-metric scaling combined with constant noise-level scaling. However, the model is not accurate for the case of constant branch-metric scaling combined with constant signal-level scaling.

  7. Decoding continuous three-dimensional hand trajectories from epidural electrocorticographic signals in Japanese macaques

    NASA Astrophysics Data System (ADS)

    Shimoda, Kentaro; Nagasaka, Yasuo; Chao, Zenas C.; Fujii, Naotaka

    2012-06-01

    Brain-machine interface (BMI) technology captures brain signals to enable control of prosthetic or communication devices with the goal of assisting patients who have limited or no ability to perform voluntary movements. Decoding of inherent information in brain signals to interpret the user's intention is one of the main approaches for developing BMI technology. Subdural electrocorticography (sECoG)-based decoding provides good accuracy, but surgical complications are one of the major concerns for this approach to be applied in BMIs. In contrast, epidural electrocorticography (eECoG) is less invasive, thus it is theoretically more suitable for long-term implementation, although it is unclear whether eECoG signals carry sufficient information for decoding natural movements. We successfully decoded continuous three-dimensional hand trajectories from eECoG signals in Japanese macaques. A steady quantity of information about continuous hand movements could be acquired from the decoding system for at least several months, and a decoding model could be used for ~10 days without significant degradation in accuracy or recalibration. The correlation coefficients between observed and predicted trajectories were lower than those for sECoG-based decoding experiments we previously reported, owing to a greater degree of chewing artifacts in eECoG-based decoding than is found in sECoG-based decoding. As one of the safest invasive recording methods available, eECoG provides an acceptable level of performance. With the ease of replacement and upgrades, eECoG systems could become the first-choice interface for real-life BMI applications.

  8. Natural Indices for the Chemical Hardness/Softness of Metal Cations and Ligands

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Huifang; Xu, David C.; Wang, Yifeng

    Quantitative understanding of reactivity and stability for a chemical species is fundamental to chemistry. The concept has undergone many changes and additions throughout the history of chemistry, stemming from ideas such as Lewis acids and bases. For a given complexing ligand (Lewis base) and a group of isovalent metal cations (Lewis acids), the stability constants of metal–ligand (ML) complexes can simply be correlated to the known properties of the metal ions [ionic radii (r_M^n+), Gibbs free energy of formation (ΔG°_f,M^n+), and solvation energy (ΔG°_s,M^n+)] by

    2.303RT log K_ML = α*_ML ΔG°_f,M^n+ - β*_ML r_M^n+ + γ*_ML ΔG°_s,M^n+ - δ*_ML,

    where the coefficients (α*_ML, β*_ML, γ*_ML, and intercept δ*_ML) are determined by fitting the equation to the existing experimental data. Coefficients β*_ML and γ*_ML have the same sign and are in a linear relationship through the origin. Gibbs free energies of formation of cations (ΔG°_f,M^n+) are found to be natural indices for the softness or hardness of metal cations, with positive values corresponding to soft acids and negative values to hard acids. The coefficient α*_ML is an index for the softness or hardness of a complexing ligand. The proton (H+), with a softness index of zero, is a unique acid that has strong interactions with both soft and hard bases. The stability energy resulting from the acid–base interactions is determined by the term α*_ML ΔG°_f,M^n+; a positive product of α*_ML and ΔG°_f,M^n+ indicates that the acid–base interaction between the metal cation and the complexing ligand stabilizes the complex. The terms β*_ML r_M^n+ and γ*_ML ΔG°_s,M^n+, which are related to the ionic radii of the metal cations, represent the steric and solvation effects of the cations. The new softness indices proposed here will help to understand the interactions of ligands (Lewis bases) with metal cations (Lewis acids) and provide guidelines for engineering materials with desired chemical reactivity and selectivity. As a result, the new correlation can also enhance our ability to predict the speciation, mobility, and toxicity of heavy metals in earth environments and biological systems.
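
    Given a table of measured log K_ML values and the three cation properties, the four coefficients of the correlation above can be recovered by ordinary least squares. The sketch below assumes plain numpy arrays in kJ/mol and Å; the variable names are illustrative, not from the paper.

        import numpy as np

        def fit_ligand_coefficients(logK, dG_f, r, dG_s, T=298.15):
            """Least-squares fit of (alpha*, beta*, gamma*, delta*) in
            2.303*R*T*logK = alpha*dG_f - beta*r + gamma*dG_s - delta."""
            R = 8.314e-3                       # kJ mol^-1 K^-1
            y = 2.303 * R * T * logK
            A = np.column_stack([dG_f, -r, dG_s, -np.ones_like(logK)])
            coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
            return coeffs                      # [alpha*, beta*, gamma*, delta*]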

  9. Natural Indices for the Chemical Hardness/Softness of Metal Cations and Ligands

    DOE PAGES

    Xu, Huifang; Xu, David C.; Wang, Yifeng

    2017-10-26

    Quantitative understanding of reactivity and stability for a chemical species is fundamental to chemistry. The concept has undergone many changes and additions throughout the history of chemistry, stemming from ideas such as Lewis acids and bases. For a given complexing ligand (Lewis base) and a group of isovalent metal cations (Lewis acids), the stability constants of metal–ligand (ML) complexes can simply be correlated to the known properties of the metal ions [ionic radii (r_M^n+), Gibbs free energy of formation (ΔG°_f,M^n+), and solvation energy (ΔG°_s,M^n+)] by

    2.303RT log K_ML = α*_ML ΔG°_f,M^n+ - β*_ML r_M^n+ + γ*_ML ΔG°_s,M^n+ - δ*_ML,

    where the coefficients (α*_ML, β*_ML, γ*_ML, and intercept δ*_ML) are determined by fitting the equation to the existing experimental data. Coefficients β*_ML and γ*_ML have the same sign and are in a linear relationship through the origin. Gibbs free energies of formation of cations (ΔG°_f,M^n+) are found to be natural indices for the softness or hardness of metal cations, with positive values corresponding to soft acids and negative values to hard acids. The coefficient α*_ML is an index for the softness or hardness of a complexing ligand. The proton (H+), with a softness index of zero, is a unique acid that has strong interactions with both soft and hard bases. The stability energy resulting from the acid–base interactions is determined by the term α*_ML ΔG°_f,M^n+; a positive product of α*_ML and ΔG°_f,M^n+ indicates that the acid–base interaction between the metal cation and the complexing ligand stabilizes the complex. The terms β*_ML r_M^n+ and γ*_ML ΔG°_s,M^n+, which are related to the ionic radii of the metal cations, represent the steric and solvation effects of the cations. The new softness indices proposed here will help to understand the interactions of ligands (Lewis bases) with metal cations (Lewis acids) and provide guidelines for engineering materials with desired chemical reactivity and selectivity. As a result, the new correlation can also enhance our ability to predict the speciation, mobility, and toxicity of heavy metals in earth environments and biological systems.

  10. A mask manufacturer's perspective on maskless lithography

    NASA Astrophysics Data System (ADS)

    Buck, Peter; Biechler, Charles; Kalk, Franklin

    2005-11-01

    Maskless Lithography (ML2) is again being considered for use in mainstream CMOS IC manufacturing. Sessions at technical conferences are being devoted to ML2. A multitude of new companies have been formed in the last several years to apply new concepts to breaking the throughput barrier that has in the past prevented ML2 from achieving the cost and cycle time performance necessary to become economically viable, except in rare cases. Has the time for Maskless Lithography (we used to call it "Direct Write Lithography") really come? If so, what is the expected impact on the mask manufacturer, and does it matter? The lithography tools used today in mask manufacturing are similar in concept to ML2 except for scale, both in throughput and feature size. These mask tools produce highly accurate lithographic images directly from electronic pattern files, perform multi-layer overlay, and mix-n-match across multiple tools, tool types and sites. Mask manufacturers are already accustomed to the ultimate low volume - one substrate per design layer. In order to achieve the economically required throughput, proposed ML2 systems eliminate or greatly reduce some of the functions that are the source of the mask writer's accuracy. Can these ML2 systems meet the demanding lithographic requirements without these functions? ML2 may eliminate the reticle, but many of the processes and procedures performed today by the mask manufacturer are still required. Examples include the increasingly complex mask data preparation step and the verification performed to ensure that the pattern on the reticle accurately represents the design intent. The error sources that are fixed on a reticle are variable with time on an ML2 system. It has been proposed that if ML2 is successful it will become uneconomical to be in the mask business - that ML2, by taking the high-profit masks, will take all profitability out of mask manufacturing and thereby endanger the entire semiconductor industry. Others suggest that a successful ML2 system solves the mask cost issue and thereby reduces the need for and attractiveness of ML2. Are these concerns valid? In this paper we will present a perspective on maskless lithography from the considerable "direct write" experience of a mask manufacturer. We will examine the various business models proposed for ML2 insertion as well as the key technical challenges to achieving simultaneously the throughput and the lithographic quality necessary to become economically viable. We will consider the question of the economic viability of the mask industry in a post-ML2 world and will propose possible models where the mask industry can meaningfully participate.

  11. New spectrophotometric methods for the determinations of hydrogen sulfide present in the samples of lake water, industrial effluents, tender coconut, sugarcane juice and egg.

    PubMed

    Shyla, B; Nagendrappa, G

    2012-10-01

    The new methods work on the principle that iron(III) is reduced to iron(II) by hydrogen sulfide, catechol and p-toluidine (system 1) or by hydrogen sulfide alone (system 2), in acidic medium, followed by the reduced iron forming a complex with 1,10-phenanthroline with λmax 510 nm. The other two methods are based on redox reactions between electrolytically generated manganese(III) sulfate, taken in excess, and hydrogen sulfide, followed by the unreacted oxidant oxidizing diphenylamine (λmax 570 nm, system 3) or barium diphenylamine sulphonate (λmax 540 nm, system 4). The increase/decrease in the color intensity of the dye products of systems 1 and 2 or 3 and 4 is proportional to the concentration of hydrogen sulfide, with a quantification range of 0.035-1.40 μg ml⁻¹ / 0.14-1.40 μg ml⁻¹. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. Recent advances in coding theory for near error-free communications

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.

    1991-01-01

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.

  13. Fast transform decoding of nonsystematic Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Cheung, K.-M.; Reed, I. S.; Shiozaki, A.

    1989-01-01

    A Reed-Solomon (RS) code is considered to be a special case of a redundant residue polynomial (RRP) code, and a fast transform decoding algorithm to correct both errors and erasures is presented. This decoding scheme is an improvement of the decoding algorithm for the RRP code suggested by Shiozaki and Nishida, and can be realized readily on very large scale integration chips.

  14. Polar Coding with CRC-Aided List Decoding

    DTIC Science & Technology

    2015-08-01

    TECHNICAL REPORT 2087, August 2015. Polar Coding with CRC-Aided List Decoding. David Wasserman. Approved... list decoding. RESULTS: Our simulation results show that polar coding can produce results very similar to the FEC used in the Digital Video... standard. RECOMMENDATIONS: In any application for which the DVB-S2 FEC is considered, polar coding with CRC-aided list decoding with N = 65536...

  15. A Mutation in the PP2C Phosphatase Gene in a Staphylococcus aureus USA300 Clinical Isolate with Reduced Susceptibility to Vancomycin and Daptomycin

    PubMed Central

    Passalacqua, Karla D.; Satola, Sarah W.; Crispell, Emily K.

    2012-01-01

    Methicillin-resistant Staphylococcus aureus (MRSA) strains with reduced susceptibility to vancomycin (MIC of 4 to 8 μg/ml) are referred to as vancomycin-intermediate S. aureus (VISA). In this study, we characterized two isogenic USA300 S. aureus isolates collected sequentially from a single patient with endocarditis where the S. aureus isolate changed from being susceptible to vancomycin (VSSA) (1 μg/ml) to VISA (8 μg/ml). In addition, the VISA isolate lost beta-lactamase activity and showed increased resistance to daptomycin and linezolid. The two strains did not differ in growth rate, but the VISA isolate had a thickened cell wall and was less autolytic. Transcriptome sequencing (RNA-seq) analysis comparing the two isolates grown to late exponential phase showed significant differences in transcription of cell surface protein genes (spa, SBI [second immunoglobulin-binding protein of S. aureus], and fibrinogen-binding proteins), regulatory genes (agrBCA, RNAIII, sarT, and saeRS), and others. Using whole-genome shotgun resequencing, we identified 6 insertion/deletion mutations between the VSSA and VISA isolates. A protein phosphatase 2C (PP2C) family phosphatase had a 6-bp (nonframeshift) insertion mutation in a highly conserved metal binding domain. Complementation of the clinical VISA isolate with a wild-type copy of the PP2C gene reduced the vancomycin and daptomycin MICs and increased autolytic activity, suggesting that this gene contributed to the reduced vancomycin susceptibility phenotype acquired in vivo. Creation of de novo mutants from the VSSA strain resulted in different mutations, demonstrating that reduced susceptibility to vancomycin in USA300 strains can occur via multiple routes, highlighting the complex nature of the VISA phenotype. PMID:22850507

  16. Decoding position, velocity, or goal: does it matter for brain-machine interfaces?

    PubMed

    Marathe, A R; Taylor, D M

    2011-04-01

    Arm end-point position, end-point velocity, and the intended final location or 'goal' of a reach have all been decoded from cortical signals for use in brain-machine interface (BMI) applications. These different aspects of arm movement can be decoded from the brain and used directly to control the position, velocity, or movement goal of a device. However, these decoded parameters can also be remapped to control different aspects of movement, such as using the decoded position of the hand to control the velocity of a device. People easily learn to use the position of a joystick to control the velocity of an object in a videogame. Similarly, in BMI systems, the position, velocity, or goal of a movement could be decoded from the brain and remapped to control some other aspect of device movement. This study evaluates how easily people make transformations between position, velocity, and reach goal in BMI systems. It also evaluates how different amounts of decoding error impact on device control with and without these transformations. Results suggest some remapping options can significantly improve BMI control. This study provides guidance on what remapping options to use when various amounts of decoding error are present.
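
    As a concrete instance of one remapping discussed above, a decoded hand position can drive device velocity joystick-style; the dead zone below is an assumed detail (not from the study) that keeps decoding noise near the rest position from drifting the device.

        import numpy as np

        def position_to_velocity(decoded_pos, center, gain=1.0, dead_zone=0.1):
            """Joystick-style remap: displacement from a center point becomes
            a velocity command; small displacements are ignored."""
            offset = decoded_pos - center
            mag = np.linalg.norm(offset)
            if mag < dead_zone:
                return np.zeros_like(offset)
            return gain * (mag - dead_zone) * offset / mag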

  17. Encoder-Decoder Optimization for Brain-Computer Interfaces

    PubMed Central

    Merel, Josh; Pianto, Donald M.; Cunningham, John P.; Paninski, Liam

    2015-01-01

    Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages. PMID:26029919

  18. Encoder-decoder optimization for brain-computer interfaces.

    PubMed

    Merel, Josh; Pianto, Donald M; Cunningham, John P; Paninski, Liam

    2015-06-01

    Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages.

  19. Decoding position, velocity, or goal: Does it matter for brain-machine interfaces?

    NASA Astrophysics Data System (ADS)

    Marathe, A. R.; Taylor, D. M.

    2011-04-01

    Arm end-point position, end-point velocity, and the intended final location or 'goal' of a reach have all been decoded from cortical signals for use in brain-machine interface (BMI) applications. These different aspects of arm movement can be decoded from the brain and used directly to control the position, velocity, or movement goal of a device. However, these decoded parameters can also be remapped to control different aspects of movement, such as using the decoded position of the hand to control the velocity of a device. People easily learn to use the position of a joystick to control the velocity of an object in a videogame. Similarly, in BMI systems, the position, velocity, or goal of a movement could be decoded from the brain and remapped to control some other aspect of device movement. This study evaluates how easily people make transformations between position, velocity, and reach goal in BMI systems. It also evaluates how different amounts of decoding error impact device control with and without these transformations. Results suggest that some remapping options can significantly improve BMI control. This study provides guidance on which remapping options to use when various amounts of decoding error are present.
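
    As a toy illustration of remapping (hypothetical parameters, not the study's task), the sketch below compares feeding a noisy decoded position directly to a device against remapping it as a joystick-style velocity command; the integration inherent in velocity control low-pass filters the decoding error.

    ```python
    # Hypothetical 1-D remapping demo: decoded position -> device velocity.
    import numpy as np

    rng = np.random.default_rng(1)
    T, dt, gain = 500, 0.02, 5.0
    true_pos = np.sin(np.linspace(0, 2 * np.pi, T))        # intended path
    decoded = true_pos + 0.3 * rng.standard_normal(T)      # decoding error

    direct = decoded                 # position decoded -> position controlled
    remapped = np.zeros(T)           # position decoded -> velocity command
    for t in range(1, T):
        # Joystick-style: offset from current device position sets velocity.
        remapped[t] = remapped[t - 1] + gain * (decoded[t] - remapped[t - 1]) * dt

    for name, traj in [("direct", direct), ("remapped", remapped)]:
        print(name, "RMSE:", np.sqrt(np.mean((traj - true_pos) ** 2)))
    ```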

  20. Improved HDRG decoders for qudit and non-Abelian quantum error correction

    NASA Astrophysics Data System (ADS)

    Hutter, Adrian; Loss, Daniel; Wootton, James R.

    2015-03-01

    Hard-decision renormalization group (HDRG) decoders are an important class of decoding algorithms for topological quantum error correction. Due to their versatility, they have been used to decode systems with fractal logical operators, color codes, qudit topological codes, and non-Abelian systems. In this work, we develop a method of performing HDRG decoding which combines strengths of existing decoders and further improves upon them. In particular, we increase the minimal number of errors necessary for a logical error in a system of linear size L from Θ(L^(2/3)) to Ω(L^(1-ε)) for any ε > 0. We apply our algorithm to decoding D(Z_d) quantum double models and a non-Abelian anyon model with Fibonacci-like fusion rules, and show that it indeed significantly outperforms previous HDRG decoders. Furthermore, we provide the first study of continuous error correction with imperfect syndrome measurements for the D(Z_d) quantum double models. The parallelized runtime of our algorithm is poly(log L) for the perfect measurement case. In the continuous case with imperfect syndrome measurements, the averaged runtime is O(1) for Abelian systems, while continuous error correction for non-Abelian anyons remains an open problem.
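
    The basic HDRG clustering step can be sketched in a few lines, assuming a deliberately simplified 1-D toy with Z_d-valued charges (not the paper's improved decoder): syndrome defects are merged at exponentially growing distance scales until every cluster is neutral.

    ```python
    # Simplified 1-D HDRG toy: (position, Z_d charge) defects are merged at
    # doubling distance scales until all clusters are neutral (sum = 0 mod d).
    d = 3
    defects = [(2, 1), (4, 2), (20, 1), (23, 2)]
    clusters = [[df] for df in defects]
    scale = 1
    while any(sum(c for _, c in cl) % d != 0 for cl in clusters):
        merged = []
        for cl in clusters:
            for m in merged:
                if min(abs(p - q) for p, _ in cl for q, _ in m) <= scale:
                    m.extend(cl)      # within range: fuse the clusters
                    break
            else:
                merged.append(list(cl))
        clusters, scale = merged, scale * 2   # renormalize the scale
    print("neutral clusters:", clusters)
    ```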

  1. Decoding Multiple Sound Categories in the Human Temporal Cortex Using High Resolution fMRI

    PubMed Central

    Zhang, Fengqing; Wang, Ji-Ping; Kim, Jieun; Parrish, Todd; Wong, Patrick C. M.

    2015-01-01

    Perception of sound categories is an important aspect of auditory perception. The extent to which the brain’s representation of sound categories is encoded in specialized subregions or distributed across the auditory cortex remains unclear. Recent studies using multivariate pattern analysis (MVPA) of brain activations have provided important insights into how the brain decodes perceptual information. In the large existing literature on brain decoding using MVPA methods, relatively few studies have been conducted on multi-class categorization in the auditory domain. Here, we investigated the representation and processing of auditory categories within the human temporal cortex using high resolution fMRI and MVPA methods. More importantly, we considered decoding multiple sound categories simultaneously through multi-class support vector machine-recursive feature elimination (MSVM-RFE) as our MVPA tool. Results show that for all classifications the model MSVM-RFE was able to learn the functional relation between the multiple sound categories and the corresponding evoked spatial patterns and classify the unlabeled sound-evoked patterns significantly above chance. This indicates the feasibility of decoding multiple sound categories not only within but across subjects. However, the across-subject variation affects classification performance more than the within-subject variation, as the across-subject analysis has significantly lower classification accuracies. Sound category-selective brain maps were identified based on multi-class classification and revealed distributed patterns of brain activity in the superior temporal gyrus and the middle temporal gyrus. This is in accordance with previous studies, indicating that information in the spatially distributed patterns may reflect a more abstract perceptual level of representation of sound categories. Further, we show that the across-subject classification performance can be significantly improved by averaging the fMRI images over items, because the irrelevant variations between different items of the same sound category are reduced and in turn the proportion of signals relevant to sound categorization increases. PMID:25692885

  2. Decoding multiple sound categories in the human temporal cortex using high resolution fMRI.

    PubMed

    Zhang, Fengqing; Wang, Ji-Ping; Kim, Jieun; Parrish, Todd; Wong, Patrick C M

    2015-01-01

    Perception of sound categories is an important aspect of auditory perception. The extent to which the brain's representation of sound categories is encoded in specialized subregions or distributed across the auditory cortex remains unclear. Recent studies using multivariate pattern analysis (MVPA) of brain activations have provided important insights into how the brain decodes perceptual information. In the large existing literature on brain decoding using MVPA methods, relatively few studies have been conducted on multi-class categorization in the auditory domain. Here, we investigated the representation and processing of auditory categories within the human temporal cortex using high resolution fMRI and MVPA methods. More importantly, we considered decoding multiple sound categories simultaneously through multi-class support vector machine-recursive feature elimination (MSVM-RFE) as our MVPA tool. Results show that for all classifications the model MSVM-RFE was able to learn the functional relation between the multiple sound categories and the corresponding evoked spatial patterns and classify the unlabeled sound-evoked patterns significantly above chance. This indicates the feasibility of decoding multiple sound categories not only within but across subjects. However, the across-subject variation affects classification performance more than the within-subject variation, as the across-subject analysis has significantly lower classification accuracies. Sound category-selective brain maps were identified based on multi-class classification and revealed distributed patterns of brain activity in the superior temporal gyrus and the middle temporal gyrus. This is in accordance with previous studies, indicating that information in the spatially distributed patterns may reflect a more abstract perceptual level of representation of sound categories. Further, we show that the across-subject classification performance can be significantly improved by averaging the fMRI images over items, because the irrelevant variations between different items of the same sound category are reduced and in turn the proportion of signals relevant to sound categorization increases.
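
    A minimal sketch of the SVM-RFE idea on synthetic data (the paper's MSVM-RFE tool and fMRI preprocessing are more elaborate), using scikit-learn's generic RFE wrapper around a linear SVM:

    ```python
    # SVM-RFE on synthetic "voxel" data (stand-in for the fMRI patterns).
    import numpy as np
    from sklearn.feature_selection import RFE
    from sklearn.model_selection import train_test_split
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_trials, n_voxels, n_classes = 120, 500, 4
    y = np.repeat(np.arange(n_classes), n_trials // n_classes)
    X = rng.standard_normal((n_trials, n_voxels))
    X[:, :20] += 0.8 * y[:, None]          # 20 informative voxels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    rfe = RFE(LinearSVC(dual=False), n_features_to_select=50, step=0.2)
    rfe.fit(X_tr, y_tr)                    # iteratively drops weak features
    print("held-out accuracy:", rfe.score(X_te, y_te))
    ```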

  3. ML Frame Synchronization for OFDM Systems Using a Known Pilot and Cyclic Prefixes

    NASA Astrophysics Data System (ADS)

    Huh, Heon

    Orthogonal frequency-division multiplexing (OFDM) is a popular air interface technology that is adopted as a standard modulation scheme for 4G communication systems owing to its excellent spectral efficiency. For OFDM systems, synchronization problems have received much attention along with peak-to-average power ratio (PAPR) reduction. In addition to frequency offset estimation, frame synchronization is a challenging problem that must be solved to achieve optimal system performance. In this paper, we present a maximum likelihood (ML) frame synchronizer for OFDM systems. The synchronizer exploits a synchronization word and cyclic prefixes together to improve the synchronization performance. Numerical results show that the performance of the proposed frame synchronizer is better than that of conventional schemes. The proposed synchronizer can be used as a reference for evaluating the performance of other suboptimal frame synchronizers. We also modify the proposed frame synchronizer to reduce the implementation complexity and propose a near-ML synchronizer for time-varying fading channels.
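
    The structure of such a combined timing metric can be sketched as follows, assuming a simplified baseband model with a time-domain pilot (illustrative only, not the paper's exact ML derivation): the metric sums correlation against the known word with cyclic-prefix correlation at each candidate start position.

    ```python
    # Illustrative pilot + cyclic-prefix timing metric (simplified baseband).
    import numpy as np

    rng = np.random.default_rng(0)
    N, Ncp, offset = 64, 16, 37
    pilot = np.exp(2j * np.pi * rng.random(N))   # known synchronization word

    def with_cp(sym):
        return np.concatenate([sym[-Ncp:], sym])  # prepend cyclic prefix

    tx = np.concatenate([with_cp(pilot)] +
                        [with_cp(np.exp(2j * np.pi * rng.random(N)))
                         for _ in range(3)])
    rx = np.concatenate([0.1 * rng.standard_normal(offset), tx])
    rx = rx + 0.1 * (rng.standard_normal(rx.size) + 1j * rng.standard_normal(rx.size))

    def metric(k):
        sym = rx[k + Ncp : k + Ncp + N]
        pilot_term = np.abs(np.vdot(pilot, sym))                # known-word term
        cp_term = np.abs(np.vdot(rx[k : k + Ncp], sym[-Ncp:]))  # CP term
        return pilot_term + cp_term

    scores = [metric(k) for k in range(rx.size - N - Ncp)]
    print("estimated start:", int(np.argmax(scores)), "true:", offset)
    ```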

  4. Charge–transfer reaction of 2,3-dichloro-1,4-naphthoquinone with crizotinib: Spectrophotometric study, computational molecular modeling and use in development of microwell assay for crizotinib

    PubMed Central

    Alzoman, Nourah Z.; Alshehri, Jamilah M.; Darwish, Ibrahim A.; Khalil, Nasr Y.; Abdel-Rahman, Hamdy M.

    2014-01-01

    The reaction of 2,3-dichloro-1,4-naphthoquinone (DCNQ) with crizotinib (CZT; a novel drug used for treatment of non-small cell lung cancer) was investigated in different solvents of varying dielectric constants and polarity indexes. The reaction produced a red-colored product. Spectrophotometric investigations confirmed that the reaction proceeded through charge–transfer (CT) complex formation. The molar absorptivity of the complex was found to be linearly correlated with the dielectric constant and polarity index of the solvent; the correlation coefficients were 0.9567 and 0.9069, respectively. The stoichiometric ratio of DCNQ:CZT was found to be 2:1, and the association constant of the complex was found to be 1.07 × 10² l/mol. The kinetics of the reaction were studied; the order of the reaction, the rate, and the rate constant were determined. Computational molecular modeling of the complex between DCNQ and CZT was conducted, the sites of interaction on the CZT molecule were determined, and the mechanism of the reaction was postulated. The reaction was employed as the basis for the development of a novel 96-microwell assay for CZT in a linear range of 4–500 μg/ml. The assay limits of detection and quantitation were 2.06 and 6.23 μg/ml, respectively. The assay was validated as per the guidelines of the International Conference on Harmonization (ICH) and successfully applied to the analysis of CZT in its bulk form and capsules with good accuracy and precision. The assay has high throughput and consumes minimal volumes of organic solvents, thus reducing the analysts' exposure to their toxic effects and significantly reducing the analysis cost. PMID:25685046

  5. Rendering of 3D-wavelet-compressed concentric mosaic scenery with progressive inverse wavelet synthesis (PIWS)

    NASA Astrophysics Data System (ADS)

    Wu, Yunnan; Luo, Lin; Li, Jin; Zhang, Ya-Qin

    2000-05-01

    The concentric mosaics offer a quick solution to the construction and navigation of a virtual environment. To reduce the vast data volume of the concentric mosaics, a compression scheme based on a 3D wavelet transform was proposed in a previous paper. In this work, we investigate an efficient implementation of the renderer. It is preferable not to expand the compressed bitstream as a whole, so that the memory consumption of the renderer can be reduced. Instead, only the data necessary to render the current view are accessed and decoded. The progressive inverse wavelet synthesis (PIWS) algorithm is proposed to provide this random data access and to reduce the computation required per data-access request to a minimum. A mixed cache is used in PIWS, where entropy-decoded wavelet coefficients, intermediate lifting results, and fully synthesized pixels are all stored in the same memory unit because of the in-place calculation property of the lifting implementation. PIWS operates with a finite state machine, where each memory unit carries a state indicating what type of content is currently stored. The computational saving achieved by PIWS is demonstrated with extensive experimental results.
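
    The on-demand synthesis idea can be illustrated with a one-level 1-D Haar lifting sketch (PIWS itself is 3-D, with a mixed cache and explicit per-unit states; this toy only shows the lazy, cached reconstruction):

    ```python
    # Lazy, cached inverse Haar lifting in 1-D: only requested samples are
    # synthesized, mimicking PIWS's render-on-demand access pattern.
    import numpy as np

    rng = np.random.default_rng(0)
    s, d = rng.standard_normal(8), rng.standard_normal(8)  # lowpass/highpass
    cache = {}                                             # index -> sample

    def pixel(i):
        if i not in cache:
            k = i // 2
            even = s[k] - 0.5 * d[k]        # undo the update lifting step
            cache[2 * k] = even
            cache[2 * k + 1] = d[k] + even  # undo the predict lifting step
        return cache[i]

    view = [pixel(i) for i in range(4, 8)]  # render a partial "view"
    print(len(cache), "of", 2 * s.size, "samples synthesized")
    ```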

  6. Motion Direction Biases and Decoding in Human Visual Cortex

    PubMed Central

    Wang, Helena X.; Merriam, Elisha P.; Freeman, Jeremy

    2014-01-01

    Functional magnetic resonance imaging (fMRI) studies have relied on multivariate analysis methods to decode visual motion direction from measurements of cortical activity. Above-chance decoding has been commonly used to infer the motion-selective response properties of the underlying neural populations. Moreover, patterns of reliable response biases across voxels that underlie decoding have been interpreted to reflect maps of functional architecture. Using fMRI, we identified a direction-selective response bias in human visual cortex that: (1) predicted motion-decoding accuracy; (2) depended on the shape of the stimulus aperture rather than the absolute direction of motion, such that response amplitudes gradually decreased with distance from the stimulus aperture edge corresponding to motion origin; and (3) was present in V1, V2, V3, but not evident in MT+, explaining the higher motion-decoding accuracies reported previously in early visual cortex. These results demonstrate that fMRI-based motion decoding has little or no dependence on the underlying functional organization of motion selectivity. PMID:25209297

  7. Mathematics is differentially related to reading comprehension and word decoding: Evidence from a genetically-sensitive design

    PubMed Central

    Harlaar, Nicole; Kovas, Yulia; Dale, Philip S.; Petrill, Stephen A.; Plomin, Robert

    2013-01-01

    Although evidence suggests that individual differences in reading and mathematics skills are correlated, this relationship has typically only been studied in relation to word decoding or global measures of reading. It is unclear whether mathematics is differentially related to word decoding and reading comprehension. The current study examined these relationships at both a phenotypic and etiological level in a population-based cohort of 5162 twin pairs at age 12. Multivariate genetic analyses of latent phenotypic factors of mathematics, word decoding and reading comprehension revealed substantial genetic and shared environmental correlations among all three domains. However, the phenotypic and genetic correlations between mathematics and reading comprehension were significantly greater than between mathematics and word decoding. Independent of mathematics, there was also evidence for genetic and nonshared environmental links between word decoding and reading comprehension. These findings indicate that word decoding and reading comprehension have partly distinct relationships with mathematics in the middle school years. PMID:24319294

  8. Decoding brain activity using a large-scale probabilistic functional-anatomical atlas of human cognition

    PubMed Central

    Jones, Michael N.

    2017-01-01

    A central goal of cognitive neuroscience is to decode human brain activity—that is, to infer mental processes from observed patterns of whole-brain activation. Previous decoding efforts have focused on classifying brain activity into a small set of discrete cognitive states. To attain maximal utility, a decoding framework must be open-ended, systematic, and context-sensitive—that is, capable of interpreting numerous brain states, presented in arbitrary combinations, in light of prior information. Here we take steps towards this objective by introducing a probabilistic decoding framework based on a novel topic model—Generalized Correspondence Latent Dirichlet Allocation—that learns latent topics from a database of over 11,000 published fMRI studies. The model produces highly interpretable, spatially-circumscribed topics that enable flexible decoding of whole-brain images. Importantly, the Bayesian nature of the model allows one to “seed” decoder priors with arbitrary images and text—enabling researchers, for the first time, to generate quantitative, context-sensitive interpretations of whole-brain patterns of brain activity. PMID:29059185

  9. Mathematics is differentially related to reading comprehension and word decoding: Evidence from a genetically-sensitive design.

    PubMed

    Harlaar, Nicole; Kovas, Yulia; Dale, Philip S; Petrill, Stephen A; Plomin, Robert

    2012-08-01

    Although evidence suggests that individual differences in reading and mathematics skills are correlated, this relationship has typically only been studied in relation to word decoding or global measures of reading. It is unclear whether mathematics is differentially related to word decoding and reading comprehension. The current study examined these relationships at both a phenotypic and etiological level in a population-based cohort of 5162 twin pairs at age 12. Multivariate genetic analyses of latent phenotypic factors of mathematics, word decoding and reading comprehension revealed substantial genetic and shared environmental correlations among all three domains. However, the phenotypic and genetic correlations between mathematics and reading comprehension were significantly greater than between mathematics and word decoding. Independent of mathematics, there was also evidence for genetic and nonshared environmental links between word decoding and reading comprehension. These findings indicate that word decoding and reading comprehension have partly distinct relationships with mathematics in the middle school years.

  10. Soft-output decoding algorithms in iterative decoding of turbo codes

    NASA Technical Reports Server (NTRS)

    Benedetto, S.; Montorsi, G.; Divsalar, D.; Pollara, F.

    1996-01-01

    In this article, we present two versions of a simplified maximum a posteriori decoding algorithm. The algorithms work in a sliding-window form, like the Viterbi algorithm, and can thus be used to decode continuously transmitted sequences obtained by parallel concatenated codes, without requiring code trellis termination. A heuristic explanation is also given of how to embed the maximum a posteriori algorithms into the iterative decoding of parallel concatenated codes (turbo codes). The performance of the two algorithms is compared on the basis of a powerful rate-1/3 parallel concatenated code. Basic circuits are proposed to implement the simplified maximum a posteriori decoding algorithm using lookup tables, along with two further approximations (linear and threshold) that eliminate the need for lookup tables at the cost of a very small penalty.
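
    The lookup-table and approximation options revolve around the Jacobian logarithm max*(a, b) = max(a, b) + log(1 + e^{-|a-b|}) used in such decoders. A small sketch with commonly cited approximation constants (illustrative; not necessarily the article's exact values):

    ```python
    # Exact Jacobian logarithm and two reduced-complexity approximations.
    import numpy as np

    def max_star(a, b):                 # exact: log(e^a + e^b)
        return max(a, b) + np.log1p(np.exp(-abs(a - b)))

    def max_star_linear(a, b):          # linear correction (common constants)
        return max(a, b) + max(0.0, 0.6931 - 0.25 * abs(a - b))

    def max_star_threshold(a, b):       # constant/threshold correction
        return max(a, b) + (0.375 if abs(a - b) < 2.0 else 0.0)

    for a, b in [(1.0, 0.5), (3.0, -1.0)]:
        print(round(max_star(a, b), 4),
              round(max_star_linear(a, b), 4),
              round(max_star_threshold(a, b), 4))
    ```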

  11. BindML/BindML+: Detecting Protein-Protein Interaction Interface Propensity from Amino Acid Substitution Patterns.

    PubMed

    Wei, Qing; La, David; Kihara, Daisuke

    2017-01-01

    Prediction of protein-protein interaction sites in a protein structure provides important information for elucidating the mechanism of protein function and can also be useful in guiding modeling or design procedures for protein complex structures. Since prediction methods essentially assess the propensity of amino acids to be part of a protein docking interface, they can help in designing protein-protein interactions. Here, we introduce the BindML and BindML+ protein-protein interaction site prediction methods. BindML predicts protein-protein interaction sites by identifying mutation patterns found in known protein-protein complexes using phylogenetic substitution models. BindML+ is an extension of BindML that distinguishes permanent and transient types of protein-protein interaction sites. We developed an interactive web-server that provides a convenient interface to assist in the structural visualization of protein-protein interaction site predictions. The input to the web-server is a tertiary structure of interest. BindML and BindML+ are available at http://kiharalab.org/bindml/ and http://kiharalab.org/bindml/plus/ .

  12. Decoding of visual activity patterns from fMRI responses using multivariate pattern analyses and convolutional neural network.

    PubMed

    Zafar, Raheel; Kamel, Nidal; Naufal, Mohamad; Malik, Aamir Saeed; Dass, Sarat C; Ahmad, Rana Fayyaz; Abdullah, Jafri M; Reza, Faruque

    2017-01-01

    Decoding of human brain activity has always been a primary goal in neuroscience, especially with functional magnetic resonance imaging (fMRI) data. In recent years, the convolutional neural network (CNN) has become a popular method for feature extraction due to its high accuracy; however, it requires a large amount of computation and training data. In this study, an algorithm is developed using multivariate pattern analysis (MVPA) and a modified CNN to decode the brain's response to different images with a limited data set. Selection of significant features is an important part of fMRI data analysis, since it reduces the computational burden and improves prediction performance; here, significant features are selected using a t-test. MVPA uses machine learning algorithms to classify different brain states and helps in prediction during the task. A general linear model (GLM) is used to find the unknown parameters of every individual voxel, and classification is done using a multi-class support vector machine (SVM). The proposed MVPA-CNN based algorithm is compared with a region of interest (ROI) based method and with MVPA based estimated values. The proposed method showed better overall accuracy (68.6%) compared to ROI (61.88%) and estimated values (64.17%).
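
    A compact sketch of the significance-based feature selection plus SVM classification described above, on synthetic data standing in for GLM voxel estimates (for brevity the t-test is binary and is not nested inside the cross-validation, which a rigorous analysis would require):

    ```python
    # t-test voxel selection followed by SVM classification (synthetic data).
    import numpy as np
    from scipy.stats import ttest_ind
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_per_class, n_voxels = 40, 300
    y = np.repeat([0, 1], n_per_class)
    X = rng.standard_normal((2 * n_per_class, n_voxels))
    X[y == 1, :15] += 0.7                    # 15 informative voxels

    _, p = ttest_ind(X[y == 0], X[y == 1], axis=0)
    selected = p < 0.01                      # significance-based selection
    print("voxels kept:", int(selected.sum()))
    print("CV accuracy:", cross_val_score(SVC(), X[:, selected], y, cv=5).mean())
    ```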

  13. Beyond ORF: Student-Level Predictors of Reading Achievement

    ERIC Educational Resources Information Center

    Canto, Angela I.; Proctor, Briley E.

    2013-01-01

    This study explored student-level predictors of reading achievement among third grade regular education students. Predictors included student demographics (sex and socioeconomic status (SES), using free and reduced lunch as proxy for SES), direct observations of reading skills (oral reading fluency (ORF) and word decoding skill (nonsense word…

  14. Numerical and analytical bounds on threshold error rates for hypergraph-product codes

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Prabhakar, Sanjay; Dumer, Ilya; Pryadko, Leonid P.

    2018-06-01

    We study analytically and numerically the decoding properties of finite-rate hypergraph-product quantum low-density parity-check codes obtained from random (3,4)-regular Gallager codes, with a simple model of independent X and Z errors. Several nontrivial lower and upper bounds for the decodable region are constructed analytically by analyzing the properties of the homological difference, equal to minus the logarithm of the maximum-likelihood decoding probability for a given syndrome. Numerical results include an upper bound for the decodable region from specific heat calculations in associated Ising models and a minimum-weight decoding threshold of approximately 7%.

  15. Decoding complex flow-field patterns in visual working memory.

    PubMed

    Christophel, Thomas B; Haynes, John-Dylan

    2014-05-01

    There has been a long history of research on visual working memory. Whereas early studies have focused on the role of lateral prefrontal cortex in the storage of sensory information, this has been challenged by research in humans that has directly assessed the encoding of perceptual contents, pointing towards a role of visual and parietal regions during storage. In a previous study we used pattern classification to investigate the storage of complex visual color patterns across delay periods. This revealed coding of such contents in early visual and parietal brain regions. Here we aim to investigate whether the involvement of visual and parietal cortex is also observable for other types of complex, visuo-spatial pattern stimuli. Specifically, we used a combination of fMRI and multivariate classification to investigate the retention of complex flow-field stimuli defined by the spatial patterning of motion trajectories of random dots. Subjects were trained to memorize the precise spatial layout of these stimuli and to retain this information during an extended delay. We used a multivariate decoding approach to identify brain regions where spatial patterns of activity encoded the memorized stimuli. Content-specific memory signals were observable in motion sensitive visual area MT+ and in posterior parietal cortex that might encode spatial information in a modality independent manner. Interestingly, we also found information about the memorized visual stimulus in somatosensory cortex, suggesting a potential crossmodal contribution to memory. Our findings thus indicate that working memory storage of visual percepts might be distributed across unimodal, multimodal and even crossmodal brain regions.

  16. Building Capacity for Professional Development in Adolescent Reading: The National Writing Project's National Reading Initiative. Evaluation Summary Report 2003-2006

    ERIC Educational Resources Information Center

    Academy for Educational Development, 2007

    2007-01-01

    An area of particular concern in adolescent literacy is comprehension of informational text: many students can successfully decode words without actually being able to understand the texts they read. As they progress through school, they have to read increasingly complex texts but receive little if any explicit instruction to help them. Beyond the…

  17. Properties of a certain stochastic dynamical system, channel polarization, and polar codes

    NASA Astrophysics Data System (ADS)

    Tanaka, Toshiyuki

    2010-06-01

    A new family of codes, called polar codes, has recently been proposed by Arikan. Polar codes are of theoretical importance because they are provably capacity-achieving with low-complexity encoding and decoding. We first discuss basic properties of a certain stochastic dynamical system, on the basis of which properties of channel polarization and polar codes are reviewed, with emphasis on our recent results.
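
    Channel polarization is easy to visualize numerically for the binary erasure channel, where the Bhattacharyya parameter Z evolves as Z -> Z^2 and Z -> 2Z - Z^2 under the standard polar transform (a textbook recursion, not specific to this paper):

    ```python
    # Bhattacharyya-parameter recursion for the binary erasure channel.
    import numpy as np

    def polarize(z0=0.5, levels=10):
        z = np.array([z0])
        for _ in range(levels):
            z = np.concatenate([2 * z - z ** 2, z ** 2])  # bad / good splits
        return z

    z = polarize()
    print("near-perfect channels (Z < 1e-3):", np.mean(z < 1e-3))
    print("near-useless channels (Z > 1 - 1e-3):", np.mean(z > 1 - 1e-3))
    # As the block length grows, the near-perfect fraction approaches 1 - z0.
    ```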

  18. Telemetry coding study for the international magnetosphere explorers, mother/daughter and heliocentric missions. Volume 2: Final report

    NASA Technical Reports Server (NTRS)

    Cartier, D. E.

    1973-01-01

    Convolutional coding theory is presented for the IME and Heliocentric spacecraft. The amount of coding gain needed by the mission is determined. Recommendations are given for an encoder/decoder system to provide this gain, along with an evaluation of the system's impact on the space network in terms of cost and complexity.

  19. Complete Decoding and Reporting of Aviation Routine Weather Reports (METARs)

    NASA Technical Reports Server (NTRS)

    Lui, Man-Cheung Max

    2014-01-01

    Aviation Routine Weather Reports (METARs) provide surface weather information at and around observation stations, including airport terminals. These weather observations are used by pilots for flight planning and by air traffic service providers for managing departure and arrival flights. The METARs are also an important source of weather data for Air Traffic Management (ATM) analysts and researchers at NASA and elsewhere. These researchers use METARs, for example, to correlate severe weather events with local or national air traffic actions that restrict air traffic. A METAR is made up of multiple groups of coded text, each with a specific standard coding format. These groups of coded text are located in two sections of a report: Body and Remarks. The coded text groups in a U.S. METAR are intended to follow the coding standards set by the National Oceanic and Atmospheric Administration (NOAA). However, manual data entry and edits made by a human report observer may result in coded text elements that do not follow the standards, especially in the Remarks section. And contrary to the standards, some significant weather observations are noted only in the Remarks section and not in the Body section of the reports. While human readers can infer the intended meaning of non-standard coding of weather conditions, doing so with a computer program is far more challenging. However, such programmatic pre-processing is necessary to enable efficient and fast database queries when researchers need to perform any significant historical weather analysis. Therefore, to support such analysis, a computer algorithm was developed to identify groups of coded text anywhere in a report and to perform the subsequent decoding in software. The algorithm considers common deviations from the standards and data entry mistakes made by observers. The implemented software was tested on 12 million reports, and the decoding process was able to completely interpret 99.93% of them. This document presents the deviations from the standards and the decoding algorithm. Storing all decoded data in a database allows users to quickly query a large amount of data and to perform data mining on it. Users can specify complex query criteria not only on date or airport but also on weather condition. This document also describes the design of a database schema for storing the decoded data, and a Data Warehouse web application that allows users to perform reporting and analysis on the decoded data. Finally, this document presents a case study correlating dust storms reported in METARs from the Phoenix International airport with Ground Stops issued by the Air Traffic Control System Command Center (ATCSCC). Blowing widespread dust is one of the weather conditions reported when a dust storm occurs. By querying the database, 294 METARs were found to report blowing widespread dust at the Phoenix airport, and 41 of them reported this condition only in the Remarks section. When METAR is a data source for ATM research, it is important to include weather conditions not only from the Body section but also from the Remarks section of METARs.
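
    A minimal sketch of group-wise METAR decoding (illustrative regular expressions covering only a handful of groups; the tool described above handles far more groups plus observer deviations in the Remarks section):

    ```python
    # Toy group-wise METAR decoder (a few illustrative patterns only).
    import re

    GROUPS = {
        "station": re.compile(r"^[A-Z]{4}$"),
        "time":    re.compile(r"^\d{6}Z$"),
        "wind":    re.compile(r"^(\d{3}|VRB)\d{2,3}(G\d{2,3})?KT$"),
        "vis":     re.compile(r"^\d{1,2}SM$"),
        "weather": re.compile(r"^[-+]?(BLDU|DS|TS|RA|SN|FG|BR|HZ)$"),
    }

    def decode_metar(report):
        decoded = {}
        for token in report.split():
            for name, pattern in GROUPS.items():
                if name not in decoded and pattern.match(token):
                    decoded[name] = token
                    break
        return decoded

    # BLDU (blowing widespread dust) is the condition from the case study.
    print(decode_metar("KPHX 121553Z 28020G35KT 1SM BLDU FEW070 RMK AO2"))
    ```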

  20. A Systolic VLSI Design of a Pipeline Reed-solomon Decoder

    NASA Technical Reports Server (NTRS)

    Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.; Reed, I. S.

    1984-01-01

    A pipeline structure of a transform decoder similar to a systolic array was developed to decode Reed-Solomon (RS) codes. An important ingredient of this design is a modified Euclidean algorithm for computing the error locator polynomial. The computation of inverse field elements is completely avoided in this modification of Euclid's algorithm. The new decoder is regular and simple, and naturally suitable for VLSI implementation.

  1. A VLSI design of a pipeline Reed-Solomon decoder

    NASA Technical Reports Server (NTRS)

    Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.; Reed, I. S.

    1985-01-01

    A pipeline structure of a transform decoder similar to a systolic array was developed to decode Reed-Solomon (RS) codes. An important ingredient of this design is a modified Euclidean algorithm for computing the error locator polynomial. The computation of inverse field elements is completely avoided in this modification of Euclid's algorithm. The new decoder is regular and simple, and naturally suitable for VLSI implementation.

  2. Coding/decoding two-dimensional images with orbital angular momentum of light.

    PubMed

    Chu, Jiaqi; Li, Xuefeng; Smithwick, Quinn; Chu, Daping

    2016-04-01

    We investigate encoding and decoding of two-dimensional information using the orbital angular momentum (OAM) of light. Spiral phase plates and phase-only spatial light modulators are used in encoding and decoding of OAM states, respectively. We show that off-axis points and spatial variables encoded with a given OAM state can be recovered through decoding with the corresponding complementary OAM state.

  3. To sort or not to sort: the impact of spike-sorting on neural decoding performance.

    PubMed

    Todorova, Sonia; Sadtler, Patrick; Batista, Aaron; Chase, Steven; Ventura, Valérie

    2014-10-01

    Brain-computer interfaces (BCIs) are a promising technology for restoring motor ability to paralyzed patients. Spiking-based BCIs have successfully been used in clinical trials to control multi-degree-of-freedom robotic devices. Current implementations of these devices require a lengthy spike-sorting step, which is an obstacle to moving this technology from the lab to the clinic. A viable alternative is to avoid spike-sorting, treating all threshold crossings of the voltage waveform on an electrode as coming from one putative neuron. It is not known, however, how much decoding information might be lost by ignoring spike identity. We present a full analysis of the effects of spike-sorting schemes on decoding performance. Specifically, we compare how well two common decoders, the optimal linear estimator and the Kalman filter, reconstruct the arm movements of non-human primates performing reaching tasks, when receiving input from various sorting schemes. The schemes we tested included: using threshold crossings without spike-sorting; expert-sorting discarding the noise; expert-sorting, including the noise as if it were another neuron; and automatic spike-sorting using waveform features. We also decoded from a joint statistical model for the waveforms and tuning curves, which does not involve an explicit spike-sorting step. Discarding the threshold crossings that cannot be assigned to neurons degrades decoding: no spikes should be discarded. Decoding based on spike-sorted units outperforms decoding based on electrode voltage crossings: spike-sorting is useful. The four waveform-based spike-sorting methods tested here yield similar decoding efficiencies: a fast and simple method is competitive. Decoding using the joint waveform and tuning model shows promise but is not consistently superior. Our results indicate that simple automated spike-sorting performs as well as the more computationally or manually intensive methods used here. Even basic spike-sorting adds value to the low-threshold waveform-crossing methods often employed in BCI decoding.
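
    The sorted-versus-unsorted comparison can be caricatured with a toy simulation (hypothetical tuning values, not the study's recordings): two units with opposite tuning share one electrode, and a ridge-regression decoder standing in for the optimal linear estimator is fit to sorted counts versus summed threshold crossings.

    ```python
    # Toy comparison: ridge (OLE-style) decoding from sorted units vs. summed
    # threshold crossings, with two opposite-tuned units on one electrode.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    T = 2000
    vel = np.convolve(rng.standard_normal(T), np.ones(20) / 20, mode="same")
    tuning = np.array([4.0, -3.0])                  # hypothetical tuning
    counts = rng.poisson(np.clip(5 + np.outer(vel, tuning), 0, None))

    features = {"sorted": counts,                   # per-unit spike counts
                "unsorted": counts.sum(axis=1, keepdims=True)}  # crossings

    tr, te = slice(0, 1500), slice(1500, T)
    for name, X in features.items():
        pred = Ridge(alpha=1.0).fit(X[tr], vel[tr]).predict(X[te])
        print(name, "correlation:", round(np.corrcoef(pred, vel[te])[0, 1], 3))
    ```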

  4. To sort or not to sort: the impact of spike-sorting on neural decoding performance

    NASA Astrophysics Data System (ADS)

    Todorova, Sonia; Sadtler, Patrick; Batista, Aaron; Chase, Steven; Ventura, Valérie

    2014-10-01

    Objective. Brain-computer interfaces (BCIs) are a promising technology for restoring motor ability to paralyzed patients. Spiking-based BCIs have successfully been used in clinical trials to control multi-degree-of-freedom robotic devices. Current implementations of these devices require a lengthy spike-sorting step, which is an obstacle to moving this technology from the lab to the clinic. A viable alternative is to avoid spike-sorting, treating all threshold crossings of the voltage waveform on an electrode as coming from one putative neuron. It is not known, however, how much decoding information might be lost by ignoring spike identity. Approach. We present a full analysis of the effects of spike-sorting schemes on decoding performance. Specifically, we compare how well two common decoders, the optimal linear estimator and the Kalman filter, reconstruct the arm movements of non-human primates performing reaching tasks, when receiving input from various sorting schemes. The schemes we tested included: using threshold crossings without spike-sorting; expert-sorting discarding the noise; expert-sorting, including the noise as if it were another neuron; and automatic spike-sorting using waveform features. We also decoded from a joint statistical model for the waveforms and tuning curves, which does not involve an explicit spike-sorting step. Main results. Discarding the threshold crossings that cannot be assigned to neurons degrades decoding: no spikes should be discarded. Decoding based on spike-sorted units outperforms decoding based on electrode voltage crossings: spike-sorting is useful. The four waveform-based spike-sorting methods tested here yield similar decoding efficiencies: a fast and simple method is competitive. Decoding using the joint waveform and tuning model shows promise but is not consistently superior. Significance. Our results indicate that simple automated spike-sorting performs as well as the more computationally or manually intensive methods used here. Even basic spike-sorting adds value to the low-threshold waveform-crossing methods often employed in BCI decoding.

  5. Volumetric Analysis of Regional Variability in the Cerebellum of Children with Dyslexia

    PubMed Central

    Stuebing, Karla; Juranek, Jenifer; Fletcher, Jack M.

    2013-01-01

    Cerebellar deficits and subsequent impairment in procedural learning may contribute to both motor difficulties and reading impairment in dyslexia. We used quantitative magnetic resonance imaging to investigate the role of regional variation in cerebellar anatomy in children with single-word decoding impairments (N=23), children with impairment in fluency alone (N=8), and typically developing children (N=16). Children with decoding impairments (dyslexia) demonstrated no statistically significant differences in overall grey and white matter volumes or cerebellar asymmetry; however, reduced volume in the anterior lobe of the cerebellum relative to typically developing children was observed. These results implicate cerebellar involvement in dyslexia and establish an important foundation for future research on the connectivity of the cerebellum and cortical regions typically associated with reading impairment. PMID:23828023

  6. Volumetric analysis of regional variability in the cerebellum of children with dyslexia.

    PubMed

    Fernandez, Vindia G; Stuebing, Karla; Juranek, Jenifer; Fletcher, Jack M

    2013-12-01

    Cerebellar deficits and subsequent impairment in procedural learning may contribute to both motor difficulties and reading impairment in dyslexia. We used quantitative magnetic resonance imaging to investigate the role of regional variation in cerebellar anatomy in children with single-word decoding impairments (N = 23), children with impairment in fluency alone (N = 8), and typically developing children (N = 16). Children with decoding impairments (dyslexia) demonstrated no statistically significant differences in overall grey and white matter volumes or cerebellar asymmetry; however, reduced volume in the anterior lobe of the cerebellum relative to typically developing children was observed. These results implicate cerebellar involvement in dyslexia and establish an important foundation for future research on the connectivity of the cerebellum and cortical regions typically associated with reading impairment.

  7. Methodology and method and apparatus for signaling with capacity optimized constellations

    NASA Technical Reports Server (NTRS)

    Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor)

    2011-01-01

    A communication system's transmitter includes a coder configured to receive user bits and output encoded bits at an expanded output bit rate, a mapper configured to map encoded bits to symbols in a symbol constellation, and a modulator configured to generate a signal for transmission via the communication channel using the symbols generated by the mapper. In addition, the receiver includes a demodulator configured to demodulate the signal received via the communication channel, a demapper configured to estimate likelihoods from the demodulated signal, and a decoder configured to estimate decoded bits from the likelihoods generated by the demapper. Furthermore, the symbol constellation is a capacity-optimized, geometrically spaced symbol constellation that provides a given capacity at a reduced signal-to-noise ratio compared with a signal constellation that maximizes d_min.

  8. 47 CFR 11.33 - EAS Decoder.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... time periods expire. (4) Display and logging. A visual message shall be developed from any valid header... input. (8) Decoder Programming. Access to decoder programming shall be protected by a lock or other...

  9. Anion-π interaction in metal-organic networks formed by metal halides and tetracyanopyrazine

    NASA Astrophysics Data System (ADS)

    Rosokha, Sergiy V.; Kumar, Amar

    2017-06-01

    Co-crystallization of tetracyanopyrazine, TCP, with the tetraalkylammonium salts of linear [CuBr2]-, planar [PtCl4]2- or [Pt2Br6]2-, or octahedral [PtBr6]2- complexes resulted in formation of alternating [MlXn]m-/TCP stacks separated by the Alk4N+ cations. These hybrid stacks showed multiple short contacts between the halide ligands of the [MlXn]m- complexes and the carbon atoms of the TCP acceptor, indicating strong anion-π bonding between these species. This confirms that the anion-π interaction is sufficiently strong to bring together such disparate components as ionic metal complexes and neutral aromatic molecules, regardless of the geometry of the coordination compound. Structural features of the solid-state stacks and of the [MlXn]m-·TCP dyads obtained from quantum-mechanical computations suggest that the molecular-orbital (weakly covalent) component plays an important role in the association of the [MlXn]m- complexes with the TCP acceptor.

  10. On the error probability of general tree and trellis codes with applications to sequential decoding

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1973-01-01

    An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.

  11. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights

    PubMed Central

    Nicola, Wilten; Tripp, Bryan; Scott, Matthew

    2016-01-01

    A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks. PMID:26973503

  12. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights.

    PubMed

    Nicola, Wilten; Tripp, Bryan; Scott, Matthew

    2016-01-01

    A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks.
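
    For contrast with the analytical decoders, here is the standard numerical NEF decoder computation the paper seeks to avoid: a regularized least-squares solve over sampled tuning curves (rectified-linear curves are used below as a stand-in for type-I rates; parameters are illustrative).

    ```python
    # Standard numerical NEF decoders via regularized least squares.
    import numpy as np

    rng = np.random.default_rng(0)
    N, n_pts = 50, 200
    x = np.linspace(-1, 1, n_pts)

    # Heterogeneous rectified-linear tuning curves (stand-in for type-I rates).
    gains, biases = rng.uniform(0.5, 2, N), rng.uniform(-1, 1, N)
    A = np.clip(np.outer(x, gains) + biases, 0, None)   # (n_pts, N) rates

    target = x ** 2                                     # function to decode
    Gamma = A.T @ A / n_pts                             # Gram matrix to invert
    Upsilon = A.T @ target / n_pts
    decoders = np.linalg.solve(Gamma + 1e-3 * np.eye(N), Upsilon)

    mse = np.mean((A @ decoders - target) ** 2)
    print("mean-squared decoding error:", mse)          # shrinks roughly as 1/N
    ```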

  13. Hierarchical Neural Representation of Dreamed Objects Revealed by Brain Decoding with Deep Neural Network Features.

    PubMed

    Horikawa, Tomoyasu; Kamitani, Yukiyasu

    2017-01-01

    Dreaming is generally thought to be generated by spontaneous brain activity during sleep with patterns common to waking experience. This view is supported by a recent study demonstrating that dreamed objects can be predicted from brain activity during sleep using statistical decoders trained with stimulus-induced brain activity. However, it remains unclear whether and how visual image features associated with dreamed objects are represented in the brain. In this study, we used a deep neural network (DNN) model for object recognition as a proxy for hierarchical visual feature representation, and DNN features for dreamed objects were analyzed with brain decoding of fMRI data collected during dreaming. The decoders were first trained with stimulus-induced brain activity labeled with the feature values of the stimulus image from multiple DNN layers. The decoders were then used to decode DNN features from the dream fMRI data, and the decoded features were compared with the averaged features of each object category calculated from a large-scale image database. We found that the feature values decoded from the dream fMRI data positively correlated with those associated with dreamed object categories at mid- to high-level DNN layers. Using the decoded features, the dreamed object category could be identified at above-chance levels by matching them to the averaged features for candidate categories. The results suggest that dreaming recruits hierarchical visual feature representations associated with objects, which may support phenomenal aspects of dream experience.

  14. Visual coding with a population of direction-selective neurons.

    PubMed

    Fiscella, Michele; Franke, Felix; Farrow, Karl; Müller, Jan; Roska, Botond; da Silveira, Rava Azeredo; Hierlemann, Andreas

    2015-10-01

    The brain decodes the visual scene from the action potentials of ∼20 retinal ganglion cell types. Among the retinal ganglion cells, direction-selective ganglion cells (DSGCs) encode motion direction. Several studies have focused on the encoding or decoding of motion direction by recording multiunit activity, mainly in the visual cortex. In this study, we simultaneously recorded from all four types of ON-OFF DSGCs of the rabbit retina using a microelectronics-based high-density microelectrode array (HDMEA) and decoded their concerted activity using probabilistic and linear decoders. Furthermore, we investigated how the modification of stimulus parameters (velocity, size, angle of moving object) and the use of different tuning curve fits influenced decoding precision. Finally, we simulated ON-OFF DSGC activity, based on real data, in order to understand how tuning curve widths and the angular distribution of the cells' preferred directions influence decoding performance. We found that probabilistic decoding strategies outperformed, on average, linear methods and that decoding precision was robust to changes in stimulus parameters such as velocity. The removal of noise correlations among cells, by random shuffling trials, caused a drop in decoding precision. Moreover, we found that tuning curves are broad in order to minimize large errors at the expense of a higher average error, and that the retinal direction-selective system would not substantially benefit, on average, from having more than four types of ON-OFF DSGCs or from a perfect alignment of the cells' preferred directions.
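
    The probabilistic-versus-linear comparison can be sketched with four idealized ON-OFF DSGC-like tuning curves (illustrative parameters, not the recorded data): a population-vector estimate is compared against Poisson maximum-likelihood decoding on a direction grid.

    ```python
    # Four idealized direction-tuned cells: population vector vs. Poisson ML.
    import numpy as np

    rng = np.random.default_rng(0)
    prefs = np.deg2rad([0.0, 90.0, 180.0, 270.0])   # preferred directions

    def rates(theta):                                # von Mises-like tuning
        return 2 + 30 * np.exp(np.cos(theta - prefs) - 1)

    grid = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    table = np.stack([rates(t) for t in grid])       # (360, 4) rate lookup

    err_lin, err_ml = [], []
    for _ in range(500):
        theta = rng.uniform(0, 2 * np.pi)
        n = rng.poisson(rates(theta))                # observed spike counts
        est_lin = np.angle(np.sum(n * np.exp(1j * prefs)))   # linear (PV)
        est_ml = grid[np.argmax((n * np.log(table) - table).sum(axis=1))]
        for est, errs in [(est_lin, err_lin), (est_ml, err_ml)]:
            errs.append(abs(np.angle(np.exp(1j * (est - theta)))))

    print("population-vector mean error (deg):", np.rad2deg(np.mean(err_lin)))
    print("maximum-likelihood mean error (deg):", np.rad2deg(np.mean(err_ml)))
    ```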

  15. Visual coding with a population of direction-selective neurons

    PubMed Central

    Farrow, Karl; Müller, Jan; Roska, Botond; Azeredo da Silveira, Rava; Hierlemann, Andreas

    2015-01-01

    The brain decodes the visual scene from the action potentials of ∼20 retinal ganglion cell types. Among the retinal ganglion cells, direction-selective ganglion cells (DSGCs) encode motion direction. Several studies have focused on the encoding or decoding of motion direction by recording multiunit activity, mainly in the visual cortex. In this study, we simultaneously recorded from all four types of ON-OFF DSGCs of the rabbit retina using a microelectronics-based high-density microelectrode array (HDMEA) and decoded their concerted activity using probabilistic and linear decoders. Furthermore, we investigated how the modification of stimulus parameters (velocity, size, angle of moving object) and the use of different tuning curve fits influenced decoding precision. Finally, we simulated ON-OFF DSGC activity, based on real data, in order to understand how tuning curve widths and the angular distribution of the cells' preferred directions influence decoding performance. We found that probabilistic decoding strategies outperformed, on average, linear methods and that decoding precision was robust to changes in stimulus parameters such as velocity. The removal of noise correlations among cells, by random shuffling trials, caused a drop in decoding precision. Moreover, we found that tuning curves are broad in order to minimize large errors at the expense of a higher average error, and that the retinal direction-selective system would not substantially benefit, on average, from having more than four types of ON-OFF DSGCs or from a perfect alignment of the cells' preferred directions. PMID:26289471

  16. Enabling MPEG-2 video playback in embedded systems through improved data cache efficiency

    NASA Astrophysics Data System (ADS)

    Soderquist, Peter; Leeser, Miriam E.

    1999-01-01

    Digital video decoding, enabled by the MPEG-2 Video standard, is an important future application for embedded systems, particularly PDAs and other information appliances. Many such systems require portability and wireless communication capabilities, and thus face severe limitations in size and power consumption. This places a premium on integration and efficiency, and favors software solutions for video functionality over specialized hardware. The processors in most embedded systems currently lack the computational power needed to perform video decoding, but a related and equally important problem is the required data bandwidth, and the need to cost-effectively ensure adequate data supply. MPEG data sets are very large, and generate significant amounts of excess memory traffic for standard data caches, up to 100 times the amount required for decoding. Meanwhile, cost and power limitations restrict cache sizes in embedded systems. Some systems, including many media processors, eliminate caches in favor of memories under direct, painstaking software control in the manner of digital signal processors. Yet MPEG data has locality which caches can exploit if properly optimized, providing fast, flexible, and automatic data supply. We propose a set of enhancements which target the specific needs of the heterogeneous data types within the MPEG decoder working set. These optimizations significantly improve the efficiency of small caches, reducing cache-memory traffic by almost 70 percent, and can make an enhanced 4 KB cache perform better than a standard 1 MB cache. This performance improvement can enable high-resolution, full frame rate video playback in cheaper, smaller systems than would otherwise be possible.

  17. A simple approach to ignoring irrelevant variables by population decoding based on multisensory neurons

    PubMed Central

    Kim, HyungGoo R.; Pitkow, Xaq; Angelaki, Dora E.

    2016-01-01

    Sensory input reflects events that occur in the environment, but multiple events may be confounded in sensory signals. For example, under many natural viewing conditions, retinal image motion reflects some combination of self-motion and movement of objects in the world. To estimate one stimulus event and ignore others, the brain can perform marginalization operations, but the neural bases of these operations are poorly understood. Using computational modeling, we examine how multisensory signals may be processed to estimate the direction of self-motion (i.e., heading) and to marginalize out effects of object motion. Multisensory neurons represent heading based on both visual and vestibular inputs and come in two basic types: “congruent” and “opposite” cells. Congruent cells have matched heading tuning for visual and vestibular cues and have been linked to perceptual benefits of cue integration during heading discrimination. Opposite cells have mismatched visual and vestibular heading preferences and are ill-suited for cue integration. We show that decoding a mixed population of congruent and opposite cells substantially reduces errors in heading estimation caused by object motion. In addition, we present a general formulation of an optimal linear decoding scheme that approximates marginalization and can be implemented biologically by simple reinforcement learning mechanisms. We also show that neural response correlations induced by task-irrelevant variables may greatly exceed intrinsic noise correlations. Overall, our findings suggest a general computational strategy by which neurons with mismatched tuning for two different sensory cues may be decoded to perform marginalization operations that dissociate possible causes of sensory inputs. PMID:27334948

  18. All-in-one visual and computer decoding of multiple secrets: translated-flip VC with polynomial-style sharing

    NASA Astrophysics Data System (ADS)

    Wu, Chia-Hua; Lee, Suiang-Shyan; Lin, Ja-Chen

    2017-06-01

    This all-in-one hiding method creates two transparencies that offer several decoding options: visual decoding with or without a translated flip, and computer decoding. In visual decoding, two less-important (or fake) binary secret images S1 and S2 can be revealed. S1 is viewed by directly stacking the two transparencies. S2 is viewed by flipping one transparency and translating the other to a specified coordinate before stacking. Finally, the important/true secret files can be decrypted by a computer using information extracted from the transparencies. The encoding process that hides this information combines translated-flip visual cryptography, block types, polynomial-style sharing, and a linear congruential generator. If a thief obtained both transparencies, which are stored in distinct places, he would still need to find the key values used in computer decoding to get beyond viewing S1 and/or S2 by stacking. More likely, the thief would simply try other kinds of stacking and eventually give up looking for further secrets, because computer decoding is entirely different from stacking-based decoding. Unlike traditional image hiding, which uses images as host media, our method hides fine gray-level images in binary transparencies; thus, our host media are the transparencies themselves. Comparisons and analysis are provided.

  19. Multiscale decoding for reliable brain-machine interface performance over time.

    PubMed

    Han-Lin Hsieh; Wong, Yan T; Pesaran, Bijan; Shanechi, Maryam M

    2017-07-01

    Recordings from invasive implants can degrade over time, resulting in a loss of spiking activity for some electrodes. For brain-machine interfaces (BMI), such signal degradation lowers control performance. Achieving reliable performance over time is critical for BMI clinical viability. One approach to improving BMI longevity is to simultaneously use spikes and other recording modalities, such as local field potentials (LFP), which are more robust to signal degradation over time. We have developed a multiscale decoder that can simultaneously model the different statistical profiles of multiscale spike/LFP activity (discrete spikes vs. continuous LFP). This decoder can also run at multiple time scales (millisecond for spikes vs. tens of milliseconds for LFP). Here, we validate the multiscale decoder for estimating the movement of 7 major upper-arm joint angles in a non-human primate (NHP) during a 3D reach-to-grasp task. The multiscale decoder uses motor cortical spike/LFP recordings as its input. We show that the multiscale decoder can improve decoding accuracy by adding information from LFP to spikes, while running at the fast millisecond time scale of the spiking activity. Moreover, this improvement is achieved using relatively few LFP channels, demonstrating the robustness of the approach. These results suggest that multiscale decoders have the potential to improve the reliability and longevity of BMIs.
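
    The two-time-scale bookkeeping can be sketched with a toy linear-Gaussian filter. This is only a stand-in: the record's decoder models spikes as point processes, whereas here both modalities are Gaussian and every matrix and rate is invented. "Spike" channels update the state on every millisecond step; "LFP" features are fused in only on every 50th step:

      import numpy as np
      rng = np.random.default_rng(2)

      A, Q = 0.999, 1e-4                    # scalar joint-angle random walk
      Hs = rng.normal(size=20)              # spike channels, every step (1 ms)
      Hl = rng.normal(size=4)               # LFP features, every 50 steps
      Rs, Rl = 1.0, 0.05

      x, xh, P, err = 0.0, 0.0, 1.0, []
      for t in range(5000):
          x = A * x + np.sqrt(Q) * rng.standard_normal()
          xh, P = A * xh, A * P * A + Q                     # predict at 1 ms
          H = Hs if t % 50 else np.concatenate([Hs, Hl])    # LFP tick: fuse
          R = np.repeat(Rs, 20) if t % 50 else np.concatenate(
              [np.repeat(Rs, 20), np.repeat(Rl, 4)])
          y = H * x + np.sqrt(R) * rng.standard_normal(H.size)
          S = P * np.outer(H, H) + np.diag(R)               # innovation cov
          K = P * H @ np.linalg.inv(S)                      # Kalman gain
          xh = xh + K @ (y - H * xh)
          P = (1.0 - K @ H) * P
          err.append((x - xh) ** 2)
      print("RMSE:", np.sqrt(np.mean(err)))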

  20. Decoding the Semantic Content of Natural Movies from Human Brain Activity

    PubMed Central

    Huth, Alexander G.; Lee, Tyler; Nishimoto, Shinji; Bilenko, Natalia Y.; Vu, An T.; Gallant, Jack L.

    2016-01-01

    One crucial test for any quantitative model of the brain is to show that the model can be used to accurately decode information from evoked brain activity. Several recent neuroimaging studies have decoded the structure or semantic content of static visual images from human brain activity. Here we present a decoding algorithm that makes it possible to decode detailed information about the object and action categories present in natural movies from human brain activity signals measured by functional MRI. Decoding is accomplished using a hierarchical logistic regression (HLR) model that is based on labels that were manually assigned from the WordNet semantic taxonomy. This model makes it possible to simultaneously decode information about both specific and general categories, while respecting the relationships between them. Our results show that we can decode the presence of many object and action categories from averaged blood-oxygen level-dependent (BOLD) responses with a high degree of accuracy (area under the ROC curve > 0.9). Furthermore, we used this framework to test whether semantic relationships defined in the WordNet taxonomy are represented the same way in the human brain. This analysis showed that hierarchical relationships between general categories and atypical examples, such as organism and plant, did not seem to be reflected in representations measured by BOLD fMRI. PMID:27781035
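
    One way to make the hierarchy constraint concrete (a hedged toy, not the paper's hierarchical logistic regression: the features, labels, and max-propagation rule below are invented for illustration) is to fit an independent logistic decoder per WordNet category and then force each parent's probability to be at least that of its children:

      import numpy as np
      rng = np.random.default_rng(3)

      hierarchy = {"organism": ["plant", "animal"]}    # toy WordNet slice
      cats = ["organism", "plant", "animal"]

      def fit_logistic(X, y, iters=500, lr=0.1):
          w = np.zeros(X.shape[1])
          for _ in range(iters):
              p = 1.0 / (1.0 + np.exp(-X @ w))
              w -= lr * X.T @ (p - y) / len(y)         # gradient descent
          return w

      X = rng.standard_normal((200, 10))               # fake BOLD features
      labels = {c: (rng.random(200) < 0.3).astype(float) for c in cats}
      labels["organism"] = np.maximum(labels["plant"], labels["animal"])
      W = {c: fit_logistic(X, labels[c]) for c in cats}

      def decode(x):
          p = {c: 1.0 / (1.0 + np.exp(-x @ W[c])) for c in cats}
          for parent, kids in hierarchy.items():       # respect hierarchy
              p[parent] = max(p[parent], *(p[k] for k in kids))
          return p

      print(decode(X[0]))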

  1. On the decoding process in ternary error-correcting output codes.

    PubMed

    Escalera, Sergio; Pujol, Oriol; Radeva, Petia

    2010-01-01

    A common way to model multiclass classification problems is to design a set of binary classifiers and combine them. Error-Correcting Output Codes (ECOC) represent a successful framework for dealing with this type of problem. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows a given classifier to ignore some classes. However, there are no proper studies that analyze the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require a redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies on a set of UCI Machine Learning Repository data sets and on a real traffic sign categorization problem. The experimental results show that, with the new decoding strategies, the performance of the ECOC design is significantly improved.
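
    The bias issue can be seen in a small sketch (the 3-class code matrix and the attenuated-Euclidean-style measure below are illustrative, not the paper's exact proposals): zero entries are masked out of the distance, and each row is normalized by its number of active classifiers, so classes coded with many zeros are not unfairly favored:

      import numpy as np

      M = np.array([[+1, +1,  0],     # class 0
                    [-1,  0, +1],     # class 1
                    [ 0, -1, -1]])    # class 2; columns = binary classifiers

      def decode(outputs, M):
          """outputs: vector of classifier margins in [-1, +1]."""
          mask = (M != 0)
          # Zeros contribute nothing to the distance, and each row is
          # normalized by its number of non-zero (active) positions.
          d = ((outputs - M) ** 2 * mask).sum(1) / mask.sum(1)
          return int(np.argmin(d))

      print(decode(np.array([0.9, 0.8, -0.7]), M))   # -> class 0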

  2. Spatial Lattice Modulation for MIMO Systems

    NASA Astrophysics Data System (ADS)

    Choi, Jiwook; Nam, Yunseo; Lee, Namyoon

    2018-06-01

    This paper proposes spatial lattice modulation (SLM), a spatial modulation method for multiple-input multiple-output (MIMO) systems. The key idea of SLM is to jointly exploit spatial, in-phase, and quadrature dimensions to modulate information bits into a multi-dimensional signal set that consists of lattice points. One major finding is that SLM achieves a higher spectral efficiency than the existing spatial modulation and spatial multiplexing methods for the MIMO channel under the constraint of M-ary pulse-amplitude-modulation (PAM) input signaling per dimension. In particular, it is shown that when the SLM signal set is constructed by using dense lattices, a significant signal-to-noise-ratio (SNR) gain, i.e., a nominal coding gain, is attainable compared to the existing methods. In addition, closed-form expressions for both the average mutual information and average symbol-vector-error-probability (ASVEP) of generic SLM are derived under Rayleigh-fading environments. To reduce detection complexity, a low-complexity detection method for SLM, referred to as lattice sphere decoding, is developed by exploiting lattice theory. Simulation results verify the accuracy of the conducted analysis and demonstrate that the proposed SLM techniques achieve higher average mutual information and lower ASVEP than existing methods.
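
    Detection reduces to a closest-lattice-point search. The sketch below uses brute-force ML detection over a tiny 4-PAM lattice in two spatial dimensions as a stand-in; an actual sphere decoder finds the same argmin while pruning candidates outside a shrinking radius (the channel, noise level, and sizes here are arbitrary):

      import numpy as np
      from itertools import product
      rng = np.random.default_rng(4)

      pam = np.array([-3, -1, 1, 3])                    # 4-PAM per dimension
      lattice = np.array(list(product(pam, repeat=2)))  # all 16 signal points

      H = rng.standard_normal((2, 2))                   # Rayleigh-like channel
      x = lattice[rng.integers(len(lattice))]
      y = H @ x + 0.3 * rng.standard_normal(2)

      # Exhaustive ML: minimize ||y - Hx|| over the whole signal set.
      ml = lattice[np.argmin(np.linalg.norm(y - lattice @ H.T, axis=1))]
      print("sent", x, "detected", ml)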

  3. Critical attributes of transdermal drug delivery system (TDDS)--a generic product development review.

    PubMed

    Ruby, P K; Pathak, Shriram M; Aggarwal, Deepika

    2014-11-01

    Bioequivalence testing of transdermal drug delivery systems (TDDS) has always been a subject of high concern for generic companies because of formulation complexity and because TDDS are sensitive to even minor manufacturing differences; they must therefore be clearly qualified in terms of quality, safety, and efficacy. In recent times, bioequivalence testing of transdermal patches has gained global attention, and many regulatory authorities worldwide have issued recommendations setting a specific framework for demonstrating equivalence between two products. These current regulatory procedures demand a complete characterization of the generic formulation in terms of its physicochemical sameness, pharmacokinetic disposition, residual content, and/or skin irritation/sensitization testing with respect to the reference formulation. This paper intends to highlight critical in vitro tests for assessing the therapeutic equivalence of products and also outlines their valuable applications in generic product success. Understanding these critical in vitro parameters can help decode complex bioequivalence outcomes, allowing generic companies to optimize the formulation design in less time. It is difficult to define a common platform that covers all possible transdermal products; hence, a few case studies based on this approach are presented in this review.

  4. Microfluidic Pneumatic Logic Circuits and Digital Pneumatic Microprocessors for Integrated Microfluidic Systems

    PubMed Central

    Rhee, Minsoung

    2010-01-01

    We have developed pneumatic logic circuits and microprocessors built with microfluidic channels and valves in polydimethylsiloxane (PDMS). The pneumatic logic circuits perform various combinational and sequential logic calculations with binary pneumatic signals (atmosphere and vacuum), producing cascadable outputs based on Boolean operations. A complex microprocessor is constructed from combinations of various logic circuits and receives pneumatically encoded serial commands at a single input line. The device then decodes the temporal command sequence by spatial parallelization, computes the necessary logic calculations between parallelized command bits, stores command information for signal transportation and maintenance, and finally executes the command for the target devices. Thus, such pneumatic microprocessors can function as a universal on-chip control platform to perform complex parallel operations for large-scale integrated microfluidic devices. To demonstrate the working principles, we have built 2-bit, 3-bit, 4-bit, and 8-bit microprocessors to control various target devices for applications such as four-color dye mixing and multiplexed channel fluidic control. By significantly reducing the need for external controllers, the digital pneumatic microprocessor can be used as a universal on-chip platform to autonomously manipulate microfluids in a high-throughput manner. PMID:19823730

  5. Hierarchical scheme for detecting the rotating MIMO transmission of the in-door RGB-LED visible light wireless communications using mobile-phone camera

    NASA Astrophysics Data System (ADS)

    Chen, Shih-Hao; Chow, Chi-Wai

    2015-01-01

    Multiple-input multiple-output (MIMO) schemes can extend the transmission capacity of light-emitting-diode (LED) based visible light communication (VLC) systems. A MIMO VLC system that uses a mobile-phone camera as the optical receiver (Rx) to receive MIMO signals from an n×n Red-Green-Blue (RGB) LED array is desirable. The key step in decoding this signal is detecting the signal direction. If the LED transmitter (Tx) is rotated, the Rx may not detect the rotation, and transmission errors can occur. In this work, we propose and demonstrate a novel hierarchical transmission scheme that reduces the computational complexity of rotation detection in LED-array VLC systems. We use the n×n RGB LED array as the MIMO Tx and propose a novel two-dimensional Hadamard coding scheme. By using different LED color layers to indicate rotation, a low-complexity rotation detection method can improve the quality of the received signal. The detection correction rate is above 95% over indoor usage distances. Experimental results confirm the feasibility of the proposed scheme.
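
    The rotation search can be illustrated with a plain Sylvester-type Hadamard pattern and correlation against the four rotated references. This is a simplification: the record's scheme is two-dimensional across separate RGB color layers precisely so that this search becomes cheaper.

      import numpy as np

      def hadamard(n):                  # Sylvester construction, n a power of 2
          H = np.array([[1]])
          while H.shape[0] < n:
              H = np.block([[H, H], [H, -H]])
          return H

      ref = hadamard(8)                 # on/off pattern for an 8x8 LED array
      rng = np.random.default_rng(5)
      rx = np.rot90(ref, k=3) + 0.2 * rng.standard_normal((8, 8))

      # Correlate the received frame against the four rotated references.
      scores = [np.sum(rx * np.rot90(ref, k)) for k in range(4)]
      print("estimated rotation:", 90 * int(np.argmax(scores)), "degrees CCW")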

  6. A model for sequential decoding overflow due to a noisy carrier reference. [communication performance prediction

    NASA Technical Reports Server (NTRS)

    Layland, J. W.

    1974-01-01

    An approximate analysis of the effect of a noisy carrier reference on the performance of sequential decoding is presented. The analysis uses previously developed techniques for evaluating noisy reference performance for medium-rate uncoded communications, adapted to sequential decoding for data rates of 8 to 2048 bits/s. In estimating the 10^-4 deletion probability thresholds for Helios, the model agrees with experimental data to within the experimental tolerances. The computational problem involved in sequential decoding, carrier loop effects, the main characteristics of the medium-rate model, modeled decoding performance, and perspectives on future work are discussed.

  7. Local-learning-based neuron selection for grasping gesture prediction in motor brain machine interfaces

    NASA Astrophysics Data System (ADS)

    Xu, Kai; Wang, Yiwen; Wang, Yueming; Wang, Fang; Hao, Yaoyao; Zhang, Shaomin; Zhang, Qiaosheng; Chen, Weidong; Zheng, Xiaoxiang

    2013-04-01

    Objective. High-dimensional neural recordings bring computational challenges to movement decoding in motor brain machine interfaces (mBMI), especially for portable applications. However, not all recorded neural activity relates to the execution of a given movement task. This paper proposes a local-learning-based method to perform neuron selection for gesture prediction in a reaching and grasping task. Approach. Nonlinear neural activities are decomposed into a set of linear ones in a weighted feature space. A margin is defined to measure the distance between inter-class and intra-class neural patterns. The weights, reflecting the importance of neurons, are obtained by minimizing a margin-based exponential error function. To find the most dominant neurons in the task, 1-norm regularization is introduced to the objective function to yield sparse weights, where near-zero weights indicate irrelevant neurons. Main results. The signals of only 10 neurons out of 70 selected by the proposed method achieve over 95% of the full recording's decoding accuracy for gesture prediction, regardless of the decoding method used (support vector machine or K-nearest neighbor). The temporal activities of the selected neurons show visually distinguishable patterns associated with various hand states. Compared with other algorithms, the proposed method better eliminates irrelevant neurons with near-zero weights and provides the important neuron subset with the statistically best decoding performance. The weights of important neurons usually converge within 10-20 iterations. In addition, we study the temporal and spatial variation of neuron importance over a period of one and a half months in the same task. A high decoding performance can be maintained by updating the neuron subset. Significance. The proposed algorithm effectively ascertains neuronal importance without assuming any coding model and provides high performance with different decoding models. It is also more robust at identifying the important neurons when noisy signals are present. The low demand for computational resources, reflected by the fast convergence, indicates the feasibility of applying the method in portable BMI systems. The ascertainment of the important neurons helps in visually inspecting neural patterns associated with the movement task. The elimination of irrelevant neurons greatly reduces the computational burden of mBMI systems and maintains performance with better robustness.
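
    As a hedged stand-in for the margin-based selector (the synthetic firing rates, penalty weight, and the substitution of plain L1-penalized logistic regression trained by proximal gradient steps are all assumptions), the following shows how a 1-norm penalty drives the weights of irrelevant neurons to exactly zero:

      import numpy as np
      rng = np.random.default_rng(6)

      n_trials, n_neurons, n_informative = 300, 70, 10
      X = rng.poisson(5.0, (n_trials, n_neurons)).astype(float)
      Xc = (X - X.mean(0)) / X.std(0)               # z-scored firing rates
      w_true = np.zeros(n_neurons)
      w_true[:n_informative] = rng.normal(size=n_informative)
      y = (Xc @ w_true + 0.5 * rng.standard_normal(n_trials) > 0).astype(float)

      w, lam, lr = np.zeros(n_neurons), 0.02, 0.1
      for _ in range(2000):
          p = 1.0 / (1.0 + np.exp(-Xc @ w))
          w -= lr * Xc.T @ (p - y) / n_trials               # logistic gradient
          w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # L1 shrink
      print("neurons kept:", np.flatnonzero(np.abs(w) > 1e-8))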

  8. State-space decoding of primary afferent neuron firing rates

    NASA Astrophysics Data System (ADS)

    Wagenaar, J. B.; Ventura, V.; Weber, D. J.

    2011-02-01

    Kinematic state feedback is important for neuroprostheses to generate stable and adaptive movements of an extremity. State information, represented in the firing rates of populations of primary afferent (PA) neurons, can be recorded at the level of the dorsal root ganglia (DRG). Previous work in cats showed the feasibility of using DRG recordings to predict the kinematic state of the hind limb using reverse regression. Although accurate decoding results were attained, reverse regression does not make efficient use of the information embedded in the firing rates of the neural population. In this paper, we present decoding results based on state-space modeling, and show that it is a more principled and more efficient method for decoding the firing rates in an ensemble of PA neurons. In particular, we show that we can extract confounded information from neurons that respond to multiple kinematic parameters, and that including velocity components in the firing rate models significantly increases the accuracy of the decoded trajectory. We show that, on average, state-space decoding is twice as efficient as reverse regression for decoding joint and endpoint kinematics.
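
    A minimal version of such a state-space decoder (the tuning matrix, noise levels, and simulated kinematics below are invented; the paper fits its models to recorded PA firing rates) is a Kalman filter whose state stacks position and velocity, so neurons tuned to either quantity, or both, contribute through a single observation model:

      import numpy as np
      rng = np.random.default_rng(7)

      dt, T, n = 0.01, 2000, 40
      A = np.array([[1, dt], [0, 1]])               # kinematic prior
      Q = np.diag([1e-6, 1e-4])
      C = rng.normal(size=(n, 2))                   # mixed pos/vel tuning
      R = np.eye(n)

      x = np.zeros(2); X, Y = [], []
      for _ in range(T):                            # simulate limb + rates
          x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
          X.append(x); Y.append(C @ x + rng.standard_normal(n))
      X, Y = np.array(X), np.array(Y)

      Chat = np.linalg.lstsq(X, Y, rcond=None)[0].T  # fit tuning, then filter
      xh, P, err = np.zeros(2), np.eye(2), []
      for t in range(T):
          xh, P = A @ xh, A @ P @ A.T + Q
          K = P @ Chat.T @ np.linalg.inv(Chat @ P @ Chat.T + R)
          xh = xh + K @ (Y[t] - Chat @ xh)
          P = (np.eye(2) - K @ Chat) @ P
          err.append(X[t] - xh)
      print("RMSE pos/vel:", np.sqrt((np.array(err) ** 2).mean(0)))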

  9. Utilizing sensory prediction errors for movement intention decoding: A new methodology

    PubMed Central

    Nakamura, Keigo; Ando, Hideyuki

    2018-01-01

    We propose a new methodology for decoding movement intentions of humans. This methodology is motivated by the well-documented ability of the brain to predict sensory outcomes of self-generated and imagined actions using so-called forward models. We propose to subliminally stimulate the sensory modality corresponding to a user’s intended movement, and decode a user’s movement intention from his electroencephalography (EEG), by decoding for prediction errors—whether the sensory prediction corresponding to a user’s intended movement matches the subliminal sensory stimulation we induce. We tested our proposal in a binary wheelchair turning task in which users thought of turning their wheelchair either left or right. We stimulated their vestibular system subliminally, toward either the left or the right direction, using a galvanic vestibular stimulator and show that the decoding for prediction errors from the EEG can radically improve movement intention decoding performance. We observed an 87.2% median single-trial decoding accuracy across tested participants, with zero user training, within 96 ms of the stimulation, and with no additional cognitive load on the users because the stimulation was subliminal. PMID:29750195

  10. Decoding the direction of imagined visual motion using 7 T ultra-high field fMRI

    PubMed Central

    Emmerling, Thomas C.; Zimmermann, Jan; Sorger, Bettina; Frost, Martin A.; Goebel, Rainer

    2016-01-01

    There is a long-standing debate about the neurocognitive implementation of mental imagery. One form of mental imagery is the imagery of visual motion, which is of interest due to its naturalistic and dynamic character. So far, however, only the mere occurrence, rather than the specific content, of motion imagery has been shown to be detectable. In the current study, the application of multi-voxel pattern analysis to high-resolution functional data of 12 subjects, acquired with ultra-high field 7 T functional magnetic resonance imaging, allowed us to show that imagery of visual motion can indeed activate the earliest levels of the visual hierarchy, although the extent varies highly between subjects. Our approach enabled classification not only of complex imagery, but also of its actual contents: the direction of imagined motion, out of four options, was successfully identified in two thirds of the subjects, with accuracies of up to 91.3% in individual subjects. A searchlight analysis confirmed the local origin of decodable information in striate and extra-striate cortex. These high-accuracy findings not only shed new light on a central question in vision science about the constituents of mental imagery, but also show for the first time that the specific sub-categorical content of visual motion imagery is reliably decodable from brain imaging data at the single-subject level. PMID:26481673
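
    In spirit, the classification step looks like the following toy multi-voxel pattern analysis, where synthetic voxel patterns and a nearest-centroid rule stand in for the study's actual data and classifier:

      import numpy as np
      rng = np.random.default_rng(8)

      n_vox, trials_per_dir = 200, 40
      templates = rng.standard_normal((4, n_vox))    # per-direction pattern
      X = np.vstack([t + 2.0 * rng.standard_normal((trials_per_dir, n_vox))
                     for t in templates])
      y = np.repeat(np.arange(4), trials_per_dir)

      train = np.tile(np.arange(trials_per_dir) < 30, 4)  # hold out 10/dir
      cent = np.array([X[train & (y == k)].mean(0) for k in range(4)])
      pred = np.argmin(((X[~train, None] - cent) ** 2).sum(-1), axis=1)
      print("test accuracy:", (pred == y[~train]).mean())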

  11. Complexity-Based Measures Inform Effects of Tai Chi Training on Standing Postural Control: Cross-Sectional and Randomized Trial Studies

    PubMed Central

    Wayne, Peter M.; Gow, Brian J.; Costa, Madalena D.; Peng, C.-K.; Lipsitz, Lewis A.; Hausdorff, Jeffrey M.; Davis, Roger B.; Walsh, Jacquelyn N.; Lough, Matthew; Novak, Vera; Yeh, Gloria Y.; Ahn, Andrew C.; Macklin, Eric A.; Manor, Brad

    2014-01-01

    Background Diminished control of standing balance, traditionally indicated by greater postural sway magnitude and speed, is associated with falls in older adults. Tai Chi (TC) is a multisystem intervention that reduces fall risk, yet its impact on sway measures varies considerably. We hypothesized that TC improves the integrated function of multiple control systems influencing balance, quantifiable by the multiscale “complexity” of postural sway fluctuations. Objectives To evaluate both traditional and complexity-based measures of sway to characterize the short- and potential long-term effects of TC training on postural control, and the relationships between sway measures and physical function, in healthy older adults. Methods A cross-sectional comparison of standing postural sway in healthy TC-naïve and TC-expert (24.5±12 yrs experience) adults. TC-naïve participants then completed a 6-month, two-arm, wait-list randomized clinical trial of TC training. Postural sway was assessed before and after the training during standing on a force-plate with eyes open (EO) and eyes closed (EC). Anterior-posterior (AP) and medio-lateral (ML) sway speed, magnitude, and complexity (quantified by multiscale entropy) were calculated. Single-legged standing time and Timed-Up-and-Go tests characterized physical function. Results At baseline, compared to TC-naïve adults (n = 60, age 64.5±7.5 yrs), TC-experts (n = 27, age 62.8±7.5 yrs) exhibited greater complexity of sway in the AP EC (P = 0.023), ML EO (P<0.001), and ML EC (P<0.001) conditions. Traditional measures of sway speed and magnitude were not significantly lower among TC-experts. Intention-to-treat analyses indicated no significant effects of short-term TC training; however, increases in AP EC and ML EC complexity amongst those randomized to TC were positively correlated with practice hours (P = 0.044, P = 0.018). Long- and short-term TC training were positively associated with physical function. Conclusion Multiscale entropy offers a complementary approach to traditional COP measures for characterizing sway during quiet standing, and may be more sensitive to the effects of TC in healthy adults. Trial Registration ClinicalTrials.gov NCT01340365 PMID:25494333
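
    Complexity here means multiscale entropy: coarse-grain the sway series at each scale, then take the sample entropy of the result. The sketch below uses common default parameters (m = 2, r = 0.15 SD) rather than values taken from the paper:

      import numpy as np

      def sample_entropy(x, m=2, r=None):
          """SampEn: -log of the conditional probability that sequences
          matching for m points (Chebyshev distance <= r) also match at m+1."""
          r = 0.15 * x.std() if r is None else r
          def matches(mm):
              emb = np.lib.stride_tricks.sliding_window_view(x, mm)
              d = np.abs(emb[:, None] - emb[None]).max(-1)
              return (d <= r).sum() - len(emb)       # exclude self-matches
          return -np.log(matches(m + 1) / matches(m))

      def multiscale_entropy(x, scales=range(1, 6)):
          out = []
          for s in scales:
              cg = x[: len(x) // s * s].reshape(-1, s).mean(1)  # coarse-grain
              out.append(sample_entropy(cg))
          return out

      rng = np.random.default_rng(9)
      print(multiscale_entropy(rng.standard_normal(1000)))  # white noise decays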

  12. Exploring Differential Effects across Two Decoding Treatments on Item-Level Transfer in Children with Significant Word Reading Difficulties: A New Approach for Testing Intervention Elements

    ERIC Educational Resources Information Center

    Steacy, Laura M.; Elleman, Amy M.; Lovett, Maureen W.; Compton, Donald L.

    2016-01-01

    In English, gains in decoding skill do not map directly onto increases in word reading. However, beyond the Self-Teaching Hypothesis, little is known about the transfer of decoding skills to word reading. In this study, we offer a new approach to testing specific decoding elements on transfer to word reading. To illustrate, we modeled word-reading…

  13. Comparison of memory thresholds for planar qudit geometries

    NASA Astrophysics Data System (ADS)

    Marks, Jacob; Jochym-O'Connor, Tomas; Gheorghiu, Vlad

    2017-11-01

    We introduce and analyze a new type of decoding algorithm called general color clustering, based on renormalization group methods, to be used in qudit color codes. The performance of this decoder is analyzed under a generalized bit-flip error model and is used to obtain the first memory threshold estimates for qudit 6-6-6 color codes. The proposed decoder is compared with similar decoding schemes for qudit surface codes as well as the current leading qubit decoders for both sets of codes. We find that, as with surface codes, clustering performs sub-optimally for qubit color codes, giving a threshold of 5.6% compared to the 8.0% obtained through surface projection decoding methods. However, the threshold rate increases by up to 112% for large qudit dimensions, plateauing around 11.9%. All the analysis is performed using QTop, a new open-source software package for simulating and visualizing topological quantum error-correcting codes.

  14. Non-tables look-up search algorithm for efficient H.264/AVC context-based adaptive variable length coding decoding

    NASA Astrophysics Data System (ADS)

    Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong

    2014-09-01

    In general, context-based adaptive variable length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to unstructured variable length coding tables (VLCTs), consuming a significant number of memory accesses. Heavy memory access causes high power consumption and long delays, which are serious problems for portable multimedia devices. We propose a method for high-efficiency CAVLC decoding that uses a program instead of the VLCTs. The decoded codeword can be obtained without any table look-up or memory access. The experimental results show that the proposed algorithm achieves 100% memory-access saving and 40% decoding-time saving without degrading video quality. Additionally, the proposed algorithm performs better than conventional CAVLC decoding approaches such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.
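
    H.264's Exp-Golomb codes show the underlying principle of computing a codeword's value instead of storing tables, though CAVLC's context-dependent tables (the paper's actual target) take more work to reduce to arithmetic:

      def decode_ue(bits, pos=0):
          """Decode one unsigned Exp-Golomb codeword from a '0'/'1' string.
          No table: count leading zeros z, read the 2*z+1 bit codeword as
          an integer, and subtract one."""
          zeros = 0
          while bits[pos + zeros] == "0":
              zeros += 1
          end = pos + 2 * zeros + 1
          return int(bits[pos:end], 2) - 1, end   # (value, next bit position)

      stream = "010" + "00111" + "1"              # encodes 1, 6, 0
      pos, values = 0, []
      while pos < len(stream):
          v, pos = decode_ue(stream, pos)
          values.append(v)
      print(values)                               # [1, 6, 0]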

  15. Error-correction coding for digital communications

    NASA Astrophysics Data System (ADS)

    Clark, G. C., Jr.; Cain, J. B.

    This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.

  16. Optimization and validation of FePro cell labeling method.

    PubMed

    Janic, Branislava; Rad, Ali M; Jordan, Elaine K; Iskander, A S M; Ali, Md M; Varma, N Ravi S; Frank, Joseph A; Arbab, Ali S

    2009-06-11

    Current methods to magnetically label cells using ferumoxides (Fe)-protamine (Pro) sulfate (FePro) are based on generating FePro complexes in a serum-free medium that are then incubated overnight with cells for efficient labeling. However, this labeling technique requires a long (>12-16 hours) incubation time and uses a relatively high dose of Pro (5-6 µg/ml) that makes large extracellular FePro complexes. These complexes can be difficult to remove with simple cell washes and may create low signal intensity on T2*-weighted MRI, which is not desirable. The purpose of this study was to revise the current labeling method by using a low dose of Pro and adding Fe and Pro directly to the cells before generating any FePro complexes. Human glioma (U251) and human monocytic leukemia (THP-1) cell lines were used as model systems for attached and suspension cell types, respectively, and dose-dependent (Fe 25 to 100 µg/ml; Pro 0.75 to 3 µg/ml) and time-dependent (2 to 48 h) labeling experiments were performed. Labeling efficiency and cell viability of these cells were assessed. Prussian blue staining revealed that more than 95% of cells were labeled. Intracellular iron concentration in U251 cells reached approximately 30-35 pg iron/cell at 24 h when labeled with 100 µg/ml of Fe and 3 µg/ml of Pro; however, comparable labeling was observed after 4 h across the described FePro concentrations. Similarly, THP-1 cells achieved approximately 10 pg iron/cell at 48 h when labeled with 100 µg/ml of Fe and 3 µg/ml of Pro; again, comparable labeling was observed after 4 h. FePro labeling did not significantly affect cell viability, and almost no extracellular FePro complexes were observed after simple cell washes. To validate the revised technique and determine its effectiveness, human T-cells, human hematopoietic stem cells (hHSC), human bone marrow stromal cells (hMSC), and mouse neuronal stem cells (mNSC C17.2) were labeled. Labeling for 4 hours using 100 µg/ml of Fe and 3 µg/ml of Pro resulted in very efficient labeling of these cells without impairing their viability or functional capability. The new technique, with a short incubation time using 100 µg/ml of Fe and 3 µg/ml of Pro, is effective in labeling cells for cellular MRI.

  17. Energy Analysis of Decoders for Rakeness-Based Compressed Sensing of ECG Signals.

    PubMed

    Pareschi, Fabio; Mangia, Mauro; Bortolotti, Daniele; Bartolini, Andrea; Benini, Luca; Rovatti, Riccardo; Setti, Gianluca

    2017-12-01

    In recent years, compressed sensing (CS) has proved to be effective in lowering the power consumption of sensing nodes in biomedical signal processing devices. This is because CS reduces the amount of data that must be transmitted to ensure correct reconstruction of the acquired waveforms. Rakeness-based CS has been introduced to further reduce the amount of transmitted data by exploiting the uneven distribution of the sensed signal's energy. Yet, so far, no thorough analysis exists of the impact of its adoption on CS decoder performance. The latter point is of great importance, since body-area sensor network architectures may include intermediate gateway nodes that receive and reconstruct signals to provide local services before relaying data to a remote server. In this paper, we fill this gap by showing that rakeness-based design also improves reconstruction performance. We quantify these findings for ECG signals and for a variety of reconstruction algorithms running either on a low-power microcontroller or on a heterogeneous mobile computing platform.
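
    For context, plain CS acquisition and a greedy decoder look like the sketch below (the sizes, sparsity, and the orthogonal-matching-pursuit reconstruction are illustrative; rakeness-based CS additionally shapes the statistics of the sensing matrix to capture more of the signal's energy, which this sketch omits):

      import numpy as np
      rng = np.random.default_rng(10)

      n, m, k = 256, 64, 5                          # ambient dim, meas., sparsity
      x = np.zeros(n)
      x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
      Phi = rng.standard_normal((m, n)) / np.sqrt(m)
      y = Phi @ x                                   # compressed measurements

      # Orthogonal matching pursuit: greedily pick the column most
      # correlated with the residual, then re-fit on the chosen support.
      support, r = [], y.copy()
      for _ in range(k):
          support.append(int(np.argmax(np.abs(Phi.T @ r))))
          sub = Phi[:, support]
          coef = np.linalg.lstsq(sub, y, rcond=None)[0]
          r = y - sub @ coef
      xh = np.zeros(n); xh[support] = coef
      print("support recovered:", sorted(support) == sorted(np.flatnonzero(x)))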

  18. The role of short-term memory impairment in nonword repetition, real word repetition, and nonword decoding: A case study.

    PubMed

    Peter, Beate

    2018-01-01

    In a companion study, adults with dyslexia and adults with a probable history of childhood apraxia of speech showed evidence of difficulty processing sequential information during nonword repetition, multisyllabic real word repetition, and nonword decoding. Results suggested that some errors arose in visual encoding during nonword reading, at all levels of processing but especially short-term memory storage/retrieval during nonword repetition, and in motor planning and programming during complex real word repetition. To further investigate the role of short-term memory, a participant with short-term memory impairment (MI) was recruited. MI was confirmed by poor performance on a sentence repetition task and three nonword repetition tasks, all of which have a high short-term memory load, whereas typical performance was observed on tests of reading, spelling, and static verbal knowledge, all with low short-term memory loads. Experimental results show error-free performance during multisyllabic real word repetition but high counts of sequence errors, especially migrations and assimilations, during nonword repetition, supporting short-term memory as a locus of the sequential processing deficit during nonword repetition. Results are also consistent with the hypothesis that during complex real word repetition, short-term memory is bypassed as the word is recognized and retrieved from long-term memory prior to production.

  19. Real-time distributed video coding for 1K-pixel visual sensor networks

    NASA Astrophysics Data System (ADS)

    Hanca, Jan; Deligiannis, Nikos; Munteanu, Adrian

    2016-07-01

    Many applications in visual sensor networks (VSNs) demand the low-cost wireless transmission of video data. In this context, distributed video coding (DVC) has proven its potential to achieve state-of-the-art compression performance while maintaining low computational complexity of the encoder. Despite their proven capabilities, current DVC solutions overlook hardware constraints, and this renders them unsuitable for practical implementations. This paper introduces a DVC architecture that offers highly efficient wireless communication in real-world VSNs. The design takes into account the severe computational and memory constraints imposed by practical implementations on low-resolution visual sensors. We study performance-complexity trade-offs for feedback-channel removal, propose learning-based techniques for rate allocation, and investigate various simplifications of side information generation yielding real-time decoding. The proposed system is evaluated against H.264/AVC intra, Motion-JPEG, and our previously designed DVC prototype for low-resolution visual sensors. Extensive experimental results on various data show significant improvements in multiple configurations. The proposed encoder achieves real-time performance on a 1k-pixel visual sensor mote. Real-time decoding is performed on a Raspberry Pi single-board computer or a low-end notebook PC. To the best of our knowledge, the proposed codec is the first practical DVC deployment on low-resolution VSNs.

  20. Multiformat decoder for a DSP-based IP set-top box

    NASA Astrophysics Data System (ADS)

    Pescador, F.; Garrido, M. J.; Sanz, C.; Juárez, E.; Samper, D.; Antoniello, R.

    2007-05-01

    Internet Protocol Set-Top Boxes (IP STBs) based on single-processor architectures have recently been introduced in the market. In this paper, the implementation of an MPEG-4 SP/ASP video decoder for a multi-format IP STB based on a TMS320DM641 DSP is presented. An initial decoder for the PC platform was fully tested and ported to the DSP. Starting from this code, an optimization process achieved a 90% speedup, enabling real-time MPEG-4 SP/ASP decoding. The MPEG-4 decoder has been integrated in an IP STB and tested in a real environment using DVD movies and TV channels, with excellent results.
