An Efficient Rank Based Approach for Closest String and Closest Substring
2012-01-01
This paper aims to present a new genetic approach that uses rank distance for solving two known NP-hard problems, and to compare rank distance with other distance measures for strings. The two NP-hard problems we are trying to solve are closest string and closest substring. For each problem we build a genetic algorithm and we describe the genetic operations involved. Both genetic algorithms use a fitness function based on rank distance. We compare our algorithms with other genetic algorithms that use different distance measures, such as Hamming distance or Levenshtein distance, on real DNA sequences. Our experiments show that the genetic algorithms based on rank distance have the best results. PMID:22675483
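As an illustration of the kind of fitness evaluation the abstract describes, the following sketch (not the authors' implementation) pairs a minimal genetic algorithm for the closest-string problem with one common formulation of rank distance for strings, in which every character occurrence is indexed and the distance sums the position offsets of the indexed symbols; population size, mutation rate and the test strings are arbitrary assumptions. Swapping rank_distance for a Hamming or edit distance reproduces the kind of baseline the paper compares against.

    import random

    def rank_distance(s, t):
        # One common formulation (Dinu's rank distance on strings): index each
        # occurrence of a character by its position; sum position differences of
        # shared indexed symbols, and add the positions of unmatched ones.
        def annotate(u):
            seen, pos = {}, {}
            for i, c in enumerate(u, start=1):
                seen[c] = seen.get(c, 0) + 1
                pos[(c, seen[c])] = i
            return pos
        ps, pt = annotate(s), annotate(t)
        return sum(abs(ps.get(k, 0) - pt.get(k, 0)) for k in set(ps) | set(pt))

    def closest_string_ga(strings, alphabet="ACGT", pop_size=50,
                          generations=200, mutation_rate=0.05, seed=0):
        # Toy GA for closest string: fitness is the total rank distance from a
        # candidate to all input strings (lower is better).
        rng = random.Random(seed)
        length = len(strings[0])
        def fitness(cand):
            return sum(rank_distance(cand, s) for s in strings)
        pop = ["".join(rng.choice(alphabet) for _ in range(length))
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)
            survivors = pop[:pop_size // 2]
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = rng.sample(survivors, 2)
                cut = rng.randrange(1, length)          # one-point crossover
                child = list(a[:cut] + b[cut:])
                for i in range(length):                 # pointwise mutation
                    if rng.random() < mutation_rate:
                        child[i] = rng.choice(alphabet)
                children.append("".join(child))
            pop = survivors + children
        return min(pop, key=fitness)

    print(closest_string_ga(["ACGTACGT", "ACGGACGT", "ACGTTCGT"]))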
Compression of strings with approximate repeats.
Allison, L; Edgoose, T; Dix, T I
1998-01-01
We describe a model for strings of characters that is loosely based on the Lempel-Ziv model with the addition that a repeated substring can be an approximate match to the original substring; this is close to the situation of DNA, for example. Typically there are many explanations for a given string under the model, some optimal and many suboptimal. Rather than commit to one optimal explanation, we sum the probabilities over all explanations under the model because this gives the probability of the data under the model. The model has a small number of parameters and these can be estimated from the given string by an expectation-maximization (EM) algorithm. Each iteration of the EM algorithm takes O(n²) time and a few iterations are typically sufficient. O(n²) complexity is impractical for strings of more than a few tens of thousands of characters and a faster approximation algorithm is also given. The model is further extended to include approximate reverse complementary repeats when analyzing DNA strings. Tests include the recovery of parameter estimates from known sources and applications to real DNA strings.
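A heavily simplified toy of the "sum over all explanations" idea, not the paper's model or its EM parameter estimation: each character is explained either as a literal draw from the alphabet or as an approximate copy of a single earlier character, and a forward pass sums the probabilities of all explanations in O(n²) time; p_copy and p_match are placeholder parameters.

    import math

    def log_prob_approx_repeat(s, p_copy=0.5, p_match=0.9, alphabet=4):
        # Toy forward pass: P(s) summed over all explanations, where s[i] is
        # either drawn uniformly from the alphabet or copied (with mismatch
        # probability 1 - p_match) from some earlier position j < i.
        base = (1.0 - p_copy) / alphabet
        logp = 0.0
        for i, c in enumerate(s):
            prob = base                        # "literal" explanation
            if i > 0:
                copy_each = p_copy / i         # pick an earlier position uniformly
                for j in range(i):
                    prob += copy_each * (p_match if s[j] == c
                                         else (1 - p_match) / (alphabet - 1))
            logp += math.log(prob)
        return logp

    # Repetitive strings score higher (less negative) than random ones here.
    print(log_prob_approx_repeat("ACGTACGTACGA"))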
Sparse Substring Pattern Set Discovery Using Linear Programming Boosting
NASA Astrophysics Data System (ADS)
Kashihara, Kazuaki; Hatano, Kohei; Bannai, Hideo; Takeda, Masayuki
In this paper, we consider finding a small set of substring patterns which classifies the given documents well. We formulate the problem as a 1-norm soft margin optimization problem where each dimension corresponds to a substring pattern. Then we solve this problem by using LPBoost and an optimal substring discovery algorithm. Since the problem is a linear program, the resulting solution is likely to be sparse, which is useful for feature selection. We evaluate the proposed method on real data such as movie reviews.
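A rough sketch of the feature-space idea, not the authors' LPBoost/optimal-substring-discovery pipeline: enumerate candidate substrings, build binary presence features, and fit an L1-regularized linear model as a stand-in for the 1-norm soft margin linear program, so that only a few substring features receive non-zero weight; the toy documents, labels and regularization constant are assumptions.

    from sklearn.linear_model import LogisticRegression

    docs = ["the movie was great fun", "great acting and a great plot",
            "boring plot and bad acting", "a boring waste of time"]
    labels = [1, 1, 0, 0]

    # Candidate patterns: every substring of length 3..6 seen in the corpus
    # (a crude stand-in for an optimal substring-discovery algorithm).
    candidates = sorted({d[i:i + k] for d in docs for k in range(3, 7)
                         for i in range(len(d) - k + 1)})
    X = [[1 if c in d else 0 for c in candidates] for d in docs]

    # L1 regularization plays the role of the 1-norm soft margin objective:
    # the solution tends to be sparse, keeping only a few substring features.
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=10.0)
    clf.fit(X, labels)
    selected = [c for c, w in zip(candidates, clf.coef_[0]) if abs(w) > 1e-6]
    print(selected)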
Decoding DNA labels by melting curve analysis using real-time PCR.
Balog, József A; Fehér, Liliána Z; Puskás, László G
2017-12-01
Synthetic DNA has been used as an authentication code for a diverse number of applications. However, existing decoding approaches are based on either DNA sequencing or the determination of DNA length variations. Here, we present a simple alternative protocol for labeling different objects using a small number of short DNA sequences that differ in their melting points. Code amplification and decoding can be done in two steps using quantitative PCR (qPCR). To obtain a DNA barcode with high complexity, we defined 8 template groups, each having 4 different DNA templates; since each group can be represented by any of its 2^4 - 1 = 15 non-empty template subsets, this yields 15^8 (>2.5 billion) combinations of individual melting temperature (Tm) values and corresponding ID codes. The reproducibility and specificity of the decoding was confirmed by using the most complex template mixture, which had 32 different products in 8 groups with different Tm values. The industrial applicability of our protocol was also demonstrated by labeling a drone with an oil-based paint containing a predefined DNA code, which was then successfully decoded. The method presented here consists of a simple code system based on a small number of synthetic DNA sequences and a cost-effective, rapid decoding protocol using a few qPCR reactions, enabling a wide range of authentication applications.
Goz, Eli; Zafrir, Zohar; Tuller, Tamir
2018-04-30
Understanding how viruses co-evolve with their hosts and adapt various genomic level strategies in order to ensure their fitness may have essential implications in unveiling the secrets of viral evolution, and in developing new vaccines and therapeutic approaches. Here, based on a novel genomic analysis of 2,625 different viruses and 439 corresponding host organisms, we provide evidence of universal evolutionary selection for high dimensional 'silent' patterns of information hidden in the redundancy of viral genetic code. Our model suggests that long substrings of nucleotides in the coding regions of viruses from all classes often also repeat in the corresponding viral hosts from all domains of life. Selection for these substrings cannot be explained only by such phenomena as codon usage bias, horizontal gene transfer, and the encoded proteins. Genes encoding structural proteins responsible for building the core of the viral particles were found to include more host-repeating substrings, and these substrings tend to appear in the middle parts of the viral coding regions. In addition, in human viruses these substrings tend to be enriched with motifs related to transcription factors and RNA binding proteins. The host-repeating substrings are possibly related to the evolutionary pressure on the viruses to effectively interact with the host's intracellular factors and to efficiently escape from the host's immune system. Contact: tamirtul@post.tau.ac.il (TT). Supplementary data are available at Bioinformatics online.
Performance of Lempel-Ziv compressors with deferred innovation
NASA Technical Reports Server (NTRS)
Cohn, Martin
1989-01-01
The noiseless data-compression algorithms introduced by Lempel and Ziv (LZ) parse an input data string into successive substrings each consisting of two parts: the citation, which is the longest prefix that has appeared earlier in the input, and the innovation, which is the symbol immediately following the citation. In extremal versions of the LZ algorithm the citation may have begun anywhere in the input; in incremental versions it must have begun at a previous parse position. Originally the citation and the innovation were encoded, either individually or jointly, into an output word to be transmitted or stored. Subsequently, it was speculated that the cost of this encoding may be excessively high because the innovation contributes roughly lg(A) bits, where A is the size of the input alphabet, regardless of the compressibility of the source. To remedy this excess, it was suggested to store the parsed substring as usual, but to encode for output only the citation, leaving the innovation to be encoded as the first symbol of the next substring. Being thus included in the next substring, the innovation can participate in whatever compression that substring enjoys. This strategy is called deferred innovation. It is exemplified in the algorithm described by Welch and implemented in the C program compress that has widely displaced adaptive Huffman coding (compact) as a UNIX system utility. The excessive expansion that can result is explained, and an implicit warning is given against using deferred-innovation compressors on nearly incompressible data.
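The deferred-innovation strategy is essentially what Welch's LZW coder does: only the dictionary index of the citation is emitted, and the innovation symbol is left to start the next phrase. A compact sketch of that encoder (an illustration, not the compress utility itself; the dictionary here grows without bound):

    def lzw_encode(data):
        # LZW: emit only the code of the longest known prefix (the citation);
        # the following symbol (the innovation) is not emitted, it starts the
        # next phrase, so it participates in that phrase's compression.
        dictionary = {bytes([i]): i for i in range(256)}
        next_code = 256
        out, phrase = [], b""
        for byte in data:
            candidate = phrase + bytes([byte])
            if candidate in dictionary:
                phrase = candidate
            else:
                out.append(dictionary[phrase])       # citation only
                dictionary[candidate] = next_code    # citation + innovation stored
                next_code += 1
                phrase = bytes([byte])               # innovation deferred
        if phrase:
            out.append(dictionary[phrase])
        return out

    print(lzw_encode(b"TOBEORNOTTOBEORTOBEORNOT"))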
Decoding of quantum dots encoded microbeads using a hyperspectral fluorescence imaging method.
Liu, Yixi; Liu, Le; He, Yonghong; Zhu, Liang; Ma, Hui
2015-05-19
We present a method for decoding quantum-dot-encoded microbeads from their fluorescence spectra using line-scan hyperspectral fluorescence imaging (HFI). The HFI method was developed to attain both the spectra of the fluorescence signal and the spatial information of the encoded microbeads. A decoding scheme was adopted to decode the spectra of multicolor microbeads acquired by the HFI system. Comparison experiments between the HFI system and a flow cytometer were conducted. The results showed that the HFI system has higher spectral resolution; thus, more channels in the spectral dimension can be used. A detection and decoding experiment with single-stranded DNA (ssDNA)-immobilized multicolor beads demonstrated the efficiency of the HFI system. Surface modification of the microbeads with polydopamine was characterized by scanning electron microscopy, and ssDNA immobilization was characterized by laser confocal microscopy. These results indicate that the designed HFI system can be applied to practical biological and medical applications.
Analysis of correlation structures in the Synechocystis PCC6803 genome.
Wu, Zuo-Bing
2014-12-01
Transfer of nucleotide strings in the Synechocystis sp. PCC6803 genome is investigated to exhibit periodic and non-periodic correlation structures by using the recurrence plot method and the phase space reconstruction technique. The periodic correlation structures are generated by periodic transfer of several substrings in long periodic or non-periodic nucleotide strings embedded in the coding regions of genes. The non-periodic correlation structures are generated by non-periodic transfer of several substrings covering or overlapping with the coding regions of genes. In the periodic and non-periodic transfer, some gaps divide the long nucleotide strings into the substrings and prevent their global transfer. Most of the gaps are either the replacement of one base or the insertion/deletion of one base. In the reconstructed phase space, the points generated from two or three steps for the continuous iterative transfer via the second maximal distance can be fitted by two lines. This partly reveals an intrinsic dynamics in the transfer of nucleotide strings. Comparison of relative positions and lengths shows that the substrings involved in the non-periodic correlation structures are almost identical to the mobile elements annotated in the genome. The basic results on the correlation structures therefore also apply to these mobile elements. Copyright © 2014 Elsevier Ltd. All rights reserved.
2016-03-01
...Fine Location Code position 9–10: This substring represents the spatial... itself. For example, upper, pretest, or Hybrid III mid-sized male ATD. Physical dimension Code position 13–14: This substring represents the type of the...
An emergence of coordinated communication in populations of agents.
Kvasnicka, V; Pospichal, J
1999-01-01
The purpose of this article is to demonstrate that coordinated communication spontaneously emerges in a population composed of agents that are capable of specific cognitive activities. Internal states of agents are characterized by meaning vectors. Simple neural networks composed of one layer of hidden neurons perform cognitive activities of agents. An elementary communication act consists of the following: (a) two agents are selected, where one of them is declared the speaker and the other the listener; (b) the speaker codes a selected meaning vector onto a sequence of symbols and sends it to the listener as a message; and finally, (c) the listener decodes this message into a meaning vector and adapts its neural network such that the differences between speaker and listener meaning vectors are decreased. A Darwinian evolution enlarged by ideas from the Baldwin effect and Dawkins' memes is simulated by a simple version of an evolutionary algorithm without crossover. Agent fitness is determined by the success of mutual pairwise communications. It is demonstrated that agents in the course of evolution gradually do a better job of decoding received messages (their decoded vectors come closer to the speakers' meaning vectors) and all agents gradually start to use the same vocabulary for the common communication. Moreover, if agent meaning vectors contain regularities, then these regularities are manifested also in messages created by agent speakers, that is, similar parts of meaning vectors are coded by similar symbol substrings. This observation is considered a manifestation of the emergence of a grammar system in the common coordinated communication.
Human Genome Research: Decoding DNA
...instructions for making all the protein molecules for all the different kinds of cells of the human body. DeLisi played a pivotal role in proposing and initiating the Human Genome Program in 1986.
Ebbie: automated analysis and storage of small RNA cloning data using a dynamic web server
Ebhardt, H Alexander; Wiese, Kay C; Unrau, Peter J
2006-01-01
Background DNA sequencing is used ubiquitously: from deciphering genomes[1] to determining the primary sequence of small RNAs (smRNAs) [2-5]. The cloning of smRNAs is currently the most conventional method to determine the actual sequence of these important regulators of gene expression. Typical smRNA cloning projects involve the sequencing of hundreds to thousands of smRNA clones that are delimited at their 5' and 3' ends by fixed sequence regions. These primers result from the biochemical protocol used to isolate and convert the smRNA into clonable PCR products. Recently we completed a smRNA cloning project involving tobacco plants, where analysis was required for ~700 smRNA sequences[6]. Finding no easily accessible research tool to enter and analyze smRNA sequences, we developed Ebbie to assist us with our study. Results Ebbie is a semi-automated smRNA cloning data processing algorithm, which initially searches for any substring within a DNA sequencing text file that is flanked by two constant strings. The substring, also termed smRNA or insert, is stored in a MySQL and BlastN database. These inserts are then compared using BlastN to locally installed databases, allowing the rapid comparison of the insert to both the growing smRNA database and to other static sequence databases. Our laboratory used Ebbie to analyze scores of DNA sequencing data originating from an smRNA cloning project[6]. Through its built-in instant analysis of all inserts using BlastN, we were able to quickly identify 33 groups of smRNAs from ~700 database entries. This clustering allowed the easy identification of novel and highly expressed clusters of smRNAs. Ebbie is available under the GNU GPL and is currently implemented on a dynamic web server. Conclusion Ebbie was designed for medium-sized smRNA cloning projects with about 1,000 database entries [6-8]. Ebbie can be used for any type of sequence analysis where two constant primer regions flank a sequence of interest. The reliable storage of inserts and their annotation in a MySQL database, together with BlastN[9] comparison of new inserts to dynamic and static databases, make it a powerful new tool in any laboratory using DNA sequencing. Ebbie also prevents manual mistakes during the excision process and speeds up annotation and data entry. Once the server is installed locally, its access can be restricted to protect sensitive new DNA sequencing data. Ebbie was primarily designed for smRNA cloning projects, but can be applied to a variety of RNA and DNA cloning projects[2,3,10,11]. PMID:16584563
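The core extraction step described above, pulling out an insert flanked by two constant primer sequences from a sequencing read, can be sketched in a few lines; the primer sequences and the example read below are made up, and Ebbie itself additionally handles MySQL storage and BlastN annotation.

    import re

    def extract_insert(read, left_primer, right_primer):
        # Return the substring delimited by the two constant flanking sequences,
        # or None if both flanks are not found in order.
        pattern = re.escape(left_primer) + r"(.+?)" + re.escape(right_primer)
        m = re.search(pattern, read)
        return m.group(1) if m else None

    # Hypothetical flanking sequences and read, for illustration only.
    read = "NNNACGTTTGGccagtacgtagTTCCAAGGNNN".upper()
    print(extract_insert(read, "ACGTTTGG", "TTCCAAGG"))   # -> CCAGTACGTAG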
A novel chaotic image encryption scheme using DNA sequence operations
NASA Astrophysics Data System (ADS)
Wang, Xing-Yuan; Zhang, Ying-Qian; Bao, Xue-Mei
2015-10-01
In this paper, we propose a novel image encryption scheme based on DNA (Deoxyribonucleic acid) sequence operations and chaotic system. Firstly, we perform bitwise exclusive OR operation on the pixels of the plain image using the pseudorandom sequences produced by the spatiotemporal chaos system, i.e., CML (coupled map lattice). Secondly, a DNA matrix is obtained by encoding the confused image using a kind of DNA encoding rule. Then we generate the new initial conditions of the CML according to this DNA matrix and the previous initial conditions, which can make the encryption result closely depend on every pixel of the plain image. Thirdly, the rows and columns of the DNA matrix are permuted. Then, the permuted DNA matrix is confused once again. At last, after decoding the confused DNA matrix using a kind of DNA decoding rule, we obtain the ciphered image. Experimental results and theoretical analysis show that the scheme is able to resist various attacks, so it has extraordinarily high security.
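A toy sketch of two of the building blocks mentioned above, XOR confusion with a chaotic keystream and DNA encoding/decoding of the result, applied to a few bytes standing in for image pixels; the logistic map used here is a placeholder for the coupled map lattice, and the encoding rule, seed and parameters are assumptions.

    def logistic_keystream(n, x=0.3141, mu=3.9999):
        # Placeholder chaotic keystream (logistic map) standing in for the CML.
        out = []
        for _ in range(n):
            x = mu * x * (1 - x)
            out.append(int(x * 256) % 256)
        return out

    # A common DNA encoding rule: 00 -> A, 01 -> C, 10 -> G, 11 -> T.
    DNA = "ACGT"

    def dna_encode(byte):
        return "".join(DNA[(byte >> shift) & 3] for shift in (6, 4, 2, 0))

    def dna_decode(quad):
        value = 0
        for base in quad:
            value = (value << 2) | DNA.index(base)
        return value

    pixels = [52, 200, 17, 99]                       # stand-in for plain-image pixels
    key = logistic_keystream(len(pixels))
    confused = [p ^ k for p, k in zip(pixels, key)]  # bitwise XOR confusion
    dna_matrix = [dna_encode(b) for b in confused]
    print(dna_matrix)
    recovered = [dna_decode(q) ^ k for q, k in zip(dna_matrix, key)]
    assert recovered == pixels                       # decoding + XOR restores the pixels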
Local alignment of two-base encoded DNA sequence
Homer, Nils; Merriman, Barry; Nelson, Stanley F
2009-01-01
Background DNA sequence comparison is based on optimal local alignment of two sequences using a similarity score. However, some new DNA sequencing technologies do not directly measure the base sequence, but rather an encoded form, such as the two-base encoding considered here. In order to compare such data to a reference sequence, the data must be decoded into sequence. The decoding is deterministic, but the possibility of measurement errors requires searching among all possible error modes and resulting alignments to achieve an optimal balance of fewer errors versus greater sequence similarity. Results We present an extension of the standard dynamic programming method for local alignment, which simultaneously decodes the data and performs the alignment, maximizing a similarity score based on a weighted combination of errors and edits, and allowing an affine gap penalty. We also present simulations that demonstrate the performance characteristics of our two base encoded alignment method and contrast those with standard DNA sequence alignment under the same conditions. Conclusion The new local alignment algorithm for two-base encoded data has substantial power to properly detect and correct measurement errors while identifying underlying sequence variants, and facilitating genome re-sequencing efforts based on this form of sequence data. PMID:19508732
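In two-base (color-space) encoding, each color encodes the transition between two adjacent bases, so decoding proceeds cumulatively from a known primer base and a single color error corrupts every base downstream, which is why the alignment has to weigh measurement errors against sequence edits. A minimal decoding sketch using the commonly described SOLiD color convention (the example sequence is made up):

    BASES = "ACGT"   # A=0, C=1, G=2, T=3: with this ordering the color of a
                     # di-base pair is the bitwise XOR of the two base codes

    def encode_colorspace(seq):
        # Two-base encoding: one color (0-3) per pair of adjacent bases.
        codes = [BASES.index(b) for b in seq]
        return [a ^ b for a, b in zip(codes, codes[1:])]

    def decode_colorspace(primer_base, colors):
        # Decoding is cumulative: each base is the previous base XOR the color,
        # so a single measurement error corrupts every downstream base.
        code = BASES.index(primer_base)
        seq = []
        for c in colors:
            code ^= c
            seq.append(BASES[code])
        return "".join(seq)

    colors = encode_colorspace("TACGGT")
    print(colors)                            # -> [3, 1, 3, 0, 1]
    print(decode_colorspace("T", colors))    # -> ACGGT (the read minus the primer base)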
Transition state-finding strategies for use with the growing string method.
Goodrow, Anthony; Bell, Alexis T; Head-Gordon, Martin
2009-06-28
Efficient identification of transition states is important for understanding reaction mechanisms. Most transition state search algorithms require long computational times and a good estimate of the transition state structure in order to converge, particularly for complex reaction systems. The growing string method (GSM) [B. Peters et al., J. Chem. Phys. 120, 7877 (2004)] does not require an initial guess of the transition state; however, the calculation is still computationally intensive due to repeated calls to the quantum mechanics code. Recent modifications to the GSM [A. Goodrow et al., J. Chem. Phys. 129, 174109 (2008)] have reduced the total computational time for converging to a transition state by a factor of 2 to 3. In this work, three transition state-finding strategies have been developed to complement the speedup of the modified-GSM: (1) a hybrid strategy, (2) an energy-weighted strategy, and (3) a substring strategy. The hybrid strategy initiates the string calculation at a low level of theory (HF/STO-3G), which is then refined at a higher level of theory (B3LYP/6-31G(*)). The energy-weighted strategy spaces points along the reaction pathway based on the energy at those points, leading to a higher density of points where the energy is highest and finer resolution of the transition state. The substring strategy is similar to the hybrid strategy, but only a portion of the low-level string is refined using a higher level of theory. These three strategies have been used with the modified-GSM and are compared in three reactions: alanine dipeptide isomerization, H-abstraction in methanol oxidation on VO(x)/SiO(2) catalysts, and C-H bond activation in the oxidative carbonylation of toluene to p-toluic acid on Rh(CO)(2)(TFA)(3) catalysts. In each of these examples, the substring strategy was proved most effective by obtaining a better estimate of the transition state structure and reducing the total computational time by a factor of 2 to 3 compared to the modified-GSM. The applicability of the substring strategy has been extended to three additional examples: cyclopropane rearrangement to propylene, isomerization of methylcyclopropane to four different stereoisomers, and the bimolecular Diels-Alder condensation of 1,3-butadiene and ethylene to cyclohexene. Thus, the substring strategy used in combination with the modified-GSM has been demonstrated to be an efficient transition state-finding strategy for a wide range of types of reactions.
Cache-Oblivious parallel SIMD Viterbi decoding for sequence search in HMMER.
Ferreira, Miguel; Roma, Nuno; Russo, Luis M S
2014-05-30
HMMER is a commonly used bioinformatics tool based on Hidden Markov Models (HMMs) to analyze and process biological sequences. One of its main homology engines is based on the Viterbi decoding algorithm, which was already highly parallelized and optimized using Farrar's striped processing pattern with Intel SSE2 instruction set extension. A new SIMD vectorization of the Viterbi decoding algorithm is proposed, based on an SSE2 inter-task parallelization approach similar to the DNA alignment algorithm proposed by Rognes. Besides this alternative vectorization scheme, the proposed implementation also introduces a new partitioning of the Markov model that allows a significantly more efficient exploitation of the cache locality. Such optimization, together with an improved loading of the emission scores, allows the achievement of a constant processing throughput, regardless of the innermost-cache size and of the dimension of the considered model. The proposed optimized vectorization of the Viterbi decoding algorithm was extensively evaluated and compared with the HMMER3 decoder to process DNA and protein datasets, proving to be a rather competitive alternative implementation. Being always faster than the already highly optimized ViterbiFilter implementation of HMMER3, the proposed Cache-Oblivious Parallel SIMD Viterbi (COPS) implementation provides a constant throughput and offers a processing speedup as high as two times faster, depending on the model's size.
Applying Agrep to r-NSA to solve multiple sequences approximate matching.
Ni, Bing; Wong, Man-Hon; Lam, Chi-Fai David; Leung, Kwong-Sak
2014-01-01
This paper addresses the approximate matching problem in a database consisting of multiple DNA sequences, where the proposed approach applies Agrep to a new truncated suffix array, r-NSA. The construction time of the structure is linear in the database size, and indexing a substring in the structure takes constant time. The number of characters processed in applying Agrep is analysed theoretically, and the theoretical upper bound closely approximates the empirical number of characters, which is obtained by enumerating the characters in the actual structure built. Experiments are carried out using (synthetic) random DNA sequences, as well as (real) genome sequences including Hepatitis-B Virus and X-chromosome. Experimental results show that, compared to the straightforward approach that applies Agrep to multiple sequences individually, the proposed approach solves the matching problem in much shorter time. The speed-up of our approach depends on the sequence patterns, and for highly similar homologous genome sequences, which are the common cases in real-life genomes, it can be up to several orders of magnitude.
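Agrep's core is the bitap (shift-and) algorithm, which tracks partial matches as bit masks and extends them to k errors; the paper applies this machinery over the r-NSA structure built from all database sequences. A minimal bitap sketch for matching with up to k substitutions, without any suffix-array indexing:

    def bitap_hamming(text, pattern, k):
        # Bitap (shift-and) search reporting start positions where the pattern
        # occurs in the text with at most k substitutions (Hamming errors).
        m = len(pattern)
        mask = {}
        for j, c in enumerate(pattern):
            mask[c] = mask.get(c, 0) | (1 << j)    # bit j set where pattern[j] == c
        R = [0] * (k + 1)                          # R[d]: prefixes matching with <= d errors
        hits = []
        for i, c in enumerate(text):
            prev = R[:]                            # values from the previous position
            R[0] = ((prev[0] << 1) | 1) & mask.get(c, 0)
            for d in range(1, k + 1):
                exact = ((prev[d] << 1) | 1) & mask.get(c, 0)
                subst = (prev[d - 1] << 1) | 1     # accept a mismatch at this position
                R[d] = exact | subst
            if R[k] & (1 << (m - 1)):
                hits.append(i - m + 1)
        return hits

    print(bitap_hamming("ACGTTGCATGACGTA", "TGCACG", 2))   # -> [4] (one substitution)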
Unified View of Backward Backtracking in Short Read Mapping
NASA Astrophysics Data System (ADS)
Mäkinen, Veli; Välimäki, Niko; Laaksonen, Antti; Katainen, Riku
Mapping short DNA reads to the reference genome is the core task in the recent high-throughput technologies to study e.g. protein-DNA interactions (ChIP-seq) and alternative splicing (RNA-seq). Several tools for the task (bowtie, bwa, SOAP2, TopHat) have been developed that exploit the Burrows-Wheeler transform and the backward backtracking technique on it to map the reads to their best approximate occurrences in the genome. These tools use different tailored mechanisms for small error levels to prune the search phase significantly. We propose a new pruning mechanism that can be seen as a generalization of the tailored mechanisms used so far. It uses a novel idea of storing all cyclic rotations of fixed-length substrings of the reference sequence with a compressed index that is able to exploit the repetitions created to level out the growth of the input set. For RNA-seq we propose a new method that combines dynamic programming with backtracking to map efficiently and correctly all reads that span two exons. The same mechanism can also be used for mapping mate-pair reads.
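Exact backward search over the Burrows-Wheeler transform is the primitive that these mappers extend with backtracking to tolerate mismatches. A compact sketch of the exact case, with the index built naively from the suffix array and none of the compression or cyclic-rotation pruning proposed in the paper:

    def bwt_index(text):
        # Naive FM-index pieces: suffix array, BWT, C[] counts and Occ table.
        text += "$"
        sa = sorted(range(len(text)), key=lambda i: text[i:])
        bwt = "".join(text[i - 1] for i in sa)
        alphabet = sorted(set(text))
        C, total = {}, 0                     # C[c]: characters strictly smaller than c
        for c in alphabet:
            C[c] = total
            total += text.count(c)
        occ = {c: [0] * (len(bwt) + 1) for c in alphabet}
        for i, ch in enumerate(bwt):         # occ[c][i]: occurrences of c in bwt[:i]
            for c in alphabet:
                occ[c][i + 1] = occ[c][i] + (1 if ch == c else 0)
        return sa, C, occ

    def backward_search(pattern, sa, C, occ):
        # Exact occurrences, processing the pattern from last character to first;
        # backward backtracking extends this loop by branching over substituted
        # characters whenever the (lo, hi) range becomes empty.
        lo, hi = 0, len(sa)
        for c in reversed(pattern):
            if c not in C:
                return []
            lo = C[c] + occ[c][lo]
            hi = C[c] + occ[c][hi]
            if lo >= hi:
                return []
        return sorted(sa[lo:hi])

    sa, C, occ = bwt_index("gattacaggattc")
    print(backward_search("att", sa, C, occ))   # -> [1, 9]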
Cache-Oblivious parallel SIMD Viterbi decoding for sequence search in HMMER
2014-01-01
Background HMMER is a commonly used bioinformatics tool based on Hidden Markov Models (HMMs) to analyze and process biological sequences. One of its main homology engines is based on the Viterbi decoding algorithm, which was already highly parallelized and optimized using Farrar’s striped processing pattern with Intel SSE2 instruction set extension. Results A new SIMD vectorization of the Viterbi decoding algorithm is proposed, based on an SSE2 inter-task parallelization approach similar to the DNA alignment algorithm proposed by Rognes. Besides this alternative vectorization scheme, the proposed implementation also introduces a new partitioning of the Markov model that allows a significantly more efficient exploitation of the cache locality. Such optimization, together with an improved loading of the emission scores, allows the achievement of a constant processing throughput, regardless of the innermost-cache size and of the dimension of the considered model. Conclusions The proposed optimized vectorization of the Viterbi decoding algorithm was extensively evaluated and compared with the HMMER3 decoder to process DNA and protein datasets, proving to be a rather competitive alternative implementation. Being always faster than the already highly optimized ViterbiFilter implementation of HMMER3, the proposed Cache-Oblivious Parallel SIMD Viterbi (COPS) implementation provides a constant throughput and offers a processing speedup as high as two times faster, depending on the model’s size. PMID:24884826
A novel chaos-based image encryption algorithm using DNA sequence operations
NASA Astrophysics Data System (ADS)
Chai, Xiuli; Chen, Yiran; Broyde, Lucie
2017-01-01
An image encryption algorithm based on a chaotic system and deoxyribonucleic acid (DNA) sequence operations is proposed in this paper. First, the plain image is encoded into a DNA matrix, and then a new wave-based permutation scheme is performed on it. The chaotic sequences produced by a 2D Logistic chaotic map are employed for row circular permutation (RCP) and column circular permutation (CCP). Initial values and parameters of the chaotic system are calculated by the SHA-256 hash of the plain image and the given values. Then, a row-by-row image diffusion method at DNA level is applied. A key matrix generated from the chaotic map is used to fuse the confused DNA matrix; also the initial values and system parameters of the chaotic system are renewed by the Hamming distance of the plain image. Finally, after decoding the diffused DNA matrix, we obtain the cipher image. The DNA encoding/decoding rules of the plain image and the key matrix are determined by the plain image. Experimental results and security analyses both confirm that the proposed algorithm not only achieves an excellent encryption result but also resists various typical attacks.
WASP7 BENTHIC ALGAE - MODEL THEORY AND USER'S GUIDE
The standard WASP7 eutrophication module includes nitrogen and phosphorus cycling, dissolved oxygen-organic matter interactions, and phytoplankton kinetics. In many shallow streams and rivers, however, the attached algae (benthic algae, or periphyton, attached to submerged substr...
Decoding DNA, RNA and peptides with quantum tunnelling
NASA Astrophysics Data System (ADS)
di Ventra, Massimiliano; Taniguchi, Masateru
2016-02-01
Drugs and treatments could be precisely tailored to an individual patient by extracting their cellular- and molecular-level information. For this approach to be feasible on a global scale, however, information on complete genomes (DNA), transcriptomes (RNA) and proteomes (all proteins) needs to be obtained quickly and at low cost. Quantum mechanical phenomena could potentially be of value here, because the biological information needs to be decoded at an atomic level and quantum tunnelling has recently been shown to be able to differentiate single nucleobases and amino acids in short sequences. Here, we review the different approaches to using quantum tunnelling for sequencing, highlighting the theoretical background to the method and the experimental capabilities demonstrated to date. We also explore the potential advantages of the approach and the technical challenges that must be addressed to deliver practical quantum sequencing devices.
Prevalence of Mitochondrial 12S rRNA Mutations Associated with Aminoglycoside Ototoxicity
ERIC Educational Resources Information Center
Guan, Min-Xin
2005-01-01
The mitochondrial DNA (mtDNA) 12S rRNA is a hot spot for mutations associated with both aminoglycoside-induced and nonsyndromic hearing loss. Of those, the homoplasmic A1555G and C1494T mutations at a highly conserved decoding region of the 12S rRNA have been associated with hearing loss. These two mutations account for a significant number of…
Brody, Thomas; Yavatkar, Amarendra S; Kuzin, Alexander; Kundu, Mukta; Tyson, Leonard J; Ross, Jermaine; Lin, Tzu-Yang; Lee, Chi-Hon; Awasaki, Takeshi; Lee, Tzumin; Odenwald, Ward F
2012-01-01
Background: Phylogenetic footprinting has revealed that cis-regulatory enhancers consist of conserved DNA sequence clusters (CSCs). Currently, there is no systematic approach for enhancer discovery and analysis that takes full advantage of the sequence information within enhancer CSCs. Results: We have generated a Drosophila genome-wide database of conserved DNA consisting of >100,000 CSCs derived from EvoPrints spanning over 90% of the genome. cis-Decoder database search and alignment algorithms enable the discovery of functionally related enhancers. The program first identifies conserved repeat elements within an input enhancer and then searches the database for CSCs that score highly against the input CSC. Scoring is based on shared repeats as well as uniquely shared matches, and includes measures of the balance of shared elements, a diagnostic that has proven to be useful in predicting cis-regulatory function. To demonstrate the utility of these tools, a temporally-restricted CNS neuroblast enhancer was used to identify other functionally related enhancers and analyze their structural organization. Conclusions: cis-Decoder reveals that co-regulating enhancers consist of combinations of overlapping shared sequence elements, providing insights into the mode of integration of multiple regulating transcription factors. The database and accompanying algorithms should prove useful in the discovery and analysis of enhancers involved in any developmental process. Developmental Dynamics 241:169–189, 2012. © 2011 Wiley Periodicals, Inc. Key findings: A genome-wide catalog of Drosophila conserved DNA sequence clusters. cis-Decoder discovers functionally related enhancers. Functionally related enhancers share balanced sequence element copy numbers. Many enhancers function during multiple phases of development. PMID:22174086
Profiling cellular protein complexes by proximity ligation with dual tag microarray readout.
Hammond, Maria; Nong, Rachel Yuan; Ericsson, Olle; Pardali, Katerina; Landegren, Ulf
2012-01-01
Patterns of protein interactions provide important insights into basic biology, and their analysis plays an increasing role in drug development and diagnostics of disease. We have established a scalable technique to compare two biological samples for the levels of all pairwise interactions among a set of targeted protein molecules. The technique is a combination of the proximity ligation assay with readout via dual tag microarrays. In the proximity ligation assay protein identities are encoded as DNA sequences by attaching DNA oligonucleotides to antibodies directed against the proteins of interest. Upon binding by pairs of antibodies to proteins present in the same molecular complexes, ligation reactions give rise to reporter DNA molecules that contain the combined sequence information from the two DNA strands. The ligation reactions also serve to incorporate a sample barcode in the reporter molecules to allow for direct comparison between pairs of samples. The samples are evaluated using a dual tag microarray where information is decoded, revealing which pairs of tags have become joined. As a proof of concept, we demonstrate that this approach can be used to detect a set of five proteins and their pairwise interactions both in cellular lysates and in fixed tissue culture cells. This paper provides a general strategy to analyze the extent of any pairwise interactions in large sets of molecules by decoding reporter DNA strands that identify the interacting molecules.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lapidus, Alla L.
From the date its role in heredity was discovered, DNA has been generating interest among scientists from different fields of knowledge: physicists have studied the three dimensional structure of the DNA molecule, biologists tried to decode the secrets of life hidden within these long molecules, and technologists invent and improve methods of DNA analysis. The analysis of the nucleotide sequence of DNA occupies a special place among the methods developed. Thanks to the variety of sequencing technologies available, the process of decoding the sequence of genomic DNA (or whole genome sequencing) has become robust and inexpensive. Meanwhile the assembly of whole genome sequences remains a challenging task. In addition to the need to assemble millions of DNA fragments of different length (from 35 bp (Solexa) to 800 bp (Sanger)), great interest in analysis of microbial communities (metagenomes) of different complexities raises new problems and pushes some new requirements for sequence assembly tools to the forefront. The genome assembly process can be divided into two steps: draft assembly and assembly improvement (finishing). Despite the fact that automatically performed assembly (or draft assembly) is capable of covering up to 98% of the genome, in most cases, it still contains incorrectly assembled reads. The error rate of the consensus sequence produced at this stage is about 1/2000 bp. A finished genome represents the genome assembly of much higher accuracy (with no gaps or incorrectly assembled areas) and quality (~1 error/10,000 bp), validated through a number of computer and laboratory experiments.
Nitrite intensity explains N management effects on N2O emissions in maize
USDA-ARS?s Scientific Manuscript database
It is typically assumed that the dependence of nitrous oxide (N2O) emissions on soil nitrogen (N) availability is best quantified in terms of ammonium (NH4+) and/or nitrate (NO3-) concentrations. In contrast, nitrite (NO2-) is seldom measured separately from NO3- despite its role as a central substr...
DNA Base-Calling from a Nanopore Using a Viterbi Algorithm
Timp, Winston; Comer, Jeffrey; Aksimentiev, Aleksei
2012-01-01
Nanopore-based DNA sequencing is the most promising third-generation sequencing method. It has superior read length, speed, and sample requirements compared with state-of-the-art second-generation methods. However, base-calling still presents substantial difficulty because the resolution of the technique is limited compared with the measured signal/noise ratio. Here we demonstrate a method to decode 3-bp-resolution nanopore electrical measurements into a DNA sequence using a Hidden Markov model. This method shows tremendous potential for accuracy (∼98%), even with a poor signal/noise ratio. PMID:22677395
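A toy version of the decoding step: hidden states are 3-mers, each 3-mer emits a noisy current level, and the Viterbi recursion recovers the most likely state path, and hence the sequence, from the measured trace. The level table, noise level and one-base-per-measurement transition structure are illustrative assumptions, not the calibrated nanopore model of the paper.

    import itertools, random

    BASES = "ACGT"
    KMERS = ["".join(p) for p in itertools.product(BASES, repeat=3)]

    def level(kmer):
        # Made-up, well-separated current level per 3-mer (real levels are calibrated).
        return 3.0 * KMERS.index(kmer)

    def viterbi_basecall(signal, sigma=1.5):
        # Most likely 3-mer path given noisy levels, assuming the pore advances
        # one base per measurement: state y may follow x only if y[:2] == x[1:].
        def emit(k, obs):                       # log Gaussian, constants dropped
            return -((obs - level(k)) ** 2) / (2 * sigma ** 2)
        score = {k: emit(k, signal[0]) for k in KMERS}
        backpointers = []
        for obs in signal[1:]:
            new_score, back = {}, {}
            for k in KMERS:
                preds = [p for p in KMERS if p[1:] == k[:2]]   # 4 predecessors
                best = max(preds, key=lambda p: score[p])
                new_score[k] = score[best] + emit(k, obs)      # uniform transitions omitted
                back[k] = best
            score = new_score
            backpointers.append(back)
        k = max(KMERS, key=lambda s: score[s])
        path = [k]
        for back in reversed(backpointers):
            k = back[k]
            path.append(k)
        path.reverse()
        return path[0] + "".join(p[-1] for p in path[1:])      # stitch 3-mers into a sequence

    rng = random.Random(1)
    true_seq = "GATTACAGGA"
    kmers = [true_seq[i:i + 3] for i in range(len(true_seq) - 2)]
    noisy_signal = [level(k) + rng.gauss(0, 1.0) for k in kmers]
    print(viterbi_basecall(noisy_signal), "vs", true_seq)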
Stephen Baylin, M.D., Explains Genetics and Epigenetics - TCGA
Stephen Baylin, M.D., at the Johns Hopkins Kimmel Cancer Center discusses how alterations in the DNA code are deciphered in a combined effort with The Cancer Genome Atlas at the National Cancer Institute to decode the brain cancer genome.
Marck, Christian; Kachouri-Lafond, Rym; Lafontaine, Ingrid; Westhof, Eric; Dujon, Bernard; Grosjean, Henri
2006-01-01
We present the first comprehensive analysis of RNA polymerase III (Pol III) transcribed genes in ten yeast genomes. This set includes all tRNA genes (tDNA) and genes coding for SNR6 (U6), SNR52, SCR1 and RPR1 RNA in the nine hemiascomycetes Saccharomyces cerevisiae, Saccharomyces castellii, Candida glabrata, Kluyveromyces waltii, Kluyveromyces lactis, Eremothecium gossypii, Debaryomyces hansenii, Candida albicans, Yarrowia lipolytica and the archiascomycete Schizosaccharomyces pombe. We systematically analysed sequence specificities of tRNA genes, polymorphism, variability of introns, gene redundancy and gene clustering. Analysis of decoding strategies showed that yeasts close to S. cerevisiae use bacterial decoding rules to read the Leu CUN and Arg CGN codons, in contrast to all other known Eukaryotes. In D. hansenii and C. albicans, we identified a novel tDNA-Leu (AAG), reading the Leu CUU/CUC/CUA codons with an unusual G at position 32. A systematic ‘p-distance tree’ using the 60 variable positions of the tRNA molecule revealed that most tDNAs cluster into amino acid-specific sub-trees, suggesting that, within hemiascomycetes, orthologous tDNAs are more closely related than paralogs. We finally determined the bipartite A- and B-box sequences recognized by TFIIIC. These minimal sequences are nearly conserved throughout hemiascomycetes and were satisfactorily retrieved at appropriate locations in other Pol III genes. PMID:16600899
DNA base-calling from a nanopore using a Viterbi algorithm.
Timp, Winston; Comer, Jeffrey; Aksimentiev, Aleksei
2012-05-16
Nanopore-based DNA sequencing is the most promising third-generation sequencing method. It has superior read length, speed, and sample requirements compared with state-of-the-art second-generation methods. However, base-calling still presents substantial difficulty because the resolution of the technique is limited compared with the measured signal/noise ratio. Here we demonstrate a method to decode 3-bp-resolution nanopore electrical measurements into a DNA sequence using a Hidden Markov model. This method shows tremendous potential for accuracy (~98%), even with a poor signal/noise ratio. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Transcription factors as readers and effectors of DNA methylation.
Zhu, Heng; Wang, Guohua; Qian, Jiang
2016-08-01
Recent technological advances have made it possible to decode DNA methylomes at single-base-pair resolution under various physiological conditions. Many aberrant or differentially methylated sites have been discovered, but the mechanisms by which changes in DNA methylation lead to observed phenotypes, such as cancer, remain elusive. The classical view of methylation-mediated protein-DNA interactions is that only proteins with a methyl-CpG binding domain (MBD) can interact with methylated DNA. However, evidence is emerging to suggest that transcription factors lacking a MBD can also interact with methylated DNA. The identification of these proteins and the elucidation of their characteristics and the biological consequences of methylation-dependent transcription factor-DNA interactions are important stepping stones towards a mechanistic understanding of methylation-mediated biological processes, which have crucial implications for human development and disease.
Transcription factors as readers and effectors of DNA methylation
Zhu, Heng; Wang, Guohua; Qian, Jiang
2017-01-01
Recent technological advances have made it possible to decode DNA methylomes at single-base-pair resolution under various physiological conditions. Many aberrant or differentially methylated sites have been discovered, but the mechanisms by which changes in DNA methylation lead to observed phenotypes, such as cancer, remain elusive. The classical view of methylation-mediated protein-DNA interactions is that only proteins with a methyl-CpG binding domain (MBD) can interact with methylated DNA. However, evidence is emerging to suggest that transcription factors lacking a MBD can also interact with methylated DNA. The identification of these proteins and the elucidation of their characteristics and the biological consequences of methylation-dependent transcription factor-DNA interactions are important stepping stones towards a mechanistic understanding of methylation-mediated biological processes, which have crucial implications for human development and disease. PMID:27479905
Projector Center: Replication, Transcription, and Translation.
ERIC Educational Resources Information Center
Ruth, Edward B.
1984-01-01
Describes the use of a chart that systematically summarizes three basic steps that involve DNA and its decoding in both eukaryotic and prokaryotic cells: replication, transcription, and translation. Indicates that the chart (mounted on a transparency) does an adequate job of conveying basic information about nucleic acids to students. (DH)
M. Thompson Conkle
1986-01-01
Check the laboratory reports after your next physical. You'll find information on a number of biochemical processes. Procedures like those used in the medical sciences are yielding valuable information about genetic differences among trees and tree pests. New procedures that provide ways to isolate and move genes are advancing progress in tree improvement. These...
2013-09-01
[Table fragment: Table 2, "Common Strain-Unique Proteins from Replicate...", covering Escherichia coli strains E24377A, K-12 substr. MG1655, SE11 and W3110, and Shigella boydii strains CDC 3083-94 and Sb227; row SbBS512_E4084 (Shigella boydii/E. coli NC101) with entries ND; ND = not determined, EC = E. coli.]
Decoding molecular interactions in microbial communities
Abreu, Nicole A.; Taga, Michiko E.
2016-01-01
Microbial communities govern numerous fundamental processes on earth. Discovering and tracking molecular interactions among microbes is critical for understanding how single species and complex communities impact their associated host or natural environment. While recent technological developments in DNA sequencing and functional imaging have led to new and deeper levels of understanding, we are limited now by our inability to predict and interpret the intricate relationships and interspecies dependencies within these communities. In this review, we highlight the multifaceted approaches investigators have taken within their areas of research to decode interspecies molecular interactions that occur between microbes. Understanding these principles can give us greater insight into ecological interactions in natural environments and within synthetic consortia. PMID:27417261
Stereo-Based Region-Growing using String Matching
NASA Technical Reports Server (NTRS)
Mandelbaum, Robert; Mintz, Max
1995-01-01
We present a novel stereo algorithm based on a coarse texture segmentation preprocessing phase. Matching is performed using a string comparison. Matching sub-strings correspond to matching sequences of textures. Inter-scanline clustering of matching sub-strings yields regions of matching texture. The shape of these regions yields information concerning an object's height, width, and azimuthal position relative to the camera pair. Hence, rather than the standard dense depth map, the output of this algorithm is a segmentation of objects in the scene. Such a format is useful for the integration of stereo with other sensor modalities on a mobile robotic platform. It is also useful for localization; the height and width of a detected object may be used for landmark recognition, while depth and relative azimuthal location determine pose. The algorithm does not rely on the monotonicity of order of image primitives. Occlusions, exposures, and foreshortening effects are not problematic. The algorithm can deal with certain types of transparencies. It is computationally efficient, and very amenable to parallel implementation. Further, the epipolar constraints may be relaxed to some small but significant degree. A version of the algorithm has been implemented and tested on various types of images. It performs best on random dot stereograms, on images with easily filtered backgrounds (as in synthetic images), and on real scenes with uncontrived backgrounds.
NASA Astrophysics Data System (ADS)
Gholibeigian, Hassan
In my vision, there are four animated sub-particles (matter, plant, animal and human sub-particles) as the origin of life and the creator of momentum in each fundamental particle (string). They communicate with a dimension of information, nested with space-time, to get a package of information in each Planck time. They are the link-point between the dimension of information and space-time. The sub-particle, which identifies its fundamental particle, processes the package of information to find its next step. The processed information is always carried by the fundamental particles as the history of the universe and enhances its entropy. The proposed formula for the number of packages is I = t_P^(-1) · τ, where t_P is the Planck time and τ is the fundamental particle's lifetime. For example, a photon processes about 1.8×10^43 packages of information to find its path in one second (τ = 1 s, t_P ≈ 5.4×10^-44 s). Each process takes place faster than light speed. In our bodies, human sub-particles (sub-strings) communicate with the dimension of information and get packages of information, including standard ethics, to process and find their next step. The processed information transforms into knowledge in our mind. This knowledge is always carried by us. Knowledge, as the Result of the Processed Information by Human's Sub-particles (sub-strings)/Mind in our Brain.
A combinatorial approach to the design of vaccines.
Martínez, Luis; Milanič, Martin; Legarreta, Leire; Medvedev, Paul; Malaina, Iker; de la Fuente, Ildefonso M
2015-05-01
We present two new problems of combinatorial optimization and discuss their applications to the computational design of vaccines. In the shortest λ-superstring problem, given a family S(1),...,S(k) of strings over a finite alphabet, a set T of "target" strings over that alphabet, and an integer λ, the task is to find a string of minimum length containing, for each i, at least λ target strings that are substrings of S(i). In the shortest λ-cover superstring problem, given a collection X(1),...,X(n) of finite sets of strings over a finite alphabet and an integer λ, the task is to find a string of minimum length containing, for each i, at least λ elements of X(i) as substrings. The two problems are polynomially equivalent, and the shortest λ-cover superstring problem is a common generalization of two well-known combinatorial optimization problems, the shortest common superstring problem and the set cover problem. We present two approaches to obtain exact or approximate solutions to the shortest λ-superstring and λ-cover superstring problems: one based on integer programming, and a hill-climbing algorithm. An application is given to the computational design of vaccines, and the algorithms are applied to experimental data taken from patients infected by H5N1 and HIV-1.
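A small sketch in the spirit of the heuristic approach, not the integer-programming formulation: a greedy construction for the λ-cover superstring variant that repeatedly merges in, with maximal overlap, the candidate string giving the best coverage gain per added character. It assumes every X(i) has at least λ elements; the example sets are hypothetical epitope fragments.

    def overlap(a, b):
        # Length of the longest suffix of a that is a prefix of b.
        for k in range(min(len(a), len(b)), 0, -1):
            if a.endswith(b[:k]):
                return k
        return 0

    def coverage(sol, collections, lam):
        # Number of collections X_i with at least lam elements occurring in sol.
        return sum(1 for X in collections if sum(t in sol for t in X) >= lam)

    def greedy_lambda_cover_superstring(collections, lam):
        # Greedy sketch: merge in the candidate with the best coverage gain per
        # added character until every collection is lambda-covered.
        candidates = sorted({t for X in collections for t in X})
        sol = ""
        while coverage(sol, collections, lam) < len(collections):
            def gain(t):
                merged = sol + t[overlap(sol, t):]
                added = max(len(merged) - len(sol), 1)
                return ((coverage(merged, collections, lam)
                         - coverage(sol, collections, lam)) / added, -added)
            best = max((t for t in candidates if t not in sol), key=gain)
            sol += best[overlap(sol, best):]
        return sol

    X1 = ["ACDE", "CDEF", "GHIK"]   # hypothetical epitope fragments
    X2 = ["CDEF", "KLMN"]
    print(greedy_lambda_cover_superstring([X1, X2], lam=2))   # -> ACDEFKLMN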
Characterization of "cis"-regulatory elements ("c"RE) associated with mammary gland function
USDA-ARS?s Scientific Manuscript database
The Bos taurus genome assembly has propelled dairy science into a new era; still, most of the information encoded in the genome has not yet been decoded. The human Encyclopedia of DNA Elements (ENCODE) project has spearheaded the identification and annotation of functional genomic elements in the hu...
Quick, Joshua; Quinlan, Aaron R; Loman, Nicholas J
2014-01-01
The MinION™ is a new, portable single-molecule sequencer developed by Oxford Nanopore Technologies. It measures four inches in length and is powered from the USB 3.0 port of a laptop computer. The MinION™ measures the change in current resulting from DNA strands interacting with a charged protein nanopore. These measurements can then be used to deduce the underlying nucleotide sequence. We present a read dataset from whole-genome shotgun sequencing of the model organism Escherichia coli K-12 substr. MG1655 generated on a MinION™ device during the early-access MinION™ Access Program (MAP). Sequencing runs of the MinION™ are presented, one generated using R7 chemistry (released in July 2014) and one using R7.3 (released in September 2014). Base-called sequence data are provided to demonstrate the nature of data produced by the MinION™ platform and to encourage the development of customised methods for alignment, consensus and variant calling, de novo assembly and scaffolding. FAST5 files containing event data within the HDF5 container format are provided to assist with the development of improved base-calling methods.
Force spectroscopy of biomolecular folding and binding: theory meets experiment
NASA Astrophysics Data System (ADS)
Dudko, Olga
2015-03-01
Conformational transitions in biological macromolecules usually serve as the mechanism that brings biomolecules into their working shape and enables their biological function. Single-molecule force spectroscopy probes conformational transitions by applying force to individual macromolecules and recording their response, or "mechanical fingerprints," in the form of force-extension curves. However, how can we decode these fingerprints so that they reveal the kinetic barriers and the associated timescales of a biological process? I will present an analytical theory of the mechanical fingerprints of macromolecules. The theory is suitable for decoding such fingerprints to extract the barriers and timescales. The application of the theory will be illustrated through recent studies on protein-DNA interactions and the receptor-ligand complexes involved in blood clot formation.
Nanoscale Bio-engineering Solutions for Space Exploration: The Nanopore Sequencer
NASA Technical Reports Server (NTRS)
Stolc, Viktor; Cozmuta, Ioana
2004-01-01
Characterization of biological systems at the molecular level and extraction of essential information for nano-engineering design to guide the nano-fabrication of solid-state sensors and molecular identification devices is a computational challenge. The alpha hemolysin protein ion channel is used as a model system for structural analysis of nucleic acids like DNA. Applied voltage draws a DNA strand and surrounding ionic solution through the biological nanopore. The subunits in the DNA strand block ion flow by differing amounts. Atomistic scale simulations are employed using NASA supercomputers to study DNA translocation, with the aim to enhance single DNA subunit identification. Compared to protein channels, solid-state nanopores offer a better temporal control of the translocation of DNA and the possibility to easily tune its chemistry to increase the signal resolution. Potential applications for NASA missions, besides real-time genome sequencing include astronaut health, life detection and decoding of various genomes.
Chemical biology on the genome.
Balasubramanian, Shankar
2014-08-15
In this article I discuss studies towards understanding the structure and function of DNA in the context of genomes from the perspective of a chemist. The first area I describe concerns the studies that led to the invention and subsequent development of a method for sequencing DNA on a genome scale at high speed and low cost, now known as Solexa/Illumina sequencing. The second theme will feature the four-stranded DNA structure known as a G-quadruplex with a focus on its fundamental properties, its presence in cellular genomic DNA and the prospects for targeting such a structure in cells with small molecules. The final topic for discussion is naturally occurring chemically modified DNA bases with an emphasis on chemistry for decoding (or sequencing) such modifications in genomic DNA. The genome is a fruitful topic to be further elucidated by the creation and application of chemical approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
Nanoscale Bioengineering Solutions for Space Exploration the Nanopore Sequencer
NASA Technical Reports Server (NTRS)
Cozmuta, Ioana; Stolc, Viktor
2005-01-01
Characterization of biological systems at the molecular level and extraction of essential information for nano-engineering design to guide the nano-fabrication of solid-state sensors and molecular identification devices is a computational challenge. The alpha hemolysin protein ion channel is used as a model system for structural analysis of nucleic acids like DNA. Applied voltage draws a DNA strand and surrounding ionic solution through the biological nanopore. The subunits in the DNA strand block ion flow by differing amounts. Atomistic scale simulations are employed using NASA supercomputers to study DNA translocation, with the aim to enhance single DNA subunit identification. Compared to protein channels, solid-state nanopores offer a better temporal control of the translocation of DNA and the possibility to easily tune its chemistry to increase the signal resolution. Potential applications for NASA missions, besides real-time genome sequencing include astronaut health, life detection and decoding of various genomes. http://phenomrph.arc.nasa.gov/index.php
Random access in large-scale DNA data storage.
Organick, Lee; Ang, Siena Dumas; Chen, Yuan-Jyue; Lopez, Randolph; Yekhanin, Sergey; Makarychev, Konstantin; Racz, Miklos Z; Kamath, Govinda; Gopalan, Parikshit; Nguyen, Bichlien; Takahashi, Christopher N; Newman, Sharon; Parker, Hsing-Yeh; Rashtchian, Cyrus; Stewart, Kendall; Gupta, Gagan; Carlson, Robert; Mulligan, John; Carmean, Douglas; Seelig, Georg; Ceze, Luis; Strauss, Karin
2018-03-01
Synthetic DNA is durable and can encode digital data with high density, making it an attractive medium for data storage. However, recovering stored data on a large scale currently requires all the DNA in a pool to be sequenced, even if only a subset of the information needs to be extracted. Here, we encode and store 35 distinct files (over 200 MB of data) in more than 13 million DNA oligonucleotides, and show that we can recover each file individually and with no errors, using a random access approach. We design and validate a large library of primers that enable individual recovery of all files stored within the DNA. We also develop an algorithm that greatly reduces the sequencing read coverage required for error-free decoding by maximizing information from all sequence reads. These advances demonstrate a viable, large-scale system for DNA data storage and retrieval.
A Rewritable, Random-Access DNA-Based Storage System.
Yazdi, S M Hossein Tabatabaei; Yuan, Yongbo; Ma, Jian; Zhao, Huimin; Milenkovic, Olgica
2015-09-18
We describe the first DNA-based storage architecture that enables random access to data blocks and rewriting of information stored at arbitrary locations within the blocks. The newly developed architecture overcomes drawbacks of existing read-only methods that require decoding the whole file in order to read one data fragment. Our system is based on new constrained coding techniques and accompanying DNA editing methods that ensure data reliability, specificity and sensitivity of access, and at the same time provide exceptionally high data storage capacity. As a proof of concept, we encoded parts of the Wikipedia pages of six universities in the USA, and selected and edited parts of the text written in DNA corresponding to three of these schools. The results suggest that DNA is a versatile medium suitable for both ultrahigh density archival and rewritable storage applications.
A Rewritable, Random-Access DNA-Based Storage System
NASA Astrophysics Data System (ADS)
Tabatabaei Yazdi, S. M. Hossein; Yuan, Yongbo; Ma, Jian; Zhao, Huimin; Milenkovic, Olgica
2015-09-01
We describe the first DNA-based storage architecture that enables random access to data blocks and rewriting of information stored at arbitrary locations within the blocks. The newly developed architecture overcomes drawbacks of existing read-only methods that require decoding the whole file in order to read one data fragment. Our system is based on new constrained coding techniques and accompanying DNA editing methods that ensure data reliability, specificity and sensitivity of access, and at the same time provide exceptionally high data storage capacity. As a proof of concept, we encoded parts of the Wikipedia pages of six universities in the USA, and selected and edited parts of the text written in DNA corresponding to three of these schools. The results suggest that DNA is a versatile medium suitable for both ultrahigh density archival and rewritable storage applications.
Microfluidic DNA sample preparation method and device
Krulevitch, Peter A.; Miles, Robin R.; Wang, Xiao-Bo; Mariella, Raymond P.; Gascoyne, Peter R. C.; Balch, Joseph W.
2002-01-01
Manipulation of DNA molecules in solution has become an essential aspect of genetic analyses used for biomedical assays, the identification of hazardous bacterial agents, and the decoding of the human genome. Currently, most of the steps involved in preparing a DNA sample for analysis are performed manually and are time, labor, and equipment intensive. These steps include extraction of the DNA from spores or cells, separation of the DNA from other particles and molecules in the solution (e.g. dust, smoke, cell/spore debris, and proteins), and separation of the DNA itself into strands of specific lengths. Dielectrophoresis (DEP), a phenomenon whereby polarizable particles move in response to a gradient in electric field, can be used to manipulate and separate DNA in an automated fashion, considerably reducing the time and expense involved in DNA analyses, as well as allowing for the miniaturization of DNA analysis instruments. These applications include direct transport of DNA, trapping of DNA to allow for its separation from other particles or molecules in the solution, and the separation of DNA into strands of varying lengths.
Modulation-frequency encoded multi-color fluorescent DNA analysis in an optofluidic chip.
Dongre, Chaitanya; van Weerd, Jasper; Besselink, Geert A J; Vazquez, Rebeca Martinez; Osellame, Roberto; Cerullo, Giulio; van Weeghel, Rob; van den Vlekkert, Hans H; Hoekstra, Hugo J W M; Pollnau, Markus
2011-02-21
We introduce a principle of parallel optical processing to an optofluidic lab-on-a-chip. During electrophoretic separation, the ultra-low limit of detection achieved with our set-up allows us to record fluorescence from covalently end-labeled DNA molecules. Different sets of exclusively color-labeled DNA fragments, otherwise rendered indistinguishable by spatio-temporal coincidence, are traced back to their origin by modulation-frequency-encoded multi-wavelength laser excitation, fluorescence detection with a single ultrasensitive, albeit color-blind, photomultiplier, and Fourier analysis decoding. As a proof of principle, fragments obtained by multiplex ligation-dependent probe amplification from independent human genomic segments, associated with genetic predispositions to breast cancer and anemia, are simultaneously analyzed.
Fast online and index-based algorithms for approximate search of RNA sequence-structure patterns
2013-01-01
Background: It is well known that the search for homologous RNAs is more effective if both sequence and structure information is incorporated into the search. However, current tools for searching with RNA sequence-structure patterns cannot fully handle mutations occurring on both these levels or are simply not fast enough for searching large sequence databases because of the high computational costs of the underlying sequence-structure alignment problem. Results: We present new fast index-based and online algorithms for approximate matching of RNA sequence-structure patterns supporting a full set of edit operations on single bases and base pairs. Our methods efficiently compute semi-global alignments of structural RNA patterns and substrings of the target sequence whose costs satisfy a user-defined sequence-structure edit distance threshold. For this purpose, we introduce a new computing scheme to optimally reuse the entries of the required dynamic programming matrices for all substrings and combine it with a technique for avoiding the alignment computation of non-matching substrings. Our new index-based methods exploit suffix arrays preprocessed from the target database and achieve running times that are sublinear in the size of the searched sequences. To support the description of RNA molecules that fold into complex secondary structures with multiple ordered sequence-structure patterns, we use fast algorithms for the local or global chaining of approximate sequence-structure pattern matches. The chaining step removes spurious matches from the set of intermediate results, in particular of patterns with little specificity. In benchmark experiments on the Rfam database, our improved online algorithm is faster than the best previous method by a factor of up to 45. Our best new index-based algorithm achieves a speedup factor of 560. Conclusions: The presented methods achieve considerable speedups compared to the best previous method. This, together with the expected sublinear running time of the presented index-based algorithms, allows for the first time approximate matching of RNA sequence-structure patterns in large sequence databases. Beyond the algorithmic contributions, we provide RaligNAtor, a robust and well-documented open-source software package implementing the algorithms presented in this manuscript. The RaligNAtor software is available at http://www.zbh.uni-hamburg.de/ralignator. PMID:23865810
Nguyen, Hoang Hiep; Park, Jeho; Hwang, Seungwoo; Kwon, Oh Seok; Lee, Chang-Soo; Shin, Yong-Beom; Ha, Tai Hwan; Kim, Moonil
2018-01-10
We report the development of an on-chip fluorescence switching system based on DNA strand displacement and DNA hybridization for the construction of a rewritable and randomly accessible data storage device. In this study, the feasibility and potential effectiveness of our proposed system were evaluated with a series of wet experiments involving 40 bits (5 bytes) of data encoding a five-character text (KRIBB). Also, a flexible data rewriting function was achieved by converting fluorescence signals between "ON" and "OFF" through DNA strand displacement and hybridization events. In addition, the proposed system was successfully validated on a microfluidic chip, which could further facilitate the encoding and decoding of data. To the best of our knowledge, this is the first report on the use of DNA hybridization and DNA strand displacement in the field of data storage devices. Taken together, our results demonstrate that DNA-based fluorescence switching could be applied to construct a rewritable and randomly accessible data storage device through controllable DNA manipulations.
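As a rough illustration of the data sizes involved, the following plain Python sketch (written for this summary, not taken from the paper) shows how a five-character ASCII text such as "KRIBB" maps onto 40 bits, each of which would correspond to one ON/OFF fluorescence state in the proposed device.

def text_to_bits(text):
    # 8 bits per ASCII character, most significant bit first
    bits = []
    for ch in text:
        bits.extend((ord(ch) >> i) & 1 for i in range(7, -1, -1))
    return bits

def bits_to_text(bits):
    chars = []
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        chars.append(chr(byte))
    return "".join(chars)

bits = text_to_bits("KRIBB")        # 40 bits = 5 bytes, as in the paper
assert len(bits) == 40
assert bits_to_text(bits) == "KRIBB"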
A deformation energy-based model for predicting nucleosome dyads and occupancy
Liu, Guoqing; Xing, Yongqiang; Zhao, Hongyu; Wang, Jianying; Shang, Yu; Cai, Lu
2016-01-01
The nucleosome plays an essential role in various cellular processes, such as DNA replication, recombination, and transcription. Hence, it is important to decode the mechanism of nucleosome positioning and identify nucleosome positions in the genome. In this paper, we present a model for predicting nucleosome positioning based on DNA deformation, in which both bending and shearing of the nucleosomal DNA are considered. The model successfully predicted the dyad positions of nucleosomes assembled in vitro and the in vitro map of nucleosomes in Saccharomyces cerevisiae. Applying the model to Caenorhabditis elegans and Drosophila melanogaster, we also achieved satisfactory results. Our data further show that the shearing energy of nucleosomal DNA outperforms bending energy in predicting nucleosome occupancy, whereas the ability to predict nucleosome dyad positions is attributed to bending energy, which is associated with the rotational positioning of nucleosomes. PMID:27053067
A novel image encryption algorithm based on the chaotic system and DNA computing
NASA Astrophysics Data System (ADS)
Chai, Xiuli; Gan, Zhihua; Lu, Yang; Chen, Yiran; Han, Daojun
A novel image encryption algorithm using a chaotic system and deoxyribonucleic acid (DNA) computing is presented. Unlike traditional encryption methods, the permutation and diffusion steps of our method operate on a 3D DNA matrix. First, a 3D DNA matrix is obtained through bit-plane splitting, bit-plane recombination and DNA encoding of the plain image. Second, 3D DNA-level permutation based on a position sequence group (3DDNALPBPSG) is introduced, and chaotic sequences generated by the chaotic system are employed to permute the positions of the elements of the 3D DNA matrix. Third, 3D DNA-level diffusion (3DDNALD) is applied: the confused 3D DNA matrix is split into sub-blocks, and a block-wise XOR operation is performed between the sub-DNA matrices and the key DNA matrix derived from the chaotic system. Finally, the cipher image is obtained by decoding the diffused DNA matrix. The SHA-256 hash of the plain image is used to compute the initial values of the chaotic system, which protects against chosen-plaintext attack. Experimental results and security analyses show that our scheme is secure against several known attacks and can effectively protect the security of images.
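The sketch below illustrates two of the building blocks named above, DNA encoding of bytes and an XOR between DNA sequences, in plain Python. The encoding rule used here (A=00, C=01, G=10, T=11) is one of the standard rules; the paper's rule selection, 3D permutation and chaotic key generation are not reproduced.

RULE = {0: "A", 1: "C", 2: "G", 3: "T"}     # one of the eight standard DNA encodings
INV = {v: k for k, v in RULE.items()}

def byte_to_dna(b):
    # two bits per base, most significant bit pair first
    return "".join(RULE[(b >> s) & 3] for s in (6, 4, 2, 0))

def dna_xor(x, y):
    # XOR of two DNA strings, defined through the underlying bit pairs
    return "".join(RULE[INV[a] ^ INV[b]] for a, b in zip(x, y))

plain = byte_to_dna(0x5A)                   # 'CCGG'
key = byte_to_dna(0xC3)                     # 'TAAT'
print(dna_xor(plain, key))                  # 'GCGC'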
Biosamples, genomics, and human rights: context and content of Iceland's Biobanks Act.
Winickoff, D E
2001-01-01
In recent years, human DNA sampling and collection has accelerated without the development of enforceable rules protecting the human rights of donors. The need for regulation of biobanking is especially acute in Iceland, whose parliament has granted a for-profit corporation, deCODE Genetics, an exclusive license to create a centralized database of health records for studies on human genetic variation. Until recently, how deCODE Genetics would get genetic material for its genotypic-phenotypic database remained unclear. However, in May 2000, the Icelandic Parliament passed the Icelandic Biobanks Act, the world's earliest attempt to construct binding rules for the use of biobanks in scientific research. Unfortunately, Iceland has lost an opportunity for bringing clear and ethically sound standards to the use of human biological samples in deCODE's database and in other projects: the Biobanks Act has extended a notion of "presumed consent" from the use of medical records to the use of patients' biological samples; worse, the act has made it possible, perhaps even likely, that a donor's wish to withdraw his/her sample will be ignored. Inadequacies in the Act's legislative process help account for these deficiencies in the protection of donor autonomy.
Computer Programming Manual for the Jovial (J73) Language
1981-06-01
The JOVIAL (J73) built-in function BYTE extracts a substring from a character string; for example, BYTE("ABCDEF", 2, 3) extracts "BCD" from the string "ABCDEF". A block groups items, tables, and other blocks into contiguous storage. For bit strings, the start position specifies where the substring to be extracted starts and the length specifies the number of bits in the substring; bits in a bit string are numbered from left to right.
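For readers unfamiliar with JOVIAL, the following plain Python analogue (assuming the 1-based indexing implied by the example above, not quoted from the manual) reproduces the behaviour of the BYTE built-in.

def byte_substr(s, start, length):
    # JOVIAL-style BYTE extraction with a 1-based start position
    return s[start - 1:start - 1 + length]

assert byte_substr("ABCDEF", 2, 3) == "BCD"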
Methods for decoding Cas9 protospacer adjacent motif (PAM) sequences: A brief overview.
Karvelis, Tautvydas; Gasiunas, Giedrius; Siksnys, Virginijus
2017-05-15
Recently, Cas9, an RNA-guided DNA endonuclease, has emerged as a powerful tool for targeted genome manipulations. The Cas9 protein can be reprogrammed to cleave, bind or nick any DNA target by simply changing the crRNA sequence; however, a short nucleotide sequence, termed the PAM, is required to initiate crRNA hybridization to the DNA target. The PAM sequence is recognized by the Cas9 protein and must be determined experimentally for each Cas9 variant. Exploration of Cas9 orthologs could offer a diversity of PAM sequences and novel biochemical properties that may be beneficial for genome editing applications. Here we briefly review and compare Cas9 PAM identification assays that can be adopted for other PAM-dependent CRISPR-Cas systems. Copyright © 2017 Elsevier Inc. All rights reserved.
Addressable configurations of DNA nanostructures for rewritable memory
Levchenko, Oksana; Patel, Dhruv S.; MacIsaac, Molly
2017-01-01
DNA serves as nature's information storage molecule, and has been the primary focus of engineered systems for biological computing and data storage. Here we combine recent efforts in DNA self-assembly and toehold-mediated strand displacement to develop a rewritable multi-bit DNA memory system. The system operates by encoding information in distinct and reversible conformations of a DNA nanoswitch and decoding by gel electrophoresis. We demonstrate a 5-bit system capable of writing, erasing, and rewriting binary representations of alphanumeric symbols, as well as compatibility with ‘OR’ and ‘AND’ logic operations. Our strategy is simple to implement, requiring only a single mixing step at room temperature for each operation and standard gel electrophoresis to read the data. We envision such systems could find use in covert product labeling and barcoding, as well as secure messaging and authentication when combined with previously developed encryption strategies. Ultimately, this type of memory has exciting potential in biomedical sciences as data storage can be coupled to sensing of biological molecules. PMID:28977499
Decoding the genome beyond sequencing: the new phase of genomic research.
Heng, Henry H Q; Liu, Guo; Stevens, Joshua B; Bremer, Steven W; Ye, Karen J; Abdallah, Batoul Y; Horne, Steven D; Ye, Christine J
2011-10-01
While our understanding of gene-based biology has greatly improved, it is clear that the function of the genome and most diseases cannot be fully explained by genes and other regulatory elements. Genes and the genome represent distinct levels of genetic organization with their own coding systems: genes code for parts such as proteins and RNAs, whereas the genome codes for the structure of genetic networks, which are defined by the whole set of genes, chromosomes and their topological interactions within a cell. Accordingly, the genetic code of DNA offers only a limited understanding of genome functions. In this perspective, we introduce the genome theory, which calls for a departure from gene-centric genomic research. To make this transition for the next phase of genomic research, it is essential to acknowledge the importance of new genome-based biological concepts and to establish new technology platforms to decode the genome beyond sequencing. Copyright © 2011 Elsevier Inc. All rights reserved.
Lonardi, Stefano; Mirebrahim, Hamid; Wanamaker, Steve; Alpert, Matthew; Ciardo, Gianfranco; Duma, Denisa; Close, Timothy J
2015-09-15
Since the invention of DNA sequencing in the 1970s, computational biologists have had to deal with the problem of de novo genome assembly with limited (or insufficient) depth of sequencing. In this work, we investigate the opposite problem, that is, the challenge of dealing with excessive depth of sequencing. We explore the effect of ultra-deep sequencing data in two domains: (i) the problem of decoding reads to bacterial artificial chromosome (BAC) clones (in the context of the combinatorial pooling design we have recently proposed), and (ii) the problem of de novo assembly of BAC clones. Using real ultra-deep sequencing data, we show that when the depth of sequencing increases over a certain threshold, sequencing errors make these two problems harder and harder (instead of easier, as one would expect with error-free data), and as a consequence the quality of the solution degrades with more and more data. For the first problem, we propose an effective solution based on 'divide and conquer': we 'slice' a large dataset into smaller samples of optimal size, decode each slice independently, and then merge the results. Experimental results on over 15 000 barley BACs and over 4000 cowpea BACs demonstrate a significant improvement in the quality of the decoding and the final assembly. For the second problem, we show for the first time that modern de novo assemblers cannot take advantage of ultra-deep sequencing data. Python scripts to process slices and resolve decoding conflicts are available from http://goo.gl/YXgdHT; the software Hashfilter can be downloaded from http://goo.gl/MIyZHs. Contact: stelo@cs.ucr.edu or timothy.close@ucr.edu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
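A minimal sketch of the 'slice, decode, merge' strategy described above follows. The helper decode_slice and the majority-vote merge are hypothetical placeholders standing in for the authors' actual decoding scripts.

from collections import Counter

def slice_and_decode(reads, slice_size, decode_slice):
    # decode_slice maps a list of reads to {read_id: bac_id} assignments
    votes = Counter()
    for i in range(0, len(reads), slice_size):
        for read_id, bac_id in decode_slice(reads[i:i + slice_size]).items():
            votes[(read_id, bac_id)] += 1
    # resolve conflicts between slices by majority vote
    best = {}
    for (read_id, bac_id), n in votes.items():
        if read_id not in best or n > best[read_id][1]:
            best[read_id] = (bac_id, n)
    return {read_id: bac_id for read_id, (bac_id, _) in best.items()}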
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Lin, Shu
2000-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
An Interactive Concatenated Turbo Coding System
NASA Technical Reports Server (NTRS)
Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc
1999-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
Algorithms for Hidden Markov Models Restricted to Occurrences of Regular Expressions
Tataru, Paula; Sand, Andreas; Hobolth, Asger; Mailund, Thomas; Pedersen, Christian N. S.
2013-01-01
Hidden Markov Models (HMMs) are widely used probabilistic models, particularly for annotating sequential data with an underlying hidden structure. Patterns in the annotation are often more relevant to study than the hidden structure itself. A typical HMM analysis consists of annotating the observed data using a decoding algorithm and analyzing the annotation to study patterns of interest. For example, given an HMM modeling genes in DNA sequences, the focus is on occurrences of genes in the annotation. In this paper, we define a pattern through a regular expression and present a restriction of three classical algorithms to take the number of occurrences of the pattern in the hidden sequence into account. We present a new algorithm to compute the distribution of the number of pattern occurrences, and we extend the two most widely used existing decoding algorithms to employ information from this distribution. We show experimentally that the expectation of the distribution of the number of pattern occurrences gives a highly accurate estimate, while the typical procedure can be biased in the sense that the identified number of pattern occurrences does not correspond to the true number. We furthermore show that using this distribution in the decoding algorithms improves the predictive power of the model. PMID:24833225
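For context, the sketch below shows the standard (unrestricted) Viterbi decoding step that such analyses start from; the paper's restriction to a fixed number of regular-expression pattern occurrences is not implemented here.

import numpy as np

def viterbi(obs, pi, A, B):
    # obs: observation indices; pi: initial probs (K,); A: transitions (K, K);
    # B: emissions (K, M). Returns the most likely hidden state path.
    K, T = len(pi), len(obs)
    logd = np.empty((T, K))
    back = np.zeros((T, K), dtype=int)
    logd[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = logd[t - 1][:, None] + np.log(A)   # (from_state, to_state)
        back[t] = scores.argmax(axis=0)
        logd[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]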
Zhang, Xu; Wang, Fengshan; Sheng, Juzheng
2016-06-16
Heparan sulfate (HS) is widely distributed in mammalian tissues in the form of HS proteoglycans, which play essential roles in various physiological and pathological processes. In contrast to the template-guided processes involved in the synthesis of DNA and proteins, HS biosynthesis is not believed to involve a template. However, the final structure of HS chains appears to be strictly regulated. Herein, we report a research-based hypothesis that two major steps, namely the "coding" and "decoding" steps, are involved in the biosynthesis of HS and strictly regulate its chemical structure and biological activity. The "coding" process in this context is based on the distribution of sulfate moieties on the amino groups of the glucosamine residues in the HS chains. The sulfation of these amine groups is catalyzed by N-deacetylase/N-sulfotransferase, which has four isozymes. The composition and distribution of sulfate groups and iduronic acid residues on the glycan chains of HS are determined by several other modification enzymes, which can recognize these coding sequences (i.e., the "decoding" process). The degree and pattern of sulfation and epimerization in the HS chains determine the extent of their interactions with several different protein factors, which further influences their biological activity. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Dolinar, S.; Belongie, M.
1995-01-01
The Galileo low-gain antenna mission will be supported by a coding system that uses a (14,1/4) inner convolutional code concatenated with Reed-Solomon codes of four different redundancies. Decoding for this code is designed to proceed in four distinct stages of Viterbi decoding followed by Reed-Solomon decoding. In each successive stage, the Reed-Solomon decoder only tries to decode the highest redundancy codewords not yet decoded in previous stages, and the Viterbi decoder redecodes its data utilizing the known symbols from previously decoded Reed-Solomon codewords. A previous article analyzed a two-stage decoding option that was not selected by Galileo. The present article analyzes the four-stage decoding scheme and derives the near-optimum set of redundancies selected for use by Galileo. The performance improvements relative to one- and two-stage decoding systems are evaluated.
Addressable configurations of DNA nanostructures for rewritable memory.
Chandrasekaran, Arun Richard; Levchenko, Oksana; Patel, Dhruv S; MacIsaac, Molly; Halvorsen, Ken
2017-11-02
DNA serves as nature's information storage molecule, and has been the primary focus of engineered systems for biological computing and data storage. Here we combine recent efforts in DNA self-assembly and toehold-mediated strand displacement to develop a rewritable multi-bit DNA memory system. The system operates by encoding information in distinct and reversible conformations of a DNA nanoswitch and decoding by gel electrophoresis. We demonstrate a 5-bit system capable of writing, erasing, and rewriting binary representations of alphanumeric symbols, as well as compatibility with 'OR' and 'AND' logic operations. Our strategy is simple to implement, requiring only a single mixing step at room temperature for each operation and standard gel electrophoresis to read the data. We envision such systems could find use in covert product labeling and barcoding, as well as secure messaging and authentication when combined with previously developed encryption strategies. Ultimately, this type of memory has exciting potential in biomedical sciences as data storage can be coupled to sensing of biological molecules. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Low RNA translation activity limits the efficacy of hydrodynamic gene transfer to pig liver in vivo.
Sendra, Luis; Carreño, Omar; Miguel, Antonio; Montalvá, Eva; Herrero, María José; Orbis, Francisco; Noguera, Inmaculada; Barettino, Domingo; López-Andújar, Rafael; Aliño, Salvador F
2014-01-01
Hydrodynamic gene delivery has proved an efficient strategy for nonviral gene therapy in the murine liver, but it has been less efficient in pigs. The reason for this inefficiency remains unclear. The present study used a surgical strategy to seal the whole pig liver in vivo. A solution of enhanced green fluorescent protein (eGFP) DNA was injected under two different venous injection conditions (anterograde and retrograde), employing flow rates of 10 and 20 ml/s in each case, with the aim of identifying the best gene transfer conditions. The gene delivery and information decoding steps were evaluated by measuring the eGFP DNA, mRNA and protein copy numbers 24 h after transfection. In addition, gold nanoparticles (diameters of 4 and 15 nm) were retrogradely injected (10 ml/s) to observe, by electron microscopy, the ability of the particles to access the hepatocytes. The gene delivery level was higher with anterograde injection, whereas the efficacy of gene expression was better with retrograde injection, suggesting differences in the decoding processes. Thus, retrograde injection mediates gene transcription (mRNA copies/cell) equivalent to that of intermediate-expression proteins, but the mRNA translation was lower than that of rare proteins. Electron microscopy showed that nanoparticles within the hepatocytes were almost exclusively 4 nm in diameter. The results suggest that the low activity of mRNA translation limits the final efficacy of the gene transfer procedure. On the other hand, the gold nanoparticle study suggests that an elongated DNA conformation could offer advantages, given that the access of 15-nm particles is very limited. Copyright © 2014 John Wiley & Sons, Ltd.
Submicrometre geometrically encoded fluorescent barcodes self-assembled from DNA
NASA Astrophysics Data System (ADS)
Lin, Chenxiang; Jungmann, Ralf; Leifer, Andrew M.; Li, Chao; Levner, Daniel; Church, George M.; Shih, William M.; Yin, Peng
2012-10-01
The identification and differentiation of a large number of distinct molecular species with high temporal and spatial resolution is a major challenge in biomedical science. Fluorescence microscopy is a powerful tool, but its multiplexing ability is limited by the number of spectrally distinguishable fluorophores. Here, we used (deoxy)ribonucleic acid (DNA)-origami technology to construct submicrometre nanorods that act as fluorescent barcodes. We demonstrate that spatial control over the positioning of fluorophores on the surface of a stiff DNA nanorod can produce 216 distinct barcodes that can be decoded unambiguously using epifluorescence or total internal reflection fluorescence microscopy. Barcodes with higher spatial information density were demonstrated via the construction of super-resolution barcodes with features spaced by ˜40 nm. One species of the barcodes was used to tag yeast surface receptors, which suggests their potential applications as in situ imaging probes for diverse biomolecular and cellular entities in their native environments.
Sub-micrometer Geometrically Encoded Fluorescent Barcodes Self-Assembled from DNA
Lin, Chenxiang; Jungmann, Ralf; Leifer, Andrew M.; Li, Chao; Levner, Daniel; Church, George M.; Shih, William M.; Yin, Peng
2012-01-01
The identification and differentiation of a large number of distinct molecular species with high temporal and spatial resolution is a major challenge in biomedical science. Fluorescence microscopy is a powerful tool, but its multiplexing ability is limited by the number of spectrally distinguishable fluorophores. Here we use DNA-origami technology to construct sub-micrometer nanorods that act as fluorescent barcodes. We demonstrate that spatial control over the positioning of fluorophores on the surface of a stiff DNA nanorod can produce 216 distinct barcodes that can be unambiguously decoded using epifluorescence or total internal reflection fluorescence (TIRF) microscopy. Barcodes with higher spatial information density were demonstrated via the construction of super-resolution barcodes with features spaced by ~40 nm. One species of the barcodes was used to tag yeast surface receptors, suggesting their potential applications as in situ imaging probes for diverse biomolecular and cellular entities in their native environments. PMID:23000997
NASA Technical Reports Server (NTRS)
Quir, Kevin J.; Gin, Jonathan W.; Nguyen, Danh H.; Nguyen, Huy; Nakashima, Michael A.; Moision, Bruce E.
2012-01-01
A decoder was developed that decodes a serial concatenated pulse position modulation (SCPPM) encoded information sequence. The decoder takes as input a sequence of four-bit log-likelihood ratios (LLRs) for each PPM slot in a codeword via a XAUI 10-Gb/s quad optical fiber interface. If the decoder is unavailable, it passes the LLRs on to the next decoder via a XAUI 10-Gb/s quad optical fiber interface; otherwise, it decodes the sequence and outputs information bits through a 1-Gb/s Ethernet UDP/IP (User Datagram Protocol/Internet Protocol) interface. The throughput for a single decoder unit is 150 Mb/s at an average of four decoding iterations; by connecting a number of decoder units in series, a decoding rate equal to the aggregate rate is achieved. The unit is controlled through a 1-Gb/s Ethernet UDP/IP interface. This ground station decoder was developed to demonstrate a deep-space optical communication link capability, and is unique in its scalable design to achieve real-time SCPPM decoding at the aggregate data rate.
DNAApp: a mobile application for sequencing data analysis
Nguyen, Phi-Vu; Verma, Chandra Shekhar; Gan, Samuel Ken-En
2014-01-01
Summary: There have been numerous applications developed for decoding and visualization of ab1 DNA sequencing files for Windows and MAC platforms, yet none exists for the increasingly popular smartphone operating systems. The ability to decode sequencing files cannot easily be carried out using browser-accessed Web tools. To overcome this hurdle, we have developed a new native app called DNAApp that can decode and display ab1 sequencing files on Android and iOS. In addition to in-built analysis tools such as reverse complementation, protein translation and searching for specific sequences, we have incorporated convenient functions that facilitate the harnessing of online Web tools for a full range of analysis. Given the high usage of Android/iOS tablets and smartphones, such bioinformatics apps would raise productivity and help meet the high demand for analyzing sequencing data in biomedical research. Availability and implementation: The Android version of DNAApp is available in Google Play Store as ‘DNAApp’, and the iOS version is available in the App Store. More details on the app can be found at www.facebook.com/APDLab; www.bii.a-star.edu.sg/research/trd/apd.php The DNAApp user guide is available at http://tinyurl.com/DNAAppuser, and a video tutorial is available on Google Play Store and App Store, as well as on the Facebook page. Contact: samuelg@bii.a-star.edu.sg PMID:25095882
DNAApp: a mobile application for sequencing data analysis.
Nguyen, Phi-Vu; Verma, Chandra Shekhar; Gan, Samuel Ken-En
2014-11-15
There have been numerous applications developed for decoding and visualization of ab1 DNA sequencing files for Windows and MAC platforms, yet none exists for the increasingly popular smartphone operating systems. The ability to decode sequencing files cannot easily be carried out using browser-accessed Web tools. To overcome this hurdle, we have developed a new native app called DNAApp that can decode and display ab1 sequencing files on Android and iOS. In addition to in-built analysis tools such as reverse complementation, protein translation and searching for specific sequences, we have incorporated convenient functions that facilitate the harnessing of online Web tools for a full range of analysis. Given the high usage of Android/iOS tablets and smartphones, such bioinformatics apps would raise productivity and help meet the high demand for analyzing sequencing data in biomedical research. The Android version of DNAApp is available in Google Play Store as 'DNAApp', and the iOS version is available in the App Store. More details on the app can be found at www.facebook.com/APDLab; www.bii.a-star.edu.sg/research/trd/apd.php The DNAApp user guide is available at http://tinyurl.com/DNAAppuser, and a video tutorial is available on Google Play Store and App Store, as well as on the Facebook page. samuelg@bii.a-star.edu.sg. © The Author 2014. Published by Oxford University Press.
A novel parallel pipeline structure of VP9 decoder
NASA Astrophysics Data System (ADS)
Qin, Huabiao; Chen, Wu; Yi, Sijun; Tan, Yunfei; Yi, Huan
2018-04-01
To improve the efficiency of the VP9 decoder, a novel parallel pipeline structure is presented in this paper. According to the decoding workflow, the VP9 decoder can be divided into sub-modules, which include entropy decoding, inverse quantization, inverse transform, intra prediction, inter prediction, deblocking and pixel adaptive compensation. By analyzing the computing time of each module, hotspot modules are located and the causes of the decoder's low efficiency can be identified. A novel pipeline decoder structure is then designed using mixed parallel decoding methods of data division and function division. The experimental results show that this structure can greatly improve the decoding efficiency of VP9.
Singer product apertures-A coded aperture system with a fast decoding algorithm
NASA Astrophysics Data System (ADS)
Byard, Kevin; Shutler, Paul M. E.
2017-06-01
A new type of coded aperture configuration that enables fast decoding of the coded aperture shadowgram data is presented. Based on the products of incidence vectors generated from the Singer difference sets, we call these Singer product apertures. For a range of aperture dimensions, we compare experimentally the performance of three decoding methods: standard decoding, induction decoding and direct vector decoding. In all cases the induction and direct vector methods are several orders of magnitude faster than the standard method, with direct vector decoding being significantly faster than induction decoding. For apertures of the same dimensions, the increase in speed offered by direct vector decoding over induction decoding is greater for lower-throughput apertures.
Design and synthesis of digitally encoded polymers that can be decoded and erased
NASA Astrophysics Data System (ADS)
Roy, Raj Kumar; Meszynska, Anna; Laure, Chloé; Charles, Laurence; Verchin, Claire; Lutz, Jean-François
2015-05-01
Biopolymers such as DNA store information in their chains using controlled sequences of monomers. Here we describe a non-natural information-containing macromolecule that can store and retrieve digital information. Monodisperse sequence-encoded poly(alkoxyamine amide)s were synthesized using an iterative strategy employing two chemoselective steps: the reaction of a primary amine with an acid anhydride and the radical coupling of a carbon-centred radical with a nitroxide. A binary code was implemented in the polymer chains using three monomers: one nitroxide spacer and two interchangeable anhydrides defined as 0-bit and 1-bit. This methodology allows encryption of any desired sequence in the chains. Moreover, the formed sequences are easy to decode using tandem mass spectrometry. Indeed, these polymers follow predictable fragmentation pathways that can be easily deciphered. Moreover, poly(alkoxyamine amide)s are thermolabile. Thus, the digital information encrypted in the chains can be erased by heating the polymers in the solid state or in solution.
Design and synthesis of digitally encoded polymers that can be decoded and erased.
Roy, Raj Kumar; Meszynska, Anna; Laure, Chloé; Charles, Laurence; Verchin, Claire; Lutz, Jean-François
2015-05-26
Biopolymers such as DNA store information in their chains using controlled sequences of monomers. Here we describe a non-natural information-containing macromolecule that can store and retrieve digital information. Monodisperse sequence-encoded poly(alkoxyamine amide)s were synthesized using an iterative strategy employing two chemoselective steps: the reaction of a primary amine with an acid anhydride and the radical coupling of a carbon-centred radical with a nitroxide. A binary code was implemented in the polymer chains using three monomers: one nitroxide spacer and two interchangeable anhydrides defined as 0-bit and 1-bit. This methodology allows encryption of any desired sequence in the chains. Moreover, the formed sequences are easy to decode using tandem mass spectrometry. Indeed, these polymers follow predictable fragmentation pathways that can be easily deciphered. Moreover, poly(alkoxyamine amide)s are thermolabile. Thus, the digital information encrypted in the chains can be erased by heating the polymers in the solid state or in solution.
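A toy Python illustration of the coding principle described above follows; the monomer labels ('a0', 'a1' and 'nx') are invented here for readability and are not the authors' nomenclature.

BIT = {"a0": "0", "a1": "1"}            # the two interchangeable anhydrides

def encode(bits):
    chain = []
    for b in bits:
        chain.append("a0" if b == "0" else "a1")
        chain.append("nx")              # nitroxide spacer after each coding unit
    return chain

def decode(chain):
    return "".join(BIT[m] for m in chain if m in BIT)

chain = encode("01001000")              # eight bits, e.g. ASCII 'H'
assert decode(chain) == "01001000"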
1980-06-01
some basic ideas for a strategy to overcome these constraints. The proposed strategy includes prioritizing the Spokane Street Bridge and forming a... pesticides, PCBs and other toxic substances. The source of this pollution can be traced, in part, to combined and storm overflows and accidental spills...regarding toxicants in the river was reviewed. Data for PCBs, pesticides, metals, and oil and grease are patchy. In all cases reported, the levels of
Overview of BioCreative II gene mention recognition.
Smith, Larry; Tanabe, Lorraine K; Ando, Rie Johnson nee; Kuo, Cheng-Ju; Chung, I-Fang; Hsu, Chun-Nan; Lin, Yu-Shi; Klinger, Roman; Friedrich, Christoph M; Ganchev, Kuzman; Torii, Manabu; Liu, Hongfang; Haddow, Barry; Struble, Craig A; Povinelli, Richard J; Vlachos, Andreas; Baumgartner, William A; Hunter, Lawrence; Carpenter, Bob; Tsai, Richard Tzong-Han; Dai, Hong-Jie; Liu, Feng; Chen, Yifei; Sun, Chengjie; Katrenko, Sophia; Adriaans, Pieter; Blaschke, Christian; Torres, Rafael; Neves, Mariana; Nakov, Preslav; Divoli, Anna; Maña-López, Manuel; Mata, Jacinto; Wilbur, W John
2008-01-01
Nineteen teams presented results for the Gene Mention Task at the BioCreative II Workshop. In this task participants designed systems to identify substrings in sentences corresponding to gene name mentions. A variety of different methods were used and the results varied with a highest achieved F1 score of 0.8721. Here we present brief descriptions of all the methods used and a statistical analysis of the results. We also demonstrate that, by combining the results from all submissions, an F score of 0.9066 is feasible, and furthermore that the best result makes use of the lowest scoring submissions.
Overview of BioCreative II gene mention recognition
Smith, Larry; Tanabe, Lorraine K; Ando, Rie Johnson nee; Kuo, Cheng-Ju; Chung, I-Fang; Hsu, Chun-Nan; Lin, Yu-Shi; Klinger, Roman; Friedrich, Christoph M; Ganchev, Kuzman; Torii, Manabu; Liu, Hongfang; Haddow, Barry; Struble, Craig A; Povinelli, Richard J; Vlachos, Andreas; Baumgartner, William A; Hunter, Lawrence; Carpenter, Bob; Tsai, Richard Tzong-Han; Dai, Hong-Jie; Liu, Feng; Chen, Yifei; Sun, Chengjie; Katrenko, Sophia; Adriaans, Pieter; Blaschke, Christian; Torres, Rafael; Neves, Mariana; Nakov, Preslav; Divoli, Anna; Maña-López, Manuel; Mata, Jacinto; Wilbur, W John
2008-01-01
Nineteen teams presented results for the Gene Mention Task at the BioCreative II Workshop. In this task participants designed systems to identify substrings in sentences corresponding to gene name mentions. A variety of different methods were used and the results varied with a highest achieved F1 score of 0.8721. Here we present brief descriptions of all the methods used and a statistical analysis of the results. We also demonstrate that, by combining the results from all submissions, an F score of 0.9066 is feasible, and furthermore that the best result makes use of the lowest scoring submissions. PMID:18834493
Safety Performance of Small Lithium-Ion Cells in High Voltage Batteries
NASA Technical Reports Server (NTRS)
Cowles, Philip R.; Darcy, Eric C.; Davies, Frank J.; Jeevarajan, Judith A.; Spurrett, Robert P.
2003-01-01
Topics covered include: Small-cell EAPU work done by NASA-JSC & COM DEV; Looking at safety features (short circuit protection - PTCs); Early tests showed that long strings do not withstand short circuit: a) Some PTCs experience large negative voltages; b) Destructive results. Solution: group cells into shorter substrings, with bypass diodes. Work included: a) Tests with single cells shorted; b) Tests with single cells with imposed negative voltages; c) 6s, 7s and 8s string shorts; and d) Tests with the protection scheme in place, on 12s and 41s x 5p.
Essentials of Conservation Biotechnology: A mini review
NASA Astrophysics Data System (ADS)
Merlyn Keziah, S.; Subathra Devi, C.
2017-11-01
Equilibrium of biodiversity is essential for the maintenance of the ecosystem, as its components are interdependent on each other. The decline in biodiversity is a global problem and an inevitable threat to mankind. Major threats include unsustainable exploitation, habitat destruction, fragmentation, transformation, genetic pollution, invasive exotic species and degradation. This review covers the management strategies of conservation biotechnology, which include in situ and ex situ conservation, computerized taxonomic analysis through the construction of phylogenetic trees, calculating genetic distance, prioritizing groups for conservation, digital preservation of biodiversity with coding and decoding keys, and molecular approaches to assess biodiversity such as the polymerase chain reaction, real-time PCR, randomly amplified polymorphic DNA, restriction fragment length polymorphism, amplified fragment length polymorphism, simple sequence repeats, DNA fingerprinting, single nucleotide polymorphisms, cryopreservation and vitrification.
Gentaz, Edouard; Sprenger-Charolles, Liliane; Theurel, Anne
2015-01-01
Based on the assumption that good decoding skills constitute a bootstrapping mechanism for reading comprehension, the present study investigated the relative contribution of the former skill to the latter compared to that of three other predictors of reading comprehension (listening comprehension, vocabulary and phonemic awareness) in 392 French-speaking first graders from low SES families. This large sample was split into three groups according to their level of decoding skills assessed by pseudoword reading. Using a cutoff of 1 SD above or below the mean of the entire population, there were 63 good decoders, 267 average decoders and 62 poor decoders. 58% of the variance in reading comprehension was explained by our four predictors, with decoding skills proving to be the best predictor (12.1%, 7.3% for listening comprehension, 4.6% for vocabulary and 3.3% for phonemic awareness). Interaction between group versus decoding skills, listening comprehension and phonemic awareness accounted for significant additional variance (3.6%, 1.1% and 1.0%, respectively). The effects on reading comprehension of decoding skills and phonemic awareness were higher in poor and average decoders than in good decoders whereas listening comprehension accounted for more variance in good and average decoders than in poor decoders. Furthermore, the percentage of children with impaired reading comprehension skills was higher in the group of poor decoders (55%) than in the two other groups (average decoders: 7%; good decoders: 0%) and only 6 children (1.5%) had impaired reading comprehension skills with unimpaired decoding skills, listening comprehension or vocabulary. These results challenge the outcomes of studies on “poor comprehenders” by showing that, at least in first grade, poor reading comprehension is strongly linked to the level of decoding skills. PMID:25793519
Gentaz, Edouard; Sprenger-Charolles, Liliane; Theurel, Anne
2015-01-01
Based on the assumption that good decoding skills constitute a bootstrapping mechanism for reading comprehension, the present study investigated the relative contribution of the former skill to the latter compared to that of three other predictors of reading comprehension (listening comprehension, vocabulary and phonemic awareness) in 392 French-speaking first graders from low SES families. This large sample was split into three groups according to their level of decoding skills assessed by pseudoword reading. Using a cutoff of 1 SD above or below the mean of the entire population, there were 63 good decoders, 267 average decoders and 62 poor decoders. 58% of the variance in reading comprehension was explained by our four predictors, with decoding skills proving to be the best predictor (12.1%, 7.3% for listening comprehension, 4.6% for vocabulary and 3.3% for phonemic awareness). Interaction between group versus decoding skills, listening comprehension and phonemic awareness accounted for significant additional variance (3.6%, 1.1% and 1.0%, respectively). The effects on reading comprehension of decoding skills and phonemic awareness were higher in poor and average decoders than in good decoders whereas listening comprehension accounted for more variance in good and average decoders than in poor decoders. Furthermore, the percentage of children with impaired reading comprehension skills was higher in the group of poor decoders (55%) than in the two other groups (average decoders: 7%; good decoders: 0%) and only 6 children (1.5%) had impaired reading comprehension skills with unimpaired decoding skills, listening comprehension or vocabulary. These results challenge the outcomes of studies on "poor comprehenders" by showing that, at least in first grade, poor reading comprehension is strongly linked to the level of decoding skills.
Decoding Gene Patents in Australia
Denley, Adam; Cherry, James
2015-01-01
Patents directed to naturally occurring genetic material, such as DNA, RNA, chromosomes, and genes, in an isolated or purified form have been granted in Australia for many years. This review provides scientists with a summary of the gene patent debate from an Australian perspective and specifically reviews how the various levels of the legal system as they apply to patents—the Australian Patent Office, Australian courts, and Australian government—have dealt with the issue of whether genetic material is proper subject matter for a patent. PMID:25280901
Architecture for time or transform domain decoding of reed-solomon codes
NASA Technical Reports Server (NTRS)
Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor); Deutsch, Leslie J. (Inventor); Shao, Howard M. (Inventor)
1989-01-01
Two pipeline (255,223) RS decoders, one a time-domain decoder and the other a transform-domain decoder, use the same first part to develop an errata locator polynomial τ(x) and an errata evaluator polynomial A(x). Both the time-domain decoder and the transform-domain decoder have a modified GCD unit that uses an input multiplexer and an output demultiplexer to reduce the number of GCD cells required. The time-domain decoder uses a Chien search and polynomial evaluator on the GCD outputs τ(x) and A(x) for the final decoding steps, while the transform-domain decoder uses a transform error pattern algorithm operating on τ(x) and the initial syndrome computation S(x), followed by an inverse transform algorithm, for the final decoding steps prior to adding the received RS-coded message to produce a decoded output message.
FPGA implementation of low complexity LDPC iterative decoder
NASA Astrophysics Data System (ADS)
Verma, Shivani; Sharma, Sanjay
2016-07-01
Low-density parity-check (LDPC) codes, proposed by Gallager, emerged as a class of codes that can yield very good performance on the additive white Gaussian noise channel as well as on the binary symmetric channel. LDPC codes have gained importance due to their capacity-approaching property and excellent performance over noisy channels. The belief propagation (BP) algorithm and its approximations, most notably min-sum, are popular iterative decoding algorithms used for LDPC and turbo codes. The trade-off between hardware complexity and decoding throughput is a critical factor in the implementation of a practical decoder. This article presents an introduction to LDPC codes and their various decoding algorithms, followed by the realisation of an LDPC decoder using a simplified message-passing algorithm and a partially parallel decoder architecture. The simplified message-passing algorithm is proposed as a trade-off between low decoding complexity and decoder performance; it greatly reduces the routing and check-node complexity of the decoder. The partially parallel decoder architecture offers high speed and reduced complexity. The improved design achieves a maximum symbol throughput of 92.95 Mbps and a maximum of 18 decoding iterations. The article presents the implementation of a 9216-bit, rate-1/2, (3, 6) LDPC decoder on a Xilinx XC3D3400A device from the Spartan-3A DSP family.
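For reference, the sketch below is a textbook min-sum decoder with a flooding schedule, given a parity-check matrix H and channel LLRs in the log(P(0)/P(1)) convention; it is not the article's simplified algorithm or its partially parallel hardware architecture.

import numpy as np

def min_sum_decode(H, llr, max_iter=20):
    # H: (m, n) binary parity-check matrix; llr: one channel LLR per code bit
    m, n = H.shape
    v2c = np.tile(llr, (m, 1)) * H                 # variable-to-check messages (per edge)
    hard = (llr < 0).astype(int)
    for _ in range(max_iter):
        c2v = np.zeros_like(v2c)                   # check-to-variable messages
        for i in range(m):
            idx = np.nonzero(H[i])[0]
            vals = v2c[i, idx]
            signs = np.where(vals < 0, -1.0, 1.0)
            prod_sign = signs.prod()
            mags = np.abs(vals)
            order = np.argsort(mags)
            min1, min2 = mags[order[0]], mags[order[1]]
            for k, j in enumerate(idx):
                # sign product and minimum magnitude of all inputs except the one from j
                mag = min2 if j == idx[order[0]] else min1
                c2v[i, j] = prod_sign * signs[k] * mag
        total = llr + c2v.sum(axis=0)              # posterior LLR for each bit
        hard = (total < 0).astype(int)
        if not np.any(H.dot(hard) % 2):            # all parity checks satisfied: done
            break
        v2c = (total - c2v) * H                    # exclude each check's own contribution
    return hard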
The design plan of a VLSI single chip (255, 223) Reed-Solomon decoder
NASA Technical Reports Server (NTRS)
Hsu, I. S.; Shao, H. M.; Deutsch, L. J.
1987-01-01
The very large-scale integration (VLSI) architecture of a single-chip (255, 223) Reed-Solomon decoder for decoding both errors and erasures is described. A decoding failure detection capability is also included in this system, so that the decoder will recognize a failure to decode instead of introducing additional errors; this could happen whenever the received word contains too many errors and erasures for the code to correct. The number of transistors needed to implement this decoder is estimated at about 75,000 if the delay for the received message is not included. This is in contrast to the older transform decoding algorithm, which needs about 100,000 transistors. However, the transform decoder is simpler in architecture than the time decoder. It is therefore possible to implement a single-chip (255, 223) Reed-Solomon decoder with today's VLSI technology. An implementation strategy for the decoder system is presented. This represents the first step in a plan to take advantage of advanced coding techniques to realize a 2.0 dB coding gain for future space missions.
Multi-stage decoding for multi-level block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao
1991-01-01
Various types of multistage decoding for multilevel block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance, are discussed. Error performance of the codes is analyzed for a memoryless additive channel based on various types of multistage decoding, and upper bounds on the probability of an incorrect decoding are derived. It was found that, if the component codes of a multilevel modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. It was also found that the difference in performance between the suboptimum multistage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multistage decoding of multilevel modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.
HIA: a genome mapper using hybrid index-based sequence alignment.
Choi, Jongpill; Park, Kiejung; Cho, Seong Beom; Chung, Myungguen
2015-01-01
A number of alignment tools have been developed to align sequencing reads to the human reference genome. The scale of information from next-generation sequencing (NGS) experiments, however, is increasing rapidly. Recent studies based on NGS technology have routinely produced exome or whole-genome sequences from several hundreds or thousands of samples. To accommodate the increasing need to analyze very large NGS data sets, it is necessary to develop faster, more sensitive and accurate mapping tools. HIA uses two indices, a hash table index and a suffix array index. The hash table performs direct lookup of a q-gram, and the suffix array performs very fast lookup of variable-length strings by exploiting binary search. We observed that combining the hash table and the suffix array (hybrid index) is much faster than the suffix array method alone for finding a substring in the reference sequence. Here, we define a matching region (MR) as a longest common substring between the reference and a read, and a candidate alignment region (CAR) as a list of MRs that are close to each other. The hybrid index is used to find CARs between the reference and a read. We found that aligning only the unmatched regions in a CAR is much faster than aligning the whole CAR. In benchmark analyses, HIA outperformed the other aligners in mapping speed without significant loss of mapping accuracy. Our experiments show that the hybrid of hash table and suffix array is useful, in terms of speed, for mapping NGS sequencing reads to the human reference genome sequence. In conclusion, our tool is appropriate for aligning massive data sets generated by NGS sequencing.
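A minimal sketch of the suffix-array half of such a hybrid index: build a suffix array over the reference and locate all occurrences of a query by binary search. The hash-table q-gram lookup and the chaining of matching regions into CARs described in the paper are omitted.

def build_suffix_array(text):
    # naive O(n^2 log n) construction; adequate for an illustration only
    return sorted(range(len(text)), key=lambda i: text[i:])

def find_occurrences(text, sa, query):
    # binary search for the contiguous block of suffixes starting with `query`
    def prefix(k):
        start = sa[k]
        return text[start:start + len(query)]
    lo, hi = 0, len(sa)
    while lo < hi:                      # leftmost suffix whose prefix >= query
        mid = (lo + hi) // 2
        if prefix(mid) < query:
            lo = mid + 1
        else:
            hi = mid
    first, hi = lo, len(sa)
    while lo < hi:                      # leftmost suffix whose prefix > query
        mid = (lo + hi) // 2
        if prefix(mid) <= query:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[first:lo])

ref = "ACGTACGGACGT"
sa = build_suffix_array(ref)
print(find_occurrences(ref, sa, "ACG"))  # [0, 4, 8]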
Sand, Andreas; Kristiansen, Martin; Pedersen, Christian N S; Mailund, Thomas
2013-11-22
Hidden Markov models are widely used for genome analysis as they combine ease of modelling with efficient analysis algorithms. Calculating the likelihood of a model using the forward algorithm has worst-case time complexity linear in the length of the sequence and quadratic in the number of states in the model. For genome analysis, however, the length runs to millions or billions of observations, and hundreds of likelihood evaluations are often needed when maximising the likelihood. A time-efficient forward algorithm is therefore a key ingredient in an efficient hidden Markov model library. We have built a software library for efficiently computing the likelihood of a hidden Markov model. The library exploits commonly occurring substrings in the input to reuse computations in the forward algorithm. In a pre-processing step, our library identifies common substrings and builds a structure over the computations in the forward algorithm which can be reused. This analysis can be saved between uses of the library and is independent of concrete hidden Markov models, so one preprocessing can be used to run a number of different models. Using this library, we achieve up to 78 times shorter wall-clock time for realistic whole-genome analyses with a real and reasonably complex hidden Markov model. In one particular case the analysis was performed in less than 8 minutes, compared to 9.6 hours for the previously fastest library. We have implemented the preprocessing procedure and forward algorithm as a C++ library, zipHMM, with Python bindings for use in scripts. The library is available at http://birc.au.dk/software/ziphmm/.
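For reference, the baseline forward recursion that zipHMM accelerates is sketched below as a scaled Python implementation; the library's preprocessing that reuses computations for repeated substrings is not shown.

import numpy as np

def forward_loglik(obs, pi, A, B):
    # obs: observation indices; pi: initial state probs (K,);
    # A: transition matrix (K, K); B: emission matrix (K, M).
    # Returns log P(obs | model) using scaled forward variables.
    alpha = pi * B[:, obs[0]]
    scale = alpha.sum()
    loglik = np.log(scale)
    alpha = alpha / scale
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        scale = alpha.sum()
        loglik += np.log(scale)
        alpha = alpha / scale
    return loglik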
The serial message-passing schedule for LDPC decoding algorithms
NASA Astrophysics Data System (ADS)
Liu, Mingshan; Liu, Shanshan; Zhou, Yuan; Jiang, Xue
2015-12-01
The conventional message-passing schedule for LDPC decoding algorithms is the so-called flooding schedule. It has the disadvantage that updated messages cannot be used until the next iteration, which reduces the convergence speed. To address this, the layered decoding algorithm (LBP), based on a serial message-passing schedule, has been proposed. In this paper, the decoding principle of the LBP algorithm is briefly introduced, and two improved algorithms are then proposed: the grouped serial decoding algorithm (Grouped LBP) and the semi-serial decoding algorithm. They improve the decoding speed of the LBP algorithm while maintaining good decoding performance.
Mapping visual stimuli to perceptual decisions via sparse decoding of mesoscopic neural activity.
Sajda, Paul
2010-01-01
In this talk I will describe our work investigating sparse decoding of neural activity, given a realistic mapping of the visual scene to neuronal spike trains generated by a model of primary visual cortex (V1). We use a linear decoder which imposes sparsity via an L1 norm. The decoder can be viewed as a decoding neuron (linear summation followed by a sigmoidal nonlinearity) in which there are relatively few non-zero synaptic weights. We find: (1) the best decoding performance is for a representation that is sparse in both space and time, (2) decoding of a temporal code results in better performance than a rate code and is also a better fit to the psychophysical data, (3) the number of neurons required for decoding increases monotonically as signal-to-noise in the stimulus decreases, with as little as 1% of the neurons required for decoding at the highest signal-to-noise levels, and (4) sparse decoding results in a more accurate decoding of the stimulus and is a better fit to psychophysical performance than a distributed decoding, for example one imposed by an L2 norm. We conclude that sparse coding is well-justified from a decoding perspective in that it results in a minimum number of neurons and maximum accuracy when sparse representations can be decoded from the neural dynamics.
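The decoder described above can be approximated by an L1-penalised logistic regression, as in the sketch below; the data here are random placeholders rather than the V1 model responses used in the talk.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.poisson(2.0, size=(500, 200)).astype(float)   # trials x neurons (placeholder spike counts)
w_true = np.zeros(200)
w_true[:5] = 1.0                                      # only a few informative neurons
y = (X @ w_true + rng.normal(0, 1, 500) > 10).astype(int)

# linear summation + sigmoid, with sparsity imposed through the L1 penalty
decoder = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
decoder.fit(X, y)
print("non-zero weights:", np.count_nonzero(decoder.coef_))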
Image transmission system using adaptive joint source and channel decoding
NASA Astrophysics Data System (ADS)
Liu, Weiliang; Daut, David G.
2005-03-01
In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source-decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel-decoded bits are then sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder. The log-likelihood ratios (LLR) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition; that is, for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms non-source-controlled decoding by up to 5 dB in terms of PSNR for various reconstructed images.
A long constraint length VLSI Viterbi decoder for the DSN
NASA Technical Reports Server (NTRS)
Statman, J. I.; Zimmerman, G.; Pollara, F.; Collins, O.
1988-01-01
A Viterbi decoder, capable of decoding convolutional codes with constraint lengths up to 15, is under development for the Deep Space Network (DSN). The objective is to complete a prototype of this decoder by late 1990, and demonstrate its performance using the (15, 1/4) encoder in Galileo. The decoder is expected to provide 1 to 2 dB improvement in bit SNR, compared to the present (7, 1/2) code and existing Maximum Likelihood Convolutional Decoder (MCD). The decoder will be fully programmable for any code up to constraint length 15, and code rate 1/2 to 1/6. The decoder architecture and top-level design are described.
Decoding small surface codes with feedforward neural networks
NASA Astrophysics Data System (ADS)
Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen
2018-01-01
Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that they can reach similar or better decoding performance compared to previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.
Multi-stage decoding for multi-level block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1991-01-01
In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance. Error performance of codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computation results, we find that, if component codes of a multi-level modulation code and types of decoding at various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multi-stage decoding of multi-level modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.
Adaptive decoding of convolutional codes
NASA Astrophysics Data System (ADS)
Hueske, K.; Geldmacher, J.; Götze, J.
2007-06-01
Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
Xie, Qing; Shen, Kang-Ning; Hao, Xiuying; Nam, Phan Nhut; Ngoc Hieu, Bui Thi; Chen, Ching-Hung; Zhu, Changqing; Lin, Yen-Chang; Hsiao, Chung-Der
2017-03-01
We decoded the complete chloroplast DNA (cpDNA) sequence of the Tianshan Snow Lotus (Saussurea involucrata), a famous traditional Chinese medicinal plant of the family Asteraceae, by using next-generation sequencing technology. The genome consists of 152 490 bp containing a pair of inverted repeats (IRs) of 25 202 bp, which are separated by a large single-copy region and a small single-copy region of 83 446 bp and 18 639 bp, respectively. The genic regions account for 57.7% of the whole cpDNA, and the GC content of the cpDNA is 37.7%. The S. involucrata cpDNA encodes 114 unigenes (82 protein-coding genes, 4 rRNA genes, and 28 tRNA genes). There are eight protein-coding genes (atpF, ndhA, ndhB, rpl2, rpoC1, rps16, clpP, and ycf3) and five tRNA genes (trnA-UGC, trnI-GAU, trnK-UUU, trnL-UAA, and trnV-UAC) containing introns. A phylogenetic analysis of 11 complete cpDNAs from Asteraceae showed that S. involucrata is closely related to Centaurea diffusa (Diffuse Knapweed). The complete cpDNA of S. involucrata provides essential and important DNA molecular data for further phylogenetic and evolutionary analysis of Asteraceae.
Yang, Chunrong; Zou, Dan; Chen, Jianchi; Zhang, Linyan; Miao, Jiarong; Huang, Dan; Du, Yuanyuan; Yang, Shu; Yang, Qianfan; Tang, Yalin
2018-03-15
Plenty of molecular circuits with specific functions have been developed; however, logic units with reconfigurability, which could simplify the circuits and speed up information processing, are rarely reported. In this work, we designed a novel reconfigurable logic unit based on a DNA-templated, potassium-concentration-dependent supramolecular assembly, which responds to the input stimuli of H+ and K+. By inputting different concentrations of K+, the logic unit can implement three significant functions: a half adder, a half subtractor, and a 2-to-4 decoder. Considering its reconfigurability and good performance, the novel prototype developed here may serve as a promising proof of principle for molecular computers. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
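For reference, the three logic functions the reconfigurable unit is reported to realize are standard; a hypothetical software truth-table check looks like this (the DNA implementation itself is chemical, not computational, so the code below is purely illustrative).

```python
def half_adder(a, b):
    return a ^ b, a & b              # (sum, carry)

def half_subtractor(a, b):
    return a ^ b, (a ^ 1) & b        # (difference, borrow) for a - b

def decoder_2to4(a, b):
    # One-hot output line selected by the 2-bit input (a, b).
    return [int(not a and not b), int(not a and b), int(a and not b), int(a and b)]

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b), half_subtractor(a, b), decoder_2to4(a, b))
```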
Multiplexed DNA detection using spectrally encoded porous SiO2 photonic crystal particles.
Meade, Shawn O; Chen, Michelle Y; Sailor, Michael J; Miskelly, Gordon M
2009-04-01
A particle-based multiplexed DNA assay based on encoded porous SiO2 photonic crystal disks is demonstrated. A "spectral barcode" is generated by electrochemical etch of a single-crystal silicon wafer using a programmed current-time waveform. A lithographic procedure is used to isolate cylindrical microparticles 25 µm in diameter and 10 µm thick, which are then oxidized, modified with a silane linker, and conjugated to various amino-functionalized oligonucleotide probes via cyanuric chloride. It is shown that the particles can be decoded based on their reflectivity spectra and that a multiple analyte assay can be performed in a single sample with a modified fluorescence microscope. The homogeneity of the reflectivity and fluorescence spectra, both within and across the microparticles, is also reported.
High-coverage methylation data of a gene model before and after DNA damage and homologous repair.
Pezone, Antonio; Russo, Giusi; Tramontano, Alfonso; Florio, Ermanno; Scala, Giovanni; Landi, Rosaria; Zuchegna, Candida; Romano, Antonella; Chiariotti, Lorenzo; Muller, Mark T; Gottesman, Max E; Porcellini, Antonio; Avvedimento, Enrico V
2017-04-11
Genome-wide methylation analysis is limited by its low coverage and the inability to detect single variants below 10%. Quantitative analysis provides accurate information on the extent of methylation of single CpG dinucleotide, but it does not measure the actual polymorphism of the methylation profiles of single molecules. To understand the polymorphism of DNA methylation and to decode the methylation signatures before and after DNA damage and repair, we have deep sequenced in bisulfite-treated DNA a reporter gene undergoing site-specific DNA damage and homologous repair. In this paper, we provide information on the data generation, the rationale for the experiments and the type of assays used, such as cytofluorimetry and immunoblot data derived during a previous work published in Scientific Reports, describing the methylation and expression changes of a model gene (GFP) before and after formation of a double-strand break and repair by homologous-recombination or non-homologous-end-joining. These data provide: 1) a reference for the analysis of methylation polymorphism at selected loci in complex cell populations; 2) a platform and the tools to compare transcription and methylation profiles.
Mollazadeh, Mohsen; Davidson, Adam G.; Schieber, Marc H.; Thakor, Nitish V.
2013-01-01
The performance of brain-machine interfaces (BMIs) that continuously control upper limb neuroprostheses may benefit from distinguishing periods of posture and movement so as to prevent inappropriate movement of the prosthesis. Few studies, however, have investigated how decoding behavioral states and detecting the transitions between posture and movement could be used autonomously to trigger a kinematic decoder. We recorded simultaneous neuronal ensemble and local field potential (LFP) activity from microelectrode arrays in primary motor cortex (M1) and dorsal (PMd) and ventral (PMv) premotor areas of two male rhesus monkeys performing a center-out reach-and-grasp task, while upper limb kinematics were tracked with a motion capture system with markers on the dorsal aspect of the forearm, hand, and fingers. A state decoder was trained to distinguish four behavioral states (baseline, reaction, movement, hold), while a kinematic decoder was trained to continuously decode hand end point position and 18 joint angles of the wrist and fingers. LFP amplitude most accurately predicted transition into the reaction (62%) and movement (73%) states, while spikes most accurately decoded arm, hand, and finger kinematics during movement. Using an LFP-based state decoder to trigger a spike-based kinematic decoder [r = 0.72, root mean squared error (RMSE) = 0.15] significantly improved decoding of reach-to-grasp movements from baseline to final hold, compared with either a spike-based state decoder combined with a spike-based kinematic decoder (r = 0.70, RMSE = 0.17) or a spike-based kinematic decoder alone (r = 0.67, RMSE = 0.17). Combining LFP-based state decoding with spike-based kinematic decoding may be a valuable step toward the realization of BMI control of a multifingered neuroprosthesis performing dexterous manipulation. PMID:23536714
Real-time minimal-bit-error probability decoding of convolutional codes
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1974-01-01
A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.
Real-time minimal bit error probability decoding of convolutional codes
NASA Technical Reports Server (NTRS)
Lee, L. N.
1973-01-01
A recursive procedure is derived for decoding of rate R=1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e. fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as in the inner coding system for concatenated coding.
Bayesian decoding using unsorted spikes in the rat hippocampus
Layton, Stuart P.; Chen, Zhe; Wilson, Matthew A.
2013-01-01
A fundamental task in neuroscience is to understand how neural ensembles represent information. Population decoding is a useful tool to extract information from neuronal populations based on the ensemble spiking activity. We propose a novel Bayesian decoding paradigm to decode unsorted spikes in the rat hippocampus. Our approach uses a direct mapping between spike waveform features and covariates of interest and avoids accumulation of spike sorting errors. Our decoding paradigm is nonparametric, encoding model-free for representing stimuli, and extracts information from all available spikes and their waveform features. We apply the proposed Bayesian decoding algorithm to a position reconstruction task for freely behaving rats based on tetrode recordings of rat hippocampal neuronal activity. Our detailed decoding analyses demonstrate that our approach is efficient and better utilizes the available information in the nonsortable hash than the standard sorting-based decoding algorithm. Our approach can be adapted to an online encoding/decoding framework for applications that require real-time decoding, such as brain-machine interfaces. PMID:24089403
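For background, the conventional sorting-based Bayesian decoder that the clusterless approach above is compared against can be sketched in a few lines, assuming independent Poisson place cells and a flat prior over position. This is illustrative only; the paper's decoder instead maps spike waveform features directly to position without sorting.

```python
import numpy as np

def bayes_decode_position(counts, tuning, dt, positions):
    """Standard Bayesian position decoder from sorted spike counts.

    counts    : (n_cells,) spike counts in one time bin
    tuning    : (n_cells, n_pos) mean firing rate of each cell at each candidate
                position (should be strictly positive; add a small floor in practice)
    dt        : bin width in seconds
    positions : (n_pos,) candidate positions (flat prior assumed)

    Assumes independent Poisson firing:
      log P(x | counts)  is proportional to  sum_i [counts_i * log(f_i(x) dt) - f_i(x) dt]
    """
    log_post = counts @ np.log(tuning * dt) - (tuning * dt).sum(axis=0)
    log_post -= log_post.max()                  # numerical stability
    post = np.exp(log_post)
    post /= post.sum()
    return positions[np.argmax(post)], post
```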
NASA Astrophysics Data System (ADS)
Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi
2017-12-01
We analyze the achievable information rates (AIRs) for coded modulation schemes with QAM constellations with both bit-wise and symbol-wise decoders, corresponding to the case where a binary code is used in combination with a higher-order modulation using the bit-interleaved coded modulation (BICM) paradigm and to the case where a nonbinary code over a field matched to the constellation size is used, respectively. In particular, we consider hard decision decoding, which is the preferable option for fiber-optic communication systems where decoding complexity is a concern. Recently, Liga et al. analyzed the AIRs for bit-wise and symbol-wise decoders considering what the authors called a hard decision decoder which, however, exploits soft information of the transition probabilities of the discrete-input discrete-output channel resulting from the hard detection. As such, the complexity of the decoder is essentially the same as the complexity of a soft decision decoder. In this paper, we analyze instead the AIRs for the standard hard decision decoder, commonly used in practice, where the decoding is based on the Hamming distance metric. We show that if standard hard decision decoding is used, bit-wise decoders yield significantly higher AIRs than symbol-wise decoders. As a result, contrary to the conclusion by Liga et al., binary decoders together with the BICM paradigm are preferable for spectrally-efficient fiber-optic systems. We also design binary and nonbinary staircase codes and show that, in agreement with the AIRs, binary codes yield better performance.
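A rough way to reproduce the flavour of the bit-wise hard-decision AIR discussed above is to simulate hard detection of a Gray-labelled constellation and treat each bit level as an independent binary symmetric channel, a common surrogate in this setting. The sketch below does this for one 4-PAM quadrature of 16-QAM with an arbitrarily chosen noise level; it is not the paper's channel model, SNR definition, or exact AIR expression.

```python
import numpy as np

def h2(p):
    """Binary entropy function."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# Gray-mapped 4-PAM (one quadrature of a 16-QAM symbol).
levels = np.array([-3.0, -1.0, 1.0, 3.0])
bit_map = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])        # bit labels of each level

rng = np.random.default_rng(1)
n, sigma = 200_000, 0.8                                     # noise level chosen arbitrarily
idx = rng.integers(0, 4, n)
y = levels[idx] + sigma * rng.standard_normal(n)

det = np.abs(y[:, None] - levels[None, :]).argmin(axis=1)   # hard minimum-distance detection
errors = bit_map[det] != bit_map[idx]                       # per-bit-level error indicators
p = errors.mean(axis=0)                                     # crossover probability of each bit level

air_bitwise = np.sum(1.0 - h2(p))   # bits per 4-PAM symbol under the parallel-BSC surrogate
print("per-level BERs:", p, "  bit-wise hard-decision AIR (per 4-PAM symbol):", air_bitwise)
```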
Code of Federal Regulations, 2010 CFR
2010-10-01
... decoders manufactured after August 1, 2003 must provide a means to permit the selective display and logging... upgrade their decoders on an optional basis to include a selective display and logging capability for EAS... decoders after February 1, 2004 must install decoders that provide a means to permit the selective display...
A real-time MPEG software decoder using a portable message-passing library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwong, Man Kam; Tang, P.T. Peter; Lin, Biquan
1995-12-31
We present a real-time MPEG software decoder that uses message-passing libraries such as MPL, p4 and MPI. The parallel MPEG decoder currently runs on the IBM SP system but can be easily ported to other parallel machines. This paper discusses our parallel MPEG decoding algorithm as well as the parallel programming environment under which it runs. Several technical issues are discussed, including balancing of decoding speed, memory limitations, I/O capacities, and optimization of MPEG decoding components. This project shows that a real-time portable software MPEG decoder is feasible on a general-purpose parallel machine.
NP-hardness of decoding quantum error-correction codes
NASA Astrophysics Data System (ADS)
Hsieh, Min-Hsiu; Le Gall, François
2011-05-01
Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy to simplify decoding, since two different errors might not, and need not, be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no substantially fast decoding algorithm exists for the general quantum decoding problem, and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability. Therefore, MLD becomes suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes, multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes bit error probability, and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
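For a short block code, the bit-wise MAP rule described above can be evaluated by brute force, which makes its soft output explicit. The sketch below assumes BPSK over AWGN with a uniform prior and is illustrative rather than the chapter's trellis-based formulation; all names are hypothetical.

```python
import numpy as np
from itertools import product

def map_bit_decode(G, y, noise_var):
    """Brute-force bit-wise MAP decoding of a binary linear code on an AWGN channel.

    G         : (k, n) generator matrix over GF(2)
    y         : (n,) received values for the BPSK mapping 0 -> +1, 1 -> -1
    noise_var : channel noise variance

    Returns the per-bit posterior log-likelihood ratios
      LLR_j = log P(c_j = 0 | y) - log P(c_j = 1 | y),
    i.e. exactly the soft output that MAP decoding provides and ML word
    decoding does not.
    """
    k, n = G.shape
    msgs = np.array(list(product([0, 1], repeat=k)))
    codewords = msgs @ G % 2                               # all 2^k codewords
    s = 1.0 - 2.0 * codewords                              # BPSK images of the codewords
    log_like = -np.sum((y[None, :] - s) ** 2, axis=1) / (2.0 * noise_var)
    w = np.exp(log_like - log_like.max())                  # unnormalised posteriors (uniform prior)
    p1 = (w[:, None] * codewords).sum(axis=0) / w.sum()    # P(c_j = 1 | y)
    p1 = np.clip(p1, 1e-12, 1.0 - 1e-12)
    return np.log((1.0 - p1) / p1)
```

As an illustration, calling map_bit_decode with the generator matrix of the (7,4) Hamming code returns seven LLRs whose signs give the bit-wise MAP decisions and whose magnitudes give the reliability information mentioned above.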
Reverendo, Marisa; Soares, Ana R; Pereira, Patrícia M; Carreto, Laura; Ferreira, Violeta; Gatti, Evelina; Pierre, Philippe; Moura, Gabriela R; Santos, Manuel A
2014-01-01
Mutations in genes that encode tRNAs, aminoacyl-tRNA syntheases, tRNA modifying enzymes and other tRNA interacting partners are associated with neuropathies, cancer, type-II diabetes and hearing loss, but how these mutations cause disease is unclear. We have hypothesized that levels of tRNA decoding error (mistranslation) that do not fully impair embryonic development can accelerate cell degeneration through proteome instability and saturation of the proteostasis network. To test this hypothesis we have induced mistranslation in zebrafish embryos using mutant tRNAs that misincorporate Serine (Ser) at various non-cognate codon sites. Embryo viability was affected and malformations were observed, but a significant proportion of embryos survived by activating the unfolded protein response (UPR), the ubiquitin proteasome pathway (UPP) and downregulating protein biosynthesis. Accumulation of reactive oxygen species (ROS), mitochondrial and nuclear DNA damage and disruption of the mitochondrial network, were also observed, suggesting that mistranslation had a strong negative impact on protein synthesis rate, ER and mitochondrial homeostasis. We postulate that mistranslation promotes gradual cellular degeneration and disease through protein aggregation, mitochondrial dysfunction and genome instability. PMID:25483040
Iacovelli, Federico; Falconi, Mattia
2015-09-01
DNA and RNA are large and flexible polymers selected by nature to transmit information. The most common DNA three-dimensional structure is represented by the double helix, but this biopolymer is extremely flexible and polymorphic, and can easily change its conformation to adapt to different interactions and purposes. DNA can also adopt singular topologies, giving rise, for instance, to supercoils, formed because of the limited free rotation of the DNA domain flanking a replication or transcription complex. Our understanding of the importance of these unusual or transient structures is growing, as recent studies of DNA topology, supercoiling, knotting and linking have shown that the geometric changes can drive, or strongly influence, the interactions between protein and DNA, so altering its own metabolism. On the other hand, the unique self-recognition properties of DNA, determined by the strict Watson-Crick rules of base pairing, make this material ideal for the creation of self-assembling, predesigned nanostructures. The construction of such structures is one of the main focuses of the thriving area of DNA nanotechnology, where several assembly strategies have been employed to build increasingly complex DNA nanostructures. DNA nanodevices can have direct applications in biomedicine, but also in the materials science field, requiring the immersion of DNA in an environment far from the physiological one. Crucial help in the understanding and planning of natural and artificial nanostructures is given by modern computer simulation techniques, which are able to provide a reliable structural and dynamic description of nucleic acids. © 2015 FEBS.
Bounded-Angle Iterative Decoding of LDPC Codes
NASA Technical Reports Server (NTRS)
Dolinar, Samuel; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush
2009-01-01
Bounded-angle iterative decoding is a modified version of conventional iterative decoding, conceived as a means of reducing undetected-error rates for short low-density parity-check (LDPC) codes. For a given code, bounded-angle iterative decoding can be implemented by means of a simple modification of the decoder algorithm, without redesigning the code. Bounded-angle iterative decoding is based on a representation of received words and code words as vectors in an n-dimensional Euclidean space (where n is an integer).
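In the Euclidean-space picture used above, the acceptance test can be stated compactly: accept the iteratively decoded word only if its ±1 image lies within a bounded angle of the received vector, and otherwise declare a detected failure. The sketch below is a minimal illustration of that test under a BPSK/AWGN assumption, not the exact NASA decoder modification; the angle threshold is a free design parameter.

```python
import numpy as np

def bounded_angle_accept(y, codeword_bits, max_angle_deg):
    """Accept a decoded codeword only if it lies within a bounded angle of the
    received vector; otherwise flag a detected decoding failure.

    y             : (n,) received soft values (BPSK over AWGN)
    codeword_bits : (n,) decoded bits (0/1)
    max_angle_deg : acceptance half-angle, chosen per code
    """
    s = 1.0 - 2.0 * np.asarray(codeword_bits, dtype=float)   # map bits to +/-1
    cos_angle = np.dot(y, s) / (np.linalg.norm(y) * np.linalg.norm(s))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle <= max_angle_deg
```

Tightening the angle bound trades a higher detected-failure rate for a lower undetected-error rate, which is the effect the abstract describes.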
Iterative channel decoding of FEC-based multiple-description codes.
Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B
2012-03-01
Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.
High rate concatenated coding systems using bandwidth efficient trellis inner codes
NASA Technical Reports Server (NTRS)
Deng, Robert H.; Costello, Daniel J., Jr.
1989-01-01
High-rate concatenated coding systems with bandwidth-efficient trellis inner codes and Reed-Solomon (RS) outer codes are investigated for application in high-speed satellite communication systems. Two concatenated coding schemes are proposed. In one the inner code is decoded with soft-decision Viterbi decoding, and the outer RS code performs error-correction-only decoding (decoding without side information). In the other, the inner code is decoded with a modified Viterbi algorithm, which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, whereas branch metrics are used to provide reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. The two schemes have been proposed for high-speed data communication on NASA satellite channels. The rates considered are at least double those used in current NASA systems, and the results indicate that high system reliability can still be achieved.
Efficient Decoding of Compressed Data.
ERIC Educational Resources Information Center
Bassiouni, Mostafa A.; Mukherjee, Amar
1995-01-01
Discusses the problem of enhancing the speed of Huffman decoding of compressed data. Topics addressed include the Huffman decoding tree; multibit decoding; binary string mapping problems; and algorithms for solving mapping problems. (22 references) (LRW)
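As background for the speed issue raised in this record, the baseline bit-at-a-time Huffman decoder simply walks the code tree once per input bit; table-driven multibit decoders accelerate this by consuming several bits per lookup. A minimal sketch of the baseline, with an invented toy code:

```python
def huffman_decode(bits, root):
    """Decode a bit string by walking the Huffman tree one bit at a time.

    `root` is a nested tuple: a leaf is a symbol, an internal node is
    (left_subtree, right_subtree). Bit '0' goes left, bit '1' goes right.
    Multibit (table-driven) decoders replace this loop with one table
    lookup per chunk of bits.
    """
    out, node = [], root
    for b in bits:
        node = node[0] if b == "0" else node[1]
        if not isinstance(node, tuple):   # reached a leaf
            out.append(node)
            node = root
    return out

# Toy code {a: 0, b: 10, c: 11}
tree = ("a", ("b", "c"))
print(huffman_decode("0100110", tree))   # ['a', 'b', 'a', 'c', 'a']
```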
A new VLSI architecture for a single-chip-type Reed-Solomon decoder
NASA Technical Reports Server (NTRS)
Hsu, I. S.; Truong, T. K.
1989-01-01
A new very large scale integration (VLSI) architecture for implementing Reed-Solomon (RS) decoders that can correct both errors and erasures is described. This new architecture implements a Reed-Solomon decoder by using replication of a single VLSI chip. It is anticipated that this single chip type RS decoder approach will save substantial development and production costs. It is estimated that reduction in cost by a factor of four is possible with this new architecture. Furthermore, this Reed-Solomon decoder is programmable between 8 bit and 10 bit symbol sizes. Therefore, both an 8 bit Consultative Committee for Space Data Systems (CCSDS) RS decoder and a 10 bit decoder are obtained at the same time, and when concatenated with a (15,1/6) Viterbi decoder, provide an additional 2.1-dB coding gain.
Deconstructing multivariate decoding for the study of brain function.
Hebart, Martin N; Baker, Chris I
2017-08-04
Multivariate decoding methods were developed originally as tools to enable accurate predictions in real-world applications. The realization that these methods can also be employed to study brain function has led to their widespread adoption in the neurosciences. However, prior to the rise of multivariate decoding, the study of brain function was firmly embedded in a statistical philosophy grounded on univariate methods of data analysis. In this way, multivariate decoding for brain interpretation grew out of two established frameworks: multivariate decoding for predictions in real-world applications, and classical univariate analysis based on the study and interpretation of brain activation. We argue that this led to two confusions, one reflecting a mixture of multivariate decoding for prediction or interpretation, and the other a mixture of the conceptual and statistical philosophies underlying multivariate decoding and classical univariate analysis. Here we attempt to systematically disambiguate multivariate decoding for the study of brain function from the frameworks it grew out of. After elaborating these confusions and their consequences, we describe six, often unappreciated, differences between classical univariate analysis and multivariate decoding. We then focus on how the common interpretation of what is signal and noise changes in multivariate decoding. Finally, we use four examples to illustrate where these confusions may impact the interpretation of neuroimaging data. We conclude with a discussion of potential strategies to help resolve these confusions in interpreting multivariate decoding results, including the potential departure from multivariate decoding methods for the study of brain function. Copyright © 2017. Published by Elsevier Inc.
Real-time SHVC software decoding with multi-threaded parallel processing
NASA Astrophysics Data System (ADS)
Gudumasu, Srinivas; He, Yuwen; Ye, Yan; He, Yong; Ryu, Eun-Seok; Dong, Jie; Xiu, Xiaoyu
2014-09-01
This paper proposes a parallel decoding framework for scalable HEVC (SHVC). Various optimization technologies are implemented on the basis of SHVC reference software SHM-2.0 to achieve real-time decoding speed for the two layer spatial scalability configuration. SHVC decoder complexity is analyzed with profiling information. The decoding process at each layer and the up-sampling process are designed in parallel and scheduled by a high level application task manager. Within each layer, multi-threaded decoding is applied to accelerate the layer decoding speed. Entropy decoding, reconstruction, and in-loop processing are pipeline designed with multiple threads based on groups of coding tree units (CTU). A group of CTUs is treated as a processing unit in each pipeline stage to achieve a better trade-off between parallelism and synchronization. Motion compensation, inverse quantization, and inverse transform modules are further optimized with SSE4 SIMD instructions. Simulations on a desktop with an Intel i7 processor 2600 running at 3.4 GHz show that the parallel SHVC software decoder is able to decode 1080p spatial 2x at up to 60 fps (frames per second) and 1080p spatial 1.5x at up to 50 fps for those bitstreams generated with SHVC common test conditions in the JCT-VC standardization group. The decoding performance at various bitrates with different optimization technologies and different numbers of threads are compared in terms of decoding speed and resource usage, including processor and memory.
Error-trellis Syndrome Decoding Techniques for Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
The VLSI design of an error-trellis syndrome decoder for certain convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Jensen, J. M.; Hsu, I.-S.; Truong, T. K.
1986-01-01
A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example, illustrating the very large scale integration (VLSI) architecture of such a decoder, is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.
Systolic VLSI Reed-Solomon Decoder
NASA Technical Reports Server (NTRS)
Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.
1986-01-01
Decoder for digital communications provides high-speed, pipelined Reed-Solomon (RS) error-correction decoding of data streams. Principal new feature of proposed decoder is modification of Euclid greatest-common-divisor algorithm to avoid need for time-consuming computations of inverse of certain Galois-field quantities. Decoder architecture suitable for implementation on very-large-scale integrated (VLSI) chips with negative-channel metal-oxide/silicon circuitry.
The VLSI design of error-trellis syndrome decoding for convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Jensen, J. M.; Truong, T. K.; Hsu, I. S.
1985-01-01
A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example, illustrating the very large scale integration (VLSI) architecture of such a decoder, is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3
NASA Technical Reports Server (NTRS)
Lin, Shu
1998-01-01
Decoding algorithms based on the trellis representation of a code (block or convolutional) drastically reduce decoding complexity. The best known and most commonly used trellis-based decoding algorithm is the Viterbi algorithm. It is a maximum likelihood decoding algorithm. Convolutional codes with the Viterbi decoding have been widely used for error control in digital communications over the last two decades. This chapter is concerned with the application of the Viterbi decoding algorithm to linear block codes. First, the Viterbi algorithm is presented. Then, optimum sectionalization of a trellis to minimize the computational complexity of a Viterbi decoder is discussed and an algorithm is presented. Some design issues for IC (integrated circuit) implementation of a Viterbi decoder are considered and discussed. Finally, a new decoding algorithm based on the principle of compare-select-add is presented. This new algorithm can be applied to both block and convolutional codes and is more efficient than the conventional Viterbi algorithm based on the add-compare-select principle. This algorithm is particularly efficient for rate 1/n antipodal convolutional codes and their high-rate punctured codes. It reduces computational complexity by one-third compared with the Viterbi algorithm.
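Since the Viterbi algorithm anchors this chapter, a compact hard-decision sketch is given below for the classic rate-1/2, constraint-length-3 convolutional code with generators (7, 5) in octal, using a Hamming branch metric and the add-compare-select recursion. The example is purely illustrative and is not the chapter's block-code trellis or sectionalization treatment.

```python
def conv_encode(bits):
    """Encode with the rate-1/2, constraint-length-3 code, generators (7, 5) octal."""
    s1 = s2 = 0
    out = []
    for u in bits:
        out += [u ^ s1 ^ s2, u ^ s2]          # generator 111 and generator 101
        s1, s2 = u, s1
    return out

def viterbi_decode(received):
    """Hard-decision Viterbi decoding with a Hamming branch metric.
    `received` is a flat list of channel bits, two per information bit."""
    INF = float("inf")
    n_states = 4                              # state = (previous bit, bit before that)
    metric = [0.0] + [INF] * (n_states - 1)   # encoder starts in the all-zero state
    paths = [[] for _ in range(n_states)]
    for t in range(0, len(received), 2):
        r = received[t:t + 2]
        new_metric = [INF] * n_states
        new_paths = [[] for _ in range(n_states)]
        for state in range(n_states):
            if metric[state] == INF:
                continue
            s1, s2 = state >> 1, state & 1
            for u in (0, 1):
                expected = [u ^ s1 ^ s2, u ^ s2]
                branch = (expected[0] != r[0]) + (expected[1] != r[1])
                nxt = (u << 1) | s1
                cand = metric[state] + branch          # add ...
                if cand < new_metric[nxt]:             # ... compare-select
                    new_metric[nxt] = cand
                    new_paths[nxt] = paths[state] + [u]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]

msg = [1, 0, 1, 1, 0, 0]                      # last two zeros terminate the trellis
assert viterbi_decode(conv_encode(msg)) == msg
```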
A test of the role of the medial temporal lobe in single-word decoding.
Osipowicz, Karol; Rickards, Tyler; Shah, Atif; Sharan, Ashwini; Sperling, Michael; Kahn, Waseem; Tracy, Joseph
2011-01-15
The degree to which the MTL system contributes to effective language skills is not well delineated. We sought to determine if the MTL plays a role in single-word decoding in healthy, normal skilled readers. The experiment follows from the implications of the dual-process model of single-word decoding, which provides distinct predictions about the nature of MTL involvement. The paradigm utilized word (regular and irregularly spelled words) and pseudoword (phonetically regular) stimuli that differed in their demand for non-lexical as opposed to lexical decoding. The data clearly showed that the MTL system was not involved in single word decoding in skilled, native English readers. Neither the hippocampus nor the MTL system as a whole showed significant activation during lexical or non-lexical based decoding. The results provide evidence that lexical and non-lexical decoding are implemented by distinct but overlapping neuroanatomical networks. Non-lexical decoding appeared most uniquely associated with cuneus and fusiform gyrus activation biased toward the left hemisphere. In contrast, lexical decoding appeared associated with right middle frontal and supramarginal, and bilateral cerebellar activation. Both these decoding operations appeared in the context of a shared widespread network of activations including bilateral occipital cortex and superior frontal regions. These activations suggest that the absence of MTL involvement in either lexical or non-lexical decoding appears likely to be a function of the skilled reading ability of our sample, such that whole-word recognition and retrieval processes do not utilize the declarative memory system, in the case of lexical decoding, and require only minimal analysis and recombination of the phonetic elements of a word, in the case of non-lexical decoding. Copyright © 2010 Elsevier Inc. All rights reserved.
LDPC-based iterative joint source-channel decoding for JPEG2000.
Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane
2007-02-01
A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.
Belief propagation decoding of quantum channels by passing quantum messages
NASA Astrophysics Data System (ADS)
Renes, Joseph M.
2017-07-01
The belief propagation (BP) algorithm is a powerful tool in a wide range of disciplines from statistical physics to machine learning to computational biology, and is ubiquitous in decoding classical error-correcting codes. The algorithm works by passing messages between nodes of the factor graph associated with the code and enables efficient decoding of the channel, in some cases even up to the Shannon capacity. Here we construct the first BP algorithm which passes quantum messages on the factor graph and is capable of decoding the classical-quantum channel with pure state outputs. This gives explicit decoding circuits whose number of gates is quadratic in the code length. We also show that this decoder can be modified to work with polar codes for the pure state channel and as part of a decoder for transmitting quantum information over the amplitude damping channel. These represent the first explicit capacity-achieving decoders for non-Pauli channels.
Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique
NASA Astrophysics Data System (ADS)
Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi
Reducing the power dissipation of LDPC code decoders is a major challenge in applying them to practical digital communication systems. In this paper, we propose a low power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) an intermediate message compression technique enables the decoder to reduce the required memory capacity and write power dissipation; (ii) a clock-gated, shift-register-based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of the above two techniques enables the decoder to reduce the power dissipation while keeping the decoding throughput. The simulation results show that the proposed architecture improves the power efficiency by up to 52% and 18% compared to that of decoders based on the overlapped schedule and the rapid convergence schedule, respectively, without the proposed techniques.
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1998-01-01
A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. Trellis-based decoding of linear block codes, by contrast, remained largely unexplored for many years. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes and maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all of the linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence, that they were not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well known methods for constructing long powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises, which include Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination and tail-biting. Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computation complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder. Then it presents a new decoding algorithm for convolutional codes, named the Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword. This decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder. The decoding algorithms presented are the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA). Finally, the minimization of bit error probability in trellis-based MLD is discussed.
Buffer management for sequential decoding. [block erasure probability reduction
NASA Technical Reports Server (NTRS)
Layland, J. W.
1974-01-01
Sequential decoding has been found to be an efficient means of communicating at low undetected error rates from deep space probes, but erasure or computational overflow remains a significant problem. Erasure of a block occurs when the decoder has not finished decoding that block at the time that it must be output. By drawing upon analogies in computer time sharing, this paper develops a buffer-management strategy which reduces the decoder idle time to a negligible level, and therefore improves the erasure probability of a sequential decoder. For a decoder with a speed advantage of ten and a buffer size of ten blocks, operating at an erasure rate of 0.01, use of this buffer-management strategy reduces the erasure rate to less than 0.0001.
NASA Astrophysics Data System (ADS)
Gupta, Neha; Parihar, Priyanka; Neema, Vaibhav
2018-04-01
Researchers have proposed many circuit techniques to reduce leakage power dissipation in memory cells. To reduce the overall power of a memory system, however, the input circuitry of the memory architecture, i.e., the row and column decoders, must also be addressed. In this research work, a low-leakage-power, high-speed row and column decoder for memory array applications is designed and four new techniques are proposed: the cluster decoder, body-bias decoder, source-bias decoder, and source-coupling decoder are designed and analyzed for memory array application, and their design parameters are compared through simulation at the 180 nm GPDK technology node using the CADENCE tool. Simulation results show that the proposed source-bias decoder circuit technique decreases the leakage current by 99.92% and static energy by 99.92% at a supply voltage of 1.2 V. The proposed circuit also improves dynamic power dissipation by 5.69%, dynamic PDP/EDP by 65.03%, and delay by 57.25% at a 1.2 V supply voltage.
A Scalable Architecture of a Structured LDPC Decoder
NASA Technical Reports Server (NTRS)
Lee, Jason Kwok-San; Lee, Benjamin; Thorpe, Jeremy; Andrews, Kenneth; Dolinar, Sam; Hamkins, Jon
2004-01-01
We present a scalable decoding architecture for a certain class of structured LDPC codes. The codes are designed using a small (n,r) protograph that is replicated Z times to produce a decoding graph for a (Z x n, Z x r) code. Using this architecture, we have implemented a decoder for a (4096,2048) LDPC code on a Xilinx Virtex-II 2000 FPGA, and achieved decoding speeds of 31 Mbps with 10 fixed iterations. The implemented message-passing algorithm uses an optimized 3-bit non-uniform quantizer that operates with 0.2dB implementation loss relative to a floating point decoder.
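The protograph replication step described above can be mimicked in a few lines: each edge of the small base graph is lifted to a Z x Z circulant permutation, producing the parity-check matrix of the (Z x n, Z x r) code. The sketch below uses random circulant shifts purely for illustration and ignores parallel edges; a real design, like the one behind this decoder, chooses the shifts to avoid short cycles.

```python
import numpy as np

def expand_protograph(base, Z, shifts=None, seed=0):
    """Lift an (r, n) protograph base matrix to a (Z*r, Z*n) parity-check matrix.

    Each base-matrix entry equal to 1 is replaced by a Z x Z cyclically shifted
    identity matrix, and each 0 by a Z x Z all-zero block.
    """
    rng = np.random.default_rng(seed)
    r, n = base.shape
    H = np.zeros((Z * r, Z * n), dtype=int)
    for i in range(r):
        for j in range(n):
            if base[i, j]:
                s = shifts[i][j] if shifts is not None else rng.integers(Z)
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(np.eye(Z, dtype=int), s, axis=1)
    return H

# A toy 2 x 4 protograph lifted by Z = 4 gives an 8 x 16 parity-check matrix.
base = np.array([[1, 1, 1, 0],
                 [0, 1, 1, 1]])
print(expand_protograph(base, Z=4).shape)   # (8, 16)
```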
Multiuser signal detection using sequential decoding
NASA Astrophysics Data System (ADS)
Xie, Zhenhua; Rushforth, Craig K.; Short, Robert T.
1990-05-01
The application of sequential decoding to the detection of data transmitted over the additive white Gaussian noise channel by K asynchronous transmitters using direct-sequence spread-spectrum multiple access is considered. A modification of Fano's (1963) sequential-decoding metric, allowing the messages from a given user to be safely decoded if its Eb/N0 exceeds -1.6 dB, is presented. Computer simulation is used to evaluate the performance of a sequential decoder that uses this metric in conjunction with the stack algorithm. In many circumstances, the sequential decoder achieves results comparable to those obtained using the much more complicated optimal receiver.
Complementary Reliability-Based Decodings of Binary Linear Block Codes
NASA Technical Reports Server (NTRS)
Fossorier, Marc P. C.; Lin, Shu
1997-01-01
This correspondence presents a hybrid reliability-based decoding algorithm which combines the reprocessing method based on the most reliable basis and a generalized Chase-type algebraic decoder based on the least reliable positions. It is shown that reprocessing with a simple additional algebraic decoding effort achieves significant coding gain. For long codes, the order of reprocessing required to achieve asymptotic optimum error performance is reduced by approximately 1/3. This significantly reduces the computational complexity, especially for long codes. Also, a more efficient criterion for stopping the decoding process is derived based on the knowledge of the algebraic decoding solution.
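The least-reliable-position half of such a hybrid can be illustrated with a tiny Chase-style search: flip subsets of the p least reliable bits of the hard decision, run a bounded-distance decoder on each test pattern, and keep the candidate that correlates best with the received vector. In the sketch below a brute-force nearest-codeword search over the (7,4) Hamming code stands in for the algebraic decoder, and all parameters are illustrative rather than those of the paper.

```python
import numpy as np
from itertools import combinations, product

G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
codewords = np.array(list(product([0, 1], repeat=4))) @ G % 2   # (7,4) Hamming code, t = 1

def chase_decode(y, p=2, t=1):
    """Chase-type decoding over the least reliable positions of the hard decision.
    Returns None if no test pattern falls within distance t of a codeword."""
    hard = (y < 0).astype(int)                    # BPSK mapping 0 -> +1, 1 -> -1
    lrp = np.argsort(np.abs(y))[:p]               # least reliable positions
    best, best_metric = None, -np.inf
    for k in range(p + 1):
        for flips in combinations(lrp, k):
            test = hard.copy()
            test[list(flips)] ^= 1
            d = (codewords != test).sum(axis=1)   # bounded-distance "algebraic" decode
            if d.min() <= t:
                cand = codewords[d.argmin()]
                metric = np.dot(y, 1.0 - 2.0 * cand)   # soft correlation with the received vector
                if metric > best_metric:
                    best, best_metric = cand, metric
    return best
```

The most-reliable-basis reprocessing side of the hybrid works on the opposite end of the reliability ordering; the paper's contribution is combining the two so that only a low reprocessing order plus a cheap algebraic step is needed.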
Visual perception as retrospective Bayesian decoding from high- to low-level features
Ding, Stephanie; Cueva, Christopher J.; Tsodyks, Misha; Qian, Ning
2017-01-01
When a stimulus is presented, its encoding is known to progress from low- to high-level features. How these features are decoded to produce perception is less clear, and most models assume that decoding follows the same low- to high-level hierarchy of encoding. There are also theories arguing for global precedence, reversed hierarchy, or bidirectional processing, but they are descriptive without quantitative comparison with human perception. Moreover, observers often inspect different parts of a scene sequentially to form overall perception, suggesting that perceptual decoding requires working memory, yet few models consider how working-memory properties may affect decoding hierarchy. We probed decoding hierarchy by comparing absolute judgments of single orientations and relative/ordinal judgments between two sequentially presented orientations. We found that lower-level, absolute judgments failed to account for higher-level, relative/ordinal judgments. However, when ordinal judgment was used to retrospectively decode memory representations of absolute orientations, striking aspects of absolute judgments, including the correlation and forward/backward aftereffects between two reported orientations in a trial, were explained. We propose that the brain prioritizes decoding of higher-level features because they are more behaviorally relevant, and more invariant and categorical, and thus easier to specify and maintain in noisy working memory, and that more reliable higher-level decoding constrains less reliable lower-level decoding. PMID:29073108
Bypassing bacterial infection in phage display by sequencing DNA released from phage particles.
Villequey, Camille; Kong, Xu-Dong; Heinis, Christian
2017-11-01
Phage display relies on a bacterial infection step in which the phage particles are replicated to perform multiple affinity selection rounds and to enable the identification of isolated clones by DNA sequencing. While this process is efficient for wild-type phage, the bacterial infection rate of phage with mutant or chemically modified coat proteins can be low. For example, a phage mutant with a disulfide-free p3 coat protein, used for the selection of bicyclic peptides, has a more than 100-fold reduced infection rate compared to the wild-type. A potential strategy for bypassing the bacterial infection step is to directly sequence DNA extracted from phage particles after a single round of phage panning using high-throughput sequencing. In this work, we have quantified the fraction of phage clones that can be identified by directly sequencing DNA from phage particles. The results show that the DNA of essentially all of the phage particles can be 'decoded', and that the sequence coverage for mutants equals that of amplified DNA extracted from cells infected with wild-type phage. This procedure is particularly attractive for selections with phage that have a compromised infection capacity, and it may allow phage display to be performed with particles that are not infective at all. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Simultaneous real-time monitoring of multiple cortical systems.
Gupta, Disha; Jeremy Hill, N; Brunner, Peter; Gunduz, Aysegul; Ritaccio, Anthony L; Schalk, Gerwin
2014-10-01
Real-time monitoring of the brain is potentially valuable for performance monitoring, communication, training or rehabilitation. In natural situations, the brain performs a complex mix of various sensory, motor or cognitive functions. Thus, real-time brain monitoring would be most valuable if (a) it could decode information from multiple brain systems simultaneously, and (b) this decoding of each brain system were robust to variations in the activity of other (unrelated) brain systems. Previous studies showed that it is possible to decode some information from different brain systems in retrospect and/or in isolation. In our study, we set out to determine whether it is possible to simultaneously decode important information about a user from different brain systems in real time, and to evaluate the impact of concurrent activity in different brain systems on decoding performance. We study these questions using electrocorticographic signals recorded in humans. We first document procedures for generating stable decoding models given little training data, and then report their use for offline and for real-time decoding from 12 subjects (six for offline parameter optimization, six for online experimentation). The subjects engage in tasks that involve movement intention, movement execution and auditory functions, separately, and then simultaneously. Main Results: Our real-time results demonstrate that our system can identify intention and movement periods in single trials with an accuracy of 80.4% and 86.8%, respectively (where 50% would be expected by chance). Simultaneously, the decoding of the power envelope of an auditory stimulus resulted in an average correlation coefficient of 0.37 between the actual and decoded power envelopes. These decoders were trained separately and executed simultaneously in real time. This study yielded the first demonstration that it is possible to decode simultaneously the functional activity of multiple independent brain systems. Our comparison of univariate and multivariate decoding strategies, and our analysis of the influence of their decoding parameters, provides benchmarks and guidelines for future research on this topic.
Simultaneous Real-Time Monitoring of Multiple Cortical Systems
Gupta, Disha; Hill, N. Jeremy; Brunner, Peter; Gunduz, Aysegul; Ritaccio, Anthony L.; Schalk, Gerwin
2014-01-01
Objective Real-time monitoring of the brain is potentially valuable for performance monitoring, communication, training or rehabilitation. In natural situations, the brain performs a complex mix of various sensory, motor, or cognitive functions. Thus, real-time brain monitoring would be most valuable if (a) it could decode information from multiple brain systems simultaneously, and (b) this decoding of each brain system were robust to variations in the activity of other (unrelated) brain systems. Previous studies showed that it is possible to decode some information from different brain systems in retrospect and/or in isolation. In our study, we set out to determine whether it is possible to simultaneously decode important information about a user from different brain systems in real time, and to evaluate the impact of concurrent activity in different brain systems on decoding performance. Approach We study these questions using electrocorticographic (ECoG) signals recorded in humans. We first document procedures for generating stable decoding models given little training data, and then report their use for offline and for real-time decoding from 12 subjects (6 for offline parameter optimization, 6 for online experimentation). The subjects engage in tasks that involve movement intention, movement execution and auditory functions, separately, and then simultaneously. Main results Our real-time results demonstrate that our system can identify intention and movement periods in single trials with an accuracy of 80.4% and 86.8%, respectively (where 50% would be expected by chance). Simultaneously, the decoding of the power envelope of an auditory stimulus resulted in an average correlation coefficient of 0.37 between the actual and decoded power envelope. These decoders were trained separately and executed simultaneously in real time. Significance This study yielded the first demonstration that it is possible to decode simultaneously the functional activity of multiple independent brain systems. Our comparison of univariate and multivariate decoding strategies, and our analysis of the influence of their decoding parameters, provides benchmarks and guidelines for future research on this topic. PMID:25080161
The ribosome as an optimal decoder: a lesson in molecular recognition.
Savir, Yonatan; Tlusty, Tsvi
2013-04-11
The ribosome is a complex molecular machine that, in order to synthesize proteins, has to decode mRNAs by pairing their codons with matching tRNAs. Decoding is a major determinant of fitness and requires accurate and fast selection of correct tRNAs among many similar competitors. However, it is unclear whether the modern ribosome, and in particular its large conformational changes during decoding, are the outcome of adaptation to its task as a decoder or the result of other constraints. Here, we derive the energy landscape that provides optimal discrimination between competing substrates and thereby optimal tRNA decoding. We show that the measured landscape of the prokaryotic ribosome is sculpted in this way. This model suggests that conformational changes of the ribosome and tRNA during decoding are means to obtain an optimal decoder. Our analysis puts forward a generic mechanism that may be utilized broadly by molecular recognition systems. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
For long linear block codes, maximum likelihood decoding based on full code trellises would be very hard, if not impossible, to implement. In this case, we may wish to trade error performance for a reduction in decoding complexity. Sub-optimum soft-decision decoding of a linear block code based on a low-weight sub-trellis can be devised to provide an effective trade-off between error performance and decoding complexity. This chapter presents such a suboptimum decoding algorithm for linear block codes. The decoding algorithm is iterative in nature and based on an optimality test. It has the following important features: (1) a simple method to generate a sequence of candidate codewords, one at a time, for testing; (2) a sufficient condition for testing a candidate codeword for optimality; and (3) a low-weight sub-trellis search for finding the most likely (ML) codeword.
Enhanced decoding for the Galileo S-band mission
NASA Technical Reports Server (NTRS)
Dolinar, S.; Belongie, M.
1993-01-01
A coding system under consideration for the Galileo S-band low-gain antenna mission is a concatenated system using a variable redundancy Reed-Solomon outer code and a (14,1/4) convolutional inner code. The 8-bit Reed-Solomon symbols are interleaved to depth 8, and the eight 255-symbol codewords in each interleaved block have redundancies 64, 20, 20, 20, 64, 20, 20, and 20, respectively (or equivalently, the codewords have 191, 235, 235, 235, 191, 235, 235, and 235 8-bit information symbols, respectively). This concatenated code is to be decoded by an enhanced decoder that utilizes a maximum likelihood (Viterbi) convolutional decoder; a Reed-Solomon decoder capable of processing erasures; an algorithm for declaring erasures in undecoded codewords based on known erroneous symbols in neighboring decodable words; a second Viterbi decoding operation (redecoding) constrained to follow only paths consistent with the known symbols from previously decodable Reed-Solomon codewords; and a second Reed-Solomon decoding operation using the output from the Viterbi redecoder and additional erasure declarations to the extent possible. It is estimated that this code and decoder can achieve a decoded bit error rate of 1 x 10(exp -7) at a concatenated code signal-to-noise ratio of 0.76 dB. By comparison, a threshold of 1.17 dB is required for a baseline coding system consisting of the same (14,1/4) convolutional code, a (255,223) Reed-Solomon code with constant redundancy 32 also interleaved to depth 8, a one-pass Viterbi decoder, and a Reed-Solomon decoder incapable of declaring or utilizing erasures. The relative gain of the enhanced system is thus 0.41 dB. It is predicted from analysis based on an assumption of infinite interleaving that the coding gain could be further improved by approximately 0.2 dB if four stages of Viterbi decoding and four levels of Reed-Solomon redundancy are permitted. Confirmation of this effect and specification of the optimum four-level redundancy profile for depth-8 interleaving are currently under way.
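As a quick sanity check on the numbers quoted above, the redundancy profile can be turned into outer-code and overall code rates with a few lines of arithmetic (a rough sketch; only the figures from the abstract are used, and the helper names are ours):

```python
# Rough rate calculation for the variable-redundancy Reed-Solomon profile
# described above (values from the abstract; helper names are illustrative).
redundancies = [64, 20, 20, 20, 64, 20, 20, 20]   # parity symbols per 255-symbol codeword
n = 255                                            # codeword length in 8-bit symbols

info_symbols = [n - r for r in redundancies]       # -> [191, 235, 235, 235, 191, 235, 235, 235]
rs_rate = sum(info_symbols) / (n * len(redundancies))

conv_rate = 1 / 4                                  # (14, 1/4) inner convolutional code
overall_rate = rs_rate * conv_rate

print(f"RS outer-code rate ~= {rs_rate:.3f}")       # ~0.878
print(f"overall concatenated rate ~= {overall_rate:.3f}")  # ~0.220
```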
Multi-stage decoding of multi-level modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Costello, Daniel J., Jr.
1991-01-01
Various types of multi-stage decoding for multi-level modulation codes are investigated. It is shown that if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, it is shown that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum soft-decision decoding of the code is very small: only a fraction of a dB loss in signal-to-noise ratio at a bit error rate (BER) of 10(exp -6).
Grootswagers, Tijl; Wardle, Susan G; Carlson, Thomas A
2017-04-01
Multivariate pattern analysis (MVPA) or brain decoding methods have become standard practice in analyzing fMRI data. Although decoding methods have been extensively applied in brain-computer interfaces, these methods have only recently been applied to time series neuroimaging data such as MEG and EEG to address experimental questions in cognitive neuroscience. In a tutorial style review, we describe a broad set of options to inform future time series decoding studies from a cognitive neuroscience perspective. Using example MEG data, we illustrate the effects that different options in the decoding analysis pipeline can have on experimental results where the aim is to "decode" different perceptual stimuli or cognitive states over time from dynamic brain activation patterns. We show that decisions made at both preprocessing (e.g., dimensionality reduction, subsampling, trial averaging) and decoding (e.g., classifier selection, cross-validation design) stages of the analysis can significantly affect the results. In addition to standard decoding, we describe extensions to MVPA for time-varying neuroimaging data including representational similarity analysis, temporal generalization, and the interpretation of classifier weight maps. Finally, we outline important caveats in the design and interpretation of time series decoding experiments.
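The preprocessing and decoding choices discussed above are easy to prototype. Below is a minimal, illustrative sketch of time-resolved decoding with cross-validation in scikit-learn; the data shapes, random data and classifier choice are placeholders rather than the authors' pipeline:

```python
# Minimal time-resolved decoding sketch (illustrative shapes, not the authors' pipeline):
# classify two stimulus classes at every time point with cross-validation.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 100        # hypothetical MEG epochs
X = rng.normal(size=(n_trials, n_channels, n_times))
y = rng.integers(0, 2, size=n_trials)               # two stimulus classes

clf = make_pipeline(StandardScaler(), LinearSVC())
accuracy = np.empty(n_times)
for t in range(n_times):
    # decode from the spatial pattern at each time point, 5-fold cross-validated
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

print("peak decoding accuracy:", accuracy.max())     # ~chance level for this random data
```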
NASA Astrophysics Data System (ADS)
Mirkovic, Bojana; Debener, Stefan; Jaeger, Manuela; De Vos, Maarten
2015-08-01
Objective. Recent studies have provided evidence that temporal envelope driven speech decoding from high-density electroencephalography (EEG) and magnetoencephalography recordings can identify the attended speech stream in a multi-speaker scenario. The present work replicated the previous high-density EEG study and investigated the necessary technical requirements for practical attended speech decoding with EEG. Approach. Twelve normal-hearing participants attended to one out of two simultaneously presented audiobook stories, while high-density EEG was recorded. An offline iterative procedure eliminating those channels contributing the least to decoding provided insight into the necessary channel number and optimal cross-subject channel configuration. Aiming towards the future goal of near real-time classification with an individually trained decoder, the minimum duration of training data necessary for successful classification was determined by using a chronological cross-validation approach. Main results. Close replication of the previously reported results confirmed the method robustness. Decoder performance remained stable from 96 channels down to 25. Furthermore, for less than 15 min of training data, the subject-independent (pre-trained) decoder performed better than an individually trained decoder did. Significance. Our study complements previous research and provides information suggesting that efficient low-density EEG online decoding is within reach.
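Envelope-tracking decoders of this kind are typically linear "backward" models: a regularised regression from time-lagged EEG channels onto the attended speech envelope, scored by correlation. A minimal sketch (shapes, lags and regularisation are illustrative, not the settings used in the study):

```python
# Sketch of a linear "backward" envelope decoder: ridge regression from
# time-lagged EEG channels to the speech envelope. All values are placeholders.
import numpy as np
from sklearn.linear_model import Ridge

n_samples, n_channels, n_lags = 6000, 25, 16       # ~94 s at 64 Hz, 0-250 ms lags

rng = np.random.default_rng(1)
eeg = rng.normal(size=(n_samples, n_channels))
envelope = rng.normal(size=n_samples)              # attended-speech envelope (placeholder)

def lagged(x, n_lags):
    """Stack time-lagged copies of each channel into a design matrix."""
    rows = len(x) - n_lags + 1
    cols = [x[lag:lag + rows] for lag in range(n_lags)]
    return np.concatenate(cols, axis=1)

X = lagged(eeg, n_lags)                            # (rows, n_channels * n_lags)
y = envelope[n_lags - 1:]

decoder = Ridge(alpha=1.0).fit(X[:4000], y[:4000])
pred = decoder.predict(X[4000:])
r = np.corrcoef(pred, y[4000:])[0, 1]              # decoding score: envelope correlation
print(f"reconstruction correlation r = {r:.2f}")
```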
A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting
Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao
2014-01-01
We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813
Decoding Facial Expressions: A New Test with Decoding Norms.
ERIC Educational Resources Information Center
Leathers, Dale G.; Emigh, Ted H.
1980-01-01
Describes the development and testing of a new facial meaning sensitivity test designed to determine how specialized are the meanings that can be decoded from facial expressions. Demonstrates the use of the test to measure a receiver's current level of skill in decoding facial expressions. (JMF)
Edge-Related Activity Is Not Necessary to Explain Orientation Decoding in Human Visual Cortex.
Wardle, Susan G; Ritchie, J Brendan; Seymour, Kiley; Carlson, Thomas A
2017-02-01
Multivariate pattern analysis is a powerful technique; however, a significant theoretical limitation in neuroscience is the ambiguity in interpreting the source of decodable information used by classifiers. This is exemplified by the continued controversy over the source of orientation decoding from fMRI responses in human V1. Recently Carlson (2014) identified a potential source of decodable information by modeling voxel responses based on the Hubel and Wiesel (1972) ice-cube model of visual cortex. The model revealed that activity associated with the edges of gratings covaries with orientation and could potentially be used to discriminate orientation. Here we empirically evaluate whether "edge-related activity" underlies orientation decoding from patterns of BOLD response in human V1. First, we systematically mapped classifier performance as a function of stimulus location using population receptive field modeling to isolate each voxel's overlap with a large annular grating stimulus. Orientation was decodable across the stimulus; however, peak decoding performance occurred for voxels with receptive fields closer to the fovea and overlapping with the inner edge. Critically, we did not observe the expected second peak in decoding performance at the outer stimulus edge as predicted by the edge account. Second, we evaluated whether voxels that contribute most to classifier performance have receptive fields that cluster in cortical regions corresponding to the retinotopic location of the stimulus edge. Instead, we find the distribution of highly weighted voxels to be approximately random, with a modest bias toward more foveal voxels. Our results demonstrate that edge-related activity is likely not necessary for orientation decoding. A significant theoretical limitation of multivariate pattern analysis in neuroscience is the ambiguity in interpreting the source of decodable information used by classifiers. For example, orientation can be decoded from BOLD activation patterns in human V1, even though orientation columns are at a finer spatial scale than 3T fMRI. Consequently, the source of decodable information remains controversial. Here we test the proposal that information related to the stimulus edges underlies orientation decoding. We map voxel population receptive fields in V1 and evaluate orientation decoding performance as a function of stimulus location in retinotopic cortex. We find orientation is decodable from voxels whose receptive fields do not overlap with the stimulus edges, suggesting edge-related activity does not substantially drive orientation decoding. Copyright © 2017 the authors 0270-6474/17/371187-10$15.00/0.
Tail Biting Trellis Representation of Codes: Decoding and Construction
NASA Technical Reports Server (NTRS)
Shao, Rose Y.; Lin, Shu; Fossorier, Marc
1999-01-01
This paper presents two new iterative algorithms for decoding linear codes based on their tail biting trellises, one unidirectional and the other bidirectional. Both algorithms are computationally efficient and achieve virtually optimum error performance with a small number of decoding iterations. They outperform all the previous suboptimal decoding algorithms. The bidirectional algorithm also reduces decoding delay. Also presented in the paper is a method for constructing tail biting trellises for linear block codes.
Visual perception as retrospective Bayesian decoding from high- to low-level features.
Ding, Stephanie; Cueva, Christopher J; Tsodyks, Misha; Qian, Ning
2017-10-24
When a stimulus is presented, its encoding is known to progress from low- to high-level features. How these features are decoded to produce perception is less clear, and most models assume that decoding follows the same low- to high-level hierarchy of encoding. There are also theories arguing for global precedence, reversed hierarchy, or bidirectional processing, but they are descriptive without quantitative comparison with human perception. Moreover, observers often inspect different parts of a scene sequentially to form overall perception, suggesting that perceptual decoding requires working memory, yet few models consider how working-memory properties may affect decoding hierarchy. We probed decoding hierarchy by comparing absolute judgments of single orientations and relative/ordinal judgments between two sequentially presented orientations. We found that lower-level, absolute judgments failed to account for higher-level, relative/ordinal judgments. However, when ordinal judgment was used to retrospectively decode memory representations of absolute orientations, striking aspects of absolute judgments, including the correlation and forward/backward aftereffects between two reported orientations in a trial, were explained. We propose that the brain prioritizes decoding of higher-level features because they are more behaviorally relevant, and more invariant and categorical, and thus easier to specify and maintain in noisy working memory, and that more reliable higher-level decoding constrains less reliable lower-level decoding. Published under the PNAS license.
Decoding and Encoding Facial Expressions in Preschool-Age Children.
ERIC Educational Resources Information Center
Zuckerman, Miron; Przewuzman, Sylvia J.
1979-01-01
Preschool-age children drew, decoded, and encoded facial expressions depicting five different emotions. Accuracy of drawing, decoding and encoding each of the five emotions was consistent across the three tasks; decoding ability was correlated with drawing ability among female subjects, but neither of these abilities was correlated with encoding…
Multichannel error correction code decoder
NASA Technical Reports Server (NTRS)
Wagner, Paul K.; Ivancic, William D.
1993-01-01
A brief overview of a processing satellite for a mesh very-small-aperture (VSAT) communications network is provided. The multichannel error correction code (ECC) decoder system, the uplink signal generation and link simulation equipment, and the time-shared decoder are described. The testing is discussed. Applications of the time-shared decoder are recommended.
A software simulation study of a (255,223) Reed-Solomon encoder-decoder
NASA Technical Reports Server (NTRS)
Pollara, F.
1985-01-01
A set of software programs which simulates a (255,223) Reed-Solomon encoder/decoder pair is described. The transform decoder algorithm uses a modified Euclid algorithm, and closely follows the pipeline architecture proposed for the hardware decoder. Uncorrectable error patterns are detected by a simple test, and the inverse transform is computed by a finite field FFT. Numerical examples of the decoder operation are given for some test codewords, with and without errors. The use of the software package is briefly described.
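For readers who want to reproduce the (255,223) behaviour in software without reimplementing the transform decoder, a third-party package such as reedsolo can serve as a reference implementation (this is an assumption about tooling, not the decoder described above, and the decode return value may differ between package versions):

```python
# Quick software check of (255,223) Reed-Solomon behaviour, assuming the
# third-party `reedsolo` package (pip install reedsolo). This illustrates the
# code's 16-symbol error-correcting capability, not the transform decoder above.
from reedsolo import RSCodec

rs = RSCodec(32)                      # 32 parity symbols -> corrects up to 16 symbol errors
message = bytes(range(223))           # one full 223-byte information block
codeword = rs.encode(message)         # 255 bytes

corrupted = bytearray(codeword)
for i in range(0, 160, 10):           # inject 16 symbol errors
    corrupted[i] ^= 0xFF

# newer reedsolo versions return (message, message+parity, error positions)
decoded = rs.decode(bytes(corrupted))[0]
assert bytes(decoded) == message
print("corrected 16 symbol errors")
```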
Error-trellis syndrome decoding techniques for convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1985-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding when applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
High data rate Reed-Solomon encoding and decoding using VLSI technology
NASA Technical Reports Server (NTRS)
Miller, Warner; Morakis, James
1987-01-01
Presented is an implementation of a Reed-Solomon encoder and decoder that corrects up to 16 symbol errors, with each symbol being 8 bits. This Reed-Solomon (RS) code is an efficient error correcting code that the National Aeronautics and Space Administration (NASA) will use in future space communications missions. A Very Large Scale Integration (VLSI) implementation of the encoder and decoder accepts data rates up to 80 Mbps. A total of seven chips are needed for the decoder (four of the seven decoding chips are customized using 3-micron Complementary Metal Oxide Semiconductor (CMOS) technology) and one chip is required for the encoder. The decoder operates with the symbol clock being the system clock for the chip set. Approximately 1.65 billion Galois Field (GF) operations per second are achieved with the decoder chip set and 640 MOPS are achieved with the encoder chip.
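The datapath operations counted above are arithmetic in GF(2^8). For reference, here is a minimal software version of the field multiplication such hardware implements; the primitive polynomial 0x11d is a common choice but is an assumption here, since the abstract does not specify the polynomial used:

```python
# Minimal GF(2^8) multiplication of the kind a Reed-Solomon datapath performs.
# The primitive polynomial 0x11d (x^8 + x^4 + x^3 + x^2 + 1) is a common choice;
# the polynomial of the actual NASA code is not specified in the abstract.
def gf256_mul(a: int, b: int, prim: int = 0x11D) -> int:
    product = 0
    while b:
        if b & 1:
            product ^= a            # add (XOR) the current shifted copy of a
        b >>= 1
        a <<= 1
        if a & 0x100:               # reduce modulo the primitive polynomial
            a ^= prim
    return product

assert gf256_mul(0x57, 0x01) == 0x57   # multiplying by 1 is the identity
assert gf256_mul(0x80, 0x02) == 0x1D   # overflow wraps via the reduction polynomial
```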
The basis of orientation decoding in human primary visual cortex: fine- or coarse-scale biases?
Maloney, Ryan T
2015-01-01
Orientation signals in human primary visual cortex (V1) can be reliably decoded from the multivariate pattern of activity as measured with functional magnetic resonance imaging (fMRI). The precise underlying source of these decoded signals (whether orientation biases at a fine or coarse scale in cortex) remains a matter of some controversy, however. Freeman and colleagues (J Neurosci 33: 19695-19703, 2013) recently showed that the accuracy of decoding of spiral patterns in V1 can be predicted by a voxel's preferred spatial position (the population receptive field) and its coarse orientation preference, suggesting that coarse-scale biases are sufficient for orientation decoding. Whether they are also necessary for decoding remains an open question, and one with implications for the broader interpretation of multivariate decoding results in fMRI studies. Copyright © 2015 the American Physiological Society.
Emotion Decoding and Incidental Processing Fluency as Antecedents of Attitude Certainty.
Petrocelli, John V; Whitmire, Melanie B
2017-07-01
Previous research demonstrates that attitude certainty influences the degree to which an attitude changes in response to persuasive appeals. In the current research, decoding emotions from facial expressions and incidental processing fluency, during attitude formation, are examined as antecedents of both attitude certainty and attitude change. In Experiment 1, participants who decoded anger or happiness during attitude formation expressed greater attitude certainty and showed more resistance to persuasion than participants who decoded sadness. By manipulating the emotion decoded, the diagnosticity of processing fluency experienced during emotion decoding, and the gaze direction of the social targets, Experiment 2 suggests that the link between emotion decoding and attitude certainty results from incidental processing fluency. Experiment 3 demonstrated that fluency in processing irrelevant stimuli influences attitude certainty, which in turn influences resistance to persuasion. Implications for appraisal-based accounts of attitude formation and attitude change are discussed.
Deep Learning Methods for Improved Decoding of Linear Codes
NASA Astrophysics Data System (ADS)
Nachmani, Eliya; Marciano, Elad; Lugosch, Loren; Gross, Warren J.; Burshtein, David; Be'ery, Yair
2018-02-01
The problem of low-complexity, close-to-optimal channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly fewer parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close-to-optimal decoder of short BCH codes.
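The baseline that these neural decoders augment is iterative message passing on a Tanner graph; in the weighted min-sum variant, each check-to-variable message gets a learnable multiplicative weight. A small illustrative sketch on a (7,4) Hamming code, with all weights fixed at 1.0 (the trainable part is only indicated, and the code and LLR values are placeholders):

```python
# Sketch of (weighted) min-sum belief propagation on a tiny parity-check code,
# the classical baseline that neural decoders augment by learning per-edge
# weights. Here the weights are simply 1.0; in a neural min-sum decoder they
# would be trained. Code and parameters are illustrative.
import numpy as np

H = np.array([[1, 1, 0, 1, 1, 0, 0],      # (7,4) Hamming parity-check matrix
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def min_sum_decode(llr, H, weights=None, n_iters=5):
    """Return hard decisions after flooding min-sum iterations on channel LLRs."""
    m, n = H.shape
    if weights is None:
        weights = np.ones((m, n))          # learnable in a neural min-sum decoder
    v2c = np.tile(llr, (m, 1)) * H         # variable-to-check messages
    c2v = np.zeros((m, n))
    for _ in range(n_iters):
        for c in range(m):                 # check-node update (min-sum rule)
            vs = np.flatnonzero(H[c])
            for v in vs:
                others = vs[vs != v]
                mags = np.abs(v2c[c, others])
                signs = np.prod(np.sign(v2c[c, others]))
                c2v[c, v] = weights[c, v] * signs * mags.min()
        total = llr + c2v.sum(axis=0)      # variable-node update
        v2c = (np.tile(total, (m, 1)) - c2v) * H
    return (total < 0).astype(int)

llr = np.array([2.1, -0.4, 1.5, 0.3, 1.9, 1.2, 1.1])  # all-zeros codeword, one unreliable bit
print(min_sum_decode(llr, H))                          # -> [0 0 0 0 0 0 0], error corrected
```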
Decoding Children's Expressions of Affect.
ERIC Educational Resources Information Center
Feinman, Joel A.; Feldman, Robert S.
Mothers' ability to decode the emotional expressions of their male and female children was compared to the decoding ability of non-mothers. Happiness, sadness, fear and anger were induced in children in situations that varied in terms of spontaneous and role-played encoding modes. It was hypothesized that mothers would be more accurate decoders of…
Decoding Area Studies and Interdisciplinary Majors: Building a Framework for Entry-Level Students
ERIC Educational Resources Information Center
MacPherson, Kristina Ruth
2015-01-01
Decoding disciplinary expertise for novices is increasingly part of the undergraduate curriculum. But how might area studies and other interdisciplinary programs, which require integration of courses from multiple disciplines, decode expertise in a similar fashion? Additionally, as a part of decoding area studies and interdisciplines, how might a…
47 CFR 11.12 - Two-tone Attention Signal encoder and decoder.
Code of Federal Regulations, 2011 CFR
2011-10-01
Title 47 (Telecommunication), Emergency Alert System (EAS), General, § 11.12 Two-tone Attention Signal encoder and decoder. Existing two-tone Attention Signal encoder and decoder equipment type accepted for use as Emergency Broadcast System equipment under...
47 CFR 11.12 - Two-tone Attention Signal encoder and decoder.
Code of Federal Regulations, 2010 CFR
2010-10-01
Title 47 (Telecommunication), Emergency Alert System (EAS), General, § 11.12 Two-tone Attention Signal encoder and decoder. Existing two-tone Attention Signal encoder and decoder equipment type accepted for use as Emergency Broadcast System equipment under...
Sequential Syndrome Decoding of Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
The algebraic structure of convolutional codes is reviewed, and sequential syndrome decoding is applied to those codes. These concepts are then used to realize, by example, actual sequential decoding using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be utilized to sequentially find the minimum weight error sequence.
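The stack algorithm ranks partial paths by their accumulated Fano metric. As a reference, the per-bit metric for a binary symmetric channel works out as below; the crossover probability and code rate are illustrative assumptions, not values from the report:

```python
# Per-bit Fano metric for sequential (stack) decoding over a binary symmetric
# channel. The BSC crossover probability and code rate below are illustrative.
from math import log2

def fano_bit_metric(received_bit: int, branch_bit: int, p: float, rate: float) -> float:
    """log2 P(r|c) - log2 P(r) - R, with P(r) = 1/2 for a symmetric channel."""
    likelihood = (1 - p) if received_bit == branch_bit else p
    return log2(2 * likelihood) - rate

p, rate = 0.05, 0.5                      # assumed BSC and rate-1/2 code
print(fano_bit_metric(1, 1, p, rate))    # agreeing bit: small positive credit (~0.43)
print(fano_bit_metric(1, 0, p, rate))    # disagreeing bit: large penalty (~-3.82)
```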
On decoding of multi-level MPSK modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Gupta, Alok Kumar
1990-01-01
The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short-length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder by reducing the branch metric and path metric, using a non-uniform floating-point to integer mapping scheme, is proposed and discussed. The simulation results of the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that the soft-decision MSD reduces the decoding complexity drastically and that it is suboptimum. The hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard-decision multistage decoding.
Soltani, Amanallah; Roslan, Samsilah
2013-03-01
Reading decoding ability is a fundamental skill for acquiring the word-specific orthographic information necessary for skilled reading. Decoding ability and its underlying phonological processing skills have been heavily investigated among typically developing students. However, the issue has rarely been examined among students with intellectual disability, who commonly suffer from reading decoding problems. This study is aimed at determining the contributions of phonological awareness, phonological short-term memory, and rapid automated naming, as three well-known phonological processing skills, to decoding ability among 60 participants with mild intellectual disability of unspecified origin, ranging from 15 to 23 years old. The results of the correlation analysis revealed that all three aspects of phonological processing are significantly correlated with decoding ability. Furthermore, a series of hierarchical regression analyses indicated that, after controlling for the effect of IQ, phonological awareness and rapid automated naming are two distinct sources of decoding ability, but phonological short-term memory contributes significantly to decoding ability under the realm of phonological awareness. Copyright © 2013 Elsevier Ltd. All rights reserved.
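The hierarchical-regression logic described above (enter IQ first, then test whether the phonological predictors add explained variance) can be sketched as follows; the data, variable names and model are placeholders, not the study's dataset or exact analysis:

```python
# Sketch of a hierarchical regression: enter IQ first, then test whether the
# phonological predictors add variance in decoding ability. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 60
df = pd.DataFrame({
    "iq": rng.normal(70, 5, n),
    "phon_awareness": rng.normal(0, 1, n),
    "rapid_naming": rng.normal(0, 1, n),
    "stm": rng.normal(0, 1, n),
})
df["decoding"] = 0.4 * df.phon_awareness + 0.3 * df.rapid_naming + rng.normal(0, 1, n)

step1 = smf.ols("decoding ~ iq", data=df).fit()
step2 = smf.ols("decoding ~ iq + phon_awareness + rapid_naming + stm", data=df).fit()

# R-squared change attributable to the phonological predictors after IQ
print(f"delta R^2 = {step2.rsquared - step1.rsquared:.3f}")
```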
Walking tree heuristics for biological string alignment, gene location, and phylogenies
NASA Astrophysics Data System (ADS)
Cull, P.; Holloway, J. L.; Cavener, J. D.
1999-03-01
Basic biological information is stored in strings of nucleic acids (DNA, RNA) or amino acids (proteins). Teasing out the meaning of these strings is a central problem of modern biology. Matching and aligning strings brings out their shared characteristics. Although string matching is well understood in the edit-distance model, biological strings with transpositions and inversions violate this model's assumptions. We propose a family of heuristics called walking trees to align biologically reasonable strings. Both edit-distance and walking tree methods can locate specific genes within a large string when the genes' sequences are given. When we attempt to match whole strings, the walking tree matches most genes, while the edit-distance method fails. We also give examples in which the walking tree matches substrings even if they have been moved or inverted. The edit-distance method was not designed to handle these problems. We include an example in which the walking tree "discovered" a gene. Calculating scores for whole genome matches gives a method for approximating evolutionary distance. We show two evolutionary trees for the picornaviruses which were computed by the walking tree heuristic. Both of these trees show great similarity to previously constructed trees. The point of this demonstration is that WHOLE genomes can be matched and distances calculated. The first tree was created on a Sequent parallel computer and demonstrates that the walking tree heuristic can be efficiently parallelized. The second tree was created using a network of workstations and demonstrates that there is sufficient parallelism in the phylogenetic tree calculation that the sequential walking tree can be used effectively on a network.
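For context, the edit-distance model that the walking tree is contrasted with is the standard dynamic-programming recurrence below (a reference baseline, not the walking tree heuristic itself):

```python
# The baseline the walking-tree heuristic is compared against: standard
# edit (Levenshtein) distance by dynamic programming. This is the model that
# breaks down for transpositions and inversions, as the abstract notes.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution / match
        prev = curr
    return prev[-1]

print(edit_distance("GATTACA", "GCATGCA"))   # -> 3
```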
Grasp movement decoding from premotor and parietal cortex.
Townsend, Benjamin R; Subasi, Erk; Scherberger, Hansjörg
2011-10-05
Despite recent advances in harnessing cortical motor-related activity to control computer cursors and robotic devices, the ability to decode and execute different grasping patterns remains a major obstacle. Here we demonstrate a simple Bayesian decoder for real-time classification of grip type and wrist orientation in macaque monkeys that uses higher-order planning signals from anterior intraparietal cortex (AIP) and ventral premotor cortex (area F5). Real-time decoding was based on multiunit signals, which had similar tuning properties to cells in previous single-unit recording studies. Maximum decoding accuracy for two grasp types (power and precision grip) and five wrist orientations was 63% (chance level, 10%). Analysis of decoder performance showed that grip type decoding was highly accurate (90.6%), with most errors occurring during orientation classification. In a subsequent off-line analysis, we found small but significant performance improvements (mean, 6.25 percentage points) when using an optimized spike-sorting method (superparamagnetic clustering). Furthermore, we observed significant differences in the contributions of F5 and AIP for grasp decoding, with F5 being better suited for classification of the grip type and AIP contributing more toward decoding of object orientation. However, optimum decoding performance was maximal when using neural activity simultaneously from both areas. Overall, these results highlight quantitative differences in the functional representation of grasp movements in AIP and F5 and represent a first step toward using these signals for developing functional neural interfaces for hand grasping.
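A "simple Bayesian decoder" of the kind described above can be illustrated with a naive Bayes classifier over multiunit spike counts. The Poisson observation model, the simulated tuning and the two grip classes below are assumptions for illustration, not the authors' model:

```python
# Toy naive Bayes decoder over multiunit spike counts, in the spirit of the
# grip-type classification described above. The Poisson observation model and
# the simulated class-specific rates are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_units, classes = 40, ["power", "precision"]
rates = {c: rng.uniform(2, 20, n_units) for c in classes}   # mean counts per class

def log_likelihood(counts, lam):
    # Poisson log-likelihood up to a constant (the log-factorial term cancels
    # when comparing classes for the same observed counts)
    return np.sum(counts * np.log(lam) - lam)

def decode(counts):
    return max(classes, key=lambda c: log_likelihood(counts, rates[c]))

trial = rng.poisson(rates["precision"])        # one simulated trial of spike counts
print(decode(trial))                           # expected to be "precision"
```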
An Improved Unscented Kalman Filter Based Decoder for Cortical Brain-Machine Interfaces
Li, Simin; Li, Jie; Li, Zheng
2016-01-01
Brain-machine interfaces (BMIs) seek to connect brains with machines or computers directly, for application in areas such as prosthesis control. For this application, the accuracy of the decoding of movement intentions is crucial. We aim to improve accuracy by designing a better encoding model of primary motor cortical activity during hand movements and combining this with decoder engineering refinements, resulting in a new unscented Kalman filter based decoder, UKF2, which improves upon our previous unscented Kalman filter decoder, UKF1. The new encoding model includes novel acceleration magnitude, position-velocity interaction, and target-cursor-distance features (the decoder does not require target position as input, it is decoded). We add a novel probabilistic velocity threshold to better determine the user's intent to move. We combine these improvements with several other refinements suggested by others in the field. Data from two Rhesus monkeys indicate that the UKF2 generates offline reconstructions of hand movements (mean CC 0.851) significantly more accurately than the UKF1 (0.833) and the popular position-velocity Kalman filter (0.812). The encoding model of the UKF2 could predict the instantaneous firing rate of neurons (mean CC 0.210), given kinematic variables and past spiking, better than the encoding models of these two decoders (UKF1: 0.138, p-v Kalman: 0.098). In closed-loop experiments where each monkey controlled a computer cursor with each decoder in turn, the UKF2 facilitated faster task completion (mean 1.56 s vs. 2.05 s) and higher Fitts's Law bit rate (mean 0.738 bit/s vs. 0.584 bit/s) than the UKF1. These results suggest that the modeling and decoder engineering refinements of the UKF2 improve decoding performance. We believe they can be used to enhance other decoders as well. PMID:28066170
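The decoder family above is built on the generic unscented Kalman filter. A minimal position-velocity tracking sketch using the third-party filterpy package is shown below; the state, observation model and noise levels are placeholders, and this is the vanilla filter rather than the UKF2 encoding model:

```python
# Generic unscented Kalman filter over a position-velocity state, using the
# third-party `filterpy` package (pip install filterpy). This is the vanilla
# filter that decoders like UKF1/UKF2 build on, not the authors' full model;
# dimensions, noise levels and the observation model below are placeholders.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 0.05                                  # 50 ms decoding bins

def fx(x, dt):
    # constant-velocity kinematic model over the state [px, py, vx, vy]
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    return F @ x

def hx(x):
    # placeholder observation model: "neural" features as a linear readout of the state
    H = np.ones((10, 4)) * 0.1
    return H @ x

points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=10, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.zeros(4)
ukf.P *= 0.1
ukf.R *= 0.5                               # observation (firing-rate) noise
ukf.Q *= 0.01                              # state (kinematic) noise

z = np.random.default_rng(0).normal(size=10)   # one bin of simulated neural features
ukf.predict()
ukf.update(z)
print("decoded position:", ukf.x[:2])
```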
NASA Technical Reports Server (NTRS)
Lahmeyer, Charles R. (Inventor)
1987-01-01
A Reed-Solomon decoder with dedicated hardware for five sequential algorithms was designed with overall pipelining by memory swapping between input, processing and output memories, and internal pipelining through the five algorithms. The code definition used in decoding is specified by a keyword received with each block of data so that a number of different code formats may be decoded by the same hardware.
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Mo, C. D.
1978-01-01
An empirical study of the performance of the Viterbi decoders in bursty channels was carried out and an improved algebraic decoder for nonsystematic codes was developed. The hybrid algorithm was simulated for the (2,1), k = 7 code on a computer using 20 channels having various error statistics, ranging from pure random error to pure bursty channels. The hybrid system outperformed both the algebraic and the Viterbi decoders in every case, except the 1% random error channel where the Viterbi decoder had one bit less decoding error.
Large-Constraint-Length, Fast Viterbi Decoder
NASA Technical Reports Server (NTRS)
Collins, O.; Dolinar, S.; Hsu, In-Shek; Pollara, F.; Olson, E.; Statman, J.; Zimmerman, G.
1990-01-01
Scheme for efficient interconnection makes VLSI design feasible. Concept for fast Viterbi decoder provides for processing of convolutional codes of constraint length K up to 15 and rates of 1/2 to 1/6. Fully parallel (but bit-serial) architecture developed for decoder of K = 7 implemented in single dedicated VLSI circuit chip. Contains six major functional blocks. VLSI circuits perform branch metric computations, add-compare-select operations, and then store decisions in traceback memory. Traceback processor reads appropriate memory locations and puts out decoded bits. Used as building block for decoders of larger K.
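For scale, the same add-compare-select and traceback structure can be written down for a textbook rate-1/2, constraint-length-3 code (generators 7 and 5 in octal); the hardware described above does this for K up to 15 and lower rates. A small reference sketch:

```python
# Textbook hard-decision Viterbi decoder for the rate-1/2, K=3 convolutional
# code with generators (7, 5) octal: same add-compare-select and traceback
# structure as the large-K hardware, at toy scale.
G = (0b111, 0b101)                 # generator polynomials, constraint length 3

def conv_encode(bits):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        out.extend(bin(state & g).count("1") & 1 for g in G)
    return out

def viterbi_decode(received, n_bits):
    n_states = 4                   # 2^(K-1) encoder states
    INF = float("inf")
    metrics = [0] + [INF] * (n_states - 1)
    history = []
    for t in range(n_bits):
        r = received[2 * t:2 * t + 2]
        new_metrics = [INF] * n_states
        choice = [None] * n_states
        for s in range(n_states):            # previous state
            if metrics[s] == INF:
                continue
            for b in (0, 1):                 # hypothesised input bit
                full = (s << 1) | b          # 3-bit register contents
                nxt = full & 0b11
                expected = [bin(full & g).count("1") & 1 for g in G]
                branch = sum(x != y for x, y in zip(r, expected))
                m = metrics[s] + branch      # add
                if m < new_metrics[nxt]:     # compare-select
                    new_metrics[nxt] = m
                    choice[nxt] = (s, b)
        metrics = new_metrics
        history.append(choice)
    state = metrics.index(min(metrics))      # traceback from the best final state
    decoded = []
    for choice in reversed(history):
        state, b = choice[state]
        decoded.append(b)
    return decoded[::-1]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert viterbi_decode(conv_encode(msg), len(msg)) == msg   # noiseless round trip
```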
Locating and decoding barcodes in fuzzy images captured by smart phones
NASA Astrophysics Data System (ADS)
Deng, Wupeng; Hu, Jiwei; Liu, Quan; Lou, Ping
2017-07-01
With the development of barcodes for commercial use, the need to detect barcodes with smart phones has become increasingly pressing. The low quality of barcode images captured by mobile phones adversely affects decoding and recognition rates. This paper focuses on locating and decoding EAN-13 barcodes in fuzzy images. We present a more accurate locating algorithm based on segment length and a decoding algorithm with high fault tolerance. Unlike existing approaches, the location algorithm is based on the edge segment lengths of EAN-13 barcodes, while our decoding algorithm tolerates fuzzy regions in the barcode image. Experiments are performed on damaged, contaminated and scratched digital images and provide quite promising results for EAN-13 barcode location and decoding.
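Whatever the localisation method, a decoded EAN-13 digit string is normally accepted only if its check digit is consistent, which gives the decoder some tolerance to misread bars. A minimal check-digit test (the example number is a commonly used illustration, not from the paper):

```python
# EAN-13 checksum test: the last digit must match the weighted sum of the
# first twelve digits (weights alternate 1 and 3).
def ean13_is_valid(digits: str) -> bool:
    if len(digits) != 13 or not digits.isdigit():
        return False
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits[:12]))
    check = (10 - total % 10) % 10
    return check == int(digits[-1])

print(ean13_is_valid("4006381333931"))   # True: a commonly used example number
print(ean13_is_valid("4006381333932"))   # False: corrupted check digit
```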
Discovery of the "RNA continent" through a contrarian's research strategy.
Hayashizaki, Yoshihide
2011-01-01
The International Human Genome Sequencing Consortium completed the decoding of the human genome sequence in 2003. Readers will be aware of the paradigm shift which has occurred since then in the field of life science research. At last, mankind has been able to focus on a complete picture of the full extent of the genome, on which is recorded the basic information that controls all life. Meanwhile, another genome project, centered on Japan and known as the mouse genome encyclopedia project, was progressing with participation from around the world. Led by our research group at RIKEN, it was a full-length cDNA project which aimed to decode the whole RNA (transcriptome) using the mouse as a model. The basic information that controls all life is recorded on the genome, but in order to obtain a complete picture of this extensive information, the decoding of the genome alone is far from sufficient. These two genome projects established that the number of letters in the genome, which is the blueprint of life, is finite, that the number of RNA molecules derived from it is also finite, and that the number of protein molecules derived from the RNA is probably finite too. A massive number of combinations is still involved, but we are now able to understand one section of the network formed by these data. Once an object of study has been understood to be finite, establishing an image of the whole is certain to lead us to an understanding of the whole. Omics is an approach that views the information controlling life as finite and seeks to assemble and analyze it as a whole. Here, I would like to present our transcriptome research while making reference to our unique research strategy.
Validity of the two-level model for Viterbi decoder gap-cycle performance
NASA Technical Reports Server (NTRS)
Dolinar, S.; Arnold, S.
1990-01-01
A two-level model has previously been proposed for approximating the performance of a Viterbi decoder which encounters data received with periodically varying signal-to-noise ratio. Such cyclically gapped data is obtained from the Very Large Array (VLA), either operating as a stand-alone system or arrayed with Goldstone. This approximate model predicts that the decoder error rate will vary periodically between two discrete levels with the same period as the gap cycle. It further predicts that the length of the gapped portion of the decoder error cycle for a constraint length K decoder will be about K-1 bits shorter than the actual duration of the gap. The two-level model for Viterbi decoder performance with gapped data is subjected to detailed validation tests. Curves showing the cyclical behavior of the decoder error burst statistics are compared with the simple square-wave cycles predicted by the model. The validity of the model depends on a parameter often considered irrelevant in the analysis of Viterbi decoder performance, the overall scaling of the received signal or the decoder's branch-metrics. Three scaling alternatives are examined: optimum branch-metric scaling and constant branch-metric scaling combined with either constant noise-level scaling or constant signal-level scaling. The simulated decoder error cycle curves roughly verify the accuracy of the two-level model for both the case of optimum branch-metric scaling and the case of constant branch-metric scaling combined with constant noise-level scaling. However, the model is not accurate for the case of constant branch-metric scaling combined with constant signal-level scaling.
Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation
Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel
2013-01-01
Distributed video coding (DVC) is rapidly increasing in popularity by way of shifting complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is explored at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistic between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where estimation can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance but with significantly lower complexity compared with sampling methods. PMID:23750314
NASA Astrophysics Data System (ADS)
Shimoda, Kentaro; Nagasaka, Yasuo; Chao, Zenas C.; Fujii, Naotaka
2012-06-01
Brain-machine interface (BMI) technology captures brain signals to enable control of prosthetic or communication devices with the goal of assisting patients who have limited or no ability to perform voluntary movements. Decoding of inherent information in brain signals to interpret the user's intention is one of the main approaches for developing BMI technology. Subdural electrocorticography (sECoG)-based decoding provides good accuracy, but surgical complications are one of the major concerns for this approach to be applied in BMIs. In contrast, epidural electrocorticography (eECoG) is less invasive, thus it is theoretically more suitable for long-term implementation, although it is unclear whether eECoG signals carry sufficient information for decoding natural movements. We successfully decoded continuous three-dimensional hand trajectories from eECoG signals in Japanese macaques. A steady quantity of information about continuous hand movements could be acquired from the decoding system for at least several months, and a decoding model could be used for ~10 days without significant degradation in accuracy or recalibration. The correlation coefficients between observed and predicted trajectories were lower than those for sECoG-based decoding experiments we previously reported, owing to a greater degree of chewing artifacts in eECoG-based decoding than is found in sECoG-based decoding. As one of the safest invasive recording methods available, eECoG provides an acceptable level of performance. With the ease of replacement and upgrades, eECoG systems could become the first-choice interface for real-life BMI applications.
Recent advances in coding theory for near error-free communications
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.
1991-01-01
Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.
Fast transform decoding of nonsystematic Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Truong, T. K.; Cheung, K.-M.; Reed, I. S.; Shiozaki, A.
1989-01-01
A Reed-Solomon (RS) code is considered to be a special case of a redundant residue polynomial (RRP) code, and a fast transform decoding algorithm to correct both errors and erasures is presented. This decoding scheme is an improvement of the decoding algorithm for the RRP code suggested by Shiozaki and Nishida, and can be realized readily on very large scale integration chips.
ERIC Educational Resources Information Center
Squires, Katie Ellen
2013-01-01
This study investigated the differential contribution of auditory-verbal and visuospatial working memory (WM) on decoding skills in second- and fifth-grade children identified with poor decoding. Thirty-two second-grade students and 22 fifth-grade students completed measures that assessed simple and complex auditory-verbal and visuospatial memory,…
Polar Coding with CRC-Aided List Decoding
2015-08-01
TECHNICAL REPORT 2087, August 2015. Polar Coding with CRC-Aided List Decoding, David Wasserman. RESULTS: Our simulation results show that polar coding can produce results very similar to the FEC used in the Digital Video Broadcasting (DVB-S2) standard. RECOMMENDATIONS: In any application for which the DVB-S2 FEC is considered, polar coding with CRC-aided list decoding with N = 65536...
Decoding position, velocity, or goal: does it matter for brain-machine interfaces?
Marathe, A R; Taylor, D M
2011-04-01
Arm end-point position, end-point velocity, and the intended final location or 'goal' of a reach have all been decoded from cortical signals for use in brain-machine interface (BMI) applications. These different aspects of arm movement can be decoded from the brain and used directly to control the position, velocity, or movement goal of a device. However, these decoded parameters can also be remapped to control different aspects of movement, such as using the decoded position of the hand to control the velocity of a device. People easily learn to use the position of a joystick to control the velocity of an object in a videogame. Similarly, in BMI systems, the position, velocity, or goal of a movement could be decoded from the brain and remapped to control some other aspect of device movement. This study evaluates how easily people make transformations between position, velocity, and reach goal in BMI systems. It also evaluates how different amounts of decoding error impact device control with and without these transformations. Results suggest some remapping options can significantly improve BMI control. This study provides guidance on which remapping options to use when various amounts of decoding error are present.
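The position-to-velocity remapping discussed above is essentially joystick-style integration of a decoded command. A toy sketch (gain, time step and the decoded values are arbitrary illustrations):

```python
# Sketch of the position-to-velocity remapping discussed above: a decoded hand
# position is treated like a joystick deflection and integrated into device
# velocity. Gains and time step are illustrative.
import numpy as np

dt, gain = 0.05, 1.5                 # 50 ms update, arbitrary velocity gain
cursor = np.zeros(2)

def velocity_control_step(decoded_position, cursor):
    velocity = gain * decoded_position        # remap: position command -> velocity
    return cursor + velocity * dt

for decoded in [np.array([0.2, 0.0]), np.array([0.2, 0.1]), np.array([0.0, 0.1])]:
    cursor = velocity_control_step(decoded, cursor)
print("cursor after three updates:", cursor)
```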
Encoder-Decoder Optimization for Brain-Computer Interfaces
Merel, Josh; Pianto, Donald M.; Cunningham, John P.; Paninski, Liam
2015-01-01
Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages. PMID:26029919
Decoding position, velocity, or goal: Does it matter for brain-machine interfaces?
NASA Astrophysics Data System (ADS)
Marathe, A. R.; Taylor, D. M.
2011-04-01
Arm end-point position, end-point velocity, and the intended final location or 'goal' of a reach have all been decoded from cortical signals for use in brain-machine interface (BMI) applications. These different aspects of arm movement can be decoded from the brain and used directly to control the position, velocity, or movement goal of a device. However, these decoded parameters can also be remapped to control different aspects of movement, such as using the decoded position of the hand to control the velocity of a device. People easily learn to use the position of a joystick to control the velocity of an object in a videogame. Similarly, in BMI systems, the position, velocity, or goal of a movement could be decoded from the brain and remapped to control some other aspect of device movement. This study evaluates how easily people make transformations between position, velocity, and reach goal in BMI systems. It also evaluates how different amounts of decoding error affect device control with and without these transformations. Results suggest that some remapping options can significantly improve BMI control. This study provides guidance on what remapping options to use when various amounts of decoding error are present.
Improved HDRG decoders for qudit and non-Abelian quantum error correction
NASA Astrophysics Data System (ADS)
Hutter, Adrian; Loss, Daniel; Wootton, James R.
2015-03-01
Hard-decision renormalization group (HDRG) decoders are an important class of decoding algorithms for topological quantum error correction. Due to their versatility, they have been used to decode systems with fractal logical operators, color codes, qudit topological codes, and non-Abelian systems. In this work, we develop a method of performing HDRG decoding which combines strengths of existing decoders and further improves upon them. In particular, we increase the minimal number of errors necessary for a logical error in a system of linear size L from Θ(L^{2/3}) to Ω(L^{1-ε}) for any ε > 0. We apply our algorithm to decoding D(Z_d) quantum double models and a non-Abelian anyon model with Fibonacci-like fusion rules, and show that it indeed significantly outperforms previous HDRG decoders. Furthermore, we provide the first study of continuous error correction with imperfect syndrome measurements for the D(Z_d) quantum double models. The parallelized runtime of our algorithm is poly(log L) for the perfect measurement case. In the continuous case with imperfect syndrome measurements, the averaged runtime is O(1) for Abelian systems, while continuous error correction for non-Abelian anyons stays an open problem.
NASA Astrophysics Data System (ADS)
Liu, Leibo; Chen, Yingjie; Yin, Shouyi; Lei, Hao; He, Guanghui; Wei, Shaojun
2014-07-01
A VLSI architecture for an entropy decoder, inverse quantiser and predictor is proposed in this article. This architecture is used for decoding video streams of three standards on a single chip, i.e. H.264/AVC, AVS (China National Audio Video coding Standard) and MPEG2. The proposed scheme is called MPMP (Macro-block-Parallel based Multilevel Pipeline), which is intended to improve the decoding performance to satisfy real-time requirements while maintaining a reasonable area and power consumption. Several techniques, such as slice-level pipelining, MB (Macro-Block) level pipelining, MB-level parallelism, etc., are adopted. Input and output buffers for the inverse quantiser and predictor are shared by the decoding engines for H.264, AVS and MPEG2, thereby effectively reducing the implementation overhead. Simulation shows that the decoding process consumes 512, 435 and 438 clock cycles per MB in H.264, AVS and MPEG2, respectively. Owing to the proposed techniques, the video decoder can support H.264 HP (High Profile) 1920 × 1088@30fps (frames per second) streams, AVS JP (Jizhun Profile) 1920 × 1088@41fps streams and MPEG2 MP (Main Profile) 1920 × 1088@39fps streams when exploiting a 200 MHz working frequency.
Update on Development of 360V, 28kWh Lithium-Ion Battery
NASA Technical Reports Server (NTRS)
Davies, Francis; Darcy, Eric; Cowles, Phil; Irlbeck, Brad; Weintritt, John
2005-01-01
Engineering unit submodule batteries (EUSB) for the 360V, 28kWh EAPU battery were designed and assembled by COM DEV. These submodules consist of Sony Li-Ion 18650HC cells in a 5P-41S array yielding 180V, 1.4 kWh. Tests of these and of substrings and single cells at COM DEV and at JSC under various performance and abuse conditions demonstrated that performance requirements can be met. The thermal vacuum tests demonstrated that the worst case hot condition is the design driver. Deficiencies in the initial diode protection scheme of the battery were identified as a result of test failures. Potential solutions to these deficiencies are under development and will be presented.
Motion Direction Biases and Decoding in Human Visual Cortex
Wang, Helena X.; Merriam, Elisha P.; Freeman, Jeremy
2014-01-01
Functional magnetic resonance imaging (fMRI) studies have relied on multivariate analysis methods to decode visual motion direction from measurements of cortical activity. Above-chance decoding has been commonly used to infer the motion-selective response properties of the underlying neural populations. Moreover, patterns of reliable response biases across voxels that underlie decoding have been interpreted to reflect maps of functional architecture. Using fMRI, we identified a direction-selective response bias in human visual cortex that: (1) predicted motion-decoding accuracy; (2) depended on the shape of the stimulus aperture rather than the absolute direction of motion, such that response amplitudes gradually decreased with distance from the stimulus aperture edge corresponding to motion origin; and (3) was present in V1, V2, V3, but not evident in MT+, explaining the higher motion-decoding accuracies reported previously in early visual cortex. These results demonstrate that fMRI-based motion decoding has little or no dependence on the underlying functional organization of motion selectivity. PMID:25209297
Harlaar, Nicole; Kovas, Yulia; Dale, Philip S.; Petrill, Stephen A.; Plomin, Robert
2013-01-01
Although evidence suggests that individual differences in reading and mathematics skills are correlated, this relationship has typically only been studied in relation to word decoding or global measures of reading. It is unclear whether mathematics is differentially related to word decoding and reading comprehension. The current study examined these relationships at both a phenotypic and etiological level in a population-based cohort of 5162 twin pairs at age 12. Multivariate genetic analyses of latent phenotypic factors of mathematics, word decoding and reading comprehension revealed substantial genetic and shared environmental correlations among all three domains. However, the phenotypic and genetic correlations between mathematics and reading comprehension were significantly greater than between mathematics and word decoding. Independent of mathematics, there was also evidence for genetic and nonshared environmental links between word decoding and reading comprehension. These findings indicate that word decoding and reading comprehension have partly distinct relationships with mathematics in the middle school years. PMID:24319294
Jones, Michael N.
2017-01-01
A central goal of cognitive neuroscience is to decode human brain activity—that is, to infer mental processes from observed patterns of whole-brain activation. Previous decoding efforts have focused on classifying brain activity into a small set of discrete cognitive states. To attain maximal utility, a decoding framework must be open-ended, systematic, and context-sensitive—that is, capable of interpreting numerous brain states, presented in arbitrary combinations, in light of prior information. Here we take steps towards this objective by introducing a probabilistic decoding framework based on a novel topic model—Generalized Correspondence Latent Dirichlet Allocation—that learns latent topics from a database of over 11,000 published fMRI studies. The model produces highly interpretable, spatially-circumscribed topics that enable flexible decoding of whole-brain images. Importantly, the Bayesian nature of the model allows one to “seed” decoder priors with arbitrary images and text—enabling researchers, for the first time, to generate quantitative, context-sensitive interpretations of whole-brain patterns of brain activity. PMID:29059185
Harlaar, Nicole; Kovas, Yulia; Dale, Philip S; Petrill, Stephen A; Plomin, Robert
2012-08-01
Although evidence suggests that individual differences in reading and mathematics skills are correlated, this relationship has typically only been studied in relation to word decoding or global measures of reading. It is unclear whether mathematics is differentially related to word decoding and reading comprehension. The current study examined these relationships at both a phenotypic and etiological level in a population-based cohort of 5162 twin pairs at age 12. Multivariate genetic analyses of latent phenotypic factors of mathematics, word decoding and reading comprehension revealed substantial genetic and shared environmental correlations among all three domains. However, the phenotypic and genetic correlations between mathematics and reading comprehension were significantly greater than between mathematics and word decoding. Independent of mathematics, there was also evidence for genetic and nonshared environmental links between word decoding and reading comprehension. These findings indicate that word decoding and reading comprehension have partly distinct relationships with mathematics in the middle school years.
Soft-output decoding algorithms in iterative decoding of turbo codes
NASA Technical Reports Server (NTRS)
Benedetto, S.; Montorsi, G.; Divsalar, D.; Pollara, F.
1996-01-01
In this article, we present two versions of a simplified maximum a posteriori decoding algorithm. The algorithms work in a sliding window form, like the Viterbi algorithm, and can thus be used to decode continuously transmitted sequences obtained by parallel concatenated codes, without requiring code trellis termination. A heuristic explanation is also given of how to embed the maximum a posteriori algorithms into the iterative decoding of parallel concatenated codes (turbo codes). The performances of the two algorithms are compared on the basis of a powerful rate 1/3 parallel concatenated code. Basic circuits to implement the simplified a posteriori decoding algorithm using lookup tables are proposed, along with two further approximations (linear and threshold) that eliminate the need for lookup tables at the cost of a very small penalty.
The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates
NASA Technical Reports Server (NTRS)
Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush
2008-01-01
We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.
NASA Astrophysics Data System (ADS)
Lei, Ted Chih-Wei; Tseng, Fan-Shuo
2017-07-01
This paper addresses the problem of high decoding complexity in traditional Wyner-Ziv video coding (WZVC). The key focus is the migration of two traditionally computationally complex encoder algorithms, namely motion estimation and mode decision. In order to reduce the computational burden in this process, the proposed architecture adopts the partial boundary matching algorithm and four flexible types of block mode decision at the decoder. This approach does away with the need for motion estimation and mode decision at the encoder. The experimental results show that the proposed padding block-based WZVC not only decreases decoder complexity to approximately one hundredth that of the state-of-the-art DISCOVER decoding but also outperforms the DISCOVER codec by up to 3 to 4 dB.
Yanagi, Tomohiro; Shirasawa, Kenta; Terachi, Mayuko; Isobe, Sachiko
2017-01-01
Cultivated strawberry (Fragaria × ananassa Duch.) has homoeologous chromosomes because of allo-octoploidy. For example, two homoeologous chromosomes that belong to different sub-genomes of allopolyploids have similar base sequences. Thus, when conducting de novo assembly of DNA sequences, it is difficult to determine whether these sequences are derived from the same chromosome. To avoid the difficulties associated with homoeologous chromosomes and demonstrate the possibility of sequencing allopolyploids using single chromosomes, we conducted sequence analysis using microdissected single somatic chromosomes of cultivated strawberry. Three hundred and ten somatic chromosomes of the Japanese octoploid strawberry 'Reiko' were individually selected under a light microscope using a microdissection system. DNA from 288 of the dissected chromosomes was successfully amplified using a DNA amplification kit. Using next-generation sequencing, we decoded the base sequences of the amplified DNA segments, and on the basis of mapping, we identified DNA sequences from 144 samples that were best matched to the reference genomes of the octoploid strawberry, F. × ananassa, and the diploid strawberry, F. vesca. The 144 samples were classified into seven pseudo-molecules of F. vesca. The coverage rates of the DNA sequences from the single chromosome onto all pseudo-molecular sequences varied from 3 to 29.9%. We demonstrated an efficient method for sequence analysis of allopolyploid plants using microdissected single chromosomes. On the basis of our results, we believe that whole-genome analysis of allopolyploid plants can be enhanced using methodology that employs microdissected single chromosomes.
Numerical and analytical bounds on threshold error rates for hypergraph-product codes
NASA Astrophysics Data System (ADS)
Kovalev, Alexey A.; Prabhakar, Sanjay; Dumer, Ilya; Pryadko, Leonid P.
2018-06-01
We study analytically and numerically decoding properties of finite-rate hypergraph-product quantum low density parity-check codes obtained from random (3,4)-regular Gallager codes, with a simple model of independent X and Z errors. Several nontrivial lower and upper bounds for the decodable region are constructed analytically by analyzing the properties of the homological difference, equal to minus the logarithm of the maximum-likelihood decoding probability for a given syndrome. Numerical results include an upper bound for the decodable region from specific heat calculations in associated Ising models and a minimum-weight decoding threshold of approximately 7%.
A new LDPC decoding scheme for PDM-8QAM BICM coherent optical communication system
NASA Astrophysics Data System (ADS)
Liu, Yi; Zhang, Wen-bo; Xi, Li-xia; Tang, Xian-feng; Zhang, Xiao-guang
2015-11-01
A new log-likelihood ratio (LLR) message estimation method is proposed for the polarization-division multiplexing eight quadrature amplitude modulation (PDM-8QAM) bit-interleaved coded modulation (BICM) optical communication system. The formulation of the posterior probability is theoretically analyzed, and the way to reduce the pre-decoding bit error rate (BER) of the low density parity check (LDPC) decoder for PDM-8QAM constellations is presented. Simulation results show that it outperforms the traditional scheme, i.e., the post-decoding BER of the new scheme is reduced to 50% of that of the traditional decoding algorithm.
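For readers unfamiliar with LLR message estimation, the following minimal sketch computes per-bit max-log LLRs for a generic rectangular 8QAM constellation under additive Gaussian noise; the constellation, the bit labeling, and the noise model are illustrative assumptions, not the specific PDM-8QAM formulation proposed in the paper.

```python
import numpy as np

# Rectangular 8QAM constellation with an assumed 3-bit labeling (illustrative only)
points = np.array([-3-1j, -3+1j, -1-1j, -1+1j, 1-1j, 1+1j, 3-1j, 3+1j])
labels = np.array([[0,0,0],[0,0,1],[0,1,1],[0,1,0],[1,1,0],[1,1,1],[1,0,1],[1,0,0]])

def max_log_llr(r, sigma2):
    """Per-bit max-log LLRs for one received complex sample r."""
    metrics = -np.abs(r - points) ** 2 / sigma2        # log-likelihood of each point
    llrs = []
    for b in range(labels.shape[1]):
        m0 = metrics[labels[:, b] == 0].max()          # best point with bit = 0
        m1 = metrics[labels[:, b] == 1].max()          # best point with bit = 1
        llrs.append(m0 - m1)                           # LLR = log P(bit=0)/P(bit=1)
    return np.array(llrs)

# Example: a noisy observation near the constellation point -1+1j
print(max_log_llr(-0.9 + 1.2j, sigma2=0.2))
```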
A Systolic VLSI Design of a Pipeline Reed-solomon Decoder
NASA Technical Reports Server (NTRS)
Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.; Reed, I. S.
1984-01-01
A pipeline structure of a transform decoder similar to a systolic array was developed to decode Reed-Solomon (RS) codes. An important ingredient of this design is a modified Euclidean algorithm for computing the error locator polynomial. The computation of inverse field elements is completely avoided in this modification of Euclid's algorithm. The new decoder is regular and simple, and naturally suitable for VLSI implementation.
A VLSI design of a pipeline Reed-Solomon decoder
NASA Technical Reports Server (NTRS)
Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.; Reed, I. S.
1985-01-01
A pipeline structure of a transform decoder similar to a systolic array was developed to decode Reed-Solomon (RS) codes. An important ingredient of this design is a modified Euclidean algorithm for computing the error locator polynomial. The computation of inverse field elements is completely avoided in this modification of Euclid's algorithm. The new decoder is regular and simple, and naturally suitable for VLSI implementation.
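To make the role of the Euclidean algorithm in such decoders concrete, here is a minimal sketch of solving the Reed-Solomon key equation by polynomial extended Euclid: run the algorithm on x^(2t) and the syndrome polynomial S(x) until the remainder drops below degree t, at which point the remainder is the error evaluator and the tracked multiplier of S(x) is the error locator. For brevity it works over the prime field GF(929) and uses the ordinary algorithm with explicit inverses, so it illustrates only the general idea, not the inverse-free modification or the systolic structure described above.

```python
P = 929  # a prime; RS codes over GF(929) are used e.g. in PDF417 barcodes

def trim(a):
    """Drop leading zero coefficients (polynomials are lists, lowest degree first)."""
    while len(a) > 1 and a[-1] == 0:
        a = a[:-1]
    return a

def poly_sub(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return trim([(x - y) % P for x, y in zip(a, b)])

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return trim(out)

def poly_divmod(a, b):
    q, r = [0] * max(len(a) - len(b) + 1, 1), trim(a[:])
    inv_lead = pow(b[-1], P - 2, P)          # inverse of b's leading coefficient
    while len(r) >= len(b) and r != [0]:
        d, c = len(r) - len(b), (r[-1] * inv_lead) % P
        q[d] = c
        r = poly_sub(r, poly_mul(b, [0] * d + [c]))
    return trim(q), r

def key_equation(syndrome, t):
    """Return (error locator sigma, error evaluator omega), up to a scalar."""
    r_prev, r_cur = [0] * (2 * t) + [1], trim(syndrome)   # x^(2t) and S(x)
    v_prev, v_cur = [0], [1]                              # multipliers of S(x)
    while len(r_cur) - 1 >= t:
        quot, r_next = poly_divmod(r_prev, r_cur)
        v_prev, v_cur = v_cur, poly_sub(v_prev, poly_mul(quot, v_cur))
        r_prev, r_cur = r_cur, r_next
    return v_cur, r_cur

# Example call with an arbitrary syndrome (S1..S4, lowest degree first) for t = 2
sigma, omega = key_equation([3, 1, 4, 1], t=2)
print("sigma:", sigma, "omega:", omega)
```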
Coding/decoding two-dimensional images with orbital angular momentum of light.
Chu, Jiaqi; Li, Xuefeng; Smithwick, Quinn; Chu, Daping
2016-04-01
We investigate encoding and decoding of two-dimensional information using the orbital angular momentum (OAM) of light. Spiral phase plates and phase-only spatial light modulators are used in encoding and decoding of OAM states, respectively. We show that off-axis points and spatial variables encoded with a given OAM state can be recovered through decoding with the corresponding complementary OAM state.
NASA Astrophysics Data System (ADS)
Sachs, Nicholas A.; Ruiz-Torres, Ricardo; Perreault, Eric J.; Miller, Lee E.
2016-02-01
Objective. It is quite remarkable that brain machine interfaces (BMIs) can be used to control complex movements with fewer than 100 neurons. Success may be due in part to the limited range of dynamical conditions under which most BMIs are tested. Achieving high-quality control that spans these conditions with a single linear mapping will be more challenging. Even for simple reaching movements, existing BMIs must reduce the stochastic noise of neurons by averaging the control signals over time, instead of over the many neurons that normally control movement. This forces a compromise between a decoder with dynamics allowing rapid movement and one that allows postures to be maintained with little jitter. Our current work presents a method for addressing this compromise, which may also generalize to more highly varied dynamical situations, including movements with more greatly varying speed. Approach. We have developed a system that uses two independent Wiener filters as individual components in a single decoder, one optimized for movement, and the other for postural control. We computed an LDA classifier using the same neural inputs. The decoder combined the outputs of the two filters in proportion to the likelihood assigned by the classifier to each state. Main results. We have performed online experiments with two monkeys using this neural-classifier, dual-state decoder, comparing it to a standard, single-state decoder as well as to a dual-state decoder that switched states automatically based on the cursor’s proximity to a target. The performance of both monkeys using the classifier decoder was markedly better than that of the single-state decoder and comparable to the proximity decoder. Significance. We have demonstrated a novel strategy for dealing with the need to make rapid movements while also maintaining precise cursor control when approaching and stabilizing within targets. Further gains can undoubtedly be realized by optimizing the performance of the individual movement and posture decoders.
Sachs, Nicholas A; Ruiz-Torres, Ricardo; Perreault, Eric J; Miller, Lee E
2016-02-01
It is quite remarkable that brain machine interfaces (BMIs) can be used to control complex movements with fewer than 100 neurons. Success may be due in part to the limited range of dynamical conditions under which most BMIs are tested. Achieving high-quality control that spans these conditions with a single linear mapping will be more challenging. Even for simple reaching movements, existing BMIs must reduce the stochastic noise of neurons by averaging the control signals over time, instead of over the many neurons that normally control movement. This forces a compromise between a decoder with dynamics allowing rapid movement and one that allows postures to be maintained with little jitter. Our current work presents a method for addressing this compromise, which may also generalize to more highly varied dynamical situations, including movements with more greatly varying speed. We have developed a system that uses two independent Wiener filters as individual components in a single decoder, one optimized for movement, and the other for postural control. We computed an LDA classifier using the same neural inputs. The decoder combined the outputs of the two filters in proportion to the likelihood assigned by the classifier to each state. We have performed online experiments with two monkeys using this neural-classifier, dual-state decoder, comparing it to a standard, single-state decoder as well as to a dual-state decoder that switched states automatically based on the cursor's proximity to a target. The performance of both monkeys using the classifier decoder was markedly better than that of the single-state decoder and comparable to the proximity decoder. We have demonstrated a novel strategy for dealing with the need to make rapid movements while also maintaining precise cursor control when approaching and stabilizing within targets. Further gains can undoubtedly be realized by optimizing the performance of the individual movement and posture decoders.
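A minimal sketch of the blending idea follows, on fully synthetic data: two least-squares (Wiener) filters are fit separately on 'movement' and 'posture' samples and then mixed in proportion to an LDA classifier's state probability. The variable names and the data-generating model are my own; the real system operates online on recorded neural activity.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

# Synthetic data: neural features Y and cursor velocity, with a 'movement'
# state (1) and a 'posture' state (0) in which true velocity is near zero.
n, p = 3000, 30
state = rng.integers(0, 2, n)
Y = rng.standard_normal((n, p)) + 0.5 * state[:, None]      # state-dependent activity
W_true = rng.standard_normal((p, 2))
vel = (Y @ W_true) * state[:, None] * 0.1 + 0.05 * rng.standard_normal((n, 2))

def wiener(Yt, Xt):
    """Least-squares linear (Wiener) filter from features to kinematics."""
    W, *_ = np.linalg.lstsq(Yt, Xt, rcond=None)
    return W

W_move = wiener(Y[state == 1], vel[state == 1])    # filter optimized for movement
W_post = wiener(Y[state == 0], vel[state == 0])    # filter optimized for posture

# LDA state classifier on the same neural features, as in the abstract
clf = LinearDiscriminantAnalysis().fit(Y, state)
p_move = clf.predict_proba(Y)[:, 1]

# Blend the two filter outputs in proportion to the classifier's likelihood
vel_hat = p_move[:, None] * (Y @ W_move) + (1 - p_move)[:, None] * (Y @ W_post)
print("blended-decoder MSE:", np.mean((vel_hat - vel) ** 2))
```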
To sort or not to sort: the impact of spike-sorting on neural decoding performance.
Todorova, Sonia; Sadtler, Patrick; Batista, Aaron; Chase, Steven; Ventura, Valérie
2014-10-01
Brain-computer interfaces (BCIs) are a promising technology for restoring motor ability to paralyzed patients. Spiking-based BCIs have successfully been used in clinical trials to control multi-degree-of-freedom robotic devices. Current implementations of these devices require a lengthy spike-sorting step, which is an obstacle to moving this technology from the lab to the clinic. A viable alternative is to avoid spike-sorting, treating all threshold crossings of the voltage waveform on an electrode as coming from one putative neuron. It is not known, however, how much decoding information might be lost by ignoring spike identity. We present a full analysis of the effects of spike-sorting schemes on decoding performance. Specifically, we compare how well two common decoders, the optimal linear estimator and the Kalman filter, reconstruct the arm movements of non-human primates performing reaching tasks, when receiving input from various sorting schemes. The schemes we tested included: using threshold crossings without spike-sorting; expert-sorting discarding the noise; expert-sorting, including the noise as if it were another neuron; and automatic spike-sorting using waveform features. We also decoded from a joint statistical model for the waveforms and tuning curves, which does not involve an explicit spike-sorting step. Discarding the threshold crossings that cannot be assigned to neurons degrades decoding: no spikes should be discarded. Decoding based on spike-sorted units outperforms decoding based on electrode voltage crossings: spike-sorting is useful. The four waveform-based spike-sorting methods tested here yield similar decoding efficiencies: a fast and simple method is competitive. Decoding using the joint waveform and tuning model shows promise but is not consistently superior. Our results indicate that simple automated spike-sorting performs as well as the more computationally or manually intensive methods used here. Even basic spike-sorting adds value to the low-threshold waveform-crossing methods often employed in BCI decoding.
To sort or not to sort: the impact of spike-sorting on neural decoding performance
NASA Astrophysics Data System (ADS)
Todorova, Sonia; Sadtler, Patrick; Batista, Aaron; Chase, Steven; Ventura, Valérie
2014-10-01
Objective. Brain-computer interfaces (BCIs) are a promising technology for restoring motor ability to paralyzed patients. Spiking-based BCIs have successfully been used in clinical trials to control multi-degree-of-freedom robotic devices. Current implementations of these devices require a lengthy spike-sorting step, which is an obstacle to moving this technology from the lab to the clinic. A viable alternative is to avoid spike-sorting, treating all threshold crossings of the voltage waveform on an electrode as coming from one putative neuron. It is not known, however, how much decoding information might be lost by ignoring spike identity. Approach. We present a full analysis of the effects of spike-sorting schemes on decoding performance. Specifically, we compare how well two common decoders, the optimal linear estimator and the Kalman filter, reconstruct the arm movements of non-human primates performing reaching tasks, when receiving input from various sorting schemes. The schemes we tested included: using threshold crossings without spike-sorting; expert-sorting discarding the noise; expert-sorting, including the noise as if it were another neuron; and automatic spike-sorting using waveform features. We also decoded from a joint statistical model for the waveforms and tuning curves, which does not involve an explicit spike-sorting step. Main results. Discarding the threshold crossings that cannot be assigned to neurons degrades decoding: no spikes should be discarded. Decoding based on spike-sorted units outperforms decoding based on electrode voltage crossings: spike-sorting is useful. The four waveform-based spike-sorting methods tested here yield similar decoding efficiencies: a fast and simple method is competitive. Decoding using the joint waveform and tuning model shows promise but is not consistently superior. Significance. Our results indicate that simple automated spike-sorting performs as well as the more computationally or manually intensive methods used here. Even basic spike-sorting adds value to the low-threshold waveform-crossing methods often employed in BCI decoding.
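One of the two decoders compared above, the Kalman filter, can be summarized in a short sketch. Fitting the model matrices by least squares and the synthetic spike-count data below are my own illustrative assumptions; the study fits comparable models to recorded threshold crossings or sorted units.

```python
import numpy as np

def fit_kalman(X, Z):
    """Fit linear dynamics and observation models from training kinematics X
    (T x d) and binned spike counts or threshold crossings Z (T x n)."""
    A, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)          # x_t from x_{t-1}
    W = np.cov((X[1:] - X[:-1] @ A).T)                          # process noise
    H, *_ = np.linalg.lstsq(X, Z, rcond=None)                   # z_t from x_t
    Q = np.cov((Z - X @ H).T)                                   # observation noise
    return A.T, W, H.T, Q

def kalman_decode(Z, A, W, H, Q, x0, P0):
    """Causal Kalman-filter reconstruction of the kinematic state from Z."""
    x, P, out = x0, P0, []
    for z in Z:
        x, P = A @ x, A @ P @ A.T + W                           # predict
        S = H @ P @ H.T + Q
        K = P @ H.T @ np.linalg.inv(S)                          # Kalman gain
        x = x + K @ (z - H @ x)                                 # update
        P = (np.eye(len(x)) - K @ H) @ P
        out.append(x)
    return np.array(out)

# Usage sketch on synthetic data (stand-in for sorted units or threshold crossings)
rng = np.random.default_rng(2)
X = np.cumsum(rng.standard_normal((500, 2)), axis=0) * 0.1     # smooth 2D kinematics
Z = X @ rng.standard_normal((2, 40)) + rng.standard_normal((500, 40))
A, W, H, Q = fit_kalman(X, Z)
X_hat = kalman_decode(Z, A, W, H, Q, x0=X[0], P0=np.eye(2))
print("decode RMSE:", np.sqrt(np.mean((X_hat - X) ** 2)))
```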
Spatial epigenetics: linking nuclear structure and function in higher eukaryotes.
Jackson, Dean A
2010-09-20
Eukaryotic cells are defined by the genetic information that is stored in their DNA. To function, this genetic information must be decoded. In doing this, the information encoded in DNA is copied first into RNA, during RNA transcription. Primary RNA transcripts are generated within transcription factories, where they are also processed into mature mRNAs, which then pass to the cytoplasm. In the cytoplasm these mRNAs can finally be translated into protein in order to express the genetic information as a functional product. With only rare exceptions, the cells of an individual multicellular eukaryote contain identical genetic information. However, as different genes must be expressed in different cell types to define the structure and function of individual tissues, it is clear that mechanisms must have evolved to regulate gene expression. In higher eukaryotes, mechanisms that regulate the interaction of DNA with the sites where nuclear functions are performed provide one such layer of regulation. In this chapter, I evaluate how a detailed understanding of nuclear structure and chromatin dynamics are beginning to reveal how spatial mechanisms link chromatin structure and function. As these mechanisms operate to modulate the genetic information in DNA, the regulation of chromatin function by nuclear architecture defines the concept of 'spatial epigenetics'.
NASA Technical Reports Server (NTRS)
Righetti, Pier Giorgio; Casale, Elena; Carter, Daniel; Snyder, Robert S.; Wenisch, Elisabeth; Faupel, Michel
1990-01-01
Recombinant-DNA (deoxyribonucleic acid) (r-DNA) proteins, produced in large quantities for human consumption, are now available in sufficient amounts for crystal growth. Crystallographic analysis is the only method now available for defining the atomic arrangements within complex biological molecules and decoding, e.g., the structure of the active site. Growing protein crystals in microgravity has become an important aspect of biology in space, since crystals that are large enough and of sufficient quality to permit complete structure determinations are usually obtained. However, even small amounts of impurities in a protein preparation are anathema for the growth of a regular crystal lattice. A multicompartment electrolyzer with isoelectric Immobiline membranes, able to purify large quantities of r-DNA proteins, is described. The electrolyzer consists of a stack of flow cells delimited by membranes of very precise isoelectric point (pI) and very high buffering power; each membrane consists of polyacrylamide supported by glass fiber filters containing Immobiline buffers and titrants that uniquely define its pI value, and is able to titrate all proteins tangent to or crossing it. By properly selecting the pI values of two membranes delimiting a flow chamber, a single protein can be kept isoelectric in a single flow chamber and thus be purified to homogeneity (by the most stringent criterion, charge homogeneity).
Code of Federal Regulations, 2011 CFR
2011-10-01
... time periods expire. (4) Display and logging. A visual message shall be developed from any valid header... input. (8) Decoder Programming. Access to decoder programming shall be protected by a lock or other...
On the error probability of general tree and trellis codes with applications to sequential decoding
NASA Technical Reports Server (NTRS)
Johannesson, R.
1973-01-01
An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.
Viterbi decoding for satellite and space communication.
NASA Technical Reports Server (NTRS)
Heller, J. A.; Jacobs, I. M.
1971-01-01
Convolutional coding and Viterbi decoding, along with binary phase-shift keyed modulation, is presented as an efficient system for reliable communication on power limited satellite and space channels. Performance results, obtained theoretically and through computer simulation, are given for optimum short constraint length codes for a range of code constraint lengths and code rates. System efficiency is compared for hard receiver quantization and 4 and 8 level soft quantization. The effects on performance of varying certain parameters relevant to decoder complexity and cost are examined. Quantitative performance degradation due to imperfect carrier phase coherence is evaluated and compared to that of an uncoded system. As an example of decoder performance versus complexity, a recently implemented 2-Mbit/sec constraint length 7 Viterbi decoder is discussed. Finally, a comparison is made between Viterbi and sequential decoding in terms of suitability to various system requirements.
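As a concrete miniature of convolutional coding and Viterbi decoding, the sketch below encodes a short message with the small rate-1/2, constraint-length-3 (7,5) code and decodes it with hard decisions. The code and the hard-decision metric are chosen for brevity; the systems described above use longer constraint lengths and soft (multi-level quantized) decisions.

```python
# Hard-decision Viterbi decoding of the rate-1/2, constraint-length-3
# convolutional code with generator polynomials (7, 5) in octal.
G = [0b111, 0b101]

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                     # current input + 2 memory bits
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1                           # shift register: keep newest two bits
    return out

def viterbi(received):
    n_states, INF = 4, float("inf")
    metric = [0] + [INF] * (n_states - 1)          # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                expected = [bin(reg & g).count("1") & 1 for g in G]
                branch = sum(e != x for e, x in zip(expected, r))   # Hamming metric
                ns = reg >> 1
                if metric[s] + branch < new_metric[ns]:
                    new_metric[ns] = metric[s] + branch
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]

msg = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]               # trailing zeros flush the encoder
coded = encode(msg)
coded[3] ^= 1                                      # inject one channel error
print(viterbi(coded) == msg)                       # True: the error is corrected
```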
Nicola, Wilten; Tripp, Bryan; Scott, Matthew
2016-01-01
A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks. PMID:26973503
Nicola, Wilten; Tripp, Bryan; Scott, Matthew
2016-01-01
A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks.
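For context, the baseline NEF decoder computation that the analytical decoders above replace is a regularized least-squares fit over sampled tuning curves. The sketch below uses rectified-linear tuning curves and an arbitrary regularizer as stand-ins; it shows only the matrix-inversion step referred to in the abstract, not the paper's scale-invariant closed form.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 200                                    # neurons
x = np.linspace(-1, 1, 400)                # sampled values of the represented variable
gains = rng.uniform(0.5, 2.0, N)
biases = rng.uniform(-1.0, 1.0, N)
enc = rng.choice([-1.0, 1.0], N)           # encoders (preferred directions)

# Rectified-linear rate curves as a simple stand-in for neuron tuning curves
A = np.maximum(0.0, gains * (x[:, None] * enc) + biases)       # (samples, N)

target = x ** 2                            # function we want the population to decode
reg = 0.1 * N * np.mean(A ** 2)            # ad hoc ridge regularizer
d = np.linalg.solve(A.T @ A + reg * np.eye(N), A.T @ target)   # decoders

approx = A @ d
print("RMS error of decoded x^2:", np.sqrt(np.mean((approx - target) ** 2)))
```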
Horikawa, Tomoyasu; Kamitani, Yukiyasu
2017-01-01
Dreaming is generally thought to be generated by spontaneous brain activity during sleep with patterns common to waking experience. This view is supported by a recent study demonstrating that dreamed objects can be predicted from brain activity during sleep using statistical decoders trained with stimulus-induced brain activity. However, it remains unclear whether and how visual image features associated with dreamed objects are represented in the brain. In this study, we used a deep neural network (DNN) model for object recognition as a proxy for hierarchical visual feature representation, and DNN features for dreamed objects were analyzed with brain decoding of fMRI data collected during dreaming. The decoders were first trained with stimulus-induced brain activity labeled with the feature values of the stimulus image from multiple DNN layers. The decoders were then used to decode DNN features from the dream fMRI data, and the decoded features were compared with the averaged features of each object category calculated from a large-scale image database. We found that the feature values decoded from the dream fMRI data positively correlated with those associated with dreamed object categories at mid- to high-level DNN layers. Using the decoded features, the dreamed object category could be identified at above-chance levels by matching them to the averaged features for candidate categories. The results suggest that dreaming recruits hierarchical visual feature representations associated with objects, which may support phenomenal aspects of dream experience.
Visual coding with a population of direction-selective neurons.
Fiscella, Michele; Franke, Felix; Farrow, Karl; Müller, Jan; Roska, Botond; da Silveira, Rava Azeredo; Hierlemann, Andreas
2015-10-01
The brain decodes the visual scene from the action potentials of ∼20 retinal ganglion cell types. Among the retinal ganglion cells, direction-selective ganglion cells (DSGCs) encode motion direction. Several studies have focused on the encoding or decoding of motion direction by recording multiunit activity, mainly in the visual cortex. In this study, we simultaneously recorded from all four types of ON-OFF DSGCs of the rabbit retina using a microelectronics-based high-density microelectrode array (HDMEA) and decoded their concerted activity using probabilistic and linear decoders. Furthermore, we investigated how the modification of stimulus parameters (velocity, size, angle of moving object) and the use of different tuning curve fits influenced decoding precision. Finally, we simulated ON-OFF DSGC activity, based on real data, in order to understand how tuning curve widths and the angular distribution of the cells' preferred directions influence decoding performance. We found that probabilistic decoding strategies outperformed, on average, linear methods and that decoding precision was robust to changes in stimulus parameters such as velocity. The removal of noise correlations among cells, by random shuffling trials, caused a drop in decoding precision. Moreover, we found that tuning curves are broad in order to minimize large errors at the expense of a higher average error, and that the retinal direction-selective system would not substantially benefit, on average, from having more than four types of ON-OFF DSGCs or from a perfect alignment of the cells' preferred directions.
Visual coding with a population of direction-selective neurons
Farrow, Karl; Müller, Jan; Roska, Botond; Azeredo da Silveira, Rava; Hierlemann, Andreas
2015-01-01
The brain decodes the visual scene from the action potentials of ∼20 retinal ganglion cell types. Among the retinal ganglion cells, direction-selective ganglion cells (DSGCs) encode motion direction. Several studies have focused on the encoding or decoding of motion direction by recording multiunit activity, mainly in the visual cortex. In this study, we simultaneously recorded from all four types of ON-OFF DSGCs of the rabbit retina using a microelectronics-based high-density microelectrode array (HDMEA) and decoded their concerted activity using probabilistic and linear decoders. Furthermore, we investigated how the modification of stimulus parameters (velocity, size, angle of moving object) and the use of different tuning curve fits influenced decoding precision. Finally, we simulated ON-OFF DSGC activity, based on real data, in order to understand how tuning curve widths and the angular distribution of the cells' preferred directions influence decoding performance. We found that probabilistic decoding strategies outperformed, on average, linear methods and that decoding precision was robust to changes in stimulus parameters such as velocity. The removal of noise correlations among cells, by random shuffling trials, caused a drop in decoding precision. Moreover, we found that tuning curves are broad in order to minimize large errors at the expense of a higher average error, and that the retinal direction-selective system would not substantially benefit, on average, from having more than four types of ON-OFF DSGCs or from a perfect alignment of the cells' preferred directions. PMID:26289471
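The probabilistic-versus-linear comparison can be mimicked on synthetic data. Below, four cosine-tuned Poisson units (a rough stand-in for the four ON-OFF DSGC types; all rates and tuning widths are invented) are decoded with a maximum-likelihood decoder and a population-vector decoder. The simulation is illustrative and not calibrated to the retinal recordings.

```python
import numpy as np

rng = np.random.default_rng(3)

prefs = np.deg2rad([0, 90, 180, 270])              # four preferred directions
base, mod, dt = 5.0, 20.0, 0.2                     # baseline Hz, modulation Hz, window s

def rates(theta):
    """Expected spike counts of the four units for motion direction theta."""
    return (base + mod * np.maximum(0.0, np.cos(theta - prefs))) * dt

def ml_decode(counts, grid=np.linspace(0, 2 * np.pi, 360, endpoint=False)):
    """Poisson maximum-likelihood decoder over a grid of candidate directions."""
    lam = np.array([rates(t) for t in grid])       # (n_grid, 4) expected counts
    loglik = (counts * np.log(lam) - lam).sum(axis=1)
    return grid[np.argmax(loglik)]

def pv_decode(counts):
    """Linear population-vector decoder."""
    vec = (counts[:, None] * np.stack([np.cos(prefs), np.sin(prefs)], axis=1)).sum(0)
    return np.arctan2(vec[1], vec[0]) % (2 * np.pi)

errs_ml, errs_pv = [], []
for _ in range(2000):
    theta = rng.uniform(0, 2 * np.pi)
    counts = rng.poisson(rates(theta))
    for est, errs in ((ml_decode(counts), errs_ml), (pv_decode(counts), errs_pv)):
        errs.append(abs(np.angle(np.exp(1j * (est - theta)))))   # wrapped error
print("median |error| (deg)  ML:", np.degrees(np.median(errs_ml)),
      " PV:", np.degrees(np.median(errs_pv)))
```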
The Human Genome Project: applications in the diagnosis and treatment of neurologic disease.
Evans, G A
1998-10-01
The Human Genome Project (HGP), an international program to decode the entire DNA sequence of the human genome in 15 years, represents the largest biological experiment ever conducted. This set of information will contain the blueprint for the construction and operation of a human being. While the primary driving force behind the genome project is the potential to vastly expand the amount of genetic information available for biomedical research, the ramifications for other fields of study in biological research, the biotechnology and pharmaceutical industry, our understanding of evolution, effects on agriculture, and implications for bioethics are likely to be profound.
NASA Astrophysics Data System (ADS)
Wu, Chia-Hua; Lee, Suiang-Shyan; Lin, Ja-Chen
2017-06-01
This all-in-one hiding method creates two transparencies that have several decoding options: visual decoding with or without translation flipping, and computer decoding. In visual decoding, two less-important (or fake) binary secret images S1 and S2 can be revealed. S1 is viewed by the direct stacking of the two transparencies. S2 is viewed by flipping one transparency and translating the other to a specified coordinate before stacking. Finally, important/true secret files can be decrypted by a computer using the information extracted from the transparencies. The encoding process to hide this information includes the translated-flip visual cryptography, block types, the ways to use polynomial-style sharing, and a linear congruential generator. If a thief obtained both transparencies, which are stored in distinct places, he would still need the key values used in computer decoding to recover the true secrets after viewing S1 and/or S2 by stacking. More likely, the thief would simply try other kinds of stacking and give up searching for further secrets, because computer decoding is entirely different from stacking decoding. Unlike traditional image hiding that uses images as host media, our method hides fine gray-level images in binary transparencies. Thus, our host media are transparencies. Comparisons and analysis are provided.
Multiscale decoding for reliable brain-machine interface performance over time.
Han-Lin Hsieh; Wong, Yan T; Pesaran, Bijan; Shanechi, Maryam M
2017-07-01
Recordings from invasive implants can degrade over time, resulting in a loss of spiking activity for some electrodes. For brain-machine interfaces (BMI), such a signal degradation lowers control performance. Achieving reliable performance over time is critical for BMI clinical viability. One approach to improve BMI longevity is to simultaneously use spikes and other recording modalities such as local field potentials (LFP), which are more robust to signal degradation over time. We have developed a multiscale decoder that can simultaneously model the different statistical profiles of multi-scale spike/LFP activity (discrete spikes vs. continuous LFP). This decoder can also run at multiple time-scales (millisecond for spikes vs. tens of milliseconds for LFP). Here, we validate the multiscale decoder for estimating the movement of 7 major upper-arm joint angles in a non-human primate (NHP) during a 3D reach-to-grasp task. The multiscale decoder uses motor cortical spike/LFP recordings as its input. We show that the multiscale decoder can improve decoding accuracy by adding information from LFP to spikes, while running at the fast millisecond time-scale of the spiking activity. Moreover, this improvement is achieved using relatively few LFP channels, demonstrating the robustness of the approach. These results suggest that using multiscale decoders has the potential to improve the reliability and longevity of BMIs.
Decoding the Semantic Content of Natural Movies from Human Brain Activity
Huth, Alexander G.; Lee, Tyler; Nishimoto, Shinji; Bilenko, Natalia Y.; Vu, An T.; Gallant, Jack L.
2016-01-01
One crucial test for any quantitative model of the brain is to show that the model can be used to accurately decode information from evoked brain activity. Several recent neuroimaging studies have decoded the structure or semantic content of static visual images from human brain activity. Here we present a decoding algorithm that makes it possible to decode detailed information about the object and action categories present in natural movies from human brain activity signals measured by functional MRI. Decoding is accomplished using a hierarchical logistic regression (HLR) model that is based on labels that were manually assigned from the WordNet semantic taxonomy. This model makes it possible to simultaneously decode information about both specific and general categories, while respecting the relationships between them. Our results show that we can decode the presence of many object and action categories from averaged blood-oxygen level-dependent (BOLD) responses with a high degree of accuracy (area under the ROC curve > 0.9). Furthermore, we used this framework to test whether semantic relationships defined in the WordNet taxonomy are represented the same way in the human brain. This analysis showed that hierarchical relationships between general categories and atypical examples, such as organism and plant, did not seem to be reflected in representations measured by BOLD fMRI. PMID:27781035
On the decoding process in ternary error-correcting output codes.
Escalera, Sergio; Pujol, Oriol; Radeva, Petia
2010-01-01
A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-Correcting Output Codes (ECOC) represent a successful framework to deal with these types of problems. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows a given classifier to ignore some classes. However, there are no proper studies that analyze the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies over a set of UCI Machine Learning Repository data sets and on a real traffic sign categorization problem. The experimental results show that, following the new decoding strategies, the performance of the ECOC design is significantly improved.
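A small sketch of ternary ECOC decoding clarifies where the zero symbol enters. The toy code matrix and the exponential loss below are my own choices; the sketch contrasts an attenuated Hamming-style rule that skips zero entries with a loss-based rule in which zero entries contribute a constant loss, which is one of the biases the paper analyzes.

```python
import numpy as np

# Ternary ECOC code matrix: rows are classes, columns are binary problems;
# entries in {-1, 0, +1}, where 0 means the class is ignored by that classifier.
M = np.array([[+1, +1,  0],
              [-1,  0, +1],
              [ 0, -1, -1]])                 # 3 classes x 3 binary problems (toy example)

def hamming_decode(pred):
    """pred: hard classifier outputs in {-1, +1}; count disagreements on non-zero entries."""
    mask = M != 0
    dist = ((M != np.sign(pred)) & mask).sum(axis=1) / mask.sum(axis=1)
    return int(np.argmin(dist))

def loss_decode(margins, loss=lambda z: np.exp(-z)):
    """margins: real-valued classifier outputs; exponential margin loss as an example.
    Zero entries of M contribute loss(0) = 1, a constant per 'do not care' position."""
    return int(np.argmin(loss(M * margins).sum(axis=1)))

margins = np.array([0.9, -0.3, 1.2])         # hypothetical classifier margins
print(hamming_decode(np.sign(margins)), loss_decode(margins))
```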
NASA Technical Reports Server (NTRS)
Layland, J. W.
1974-01-01
An approximate analysis of the effect of a noisy carrier reference on the performance of sequential decoding is presented. The analysis uses previously developed techniques for evaluating noisy reference performance for medium-rate uncoded communications adapted to sequential decoding for data rates of 8 to 2048 bits/s. In estimating the 10^-4 deletion probability thresholds for Helios, the model agrees with experimental data to within the experimental tolerances. The computational problem involved in sequential decoding, carrier loop effects, the main characteristics of the medium-rate model, modeled decoding performance, and perspectives on future work are discussed.
Testing interconnected VLSI circuits in the Big Viterbi Decoder
NASA Technical Reports Server (NTRS)
Onyszchuk, I. M.
1991-01-01
The Big Viterbi Decoder (BVD) is a powerful error-correcting hardware device for the Deep Space Network (DSN), in support of the Galileo and Comet Rendezvous Asteroid Flyby (CRAF)/Cassini Missions. Recently, a prototype was completed and run successfully at 400,000 or more decoded bits per second. This prototype is a complex digital system whose core arithmetic unit consists of 256 identical very large scale integration (VLSI) gate-array chips, 16 on each of 16 identical boards which are connected through a 28-layer, printed-circuit backplane using 4416 wires. Special techniques were developed for debugging, testing, and locating faults inside individual chips, on boards, and within the entire decoder. The methods are based upon hierarchical structure in the decoder, and require that chips or boards be wired themselves as Viterbi decoders. The basic procedure consists of sending a small set of known, very noisy channel symbols through a decoder, and matching observables against values computed by a software simulation. Also, tests were devised for finding open and short-circuited wires which connect VLSI chips on the boards and through the backplane.
State-space decoding of primary afferent neuron firing rates
NASA Astrophysics Data System (ADS)
Wagenaar, J. B.; Ventura, V.; Weber, D. J.
2011-02-01
Kinematic state feedback is important for neuroprostheses to generate stable and adaptive movements of an extremity. State information, represented in the firing rates of populations of primary afferent (PA) neurons, can be recorded at the level of the dorsal root ganglia (DRG). Previous work in cats showed the feasibility of using DRG recordings to predict the kinematic state of the hind limb using reverse regression. Although accurate decoding results were attained, reverse regression does not make efficient use of the information embedded in the firing rates of the neural population. In this paper, we present decoding results based on state-space modeling, and show that it is a more principled and more efficient method for decoding the firing rates in an ensemble of PA neurons. In particular, we show that we can extract confounded information from neurons that respond to multiple kinematic parameters, and that including velocity components in the firing rate models significantly increases the accuracy of the decoded trajectory. We show that, on average, state-space decoding is twice as efficient as reverse regression for decoding joint and endpoint kinematics.
Utilizing sensory prediction errors for movement intention decoding: A new methodology
Nakamura, Keigo; Ando, Hideyuki
2018-01-01
We propose a new methodology for decoding movement intentions of humans. This methodology is motivated by the well-documented ability of the brain to predict sensory outcomes of self-generated and imagined actions using so-called forward models. We propose to subliminally stimulate the sensory modality corresponding to a user’s intended movement, and decode a user’s movement intention from his electroencephalography (EEG), by decoding for prediction errors—whether the sensory prediction corresponding to a user’s intended movement matches the subliminal sensory stimulation we induce. We tested our proposal in a binary wheelchair turning task in which users thought of turning their wheelchair either left or right. We stimulated their vestibular system subliminally, toward either the left or the right direction, using a galvanic vestibular stimulator and show that the decoding for prediction errors from the EEG can radically improve movement intention decoding performance. We observed an 87.2% median single-trial decoding accuracy across tested participants, with zero user training, within 96 ms of the stimulation, and with no additional cognitive load on the users because the stimulation was subliminal. PMID:29750195
Contini, Erika W; Wardle, Susan G; Carlson, Thomas A
2017-10-01
Visual object recognition is a complex, dynamic process. Multivariate pattern analysis methods, such as decoding, have begun to reveal how the brain processes complex visual information. Recently, temporal decoding methods for EEG and MEG have offered the potential to evaluate the temporal dynamics of object recognition. Here we review the contribution of M/EEG time-series decoding methods to understanding visual object recognition in the human brain. Consistent with the current understanding of the visual processing hierarchy, low-level visual features dominate decodable object representations early in the time-course, with more abstract representations related to object category emerging later. A key finding is that the time-course of object processing is highly dynamic and rapidly evolving, with limited temporal generalisation of decodable information. Several studies have examined the emergence of object category structure, and we consider to what degree category decoding can be explained by sensitivity to low-level visual features. Finally, we evaluate recent work attempting to link human behaviour to the neural time-course of object processing.
ERIC Educational Resources Information Center
Steacy, Laura M.; Elleman, Amy M.; Lovett, Maureen W.; Compton, Donald L.
2016-01-01
In English, gains in decoding skill do not map directly onto increases in word reading. However, beyond the Self-Teaching Hypothesis, little is known about the transfer of decoding skills to word reading. In this study, we offer a new approach to testing specific decoding elements on transfer to word reading. To illustrate, we modeled word-reading…
Comparison of memory thresholds for planar qudit geometries
NASA Astrophysics Data System (ADS)
Marks, Jacob; Jochym-O'Connor, Tomas; Gheorghiu, Vlad
2017-11-01
We introduce and analyze a new type of decoding algorithm called general color clustering, based on renormalization group methods, to be used in qudit color codes. The performance of this decoder is analyzed under a generalized bit-flip error model, and is used to obtain the first memory threshold estimates for qudit 6-6-6 color codes. The proposed decoder is compared with similar decoding schemes for qudit surface codes as well as the current leading qubit decoders for both sets of codes. We find that, as with surface codes, clustering performs sub-optimally for qubit color codes, giving a threshold of 5.6% compared to the 8.0% obtained through surface projection decoding methods. However, the threshold rate increases by up to 112% for large qudit dimensions, plateauing around 11.9%. All the analysis is performed using QTop, a new open-source software for simulating and visualizing topological quantum error correcting codes.
A high data rate universal lattice decoder on FPGA
NASA Astrophysics Data System (ADS)
Ma, Jing; Huang, Xinming; Kura, Swapna
2005-06-01
This paper presents the architecture design of a high data rate universal lattice decoder for MIMO channels on an FPGA platform. A Pohst-strategy-based lattice decoding algorithm is modified in this paper to reduce the complexity of the closest lattice point search. The data dependency of the improved algorithm is examined and a parallel and pipelined architecture is developed, with the iterative decoding function on the FPGA and the division-intensive channel matrix preprocessing on a DSP. Simulation results demonstrate that the improved lattice decoding algorithm provides a better bit error rate and fewer iterations compared with the original algorithm. The system prototype of the decoder shows that it supports data rates up to 7 Mbit/s on a Virtex2-1000 FPGA, which is about 8 times faster than the original algorithm on an FPGA platform and two orders of magnitude better than its implementation on a DSP platform.
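As a reference point for what such a lattice decoder computes, the following naive sketch finds the closest lattice point for a tiny MIMO system by exhaustively testing small integer perturbations around the rounded least-squares solution. This brute-force box search is only a stand-in; a Pohst-style sphere enumeration prunes the same search space far more efficiently and is what the FPGA architecture above accelerates.

```python
import numpy as np
from itertools import product

def closest_point(H, y, radius=1):
    """Return the integer vector s minimizing ||y - H s|| over a box of
    half-width `radius` around the rounded least-squares (zero-forcing) estimate."""
    s_zf = np.linalg.lstsq(H, y, rcond=None)[0]
    center = np.round(s_zf).astype(int)
    best, best_d = None, np.inf
    offsets = range(-radius, radius + 1)
    for delta in product(offsets, repeat=len(center)):
        s = center + np.array(delta)
        d = np.linalg.norm(y - H @ s)
        if d < best_d:
            best, best_d = s, d
    return best

H = np.array([[1.0, 0.4], [0.3, 1.1]])     # toy 2x2 channel matrix
s_true = np.array([2, -1])
y = H @ s_true + 0.05 * np.random.default_rng(4).standard_normal(2)
print(closest_point(H, y))                 # expected to recover [ 2 -1] at this low noise
```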
NASA Astrophysics Data System (ADS)
Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong
2014-09-01
In general, context-based adaptive variable length coding (CAVLC) decoding in H.264/AVC standard requires frequent access to the unstructured variable length coding tables (VLCTs) and significant memory accesses are consumed. Heavy memory accesses will cause high power consumption and time delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding by using a program instead of all the VLCTs. The decoded codeword from VLCTs can be obtained without any table look-up and memory access. The experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm shows a better performance compared with conventional CAVLC decoding, such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.
Error-correction coding for digital communications
NASA Astrophysics Data System (ADS)
Clark, G. C., Jr.; Cain, J. B.
This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
Soft-Decision Decoding of Binary Linear Block Codes Based on an Iterative Search Algorithm
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Moorthy, H. T.
1997-01-01
This correspondence presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords one at a time using a set of test error patterns that are constructed based on the reliability information of the received symbols. When a candidate codeword is generated, it is tested based on an optimality condition. If it satisfies the optimality condition, then it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and search continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from the optimal maximum-likelihood decoding with a significant reduction in decoding complexity compared with the Viterbi decoding based on the full trellis diagram of the codes.
Müller-Putz, G R; Schwarz, A; Pereira, J; Ofner, P
2016-01-01
In this chapter, we give an overview of the Graz-BCI research, from the classic motor imagery detection to complex movement intentions decoding. We start by describing the classic motor imagery approach, its application in tetraplegic end users, and the significant improvements achieved using coadaptive brain-computer interfaces (BCIs). These strategies have the drawback of not mirroring the way one plans a movement. To achieve a more natural control-and to reduce the training time-the movements decoded by the BCI need to be closely related to the user's intention. Within this natural control, we focus on the kinematic level, where movement direction and hand position or velocity can be decoded from noninvasive recordings. First, we review movement execution decoding studies, where we describe the decoding algorithms, their performance, and associated features. Second, we describe the major findings in movement imagination decoding, where we emphasize the importance of estimating the sources of the discriminative features. Third, we introduce movement target decoding, which could allow the determination of the target without knowing the exact movement-by-movement details. Aside from the kinematic level, we also address the goal level, which contains relevant information on the upcoming action. Focusing on hand-object interaction and action context dependency, we discuss the possible impact of some recent neurophysiological findings in the future of BCI control. Ideally, the goal and the kinematic decoding would allow an appropriate matching of the BCI to the end users' needs, overcoming the limitations of the classic motor imagery approach. © 2016 Elsevier B.V. All rights reserved.
Multiformat decoder for a DSP-based IP set-top box
NASA Astrophysics Data System (ADS)
Pescador, F.; Garrido, M. J.; Sanz, C.; Juárez, E.; Samper, D.; Antoniello, R.
2007-05-01
Internet Protocol Set-Top Boxes (IP STBs) based on single-processor architectures have recently been introduced in the market. In this paper, the implementation of an MPEG-4 SP/ASP video decoder for a multi-format IP STB based on a TMS320DM641 DSP is presented. An initial decoder for a PC platform was fully tested and ported to the DSP. Starting from this code, an optimization process achieved a 90% speedup, allowing real-time MPEG-4 SP/ASP decoding. The MPEG-4 decoder has been integrated into an IP STB and tested in a real environment using DVD movies and TV channels with excellent results.
NASA Astrophysics Data System (ADS)
Bross, Benjamin; Alvarez-Mesa, Mauricio; George, Valeri; Chi, Chi Ching; Mayer, Tobias; Juurlink, Ben; Schierl, Thomas
2013-09-01
The new High Efficiency Video Coding Standard (HEVC) was finalized in January 2013. Compared to its predecessor H.264 / MPEG4-AVC, this new international standard is able to reduce the bitrate by 50% for the same subjective video quality. This paper investigates decoder optimizations that are needed to achieve HEVC real-time software decoding on a mobile processor. It is shown that HEVC real-time decoding up to high definition video is feasible using instruction extensions of the processor while decoding 4K ultra high definition video in real-time requires additional parallel processing. For parallel processing, a picture-level parallel approach has been chosen because it is generic and does not require bitstreams with special indication.
Approximate maximum likelihood decoding of block codes
NASA Technical Reports Server (NTRS)
Greenberger, H. J.
1979-01-01
Approximate maximum likelihood decoding algorithms, based upon selecting a small set of candidate code words with the aid of the estimated probability of error of each received symbol, can give performance close to optimum with a reasonable amount of computation. By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed that can handle codes with better performance than those presently in use, yet without requiring an unreasonable amount of computation. The discussion of the details and tradeoffs of presently known efficient optimum and near-optimum decoding algorithms leads, naturally, to the one which embodies the best features of all of them.
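For contrast with such reduced-search schemes, the sketch below implements the exhaustive soft-decision maximum-likelihood rule on a small (7,4) Hamming code: correlate the received soft values with every codeword and keep the best. The generator matrix and noise level are illustrative assumptions, not taken from the report.

```python
# Exhaustive soft-decision ML decoding of a (7,4) Hamming code: the optimum
# rule that candidate-list schemes approximate by searching fewer codewords.
import itertools
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0, 1]])   # one systematic generator of a (7,4) Hamming code

codebook = np.array([(np.array(m) @ G) % 2
                     for m in itertools.product([0, 1], repeat=4)])   # all 16 codewords

def ml_decode(r):
    """Pick the codeword maximizing correlation with soft values r (BPSK: 0 -> +1, 1 -> -1)."""
    bpsk = 1 - 2 * codebook            # shape (16, 7)
    metrics = bpsk @ r                 # correlation of r with every codeword
    return codebook[np.argmax(metrics)]

msg = np.array([1, 0, 1, 1])
c = (msg @ G) % 2
r = (1 - 2 * c) + 0.6 * np.random.default_rng(1).standard_normal(7)   # noisy BPSK observation
print(ml_decode(r), c)
```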
Miniaturization of flight deflection measurement system
NASA Technical Reports Server (NTRS)
Fodale, Robert (Inventor); Hampton, Herbert R. (Inventor)
1990-01-01
A flight deflection measurement system is disclosed including a hybrid microchip of a receiver/decoder. The hybrid microchip decoder is mounted piggy back on the miniaturized receiver and forms an integral unit therewith. The flight deflection measurement system employing the miniaturized receiver/decoder can be used in a wind tunnel. In particular, the miniaturized receiver/decoder can be employed in a spin measurement system due to its small size and can retain already established control surface actuation functions.
Fast and Flexible Successive-Cancellation List Decoders for Polar Codes
NASA Astrophysics Data System (ADS)
Hashemi, Seyyed Ali; Condo, Carlo; Gross, Warren J.
2017-11-01
Polar codes have gained a significant amount of attention during the past few years and have been selected as a coding scheme for the next generation of mobile broadband standard. Among decoding schemes, successive-cancellation list (SCL) decoding provides a reasonable trade-off between error-correction performance and hardware implementation complexity when used to decode polar codes, at the cost of limited throughput. The simplified SCL (SSCL) and its extension SSCL-SPC increase the speed of decoding by removing redundant calculations when encountering particular information and frozen bit patterns (rate-one and single parity check codes), while keeping the error-correction performance unaltered. In this paper, we improve SSCL and SSCL-SPC by proving that the list size imposes a specific number of bit estimations required to decode rate-one and single parity check codes. Thus, the number of estimations can be limited while guaranteeing exactly the same error-correction performance as if all bits of the code were estimated. We call the new decoding algorithms Fast-SSCL and Fast-SSCL-SPC. Moreover, we show that the number of bit estimations in a practical application can be tuned to achieve desirable speed, while keeping the error-correction performance almost unchanged. Hardware architectures implementing both algorithms are then described and implemented: it is shown that our design can achieve 1.86 Gb/s throughput, higher than the best state-of-the-art decoders.
Overview of Decoding across the Disciplines
ERIC Educational Resources Information Center
Boman, Jennifer; Currie, Genevieve; MacDonald, Ron; Miller-Young, Janice; Yeo, Michelle; Zettel, Stephanie
2017-01-01
In this chapter we describe the Decoding the Disciplines Faculty Learning Community at Mount Royal University and how Decoding has been used in new and multidisciplinary ways in the various teaching, curriculum, and research projects that are presented in detail in subsequent chapters.
Maximum likelihood decoding analysis of accumulate-repeat-accumulate codes
NASA Technical Reports Server (NTRS)
Abbasfar, A.; Divsalar, D.; Yao, K.
2004-01-01
In this paper, the performance of repeat-accumulate codes with maximum-likelihood (ML) decoding is analyzed and compared to random codes using very tight bounds. Some simple codes are shown to perform very close to the Shannon limit with maximum-likelihood decoding.
Gieß, Mario; Witte, Anna; Jasper, Julia; Koch, Oliver; Summerer, Daniel
2018-05-09
5-Methylcytosine (5mC) and its oxidized derivatives are regulatory elements of mammalian genomes involved in development and disease. These nucleobases do not selectively modulate Watson-Crick pairing, preventing their programmable targeting and analysis by traditional hybridization probes. Transcription-activator-like effectors (TALEs) can be engineered for use as programmable probes with epigenetic nucleobase selectivity. However, only partial selectivities for oxidized 5mC have been achieved so far, preventing unambiguous target binding. We overcome this limitation by destroying and re-inducing nucleobase selectivity in TALEs via protein engineering and chemoselective nucleobase blocking. We engineer cavities in TALE repeats and identify a cavity that accommodates all eight human DNA nucleobases. We then introduce substituents with varying size, flexibility, and branching degree at each oxidized 5mC. Depending on the nucleobase, substituents with distinct properties effectively block TALE-binding and induce full nucleobase selectivity in the universal repeat. Successful transfer to affinity enrichment in a human genome background indicates that this approach enables the fully selective detection of each oxidized 5mC in complex DNA by programmable probes.
NASA Astrophysics Data System (ADS)
Choi, Hoseok; Lee, Jeyeon; Park, Jinsick; Lee, Seho; Ahn, Kyoung-ha; Kim, In Young; Lee, Kyoung-Min; Jang, Dong Pyo
2018-02-01
Objective. In arm movement BCIs (brain-computer interfaces), unimanual control has been much more extensively studied than its bimanual counterpart. However, it is well known that the bimanual brain state is different from the unimanual one. The conventional methodology used in unimanual studies does not take the brain state into consideration and therefore appears insufficient for decoding bimanual movements. In this paper, we propose the use of a two-staged (effector-then-trajectory) decoder, which combines the classification of movement conditions with a hand trajectory predicting algorithm for unimanual and bimanual movements, for application in real-world BCIs. Approach. Two micro-electrode patches (32 channels) were inserted over the dura mater of the left and right hemispheres of two rhesus monkeys, covering the motor-related cortex, for epidural electrocorticography (ECoG). Six motion sensors (inertial measurement units) were used to record the movement signals. The monkeys performed three types of arm movement tasks: left unimanual, right unimanual, and bimanual. To decode these movements, we used a two-staged decoder, which combines an effector classifier for four states (left unimanual, right unimanual, bimanual movement, and stationary) with a movement predictor based on regression. Main results. Using this approach, we successfully decoded both arm positions with the proposed decoder. The results showed that decoding performance for bimanual movements was improved compared to the conventional method, which does not consider the effector, and the decoding performance was significant and stable over a period of four months. In addition, we also demonstrated the feasibility of epidural ECoG signals, which provided an adequate level of decoding accuracy. Significance. These results provide evidence that brain signals differ depending on the movement conditions or effectors. Thus, the two-staged method could be useful if BCIs are to generalize to both unimanual and bimanual operation in human applications and in various neuro-prosthetic fields.
Clusterless Decoding of Position From Multiunit Activity Using A Marked Point Process Filter
Deng, Xinyi; Liu, Daniel F.; Kay, Kenneth; Frank, Loren M.; Eden, Uri T.
2016-01-01
Point process filters have been applied successfully to decode neural signals and track neural dynamics. Traditionally, these methods assume that multiunit spiking activity has already been correctly spike-sorted. As a result, these methods are not appropriate for situations where sorting cannot be performed with high precision such as real-time decoding for brain-computer interfaces. As the unsupervised spike-sorting problem remains unsolved, we took an alternative approach that takes advantage of recent insights about clusterless decoding. Here we present a new point process decoding algorithm that does not require multiunit signals to be sorted into individual units. We use the theory of marked point processes to construct a function that characterizes the relationship between a covariate of interest (in this case, the location of a rat on a track) and features of the spike waveforms. In our example, we use tetrode recordings, and the marks represent a four-dimensional vector of the maximum amplitudes of the spike waveform on each of the four electrodes. In general, the marks may represent any features of the spike waveform. We then use Bayes’ rule to estimate spatial location from hippocampal neural activity. We validate our approach with a simulation study and with experimental data recorded in the hippocampus of a rat moving through a linear environment. Our decoding algorithm accurately reconstructs the rat’s position from unsorted multiunit spiking activity. We then compare the quality of our decoding algorithm to that of a traditional spike-sorting and decoding algorithm. Our analyses show that the proposed decoding algorithm performs equivalently or better than algorithms based on sorted single-unit activity. These results provide a path toward accurate real-time decoding of spiking patterns that could be used to carry out content-specific manipulations of population activity in hippocampus or elsewhere in the brain. PMID:25973549
Nishitsuji, Koki; Arimoto, Asuka; Iwai, Kenji; Sudo, Yusuke; Hisata, Kanako; Fujie, Manabu; Arakaki, Nana; Kushiro, Tetsuo; Konishi, Teruko; Shinzato, Chuya; Satoh, Noriyuki; Shoguchi, Eiichi
2016-12-01
The brown alga, Cladosiphon okamuranus (Okinawa mozuku), is economically one of the most important edible seaweeds, and is cultivated for market primarily in Okinawa, Japan. C. okamuranus constitutes a significant source of fucoidan, which has various physiological and biological activities. To facilitate studies of seaweed biology, we decoded the draft genome of C. okamuranus S-strain. The genome size of C. okamuranus was estimated as ∼140 Mbp, smaller than the genomes of two other brown algae, Ectocarpus siliculosus and Saccharina japonica. Sequencing with ∼100× coverage yielded an assembly of 541 scaffolds with N50 = 416 kbp. Together with transcriptomic data, we estimated that the C. okamuranus genome contains 13,640 protein-coding genes, approximately 94% of which have been confirmed with corresponding mRNAs. Comparisons with the E. siliculosus genome identified a set of C. okamuranus genes that encode enzymes involved in biosynthetic pathways for sulfated fucans and alginate biosynthesis. In addition, we identified C. okamuranus genes for enzymes involved in phlorotannin biosynthesis. The present decoding of the Cladosiphon okamuranus genome provides a platform for future studies of mozuku biology. © The Author 2016. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.
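For readers unfamiliar with the N50 statistic quoted above: it is the scaffold length L such that scaffolds of length L or longer together cover at least half of the assembly. A minimal sketch with made-up scaffold lengths:

```python
# Minimal N50 calculation: return the length L such that scaffolds of length
# >= L cover at least half of the total assembly. Lengths below are made up.
def n50(lengths):
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length

print(n50([500_000, 416_000, 300_000, 200_000, 100_000]))   # -> 416000 here
```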
Single-Cell in Situ RNA Analysis With Switchable Fluorescent Oligonucleotides.
Xiao, Lu; Guo, Jia
2018-01-01
Comprehensive RNA analyses in individual cells in their native spatial contexts promise to transform our understanding of normal physiology and disease pathogenesis. Here we report a single-cell in situ RNA analysis approach using switchable fluorescent oligonucleotides (SFO). In this method, transcripts are first hybridized by pre-decoding oligonucleotides. These oligonucleotides subsequently recruit SFO to stain their corresponding RNA targets. After fluorescence imaging, all the SFO in the whole specimen are simultaneously removed by DNA strand displacement reactions. Through continuous cycles of target staining, fluorescence imaging, and SFO removal, a large number of different transcripts can be identified by unique fluorophore sequences and visualized at the optical resolution. To demonstrate the feasibility of this approach, we show that the hybridized SFO can be efficiently stripped by strand displacement reactions within 30 min. We also demonstrate that this SFO removal process maintains the integrity of the RNA targets and the pre-decoding oligonucleotides, and keeps them hybridized. Applying this approach, we show that transcripts can be restained in at least eight hybridization cycles with high analysis accuracy, which theoretically would enable the whole transcriptome to be quantified at the single molecule sensitivity in individual cells. This in situ RNA analysis technology will have wide applications in systems biology, molecular diagnosis, and targeted therapies.
Aldinger, Carolin A; Leisinger, Anne-Katrin; Gaston, Kirk W; Limbach, Patrick A; Igloi, Gabor L
2012-10-01
It is a prevalent concept that, in line with the Wobble Hypothesis, those tRNAs having an adenosine in the first position of the anticodon become modified to an inosine at this position. Sequencing the cDNA derived from the gene coding for cytoplasmic tRNA (Arg) ACG from several higher plants as well as mass spectrometric analysis of the isoacceptor has revealed that for this kingdom an unmodified A in the wobble position of the anticodon is the rule rather than the exception. In vitro translation shows that in the plant system the absence of inosine in the wobble position of tRNA (Arg) does not prevent decoding. This isoacceptor belongs to the class of tRNA that is imported from the cytoplasm into the mitochondria of higher plants. Previous studies on the mitochondrial tRNA pool have demonstrated the existence of tRNA (Arg) ICG in this organelle. In moss the mitochondrial encoded distinct tRNA (Arg) ACG isoacceptor possesses the I34 modification. The implication is that for mitochondrial protein biosynthesis A-to-I editing is necessary and occurs by a mitochondrion-specific deaminase after import of the unmodified nuclear encoded tRNA (Arg) ACG.
Decoding-Accuracy-Based Sequential Dimensionality Reduction of Spatio-Temporal Neural Activities
NASA Astrophysics Data System (ADS)
Funamizu, Akihiro; Kanzaki, Ryohei; Takahashi, Hirokazu
Performance of a brain machine interface (BMI) critically depends on selection of input data because information embedded in the neural activities is highly redundant. In addition, properly selected input data with a reduced dimension leads to improvement of decoding generalization ability and decrease of computational efforts, both of which are significant advantages for the clinical applications. In the present paper, we propose an algorithm of sequential dimensionality reduction (SDR) that effectively extracts motor/sensory related spatio-temporal neural activities. The algorithm gradually reduces input data dimension by dropping neural data spatio-temporally so as not to undermine the decoding accuracy as far as possible. Support vector machine (SVM) was used as the decoder, and tone-induced neural activities in rat auditory cortices were decoded into the test tone frequencies. SDR reduced the input data dimension to a quarter and significantly improved the accuracy of decoding of novel data. Moreover, spatio-temporal neural activity patterns selected by SDR resulted in significantly higher accuracy than high spike rate patterns or conventionally used spatial patterns. These results suggest that the proposed algorithm can improve the generalization ability and decrease the computational effort of decoding.
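A much-simplified sketch of the idea behind sequential dimensionality reduction: greedily drop the input dimension whose removal hurts cross-validated decoder accuracy the least, and stop when accuracy would degrade. It uses a linear SVM from scikit-learn on synthetic data; it is not the authors' algorithm or their auditory-cortex recordings.

```python
# Greedy backward elimination guided by cross-validated SVM accuracy, as a
# stand-in for sequential dimensionality reduction. Synthetic data only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 20))
y = rng.integers(0, 2, size=120)
X[:, 3] += y                     # make a couple of dimensions informative
X[:, 7] += 0.5 * y

def cv_acc(cols):
    return cross_val_score(SVC(kernel="linear"), X[:, cols], y, cv=5).mean()

kept = list(range(X.shape[1]))
baseline = cv_acc(kept)
while len(kept) > 1:
    scores = [(cv_acc([c for c in kept if c != d]), d) for d in kept]
    best_score, drop = max(scores)
    if best_score < baseline - 0.01:     # stop if dropping anything would hurt accuracy
        break
    kept.remove(drop)
    baseline = best_score

print("kept dimensions:", kept)
```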
Influence of incident angle on the decoding in laser polarization encoding guidance
NASA Astrophysics Data System (ADS)
Zhou, Muchun; Chen, Yanru; Zhao, Qi; Xin, Yu; Wen, Hongyuan
2009-07-01
Dynamic detection of polarization states is very important for laser polarization coding guidance systems. In this paper, a dynamic polarization decoding and detection system for laser polarization coding guidance was designed. The detection process for normally incident polarized light is analyzed with the Jones matrix formalism; the system can effectively detect changes in polarization. The influence of non-normally incident light on the performance of the polarization decoding and detection system is then studied; the analysis shows that changes in incident angle have a negative impact on the measurement results. The non-normal-incidence effects are mainly caused by second-order birefringence and polarization-sensitivity effects generated in the phase-delay plate and the beam-splitter prism. Combined with the Fresnel formulas, the decoding errors of linearly, elliptically, and circularly polarized light entering the detector at different incident angles are calculated; the results show that the decoding errors increase with increasing incident angle. The decoding errors depend on the geometry and the material refractive indices of the wave plate and the polarization beam-splitting prism, and can be reduced by using a thin low-order wave plate. Simulation of the detection of polarized light at different incident angles confirms these conclusions.
Online decoding of object-based attention using real-time fMRI.
Niazi, Adnan M; van den Broek, Philip L C; Klanke, Stefan; Barth, Markus; Poel, Mannes; Desain, Peter; van Gerven, Marcel A J
2014-01-01
Visual attention is used to selectively filter relevant information depending on current task demands and goals. Visual attention is called object-based attention when it is directed to coherent forms or objects in the visual field. This study used real-time functional magnetic resonance imaging for moment-to-moment decoding of attention to spatially overlapped objects belonging to two different object categories. First, a whole-brain classifier was trained on pictures of faces and places. Subjects then saw transparently overlapped pictures of a face and a place, and attended to only one of them while ignoring the other. The category of the attended object, face or place, was decoded on a scan-by-scan basis using the previously trained decoder. The decoder performed at 77.6% accuracy indicating that despite competing bottom-up sensory input, object-based visual attention biased neural patterns towards that of the attended object. Furthermore, a comparison between different classification approaches indicated that the representation of faces and places is distributed rather than focal. This implies that real-time decoding of object-based attention requires a multivariate decoding approach that can detect these distributed patterns of cortical activity. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Extracting duration information in a picture category decoding task using hidden Markov Models
NASA Astrophysics Data System (ADS)
Pfeiffer, Tim; Heinze, Nicolai; Frysch, Robert; Deouell, Leon Y.; Schoenfeld, Mircea A.; Knight, Robert T.; Rose, Georg
2016-04-01
Objective. Adapting classifiers for the purpose of brain signal decoding is a major challenge in brain-computer-interface (BCI) research. In a previous study we showed in principle that hidden Markov models (HMM) are a suitable alternative to the well-studied static classifiers. However, since we investigated a rather straightforward task, advantages from modeling of the signal could not be assessed. Approach. Here, we investigate a more complex data set in order to find out to what extent HMMs, as a dynamic classifier, can provide useful additional information. We show for a visual decoding problem that besides category information, HMMs can simultaneously decode picture duration without an additional training required. This decoding is based on a strong correlation that we found between picture duration and the behavior of the Viterbi paths. Main results. Decoding accuracies of up to 80% could be obtained for category and duration decoding with a single classifier trained on category information only. Significance. The extraction of multiple types of information using a single classifier enables the processing of more complex problems, while preserving good training results even on small databases. Therefore, it provides a convenient framework for online real-life BCI utilizations.
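The Viterbi paths referred to above are the most likely hidden-state sequences of an HMM given the observations. The sketch below shows generic Viterbi decoding for a discrete HMM; the toy model and observation sequence are illustrative assumptions, not the study's classifier.

```python
# Generic Viterbi decoding for a discrete HMM: the most likely state path
# given an observation sequence, computed in log space with backtracking.
import numpy as np

def viterbi(obs, pi, A, B):
    """obs: observation indices; pi: initial probs; A: transitions; B: emissions."""
    n_states, T = A.shape[0], len(obs)
    logd = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    logd[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = logd[t - 1][:, None] + np.log(A)        # scores[i, j]: state i -> state j
        back[t] = np.argmax(scores, axis=0)              # best predecessor of each state
        logd[t] = scores[back[t], range(n_states)] + np.log(B[:, obs[t]])
    path = [int(np.argmax(logd[-1]))]
    for t in range(T - 1, 0, -1):                        # backtrack from the best final state
        path.append(int(back[t, path[-1]]))
    return path[::-1]

pi = np.array([0.6, 0.4])
A = np.array([[0.8, 0.2], [0.3, 0.7]])
B = np.array([[0.7, 0.3], [0.2, 0.8]])
print(viterbi([0, 0, 1, 1, 1], pi, A, B))
```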
Fast Exact Search in Hamming Space With Multi-Index Hashing.
Norouzi, Mohammad; Punjani, Ali; Fleet, David J
2014-06-01
There is growing interest in representing image data and feature descriptors using compact binary codes for fast near neighbor search. Although binary codes are motivated by their use as direct indices (addresses) into a hash table, codes longer than 32 bits have not been used in this way, since direct indexing was thought to be ineffective for them. We introduce a rigorous way to build multiple hash tables on binary code substrings that enables exact k-nearest neighbor search in Hamming space. The approach is storage efficient and straightforward to implement. Theoretical analysis shows that the algorithm exhibits sub-linear run-time behavior for uniformly distributed codes. Empirical results show dramatic speedups over a linear scan baseline for datasets of up to one billion codes of 64, 128, or 256 bits.
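A simplified sketch of the multi-index idea, with assumed toy parameters (32-bit codes split into four 8-bit substrings): by the pigeonhole principle, any code within Hamming distance smaller than the number of substrings must agree exactly with the query in at least one substring, so exact substring lookups yield a small candidate set that is then verified. The published method additionally probes perturbed substrings to reach larger radii.

```python
# Simplified multi-index hashing: index each of m disjoint substrings of a
# binary code in its own hash table; query tables with the query's substrings
# and verify candidates with full Hamming distance. Toy parameters throughout.
from collections import defaultdict

M = 4              # number of substrings
BITS = 32          # code length; each substring is BITS // M bits

def substrings(code):
    width = BITS // M
    return [(code >> (i * width)) & ((1 << width) - 1) for i in range(M)]

def build_index(codes):
    tables = [defaultdict(list) for _ in range(M)]
    for idx, code in enumerate(codes):
        for i, s in enumerate(substrings(code)):
            tables[i][s].append(idx)
    return tables

def search(query, codes, tables, radius):
    cands = set()
    for i, s in enumerate(substrings(query)):
        cands.update(tables[i].get(s, []))               # exact substring matches
    return sorted(i for i in cands if bin(codes[i] ^ query).count("1") <= radius)

codes = [0b1011_0001_1110_0010_0101_1100_0011_1010,
         0b1011_0001_1110_0010_0101_1100_0011_1000,      # differs from the first in 1 bit
         0b0000_1111_0000_1111_0000_1111_0000_1111]
tables = build_index(codes)
print(search(codes[0], codes, tables, radius=3))         # -> [0, 1]
```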
Schirmeier, Matthias K; Derwing, Bruce L; Libben, Gary
2004-01-01
Two types of experiments investigate the visual on-line and off-line processing of German ver-verbs (e.g., verbittern 'to embitter'). In Experiments 1 and 2 (morphological priming), latency patterns revealed the existence of facilitation effects for the morphological conditions (BITTER-VERBITTERN and BITTERN-VERBITTERN) as compared to the neutral conditions (SAUBER-VERBITTERN and SAUBERN-VERBITTERN). In Experiments 3 and 4 (rating tasks) participants had to judge whether the target (VERBITTERN) "comes from," "contains a form of," or "contains the meaning of" the root (BITTER) or the root+en substring (BITTERN). Taken together, these studies revealed the combined influence of the three factors of lexicality (real word status), morphological structure, and semantic transparency.
A Comparison of Source Code Plagiarism Detection Engines
NASA Astrophysics Data System (ADS)
Lancaster, Thomas; Culwin, Fintan
2004-06-01
Automated techniques for finding plagiarism in student source code submissions have been in use for over 20 years and there are many available engines and services. This paper reviews the literature on the major modern detection engines, providing a comparison of them based upon the metrics and techniques they deploy. Generally the most common and effective techniques are seen to involve tokenising student submissions then searching pairs of submissions for long common substrings, an example of what is defined to be a paired structural metric. Computing academics are recommended to use one of the two Web-based detection engines, MOSS and JPlag. It is shown that whilst detection is well established there are still places where further research would be useful, particularly where visual support of the investigation process is possible.
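A sketch of the tokenise-then-search approach the review identifies as most effective: normalise identifiers and literals so that renaming cannot hide copying, then compute the longest common token substring by dynamic programming as a paired structural metric. The crude tokeniser below is an illustration only, not what MOSS or JPlag actually do.

```python
# Token-based longest common substring as a simple paired structural metric.
import re

def tokenise(src):
    # Map identifiers and numeric literals to generic tokens; keywords and
    # punctuation pass through unchanged.
    keywords = {"if", "else", "for", "while", "return", "def", "int"}
    out = []
    for tok in re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", src):
        if tok in keywords or not tok[0].isalnum() and tok[0] != "_":
            out.append(tok)              # keyword or punctuation
        elif tok[0].isdigit():
            out.append("NUM")
        else:
            out.append("ID")
    return out

def longest_common_substring(a, b):
    # Classic O(len(a) * len(b)) dynamic programme over token sequences.
    best = 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

s1 = "int total = 0; for (i = 0; i < n; i++) total += v[i];"
s2 = "int acc = 0; for (k = 0; k < len; k++) acc += data[k];"
t1, t2 = tokenise(s1), tokenise(s2)
print(longest_common_substring(t1, t2) / max(len(t1), len(t2)))
# -> 1.0: identical token structure despite renamed identifiers
```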
Building Bridges from the Decoding Interview to Teaching Practice
ERIC Educational Resources Information Center
Pettit, Jennifer; Rathburn, Melanie; Calvert, Victoria; Lexier, Roberta; Underwood, Margot; Gleeson, Judy; Dean, Yasmin
2017-01-01
This chapter describes a multidisciplinary faculty self-study about reciprocity in service-learning. The study began with each coauthor participating in a Decoding interview. We describe how Decoding combined with collaborative self-study had a positive impact on our teaching practice.
An extended Reed Solomon decoder design
NASA Technical Reports Server (NTRS)
Chen, J.; Owsley, P.; Purviance, J.
1991-01-01
It has previously been shown that Reed-Solomon (RS) codes can correct errors beyond the Singleton and Rieger bounds with an arbitrarily small probability of miscorrection. That is, an (n,k) RS code can correct more than (n-k)/2 errors. An implementation of such an RS decoder is presented in this paper. An existing RS decoder, the AHA4010, is utilized in this work. The resulting decoder is especially useful for error patterns consisting of a long burst combined with some random errors.
A high speed sequential decoder
NASA Technical Reports Server (NTRS)
Lum, H., Jr.
1972-01-01
The performance and theory of operation for the High Speed Hard Decision Sequential Decoder are delineated. The decoder is a forward error correction system which is capable of accepting data from binary-phase-shift-keyed and quadriphase-shift-keyed modems at input data rates up to 30 megabits per second. Test results show that the decoder is capable of maintaining a composite error rate of 0.00001 at an input E_b/N_0 of 5.6 dB. This performance has been obtained with minimum circuit complexity.
Neural Decoder for Topological Codes
NASA Astrophysics Data System (ADS)
Torlai, Giacomo; Melko, Roger G.
2017-07-01
We present an algorithm for error correction in topological codes that exploits modern machine learning techniques. Our decoder is constructed from a stochastic neural network called a Boltzmann machine, of the type extensively used in deep learning. We provide a general prescription for the training of the network and a decoding strategy that is applicable to a wide variety of stabilizer codes with very little specialization. We demonstrate the neural decoder numerically on the well-known two-dimensional toric code with phase-flip errors.
Oya, Hiroyuki; Howard, Matthew A.; Adolphs, Ralph
2008-01-01
Faces are processed by a neural system with distributed anatomical components, but the roles of these components remain unclear. A dominant theory of face perception postulates independent representations of invariant aspects of faces (e.g., identity) in ventral temporal cortex, including the fusiform gyrus, and changeable aspects of faces (e.g., emotion) in lateral temporal cortex, including the superior temporal sulcus. Here we recorded neuronal activity directly from the cortical surface in 9 neurosurgical subjects undergoing epilepsy monitoring while they viewed static and dynamic facial expressions. Applying novel decoding analyses to the power spectrogram of electrocorticograms (ECoG) from over 100 contacts in ventral and lateral temporal cortex, we found better representation of both invariant and changeable aspects of faces in ventral than lateral temporal cortex. Critical information for discriminating faces from geometric patterns was carried by power modulations between 50 and 150 Hz. For both static and dynamic face stimuli, we obtained a higher decoding performance in ventral than lateral temporal cortex. For discriminating fearful from happy expressions, critical information was carried by power modulations between 60 and 150 Hz and below 30 Hz, and was again better decoded in ventral than lateral temporal cortex. Task-relevant attention improved decoding accuracy by more than 10% across a wide frequency range in ventral, but not at all in lateral, temporal cortex. Spatial searchlight decoding showed that decoding performance was highest around the middle fusiform gyrus. Finally, we found that the right hemisphere, in general, showed superior decoding to the left hemisphere. Taken together, our results challenge the dominant model of independent representation of invariant and changeable aspects of faces: information about both face attributes was better decoded from a single region in the middle fusiform gyrus. PMID:19065268
Moraitou, Despina; Papantoniou, Georgia; Gkinopoulos, Theofilos; Nigritinou, Magdalini
2013-09-01
Although the ability to recognize emotions through bodily and facial muscular movements is vital to everyday life, numerous studies have found that older adults are less adept at identifying emotions than younger adults. The message gleaned from research has been one of greater decline in abilities to recognize specific negative emotions than positive ones. At the same time, these results raise methodological issues with regard to different modalities in which emotion decoding is measured. The main aim of the present study is to identify the pattern of age differences in the ability to decode basic emotions from naturalistic visual emotional displays. The sample comprised a total of 208 adults from Greece, aged from 18 to 86 years. Participants were examined using the Emotion Evaluation Test, which is the first part of a broader audiovisual tool, The Awareness of Social Inference Test. The Emotion Evaluation Test was designed to examine a person's ability to identify six emotions and discriminate these from neutral expressions, as portrayed dynamically by professional actors. The findings indicate that decoding of basic emotions occurs along the broad affective dimension of uncertainty, and a basic step in emotion decoding involves recognizing whether information presented is emotional or not. Age was found to negatively affect the ability to decode basic negatively valenced emotions as well as pleasant surprise. Happiness decoding is the only ability that was found well-preserved with advancing age. The main conclusion drawn from the study is that the pattern in which emotion decoding from visual cues is affected by normal ageing depends on the rate of uncertainty, which either is related to decoding difficulties or is inherent to a specific emotion. © 2013 The Authors. Psychogeriatrics © 2013 Japanese Psychogeriatric Society.
Decoding Individual Finger Movements from One Hand Using Human EEG Signals
Gonzalez, Jania; Ding, Lei
2014-01-01
Brain-computer interface (BCI) is an assistive technology that decodes neurophysiological signals generated by the human brain and translates them into control signals for external devices, e.g., wheelchairs. One problem challenging noninvasive BCI technologies is the limited number of control dimensions obtainable from decoding movements of, mainly, large body parts, e.g., upper and lower limbs. It has been reported that complicated dexterous functions, i.e., finger movements, can be decoded from electrocorticography (ECoG) signals, while it remains unclear whether noninvasive electroencephalography (EEG) signals also carry sufficient information to decode the same type of movements. In the present study, phenomena of broadband power increase and low-frequency-band power decrease were observed in EEG when EEG power spectra were decomposed by principal component analysis (PCA). These movement-related spectral structures and their changes caused by finger movements in EEG are consistent with observations in a previous ECoG study, as well as with the results from ECoG data in the present study. An average decoding accuracy of 77.11% over all subjects was obtained in classifying each pair of fingers from one hand, using movement-related spectral changes as features for a support vector machine (SVM) classifier. The average decoding accuracy in three epilepsy patients using ECoG data was 91.28% with similarly obtained features and the same classifier. Both the EEG and ECoG decoding accuracies are significantly higher than the empirical guessing level (51.26%) in all subjects (p<0.05). The present study suggests that EEG exhibits movement-related spectral changes similar to those in ECoG, and demonstrates the feasibility of discriminating finger movements from one hand using EEG. These findings are promising for the development of BCIs with rich control signals using noninvasive technologies. PMID:24416360
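In the spirit of the analysis described above (PCA decomposition of power features followed by pairwise SVM classification), here is a hedged sketch on synthetic stand-in data using scikit-learn; the trial counts, channel and frequency dimensions, and effect size are assumptions, not EEG or ECoG recordings.

```python
# Illustrative decoding pipeline: PCA on power features, then a linear SVM
# classifying a pair of finger movements, scored with cross-validation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_channels, n_freqs = 200, 16, 40
X = rng.standard_normal((n_trials, n_channels * n_freqs))   # trial x (channel, frequency) power
y = rng.integers(0, 2, size=n_trials)                        # finger 1 vs finger 2
X[y == 1, :n_freqs] += 0.8       # broadband power increase on one channel for class 1

clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")
```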
Multivariate pattern analysis for MEG: A comparison of dissimilarity measures.
Guggenmos, Matthias; Sterzer, Philipp; Cichy, Radoslaw Martin
2018-06-01
Multivariate pattern analysis (MVPA) methods such as decoding and representational similarity analysis (RSA) are growing rapidly in popularity for the analysis of magnetoencephalography (MEG) data. However, little is known about the relative performance and characteristics of the specific dissimilarity measures used to describe differences between evoked activation patterns. Here we used a multisession MEG data set to qualitatively characterize a range of dissimilarity measures and to quantitatively compare them with respect to decoding accuracy (for decoding) and between-session reliability of representational dissimilarity matrices (for RSA). We tested dissimilarity measures from a range of classifiers (Linear Discriminant Analysis - LDA, Support Vector Machine - SVM, Weighted Robust Distance - WeiRD, Gaussian Naïve Bayes - GNB) and distances (Euclidean distance, Pearson correlation). In addition, we evaluated three key processing choices: 1) preprocessing (noise normalisation, removal of the pattern mean), 2) weighting decoding accuracies by decision values, and 3) computing distances in three different partitioning schemes (non-cross-validated, cross-validated, within-class-corrected). Four main conclusions emerged from our results. First, appropriate multivariate noise normalization substantially improved decoding accuracies and the reliability of dissimilarity measures. Second, LDA, SVM and WeiRD yielded high peak decoding accuracies and nearly identical time courses. Third, while using decoding accuracies for RSA was markedly less reliable than continuous distances, this disadvantage was ameliorated by decision-value-weighting of decoding accuracies. Fourth, the cross-validated Euclidean distance provided unbiased distance estimates and highly replicable representational dissimilarity matrices. Overall, we strongly advise the use of multivariate noise normalisation as a general preprocessing step, recommend LDA, SVM and WeiRD as classifiers for decoding and highlight the cross-validated Euclidean distance as a reliable and unbiased default choice for RSA. Copyright © 2018 Elsevier Inc. All rights reserved.
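A minimal sketch of the cross-validated Euclidean distance the authors recommend for RSA: form the condition-difference pattern on two independent data splits and take their inner product, so that noise, being independent across splits, does not positively bias the estimate. Data and dimensions below are synthetic assumptions.

```python
# Cross-validated (squared) Euclidean distance between two conditions.
import numpy as np

def cv_euclidean(a1, b1, a2, b2):
    """a*, b*: trials x sensors arrays for conditions A and B on splits 1 and 2."""
    d1 = a1.mean(axis=0) - b1.mean(axis=0)   # difference pattern on split 1
    d2 = a2.mean(axis=0) - b2.mean(axis=0)   # difference pattern on split 2
    return d1 @ d2                            # unbiased: noise is independent across splits

rng = np.random.default_rng(0)
signal_a, signal_b = rng.standard_normal(30), rng.standard_normal(30)
trials = lambda mu: mu + 2.0 * rng.standard_normal((40, 30))   # noisy trials around a pattern
print("A vs B:", cv_euclidean(trials(signal_a), trials(signal_b),
                              trials(signal_a), trials(signal_b)))
print("A vs A:", cv_euclidean(trials(signal_a), trials(signal_a),
                              trials(signal_a), trials(signal_a)))  # ~0, not inflated by noise
```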
Benchmarking of Methods for Genomic Taxonomy
Larsen, Mette V.; Cosentino, Salvatore; Lukjancenko, Oksana; ...
2014-02-26
One of the first issues that emerges when a prokaryotic organism of interest is encountered is the question of what it is—that is, which species it is. The 16S rRNA gene formed the basis of the first method for sequence-based taxonomy and has had a tremendous impact on the field of microbiology. Nevertheless, the method has been found to have a number of shortcomings. In this paper, we trained and benchmarked five methods for whole-genome sequence-based prokaryotic species identification on a common data set of complete genomes: (i) SpeciesFinder, which is based on the complete 16S rRNA gene; (ii) Reads2Type, which searches for species-specific 50-mers in either the 16S rRNA gene or the gyrB gene (for the Enterobacteriaceae family); (iii) the ribosomal multilocus sequence typing (rMLST) method, which samples up to 53 ribosomal genes; (iv) TaxonomyFinder, which is based on species-specific functional protein domain profiles; and finally (v) KmerFinder, which examines the number of co-occurring k-mers (substrings of k nucleotides in DNA sequence data). The performances of the methods were subsequently evaluated on three data sets of short sequence reads or draft genomes from public databases. In total, the evaluation sets constituted sequence data from more than 11,000 isolates covering 159 genera and 243 species. Our results indicate that methods that sample only chromosomal, core genes have difficulties in distinguishing closely related species which only recently diverged. Finally, the KmerFinder method had the overall highest accuracy and correctly identified from 93% to 97% of the isolates in the evaluation sets.
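As a toy illustration of the k-mer overlap idea behind KmerFinder (not the actual tool, which uses much larger k and whole genomes), the sketch below scores each reference sequence by the number of k-mers it shares with a query; the sequences are made up.

```python
# Score reference sequences by shared k-mers with a query and pick the best.
def kmers(seq, k=5):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def best_match(query, references, k=5):
    q = kmers(query, k)
    scores = {name: len(q & kmers(seq, k)) for name, seq in references.items()}
    return max(scores, key=scores.get), scores

references = {
    "species_A": "ATGCGTACGTTAGCATGCGTACGTTAGC",
    "species_B": "TTTTAAAACCCCGGGGTTTTAAAACCCC",
}
query = "GCGTACGTTAGCATGCG"     # a fragment of species_A
print(best_match(query, references))
```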
A low-complexity Reed-Solomon decoder using new key equation solver
NASA Astrophysics Data System (ADS)
Xie, Jun; Yuan, Songxin; Tu, Xiaodong; Zhang, Chongfu
2006-09-01
This paper presents a low-complexity parallel Reed-Solomon (RS) (255,239) decoder architecture using a novel pipelined variable-stage recursive Modified Euclidean (ME) algorithm for optical communication. A pipelined four-parallel syndrome generator is proposed. Time multiplexing and resource sharing schemes are used in the novel recursive ME algorithm to reduce the logic gate count. The new key equation solver can be shared by two decoder macros. A new Chien search cell that does not need initialization is proposed in the paper. The proposed decoder can be used for devices with data rates up to 2.5 Gb/s. The decoder is implemented in an Altera Stratix II device. Resource utilization is reduced by about 40% compared to the conventional method.
47 CFR 79.103 - Closed caption decoder requirements for apparatus.
Code of Federal Regulations, 2014 CFR
2014-10-01
... RADIO SERVICES ACCESSIBILITY OF VIDEO PROGRAMMING Apparatus § 79.103 Closed caption decoder requirements... video programming transmitted simultaneously with sound, if such apparatus is manufactured in the United... with built-in closed caption decoder circuitry or capability designed to display closed-captioned video...
High-speed architecture for the decoding of trellis-coded modulation
NASA Technical Reports Server (NTRS)
Osborne, William P.
1992-01-01
Since 1971, when the Viterbi Algorithm was introduced as the optimal method of decoding convolutional codes, improvements in circuit technology, especially VLSI, have steadily increased its speed and practicality. Trellis-Coded Modulation (TCM) combines convolutional coding with higher-level modulation (non-binary source alphabet) to provide forward error correction and spectral efficiency. For binary codes, the current state-of-the-art is a 64-state Viterbi decoder on a single CMOS chip, operating at a data rate of 25 Mbps. Recently, there has been an interest in increasing the speed of the Viterbi Algorithm by improving the decoder architecture, or by simplifying the algorithm itself. Designs employing new architectural techniques are now in existence; however, these techniques are currently applied to simpler binary codes, not to TCM. The purpose of this report is to discuss TCM architectural considerations in general, and to present the design, at the logic gate level, of a specific TCM decoder which applies these considerations to achieve high-speed decoding.
Modified Dynamic Decode-and-Forward Relaying Protocol for Type II Relay in LTE-Advanced and Beyond
Nam, Sung Sik; Alouini, Mohamed-Slim; Choi, Seyeong
2016-01-01
In this paper, we propose a modified dynamic decode-and-forward (MoDDF) relaying protocol to meet the critical requirements for user equipment (UE) relays in next-generation cellular systems (e.g., LTE-Advanced and beyond). The proposed MoDDF realizes the fast jump-in relaying and the sequential decoding with an application of random codeset to encoding and re-encoding process at the source and the multiple UE relays, respectively. A subframe-by-subframe decoding based on the accumulated (or buffered) messages is employed to achieve energy, information, or mixed combining. Finally, possible early termination of decoding at the end user can lead to the higher spectral efficiency and more energy saving by reducing the frequency of redundant subframe transmission and decoding. These attractive features eliminate the need of directly exchanging control messages between multiple UE relays and the end user, which is an important prerequisite for the practical UE relay deployment. PMID:27898712
Aroudi, Ali; Doclo, Simon
2017-07-01
To decode auditory attention from single-trial EEG recordings in an acoustic scenario with two competing speakers, a least-squares method has recently been proposed. This method, however, requires the clean speech signals of both the attended and the unattended speaker to be available as reference signals. Since in practice only the binaural signals consisting of a reverberant mixture of both speakers and background noise are available, in this paper we explore the potential of using these (unprocessed) signals as reference signals for decoding auditory attention in different acoustic conditions (anechoic, reverberant, noisy, and reverberant-noisy). In addition, we investigate whether it is possible to use these signals instead of the clean attended speech signal for filter training. The experimental results show that using the unprocessed binaural signals for filter training and for decoding auditory attention is feasible with relatively high decoding performance, although for most acoustic conditions the decoding performance is significantly lower than when using the clean speech signals.
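A hedged sketch of the least-squares attention decoding idea: train a backward model that reconstructs the attended speech envelope from EEG, then decide which speaker is attended by correlating the reconstruction with both candidate envelopes. Everything below is synthetic and ignores the time-lagged features and regularisation used in practice.

```python
# Least-squares auditory attention decoding on synthetic stand-in data.
import numpy as np

rng = np.random.default_rng(0)
T, C = 2000, 16
env_att = np.abs(rng.standard_normal(T))        # attended speaker's envelope
env_unatt = np.abs(rng.standard_normal(T))      # unattended speaker's envelope
mixing = rng.standard_normal(C)
eeg = np.outer(env_att, mixing) + 1.5 * rng.standard_normal((T, C))   # EEG tracks attended envelope

train, test = slice(0, 1500), slice(1500, None)
g, *_ = np.linalg.lstsq(eeg[train], env_att[train], rcond=None)       # least-squares decoder
recon = eeg[test] @ g                                                  # reconstructed envelope

corr = lambda a, b: np.corrcoef(a, b)[0, 1]
attended = "speaker 1" if corr(recon, env_att[test]) > corr(recon, env_unatt[test]) else "speaker 2"
print(attended)
```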
An Optimized Three-Level Design of Decoder Based on Nanoscale Quantum-Dot Cellular Automata
NASA Astrophysics Data System (ADS)
Seyedi, Saeid; Navimipour, Nima Jafari
2018-03-01
Quantum-dot Cellular Automata (QCA) has been considered a potential successor to Complementary Metal-Oxide-Semiconductor (CMOS) technology because of its inherent advantages. Many QCA-based logic circuits with smaller feature size, improved operating frequency, and lower power consumption than CMOS have been proposed. This technology operates on the basis of electron interactions within quantum dots. Given the importance of an optimized decoder in any digital circuit, in this paper we design, implement, and simulate a new 2-to-4 decoder based on QCA with low delay, area, and complexity. The logic functionality of the 2-to-4 decoder is verified using the QCADesigner tool. The results show that the proposed QCA-based decoder performs well in terms of cell count, covered area, and time delay. Due to its lower clock pulse frequency, the proposed 2-to-4 decoder is helpful for building high-performance QCA-based sequential digital circuits.
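For reference, the Boolean behaviour that any 2-to-4 decoder (QCA or CMOS) realises is sketched below with an enable input; this describes the logic function only, not the paper's QCA cell-level layout.

```python
# Truth-table behaviour of a 2-to-4 decoder with enable: exactly one of four
# outputs goes high for each 2-bit input when enable = 1.
def decoder_2to4(a1, a0, enable=1):
    return [enable & (a1 == i1) & (a0 == i0) & 1
            for i1 in (0, 1) for i0 in (0, 1)]   # outputs D0..D3

for a1 in (0, 1):
    for a0 in (0, 1):
        print(a1, a0, decoder_2to4(a1, a0))
# 0 0 [1, 0, 0, 0]
# 0 1 [0, 1, 0, 0]
# 1 0 [0, 0, 1, 0]
# 1 1 [0, 0, 0, 1]
```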
Hard decoding algorithm for optimizing thresholds under general Markovian noise
NASA Astrophysics Data System (ADS)
Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond
2017-04-01
Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.
ERIC Educational Resources Information Center
Gates, Louis
2018-01-01
The accompanying article introduces highly transparent grapheme-phoneme relationships embodied within a Periodic table of decoding cells, which arguably presents the quintessential transparent decoding elements. The study then folds these cells into one highly transparent but simply stated singularity generalization--this generalization unifies…
Oppositional Decoding as an Act of Resistance.
ERIC Educational Resources Information Center
Steiner, Linda
1988-01-01
Argues that contributors to the "No Comment" feature of "Ms." magazine are engaging in oppositional decoding and speculates on why this is a satisfying group process. Also notes such decoding presents another challenge to the idea that mass media has the same effect on all audiences. (SD)
The spatial and temporal organization of ubiquitin networks
Grabbe, Caroline; Husnjak, Koraljka; Dikic, Ivan
2013-01-01
In the past decade, the diversity of signals generated by the ubiquitin system has emerged as a dominant regulator of biological processes and propagation of information in the eukaryotic cell. A wealth of information has been gained about the crucial role of spatial and temporal regulation of ubiquitin species of different lengths and linkages in the nuclear factor-κB (NF-κB) pathway, endocytic trafficking, protein degradation and DNA repair. This spatiotemporal regulation is achieved through sophisticated mechanisms of compartmentalization and sequential series of ubiquitylation events and signal decoding, which control diverse biological processes not only in the cell but also during the development of tissues and entire organisms. PMID:21448225
Code of Federal Regulations, 2014 CFR
2014-10-01
...: (1) Inputs. Decoders must have the capability to receive at least two audio inputs from EAS... externally, at least two minutes of audio or text messages. A decoder manufactured without an internal means to record and store audio or text must be equipped with a means (such as an audio or digital jack...
Code of Federal Regulations, 2013 CFR
2013-10-01
...: (1) Inputs. Decoders must have the capability to receive at least two audio inputs from EAS... externally, at least two minutes of audio or text messages. A decoder manufactured without an internal means to record and store audio or text must be equipped with a means (such as an audio or digital jack...
Code of Federal Regulations, 2012 CFR
2012-10-01
...: (1) Inputs. Decoders must have the capability to receive at least two audio inputs from EAS... externally, at least two minutes of audio or text messages. A decoder manufactured without an internal means to record and store audio or text must be equipped with a means (such as an audio or digital jack...
Hands-On Decoding: Guidelines for Using Manipulative Letters
ERIC Educational Resources Information Center
Pullen, Paige Cullen; Lane, Holly B.
2016-01-01
Manipulative objects have long been an essential tool in the development of mathematics knowledge and skills. A growing body of evidence suggests that using manipulative letters for decoding practice is also an effective method for teaching reading, particularly in improving the phonological and decoding skills of students at risk for reading…
The Contribution of Attentional Control and Working Memory to Reading Comprehension and Decoding
ERIC Educational Resources Information Center
Arrington, C. Nikki; Kulesz, Paulina A.; Francis, David J.; Fletcher, Jack M.; Barnes, Marcia A.
2014-01-01
Little is known about how specific components of working memory, namely, attentional processes including response inhibition, sustained attention, and cognitive inhibition, are related to reading decoding and comprehension. The current study evaluated the relations of reading comprehension, decoding, working memory, and attentional control in…
ERIC Educational Resources Information Center
Gregg, Noel; Hoy, Cheri; Flaherty, Donna Ann; Norris, Peggy; Coleman, Christopher; Davis, Mark; Jordan, Michael
2005-01-01
The vast majority of students with learning disabilities at the postsecondary level demonstrate reading decoding, reading fluency, and writing deficits. Identification of valid and reliable psychometric measures for documenting decoding and spelling disabilities at the postsecondary level is critical for determining appropriate accommodations. The…
Coding for reliable satellite communications
NASA Technical Reports Server (NTRS)
Lin, S.
1984-01-01
Several error control coding techniques for reliable satellite communications were investigated to find algorithms for fast decoding of Reed-Solomon codes in terms of dual basis. The decoding of the (255,223) Reed-Solomon code, which is used as the outer code in the concatenated TDRSS decoder, was of particular concern.
A (31,15) Reed-Solomon Code for large memory systems
NASA Technical Reports Server (NTRS)
Lim, R. S.
1979-01-01
This paper describes the encoding and the decoding of a (31,15) Reed-Solomon Code for multiple-burst error correction for large memory systems. The decoding procedure consists of four steps: (1) syndrome calculation, (2) error-location polynomial calculation, (3) error-location numbers calculation, and (4) error values calculation. The principal features of the design are the use of a hardware shift register for both high-speed encoding and syndrome calculation, and the use of a commercially available (31,15) decoder for decoding Steps 2, 3 and 4.
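As a hedged illustration of step (1), the sketch below computes syndromes for a (31,15) Reed-Solomon code over GF(2^5). The primitive polynomial x^5 + x^2 + 1 and the narrow-sense root set alpha^1..alpha^16 are common conventions assumed here for illustration; the hardware shift register mentioned in the abstract performs the same computation serially.

```python
# Syndrome calculation for a (31,15) Reed-Solomon code over GF(2^5).
PRIM = 0b100101            # x^5 + x^2 + 1, assumed primitive polynomial
N, K = 31, 15

# Build exponential/log tables for GF(32) so multiplication is table lookup.
exp = [0] * 62
log = [0] * 32
x = 1
for i in range(31):
    exp[i] = exp[i + 31] = x
    log[x] = i
    x <<= 1
    if x & 0b100000:       # reduce modulo the primitive polynomial
        x ^= PRIM

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else exp[log[a] + log[b]]

def syndromes(received):
    """S_j = r(alpha^j) for j = 1..n-k; all zero exactly when the word is a codeword."""
    out = []
    for j in range(1, N - K + 1):
        s = 0
        for i, r in enumerate(received):
            s ^= gf_mul(r, exp[(i * j) % 31])   # r_i * alpha^(i*j), added with XOR
        out.append(s)
    return out

codeword = [0] * N                               # the all-zero word is a codeword
print(all(s == 0 for s in syndromes(codeword)))          # True
corrupted = codeword[:]
corrupted[7] = 5                                 # single symbol error
print(any(s != 0 for s in syndromes(corrupted)))         # True
```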
Information encoder/decoder using chaotic systems
Miller, Samuel Lee; Miller, William Michael; McWhorter, Paul Jackson
1997-01-01
The present invention discloses a chaotic system-based information encoder and decoder that operates according to a relationship defining a chaotic system. Encoder input signals modify the dynamics of the chaotic system comprising the encoder. The modifications result in chaotic, encoder output signals that contain the encoder input signals encoded within them. The encoder output signals are then capable of secure transmissions using conventional transmission techniques. A decoder receives the encoder output signals (i.e., decoder input signals) and inverts the dynamics of the encoding system to directly reconstruct the original encoder input signals.
Node synchronization schemes for the Big Viterbi Decoder
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Swanson, L.; Arnold, S.
1992-01-01
The Big Viterbi Decoder (BVD), currently under development for the DSN, includes three separate algorithms to acquire and maintain node and frame synchronization. The first measures the number of decoded bits between two consecutive renormalization operations (renorm rate), the second detects the presence of the frame marker in the decoded bit stream (bit correlation), while the third searches for an encoded version of the frame marker in the encoded input stream (symbol correlation). A detailed account of the operation of the three methods is given, along with a performance comparison.
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Cabral, Hermano A.; He, Jiali
1997-01-01
Bootstrap Hybrid Decoding (BHD) (Jelinek and Cocke, 1971) is a coding/decoding scheme that adds extra redundancy to a set of convolutionally encoded codewords and uses this redundancy to provide reliability information to a sequential decoder. Theoretical results indicate that the bit error probability (BER) performance of BHD is close to that of Turbo-codes, without some of their drawbacks. In this report we study the use of the Multiple Stack Algorithm (MSA) (Chevillat and Costello, Jr., 1977) as the underlying sequential decoding algorithm in BHD, which makes possible an iterative version of BHD.
A comparison of VLSI architectures for time and transform domain decoding of Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Hsu, I. S.; Truong, T. K.; Deutsch, L. J.; Satorius, E. H.; Reed, I. S.
1988-01-01
It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial needed to decode a Reed-Solomon (RS) code. It is shown that this algorithm can be used for both time and transform domain decoding by replacing its initial conditions with the Forney syndromes and the erasure locator polynomial. By this means both the errata locator polynomial and the errata evaluator polynomial can be obtained with the Euclidean algorithm. With these ideas, both time and transform domain Reed-Solomon decoders for correcting errors and erasures are simplified and compared. As a consequence, the architectures of Reed-Solomon decoders for correcting both errors and erasures can be made more modular, regular, simple, and naturally suitable for VLSI implementation.
NASA Astrophysics Data System (ADS)
Lapotre, Vianney; Gogniat, Guy; Baghdadi, Amer; Diguet, Jean-Philippe
2017-12-01
The proliferation of connected devices brings a large variety of applications and traffic types with diverse requirements. Alongside this evolution in connectivity, recent years have seen considerable evolution of wireless communication standards for mobile telephone networks, local/wide wireless area networks, and Digital Video Broadcasting (DVB). In this context, intensive research has been conducted to provide flexible turbo decoders targeting high throughput, multi-mode and multi-standard operation, and power efficiency. However, flexible turbo decoder implementations have rarely considered dynamic reconfiguration in this context, which requires high-speed configuration switching. Starting from this assessment, this paper proposes the first solution that allows frame-by-frame run-time configuration management of a multi-processor turbo decoder without compromising decoding performance.
Convolutional coding at 50 Mbps for the Shuttle Ku-band return link
NASA Technical Reports Server (NTRS)
Batson, B. H.; Huth, G. K.
1976-01-01
Error correcting coding is required for the 50 Mbps data link from the Shuttle Orbiter through the Tracking and Data Relay Satellite System (TDRSS) to the ground because of severe power limitations. Convolutional coding has been chosen because the decoding algorithms (sequential and Viterbi) provide significant coding gains at the required bit error probability of 10^-6 and can be implemented at 50 Mbps with moderate hardware. While a 50 Mbps sequential decoder has been built, the highest data rate achieved for a Viterbi decoder is 10 Mbps. Thus, five multiplexed 10 Mbps Viterbi decoders must be used to provide a 50 Mbps data rate. This paper discusses the tradeoffs which were considered when selecting the multiplexed Viterbi decoder approach for this application.
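As an illustration of the Viterbi side of this trade-off, here is a minimal hard-decision Viterbi decoder for a rate-1/2 convolutional code; the constraint-length-3 (7, 5 octal) generators are a textbook example, not the constraint-length-7 code used on the Shuttle/TDRSS link.

```python
# Minimal sketch: hard-decision Viterbi decoding of a rate-1/2, constraint-length-3
# convolutional code with textbook generators (7, 5 octal).

G = (0b111, 0b101)   # generator polynomials
K = 3                # constraint length -> 2^(K-1) = 4 states

def encode_bit(state, bit):
    """One encoder step: shift the new bit in, emit two parity bits."""
    reg = (bit << (K - 1)) | state
    out = tuple(bin(reg & g).count("1") & 1 for g in G)
    return out, reg >> 1

def viterbi(received_pairs):
    """received_pairs: list of (bit, bit) channel outputs. Returns decoded bits."""
    n_states = 1 << (K - 1)
    INF = float("inf")
    metric = [0.0] + [INF] * (n_states - 1)      # encoder starts in the all-zero state
    paths = [[] for _ in range(n_states)]
    for r in received_pairs:
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for bit in (0, 1):
                out, ns = encode_bit(s, bit)
                m = metric[s] + sum(a != b for a, b in zip(out, r))  # Hamming metric
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [bit]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]

# Round trip: encode a few bits, then decode the noise-free output.
state, tx = 0, []
for b in [1, 0, 1, 1, 0, 0]:
    out, state = encode_bit(state, b)
    tx.append(out)
print(viterbi(tx))   # -> [1, 0, 1, 1, 0, 0]
```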
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Kasami, T.; Fujiwara, T.; Lin, S.
1986-01-01
In this paper, a concatenated coding scheme for error control in data communications is presented and analyzed. In this scheme, the inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. Probability of undetected error (or decoding error) of the proposed scheme is derived. An efficient method for computing this probability is presented. Throughput efficiency of the proposed error control scheme incorporated with a selective-repeat ARQ retransmission strategy is also analyzed. Three specific examples are presented. One of the examples is proposed for error control in the NASA Telecommand System.
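As a rough illustration of the throughput analysis described above, the sketch below computes selective-repeat ARQ throughput under a crude independent-bit-error assumption; the rates, block length, and residual error probability are illustrative, and the expression is not the one derived in the paper.

```python
# Minimal sketch: throughput efficiency of an inner/outer concatenated scheme with
# selective-repeat ARQ, assuming independent bit errors. All numbers are illustrative.

def block_accept_prob(n_bits, post_decoding_ber):
    """Probability that no retransmission is requested, modeled crudely as all
    n_bits of the outer-code block being correct after inner decoding."""
    return (1.0 - post_decoding_ber) ** n_bits

def sr_arq_throughput(code_rate, n_bits, post_decoding_ber):
    """Selective-repeat ARQ: each block needs on average 1/P_accept transmissions,
    so throughput = code_rate * P_accept."""
    return code_rate * block_accept_prob(n_bits, post_decoding_ber)

rate = 0.5 * (223 / 255)              # example: rate-1/2 inner code, (255, 223) outer code
print(sr_arq_throughput(rate, n_bits=255 * 8, post_decoding_ber=1e-6))
```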
Khorasani, Abed; Heydari Beni, Nargess; Shalchyan, Vahid; Daliri, Mohammad Reza
2016-10-21
Local field potential (LFP) signals recorded by intracortical microelectrodes implanted in primary motor cortex can be used as a highly informative input for decoding motor functions. Recent studies show that kinematic parameters such as position and velocity can be inferred from multiple LFP signals as precisely as from spiking activity; however, continuous decoding of force magnitude from LFP signals in freely moving animals has remained an open problem. Here, we trained three rats to press a force sensor to receive a drop of water as a reward. A 16-channel micro-wire array was implanted in the primary motor cortex of each trained rat, and the recorded LFP signals were used to decode the continuous values recorded by the force sensor. The average coefficient of correlation and coefficient of determination between decoded and actual force signals were r = 0.66 and R^2 = 0.42, respectively. We found that LFP power in the gamma frequency band (30-120 Hz) contributed most to the trained decoding model. This study suggests the feasibility of using a small number of LFP channels for continuous force decoding in freely moving animals, resembling BMI systems in real-life applications.
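A minimal sketch of one common way to set up such a decoder is shown below: gamma-band (30-120 Hz) LFP power regressed onto the force trace. The sampling rate, window length, and ridge-regression model are assumptions for illustration, not the authors' exact pipeline, and the data are synthetic placeholders.

```python
# Minimal sketch: decode a continuous force signal from gamma-band (30-120 Hz) LFP
# power with ridge regression. Sampling rate, window length and model are assumed.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import Ridge

FS = 1000                      # sampling rate (Hz), assumed
N_CH, N_SAMP = 16, 60 * FS     # 16-channel array, one minute of (synthetic) data

rng = np.random.default_rng(0)
lfp = rng.standard_normal((N_CH, N_SAMP))          # placeholder LFP
force = rng.standard_normal(N_SAMP)                # placeholder force-sensor trace

# Band-pass each channel in the gamma band and take the squared signal as power.
b, a = butter(4, [30, 120], btype="bandpass", fs=FS)
gamma_power = filtfilt(b, a, lfp, axis=1) ** 2

# Average power in non-overlapping 100 ms windows -> feature matrix (windows x channels).
WIN = FS // 10
n_win = N_SAMP // WIN
X = gamma_power[:, : n_win * WIN].reshape(N_CH, n_win, WIN).mean(axis=2).T
y = force[: n_win * WIN].reshape(n_win, WIN).mean(axis=1)

# Fit on the first half of the session, report correlation on the second half.
half = n_win // 2
model = Ridge(alpha=1.0).fit(X[:half], y[:half])
pred = model.predict(X[half:])
print("r =", np.corrcoef(pred, y[half:])[0, 1])
```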
Electrophysiological difference between mental state decoding and mental state reasoning.
Cao, Bihua; Li, Yiyuan; Li, Fuhong; Li, Hong
2012-06-29
Previous studies have explored the neural mechanism of Theory of Mind (ToM), but the neural correlates of its two components, mental state decoding and mental state reasoning, remain unclear. In the present study, participants were presented with various photographs, showing an actor looking at 1 of 2 objects, either with a happy or an unhappy expression. They were asked to either decode the emotion of the actor (mental state decoding task), predict which object would be chosen by the actor (mental state reasoning task), or judge at which object the actor was gazing (physical task), while scalp potentials were recorded. Results showed that (1) the reasoning task elicited an earlier N2 peak than the decoding task did over the prefrontal scalp sites; and (2) during the late positive component (240-440 ms), the reasoning task elicited a more positive deflection than the other two tasks did at the prefrontal scalp sites. In addition, neither the decoding task nor the reasoning task showed a left/right hemisphere difference. These findings imply that mental state reasoning differs from mental state decoding as early as 210 ms after stimulus onset, and that the prefrontal lobe is the neural basis of mental state reasoning. Copyright © 2012 Elsevier B.V. All rights reserved.
Reading skills of students with speech sound disorders at three stages of literacy development.
Skebo, Crysten M; Lewis, Barbara A; Freebairn, Lisa A; Tag, Jessica; Avrich Ciesla, Allison; Stein, Catherine M
2013-10-01
The relationship of phonological awareness, overall language, vocabulary, and nonlinguistic cognitive skills to decoding and reading comprehension was examined for students at 3 stages of literacy development (i.e., early elementary school, middle school, and high school). Students with histories of speech sound disorders (SSD) with and without language impairment (LI) were compared to students without histories of SSD or LI (typical language; TL). In a cross-sectional design, students ages 7;0 (years;months) to 17;9 completed tests that measured reading, language, and nonlinguistic cognitive skills. For the TL group, phonological awareness predicted decoding at early elementary school, and overall language predicted reading comprehension at early elementary school and both decoding and reading comprehension at middle school and high school. For the SSD-only group, vocabulary predicted both decoding and reading comprehension at early elementary school, and overall language predicted both decoding and reading comprehension at middle school and decoding at high school. For the SSD and LI group, overall language predicted decoding at all 3 literacy stages and reading comprehension at early elementary school and middle school, and vocabulary predicted reading comprehension at high school. Although similar skills contribute to reading across the age span, the relative importance of these skills changes with children's literacy stages.
Reading Skills of Students With Speech Sound Disorders at Three Stages of Literacy Development
Skebo, Crysten M.; Lewis, Barbara A.; Freebairn, Lisa A.; Tag, Jessica; Ciesla, Allison Avrich; Stein, Catherine M.
2015-01-01
Purpose The relationship between phonological awareness, overall language, vocabulary, and nonlinguistic cognitive skills to decoding and reading comprehension was examined for students at 3 stages of literacy development (i.e., early elementary school, middle school, and high school). Students with histories of speech sound disorders (SSD) with and without language impairment (LI) were compared to students without histories of SSD or LI (typical language; TL). Method In a cross-sectional design, students ages 7;0 (years; months) to 17;9 completed tests that measured reading, language, and nonlinguistic cognitive skills. Results For the TL group, phonological awareness predicted decoding at early elementary school, and overall language predicted reading comprehension at early elementary school and both decoding and reading comprehension at middle school and high school. For the SSD-only group, vocabulary predicted both decoding and reading comprehension at early elementary school, and overall language predicted both decoding and reading comprehension at middle school and decoding at high school. For the SSD and LI group, overall language predicted decoding at all 3 literacy stages and reading comprehension at early elementary school and middle school, and vocabulary predicted reading comprehension at high school. Conclusion Although similar skills contribute to reading across the age span, the relative importance of these skills changes with children’s literacy stages. PMID:23833280
Optimizations of a Hardware Decoder for Deep-Space Optical Communications
NASA Technical Reports Server (NTRS)
Cheng, Michael K.; Nakashima, Michael A.; Moision, Bruce E.; Hamkins, Jon
2007-01-01
The National Aeronautics and Space Administration has developed a capacity approaching modulation and coding scheme that comprises a serial concatenation of an inner accumulate pulse-position modulation (PPM) and an outer convolutional code [or serially concatenated PPM (SCPPM)] for deep-space optical communications. Decoding of this code uses the turbo principle. However, due to the nonbinary property of SCPPM, a straightforward application of classical turbo decoding is very inefficient. Here, we present various optimizations applicable in hardware implementation of the SCPPM decoder. More specifically, we feature a Super Gamma computation to efficiently handle parallel trellis edges, a pipeline-friendly 'maxstar top-2' circuit that reduces the max-only approximation penalty, a low-latency cyclic redundancy check circuit for window-based decoders, and a high-speed algorithmic polynomial interleaver that leads to memory savings. Using the featured optimizations, we implement a 6.72 megabits-per-second (Mbps) SCPPM decoder on a single field-programmable gate array (FPGA). Compared to the current data rate of 256 kilobits per second from Mars, the SCPPM coded scheme represents a throughput increase of more than twenty-six fold. Extension to a 50-Mbps decoder on a board with multiple FPGAs follows naturally. We show through hardware simulations that the SCPPM coded system can operate within 1 dB of the Shannon capacity at nominal operating conditions.
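For reference, the max-star operation whose approximation the 'maxstar top-2' circuit addresses is the Jacobian logarithm; a minimal software sketch of the exact operation and the max-only approximation follows (this is the standard definition, not the FPGA circuit itself).

```python
# Minimal sketch: the max-star operation used in log-domain turbo-style decoding,
# and the max-only approximation whose loss a "top-2" correction aims to reduce.
import math

def max_star(a, b):
    """Exact Jacobian logarithm: log(exp(a) + exp(b))."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_only(a, b):
    """Max-log approximation: drops the correction term."""
    return max(a, b)

def max_star_list(values):
    """Fold max-star over a list of branch metrics."""
    acc = values[0]
    for v in values[1:]:
        acc = max_star(acc, v)
    return acc

metrics = [-1.2, -0.9, -3.4, -0.8]
print(max_star_list(metrics), max(metrics))   # exact value vs. max-only approximation
```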
Word Decoding Development during Phonics Instruction in Children at Risk for Dyslexia.
Schaars, Moniek M H; Segers, Eliane; Verhoeven, Ludo
2017-05-01
In the present study, we examined the early word decoding development of 73 children at genetic risk of dyslexia and 73 matched controls. We conducted monthly curriculum-embedded word decoding measures during the first 5 months of phonics-based reading instruction, followed by standardized word decoding measures halfway through and at the end of first grade. In kindergarten, vocabulary, phonological awareness, lexical retrieval, and verbal and visual short-term memory were assessed. The results showed that the children at risk were less skilled in phonemic awareness in kindergarten. During the first 5 months of reading instruction, children at risk were less efficient in word decoding and the discrepancy increased over the months. In subsequent months, the discrepancy prevailed for simple words but increased for more complex words. Phonemic awareness and lexical retrieval predicted the reading development in children at risk and controls to the same extent. It is concluded that children at risk are behind their typical peers in word decoding development starting from the very beginning. Furthermore, it is concluded that the disadvantage increased during phonics instruction and that the same predictors underlie the development of word decoding in the two groups of children. Copyright © 2017 John Wiley & Sons, Ltd.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.
1986-01-01
High rate concatenated coding systems with trellis inner codes and Reed-Solomon (RS) outer codes for application in satellite communication systems are considered. Two types of inner codes are studied: high rate punctured binary convolutional codes which result in overall effective information rates between 1/2 and 1 bit per channel use; and bandwidth efficient signal space trellis codes which can achieve overall effective information rates greater than 1 bit per channel use. Channel capacity calculations with and without side information were performed for the concatenated coding system. Concatenated coding schemes are investigated. In Scheme 1, the inner code is decoded with the Viterbi algorithm and the outer RS code performs error-correction only (decoding without side information). In Scheme 2, the inner code is decoded with a modified Viterbi algorithm which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, while branch metrics are used to provide the reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. These two schemes are proposed for use on NASA satellite channels. Results indicate that high system reliability can be achieved with little or no bandwidth expansion.
DNA MemoChip: Long-Term and High Capacity Information Storage and Select Retrieval.
Stefano, George B; Wang, Fuzhou; Kream, Richard M
2018-02-26
Over the course of history, human beings have never stopped seeking effective methods for information storage. From rocks to paper, and through the past several decades of using computer disks, USB sticks, and on to the thin silicon "chips" and "cloud" storage of today, it would seem that we have reached an era of efficiency for managing innumerable and ever-expanding data. Astonishingly, when tracing this technological path, one realizes that our ancient methods of informational storage far outlast paper (10,000 vs. 1,000 years, respectively), let alone the computer-based memory devices that only last, on average, 5 to 25 years. During this time of fast-paced information generation, it becomes increasingly difficult for current storage methods to retain such massive amounts of data, and to maintain appropriate speeds with which to retrieve it, especially when in demand by a large number of users. Others have proposed that DNA-based information storage provides a way forward for information retention as a result of its temporal stability. It is now evident that DNA represents a potentially economical and sustainable mechanism for storing information, as demonstrated by its decoding from a 700,000 year-old horse genome. The fact that the human genome is present in a cell, containing also the varied mitochondrial genome, indicates DNA's great potential for large data storage in a 'smaller' space.
Method and apparatus for data decoding and processing
Hunter, Timothy M.; Levy, Arthur J.
1992-01-01
A system and technique is disclosed for automatically controlling the decoding and digitization of an analog tape. The system includes the use of a tape data format which includes a plurality of digital codes recorded on the analog tape in a predetermined proximity to a period of recorded analog data. The codes associated with each period of analog data include digital identification codes prior to the analog data, a start of data code coincident with the analog data recording, and an end of data code subsequent to the associated period of recorded analog data. The formatted tape is decoded in a processing and digitization system which includes an analog tape player coupled to a digitizer to transmit analog information from the recorded tape over at least one channel to the digitizer. At the same time, the tape player is coupled to a decoder and interface system which detects and decodes the digital codes on the tape corresponding to each period of recorded analog data and controls tape movement and digitizer initiation in response to preprogrammed modes. A host computer is also coupled to the decoder and interface system and the digitizer and programmed to initiate specific modes of data decoding through the decoder and interface system including the automatic compilation and storage of digital identification information and digitized data for the period of recorded analog data corresponding to the digital identification data, compilation and storage of selected digitized data representing periods of recorded analog data, and compilation of digital identification information related to each of the periods of recorded analog data.
Kao, Jonathan C; Nuyujukian, Paul; Ryu, Stephen I; Shenoy, Krishna V
2017-04-01
Communication neural prostheses aim to restore efficient communication to people with motor neurological injury or disease by decoding neural activity into control signals. These control signals are both analog (e.g., the velocity of a computer mouse) and discrete (e.g., clicking an icon with a computer mouse) in nature. Effective, high-performing, and intuitive-to-use communication prostheses should be capable of decoding both analog and discrete state variables seamlessly. However, to date, the highest-performing autonomous communication prostheses rely on precise analog decoding and typically do not incorporate high-performance discrete decoding. In this report, we incorporated a hidden Markov model (HMM) into an intracortical communication prosthesis to enable accurate and fast discrete state decoding in parallel with analog decoding. In closed-loop experiments with nonhuman primates implanted with multielectrode arrays, we demonstrate that incorporating an HMM into a neural prosthesis can increase state-of-the-art achieved bitrate by 13.9% and 4.2% in two monkeys. We found that the transition model of the HMM is critical to achieving this performance increase. Further, we found that using an HMM resulted in the highest achieved peak performance we have ever observed for these monkeys, achieving peak bitrates of 6.5, 5.7, and 4.7 bps in Monkeys J, R, and L, respectively. Finally, we found that this neural prosthesis was robustly controllable for the duration of entire experimental sessions. These results demonstrate that high-performance discrete decoding can be beneficially combined with analog decoding to achieve new state-of-the-art levels of performance.
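A minimal sketch of HMM filtering for a discrete state running alongside an analog decoder is shown below; the two-state transition matrix, the scalar Gaussian emission model, and all numbers are assumptions for illustration, not the fitted models from these experiments.

```python
# Minimal sketch: forward-filtering a two-state HMM ("move" vs. "click") over binned
# neural features, run in parallel with an analog decoder. Transition and emission
# parameters below are illustrative assumptions.
import numpy as np

A = np.array([[0.99, 0.01],     # sticky transitions: staying in a state is likely,
              [0.05, 0.95]])    # which is why the transition model matters
PI = np.array([0.5, 0.5])

MU = np.array([0.0, 2.0])       # mean feature value per state (assumed)
SIGMA = 1.0

def emission(x):
    """Gaussian likelihood of the (scalar) feature under each state."""
    return np.exp(-0.5 * ((x - MU) / SIGMA) ** 2)

def hmm_filter(features):
    """Return P(state | observations so far) for each time bin."""
    belief = PI.copy()
    out = []
    for x in features:
        belief = emission(x) * (A.T @ belief)   # predict with A, then update
        belief /= belief.sum()
        out.append(belief.copy())
    return np.array(out)

obs = [0.1, -0.2, 0.3, 1.8, 2.2, 1.9, 0.2]      # synthetic feature trace
posterior = hmm_filter(obs)
print((posterior[:, 1] > 0.5).astype(int))       # decoded discrete state per bin
```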
VLSI chip-set for data compression using the Rice algorithm
NASA Technical Reports Server (NTRS)
Venbrux, J.; Liu, N.
1990-01-01
A full custom VLSI implementation of a data compression encoder and decoder which implements the lossless Rice data compression algorithm is discussed in this paper. The encoder and decoder reside on single chips. The data rates are to be 5 and 10 Mega-samples-per-second for the decoder and encoder respectively.
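A minimal software sketch of Golomb-Rice coding, the core operation behind the Rice algorithm, is given below; the fixed parameter k and the string-of-bits representation are simplifications of the chip set's adaptive, multi-option design.

```python
# Minimal sketch: Golomb-Rice coding of non-negative residuals (assume k >= 1).
# The fixed parameter k and the string-of-bits output are simplifications; the
# actual chip set adaptively selects among coding options.

def rice_encode(value, k):
    """Unary quotient, a single 0 separator, then k low-order remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, "b").zfill(k)

def rice_decode(bits, k):
    """Inverse of rice_encode for one value; returns (value, bits_consumed)."""
    q = 0
    while bits[q] == "1":
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2)
    return (q << k) | r, q + 1 + k

for v in (0, 3, 9, 17):
    code = rice_encode(v, k=2)
    print(v, code, rice_decode(code, k=2)[0])
```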
Training Students to Decode Verbal and Nonverbal Cues: Effects on Confidence and Performance.
ERIC Educational Resources Information Center
Costanzo, Mark
1992-01-01
A study conducted with 105 university students investigated the effectiveness of using previous research findings as a means of teaching students how to interpret verbal and nonverbal behavior (decoding). Practice may be the critical feature for training in decoding. Research findings were successfully converted into educational techniques. (SLD)
Communication Encoding and Decoding in Children from Different Socioeconomic and Racial Groups.
ERIC Educational Resources Information Center
Quay, Lorene C.; And Others
Although lower socioeconomic status (SES) black children have been shown to be inferior to middle-SES white children in communication accuracy, whether the problem is in encoding (production), decoding (comprehension), or both is not clear. To evaluate encoding and decoding separately, tape recordings of picture descriptions were obtained from…
The Impact of Nonverbal Communication in Organizations: A Survey of Perceptions.
ERIC Educational Resources Information Center
Graham, Gerald H.; And Others
1991-01-01
Discusses a survey of 505 respondents from business organizations. Reports that self-described good decoders of nonverbal communication consider nonverbal communication more important than do other decoders. Notes that both men and women perceive women as both better decoders and encoders of nonverbal cues. Recommends paying more attention to…
Does Linguistic Comprehension Support the Decoding Skills of Struggling Readers?
ERIC Educational Resources Information Center
Blick, Michele; Nicholson, Tom; Chapman, James; Berman, Jeanette
2017-01-01
This study investigated the contribution of linguistic comprehension to the decoding skills of struggling readers. Participants were 36 children aged between eight and 12 years, all below average in decoding but differing in linguistic comprehension. The children read passages from the Neale Analysis of Reading Ability and their first 25 miscues…
Role of Gender and Linguistic Diversity in Word Decoding Development
ERIC Educational Resources Information Center
Verhoeven, Ludo; van Leeuwe, Jan
2011-01-01
The purpose of the present study was to investigate the role of gender and linguistic diversity in the growth of Dutch word decoding skills throughout elementary school for a representative sample of children living in the Netherlands. Following a longitudinal design, the children's decoding abilities for (1) regular CVC words, (2) complex…
ERIC Educational Resources Information Center
Padeliadu, Susana; Antoniou, Faye
2014-01-01
Experts widely consider decoding and fluency as the basis of reading comprehension, while at the same time consistently documenting problems in these areas as major characteristics of students with learning disabilities. However, scholars have developed most of the relevant research within phonologically deep languages, wherein decoding problems…
Cognitive Training and Reading Remediation
ERIC Educational Resources Information Center
Mahapatra, Shamita
2015-01-01
Reading difficulties are experienced by children either because they fail to decode the words and thus are unable to comprehend the text or simply fail to comprehend the text even if they are able to decode the words and read them out. Failure in word decoding results from a failure in phonological coding of written information, whereas, reading…
Validation of the Informal Decoding Inventory
ERIC Educational Resources Information Center
McKenna, Michael C.; Walpole, Sharon; Jang, Bong Gee
2017-01-01
This study investigated the reliability and validity of Part 1 of the Informal Decoding Inventory (IDI), a free diagnostic assessment used to plan Tier 2 intervention for first graders with decoding deficits. Part 1 addresses single-syllable words and consists of five subtests that progress in difficulty and that contain real word and pseudoword…
ERIC Educational Resources Information Center
Tingerthal, John Steven
2013-01-01
Using case study methodology and autoethnographic methods, this study examines a process of curricular development known as "Decoding the Disciplines" (Decoding) by documenting the experience of its application in a construction engineering mechanics course. Motivated by the call to integrate what is known about teaching and learning…
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Takeshita, Oscar Y.; Cabral, Hermano A.; He, Jiali; White, Gregory S.
1997-01-01
Turbo coding using iterative SOVA decoding and M-ary differentially coherent or non-coherent modulation can provide an effective coding modulation solution: (1) Energy efficient with relatively simple SOVA decoding and small packet lengths, depending on BEP required; (2) Low number of decoding iterations required; and (3) Robustness in fading with channel interleaving.
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
The Viterbi algorithm is indeed a very simple and efficient method of implementing maximum likelihood decoding. However, if we take advantage of the structural properties in a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis; only some special one-section trellises of relatively small state and branch complexities are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table which contains only the most likely codeword and its metric for a given received sequence r = (r_1, r_2, ..., r_n). This algorithm basically uses the divide-and-conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.
Multidimensional biochemical information processing of dynamical patterns
NASA Astrophysics Data System (ADS)
Hasegawa, Yoshihiko
2018-02-01
Cells receive signaling molecules by receptors and relay information via sensory networks so that they can respond properly depending on the type of signal. Recent studies have shown that cells can extract multidimensional information from dynamical concentration patterns of signaling molecules. We herein study how biochemical systems can process multidimensional information embedded in dynamical patterns. We model the decoding networks by linear response functions, and optimize the functions with the calculus of variations to maximize the mutual information between patterns and output. We find that, when the noise intensity is lower, decoders with different linear response functions, i.e., distinct decoders, can extract much information. However, when the noise intensity is higher, distinct decoders do not provide the maximum amount of information. This indicates that, when transmitting information by dynamical patterns, embedding information in multiple patterns is not optimal when the noise intensity is very large. Furthermore, we explore the biochemical implementations of these decoders using control theory and demonstrate that these decoders can be implemented biochemically through the modification of cascade-type networks, which are prevalent in actual signaling pathways.
Comparison of incoming dental school patients with and without disabilities.
Stiefel, D J; Truelove, E L; Martin, M D; Mandel, L S
1997-01-01
A survey of incoming dental school patients compared 64 adult patients (DECOD) and 73 patients without disability (ND), regarding past dental experience, current needs, and basis for selecting the school's clinics. The responses indicated that, for DECOD patients, clinic selection was based largely on Medicaid acceptance, staff experience, and inability of other dentists to manage their disability; for ND patients, selection was based on lower fee structure. Both groups expressed high treatment need, but the rate was lower for DECOD than for ND patients. More DECOD patients reported severe dental anxiety and adverse effects of dental problems on general health. Chart records revealed that clinical findings exceeded perceived need for both DECOD and ND patients. While both groups had high periodontal disease rates (91%), DECOD patients had significantly poorer oral hygiene and less restorative need than ND patients. The findings suggest differences between persons with disabilities and other patient groups in difficulty of access to dental services in the community, reasons for entering the dental school system, and in presenting treatment need and/or treatment planning.
Word and Person Effects on Decoding Accuracy: A New Look at an Old Question
Gilbert, Jennifer K.; Compton, Donald L.; Kearns, Devin M.
2011-01-01
The purpose of this study was to extend the literature on decoding by bringing together two lines of research, namely person and word factors that affect decoding, using a crossed random-effects model. The sample was comprised of 196 English-speaking grade 1 students. A researcher-developed pseudoword list was used as the primary outcome measure. Because grapheme-phoneme correspondence (GPC) knowledge was treated as person and word specific, we are able to conclude that it is neither necessary nor sufficient for a student to know all GPCs in a word before accurately decoding the word. And controlling for word-specific GPC knowledge, students with lower phonemic awareness and slower rapid naming skill have lower predicted probabilities of correct decoding than counterparts with superior skills. By assessing a person-by-word interaction, we found that students with lower phonemic awareness have more difficulty applying knowledge of complex vowel graphemes compared to complex consonant graphemes when decoding unfamiliar words. Implications of the methodology and results are discussed in light of future research. PMID:21743750
Longitudinal Stability and Predictors of Poor Oral Comprehenders and Poor Decoders
Elwér, Åsa; Keenan, Janice M.; Olson, Richard K.; Byrne, Brian; Samuelsson, Stefan
2012-01-01
Two groups of 4th grade children were selected from a population sample (N= 926) to either be Poor Oral Comprehenders (poor oral comprehension but normal word decoding), or Poor Decoders (poor decoding but normal oral comprehension). By examining both groups in the same study with varied cognitive and literacy predictors, and examining them both retrospectively and prospectively, we could assess how distinctive and stable the predictors of each deficit are. Predictors were assessed retrospectively at preschool, at the end of kindergarten, 1st, and 2nd grades. Group effects were significant at all test occasions, including those for preschool vocabulary (worse in poor oral comprehenders) and rapid naming (RAN) (worse in poor decoders). Preschool RAN and Vocabulary prospectively predicted grade 4 group membership (77–79% correct classification) within the selected samples. Reselection in preschool of at-risk poor decoder and poor oral comprehender subgroups based on these variables led to significant but relatively weak prediction of subtype membership at grade 4. Implications of the predictive stability of our results for identification and intervention of these important subgroups are discussed. PMID:23528975
Robust pattern decoding in shape-coded structured light
NASA Astrophysics Data System (ADS)
Tang, Suming; Zhang, Xu; Song, Zhan; Song, Lifang; Zeng, Hai
2017-09-01
Decoding is a challenging and complex problem in a coded structured light system. In this paper, a robust pattern decoding method is proposed for shape-coded structured light in which the pattern is designed as a grid with embedded geometrical shapes. In our decoding method, advances are made in three steps. First, a multi-template feature detection algorithm is introduced to detect the feature points at the intersections of each pair of orthogonal grid lines. Second, pattern element identification is modelled as a supervised classification problem and a deep neural network is applied for the accurate classification of pattern elements. Before that, a training dataset is established, which contains a large set of pattern elements with varied blurring and distortion. Third, an error correction mechanism based on epipolar, coplanarity and topological constraints is presented to reduce false matches. In the experiments, several complex objects, including a human hand, are chosen to test the accuracy and robustness of the proposed method. The experimental results show that our decoding method not only has high decoding accuracy but is also strongly robust to surface color and complex textures.
Zhang, Wuhong; Chen, Lixiang
2016-06-15
Digital spiral imaging has been demonstrated as an effective optical tool to encode optical information and retrieve topographic information of an object. Here we develop a conceptually new and concise scheme for optical image encoding and decoding toward free-space digital spiral imaging. We experimentally demonstrate that the optical lattices with ℓ=±50 orbital angular momentum superpositions and a clover image with nearly 200 Laguerre-Gaussian (LG) modes can be well encoded and successfully decoded. It is found that an image encoded/decoded with a two-index LG spectrum (considering both azimuthal and radial indices, ℓ and p) possesses much higher fidelity than that with a one-index LG spectrum (only considering the ℓ index). Our work provides an alternative tool for the image encoding/decoding scheme toward free-space optical communications.
Orientation decoding depends on maps, not columns
Freeman, Jeremy; Brouwer, Gijs Joost; Heeger, David J.; Merriam, Elisha P.
2011-01-01
The representation of orientation in primary visual cortex (V1) has been examined at a fine spatial scale corresponding to the columnar architecture. We present functional magnetic resonance imaging (fMRI) measurements providing evidence for a topographic map of orientation preference in human V1 at a much coarser scale, in register with the angular-position component of the retinotopic map of V1. This coarse-scale orientation map provides a parsimonious explanation for why multivariate pattern analysis methods succeed in decoding stimulus orientation from fMRI measurements, challenging the widely-held assumption that decoding results reflect sampling of spatial irregularities in the fine-scale columnar architecture. Decoding stimulus attributes and cognitive states from fMRI measurements has proven useful for a number of applications, but our results demonstrate that the interpretation cannot assume decoding reflects or exploits columnar organization. PMID:21451017
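A minimal sketch of the kind of multivariate pattern analysis referred to here, a cross-validated linear classifier predicting stimulus orientation from multi-voxel patterns, is shown below; the synthetic data and the choice of logistic regression are illustrative assumptions.

```python
# Minimal sketch of MVPA-style orientation decoding: a cross-validated linear
# classifier predicting stimulus orientation from multi-voxel response patterns.
# Synthetic data and the logistic-regression classifier are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 100
orientation = rng.integers(0, 2, n_trials)          # 0 = horizontal, 1 = vertical

# A coarse-scale "map": each voxel carries a small, spatially smooth orientation bias.
voxel_bias = np.sin(np.linspace(0, 2 * np.pi, n_voxels))
patterns = rng.standard_normal((n_trials, n_voxels)) \
    + 0.3 * np.outer(2 * orientation - 1, voxel_bias)

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, patterns, orientation, cv=5)
print("decoding accuracy:", acc.mean())
```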
Decoder calibration with ultra small current sample set for intracortical brain-machine interface
NASA Astrophysics Data System (ADS)
Zhang, Peng; Ma, Xuan; Chen, Luyao; Zhou, Jin; Wang, Changyong; Li, Wei; He, Jiping
2018-04-01
Objective. Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability for paralyzed patients. However, frequent recalibration is required for consistency and reliability, and every recalibration normally requires a relatively large sample of current data. The aim of this study is to develop an effective decoder calibration method that can achieve good performance while minimizing recalibration time. Approach. Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on a movement paradigm and a sensory paradigm. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra-small current sample set by taking advantage of large historical data, and the decoding performance was compared with three other calibration methods for evaluation. Main results. The PDA method closed the gap between historical and current data effectively, and made it possible to take advantage of large historical data for decoder recalibration in current data decoding. Using only an ultra-small current sample set (five trials of each category), the decoder calibrated with the PDA method achieved much better and more robust performance in all sessions than the other three calibration methods in both monkeys. Significance. (1) This study brings transfer learning into iBMI decoder calibration for the first time. (2) Unlike most transfer learning studies, the target data in this study were an ultra-small sample set and were transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to be effective in reducing recalibration time for both the movement paradigm and the sensory paradigm, indicating a viable generalization. By reducing the demand for large current training data, this new method may facilitate the application of intracortical brain-machine interfaces in clinical practice.
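A rough sketch of one way such a PCA-based alignment between large historical data and an ultra-small current sample set might look is given below; the abstract describes the PDA method only at a high level, so the project-and-recenter steps are assumptions, not the authors' algorithm.

```python
# Rough sketch of a PCA-based alignment between large historical data and an
# ultra-small current sample set. The projection and re-centering steps are
# assumptions for illustration, not the published PDA algorithm.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

def calibrate_with_history(X_hist, y_hist, X_curr, y_curr, n_components=10):
    """Fit PCA on the historical features, shift the few current samples toward the
    historical distribution in that subspace, then train one decoder on both."""
    pca = PCA(n_components=n_components).fit(X_hist)
    Z_hist, Z_curr = pca.transform(X_hist), pca.transform(X_curr)
    offset = Z_hist.mean(axis=0) - Z_curr.mean(axis=0)        # crude re-centering
    Z = np.vstack([Z_hist, Z_curr + offset])
    model = Ridge(alpha=1.0).fit(Z, np.concatenate([y_hist, y_curr]))
    return lambda X_new: model.predict(pca.transform(X_new) + offset)

# Synthetic stand-ins: many historical trials, only five current trials.
rng = np.random.default_rng(1)
X_hist, y_hist = rng.standard_normal((500, 64)), rng.standard_normal(500)
X_curr, y_curr = rng.standard_normal((5, 64)), rng.standard_normal(5)
decode = calibrate_with_history(X_hist, y_hist, X_curr, y_curr)
print(decode(X_curr))
```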
Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields.
Yildiz, Izzet B; Mesgarani, Nima; Deneve, Sophie
2016-12-07
A primary goal of auditory neuroscience is to identify the sound features extracted and represented by auditory neurons. Linear encoding models, which describe neural responses as a function of the stimulus, have been primarily used for this purpose. Here, we provide theoretical arguments and experimental evidence in support of an alternative approach, based on decoding the stimulus from the neural response. We used a Bayesian normative approach to predict the responses of neurons detecting relevant auditory features, despite ambiguities and noise. We compared the model predictions to recordings from the primary auditory cortex of ferrets and found that: (1) the decoding filters of auditory neurons resemble the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynamics of responses better than a linear encoding model of similar complexity; and (3) the decoding model accounts for the accuracy with which the stimulus is represented in neural activity, whereas linear encoding model performs very poorly. Most importantly, our model predicts that neuronal responses are fundamentally shaped by "explaining away," a divisive competition between alternative interpretations of the auditory scene. Neural responses in the auditory cortex are dynamic, nonlinear, and hard to predict. Traditionally, encoding models have been used to describe neural responses as a function of the stimulus. However, in addition to external stimulation, neural activity is strongly modulated by the responses of other neurons in the network. We hypothesized that auditory neurons aim to collectively decode their stimulus. In particular, a stimulus feature that is decoded (or explained away) by one neuron is not explained by another. We demonstrated that this novel Bayesian decoding model is better at capturing the dynamic responses of cortical neurons in ferrets. Whereas the linear encoding model poorly reflects selectivity of neurons, the decoding model can account for the strong nonlinearities observed in neural data. Copyright © 2016 Yildiz et al.
Neural network decoder for quantum error correcting codes
NASA Astrophysics Data System (ADS)
Krastanov, Stefan; Jiang, Liang
Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.
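A minimal PyTorch sketch of this kind of recurrent decoder is shown below: it maps a sequence of repeated stabilizer-measurement rounds to per-qubit correction probabilities; the layer sizes and the GRU cell are assumptions, and training is omitted.

```python
# Minimal sketch of a recurrent neural-network decoder: it reads a sequence of
# repeated stabilizer-measurement rounds and outputs per-qubit correction
# probabilities. Layer sizes and the GRU cell are assumptions; training is omitted.
import torch
import torch.nn as nn

class SyndromeRNNDecoder(nn.Module):
    def __init__(self, n_stabilizers, n_qubits, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_stabilizers, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_qubits)

    def forward(self, syndromes):
        # syndromes: (batch, rounds, n_stabilizers) tensor of 0/1 measurement outcomes
        _, h_last = self.rnn(syndromes.float())
        return torch.sigmoid(self.head(h_last[-1]))   # P(flip) for each data qubit

decoder = SyndromeRNNDecoder(n_stabilizers=8, n_qubits=9)
fake_rounds = torch.randint(0, 2, (4, 5, 8))           # 4 shots, 5 measurement rounds
print(decoder(fake_rounds).shape)                      # -> torch.Size([4, 9])
```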
New Syndrome Decoding Techniques for the (n, K) Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
This paper presents a new syndrome decoding algorithm for the (n,k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3,1) CC.
Simplified Syndrome Decoding of (n, 1) Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
A new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different from and simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck is presented. The new algorithm uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). This set of Diophantine solutions is a coset of the CC space. A recursive or Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D) in this error coset. An example illustrating the new decoding algorithm is given for the binary nonsystematic (2,1) CC.
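The syndrome former underlying both papers can be illustrated with a small sketch: for a rate-1/2 feed-forward code with generators g1(D) and g2(D), every codeword satisfies c1 g2 + c2 g1 = 0 (mod 2), so s(D) = r1 g2 + r2 g1 depends only on the error pattern; the (7, 5) octal generators below are a textbook example, not the codes treated in the papers.

```python
# Small sketch of the syndrome former behind these decoders. For a rate-1/2
# feed-forward code with generators g1(D), g2(D), every codeword pair satisfies
# c1*g2 + c2*g1 = 0 (mod 2), so s(D) = r1*g2 + r2*g1 depends only on the errors.

def poly_mul_gf2(a, b):
    """Multiply two GF(2) polynomials given as integer bit masks (LSB = D^0)."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

def syndrome(r1, r2, g1=0b111, g2=0b101):
    """Syndrome polynomial of the two received streams, as a bit mask."""
    return poly_mul_gf2(r1, g2) ^ poly_mul_gf2(r2, g1)

# A valid codeword pair gives a zero syndrome; flipping one bit does not.
u = 0b1011                      # information polynomial
c1, c2 = poly_mul_gf2(u, 0b111), poly_mul_gf2(u, 0b101)
print(syndrome(c1, c2))         # -> 0
print(syndrome(c1 ^ 0b100, c2)) # -> nonzero (error in stream 1)
```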
An embedded controller for a 7-degree of freedom prosthetic arm.
Tenore, Francesco; Armiger, Robert S; Vogelstein, R Jacob; Wenstrand, Douglas S; Harshbarger, Stuart D; Englehart, Kevin
2008-01-01
We present results from an embedded real-time hardware system capable of decoding surface myoelectric signals (sMES) to control a seven-degree-of-freedom upper limb prosthesis. This is one of the first hardware implementations of sMES decoding algorithms and the most advanced controller to date. We compare decoding results from the device to simulation results from a real-time PC-based operating system. Performance of both systems is shown to be similar, with decoding accuracy greater than 90% for the floating point software simulation and 80% for fixed point hardware and software implementations.
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Lin, S.
1985-01-01
A concatenated coding scheme for error control in data communications was analyzed. The inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. Probability of undetected error of the proposed scheme is derived. An efficient method for computing this probability is presented. Throughput efficiency of the proposed error control scheme incorporated with a selective-repeat ARQ retransmission strategy is analyzed.
De Feo, Vito; Boi, Fabio; Safaai, Houman; Onken, Arno; Panzeri, Stefano; Vato, Alessandro
2017-01-01
Brain-machine interfaces (BMIs) promise to improve the quality of life of patients suffering from sensory and motor disabilities by creating a direct communication channel between the brain and the external world. Yet, their performance is currently limited by the relatively small amount of information that can be decoded from neural activity recorded from the brain. We have recently proposed that such decoding performance may be improved when using state-dependent decoding algorithms that predict and discount the large component of the trial-to-trial variability of neural activity which is due to the dependence of neural responses on the network's current internal state. Here we tested this idea by using a bidirectional BMI to investigate the gain in performance arising from using a state-dependent decoding algorithm. This BMI, implemented in anesthetized rats, controlled the movement of a dynamical system using neural activity decoded from motor cortex and fed back to the brain the dynamical system's position by electrically microstimulating somatosensory cortex. We found that using state-dependent algorithms that tracked the dynamics of ongoing activity led to an increase in the amount of information extracted from neural activity by 22%, with a consequent increase in all of the indices measuring the BMI's performance in controlling the dynamical system. This suggests that state-dependent decoding algorithms may be used to enhance BMIs at moderate computational cost.
Couvillon, Margaret J; Riddell Pearce, Fiona C; Harris-Jones, Elisabeth L; Kuepfer, Amanda M; Mackenzie-Smith, Samantha J; Rozario, Laura A; Schürch, Roger; Ratnieks, Francis L W
2012-05-15
Noise is universal in information transfer. In animal communication, this presents a challenge not only for intended signal receivers, but also to biologists studying the system. In honey bees, a forager communicates to nestmates the location of an important resource via the waggle dance. This vibrational signal is composed of repeating units (waggle runs) that are then averaged by nestmates to derive a single vector. Manual dance decoding is a powerful tool for studying bee foraging ecology, although the process is time-consuming: a forager may repeat the waggle run from 1 to more than 100 times within a dance. It is impractical to decode all of these to obtain the vector; however, intra-dance waggle runs vary, so it is important to decode enough to obtain a good average. Here we examine the variation among waggle runs made by foraging bees to devise a method of dance decoding. The first and last waggle runs within a dance are significantly more variable than the middle runs. There was no trend in variation for the middle waggle runs. We recommend that any four consecutive waggle runs, not including the first and last runs, may be decoded, and we show that this methodology is suitable by demonstrating the goodness-of-fit between the decoded vectors from our subsamples and the vectors from the entire dances.
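A small sketch of the recommended decoding rule, averaging four consecutive middle waggle runs with a circular mean for the angle, is given below; the run values are illustrative.

```python
# Small sketch of the recommended decoding rule: average any four consecutive waggle
# runs, excluding the first and last run of the dance. Angles are averaged as
# circular quantities; durations (which map to distance) are averaged directly.
import math

def decode_dance(runs):
    """runs: list of (angle_deg, duration_s) per waggle run, in dance order."""
    middle = runs[1:-1]                     # drop the more variable first/last runs
    sample = middle[:4]                     # any four consecutive middle runs
    sin_sum = sum(math.sin(math.radians(a)) for a, _ in sample)
    cos_sum = sum(math.cos(math.radians(a)) for a, _ in sample)
    mean_angle = math.degrees(math.atan2(sin_sum, cos_sum)) % 360
    mean_duration = sum(d for _, d in sample) / len(sample)
    return mean_angle, mean_duration

runs = [(12.0, 1.9), (8.0, 2.1), (10.0, 2.0), (9.0, 2.2), (11.0, 2.1), (15.0, 1.8)]
print(decode_dance(runs))                   # -> (mean angle in degrees, mean duration in s)
```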
Efficient Decoding With Steady-State Kalman Filter in Neural Interface Systems
Malik, Wasim Q.; Truccolo, Wilson; Brown, Emery N.; Hochberg, Leigh R.
2011-01-01
The Kalman filter is commonly used in neural interface systems to decode neural activity and estimate the desired movement kinematics. We analyze a low-complexity Kalman filter implementation in which the filter gain is approximated by its steady-state form, computed offline before real-time decoding commences. We evaluate its performance using human motor cortical spike train data obtained from an intracortical recording array as part of an ongoing pilot clinical trial. We demonstrate that the standard Kalman filter gain converges to within 95% of the steady-state filter gain in 1.5 ± 0.5 s (mean ± s.d.). The difference in the intended movement velocity decoded by the two filters vanishes within 5 s, with a correlation coefficient of 0.99 between the two decoded velocities over the session length. We also find that the steady-state Kalman filter reduces the computational load (algorithm execution time) for decoding the firing rates of 25 ± 3 single units by a factor of 7.0 ± 0.9. We expect that the gain in computational efficiency will be much higher in systems with larger neural ensembles. The steady-state filter can thus provide substantial runtime efficiency at little cost in terms of estimation accuracy. This far more efficient neural decoding approach will facilitate the practical implementation of future large-dimensional, multisignal neural interface systems. PMID:21078582
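A minimal sketch of the steady-state approach is shown below: the covariance recursion is iterated offline until the gain converges, and decoding then uses that fixed gain; the toy constant-velocity state model and identity observation matrix are illustrative stand-ins for the paper's kinematic and tuning models.

```python
# Minimal sketch: compute a steady-state Kalman gain offline by iterating the
# covariance recursion until it converges, then decode with that fixed gain.
# The toy constant-velocity model and identity observation matrix are illustrative.
import numpy as np

A = np.array([[1.0, 1.0],       # state transition: position integrates velocity
              [0.0, 1.0]])
H = np.eye(2)                   # observation model (identity here; a real decoder
                                # maps kinematics to neural firing-rate features)
Q = 0.01 * np.eye(2)            # process noise covariance
R = 0.5 * np.eye(2)             # observation noise covariance

def steady_state_gain(A, H, Q, R, iters=500):
    """Iterate predict/update on the covariance until the gain settles."""
    P = np.eye(A.shape[0])
    K = None
    for _ in range(iters):
        P = A @ P @ A.T + Q                       # predict covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        P = (np.eye(A.shape[0]) - K @ H) @ P      # update covariance
    return K

K_SS = steady_state_gain(A, H, Q, R)

def decode_step(x, z, K=K_SS):
    """One constant-gain filter update: predict with A, correct with fixed K."""
    x_pred = A @ x
    return x_pred + K @ (z - H @ x_pred)

x = np.zeros(2)
for z in np.array([[0.1, 0.0], [0.3, 0.2], [0.6, 0.3]]):
    x = decode_step(x, z)
print(x)
```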
A Longitudinal Analysis of English Language Learners' Word Decoding and Reading Comprehension
ERIC Educational Resources Information Center
Nakamoto, Jonathan; Lindsey, Kim A.; Manis, Franklin R.
2007-01-01
This longitudinal investigation examined word decoding and reading comprehension measures from first grade through sixth grade for a sample of Spanish-speaking English language learners (ELLs). The sample included 261 children (average age of 7.2 years; 120 boys; 141 girls) at the initial data collection in first grade. The ELLs' word decoding and…
Influence of First Language Orthographic Experience on Second Language Decoding and Word Learning
ERIC Educational Resources Information Center
Hamada, Megumi; Koda, Keiko
2008-01-01
This study examined the influence of first language (L1) orthographic experiences on decoding and semantic information retention of new words in a second language (L2). Hypotheses were that congruity in L1 and L2 orthographic experiences determines L2 decoding efficiency, which, in turn, affects semantic information encoding and retention.…
The Role of Phonological Decoding in Second Language Word-Meaning Inference
ERIC Educational Resources Information Center
Hamada, Megumi; Koda, Keiko
2010-01-01
Two hypotheses were tested: Similarity between first language (L1) and second language (L2) orthographic processing facilitates L2-decoding efficiency; and L2-decoding efficiency contributes to word-meaning inference to different degrees among L2 learners with diverse L1 orthographic backgrounds. The participants were college-level English as a…
ERIC Educational Resources Information Center
Soltani, Amanallah; Roslan, Samsilah
2013-01-01
Reading decoding ability is a fundamental skill to acquire word-specific orthographic information necessary for skilled reading. Decoding ability and its underlying phonological processing skills have been heavily investigated typically among developing students. However, the issue has rarely been noticed among students with intellectual…
Decoding Information in the Human Hippocampus: A User's Guide
ERIC Educational Resources Information Center
Chadwick, Martin J.; Bonnici, Heidi M.; Maguire, Eleanor A.
2012-01-01
Multi-voxel pattern analysis (MVPA), or "decoding", of fMRI activity has gained popularity in the neuroimaging community in recent years. MVPA differs from standard fMRI analyses by focusing on whether information relating to specific stimuli is encoded in patterns of activity across multiple voxels. If a stimulus can be predicted, or decoded,…
ERIC Educational Resources Information Center
Atkinson, Michael L.; Allen, Vernon L.
This experiment was designed to investigate the generality-specificity of the accuracy of both encoders and decoders across different types of nonverbal behavior. It was expected that encoders and decoders would exhibit generality in their behavior--i.e., the same level of accuracy--on the dimension of behavior content…
ERIC Educational Resources Information Center
Pritchard, Stephen C.; Coltheart, Max; Marinus, Eva; Castles, Anne
2016-01-01
Phonological decoding is central to learning to read, and deficits in its acquisition have been linked to reading disorders such as dyslexia. Understanding how this skill is acquired is therefore important for characterising reading difficulties. Decoding can be taught explicitly, or implicitly learned during instruction on whole word spellings…
ERIC Educational Resources Information Center
Hamilton, Stephen; Freed, Erin; Long, Debra L.
2016-01-01
The aim of this study was to examine predictions derived from a proposal about the relation between word-decoding skill and working memory capacity, called verbal efficiency theory. The theory states that poor word representations and slow decoding processes consume resources in working memory that would otherwise be used to execute high-level…
ERIC Educational Resources Information Center
Taylor, Maravene Beth
The author reviews literature on fluency of decoding, sentence awareness or comprehension, and comprehension of larger than sentence texts, in relation to reading comprehension problems in learning disabled children. Initial sections look at the relation of decoding and fluency skills to skilled reading and differences between good and poor…
ERIC Educational Resources Information Center
Matthews, Allison Jane; Martin, Frances Heritage
2009-01-01
Previous research suggests a relationship between spatial attention and phonological decoding in developmental dyslexia. The aim of this study was to examine differences between good and poor phonological decoders in the allocation of spatial attention to global and local levels of hierarchical stimuli. A further aim was to investigate the…
LDPC Codes--Structural Analysis and Decoding Techniques
ERIC Educational Resources Information Center
Zhang, Xiaojie
2012-01-01
Low-density parity-check (LDPC) codes have been the focus of much research over the past decade thanks to their near Shannon limit performance and to their efficient message-passing (MP) decoding algorithms. However, the error floor phenomenon observed in MP decoding, which manifests itself as an abrupt change in the slope of the error-rate curve,…
IQ Predicts Word Decoding Skills in Populations with Intellectual Disabilities
ERIC Educational Resources Information Center
Levy, Yonata
2011-01-01
This is a study of word decoding in adolescents with Down syndrome and in adolescents with Intellectual Deficits of unknown etiology. It was designed as a replication of studies of word decoding in English speaking and in Hebrew speaking adolescents with Williams syndrome ([0230] and [0235]). Participants' IQ was matched to IQ in the groups with…
Early Word Decoding Ability as a Longitudinal Predictor of Academic Performance
ERIC Educational Resources Information Center
Nordström, Thomas; Jacobson, Christer; Söderberg, Pernilla
2016-01-01
This study, using a longitudinal design with a Swedish cohort of young readers, investigates if children's early word decoding ability in second grade can predict later academic performance. In an effort to estimate the unique effect of early word decoding (grade 2) with academic performance (grade 9), gender and non-verbal cognitive ability were…
Reading Disabilities and PASS Reading Enhancement Programme
ERIC Educational Resources Information Center
Mahapatra, Shamita
2016-01-01
Children experience difficulties in reading either because they fail to decode the words and thus are unable to comprehend the text or simply fail to comprehend the text even if they are able to decode the words and read them out. Failure in word decoding results from a failure in phonological coding of written information, whereas reading…
ERIC Educational Resources Information Center
Ayala, Sandra M.
2010-01-01
Ten first grade students, participating in a Tier II response to intervention (RTI) reading program received an intervention of video self modeling to improve decoding skills and sight word recognition. The students were video recorded blending and segmenting decodable words, and reading sight words taken directly from their curriculum…
The Three Stages of Coding and Decoding in Listening Courses of College Japanese Specialty
ERIC Educational Resources Information Center
Yang, Fang
2008-01-01
Research papers on the teaching of listening published in recent years focus mainly on the theoretical implications of decoding for the training of listening comprehension ability. Although many of these papers apply the bottom-up approach, the top-down approach, and information-processing theory to illustrate decoding and to emphasize the…
Kia, Seyed Mostafa; Vega Pons, Sandro; Weisz, Nathan; Passerini, Andrea
2017-01-01
Brain decoding is a popular multivariate approach for hypothesis testing in neuroimaging. Linear classifiers are widely employed in the brain decoding paradigm to discriminate among experimental conditions. Then, the derived linear weights are visualized in the form of multivariate brain maps to further study spatio-temporal patterns of underlying neural activities. It is well known that the brain maps derived from weights of linear classifiers are hard to interpret because of high correlations between predictors, low signal to noise ratios, and the high dimensionality of neuroimaging data. Therefore, improving the interpretability of brain decoding approaches is of primary interest in many neuroimaging studies. Despite extensive studies of this type, at present, there is no formal definition for interpretability of multivariate brain maps. As a consequence, there is no quantitative measure for evaluating the interpretability of different brain decoding methods. In this paper, first, we present a theoretical definition of interpretability in brain decoding; we show that the interpretability of multivariate brain maps can be decomposed into their reproducibility and representativeness. Second, as an application of the proposed definition, we exemplify a heuristic for approximating the interpretability in multivariate analysis of evoked magnetoencephalography (MEG) responses. Third, we propose to combine the approximated interpretability and the generalization performance of the brain decoding into a new multi-objective criterion for model selection. Our results, for the simulated and real MEG data, show that optimizing the hyper-parameters of the regularized linear classifier based on the proposed criterion results in more informative multivariate brain maps. More importantly, the presented definition provides the theoretical background for quantitative evaluation of interpretability, and hence, facilitates the development of more effective brain decoding algorithms in the future. PMID:28167896
Reading abilities in school-aged preterm children: a review and meta-analysis
Kovachy, Vanessa N; Adams, Jenna N; Tamaresis, John S; Feldman, Heidi M
2014-01-01
AIM Children born preterm (at ≤32wk) are at risk of developing deficits in reading ability. This meta-analysis aims to determine whether or not school-aged preterm children perform worse than those born at term in single-word reading (decoding) and reading comprehension. METHOD Electronic databases were searched for studies published between 2000 and 2013, which assessed decoding or reading comprehension performance in English-speaking preterm and term-born children aged between 6 years and 13 years, and born after 1990. Standardized mean differences in decoding and reading comprehension scores were calculated. RESULTS Nine studies were suitable for analysis of decoding, and five for analysis of reading comprehension. Random-effects meta-analyses showed that children born preterm had significantly lower scores (reported as Cohen’s d values [d] with 95% confidence intervals [CIs]) than those born at term for decoding (d=−0.42, 95% CI −0.57 to −0.27, p<0.001) and reading comprehension (d=−0.57, 95% CI −0.68 to −0.46, p<0.001). Meta-regressions showed that lower gestational age was associated with larger differences in decoding (Q[1]=5.92, p=0.02) and reading comprehension (Q[1]=4.69, p=0.03) between preterm and term groups. Differences between groups increased with age for reading comprehension (Q[1]=5.10, p=0.02) and, although not significant, there was also a trend for increased group differences for decoding (Q[1]=3.44, p=0.06). INTERPRETATION Preterm children perform worse than peers born at term on decoding and reading comprehension. These findings suggest that preterm children should receive more ongoing monitoring for reading difficulties throughout their education. PMID:25516105
Decoding a wide range of hand configurations from macaque motor, premotor, and parietal cortices.
Schaffelhofer, Stefan; Agudelo-Toro, Andres; Scherberger, Hansjörg
2015-01-21
Despite recent advances in decoding cortical activity for motor control, the development of hand prosthetics remains a major challenge. To reduce the complexity of such applications, higher cortical areas that also represent motor plans rather than just the individual movements might be advantageous. We investigated the decoding of many grip types using spiking activity from the anterior intraparietal (AIP), ventral premotor (F5), and primary motor (M1) cortices. Two rhesus monkeys were trained to grasp 50 objects in a delayed task while hand kinematics and spiking activity from six implanted electrode arrays (total of 192 electrodes) were recorded. Offline, we determined 20 grip types from the kinematic data and decoded these hand configurations and the grasped objects with a simple Bayesian classifier. When decoding from AIP, F5, and M1 combined, the mean accuracy was 50% (using planning activity) and 62% (during motor execution) for predicting the 50 objects (chance level, 2%) and substantially larger when predicting the 20 grip types (planning, 74%; execution, 86%; chance level, 5%). When decoding from individual arrays, objects and grip types could be predicted well during movement planning from AIP (medial array) and F5 (lateral array), whereas M1 predictions were poor. In contrast, predictions during movement execution were best from M1, whereas F5 performed only slightly worse. These results demonstrate for the first time that a large number of grip types can be decoded from higher cortical areas during movement preparation and execution, which could be relevant for future neuroprosthetic devices that decode motor plans. Copyright © 2015 the authors 0270-6474/15/351068-14$15.00/0.
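As a rough illustration of the "simple Bayesian classifier" mentioned above, the sketch below trains a Gaussian naive Bayes model on trial-by-trial firing-rate vectors. The class count, unit count and Poisson rates are synthetic stand-ins for the AIP/F5/M1 recordings, so the printed accuracy has no relation to the reported results.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_grips, n_units, trials_per_grip = 20, 192, 30

# Synthetic firing rates: each grip type gets its own mean rate per unit.
means = rng.uniform(5, 40, size=(n_grips, n_units))
X = np.vstack([rng.poisson(m, size=(trials_per_grip, n_units)) for m in means])
y = np.repeat(np.arange(n_grips), trials_per_grip)

clf = GaussianNB()
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f} (chance ~ {1 / n_grips:.2f})")
```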
NASA Astrophysics Data System (ADS)
Stavisky, Sergey D.; Kao, Jonathan C.; Nuyujukian, Paul; Ryu, Stephen I.; Shenoy, Krishna V.
2015-06-01
Objective. Brain-machine interfaces (BMIs) seek to enable people with movement disabilities to directly control prosthetic systems with their neural activity. Current high performance BMIs are driven by action potentials (spikes), but access to this signal often diminishes as sensors degrade over time. Decoding local field potentials (LFPs) as an alternative or complementary BMI control signal may improve performance when there is a paucity of spike signals. To date only a small handful of LFP decoding methods have been tested online; there remains a need to test different LFP decoding approaches and improve LFP-driven performance. There has also not been a reported demonstration of a hybrid BMI that decodes kinematics from both LFP and spikes. Here we first evaluate a BMI driven by the local motor potential (LMP), a low-pass filtered time-domain LFP amplitude feature. We then combine decoding of both LMP and spikes to implement a hybrid BMI. Approach. Spikes and LFP were recorded from two macaques implanted with multielectrode arrays in primary and premotor cortex while they performed a reaching task. We then evaluated closed-loop BMI control using biomimetic decoders driven by LMP, spikes, or both signals together. Main results. LMP decoding enabled quick and accurate cursor control which surpassed previously reported LFP BMI performance. Hybrid decoding of both spikes and LMP improved performance when spikes signal quality was mediocre to poor. Significance. These findings show that LMP is an effective BMI control signal which requires minimal power to extract and can substitute for or augment impoverished spikes signals. Use of this signal may lengthen the useful lifespan of BMIs and is therefore an important step towards clinically viable BMIs.
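A minimal sketch of extracting the local motor potential (LMP) feature named above: low-pass filter the raw LFP trace and use the resulting slow, time-domain amplitude as a decoder input. The cutoff frequency, filter order and sampling rate are placeholders, not values taken from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lmp_feature(lfp, fs, cutoff_hz=5.0, order=4):
    """Low-pass filter an LFP trace and return the time-domain amplitude (LMP)."""
    b, a = butter(order, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, lfp)

fs = 1000.0                                   # hypothetical sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
lfp = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.default_rng(2).normal(size=t.size)
lmp = lmp_feature(lfp, fs)                    # slowly varying amplitude to feed a decoder
```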
Serial turbo trellis coded modulation using a serially concatenated coder
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Dolinar, Samuel J. (Inventor); Pollara, Fabrizio (Inventor)
2010-01-01
Serial concatenated trellis coded modulation (SCTCM) includes an outer coder, an interleaver, a recursive inner coder and a mapping element. The outer coder receives data to be coded and produces outer coded data. The interleaver permutes the outer coded data to produce interleaved data. The recursive inner coder codes the interleaved data to produce inner coded data. The mapping element maps the inner coded data to a symbol. The recursive inner coder has a structure which facilitates iterative decoding of the symbols at a decoder system. The recursive inner coder and the mapping element are selected to maximize the effective free Euclidean distance of a trellis coded modulator formed from the recursive inner coder and the mapping element. The decoder system includes a demodulation unit, an inner SISO (soft-input soft-output) decoder, a deinterleaver, an outer SISO decoder, and an interleaver.
Liao, Yuxi; Li, Hongbao; Zhang, Qiaosheng; Fan, Gong; Wang, Yiwen; Zheng, Xiaoxiang
2014-01-01
Decoding algorithms in motor Brain Machine Interfaces translate neural signals into movement parameters. They usually assume the connection between neural firings and movements to be stationary, which is not true according to recent studies that observe time-varying neuron tuning properties. This non-stationarity results from neural plasticity, motor learning and related processes, and it degrades decoding performance when the model is kept fixed. To track the non-stationary neuron tuning during decoding, we propose a dual-model approach based on a Monte Carlo point process filtering method that also estimates the dynamic tuning parameters. When applied to both simulated neural signals and in vivo BMI data, the proposed adaptive method performs better than one with static tuning parameters, suggesting a promising way to design a long-term-performing model for Brain Machine Interface decoders.
Maximum-likelihood soft-decision decoding of block codes using the A* algorithm
NASA Technical Reports Server (NTRS)
Ekroot, L.; Dolinar, S.
1994-01-01
The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.
Transmission over UWB channels with OFDM system using LDPC coding
NASA Astrophysics Data System (ADS)
Dziwoki, Grzegorz; Kucharczyk, Marcin; Sulek, Wojciech
2009-06-01
A hostile wireless environment requires the use of sophisticated signal processing methods. The paper concerns Ultra Wideband (UWB) transmission over Personal Area Networks (PAN), including the MB-OFDM specification of the physical layer. In the presented work, the OFDM transmission system was combined with an LDPC encoder/decoder. Additionally, the frame and bit error rates (FER and BER) of the system were decreased by feeding results from the LDPC decoder into a turbo-equalization-style algorithm for better channel estimation. A computational block based on an evolutionary strategy, from the genetic algorithms family, was also used in the presented system. It is placed after the SPA (Sum-Product Algorithm) decoder and is conditionally turned on during the decoding process. The result is increased effectiveness of the whole system, especially a lower FER. The system was tested with two types of LDPC codes, differing in the type of parity-check matrix: randomly generated, and constructed deterministically and optimized for a practical decoder architecture implemented in an FPGA device.
High-throughput GPU-based LDPC decoding
NASA Astrophysics Data System (ADS)
Chang, Yang-Lang; Chang, Cheng-Chun; Huang, Min-Yu; Huang, Bormin
2010-08-01
A low-density parity-check (LDPC) code is a linear block code known to approach the Shannon limit via the iterative sum-product algorithm. LDPC codes have been adopted in most current communication systems such as DVB-S2, WiMAX, Wi-Fi and 10GBASE-T. The need for reliable and flexible communication links across a wide variety of communication standards and configurations has inspired demand for high-performance, flexible computing. Accordingly, finding a fast and reconfigurable development platform for designing high-throughput LDPC decoders has become important, especially for rapidly changing communication standards and configurations. In this paper, a new graphics-processing-unit (GPU) LDPC decoding platform with asynchronous data transfer is proposed to realize such a practical implementation. Experimental results showed that the proposed GPU-based decoder achieved a 271x speedup compared with its CPU-based counterpart. It can serve as a high-throughput LDPC decoder.
Identifying musical pieces from fMRI data using encoding and decoding models.
Hoefle, Sebastian; Engel, Annerose; Basilio, Rodrigo; Alluri, Vinoo; Toiviainen, Petri; Cagy, Maurício; Moll, Jorge
2018-02-02
Encoding models can reveal and decode neural representations in the visual and semantic domains. However, a thorough understanding of how distributed information in auditory cortices and the temporal evolution of music contribute to model performance is still lacking in the musical domain. We measured fMRI responses during naturalistic music listening and constructed a two-stage approach that first mapped musical features in auditory cortices and then decoded novel musical pieces. We then probed the influence of stimulus duration (number of time points) and spatial extent (number of voxels) on decoding accuracy. Our approach revealed a linear increase in accuracy with duration and a point of optimal model performance for the spatial extent. We further showed that Shannon entropy is a driving factor, boosting accuracy up to 95% for music with the highest information content. These findings provide key insights for future decoding and reconstruction algorithms and open new avenues for possible clinical applications.
Generic decoding of seen and imagined objects using hierarchical visual features.
Horikawa, Tomoyasu; Kamitani, Yukiyasu
2017-05-22
Object recognition is a key function in both human and machine vision. While brain decoding of seen and imagined objects has been achieved, the prediction is limited to training examples. We present a decoding approach for arbitrary objects using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those derived from a deep convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, decoding of imagined objects reveals progressive recruitment of higher-to-lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.
Decoding grating orientation from microelectrode array recordings in monkey cortical area V4.
Manyakov, Nikolay V; Van Hulle, Marc M
2010-04-01
We propose an invasive brain-machine interface (BMI) that decodes the orientation of a visual grating from spike train recordings made with a 96-microelectrode array chronically implanted into the prelunate gyrus (area V4) of a rhesus monkey. The orientation is decoded irrespective of the grating's spatial frequency. Since pyramidal cells are less prominent in visual areas than in (pre)motor areas, the recordings contain spikes with smaller amplitudes relative to the noise level. Hence, rather than performing spike decoding, feature selection algorithms are applied to extract the information required by the decoder. Two types of feature selection procedures are compared, filter and wrapper. The wrapper is combined with a linear discriminant analysis classifier, and the filter is followed by a radial-basis function support vector machine classifier. In addition, since this is a multiclass classification problem, different methods for combining pairwise classifiers are compared.
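The "filter" branch of the comparison above can be sketched as a standard pipeline: rank features with a univariate score, keep the top ones, and train an RBF support vector machine. The data below are synthetic (the informative-feature structure is invented), so this only illustrates the shape of the procedure, not the V4 spike data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_features, n_orientations = 400, 96, 8
y = rng.integers(0, n_orientations, size=n_trials)
X = rng.normal(size=(n_trials, n_features))
X[:, :10] += 0.8 * y[:, None]        # only the first 10 features carry orientation info

pipe = make_pipeline(
    SelectKBest(f_classif, k=10),    # the "filter" step: univariate F-score ranking
    StandardScaler(),
    SVC(kernel="rbf", gamma="scale"),
)
print(cross_val_score(pipe, X, y, cv=5).mean())
```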
Highly efficient simulation environment for HDTV video decoder in VLSI design
NASA Astrophysics Data System (ADS)
Mao, Xun; Wang, Wei; Gong, Huimin; He, Yan L.; Lou, Jian; Yu, Lu; Yao, Qingdong; Pirsch, Peter
2002-01-01
As the complexity of VLSI designs increases, especially for an SoC (System on Chip) implementing an MPEG-2 video decoder with HDTV scalability, simulation and verification of the full design, even at the behavioral level in HDL, often prove very slow and costly, and full verification is difficult to perform until late in the design process. They therefore become the bottleneck of the HDTV video decoder design flow and strongly influence its time-to-market. In this paper, the hardware/software interface architecture of an HDTV video decoder is studied, and a Hardware-Software Mixed Simulation (HSMS) platform is proposed to detect and correct errors in the early design stage, based on the MPEG-2 video decoding algorithm. The application of HSMS to the target system is achieved by employing several approaches introduced here. Those approaches speed up the simulation and verification task without decreasing performance.
Gu, Yong; Angelaki, Dora E; DeAngelis, Gregory C
2014-07-01
Trial by trial covariations between neural activity and perceptual decisions (quantified by choice Probability, CP) have been used to probe the contribution of sensory neurons to perceptual decisions. CPs are thought to be determined by both selective decoding of neural activity and by the structure of correlated noise among neurons, but the respective roles of these factors in creating CPs have been controversial. We used biologically-constrained simulations to explore this issue, taking advantage of a peculiar pattern of CPs exhibited by multisensory neurons in area MSTd that represent self-motion. Although models that relied on correlated noise or selective decoding could both account for the peculiar pattern of CPs, predictions of the selective decoding model were substantially more consistent with various features of the neural and behavioral data. While correlated noise is essential to observe CPs, our findings suggest that selective decoding of neuronal signals also plays important roles.
Proteomics to study DNA-bound and chromatin-associated gene regulatory complexes
Wierer, Michael; Mann, Matthias
2016-01-01
High-resolution mass spectrometry (MS)-based proteomics is a powerful method for the identification of soluble protein complexes and large-scale affinity purification screens can decode entire protein interaction networks. In contrast, protein complexes residing on chromatin have been much more challenging, because they are difficult to purify and often of very low abundance. However, this is changing due to recent methodological and technological advances in proteomics. Proteins interacting with chromatin marks can directly be identified by pulldowns with synthesized histone tails containing posttranslational modifications (PTMs). Similarly, pulldowns with DNA baits harbouring single nucleotide polymorphisms or DNA modifications reveal the impact of those DNA alterations on the recruitment of transcription factors. Accurate quantitation – either isotope-based or label free – unambiguously pinpoints proteins that are significantly enriched over control pulldowns. In addition, protocols that combine classical chromatin immunoprecipitation (ChIP) methods with mass spectrometry (ChIP-MS) target gene regulatory complexes in their in-vivo context. Similar to classical ChIP, cells are crosslinked with formaldehyde and chromatin sheared by sonication or nuclease digested. ChIP-MS baits can be proteins in tagged or endogenous form, histone PTMs, or lncRNAs. Locus-specific ChIP-MS methods would allow direct purification of a single genomic locus and the proteins associated with it. There, loci can be targeted either by artificial DNA-binding sites and corresponding binding proteins or via proteins with sequence specificity such as TAL or nuclease deficient Cas9 in combination with a specific guide RNA. We predict that advances in MS technology will soon make such approaches generally applicable tools in epigenetics. PMID:27402878
Rathi, Preeti; Maurer, Sara; Summerer, Daniel
2018-06-05
The epigenetic DNA nucleobases 5-methylcytosine (5mC) and N4-methylcytosine (4mC) coexist in bacterial genomes and have important functions in host defence and transcription regulation. To better understand the individual biological roles of both methylated nucleobases, analytical strategies for distinguishing unmodified cytosine (C) from 4mC and 5mC are required. Transcription-activator-like effectors (TALEs) are programmable DNA-binding repeat proteins, which can be re-engineered for the direct detection of epigenetic nucleobases in user-defined DNA sequences. Here we report that the natural, cytosine-binding TALE repeat does not strongly differentiate between 5mC and 4mC. To engineer repeats with selectivity in the context of C, 5mC and 4mC, we developed a homogeneous fluorescence assay and screened a library of size-reduced TALE repeats for binding to all three nucleobases. This provided insights into the requirements of size-reduced TALE repeats for 4mC binding and revealed a single mutant repeat as a selective binder of 4mC. Employment of a TALE with this repeat in affinity enrichment enabled the isolation of a user-defined DNA sequence containing a single 4mC, but not C or 5mC, from the background of a bacterial genome. Comparative enrichments with TALEs bearing this or the natural C-binding repeat provide an approach for the complete, programmable decoding of all cytosine nucleobases found in bacterial genomes. This article is part of a discussion meeting issue 'Frontiers in epigenetic chemical biology'. © 2018 The Author(s).
Phobos lander coding system: Software and analysis
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Pollara, F.
1988-01-01
The software developed for the decoding system used in the telemetry link of the Phobos Lander mission is described. Encoders and decoders are provided to cover the three possible telemetry configurations. The software can be used to decode actual data or to simulate the performance of the telemetry system. The theoretical properties of the codes chosen for this mission are analyzed and discussed.
ERIC Educational Resources Information Center
García, J. Ricardo; Cain, Kate
2014-01-01
The twofold purpose of this meta-analysis was to determine the relative importance of decoding skills to reading comprehension in reading development and to identify which reader characteristics and reading assessment characteristics contribute to differences in the decoding and reading comprehension correlation. A meta-analysis of 110 studies…
Linguistic Effects on Children's Encoding and Decoding Performance in Japan and the United States.
ERIC Educational Resources Information Center
Foorman, Barbara R.; Kinoshita, Yoshiko
The role of linguistic structure in a referential communication task was examined by comparing the encoding and decoding performance of 80 five- and seven-year-old children from Japan and the United States. The linguistic structure demanded by the task was the simultaneous encoding and decoding of attributes of size, color, pattern, and shape. (In…
ERIC Educational Resources Information Center
Sherman, Renee Z.; Sherman, Joel D.
This market research report analyzed the published literature, the size of the deaf/severely hard-of-hearing population, factors that affect demand for closed-captioned television decoders, and the supply of decoders. The analysis found that the number of hearing-impaired people in the United States is between 16 and 21 million; hearing impairment…
Scan Line Difference Compression Algorithm Simulation Study.
1985-08-01
introduced during the signal transmission process. [Figure A-1, Overall Data Compression Process: an image source feeds conditioned data through error-control coding in the SLDC encoder; the SLDC decoder performs error-control decoding and reconstruction of the image. The remainder of this scanned passage is illegible apart from a reference to noise and to an effective channel coding subsystem providing the necessary error control.]
ERIC Educational Resources Information Center
Morlini, Isabella; Stella, Giacomo; Scorza, Maristella
2015-01-01
Tools for assessing decoding skill in students attending elementary grades are of fundamental importance for guaranteeing an early identification of reading disabled students and reducing both the primary negative effects (on learning) and the secondary negative effects (on the development of the personality) of this disability. This article…
Does Knowing What a Word Means Influence How Easily Its Decoding Is Learned?
ERIC Educational Resources Information Center
Michaud, Mélissa; Dion, Eric; Barrette, Anne; Dupéré, Véronique; Toste, Jessica
2017-01-01
Theoretical models of word recognition suggest that knowing what a word means makes it easier to learn how to decode it. We tested this hypothesis with at-risk young students, a group that often responds poorly to conventional decoding instruction in which word meaning is not addressed systematically. A total of 53 first graders received explicit…
ERIC Educational Resources Information Center
Lindner, Jennifer L.; Rosen, Lee A.
2006-01-01
This study examined differences in the ability to decode emotion through facial expression, prosody, and verbal content between 14 children with Asperger's Syndrome (AS) and 16 typically developing peers. The ability to decode emotion was measured by the Perception of Emotion Test (POET), which portrayed the emotions of happy, angry, sad, and…
ERIC Educational Resources Information Center
Gallistel, Elizabeth; Fischer, Phyllis
This study evaluated the decoding skills acquired by low readers in an experimental project that taught low readers in regular class through the use of clinical procedures based on a synthetic phonic, multisensory approach. An evaluation instrument which permitted the tabulation of specific decoding skills was administered as a pretest and…
A single chip VLSI Reed-Solomon decoder
NASA Technical Reports Server (NTRS)
Shao, H. M.; Truong, T. K.; Hsu, I. S.; Deutsch, L. J.; Reed, I. S.
1986-01-01
A new VLSI design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous design is replaced by a time domain algorithm. A new architecture that implements such an algorithm permits efficient pipeline processing with minimum circuitry. A systolic array is also developed to perform erasure corrections in the new design. A modified form of Euclid's algorithm is implemented by a new architecture that maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and a significant reduction in silicon area, therefore making it possible to build a pipeline (31,15)RS decoder on a single VLSI chip.
On the reduced-complexity of LDPC decoders for ultra-high-speed optical transmission.
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2010-10-25
We propose two reduced-complexity (RC) LDPC decoders, which can be used in combination with large-girth LDPC codes to enable ultra-high-speed serial optical transmission. We show that the optimally attenuated RC min-sum algorithm performs only 0.46 dB (at a BER of 10^-9) worse than the conventional sum-product algorithm, while having lower storage memory requirements and much lower latency. We further study the use of RC LDPC decoding algorithms in multilevel coded modulation with coherent detection and show that with RC decoding algorithms we can achieve a net coding gain larger than 11 dB at BERs below 10^-9.
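A compact sketch of the attenuated min-sum update discussed above, written for a small parity-check matrix. The attenuation factor, the toy (7,4) code and the LLR values are illustrative choices, not the large-girth codes or optimal attenuation from the paper; the check-node rule (sign product plus attenuated minimum magnitude) is the part the abstract refers to.

```python
import numpy as np

def min_sum_decode(H, llr, max_iter=20, alpha=0.8):
    """Attenuated (normalized) min-sum decoding for a binary code with
    parity-check matrix H. llr > 0 favours bit value 0. Assumes every
    check involves at least two bits."""
    m, n = H.shape
    V = H * llr                      # variable-to-check messages (edges where H == 1)
    C = np.zeros_like(V, dtype=float)
    hard = np.zeros(n, dtype=int)
    for _ in range(max_iter):
        for i in range(m):
            idx = np.flatnonzero(H[i])
            msgs = V[i, idx]
            signs = np.where(msgs < 0, -1.0, 1.0)
            total_sign = signs.prod()
            mags = np.abs(msgs)
            order = np.argsort(mags)
            min1, min2 = mags[order[0]], mags[order[1]]
            for k, j in enumerate(idx):
                # exclude edge j itself: second-smallest magnitude if j is the argmin
                mag = min2 if k == order[0] else min1
                C[i, j] = alpha * total_sign * signs[k] * mag
        post = llr + C.sum(axis=0)           # posterior LLR per bit
        hard = (post < 0).astype(int)
        if not np.any(H @ hard % 2):         # all parity checks satisfied
            return hard, True
        V = H * (post - C)                   # extrinsic variable-to-check messages
    return hard, False

# Tiny illustrative example: (7,4) Hamming code, all-zero codeword, one weak bit.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([2.0, 2.0, 2.0, -0.5, 2.0, 2.0, 2.0])   # bit 3 looks unreliable
decoded, ok = min_sum_decode(H, llr)
print(decoded, ok)                                       # -> all zeros, True
```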
NASA Astrophysics Data System (ADS)
Pan, Xiaolong; Liu, Bo; Zheng, Jianglong; Tian, Qinghua
2016-08-01
We propose and demonstrate a low-complexity Reed-Solomon-based low-density parity-check (RS-LDPC) code with an adaptive puncturing decoding algorithm for elastic optical transmission systems. Parts of the received codeword and the corresponding columns of the parity-check matrix can be punctured, adapting the parity-check matrix during decoding to reduce computational complexity. The results show that the complexity of the proposed decoding algorithm is reduced by 30% compared with the regular RS-LDPC system. The optimized code rate of the RS-LDPC code can be obtained after five iterations.
New syndrome decoding techniques for the (n, k) convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum-weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3, 1) CC. Previously announced in STAR as N83-34964
The fast decoding of Reed-Solomon codes using number theoretic transforms
NASA Technical Reports Server (NTRS)
Reed, I. S.; Welch, L. R.; Truong, T. K.
1976-01-01
It is shown that Reed-Solomon (RS) codes can be encoded and decoded by using a fast Fourier transform (FFT) algorithm over finite fields. The arithmetic utilized to perform these transforms requires only integer additions, circular shifts and a minimum number of integer multiplications. The computing time of this transform encoder-decoder for RS codes is less than the time of the standard method for RS codes. More generally, the field GF(q) is also considered, where q is a prime of the form K × 2^n + 1 and K and n are integers. GF(q) can be used to decode very long RS codes by an efficient FFT algorithm with an improvement in the number of symbols. It is shown that a radix-8 FFT algorithm over GF(q^2) can be utilized to encode and decode very long RS codes with a large number of symbols. For eight symbols in GF(q^2), this transform over GF(q^2) can be made simpler than any other known number theoretic transform with a similar capability. Of special interest is the decoding of a 16-tuple RS code with four errors.
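To make the finite-field transform idea concrete, here is a naive O(n^2) number-theoretic transform and its inverse over GF(q) with q = K × 2^n + 1, using the Fermat prime q = 257. It only demonstrates the forward/inverse round trip over a finite field; the radix-8 FFT structure and the RS encoding/decoding steps from the paper are not reproduced.

```python
# q = 257 = 2^8 + 1 is prime, so GF(257) contains roots of unity for every
# power-of-two transform length up to 256 (3 is a primitive root mod 257).
q = 257

def ntt(a, root, q):
    """Naive O(n^2) number-theoretic transform of the sequence a over GF(q)."""
    n = len(a)
    return [sum(a[j] * pow(root, i * j, q) for j in range(n)) % q for i in range(n)]

def intt(A, root, q):
    """Inverse transform: run the transform with the inverse root, scale by 1/n."""
    n = len(A)
    inv_n = pow(n, q - 2, q)          # Fermat inverse of n
    inv_root = pow(root, q - 2, q)    # inverse root of unity
    return [(x * inv_n) % q for x in ntt(A, inv_root, q)]

n = 16
root = pow(3, (q - 1) // n, q)        # primitive 16th root of unity in GF(257)
msg = list(range(n))
assert intt(ntt(msg, root, q), root, q) == msg   # round trip recovers the message
```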
A four-dimensional virtual hand brain-machine interface using active dimension selection.
Rouse, Adam G
2016-06-01
Brain-machine interfaces (BMI) traditionally rely on a fixed, linear transformation from neural signals to an output state-space. In this study, the assumption that a BMI must control a fixed, orthogonal basis set was challenged and a novel active dimension selection (ADS) decoder was explored. ADS utilizes a two stage decoder by using neural signals to both (i) select an active dimension being controlled and (ii) control the velocity along the selected dimension. ADS decoding was tested in a monkey using 16 single units from premotor and primary motor cortex to successfully control a virtual hand avatar to move to eight different postures. Following training with the ADS decoder to control 2, 3, and then 4 dimensions, each emulating a grasp shape of the hand, performance reached 93% correct with a bit rate of 2.4 bits/s for eight targets. Selection of eight targets using ADS control was more efficient, as measured by bit rate, than either full four-dimensional control or computer assisted one-dimensional control. ADS decoding allows a user to quickly and efficiently select different hand postures. This novel decoding scheme represents a potential method to reduce the complexity of high-dimension BMI control of the hand.
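A schematic of the two-stage ADS idea in code: one set of weights scores the candidate dimensions and the most active one is selected, while a second readout maps the same neural sample to a signed velocity applied only along that dimension. All weights, unit counts and rates below are random placeholders rather than the decoder fitted in the study.

```python
import numpy as np

rng = np.random.default_rng(4)
n_units, n_dims = 16, 4

W_select = rng.normal(size=(n_dims, n_units))    # stage 1: score each dimension
w_speed = rng.normal(size=n_units)               # stage 2: signed speed readout

def ads_decode(rates, posture):
    """One ADS update: choose the active dimension, then move only along it."""
    dim = int(np.argmax(W_select @ rates))        # (i) active dimension selection
    velocity = float(w_speed @ rates)             # (ii) velocity on that dimension
    posture = posture.copy()
    posture[dim] += 0.05 * velocity               # small integration step
    return posture, dim

posture = np.zeros(n_dims)
rates = rng.poisson(10, size=n_units).astype(float)
posture, active_dim = ads_decode(rates, posture)
```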
NASA Astrophysics Data System (ADS)
Spüler, M.; Walter, A.; Ramos-Murguialday, A.; Naros, G.; Birbaumer, N.; Gharabaghi, A.; Rosenstiel, W.; Bogdan, M.
2014-12-01
Objective. Recently, there have been several approaches to utilize a brain-computer interface (BCI) for rehabilitation with stroke patients or as an assistive device for the paralyzed. In this study we investigated whether up to seven different hand movement intentions can be decoded from epidural electrocorticography (ECoG) in chronic stroke patients. Approach. In a screening session we recorded epidural ECoG data over the ipsilesional motor cortex from four chronic stroke patients who had no residual hand movement. Data was analyzed offline using a support vector machine (SVM) to decode different movement intentions. Main results. We showed that up to seven hand movement intentions can be decoded with an average accuracy of 61% (chance level 15.6%). When reducing the number of classes, average accuracies up to 88% can be achieved for decoding three different movement intentions. Significance. The findings suggest that ipsilesional epidural ECoG can be used as a viable control signal for BCI-driven neuroprosthesis. Although patients showed no sign of residual hand movement, brain activity at the ipsilesional motor cortex still shows enough intention-related activity to decode different movement intentions with sufficient accuracy.
Decoding power-spectral profiles from FMRI brain activities during naturalistic auditory experience.
Hu, Xintao; Guo, Lei; Han, Junwei; Liu, Tianming
2017-02-01
Recent studies have demonstrated a close relationship between computational acoustic features and neural brain activities, and have largely advanced our understanding of auditory information processing in the human brain. Along this line, we proposed a multidisciplinary study to examine whether power spectral density (PSD) profiles can be decoded from brain activities during naturalistic auditory experience. The study was performed on a high resolution functional magnetic resonance imaging (fMRI) dataset acquired when participants freely listened to the audio-description of the movie "Forrest Gump". Representative PSD profiles existing in the audio-movie were identified by clustering the audio samples according to their PSD descriptors. Support vector machine (SVM) classifiers were trained to differentiate the representative PSD profiles using corresponding fMRI brain activities. Based on PSD profile decoding, we explored how the neural decodability correlated to power intensity and frequency deviants. Our experimental results demonstrated that PSD profiles can be reliably decoded from brain activities. We also suggested a sigmoidal relationship between the neural decodability and power intensity deviants of PSD profiles. Our study in addition substantiates the feasibility and advantage of naturalistic paradigm for studying neural encoding of complex auditory information.
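The stimulus side of the pipeline described above can be sketched as follows: compute a Welch PSD descriptor per audio window, cluster the descriptors with k-means to define representative PSD profiles, and train an SVM to recover the profile labels. Since no fMRI data are included here, noisy copies of the descriptors stand in for the brain features, and the synthetic tones are only there to give the clustering something to find.

```python
import numpy as np
from scipy.signal import welch
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
fs, win, n_windows = 8_000, 8_000, 60            # 1 s windows of hypothetical audio
tones = [440.0, 880.0, 1760.0, 3000.0]           # four synthetic "spectral profiles"
t = np.arange(win) / fs
windows = np.array([np.sin(2 * np.pi * tones[i % 4] * t)
                    + 0.5 * rng.normal(size=win) for i in range(n_windows)])

# 1) PSD descriptor per window
psd = np.array([welch(w, fs=fs, nperseg=1024)[1] for w in windows])
log_psd = np.log(psd + 1e-12)

# 2) representative PSD profiles via clustering of the descriptors
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(log_psd)

# 3) SVM that, in the study, is trained on fMRI activity; noisy copies of the
#    descriptors stand in for "brain features" in this self-contained sketch
brain = log_psd + rng.normal(scale=1.0, size=log_psd.shape)
print(cross_val_score(SVC(), brain, labels, cv=5).mean())
```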
Du, Jing; Wang, Jian
2015-11-01
Bessel beams carrying orbital angular momentum (OAM) with helical phase fronts exp(ilφ)(l=0;±1;±2;…), where φ is the azimuthal angle and l corresponds to the topological number, are orthogonal with each other. This feature of Bessel beams provides a new dimension to code/decode data information on the OAM state of light, and the theoretical infinity of topological number enables possible high-dimensional structured light coding/decoding for free-space optical communications. Moreover, Bessel beams are nondiffracting beams having the ability to recover by themselves in the face of obstructions, which is important for free-space optical communications relying on line-of-sight operation. By utilizing the OAM and nondiffracting characteristics of Bessel beams, we experimentally demonstrate 12 m distance obstruction-free optical m-ary coding/decoding using visible Bessel beams in a free-space optical communication system. We also study the bit error rate (BER) performance of hexadecimal and 32-ary coding/decoding based on Bessel beams with different topological numbers. After receiving 500 symbols at the receiver side, a zero BER of hexadecimal coding/decoding is observed when the obstruction is placed along the propagation path of light.
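A toy version of the m-ary coding/decoding idea: map each hexadecimal symbol of a payload to one of 16 OAM topological charges and invert the mapping at the receiver. The particular charge set is an arbitrary choice for illustration, not the one used in the experiment.

```python
# 16 distinct topological charges, one per hexadecimal symbol (illustrative set).
CHARGES = list(range(-8, 0)) + list(range(1, 9))          # l = -8..-1, 1..8
SYM_TO_L = {s: l for s, l in enumerate(CHARGES)}
L_TO_SYM = {l: s for s, l in SYM_TO_L.items()}

def encode(data: bytes) -> list:
    """Map each 4-bit nibble of the payload to an OAM charge l."""
    charges = []
    for byte in data:
        charges += [SYM_TO_L[byte >> 4], SYM_TO_L[byte & 0x0F]]
    return charges

def decode(charges: list) -> bytes:
    """Recover the payload from the detected sequence of charges."""
    syms = [L_TO_SYM[l] for l in charges]
    return bytes((hi << 4) | lo for hi, lo in zip(syms[::2], syms[1::2]))

assert decode(encode(b"OAM")) == b"OAM"
```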
Bandwidth efficient coding for satellite communications
NASA Technical Reports Server (NTRS)
Lin, Shu; Costello, Daniel J., Jr.; Miller, Warner H.; Morakis, James C.; Poland, William B., Jr.
1992-01-01
An error control coding scheme was devised to achieve large coding gain and high reliability by using coded modulation with reduced decoding complexity. To achieve a 3 to 5 dB coding gain and moderate reliability, the decoding complexity is quite modest. In fact, to achieve a 3 dB coding gain, the decoding complexity is quite simple, no matter whether trellis coded modulation or block coded modulation is used. However, to achieve coding gains exceeding 5 dB, the decoding complexity increases drastically, and the implementation of the decoder becomes very expensive and unpractical. The use is proposed of coded modulation in conjunction with concatenated (or cascaded) coding. A good short bandwidth efficient modulation code is used as the inner code and relatively powerful Reed-Solomon code is used as the outer code. With properly chosen inner and outer codes, a concatenated coded modulation scheme not only can achieve large coding gains and high reliability with good bandwidth efficiency but also can be practically implemented. This combination of coded modulation and concatenated coding really offers a way of achieving the best of three worlds, reliability and coding gain, bandwidth efficiency, and decoding complexity.
Harrison, Charlotte; Jackson, Jade; Oh, Seung-Mock; Zeringyte, Vaida
2016-01-01
Multivariate pattern analysis of functional magnetic resonance imaging (fMRI) data is widely used, yet the spatial scales and origin of neurovascular signals underlying such analyses remain unclear. We compared decoding performance for stimulus orientation and eye of origin from fMRI measurements in human visual cortex with predictions based on the columnar organization of each feature and estimated the spatial scales of patterns driving decoding. Both orientation and eye of origin could be decoded significantly above chance in early visual areas (V1–V3). Contrary to predictions based on a columnar origin of response biases, decoding performance for eye of origin in V2 and V3 was not significantly lower than that in V1, nor did decoding performance for orientation and eye of origin differ significantly. Instead, response biases for both features showed large-scale organization, evident as a radial bias for orientation, and a nasotemporal bias for eye preference. To determine whether these patterns could drive classification, we quantified the effect on classification performance of binning voxels according to visual field position. Consistent with large-scale biases driving classification, binning by polar angle yielded significantly better decoding performance for orientation than random binning in V1–V3. Similarly, binning by hemifield significantly improved decoding performance for eye of origin. Patterns of orientation and eye preference bias in V2 and V3 showed a substantial degree of spatial correlation with the corresponding patterns in V1, suggesting that response biases in these areas originate in V1. Together, these findings indicate that multivariate classification results need not reflect the underlying columnar organization of neuronal response selectivities in early visual areas. NEW & NOTEWORTHY Large-scale response biases can account for decoding of orientation and eye of origin in human early visual areas V1–V3. For eye of origin this pattern is a nasotemporal bias; for orientation it is a radial bias. Differences in decoding performance across areas and stimulus features are not well predicted by differences in columnar-scale organization of each feature. Large-scale biases in extrastriate areas are spatially correlated with those in V1, suggesting biases originate in primary visual cortex. PMID:27903637
Evidence of translation efficiency adaptation of the coding regions of the bacteriophage lambda.
Goz, Eli; Mioduser, Oriah; Diament, Alon; Tuller, Tamir
2017-08-01
Deciphering the way gene expression regulatory aspects are encoded in viral genomes is a challenging mission with ramifications for all biomedical disciplines. Here, we aimed to understand how evolution shapes the bacteriophage lambda genes by performing a high-resolution analysis of ribosomal profiling data and of the gene-expression-related synonymous/silent information encoded in bacteriophage coding regions. We demonstrated evidence of selection for distinct compositions of synonymous codons in early and late viral genes related to the adaptation of translation efficiency to different bacteriophage developmental stages. Specifically, we showed that evolution of viral coding regions is driven, among other factors, by selection for codons with higher decoding rates; during the initial/progressive stages of infection the decoding rates in early/late genes were found to be superior to those in late/early genes, respectively. Moreover, we argued that selection for translation efficiency could be partially explained by adaptation to the Escherichia coli tRNA pool and the fact that it can change during the bacteriophage life cycle. An analysis of additional aspects related to the expression of viral genes, such as mRNA folding and more complex/longer regulatory signals in the coding regions, is also reported. The reported conclusions are likely to be relevant to additional viruses as well. © The Author 2017. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.
Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey
NASA Astrophysics Data System (ADS)
Guillemot, Christine; Siohan, Pierre
2005-12-01
Multimedia transmission over time-varying wireless channels presents a number of challenges beyond the existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM), capturing dependencies between the source and channel coding components, sets the foundation for the optimal design of joint decoding techniques for source and channel codes. The problem has been largely addressed in the research community by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper surveys recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in applying the turbo principle to JSCD of VLC-encoded sources, as well as the main approaches to source-controlled channel decoding. The survey concludes with performance illustrations from real image and video decoding systems.
A Dual Coding Theoretical Model of Decoding in Reading: Subsuming the LaBerge and Samuels Model
ERIC Educational Resources Information Center
Sadoski, Mark; McTigue, Erin M.; Paivio, Allan
2012-01-01
In this article we present a detailed Dual Coding Theory (DCT) model of decoding. The DCT model reinterprets and subsumes The LaBerge and Samuels (1974) model of the reading process which has served well to account for decoding behaviors and the processes that underlie them. However, the LaBerge and Samuels model has had little to say about…
ERIC Educational Resources Information Center
Allen, Vernon L.; Brideau, Linda B.
The relationship between the encoder and the decoder in the communication of nonverbal behavior provides the basis for the two studies described in this report. The first study investigated the ability of parents to decode the nonverbal behavior of their own and other children. Parents were asked to identify children's mode of encoding (natural or…
Method of Error Floor Mitigation in Low-Density Parity-Check Codes
NASA Technical Reports Server (NTRS)
Hamkins, Jon (Inventor)
2014-01-01
A digital communication decoding method for low-density parity-check coded messages. The decoding method decodes the low-density parity-check coded messages within a bipartite graph having check nodes and variable nodes. Messages from check nodes are partially hard limited, so that every message which would otherwise have a magnitude at or above a certain level is re-assigned to a maximum magnitude.
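One way to read the mechanism in this abstract, as a standalone helper applied to a vector of check-node messages: any message whose magnitude reaches a chosen level is re-assigned to a fixed maximum magnitude while keeping its sign. Both the level and the maximum are placeholder parameters, not values from the patent.

```python
import numpy as np

def partially_hard_limit(check_msgs, level=4.0, max_mag=8.0):
    """Re-assign messages with |m| >= level to +/- max_mag; pass the rest through."""
    m = np.asarray(check_msgs, dtype=float).copy()
    big = np.abs(m) >= level
    m[big] = np.sign(m[big]) * max_mag
    return m

print(partially_hard_limit([0.5, -3.9, 4.2, -7.0]))   # -> [ 0.5 -3.9  8.  -8. ]
```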
Using convolutional decoding to improve time delay and phase estimation in digital communications
Ormesher, Richard C [Albuquerque, NM; Mason, John J [Albuquerque, NM
2010-01-26
The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.
Adaptive neuron-to-EMG decoder training for FES neuroprostheses
NASA Astrophysics Data System (ADS)
Ethier, Christian; Acuna, Daniel; Solla, Sara A.; Miller, Lee E.
2016-08-01
Objective. We have previously demonstrated a brain-machine interface neuroprosthetic system that provided continuous control of functional electrical stimulation (FES) and restoration of grasp in a primate model of spinal cord injury (SCI). Predicting intended EMG directly from cortical recordings provides a flexible high-dimensional control signal for FES. However, no peripheral signal such as force or EMG is available for training EMG decoders in paralyzed individuals. Approach. Here we present a method for training an EMG decoder in the absence of muscle activity recordings; the decoder relies on mapping behaviorally relevant cortical activity to the inferred EMG activity underlying an intended action. Monkeys were trained at a 2D isometric wrist force task to control a computer cursor by applying force in the flexion, extension, ulnar, and radial directions and execute a center-out task. We used a generic muscle force-to-endpoint force model based on muscle pulling directions to relate each target force to an optimal EMG pattern that attained the target force while minimizing overall muscle activity. We trained EMG decoders during the target hold periods using a gradient descent algorithm that compared EMG predictions to optimal EMG patterns. Main results. We tested this method both offline and online. We quantified both the accuracy of offline force predictions and the ability of a monkey to use these real-time force predictions for closed-loop cursor control. We compared both offline and online results to those obtained with several other direct force decoders, including an optimal decoder computed from concurrently measured neural and force signals. Significance. This novel approach to training an adaptive EMG decoder could make a brain-control FES neuroprosthesis an effective tool to restore the hand function of paralyzed individuals. Clinical implementation would make use of individualized EMG-to-force models. Broad generalization could be achieved by including data from multiple grasping tasks in the training of the neuron-to-EMG decoder. Our approach would make it possible for persons with SCI to grasp objects with their own hands, using near-normal motor intent.
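A stripped-down sketch of the training loop described above: a linear neuron-to-EMG decoder is adjusted by gradient descent so that its output matches an "optimal" EMG pattern derived from a simple muscle pulling-direction model. The neural data, muscle model and learning rate are synthetic placeholders; the point is only the structure of training a decoder without recorded EMG.

```python
import numpy as np

rng = np.random.default_rng(6)
n_units, n_muscles, n_samples = 40, 4, 2000

neural = rng.poisson(5, size=(n_samples, n_units)).astype(float)   # firing rates
M = np.abs(rng.normal(size=(n_muscles, 2)))        # muscle pulling directions (2D force)
target_force = rng.normal(size=(n_samples, 2))     # commanded force during target holds

# "Optimal" EMG surrogate: minimum-norm activation attaining the target force,
# clipped to be non-negative (a stand-in for the minimization in the paper).
emg_opt = np.clip(target_force @ np.linalg.pinv(M), 0, None)

W = np.zeros((n_units, n_muscles))                 # linear neuron-to-EMG decoder
lr = 1e-4
for _ in range(200):                               # gradient descent on squared error
    pred = neural @ W                              # predicted EMG for every sample
    grad = neural.T @ (pred - emg_opt) / n_samples
    W -= lr * grad

print("final mean squared EMG error:", np.mean((neural @ W - emg_opt) ** 2))
```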
Patael, Smadar Z.; Farris, Emily A.; Black, Jessica M.; Hancock, Roeland; Gabrieli, John D. E.; Cutting, Laurie E.; Hoeft, Fumiko
2018-01-01
Objective The ultimate goal of reading is to understand written text. To accomplish this, children must first master decoding, the ability to translate printed words into sounds. Although decoding and reading comprehension are highly interdependent, some children struggle to decode but comprehend well, whereas others with good decoding skills fail to comprehend. The neural basis underlying individual differences in this discrepancy between decoding and comprehension abilities is virtually unknown. Methods We investigated the neural basis underlying reading discrepancy, defined as the difference between reading comprehension and decoding skills, in a three-part study: 1) The neuroanatomical basis of reading discrepancy in a cross-sectional sample of school-age children with a wide range of reading abilities (Experiment-1; n = 55); 2) Whether a discrepancy-related neural signature is present in beginning readers and predictive of future discrepancy (Experiment-2; n = 43); and 3) Whether discrepancy-related regions are part of a domain-general or a language specialized network, utilizing the 1000 Functional Connectome data and large-scale reverse inference from Neurosynth.org (Experiment-3). Results Results converged onto the left dorsolateral prefrontal cortex (DLPFC), as related to having discrepantly higher reading comprehension relative to decoding ability. Increased gray matter volume (GMV) was associated with greater discrepancy (Experiment-1). Region-of-interest (ROI) analyses based on the left DLPFC cluster identified in Experiment-1 revealed that regional GMV within this ROI in beginning readers predicted discrepancy three years later (Experiment-2). This region was associated with the fronto-parietal network that is considered fundamental for working memory and cognitive control (Experiment-3). Interpretation Processes related to the prefrontal cortex might be linked to reading discrepancy. The findings may be important for understanding cognitive resilience, which we operationalize as those individuals with greater higher-order reading skills such as reading comprehension compared to lower-order reading skills such as decoding skills. Our study provides insights into reading development, existing theories of reading, and cognitive processes that are potentially significant to a wide range of reading disorders. PMID:29902208
Doring, Martin
2005-12-01
This article deals with the cultural framing of the near-completion of the sequencing of the human genome and its impact on the media coverage in Germany. It investigates in particular the way in which the weekly journal Die Zeit and the daily newspaper Frankfurter Rundschau reported this media event and its aftermath between June 2000 and June 2001. Both newspapers are quality papers that played an essential role in framing the human genome debate--alongside the Frankfurter Allgemeine Zeitung--which became the most prominent genomic forum. The decoding of the human genome prompted a huge controversy concerning the ethics of human engineering, research on stem cells and Preimplantation Genetic Diagnosis. The main aim of this article is to show how this controversy was structured by metaphor. The media coverage of the genome generated DNA-factishes--a neologism designating the ambivalence of something as fact (fait) and as a fetish (fetiche)--that mostly propagated images of a new DNA-scienticism or biological determinism. Mediated by cultural experiences, the human genome became a highly artificial and social construct of a 'NatureCulture'.
Evolution of Replication Machines
Yao, Nina Y.; O'Donnell, Mike E.
2016-01-01
The machines that decode and regulate genetic information require the translation, transcription and replication pathways essential to all living cells. Thus, it might be expected that all cells share the same basic machinery for these pathways that were inherited from the primordial ancestor cell from which they evolved. A clear example of this is found in the translation machinery that converts RNA sequence to protein. The translation process requires numerous structural and catalytic RNAs and proteins, the central factors of which are homologous in all three domains of life, bacteria, archaea and eukarya. Likewise, the central actor in transcription, RNA polymerase, shows homology among the catalytic subunits in bacteria, archaea and eukarya. In contrast, while some “gears” of the genome replication machinery are homologous in all domains of life, most components of the replication machine appear to be unrelated between bacteria and those of archaea and eukarya. This review will compare and contrast the central proteins of the “replisome” machines that duplicate DNA in bacteria, archaea and eukarya, with an eye to understanding the issues surrounding the evolution of the DNA replication apparatus. PMID:27160337
Engineering M13 for phage display.
Sidhu, S S
2001-09-01
Phage display is achieved by fusing polypeptide libraries to phage coat proteins. The resulting phage particles display the polypeptides on their surfaces and they also contain the encoding DNA. Library members with particular functions can be isolated with simple selections and polypeptide sequences can be decoded from the encapsulated DNA. The technology's success depends on the efficiency with which polypeptides can be displayed on the phage surface, and significant progress has been made in engineering M13 bacteriophage coat proteins as improved phage display platforms. Functional display has been achieved with all five M13 coat proteins, with both N- and C-terminal fusions. Also, coat protein mutants have been designed and selected to improve the efficiency of heterologous protein display, and in the extreme case, completely artificial coat proteins have been evolved specifically as display platforms. These studies demonstrate that the M13 phage coat is extremely malleable, and this property can be used to engineer the phage particle specifically for phage display. These improvements expand the utility of phage display as a powerful tool in modern biotechnology.
Decoding Ca2+ signals in plants
NASA Technical Reports Server (NTRS)
Sathyanarayanan, P. V.; Poovaiah, B. W.
2004-01-01
Different input signals create their own characteristic Ca2+ fingerprints. These fingerprints are distinguished by frequency, amplitude, duration, and number of Ca2+ oscillations. Ca(2+)-binding proteins and protein kinases decode these complex Ca2+ fingerprints through conformational coupling and covalent modifications of proteins. This decoding of signals can lead to a physiological response with or without changes in gene expression. In plants, Ca(2+)-dependent protein kinases and Ca2+/calmodulin-dependent protein kinases are involved in decoding Ca2+ signals into phosphorylation signals. This review summarizes the elements of conformational coupling and molecular mechanisms of regulation of the two groups of protein kinases by Ca2+ and Ca2+/calmodulin in plants.
Flexible High Speed Codec (FHSC)
NASA Technical Reports Server (NTRS)
Segallis, G. P.; Wernlund, J. V.
1991-01-01
The ongoing NASA/Harris Flexible High Speed Codec (FHSC) program is described. The program objectives are to design and build an encoder decoder that allows operation in either burst or continuous modes at data rates of up to 300 megabits per second. The decoder handles both hard and soft decision decoding and can switch between modes on a burst by burst basis. Bandspreading is low since the code rate is greater than or equal to 7/8. The encoder and a hard decision decoder fit on a single application specific integrated circuit (ASIC) chip. A soft decision applique is implemented using 300 K emitter coupled logic (ECL) which can be easily translated to an ECL gate array.
On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, H.M.; Reed, I.S.
A new VLSI design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous paper is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and significant reduction in silicon area, therefore making it possible to build a pipeline Reed-Solomon decoder on a single VLSI chip.
NASA Astrophysics Data System (ADS)
Bucklin, Ann; Ortman, Brian D.; Jennings, Robert M.; Nigro, Lisa M.; Sweetman, Christopher J.; Copley, Nancy J.; Sutton, Tracey; Wiebe, Peter H.
2010-12-01
Species diversity of the metazoan holozooplankton assemblage of the Sargasso Sea, Northwest Atlantic Ocean, was examined through coordinated morphological taxonomic identification of species and DNA sequencing of a ˜650 base-pair region of mitochondrial cytochrome oxidase I (mtCOI) as a DNA barcode (i.e., short sequence for species recognition and discrimination). Zooplankton collections were made from the surface to 5,000 meters during April, 2006 on the R/V R.H. Brown. Samples were examined by a ship-board team of morphological taxonomists; DNA barcoding was carried out in both ship-board and land-based DNA sequencing laboratories. DNA barcodes were determined for a total of 297 individuals of 175 holozooplankton species in four phyla, including: Cnidaria (Hydromedusae, 4 species; Siphonophora, 47); Arthropoda (Amphipoda, 10; Copepoda, 34; Decapoda, 9; Euphausiacea, 10; Mysidacea, 1; Ostracoda, 27); and Mollusca (Cephalopoda, 8; Heteropoda, 6; Pteropoda, 15); and Chaetognatha (4). Thirty species of fish (Teleostei) were also barcoded. For all seven zooplankton groups for which sufficient data were available, Kimura-2-Parameter genetic distances were significantly lower between individuals of the same species (mean=0.0114; S.D. 0.0117) than between individuals of different species within the same group (mean=0.3166; S.D. 0.0378). This difference, known as the barcode gap, ensures that mtCOI sequences are reliable characters for species identification for the oceanic holozooplankton assemblage. In addition, DNA barcodes allow recognition of new or undescribed species, reveal cryptic species within known taxa, and inform phylogeographic and population genetic studies of geographic variation. The growing database of "gold standard" DNA barcodes serves as a Rosetta Stone for marine zooplankton, providing the key for decoding species diversity by linking species names, morphology, and DNA sequence variation. In light of the pivotal position of zooplankton in ocean food webs, their usefulness as rapid responders to environmental change, and the increasing scarcity of taxonomists, the use of DNA barcodes is an important and useful approach for rapid analysis of species diversity and distribution in the pelagic community.
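As a sketch of how the within-species and between-species distances reported above can be computed, the snippet below implements the Kimura two-parameter (K2P) distance for a pair of aligned sequences, d = -1/2 ln(1 - 2P - Q) - 1/4 ln(1 - 2Q), where P and Q are the observed transition and transversion proportions. The example fragments are made up and are not real COI barcodes.

```python
import math

PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def k2p_distance(seq1, seq2):
    """Kimura two-parameter distance between two aligned DNA sequences
    (sites with gaps or ambiguity codes are skipped)."""
    transitions = transversions = compared = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a not in "ACGT" or b not in "ACGT":
            continue
        compared += 1
        if a == b:
            continue
        if ({a, b} <= PURINES) or ({a, b} <= PYRIMIDINES):
            transitions += 1
        else:
            transversions += 1
    P = transitions / compared
    Q = transversions / compared
    return -0.5 * math.log(1 - 2 * P - Q) - 0.25 * math.log(1 - 2 * Q)

# Toy example with two short made-up fragments.
s1 = "ATGGCAACTCTATACCTAATCTTCGGTGCA"
s2 = "ATGGCGACTTTATACCTGATCTTCGGGGCA"
print(round(k2p_distance(s1, s2), 4))
```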
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N)·P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with randomly generated generator matrices, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft decision decoding, equivalent schemes that reduce the bit error probability are discussed.
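The approximation can be checked numerically on a small code. The sketch below, which is only an illustration and not the paper's analysis, Monte-Carlo simulates maximum-likelihood decoding of the systematically encoded (7,4) Hamming code over an AWGN channel and compares the measured bit error probability with (d_H/N)·P_s, reading d_H as the code's minimum Hamming distance (3 for this code).

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Systematic generator matrix of the (7,4) Hamming code (minimum distance d_H = 3).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0, 1]])
messages = np.array(list(itertools.product([0, 1], repeat=4)))
codebook = messages @ G % 2
bpsk_book = 1.0 - 2.0 * codebook            # BPSK mapping: 0 -> +1, 1 -> -1

def simulate(ebn0_db, n_blocks=100_000):
    sigma = np.sqrt(1.0 / (2.0 * (4 / 7) * 10 ** (ebn0_db / 10)))
    idx = rng.integers(0, len(codebook), n_blocks)
    rx = bpsk_book[idx] + sigma * rng.standard_normal((n_blocks, 7))
    dec = np.argmax(rx @ bpsk_book.T, axis=1)   # ML decoding = maximum correlation
    P_s = np.mean(dec != idx)                   # block error probability
    P_b = np.mean(messages[dec] != messages[idx])  # information-bit error probability
    return P_s, P_b

P_s, P_b = simulate(5.0)
print(f"P_s = {P_s:.4g},  P_b = {P_b:.4g},  (d_H/N)*P_s = {3 / 7 * P_s:.4g}")
```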
The decoding of Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.
1988-01-01
Reed-Solomon (RS) codes form an important part of the high-rate downlink telemetry system for the Magellan mission, and the RS decoding function for this project will be done by DSN. Although the basic idea behind all Reed-Solomon decoding algorithms was developed by Berlekamp in 1968, there are dozens of variants of Berlekamp's algorithm in current use. An attempt to restore order is made by presenting a mathematical theory which explains the working of almost all known RS decoding algorithms. The key innovation that makes this possible is the unified approach to the solution of the key equation, which simultaneously describes the Berlekamp, Berlekamp-Massey, Euclid, and continued fractions approaches. Additionally, a detailed analysis is made of what can happen to a generic RS decoding algorithm when the number of errors and erasures exceeds the code's designed correction capability, and it is shown that while most published algorithms do not detect as many of these error-erasure patterns as possible, by making a small change in the algorithms, this problem can be overcome.
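For readers unfamiliar with the "key equation" mentioned above, its standard textbook form is given below; the notation is the conventional one and may differ from that of the report. The Berlekamp, Berlekamp-Massey, Euclid, and continued-fraction methods are all procedures for solving this congruence.

```latex
% Key equation solved by Reed-Solomon decoders (textbook form):
% S(x): syndrome polynomial, \Lambda(x): error-locator polynomial,
% \Omega(x): error-evaluator polynomial, t: designed error-correction capability.
\Lambda(x)\, S(x) \equiv \Omega(x) \pmod{x^{2t}},
\qquad \deg \Omega(x) < \deg \Lambda(x) \le t .
```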
High-Throughput Bit-Serial LDPC Decoder LSI Based on Multiple-Valued Asynchronous Interleaving
NASA Astrophysics Data System (ADS)
Onizawa, Naoya; Hanyu, Takahiro; Gaudet, Vincent C.
This paper presents a high-throughput bit-serial low-density parity-check (LDPC) decoder that uses an asynchronous interleaver. Since consecutive log-likelihood message values on the interleaver are similar, node computations are continuously performed by using the most recently arrived messages without significantly affecting bit-error rate (BER) performance. In the asynchronous interleaver, each message's arrival rate is based on the delay due to the wire length, so that the decoding throughput is not restricted by the worst-case latency, which results in a higher average rate of computation. Moreover, the use of a multiple-valued data representation makes it possible to multiplex control signals and data from mutual nodes, thus minimizing the number of handshaking steps in the asynchronous interleaver and eliminating the clock signal entirely. As a result, the decoding throughput becomes 1.3 times faster than that of a bit-serial synchronous decoder under a 90nm CMOS technology, at a comparable BER.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chertkov, Michael; Stepanov, Mikhail
2007-01-10
The authors discuss performance of Low-Density Parity-Check (LDPC) codes decoded by Linear Programming (LP) decoding at moderate and large Signal-to-Noise Ratios (SNR). Frame-Error-Rate (FER) dependence on SNR and the noise-space landscape of the coding/decoding scheme are analyzed by a combination of the previously introduced instanton/pseudo-codeword-search method and a new 'dendro' trick. To reduce the complexity of LP decoding for a code with high-degree checks (≥ 5), they introduce its dendro-LDPC counterpart, that is, a code performing identically to the original one under Maximum-A-Posteriori (MAP) decoding but having reduced (down to three) check connectivity degree. Analyzing a number of popular LDPC codes and their dendro versions performing over the Additive White Gaussian Noise (AWGN) channel, they observed two qualitatively different regimes: (i) the error floor sets in early, at relatively low SNR, and (ii) FER decays faster with increasing SNR at moderate SNR than at the largest SNR. They explain these regimes in terms of the pseudo-codeword spectra of the codes.
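LP decoding of an LDPC code can be sketched with a generic LP solver. The toy example below uses the standard "fundamental polytope" relaxation (one inequality per odd-sized subset of each check's neighborhood), not the authors' dendro construction; the tiny parity-check matrix and channel log-likelihood ratios are made up for illustration.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Toy parity-check matrix (not a real LDPC code) and made-up channel LLRs.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])
llr = np.array([1.2, -0.3, 0.8, -1.5, 0.4, 2.0])   # positive values favour bit 0

n = H.shape[1]
A_ub, b_ub = [], []
for row in H:
    nbrs = np.flatnonzero(row)
    for r in range(1, len(nbrs) + 1, 2):            # every odd-sized subset S of the check
        for S in itertools.combinations(nbrs, r):
            a = np.zeros(n)
            a[list(S)] = 1.0
            a[[i for i in nbrs if i not in S]] = -1.0
            A_ub.append(a)                          # sum_S f_i - sum_{N\S} f_i <= |S| - 1
            b_ub.append(len(S) - 1)

res = linprog(c=llr, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, 1)] * n, method="highs")
print("LP optimum:", np.round(res.x, 3))
print("integral (ML certificate)" if np.allclose(res.x, np.round(res.x))
      else "fractional pseudo-codeword")
```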
Feature reconstruction of LFP signals based on PLSR in the neural information decoding study.
Yonghui Dong; Zhigang Shang; Mengmeng Li; Xinyu Liu; Hong Wan
2017-07-01
To address the problems of signal-to-noise ratio (SNR) and multicollinearity that arise when local field potential (LFP) signals are used for decoding animal motion intention, a feature reconstruction of LFP signals based on partial least squares regression (PLSR) is proposed in this paper for neural information decoding. First, the feature information of the LFP coding band is extracted based on the wavelet transform. Then the PLSR model is constructed from the extracted LFP coding features. According to the multicollinearity characteristics among the coding features, several latent variables which contribute greatly to the steering behavior are obtained, and the new LFP coding features are reconstructed. Finally, the K-Nearest Neighbor (KNN) method is used to classify the reconstructed coding features to verify the decoding performance. The results show that the proposed method achieves the highest accuracy compared to the other three methods, and its decoding effect is robust.
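A minimal sketch of the PLSR-plus-KNN pipeline described above, using scikit-learn on synthetic stand-in features (the collinear "wavelet-band" features are simulated with random numbers; the component and neighbour counts are illustrative, not the paper's settings):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for wavelet-band LFP features, deliberately collinear,
# with a binary steering label.
n_trials, n_features = 300, 40
latent = rng.standard_normal((n_trials, 3))
X = latent @ rng.standard_normal((3, n_features)) + 0.2 * rng.standard_normal((n_trials, n_features))
y = (latent[:, 0] + 0.5 * latent[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# PLSR extracts a few latent variables most predictive of the behaviour;
# these serve as the "reconstructed" features.
pls = PLSRegression(n_components=3)
pls.fit(X_tr, y_tr)
Z_tr, Z_te = pls.transform(X_tr), pls.transform(X_te)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(Z_tr, y_tr)
print("decoding accuracy:", knn.score(Z_te, y_te))
```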
Spiking Neural Network Decoder for Brain-Machine Interfaces.
Dethier, Julie; Gilja, Vikash; Nuyujukian, Paul; Elassaad, Shauki A; Shenoy, Krishna V; Boahen, Kwabena
2011-01-01
We used a spiking neural network (SNN) to decode neural data recorded from a 96-electrode array in premotor/motor cortex while a rhesus monkey performed a point-to-point reaching arm movement task. We mapped a Kalman-filter neural prosthetic decode algorithm, developed to predict the arm's velocity, onto the SNN using the Neural Engineering Framework and simulated it using Nengo, a freely available software package. A 20,000-neuron network matched the standard decoder's prediction to within 0.03% (normalized by maximum arm velocity). A 1,600-neuron version of this network was within 0.27% and ran in real time on a 3 GHz PC. These results demonstrate that an SNN can implement a statistical signal processing algorithm widely used as the decoder in high-performance neural prostheses (the Kalman filter), and achieve similar results with just a few thousand neurons. Hardware SNN implementations--neuromorphic chips--may offer power savings, essential for realizing fully implantable cortically controlled prostheses.
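The statistical algorithm that the SNN emulates, a Kalman-filter decoder predicting velocity from binned firing rates, can be sketched in a few lines of NumPy. This is a generic textbook Kalman filter on simulated data, not the Nengo/SNN mapping itself; dimensions, dynamics and noise levels are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated ground truth: 2D velocity following a damped random walk,
# observed through noisy linear "firing rates" of 30 units.
T, n_units = 400, 30
A = 0.95 * np.eye(2)                       # velocity dynamics
W = 0.05 * np.eye(2)                       # process noise covariance
C = rng.standard_normal((n_units, 2))      # tuning (observation) matrix
Q = 0.5 * np.eye(n_units)                  # observation noise covariance

v = np.zeros((T, 2))
for t in range(1, T):
    v[t] = A @ v[t - 1] + rng.multivariate_normal(np.zeros(2), W)
rates = v @ C.T + rng.multivariate_normal(np.zeros(n_units), Q, size=T)

# Kalman-filter decode of velocity from the rates.
x, P = np.zeros(2), np.eye(2)
decoded = np.zeros_like(v)
for t in range(T):
    x, P = A @ x, A @ P @ A.T + W                      # predict
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + Q)       # Kalman gain
    x = x + K @ (rates[t] - C @ x)                     # update with the observed rates
    P = (np.eye(2) - K @ C) @ P
    decoded[t] = x

print("correlation with true x-velocity:",
      round(float(np.corrcoef(decoded[:, 0], v[:, 0])[0, 1]), 3))
```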
"ON ALGEBRAIC DECODING OF Q-ARY REED-MULLER AND PRODUCT REED-SOLOMON CODES"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santhi, Nandakishore
We consider a list decoding algorithm recently proposed by Pellikaan and Wu for q-ary Reed-Muller codes RM_q(ℓ, m, n) of length n ≤ q^m when ℓ ≤ q. A simple and easily accessible correctness proof is given which shows that this algorithm achieves a relative error-correction radius of τ ≤ 1 − √(ℓ q^(m−1)/n). This is an improvement over the previously given proof that uses the one-point Algebraic-Geometric decoding method. The described algorithm can be adapted to decode product Reed-Solomon codes. We then propose a new low-complexity recursive algebraic decoding algorithm for product Reed-Solomon codes and Reed-Muller codes. This algorithm achieves a relative error-correction radius of τ ≤ ∏_{i=1}^{m} (1 − √(k_i/q)). This algorithm is then proved to outperform the Pellikaan-Wu algorithm in both complexity and error-correction radius over a wide range of code rates.
Matthews, Allison Jane; Martin, Frances Heritage
2015-12-01
To investigate facilitatory and inhibitory processes during selective attention among adults with good (n=17) and poor (n=14) phonological decoding skills, a go/nogo flanker task was completed while EEG was recorded. Participants responded to a middle target letter flanked by compatible or incompatible flankers. The target was surrounded by a small or large circular cue which was presented simultaneously or 500ms prior. Poor decoders showed a greater RT cost for incompatible stimuli preceded by large cues and less RT benefit for compatible stimuli. Poor decoders also showed reduced modulation of ERPs by cue-size at left hemisphere posterior sites (N1) and by flanker compatibility at right hemisphere posterior sites (N1) and frontal sites (N2), consistent with processing differences in fronto-parietal attention networks. These findings have potential implications for understanding the relationship between spatial attention and phonological decoding in dyslexia. Copyright © 2015 Elsevier Inc. All rights reserved.
Gu, Yong; Angelaki, Dora E; DeAngelis, Gregory C
2014-01-01
Trial-by-trial covariations between neural activity and perceptual decisions (quantified by choice probability, CP) have been used to probe the contribution of sensory neurons to perceptual decisions. CPs are thought to be determined both by selective decoding of neural activity and by the structure of correlated noise among neurons, but the respective roles of these factors in creating CPs have been controversial. We used biologically constrained simulations to explore this issue, taking advantage of a peculiar pattern of CPs exhibited by multisensory neurons in area MSTd that represent self-motion. Although models that relied on correlated noise or selective decoding could both account for the peculiar pattern of CPs, predictions of the selective decoding model were substantially more consistent with various features of the neural and behavioral data. While correlated noise is essential to observe CPs, our findings suggest that selective decoding of neuronal signals also plays an important role. DOI: http://dx.doi.org/10.7554/eLife.02670.001 PMID:24986734
16QAM transmission with 5.2 bits/s/Hz spectral efficiency over transoceanic distance.
Zhang, H; Cai, J-X; Batshon, H G; Davidson, C R; Sun, Y; Mazurczyk, M; Foursa, D G; Pilipetskii, A; Mohs, G; Bergano, Neal S
2012-05-21
We transmit 160 x 100 G PDM RZ 16 QAM channels with 5.2 bits/s/Hz spectral efficiency over 6,860 km. There are more than 3 billion 16 QAM symbols, i.e., 12 billion bits, processed in total. Using coded modulation and iterative decoding between a MAP decoder and an LDPC-based FEC, all channels are decoded with no remaining errors.
Individually Watermarked Information Distributed Scalable by Modified Transforms
2009-10-01
inverse of the secret transform is needed. Each trusted recipient has a unique inverse transform that is similar to the inverse of the original transform. The elements of this individual inverse transform are given by the individual descrambling key. After applying the individual inverse transform, the retrieved image is embedded with a recipient-individual watermark.
A four-dimensional virtual hand brain-machine interface using active dimension selection
NASA Astrophysics Data System (ADS)
Rouse, Adam G.
2016-06-01
Objective. Brain-machine interfaces (BMI) traditionally rely on a fixed, linear transformation from neural signals to an output state-space. In this study, the assumption that a BMI must control a fixed, orthogonal basis set was challenged and a novel active dimension selection (ADS) decoder was explored. Approach. ADS utilizes a two stage decoder by using neural signals to both (i) select an active dimension being controlled and (ii) control the velocity along the selected dimension. ADS decoding was tested in a monkey using 16 single units from premotor and primary motor cortex to successfully control a virtual hand avatar to move to eight different postures. Main results. Following training with the ADS decoder to control 2, 3, and then 4 dimensions, each emulating a grasp shape of the hand, performance reached 93% correct with a bit rate of 2.4 bits s-1 for eight targets. Selection of eight targets using ADS control was more efficient, as measured by bit rate, than either full four-dimensional control or computer assisted one-dimensional control. Significance. ADS decoding allows a user to quickly and efficiently select different hand postures. This novel decoding scheme represents a potential method to reduce the complexity of high-dimension BMI control of the hand.
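The two-stage structure of the ADS decoder can be illustrated with a toy simulation: one classifier picks which dimension is currently active from the neural features, and a separate regressor produces a velocity command along that dimension while the others are held still. This scikit-learn sketch on synthetic data is only a schematic of the idea, not the decoder used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(0)

n_samples, n_units, n_dims = 1000, 16, 4
active_dim = rng.integers(0, n_dims, n_samples)          # which dimension is being controlled
velocity = rng.standard_normal(n_samples)                # velocity along the active dimension

# Synthetic "firing rates": some structure encodes the active dimension, some the velocity.
U_dim = rng.standard_normal((n_dims, n_units))
U_vel = rng.standard_normal(n_units)
X = U_dim[active_dim] + np.outer(velocity, U_vel) + 0.3 * rng.standard_normal((n_samples, n_units))

# Stage 1: select the active dimension.  Stage 2: decode velocity along it.
dim_clf = LogisticRegression(max_iter=1000).fit(X, active_dim)
vel_reg = Ridge().fit(X, velocity)

def ads_decode(features):
    """Return a 4D velocity command with only the selected dimension non-zero."""
    d = int(dim_clf.predict(features[None, :])[0])
    cmd = np.zeros(n_dims)
    cmd[d] = float(vel_reg.predict(features[None, :])[0])
    return cmd

print(ads_decode(X[0]), "true dim:", active_dim[0])
```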
2017-01-01
Decoding neural activities related to voluntary and involuntary movements is fundamental to understanding human brain motor circuits and neuromotor disorders and can lead to the development of neuromotor prosthetic devices for neurorehabilitation. This study explores using recorded deep brain local field potentials (LFPs) for robust movement decoding of Parkinson's disease (PD) and Dystonia patients. The LFP data from voluntary movement activities such as left and right hand index finger clicking were recorded from patients who underwent surgeries for implantation of deep brain stimulation electrodes. Movement-related LFP signal features were extracted by computing instantaneous power related to motor response in different neural frequency bands. An innovative neural network ensemble classifier has been proposed and developed for accurate prediction of finger movement and its forthcoming laterality. The ensemble classifier contains three base neural network classifiers, namely, feedforward, radial basis, and probabilistic neural networks. The majority voting rule is used to fuse the decisions of the three base classifiers to generate the final decision of the ensemble classifier. The overall decoding performance reaches a level of agreement (kappa value) at about 0.729 ± 0.16 for decoding movement from the resting state and about 0.671 ± 0.14 for decoding left and right visually cued movements. PMID:29201041
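The majority-voting ensemble described above can be sketched with scikit-learn's VotingClassifier. Because radial-basis and probabilistic neural networks have no direct scikit-learn equivalents, an RBF-kernel SVM and Gaussian naive Bayes are used here as rough stand-ins alongside an MLP for the feed-forward network; the band-power features are synthetic, and Cohen's kappa is reported as in the study.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Synthetic stand-in for movement-related LFP band-power features with three classes
# (rest, left click, right click).
n_trials, n_features = 600, 24
y = rng.integers(0, 3, n_trials)
class_means = rng.standard_normal((3, n_features))
X = class_means[y] + rng.standard_normal((n_trials, n_features))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("ff", MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
        ("rbf", SVC(kernel="rbf")),          # stand-in for a radial-basis network
        ("pnn", GaussianNB()),               # stand-in for a probabilistic network
    ],
    voting="hard",                           # majority vote over the three base decisions
)
ensemble.fit(X_tr, y_tr)
print("kappa:", cohen_kappa_score(y_te, ensemble.predict(X_te)))
```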
Maurage, Pierre; Campanella, Salvatore; Philippot, Pierre; Charest, Ian; Martin, Sophie; de Timary, Philippe
2009-01-01
Emotional facial expression (EFE) decoding impairment has been repeatedly reported in alcoholism (e.g. Philippot et al., 1999). Nevertheless, several questions are still under debate concerning this alteration, notably its generalization to other emotional stimuli and its variation according to the emotional valence of stimuli. Eighteen recently detoxified alcoholic subjects and 18 matched controls performed a decoding test consisting in emotional intensity ratings on various stimuli (faces, voices, body postures and written scenarios) depicting different emotions (anger, fear, happiness, neutral, sadness). Perceived threat and difficulty were also assessed for each stimulus. Alcoholic individuals had a preserved decoding performance for happiness stimuli, but alcoholism was associated with an underestimation of sadness and fear, and with a general overestimation of anger. More importantly, these decoding impairments were observed for faces, voices and postures but not for written scenarios. We observed for the first time a generalized emotional decoding impairment in alcoholism, as this impairment is present not only for faces but also for other visual (i.e. body postures) and auditory stimuli. Moreover, we report that this alteration (1) is mainly indexed by an overestimation of anger and (2) cannot be explained by an 'affect labelling' impairment, as the semantic comprehension of written emotional scenarios is preserved. Fundamental and clinical implications are discussed.
Hybrid EEG-fNIRS-Based Eight-Command Decoding for BCI: Application to Quadcopter Control.
Khan, Muhammad Jawad; Hong, Keum-Shik
2017-01-01
In this paper, a hybrid electroencephalography-functional near-infrared spectroscopy (EEG-fNIRS) scheme to decode eight active brain commands from the frontal brain region for brain-computer interface is presented. A total of eight commands are decoded by fNIRS, as positioned on the prefrontal cortex, and by EEG, around the frontal, parietal, and visual cortices. Mental arithmetic, mental counting, mental rotation, and word formation tasks are decoded with fNIRS, in which the selected features for classification and command generation are the peak, minimum, and mean ΔHbO values within a 2-s moving window. In the case of EEG, two eyeblinks, three eyeblinks, and eye movement in the up/down and left/right directions are used for four-command generation. The features in this case are the number of peaks and the mean of the EEG signal during 1 s window. We tested the generated commands on a quadcopter in an open space. An average accuracy of 75.6% was achieved with fNIRS for four-command decoding and 86% with EEG for another four-command decoding. The testing results show the possibility of controlling a quadcopter online and in real-time using eight commands from the prefrontal and frontal cortices via the proposed hybrid EEG-fNIRS interface.
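The window-based features described above (peak, minimum and mean ΔHbO over a 2 s fNIRS window; number of peaks and mean over a 1 s EEG window) can be sketched as plain NumPy/SciPy functions. The sampling rates and the peak-detection threshold below are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.signal import find_peaks

def fnirs_features(dhbo, fs=10.0, window_s=2.0):
    """Peak, minimum and mean of a Delta-HbO trace over the most recent window."""
    seg = dhbo[-int(fs * window_s):]
    return np.array([seg.max(), seg.min(), seg.mean()])

def eeg_features(eeg, fs=256.0, window_s=1.0, peak_height=None):
    """Number of peaks and mean of an EEG trace over the most recent window."""
    seg = eeg[-int(fs * window_s):]
    peaks, _ = find_peaks(seg, height=peak_height)
    return np.array([len(peaks), seg.mean()])

# Made-up signals just to exercise the functions.
rng = np.random.default_rng(0)
t = np.arange(0, 4, 1 / 10.0)
dhbo = 0.5 * np.sin(2 * np.pi * 0.1 * t) + 0.05 * rng.standard_normal(t.size)
eeg = rng.standard_normal(int(256 * 2))
print(fnirs_features(dhbo), eeg_features(eeg, peak_height=1.0))
```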
Performance breakdown in optimal stimulus decoding
NASA Astrophysics Data System (ADS)
Kostal, Lubomir; Lansky, Petr; Pilarski, Stevan
2015-06-01
Objective. One of the primary goals of neuroscience is to understand how neurons encode and process information about their environment. The problem is often approached indirectly by examining the degree to which the neuronal response reflects the stimulus feature of interest. Approach. In this context, the methods of signal estimation and detection theory provide the theoretical limits on the decoding accuracy with which the stimulus can be identified. The Cramér-Rao lower bound on the decoding precision is widely used, since it can be evaluated easily once the mathematical model of the stimulus-response relationship is determined. However, little is known about the behavior of different decoding schemes with respect to the bound if the neuronal population size is limited. Main results. We show that under broad conditions the optimal decoding displays a threshold-like shift in performance in dependence on the population size. The onset of the threshold determines a critical range where a small increment in size, signal-to-noise ratio or observation time yields a dramatic gain in the decoding precision. Significance. We demonstrate the existence of such threshold regions in early auditory and olfactory information coding. We discuss the origin of the threshold effect and its impact on the design of effective coding approaches in terms of relevant population size.
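For reference, the Cramér-Rao bound invoked above states that any unbiased estimator of the stimulus from the population response is limited by the Fisher information; the standard textbook form is given below, with the additive decomposition in the last step holding only under the illustrative assumption of conditionally independent neurons.

```latex
% Cramer-Rao lower bound on decoding precision for an unbiased estimator \hat{\theta}
% of the stimulus \theta from the population response r.
\operatorname{Var}\!\left[\hat{\theta}\right] \;\ge\; \frac{1}{J(\theta)},
\qquad
J(\theta) \;=\; \mathbb{E}\!\left[\left(\frac{\partial \log p(\mathbf{r}\mid\theta)}{\partial\theta}\right)^{\!2}\right]
\;=\; \sum_{i=1}^{N} J_i(\theta)
\quad \text{(if the $N$ neurons are conditionally independent).}
```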
NASA Astrophysics Data System (ADS)
Yang, Qianli; Pitkow, Xaq
2015-03-01
Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.
NASA Astrophysics Data System (ADS)
Jo, Hyunho; Sim, Donggyu
2014-06-01
We present a bitstream decoding processor for entropy decoding of variable length coding-based multiformat videos. Since most of the computational complexity of entropy decoders comes from bitstream accesses and table look-up process, the developed bitstream processing unit (BsPU) has several designated instructions to access bitstreams and to minimize branch operations in the table look-up process. In addition, the instruction for bitstream access has the capability to remove emulation prevention bytes (EPBs) of H.264/AVC without initial delay, repeated memory accesses, and additional buffer. Experimental results show that the proposed method for EPB removal achieves a speed-up of 1.23 times compared to the conventional EPB removal method. In addition, the BsPU achieves speed-ups of 5.6 and 3.5 times in entropy decoding of H.264/AVC and MPEG-4 Visual bitstreams, respectively, compared to an existing processor without designated instructions and a new table mapping algorithm. The BsPU is implemented on a Xilinx Virtex5 LX330 field-programmable gate array. The MPEG-4 Visual (ASP, Level 5) and H.264/AVC (Main Profile, Level 4) are processed using the developed BsPU with a core clock speed of under 250 MHz in real time.
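Emulation-prevention-byte removal in H.264/AVC, mentioned above, replaces every 0x00 0x00 0x03 pattern in the coded byte stream with 0x00 0x00 before entropy decoding. The hardware unit described in the paper avoids a separate pass and buffer; the Python sketch below only shows the underlying byte-level rule for clarity and is not the proposed architecture.

```python
def remove_emulation_prevention_bytes(nal: bytes) -> bytes:
    """Strip H.264/AVC emulation prevention bytes: 00 00 03 -> 00 00."""
    out = bytearray()
    zeros = 0
    for b in nal:
        if zeros >= 2 and b == 0x03:
            zeros = 0            # drop the emulation prevention byte itself
            continue
        out.append(b)
        zeros = zeros + 1 if b == 0x00 else 0
    return bytes(out)

# Example: the payload bytes 00 00 00 01 were escaped to 00 00 03 00 01 by the encoder.
escaped = bytes([0x67, 0x00, 0x00, 0x03, 0x00, 0x01, 0x42])
print(remove_emulation_prevention_bytes(escaped).hex())   # 670000000142
```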
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-15
In order to improve the performance of the hard-decision decoding algorithm for non-binary low-density parity-check (LDPC) codes and to reduce the complexity of decoding, a sum-of-the-magnitude hard-decision decoding algorithm based on loop update detection is proposed. This will also ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitude is excluded for computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bit corresponding to the erroneous code word is flipped multiple times, searching in the order of most likely error probability to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.
Geva, Esther; Massey-Garrison, Angela
2013-01-01
The overall objective of this article is to examine how oral language abilities relate to reading profiles in English language learners (ELLs) and English as a first language (EL1) learners, and the extent of similarities and differences between ELLs and EL1s in three reading subgroups: normal readers, poor decoders, and poor comprehenders. The study included 100 ELLs and 50 EL1s in Grade 5. The effect of language group (ELL/EL1) and reading group on cognitive and linguistic skills was examined. Except for vocabulary, there was no language group effect on any measure. However, within ELL and EL1 alike, significant differences were found between reading groups: Normal readers outperformed the two other groups on all the oral language measures. Distinct cognitive and linguistic profiles were associated with poor decoders and poor comprehenders, regardless of language group. The ELL and EL1 poor decoders outperformed the poor comprehenders on listening comprehension and inferencing. The poor decoders displayed phonological-based weaknesses, whereas the poor comprehenders displayed a more generalized language processing weakness that is nonphonological in nature. Regardless of language status, students with poor decoding or comprehension problems display difficulties with various aspects of language.
Corbett, Elaine A; Perreault, Eric J; Körding, Konrad P
2012-06-01
Neuroprosthetic devices promise to allow paralyzed patients to perform the necessary functions of everyday life. However, to allow patients to use such tools it is necessary to decode their intent from neural signals such as electromyograms (EMGs). Because these signals are noisy, state of the art decoders integrate information over time. One systematic way of doing this is by taking into account the natural evolution of the state of the body--by using a so-called trajectory model. Here we use two insights about movements to enhance our trajectory model: (1) at any given time, there is a small set of likely movement targets, potentially identified by gaze; (2) reaches are produced at varying speeds. We decoded natural reaching movements using EMGs of muscles that might be available from an individual with spinal cord injury. Target estimates found from tracking eye movements were incorporated into the trajectory model, while a mixture model accounted for the inherent uncertainty in these estimates. Warping the trajectory model in time using a continuous estimate of the reach speed enabled accurate decoding of faster reaches. We found that the choice of richer trajectory models, such as those incorporating target or speed, improves decoding particularly when there is a small number of EMGs available.
Distributed Coding/Decoding Complexity in Video Sensor Networks
Cordeiro, Paulo J.; Assunção, Pedro
2012-01-01
Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality. PMID:22736972
Orientation decoding: Sense in spirals?
Clifford, Colin W G; Mannion, Damien J
2015-04-15
The orientation of a visual stimulus can be successfully decoded from the multivariate pattern of fMRI activity in human visual cortex. Whether this capacity requires coarse-scale orientation biases is controversial. We and others have advocated the use of spiral stimuli to eliminate a potential coarse-scale bias-the radial bias toward local orientations that are collinear with the centre of gaze-and hence narrow down the potential coarse-scale biases that could contribute to orientation decoding. The usefulness of this strategy is challenged by the computational simulations of Carlson (2014), who reported the ability to successfully decode spirals of opposite sense (opening clockwise or counter-clockwise) from the pooled output of purportedly unbiased orientation filters. Here, we elaborate the mathematical relationship between spirals of opposite sense to confirm that they cannot be discriminated on the basis of the pooled output of unbiased or radially biased orientation filters. We then demonstrate that Carlson's (2014) reported decoding ability is consistent with the presence of inadvertent biases in the set of orientation filters; biases introduced by their digital implementation and unrelated to the brain's processing of orientation. These analyses demonstrate that spirals must be processed with an orientation bias other than the radial bias for successful decoding of spiral sense. Copyright © 2014 Elsevier Inc. All rights reserved.
Good Trellises for IC Implementation of Viterbi Decoders for Linear Block Codes
NASA Technical Reports Server (NTRS)
Moorthy, Hari T.; Lin, Shu; Uehara, Gregory T.
1997-01-01
This paper investigates trellis structures of linear block codes for the integrated circuit (IC) implementation of Viterbi decoders capable of achieving high decoding speed while satisfying a constraint on the structural complexity of the trellis in terms of the maximum number of states at any particular depth. Only uniform sectionalizations of the code trellis diagram are considered. An upper bound on the number of parallel and structurally identical (or isomorphic) subtrellises in a proper trellis for a code, without exceeding the maximum state complexity of the minimal trellis of the code, is first derived. Parallel structures of trellises with various section lengths for binary BCH and Reed-Muller (RM) codes of lengths 32 and 64 are analyzed. Next, the complexity of IC implementation of a Viterbi decoder based on an L-section trellis diagram for a code is investigated. A structural property of a Viterbi decoder called add-compare-select (ACS) connectivity, which is related to state connectivity, is introduced. This parameter affects the complexity of wire routing (interconnections within the IC). The effect of five parameters, namely: (1) effective computational complexity; (2) complexity of the ACS circuit; (3) traceback complexity; (4) ACS connectivity; and (5) branch complexity of a trellis diagram, on the very large scale integration (VLSI) complexity of a Viterbi decoder is investigated. It is shown that an IC implementation of a Viterbi decoder based on a nonminimal trellis requires less area and is capable of operation at higher speed than one based on the minimal trellis when the commonly used ACS-array architecture is considered.
Neuroprosthetic Decoder Training as Imitation Learning.
Merel, Josh; Carlson, David; Paninski, Liam; Cunningham, John P
2016-05-01
Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user's intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user's intended movement. Here we show that training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector.
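A DAgger-style training loop of the kind described above can be summarized in schematic Python: at each round the current decoder drives the effector, an oracle (here, a known intended-velocity signal standing in for the surrogate target) labels the visited states, the labelled data are aggregated across rounds, and the decoder is refit. This is a generic illustration of dataset aggregation for a linear decoder on simulated data, not the authors' specific algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 40

# Hidden "encoding": a 2D intended velocity drives the activity of 40 units.
C = rng.standard_normal((2, n_units))

def record_activity(intent):
    """Simulated population activity for a 2D intended cursor velocity."""
    return intent @ C + 0.1 * rng.standard_normal(n_units)

def run_episode(D, target, steps=50, gain=0.1):
    """Drive the cursor with decoder D; label every visited state with the oracle velocity."""
    pos = np.zeros(2)
    feats, labels = [], []
    for _ in range(steps):
        intent = target - pos                 # oracle/surrogate: head straight for the target
        x = record_activity(intent)
        feats.append(x)
        labels.append(intent)
        pos = pos + gain * (x @ D)            # the cursor follows the *decoded* velocity
    return np.array(feats), np.array(labels)

# DAgger-style loop: aggregate data across rounds and refit the decoder each round.
D = 0.01 * rng.standard_normal((n_units, 2))  # poor initial decoder
X_all, Y_all = np.empty((0, n_units)), np.empty((0, 2))
for rnd in range(5):
    X, Y = run_episode(D, target=rng.uniform(-1, 1, size=2))
    X_all, Y_all = np.vstack([X_all, X]), np.vstack([Y_all, Y])
    D, *_ = np.linalg.lstsq(X_all, Y_all, rcond=None)
    err = np.linalg.norm(X_all @ D - Y_all) / np.linalg.norm(Y_all)
    print(f"round {rnd}: relative fit error {err:.3f}")
```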
Decoding Speech With Integrated Hybrid Signals Recorded From the Human Ventral Motor Cortex.
Ibayashi, Kenji; Kunii, Naoto; Matsuo, Takeshi; Ishishita, Yohei; Shimada, Seijiro; Kawai, Kensuke; Saito, Nobuhito
2018-01-01
Restoration of speech communication for locked-in patients by means of brain computer interfaces (BCIs) is currently an important area of active research. Among the neural signals obtained from intracranial recordings, single/multi-unit activity (SUA/MUA), local field potential (LFP), and electrocorticography (ECoG) are good candidates for an input signal for BCIs. However, the question of which signal or which combination of the three signal modalities is best suited for decoding speech production remains unverified. In order to record SUA, LFP, and ECoG simultaneously from a highly localized area of human ventral sensorimotor cortex (vSMC), we fabricated an electrode the size of which was 7 by 13 mm containing sparsely arranged microneedle and conventional macro contacts. We determined which signal modality is the most capable of decoding speech production, and tested if the combination of these signals could improve the decoding accuracy of spoken phonemes. Feature vectors were constructed from spike frequency obtained from SUAs and event-related spectral perturbation derived from ECoG and LFP signals, then input to the decoder. The results showed that the decoding accuracy for five spoken vowels was highest when features from multiple signals were combined and optimized for each subject, and reached 59% when averaged across all six subjects. This result suggests that multi-scale signals convey complementary information for speech articulation. The current study demonstrated that simultaneous recording of multi-scale neuronal activities could raise decoding accuracy even though the recording area is limited to a small portion of cortex, which is advantageous for future implementation of speech-assisting BCIs.
Towards a symbiotic brain-computer interface: exploring the application-decoder interaction
NASA Astrophysics Data System (ADS)
Verhoeven, T.; Buteneers, P.; Wiersema, J. R.; Dambre, J.; Kindermans, P. J.
2015-12-01
Objective. State-of-the-art brain-computer interface (BCI) research focuses on improving individual components such as the application or the decoder that converts the user's brain activity to control signals. In this study, we investigate the interaction between these components in the P300 speller, a BCI for communication. We introduce a synergistic approach in which the stimulus presentation sequence is modified to enhance the machine learning decoding. In this way we aim for an improved overall BCI performance. Approach. First, a new stimulus presentation paradigm is introduced which gives us flexibility in tuning the sequence of visual stimuli presented to the user. Next, an experimental setup in which this paradigm is compared to other paradigms uncovers the underlying mechanism of the interdependence between the application and the performance of the decoder. Main results. Extensive analysis of the experimental results reveals the changing requirements of the decoder concerning the data recorded during the spelling session. When little data has been recorded, the balance between the number of target and non-target stimuli shown to the user is more important than the signal-to-noise ratio (SNR) of the recorded response signals. Only when more data have been collected does the SNR become the dominant factor. Significance. For BCIs in general, knowing the dominant factor that affects decoder performance and being able to respond to it is of utmost importance for improving system performance. For the P300 speller, the proposed tunable paradigm offers the possibility to tune the application to the decoder's needs at any time and, as such, fully exploit this application-decoder interaction.
Brain Decoding-Classification of Hand Written Digits from fMRI Data Employing Bayesian Networks
Yargholi, Elahe'; Hossein-Zadeh, Gholam-Ali
2016-01-01
We are frequently exposed to hand written digits 0–9 in today's modern life. Success in decoding-classification of hand written digits helps us understand the corresponding brain mechanisms and processes and assists seriously in designing more efficient brain–computer interfaces. However, all digits belong to the same semantic category and similarity in appearance of hand written digits makes this decoding-classification a challenging problem. In present study, for the first time, augmented naïve Bayes classifier is used for classification of functional Magnetic Resonance Imaging (fMRI) measurements to decode the hand written digits which took advantage of brain connectivity information in decoding-classification. fMRI was recorded from three healthy participants, with an age range of 25–30. Results in different brain lobes (frontal, occipital, parietal, and temporal) show that utilizing connectivity information significantly improves decoding-classification and capability of different brain lobes in decoding-classification of hand written digits were compared to each other. In addition, in each lobe the most contributing areas and brain connectivities were determined and connectivities with short distances between their endpoints were recognized to be more efficient. Moreover, data driven method was applied to investigate the similarity of brain areas in responding to stimuli and this revealed both similarly active areas and active mechanisms during this experiment. Interesting finding was that during the experiment of watching hand written digits, there were some active networks (visual, working memory, motor, and language processing), but the most relevant one to the task was language processing network according to the voxel selection. PMID:27468261
Continuous decoding of human grasp kinematics using epidural and subdural signals
NASA Astrophysics Data System (ADS)
Flint, Robert D.; Rosenow, Joshua M.; Tate, Matthew C.; Slutzky, Marc W.
2017-02-01
Objective. Restoring or replacing function in paralyzed individuals will one day be achieved through the use of brain-machine interfaces. Regaining hand function is a major goal for paralyzed patients. Two competing prerequisites for the widespread adoption of any hand neuroprosthesis are accurate control over the fine details of movement, and minimized invasiveness. Here, we explore the interplay between these two goals by comparing our ability to decode hand movements with subdural and epidural field potentials (EFPs). Approach. We measured the accuracy of decoding continuous hand and finger kinematics during naturalistic grasping motions in five human subjects. We recorded subdural surface potentials (electrocorticography; ECoG) as well as EFPs, with both standard- and high-resolution electrode arrays. Main results. In all five subjects, decoding of continuous kinematics significantly exceeded chance, using either ECoG or EFPs. ECoG decoding accuracy compared favorably with prior investigations of grasp kinematics (mean ± SD grasp aperture variance accounted for was 0.54 ± 0.05 across all subjects, 0.75 ± 0.09 for the best subject). In general, EFP decoding performed comparably to ECoG decoding. The 7-20 Hz and 70-115 Hz spectral bands contained the most information about grasp kinematics, with the 70-115 Hz band containing greater information about more subtle movements. Higher-resolution recording arrays provided clearly superior performance compared to standard-resolution arrays. Significance. To approach the fine motor control achieved by an intact brain-body system, it will be necessary to execute motor intent on a continuous basis with high accuracy. The current results demonstrate that this level of accuracy might be achievable not just with ECoG, but with EFPs as well. Epidural placement of electrodes is less invasive, and therefore may incur less risk of encephalitis or stroke than subdural placement of electrodes. Accurately decoding motor commands at the epidural level may be an important step towards a clinically viable brain-machine interface.
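A schematic of the decoding analysis summarized above: band-limited power features (e.g., 7-20 Hz and 70-115 Hz) are extracted from each channel and regressed against the continuous grasp aperture, and accuracy is reported as variance accounted for (VAF). Everything below is synthetic and relies on generic SciPy/scikit-learn tools; it is not the authors' pipeline, and the coupling between aperture and high-gamma amplitude is an invented toy model.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs, n_ch, n_samp = 500, 8, 20_000
t = np.arange(n_samp) / fs

# Synthetic grasp aperture (slow, positive) and field potentials whose
# high-gamma amplitude is modulated by the aperture.
aperture = 1.5 + np.sin(2 * np.pi * 0.2 * t)
b_g, a_g = butter(4, [70 / (fs / 2), 115 / (fs / 2)], btype="band")
gamma = filtfilt(b_g, a_g, rng.standard_normal((n_ch, n_samp)), axis=-1)
lfp = gamma * aperture + 0.5 * rng.standard_normal((n_ch, n_samp))

def band_power(sig, lo, hi, win=125):
    """Smoothed band-limited power, per channel."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    power = filtfilt(b, a, sig, axis=-1) ** 2
    kernel = np.ones(win) / win
    return np.apply_along_axis(np.convolve, -1, power, kernel, mode="same")

# Features: low-frequency and high-gamma band power on every channel.
features = np.vstack([band_power(lfp, 7, 20), band_power(lfp, 70, 115)]).T

X_tr, X_te, y_tr, y_te = train_test_split(features, aperture, test_size=0.3, shuffle=False)
pred = Ridge().fit(X_tr, y_tr).predict(X_te)
vaf = 1 - np.var(y_te - pred) / np.var(y_te)
print(f"variance accounted for: {vaf:.2f}")
```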
Continuous decoding of human grasp kinematics using epidural and subdural signals
Flint, Robert D.; Rosenow, Joshua M.; Tate, Matthew C.; Slutzky, Marc W.
2017-01-01
Objective Restoring or replacing function in paralyzed individuals will one day be achieved through the use of brain-machine interfaces (BMIs). Regaining hand function is a major goal for paralyzed patients. Two competing prerequisites for the widespread adoption of any hand neuroprosthesis are: accurate control over the fine details of movement, and minimized invasiveness. Here, we explore the interplay between these two goals by comparing our ability to decode hand movements with subdural and epidural field potentials. Approach We measured the accuracy of decoding continuous hand and finger kinematics during naturalistic grasping motions in five human subjects. We recorded subdural surface potentials (electrocorticography; ECoG) as well as epidural field potentials (EFPs), with both standard- and high-resolution electrode arrays. Main results In all five subjects, decoding of continuous kinematics significantly exceeded chance, using either ECoG or EFPs. ECoG decoding accuracy compared favorably with prior investigations of grasp kinematics (mean ± SD grasp aperture variance accounted for was 0.54 ± 0.05 across all subjects, 0.75 ± 0.09 for the best subject). In general, EFP decoding performed comparably to ECoG decoding. The 7–20 Hz and 70–115 Hz spectral bands contained the most information about grasp kinematics, with the 70–115 Hz band containing greater information about more subtle movements. Higher-resolution recording arrays provided clearly superior performance compared to standard-resolution arrays. Significance To approach the fine motor control achieved by an intact brain-body system, it will be necessary to execute motor intent on a continuous basis with high accuracy. The current results demonstrate that this level of accuracy might be achievable not just with ECoG, but with EFPs as well. Epidural placement of electrodes is less invasive, and therefore may incur less risk of encephalitis or stroke than subdural placement of electrodes. Accurately decoding motor commands at the epidural level may be an important step towards a clinically viable brain-machine interface. PMID:27900947
Parietal neural prosthetic control of a computer cursor in a graphical-user-interface task
NASA Astrophysics Data System (ADS)
Revechkis, Boris; Aflalo, Tyson NS; Kellis, Spencer; Pouratian, Nader; Andersen, Richard A.
2014-12-01
Objective. To date, the majority of Brain-Machine Interfaces have been used to perform simple tasks with sequences of individual targets in otherwise blank environments. In this study we developed a more practical and clinically relevant task that approximated modern computers and graphical user interfaces (GUIs). This task could be problematic given the known sensitivity of areas typically used for BMIs to visual stimuli, eye movements, decision-making, and attentional control. Consequently, we sought to assess the effect of a complex, GUI-like task on the quality of neural decoding. Approach. A male rhesus macaque monkey was implanted with two 96-channel electrode arrays in area 5d of the superior parietal lobule. The animal was trained to perform a GUI-like ‘Face in a Crowd’ task on a computer screen that required selecting one cued, icon-like, face image from a group of alternatives (the ‘Crowd’) using a neurally controlled cursor. We assessed whether the crowd affected decodes of intended cursor movements by comparing it to a ‘Crowd Off’ condition in which only the matching target appeared without alternatives. We also examined if training a neural decoder with the Crowd On rather than Off had any effect on subsequent decode quality. Main results. Despite the additional demands of working with the Crowd On, the animal was able to robustly perform the task under Brain Control. The presence of the crowd did not itself affect decode quality. Training the decoder with the Crowd On relative to Off had no negative influence on subsequent decoding performance. Additionally, the subject was able to gaze around freely without influencing cursor position. Significance. Our results demonstrate that area 5d recordings can be used for decoding in a complex, GUI-like task with free gaze. Thus, this area is a promising source of signals for neural prosthetics that utilize computing devices with GUI interfaces, e.g. personal computers, mobile devices, and tablet computers.
Parietal neural prosthetic control of a computer cursor in a graphical-user-interface task.
Revechkis, Boris; Aflalo, Tyson N S; Kellis, Spencer; Pouratian, Nader; Andersen, Richard A
2014-12-01
To date, the majority of Brain-Machine Interfaces have been used to perform simple tasks with sequences of individual targets in otherwise blank environments. In this study we developed a more practical and clinically relevant task that approximated modern computers and graphical user interfaces (GUIs). This task could be problematic given the known sensitivity of areas typically used for BMIs to visual stimuli, eye movements, decision-making, and attentional control. Consequently, we sought to assess the effect of a complex, GUI-like task on the quality of neural decoding. A male rhesus macaque monkey was implanted with two 96-channel electrode arrays in area 5d of the superior parietal lobule. The animal was trained to perform a GUI-like 'Face in a Crowd' task on a computer screen that required selecting one cued, icon-like, face image from a group of alternatives (the 'Crowd') using a neurally controlled cursor. We assessed whether the crowd affected decodes of intended cursor movements by comparing it to a 'Crowd Off' condition in which only the matching target appeared without alternatives. We also examined if training a neural decoder with the Crowd On rather than Off had any effect on subsequent decode quality. Despite the additional demands of working with the Crowd On, the animal was able to robustly perform the task under Brain Control. The presence of the crowd did not itself affect decode quality. Training the decoder with the Crowd On relative to Off had no negative influence on subsequent decoding performance. Additionally, the subject was able to gaze around freely without influencing cursor position. Our results demonstrate that area 5d recordings can be used for decoding in a complex, GUI-like task with free gaze. Thus, this area is a promising source of signals for neural prosthetics that utilize computing devices with GUI interfaces, e.g. personal computers, mobile devices, and tablet computers.
Yoles-Frenkel, Michal; Kahan, Anat; Ben-Shaul, Yoram
2018-05-23
The vomeronasal system (VNS) is a major vertebrate chemosensory system that functions in parallel to the main olfactory system (MOS). Despite many similarities, the two systems dramatically differ in the temporal domain. While MOS responses are governed by breathing and follow a subsecond temporal scale, VNS responses are uncoupled from breathing and evolve over seconds. This suggests that the contribution of response dynamics to stimulus information will differ between these systems. While temporal dynamics in the MOS are widely investigated, similar analyses in the accessory olfactory bulb (AOB) are lacking. Here, we have addressed this issue using controlled stimulus delivery to the vomeronasal organ of male and female mice. We first analyzed the temporal properties of AOB projection neurons and demonstrated that neurons display prolonged, variable, and neuron-specific characteristics. We then analyzed various decoding schemes using AOB population responses. We showed that compared with the simplest scheme (i.e., integration of spike counts over the entire response period), the division of this period into smaller temporal bins actually yields poorer decoding accuracy. However, optimal classification accuracy can be achieved well before the end of the response period by integrating spike counts within temporally defined windows. Since VNS stimulus uptake is variable, we analyzed decoding using limited information about stimulus uptake time, and showed that with enough neurons, such time-invariant decoding is feasible. Finally, we conducted simulations that demonstrated that, unlike the main olfactory bulb, the temporal features of AOB neurons disfavor decoding with high temporal accuracy, and, rather, support decoding without precise knowledge of stimulus uptake time. SIGNIFICANCE STATEMENT A key goal in sensory system research is to identify which metrics of neuronal activity are relevant for decoding stimulus features. Here, we describe the first systematic analysis of temporal coding in the vomeronasal system (VNS), a chemosensory system devoted to socially relevant cues. Compared with the main olfactory system, timescales of VNS function are inherently slower and variable. Using various analyses of real and simulated data, we show that the consideration of response times relative to stimulus uptake can aid the decoding of stimulus information from neuronal activity. However, response properties of accessory olfactory bulb neurons favor decoding schemes that do not rely on the precise timing of stimulus uptake. Such schemes are consistent with the variable nature of VNS stimulus uptake. Copyright © 2018 the authors 0270-6474/18/384957-20$15.00/0.
Self-Configuration and Localization in Ad Hoc Wireless Sensor Networks
2010-08-31
Goddard I. SUMMARY OF CONTRIBUTIONS We explored the error mechanisms of iterative decoding of low-density parity-check (LDPC) codes. This work has resulted ... important problems in the area of channel coding, as their unpredictable behavior has impeded the deployment of LDPC codes in many real-world applications. We ... tree-based decoders of LDPC codes, including the extrinsic tree decoder, and an investigation into their performance and bounding capabilities [5], [6]
Iterative deep convolutional encoder-decoder network for medical image segmentation.
Jung Uk Kim; Hak Gu Kim; Yong Man Ro
2017-07-01
In this paper, we propose a novel medical image segmentation method using an iterative deep learning framework. We combine an iterative learning approach and an encoder-decoder network to improve segmentation results, which makes it possible to precisely localize regions of interest (ROIs), including complex shapes or detailed textures of medical images, in an iterative manner. The proposed iterative deep convolutional encoder-decoder network consists of two main paths: a convolutional encoder path and a convolutional decoder path with iterative learning. Experimental results show that the proposed iterative deep learning framework is able to yield excellent medical image segmentation performance for various medical images. The effectiveness of the proposed method is demonstrated by comparison with other state-of-the-art medical image segmentation methods.
Fast decoding techniques for extended single- and double-error-correcting Reed Solomon codes
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Deng, H.; Lin, S.
1984-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K x 8 bit-bytes. Byte-oriented codes such as Reed Solomon (RS) codes provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special high-speed decoding techniques are presented for extended single- and double-error-correcting RS codes. These techniques are designed to find the error locations and the error values directly from the syndrome without having to form the error locator polynomial and solve for its roots.
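To make the idea concrete, the following sketch (an illustration of the general principle, not the techniques from the report) shows how a single error's location and value can be read directly from two syndromes of a Reed-Solomon code over GF(2^8); because every codeword evaluates to zero at the code roots, the syndromes depend only on the error pattern. The primitive polynomial, the error position and the error value are arbitrary choices.

```python
# Minimal sketch: recover a single error's position and value directly from two
# Reed-Solomon syndromes over GF(2^8), without forming an error-locator polynomial.
PRIM = 0x11d  # a conventional primitive polynomial for GF(2^8); an assumption

# Build exponential/log tables for GF(2^8).
EXP = [0] * 512
LOG = [0] * 256
x = 1
for k in range(255):
    EXP[k] = x
    LOG[x] = k
    x <<= 1
    if x & 0x100:
        x ^= PRIM
for k in range(255, 512):
    EXP[k] = EXP[k - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def gf_div(a, b):
    return EXP[(LOG[a] - LOG[b]) % 255]

# Single error: value e at position i of a length-255 codeword.  Since the
# codeword itself contributes zero to the syndromes, we can work with the error alone.
i, e = 42, 0x5A
S1 = gf_mul(e, EXP[i % 255])        # R(alpha)   = e * alpha^i
S2 = gf_mul(e, EXP[(2 * i) % 255])  # R(alpha^2) = e * alpha^(2i)

# Location and value fall straight out of the two syndromes.
loc = (LOG[S2] - LOG[S1]) % 255     # log(S2/S1) = i
val = gf_div(gf_mul(S1, S1), S2)    # S1^2 / S2  = e
assert (loc, val) == (i, e)
print("recovered error position", loc, "and value", hex(val))
```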
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1977-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
NASA Technical Reports Server (NTRS)
Lee, L. N.
1976-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
Feature Selection Methods for Robust Decoding of Finger Movements in a Non-human Primate
Padmanaban, Subash; Baker, Justin; Greger, Bradley
2018-01-01
Objective: The performance of machine learning algorithms used for neural decoding of dexterous tasks may be impeded due to problems arising when dealing with high-dimensional data. The objective of feature selection algorithms is to choose a near-optimal subset of features from the original feature space to improve the performance of the decoding algorithm. The aim of our study was to compare the effects of four feature selection techniques, Wilcoxon signed-rank test, Relative Importance, Principal Component Analysis (PCA), and Mutual Information Maximization on SVM classification performance for a dexterous decoding task. Approach: A nonhuman primate (NHP) was trained to perform small coordinated movements—similar to typing. An array of microelectrodes was implanted in the hand area of the motor cortex of the NHP and used to record action potentials (AP) during finger movements. A Support Vector Machine (SVM) was used to classify which finger movement the NHP was making based upon AP firing rates. We used the SVM classification to examine the functional parameters of (i) robustness to simulated failure and (ii) longevity of classification. We also compared the effect of using isolated-neuron and multi-unit firing rates as the feature vector supplied to the SVM. Main results: The average decoding accuracy for multi-unit features and single-unit features using Mutual Information Maximization (MIM) across 47 sessions was 96.74 ± 3.5% and 97.65 ± 3.36%, respectively. The reduction in decoding accuracy between using 100% of the features and 10% of features based on MIM was 45.56% (from 93.7 to 51.09%) and 4.75% (from 95.32 to 90.79%) for multi-unit and single-unit features, respectively. MIM had the best performance among the feature selection methods compared. Significance: These results suggest improved decoding performance can be achieved by using optimally selected features. The results based on clinically relevant performance metrics also suggest that the decoding algorithm can be made robust by using optimal features and feature selection algorithms. We believe that even a few percent increase in performance is important and improves the decoding accuracy of the machine learning algorithm, potentially increasing the ease of use of a brain machine interface. PMID:29467602
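A minimal sketch of the kind of pipeline described above, using scikit-learn: score each unit by mutual information with the movement label, keep the top-ranked units, and cross-validate a linear SVM. The synthetic firing rates, the number of units and the 10% cut-off are assumptions for illustration only.

```python
# Hedged sketch of mutual-information-based feature selection followed by an SVM.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_units, n_classes = 300, 120, 5          # e.g. 5 finger movements
y = rng.integers(0, n_classes, size=n_trials)
X = rng.poisson(5.0, size=(n_trials, n_units)).astype(float)
X[:, :15] += 3.0 * y[:, None]                       # make a few units informative

# Mutual Information Maximization: score each unit, keep the top 10%.
mi = mutual_info_classif(X, y, random_state=0)
keep = np.argsort(mi)[::-1][: max(1, n_units // 10)]

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X[:, keep], y, cv=5).mean()
print(f"kept {keep.size} units, cross-validated accuracy = {acc:.2f}")
```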
A Bidirectional Brain-Machine Interface Featuring a Neuromorphic Hardware Decoder.
Boi, Fabio; Moraitis, Timoleon; De Feo, Vito; Diotalevi, Francesco; Bartolozzi, Chiara; Indiveri, Giacomo; Vato, Alessandro
2016-01-01
Bidirectional brain-machine interfaces (BMIs) establish a two-way direct communication link between the brain and the external world. A decoder translates recorded neural activity into motor commands and an encoder delivers sensory information collected from the environment directly to the brain creating a closed-loop system. These two modules are typically integrated in bulky external devices. However, the clinical support of patients with severe motor and sensory deficits requires compact, low-power, and fully implantable systems that can decode neural signals to control external devices. As a first step toward this goal, we developed a modular bidirectional BMI setup that uses a compact neuromorphic processor as a decoder. On this chip we implemented a network of spiking neurons built using its ultra-low-power mixed-signal analog/digital circuits. On-chip on-line spike-timing-dependent plasticity synapse circuits enabled the network to learn to decode neural signals recorded from the brain into motor outputs controlling the movements of an external device. The modularity of the BMI allowed us to tune the individual components of the setup without modifying the whole system. In this paper, we present the features of this modular BMI and describe how we configured the network of spiking neuron circuits to implement the decoder and to coordinate it with the encoder in an experimental BMI paradigm that connects bidirectionally the brain of an anesthetized rat with an external object. We show that the chip learned the decoding task correctly, allowing the interfaced brain to control the object's trajectories robustly. Based on our demonstration, we propose that neuromorphic technology is mature enough for the development of BMI modules that are sufficiently low-power and compact, while being highly computationally powerful and adaptive.
A Bidirectional Brain-Machine Interface Featuring a Neuromorphic Hardware Decoder
Boi, Fabio; Moraitis, Timoleon; De Feo, Vito; Diotalevi, Francesco; Bartolozzi, Chiara; Indiveri, Giacomo; Vato, Alessandro
2016-01-01
Bidirectional brain-machine interfaces (BMIs) establish a two-way direct communication link between the brain and the external world. A decoder translates recorded neural activity into motor commands and an encoder delivers sensory information collected from the environment directly to the brain creating a closed-loop system. These two modules are typically integrated in bulky external devices. However, the clinical support of patients with severe motor and sensory deficits requires compact, low-power, and fully implantable systems that can decode neural signals to control external devices. As a first step toward this goal, we developed a modular bidirectional BMI setup that uses a compact neuromorphic processor as a decoder. On this chip we implemented a network of spiking neurons built using its ultra-low-power mixed-signal analog/digital circuits. On-chip on-line spike-timing-dependent plasticity synapse circuits enabled the network to learn to decode neural signals recorded from the brain into motor outputs controlling the movements of an external device. The modularity of the BMI allowed us to tune the individual components of the setup without modifying the whole system. In this paper, we present the features of this modular BMI and describe how we configured the network of spiking neuron circuits to implement the decoder and to coordinate it with the encoder in an experimental BMI paradigm that connects bidirectionally the brain of an anesthetized rat with an external object. We show that the chip learned the decoding task correctly, allowing the interfaced brain to control the object's trajectories robustly. Based on our demonstration, we propose that neuromorphic technology is mature enough for the development of BMI modules that are sufficiently low-power and compact, while being highly computationally powerful and adaptive. PMID:28018162
Deep learning with convolutional neural networks for EEG decoding and visualization
Springenberg, Jost Tobias; Fiederer, Lukas Dominique Josef; Glasstetter, Martin; Eggensperger, Katharina; Tangermann, Michael; Hutter, Frank; Burgard, Wolfram; Ball, Tonio
2017-01-01
Abstract Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end‐to‐end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end‐to‐end EEG analysis, but a better understanding of how to design and train ConvNets for end‐to‐end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task‐related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG‐based brain mapping. Hum Brain Mapp 38:5391–5420, 2017. © 2017 Wiley Periodicals, Inc. PMID:28782865
Deep learning with convolutional neural networks for EEG decoding and visualization.
Schirrmeister, Robin Tibor; Springenberg, Jost Tobias; Fiederer, Lukas Dominique Josef; Glasstetter, Martin; Eggensperger, Katharina; Tangermann, Michael; Hutter, Frank; Burgard, Wolfram; Ball, Tonio
2017-11-01
Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end-to-end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end-to-end EEG analysis, but a better understanding of how to design and train ConvNets for end-to-end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task-related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG-based brain mapping. Hum Brain Mapp 38:5391-5420, 2017. © 2017 Wiley Periodicals, Inc. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
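For readers who want a starting point, the sketch below builds a deliberately small ConvNet for raw EEG in PyTorch with the ingredients mentioned above (a temporal convolution followed by a spatial convolution, batch normalization, and exponential linear units). The layer sizes, channel count and four-class output are assumptions and do not reproduce the architectures evaluated in the paper.

```python
# Illustrative sketch only: a tiny ConvNet for raw EEG, not the paper's model.
import torch
import torch.nn as nn

class TinyEEGNet(nn.Module):
    def __init__(self, n_chans=22, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 25, kernel_size=(1, 11), padding=(0, 5)),   # temporal filter
            nn.Conv2d(25, 25, kernel_size=(n_chans, 1)),             # spatial filter
            nn.BatchNorm2d(25),
            nn.ELU(),
            nn.MaxPool2d(kernel_size=(1, 3)),
            nn.Conv2d(25, 50, kernel_size=(1, 11), padding=(0, 5)),
            nn.BatchNorm2d(50),
            nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 1)),
        )
        self.classifier = nn.Linear(50, n_classes)

    def forward(self, x):            # x: (batch, 1, n_chans, n_times)
        h = self.features(x)
        return self.classifier(h.flatten(1))

net = TinyEEGNet()
dummy = torch.randn(8, 1, 22, 1000)  # 8 trials of 22-channel EEG
print(net(dummy).shape)              # torch.Size([8, 4])
```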
Global cortical activity predicts shape of hand during grasping
Agashe, Harshavardhan A.; Paek, Andrew Y.; Zhang, Yuhang; Contreras-Vidal, José L.
2015-01-01
Recent studies show that the amplitude of cortical field potentials is modulated in the time domain by grasping kinematics. However, it is unknown if these low frequency modulations persist and contain enough information to decode grasp kinematics in macro-scale activity measured at the scalp via electroencephalography (EEG). Further, it is unclear as to whether joint angle velocities or movement synergies are the optimal kinematics spaces to decode. In this offline decoding study, we infer from human EEG, hand joint angular velocities as well as synergistic trajectories as subjects perform natural reach-to-grasp movements. Decoding accuracy, measured as the correlation coefficient (r) between the predicted and actual movement kinematics, was r = 0.49 ± 0.02 across 15 hand joints. Across the first three kinematic synergies, decoding accuracies were r = 0.59 ± 0.04, 0.47 ± 0.06, and 0.32 ± 0.05. The spatial-temporal pattern of EEG channel recruitment showed early involvement of contralateral frontal-central scalp areas followed by later activation of central electrodes over primary sensorimotor cortical areas. Information content in EEG about the grasp type peaked at 250 ms after movement onset. The high decoding accuracies in this study are significant not only as evidence for time-domain modulation in macro-scale brain activity, but for the field of brain-machine interfaces as well. Our decoding strategy, which harnesses the neural “symphony” as opposed to local members of the neural ensemble (as in intracranial approaches), may provide a means of extracting information about motor intent for grasping without the need for penetrating electrodes and suggests that it may be soon possible to develop non-invasive neural interfaces for the control of prosthetic limbs. PMID:25914616
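The following sketch illustrates the general offline procedure of decoding a continuous kinematic trace from lagged multichannel signals with a linear model and reporting the Pearson correlation r between predicted and actual traces, as in the study above. The synthetic data, the 10-sample lag window and the ridge penalty are assumptions.

```python
# Minimal sketch of linear decoding of continuous kinematics from lagged channels.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(1)
n_samples, n_chans, n_lags = 2000, 32, 10
kin = np.convolve(rng.standard_normal(n_samples), np.ones(50) / 50, mode="same")
W_true = rng.standard_normal((n_chans, n_lags))
eeg = np.stack([np.convolve(kin, W_true[c], mode="same") for c in range(n_chans)], 1)
eeg += 0.5 * rng.standard_normal(eeg.shape)          # add sensor noise

# Lagged feature matrix: each row holds the last n_lags samples of every channel.
lagged = sliding_window_view(eeg, n_lags, axis=0).reshape(n_samples - n_lags + 1, -1)
target = kin[n_lags - 1:]

half = lagged.shape[0] // 2                           # train / test split
Xtr, Xte, ytr, yte = lagged[:half], lagged[half:], target[:half], target[half:]
w = np.linalg.solve(Xtr.T @ Xtr + 10.0 * np.eye(Xtr.shape[1]), Xtr.T @ ytr)  # ridge
r = np.corrcoef(Xte @ w, yte)[0, 1]
print(f"decoding accuracy r = {r:.2f}")
```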
Tamm, Leanne; Denton, Carolyn A.; Epstein, Jeffery N.; Schatschneider, Christopher; Taylor, Heather; Arnold, L. Eugene; Bukstein, Oscar; Anixt, Julia; Koshy, Anson; Newman, Nicholas C.; Maltinsky, Jan; Brinson, Patricia; Loren, Richard; Prasad, Mary R.; Ewing-Cobbs, Linda; Vaughn, Aaron
2017-01-01
Objective This randomized clinical trial compared Attention-Deficit/Hyperactivity Disorder (ADHD) treatment alone, intensive reading intervention alone, and their combination for children with ADHD and word reading difficulties and disabilities (RD). Method Children (n=216; predominantly African American males) in grades 2–5 with ADHD and word reading/decoding deficits were randomized to ADHD treatment (carefully-managed medication+parent training), reading treatment (intensive reading instruction), or combined ADHD+reading treatment. Outcomes were parent and teacher ADHD ratings and measures of word reading/decoding. Analyses utilized a mixed models covariate-adjusted gain score approach with post-test regressed onto pretest and other predictors. Results Inattention and hyperactivity/impulsivity outcomes were significantly better in the ADHD (parent Hedges g=.87/.75; teacher g=.67/.50) and combined (parent g=1.06/.95; teacher g=.36/.41) treatment groups than reading treatment alone; the ADHD and Combined groups did not differ significantly (parent g=.19/.20; teacher g=.31/.09). Word reading and decoding outcomes were significantly better in the reading (word reading g=.23; decoding g=.39) and combined (word reading g=.32; decoding g=.39) treatment groups than ADHD treatment alone; reading and combined groups did not differ (word reading g=.09; decoding g=.00). Significant group differences were maintained at the three- to five-month follow-up on all outcomes except word reading. Conclusions Children with ADHD and RD benefit from specific treatment of each disorder. ADHD treatment is associated with more improvement in ADHD symptoms than RD treatment, and reading instruction is associated with better word reading and decoding outcomes than ADHD treatment. The additive value of combining treatments was not significant within disorder, but the combination allows treating both disorders simultaneously. PMID:28333510
Tamm, Leanne; Denton, Carolyn A; Epstein, Jeffery N; Schatschneider, Christopher; Taylor, Heather; Arnold, L Eugene; Bukstein, Oscar; Anixt, Julia; Koshy, Anson; Newman, Nicholas C; Maltinsky, Jan; Brinson, Patricia; Loren, Richard E A; Prasad, Mary R; Ewing-Cobbs, Linda; Vaughn, Aaron
2017-05-01
This trial compared attention-deficit/hyperactivity disorder (ADHD) treatment alone, intensive reading intervention alone, and their combination for children with ADHD and word reading difficulties and disabilities (RD). Children (n = 216; predominantly African American males) in Grades 2-5 with ADHD and word reading/decoding deficits were randomized to ADHD treatment (medication + parent training), reading treatment (reading instruction), or combined ADHD + reading treatment. Outcomes were parent and teacher ADHD ratings and measures of word reading/decoding. Analyses utilized a mixed models covariate-adjusted gain score approach with posttest regressed onto pretest. Inattention and hyperactivity/impulsivity outcomes were significantly better in the ADHD (parent Hedges's g = .87/.75; teacher g = .67/.50) and combined (parent g = 1.06/.95; teacher g = .36/.41) treatment groups than reading treatment alone; the ADHD and Combined groups did not differ significantly (parent g = .19/.20; teacher g = .31/.09). Word reading and decoding outcomes were significantly better in the reading (word reading g = .23; decoding g = .39) and combined (word reading g = .32; decoding g = .39) treatment groups than ADHD treatment alone; reading and combined groups did not differ (word reading g = .09; decoding g = .00). Significant group differences were maintained at the 3- to 5-month follow-up on all outcomes except word reading. Children with ADHD and RD benefit from specific treatment of each disorder. ADHD treatment is associated with more improvement in ADHD symptoms than RD treatment, and reading instruction is associated with better word reading and decoding outcomes than ADHD treatment. The additive value of combining treatments was not significant within disorder, but the combination allows treating both disorders simultaneously. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Improving zero-training brain-computer interfaces by mixing model estimators
NASA Astrophysics Data System (ADS)
Verhoeven, T.; Hübner, D.; Tangermann, M.; Müller, K. R.; Dambre, J.; Kindermans, P. J.
2017-06-01
Objective. Brain-computer interfaces (BCI) based on event-related potentials (ERP) incorporate a decoder to classify recorded brain signals and subsequently select a control signal that drives a computer application. Standard supervised BCI decoders require a tedious calibration procedure prior to every session. Several unsupervised classification methods have been proposed that tune the decoder during actual use and as such omit this calibration. Each of these methods has its own strengths and weaknesses. Our aim is to improve overall accuracy of ERP-based BCIs without calibration. Approach. We consider two approaches for unsupervised classification of ERP signals. Learning from label proportions (LLP) was recently shown to be guaranteed to converge to a supervised decoder when enough data is available. In contrast, the formerly proposed expectation maximization (EM) based decoding for ERP-BCI does not have this guarantee. However, while this decoder has high variance due to random initialization of its parameters, it obtains a higher accuracy faster than LLP when the initialization is good. We introduce a method to optimally combine these two unsupervised decoding methods, letting one method’s strengths compensate for the weaknesses of the other and vice versa. The new method is compared to the aforementioned methods in a resimulation of an experiment with a visual speller. Main results. Analysis of the experimental results shows that the new method exceeds the performance of the previous unsupervised classification approaches in terms of ERP classification accuracy and symbol selection accuracy during the spelling experiment. Furthermore, the method shows less dependency on random initialization of model parameters and is consequently more reliable. Significance. Improving the accuracy and subsequent reliability of calibrationless BCIs makes these systems more appealing for frequent use.
Gentaz, Edouard; Sprenger-Charolles, Liliane; Theurel, Anne; Colé, Pascale
2013-01-01
Background The literature suggests that a complex relationship exists between the three main skills involved in reading comprehension (decoding, listening comprehension and vocabulary) and that this relationship depends on at least three other factors: orthographic transparency, children’s grade level, and socioeconomic status (SES). This study investigated the relative contribution of the predictors of reading comprehension in a longitudinal design (from beginning to end of the first grade) in 394 French children from low SES families. Methodology/Principal findings Reading comprehension was measured at the end of the first grade using two tasks: one with short utterances and one with a medium-length narrative text. Accuracy in listening comprehension and vocabulary, and fluency of decoding skills, were measured at the beginning and end of the first grade. Accuracy in decoding skills was measured only at the beginning. Regression analyses showed that listening comprehension and decoding skills (accuracy and fluency) always significantly predicted reading comprehension. The contribution of decoding was greater when reading comprehension was assessed via the task using short utterances. Between the two assessments, the contribution of vocabulary, and of decoding skills especially, increased, while that of listening comprehension remained unchanged. Conclusion/Significance These results challenge the ‘simple view of reading’. They also have educational implications, since they show that it is possible to assess decoding and reading comprehension very early on in an orthography (i.e., French) that is less deep than English, even in low-SES children. These assessments, associated with those of listening comprehension and vocabulary, may allow early identification of children at risk for reading difficulty and the early introduction of remedial training, which is most effective for these children. PMID:24250802
Neuroprosthetic Decoder Training as Imitation Learning
Merel, Josh; Paninski, Liam; Cunningham, John P.
2016-01-01
Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user’s intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user’s intended movement. Here we show that training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector. PMID:27191387
Shen, Guohua; Zhang, Jing; Wang, Mengxing; Lei, Du; Yang, Guang; Zhang, Shanmin; Du, Xiaoxia
2014-06-01
Multivariate pattern classification analysis (MVPA) has been applied to functional magnetic resonance imaging (fMRI) data to decode brain states from spatially distributed activation patterns. Decoding upper limb movements from non-invasively recorded human brain activation is crucial for implementing a brain-machine interface that directly harnesses an individual's thoughts to control external devices or computers. The aim of this study was to decode the individual finger movements from fMRI single-trial data. Thirteen healthy human subjects participated in a visually cued delayed finger movement task, and only one slight button press was performed in each trial. Using MVPA, the decoding accuracy (DA) was computed separately for the different motor-related regions of interest. For the construction of feature vectors, the feature vectors from two successive volumes in the image series for a trial were concatenated. With these spatial-temporal feature vectors, we obtained a 63.1% average DA (84.7% for the best subject) for the contralateral primary somatosensory cortex and a 46.0% average DA (71.0% for the best subject) for the contralateral primary motor cortex; both of these values were significantly above the chance level (20%). In addition, we implemented searchlight MVPA to search for informative regions in an unbiased manner across the whole brain. Furthermore, by applying searchlight MVPA to each volume of a trial, we visually demonstrated the information for decoding, both spatially and temporally. The results suggest that the non-invasive fMRI technique may provide informative features for decoding individual finger movements and the potential of developing an fMRI-based brain-machine interface for finger movement. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
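A hedged sketch of the feature construction described above: the voxel patterns of two successive volumes in a trial are concatenated into one spatial-temporal feature vector and classified with a linear SVM against a 20% chance level. Trial counts, ROI size and signal strength are synthetic assumptions.

```python
# Hedged sketch of spatial-temporal feature vectors (two concatenated volumes) + SVM.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_trials, n_voxels, n_classes = 100, 200, 5          # chance level = 20%
labels = rng.integers(0, n_classes, size=n_trials)

vol_t  = rng.standard_normal((n_trials, n_voxels))   # volume at time t
vol_t1 = rng.standard_normal((n_trials, n_voxels))   # the following volume
for v in (vol_t, vol_t1):                            # weak class-dependent signal
    v[:, :20] += 0.8 * labels[:, None]

# Spatial-temporal feature vector: the two successive volumes concatenated.
X = np.hstack([vol_t, vol_t1])
acc = cross_val_score(SVC(kernel="linear"), X, labels, cv=5).mean()
print(f"decoding accuracy = {acc:.2f} (chance = {1 / n_classes:.2f})")
```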
Hammer, Jiri; Fischer, Jörg; Ruescher, Johanna; Schulze-Bonhage, Andreas; Aertsen, Ad; Ball, Tonio
2013-01-01
In neuronal population signals, including the electroencephalogram (EEG) and electrocorticogram (ECoG), the low-frequency component (LFC) is particularly informative about motor behavior and can be used for decoding movement parameters for brain-machine interface (BMI) applications. An idea previously expressed, but as of yet not quantitatively tested, is that it is the LFC phase that is the main source of decodable information. To test this issue, we analyzed human ECoG recorded during a game-like, one-dimensional, continuous motor task with a novel decoding method suitable for unfolding magnitude and phase explicitly into a complex-valued, time-frequency signal representation, enabling quantification of the decodable information within the temporal, spatial and frequency domains and allowing disambiguation of the phase contribution from that of the spectral magnitude. The decoding accuracy based only on phase information was substantially (at least 2 fold) and significantly higher than that based only on magnitudes for position, velocity and acceleration. The frequency profile of movement-related information in the ECoG data matched well with the frequency profile expected when assuming a close time-domain correlate of movement velocity in the ECoG, e.g., a (noisy) “copy” of hand velocity. No such match was observed with the frequency profiles expected when assuming a copy of either hand position or acceleration. There was also no indication of additional magnitude-based mechanisms encoding movement information in the LFC range. Thus, our study contributes to elucidating the nature of the informative LFC of motor cortical population activity and may hence contribute to improve decoding strategies and BMI performance. PMID:24198757
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bobrek, Miljko; Albright, Austin P
This paper presents an FPGA implementation of a Reed-Solomon decoder for use in IEEE 802.16 WiMAX systems. The decoder is based on the RS(255,239) code, which is additionally shortened and punctured according to the WiMAX specifications. A Simulink model based on the Sysgen library of Xilinx blocks was used for simulation and hardware implementation. Finally, simulation results and hardware implementation performance are presented.
VLSI architecture for a Reed-Solomon decoder
NASA Technical Reports Server (NTRS)
Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor)
1992-01-01
A basic single-chip building block for a Reed-Solomon (RS) decoder system is partitioned into a plurality of sections, the first of which consists of a plurality of syndrome subcells, each of which contains identical standard-basis finite-field multipliers that are programmable between 10 and 8 bit operation. A desired number of basic building blocks may be assembled to provide an RS decoder of any syndrome subcell size that is programmable between 10 and 8 bit operation.
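As a software analogue of the programmable multiplier described above (not the patented circuit itself), the sketch below multiplies in the standard (polynomial) basis and can be switched between 8-bit and 10-bit operation by selecting the reduction polynomial; the two irreducible polynomials chosen here are assumptions.

```python
# Sketch of a standard-basis GF(2^m) multiplier switchable between m = 8 and m = 10:
# shift-and-add multiplication over GF(2) followed by reduction modulo the field polynomial.
IRRED = {8: 0x11d, 10: 0x409}   # x^8+x^4+x^3+x^2+1 and x^10+x^3+1 (assumed choices)

def gf_mult(a, b, m):
    """Multiply a and b in GF(2^m), m in {8, 10}, standard (polynomial) basis."""
    poly = IRRED[m]
    result = 0
    for bit in range(m):                      # shift-and-add over GF(2)
        if (b >> bit) & 1:
            result ^= a << bit
    for deg in range(2 * m - 2, m - 1, -1):   # reduce modulo the field polynomial
        if (result >> deg) & 1:
            result ^= poly << (deg - m)
    return result

print(hex(gf_mult(0x53, 0xCA, 8)))     # multiplication in GF(2^8)
print(hex(gf_mult(0x2A7, 0x19C, 10)))  # same datapath, reconfigured for GF(2^10)
```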
New syndrome decoder for (n, 1) convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
The letter presents a new syndrome decoding algorithm for (n, 1) convolutional codes (CC) that differs from, and is simpler than, the previous syndrome decoding algorithm of Schalkwijk and Vinck. The new technique uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). A recursive, Viterbi-like algorithm is developed to find the minimum weight error vector E(D). An example is given for the binary nonsystematic (2, 1) CC.
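The following worked illustration shows the basic property a syndrome decoder exploits for a binary rate-1/2 convolutional code: the syndrome S(D) = R1(D)G2(D) + R2(D)G1(D) over GF(2) vanishes for an error-free received pair and depends only on the error polynomials. It is not the algorithm of the letter; the generators and information polynomial are arbitrary choices.

```python
# Worked illustration of the syndrome former for a binary rate-1/2 convolutional code.
def pmul(a, b):
    """Carry-less (GF(2)) polynomial multiplication; ints encode coefficients."""
    out = 0
    while b:
        if b & 1:
            out ^= a
        a <<= 1
        b >>= 1
    return out

G1, G2 = 0b101, 0b111          # generators 1 + D^2 and 1 + D + D^2
info = 0b1011001               # arbitrary information polynomial I(D)

T1, T2 = pmul(info, G1), pmul(info, G2)      # encoded streams
assert pmul(T1, G2) ^ pmul(T2, G1) == 0      # syndrome is zero without errors

E1, E2 = 0b000100, 0b000000                  # single channel error on stream 1
R1, R2 = T1 ^ E1, T2 ^ E2
S = pmul(R1, G2) ^ pmul(R2, G1)
assert S == pmul(E1, G2) ^ pmul(E2, G1)      # syndrome depends only on the errors
print(bin(S))
```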
Decoding word and category-specific spatiotemporal representations from MEG and EEG
Chan, Alexander M.; Halgren, Eric; Marinkovic, Ksenija; Cash, Sydney S.
2010-01-01
The organization and localization of lexico-semantic information in the brain has been debated for many years. Specifically, lesion and imaging studies have attempted to map the brain areas representing living versus non-living objects, however, results remain variable. This may be due, in part, to the fact that the univariate statistical mapping analyses used to detect these brain areas are typically insensitive to subtle, but widespread, effects. Decoding techniques, on the other hand, allow for a powerful multivariate analysis of multichannel neural data. In this study, we utilize machine-learning algorithms to first demonstrate that semantic category, as well as individual words, can be decoded from EEG and MEG recordings of subjects performing a language task. Mean accuracies of 76% (chance = 50%) and 83% (chance = 20%) were obtained for the decoding of living vs. non-living category or individual words respectively. Furthermore, we utilize this decoding analysis to demonstrate that the representations of words and semantic category are highly distributed both spatially and temporally. In particular, bilateral anterior temporal, bilateral inferior frontal, and left inferior temporal-occipital sensors are most important for discrimination. Successful intersubject and intermodality decoding shows that semantic representations between stimulus modalities and individuals are reasonably consistent. These results suggest that both word and category-specific information are present in extracranially recorded neural activity and that these representations may be more distributed, both spatially and temporally, than previous studies suggest. PMID:21040796
Volitional and Real-Time Control Cursor Based on Eye Movement Decoding Using a Linear Decoding Model
Zhang, Cheng
2016-01-01
The aim of this study is to build a linear decoding model that reveals the relationship between movement information and EOG (electrooculogram) data, so that a cursor can be controlled online and continuously with blinks and eye pursuit movements. First, a blink detection method is proposed to reject voluntary single eye blink or double-blink information from the EOG. Then, a linear decoding model of the time series is developed to predict gaze position, and the model parameters are calibrated by the RLS (Recursive Least Squares) algorithm; in addition, decoding accuracy is assessed through a cross-validation procedure. Additionally, subsection processing, increment control, and online calibration are presented to realize online control. Finally, the technology is applied to the volitional, online control of a cursor to hit multiple predefined targets. Experimental results show that the blink detection algorithm performs well, with a voluntary blink detection rate over 95%. By combining the merits of blinks and smooth pursuit movements, the movement information of the eyes can be decoded with good fidelity, with an average Pearson correlation coefficient of up to 0.9592 and all signal-to-noise ratios greater than 0. The novel system allows people to successfully and economically control a cursor online with a hit rate of 98%. PMID:28058044
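A minimal sketch, under assumed shapes and a forgetting factor of 0.99, of calibrating a linear decoding model y = Wx with the Recursive Least Squares update mentioned above; x stands in for a window of EOG samples and y for the two-dimensional gaze position, both synthetic here.

```python
# Minimal sketch of RLS calibration of a linear decoding model y = W x.
import numpy as np

rng = np.random.default_rng(3)
n_feat, n_out, lam = 12, 2, 0.99            # forgetting factor lam is an assumption
W_true = rng.standard_normal((n_out, n_feat))

W = np.zeros((n_out, n_feat))               # decoder weights to be calibrated
P = np.eye(n_feat) * 1e3                    # inverse correlation matrix estimate

for _ in range(500):                        # streaming calibration samples
    x = rng.standard_normal(n_feat)
    y = W_true @ x + 0.05 * rng.standard_normal(n_out)
    k = P @ x / (lam + x @ P @ x)           # RLS gain vector
    err = y - W @ x                         # a-priori prediction error
    W += np.outer(err, k)                   # weight update
    P = (P - np.outer(k, x @ P)) / lam      # update inverse correlation estimate

print("max weight error:", np.abs(W - W_true).max())
```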
A VLSI decomposition of the deBruijn graph
NASA Technical Reports Server (NTRS)
Collins, O.; Dolinar, S.; Mceliece, R.; Pollara, F.
1990-01-01
A new Viterbi decoder for convolutional codes with constraint lengths up to 15, called the Big Viterbi Decoder, is under development for the Deep Space Network. It will be demonstrated by decoding data from the Galileo spacecraft, which has a rate 1/4, constraint-length 15 convolutional encoder on board. Here, the mathematical theory underlying the design of the very-large-scale-integrated (VLSI) chips that are being used to build this decoder is explained. The deBruijn graph B sub n describes the topology of a fully parallel, rate 1/v, constraint length n+2 Viterbi decoder, and it is shown that B sub n can be built by appropriately wiring together (i.e., connecting together with extra edges) many isomorphic copies of a fixed graph called a B sub n building block. The efficiency of such a building block is defined as the fraction of the edges in B sub n that are present in the copies of the building block. It is shown, among other things, that for any alpha less than 1, there exists a graph G which is a B sub n building block of efficiency greater than alpha for all sufficiently large n. These results are illustrated by describing a special hierarchical family of deBruijn building blocks, which has led to the design of the gate-array chips being used in the Big Viterbi Decoder.
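As a small illustration of the graph underlying this design, the sketch below enumerates the edges of the binary de Bruijn graph B_n, in which each node (a state of the decoder) has exactly two successors and two predecessors; the choice of n = 4 is arbitrary.

```python
# Small sketch: edges of the binary de Bruijn graph B_n on 2**n nodes.
from collections import Counter

def debruijn_edges(n):
    mask = (1 << n) - 1
    # Each state s transitions to the states obtained by shifting in a 0 or a 1.
    return [(s, ((s << 1) | b) & mask) for s in range(1 << n) for b in (0, 1)]

edges = debruijn_edges(4)                     # B_4: 16 nodes, 32 edges
assert len(edges) == 2 ** 4 * 2
indeg = Counter(dst for _, dst in edges)      # every node also has in-degree 2
assert all(indeg[v] == 2 for v in range(16))
print(edges[:6])
```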
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-01
In order to improve the performance of the hard decision decoding algorithm for non-binary low-density parity-check (LDPC) codes and to reduce the complexity of decoding, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed. This will also ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes’ (VN) magnitude is excluded for computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bit corresponding to the error code word is flipped multiple times, before this is searched in the order of most likely error probability to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than that of the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at the bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced. PMID:29342963
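For orientation, the sketch below implements a generic hard-decision bit-flipping decoder for a tiny binary parity-check code; it is a much simplified relative of the non-binary symbol-flipping schemes discussed above (no channel reliabilities, no loop update detection), and the small H matrix is an arbitrary example.

```python
# Generic hard-decision bit-flipping decoder for a small binary parity-check code.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]], dtype=int)

def bit_flip_decode(H, r, max_iter=20):
    r = r.copy()
    for _ in range(max_iter):
        syndrome = H @ r % 2
        if not syndrome.any():
            return r, True                  # all parity checks satisfied
        # Count, for every bit, how many unsatisfied checks it participates in.
        unsat = syndrome @ H
        flip = unsat == unsat.max()
        r[flip] ^= 1                        # flip the most suspicious bits
    return r, False

codeword = np.zeros(6, dtype=int)           # the all-zero word is a codeword
received = codeword.copy()
received[2] ^= 1                            # single channel error
decoded, ok = bit_flip_decode(H, received)
print(decoded, ok)
```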
Dissociable roles of internal feelings and face recognition ability in facial expression decoding.
Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia
2016-05-15
The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding. Copyright © 2016 Elsevier Inc. All rights reserved.
Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision.
Wen, Haiguang; Shi, Junxing; Zhang, Yizhen; Lu, Kun-Han; Cao, Jiayue; Liu, Zhongming
2017-10-20
Convolutional neural network (CNN) driven by image recognition has been shown to be able to explain cortical responses to static pictures at ventral-stream areas. Here, we further showed that such CNN could reliably predict and decode functional magnetic resonance imaging data from humans watching natural movies, despite its lack of any mechanism to account for temporal dynamics or feedback processing. Using separate data, encoding and decoding models were developed and evaluated for describing the bi-directional relationships between the CNN and the brain. Through the encoding models, the CNN-predicted areas covered not only the ventral stream, but also the dorsal stream, albeit to a lesser degree; single-voxel response was visualized as the specific pixel pattern that drove the response, revealing the distinct representation of individual cortical location; cortical activation was synthesized from natural images with high-throughput to map category representation, contrast, and selectivity. Through the decoding models, fMRI signals were directly decoded to estimate the feature representations in both visual and semantic spaces, for direct visual reconstruction and semantic categorization, respectively. These results corroborate, generalize, and extend previous findings, and highlight the value of using deep learning, as an all-in-one model of the visual cortex, to understand and decode natural vision. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
da Rocha, Edroaldo Lummertz; Ung, Choong Yong; McGehee, Cordelia D; Correia, Cristina; Li, Hu
2016-06-02
The sequential chain of interactions altering the binary state of a biomolecule represents the 'information flow' within a cellular network that determines phenotypic properties. Given the lack of computational tools to dissect context-dependent networks and gene activities, we developed NetDecoder, a network biology platform that models context-dependent information flows using pairwise phenotypic comparative analyses of protein-protein interactions. Using breast cancer, dyslipidemia and Alzheimer's disease as case studies, we demonstrate that NetDecoder dissects subnetworks to identify key players significantly impacting cell behaviour specific to a given disease context. We further show that genes residing in disease-specific subnetworks are enriched in disease-related signalling pathways and information flow profiles, which drive the resulting disease phenotypes. We also devise a novel scoring scheme to quantify key genes: network routers, which influence many genes; key targets, which are influenced by many genes; and high impact genes, which experience a significant change in regulation. We show the robustness of our results against parameter changes. Our network biology platform includes freely available source code (http://www.NetDecoder.org) for researchers to explore genome-wide context-dependent information flow profiles and key genes, given a set of genes of particular interest and transcriptome data. More importantly, NetDecoder will enable researchers to uncover context-dependent drug targets. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Kim, Yong-Hee; Thakor, Nitish V; Schieber, Marc H; Kim, Hyoung-Nam
2015-05-01
Future generations of brain-machine interface (BMI) will require more dexterous motion control such as hand and finger movements. Since a population of neurons in the primary motor cortex (M1) area is correlated with finger movements, neural activities recorded in M1 area are used to reconstruct an intended finger movement. In a BMI system, decoding discrete finger movements from a large number of input neurons does not guarantee a higher decoding accuracy in spite of the increase in computational burden. Hence, we hypothesize that selecting neurons important for coding dexterous flexion/extension of finger movements would improve the BMI performance. In this paper, two metrics are presented to quantitatively measure the importance of each neuron based on Bayes risk minimization and deflection coefficient maximization in a statistical decision problem. Since motor cortical neurons are active with movements of several different fingers, the proposed method is more suitable for a discrete decoding of flexion-extension finger movements than the previous methods for decoding reaching movements. In particular, the proposed metrics yielded high decoding accuracies across all subjects and also in the case of including six combined two-finger movements. While our data acquisition and analysis was done off-line and post processing, our results point to the significance of highly coding neurons in improving BMI performance.
Kim, Yong-Hee; Thakor, Nitish V.; Schieber, Marc H.; Kim, Hyoung-Nam
2015-01-01
Future generations of brain-machine interface (BMI) will require more dexterous motion control such as hand and finger movements. Since a population of neurons in the primary motor cortex (M1) area is correlated with finger movements, neural activities recorded in M1 area are used to reconstruct an intended finger movement. In a BMI system, decoding discrete finger movements from a large number of input neurons does not guarantee a higher decoding accuracy in spite of the increase in computational burden. Hence, we hypothesize that selecting neurons important for coding dexterous flexion/extension of finger movements would improve the BMI performance. In this paper, two metrics are presented to quantitatively measure the importance of each neuron based on Bayes risk minimization and deflection coefficient maximization in a statistical decision problem. Since motor cortical neurons are active with movements of several different fingers, the proposed method is more suitable for a discrete decoding of flexion-extension finger movements than the previous methods for decoding reaching movements. In particular, the proposed metrics yielded high decoding accuracies across all subjects and also in the case of including six combined two-finger movements. While our data acquisition and analysis was done off-line and post processing, our results point to the significance of highly coding neurons in improving BMI performance. PMID:25347884
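The sketch below illustrates one plausible reading of a deflection-coefficient-style neuron ranking for a two-class movement problem, scoring each neuron by (mu1 - mu0)^2 / var0 on its firing rates; the exact statistic, class structure and data used in the study may differ, and everything here is synthetic.

```python
# Hedged sketch: rank neurons by a deflection-coefficient-style score for decoding.
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_neurons = 200, 60
y = rng.integers(0, 2, size=n_trials)                 # e.g. flexion vs extension
rates = rng.poisson(4.0, size=(n_trials, n_neurons)).astype(float)
rates[:, :8] += 2.5 * y[:, None]                      # a few informative neurons

mu0 = rates[y == 0].mean(axis=0)
mu1 = rates[y == 1].mean(axis=0)
var0 = rates[y == 0].var(axis=0) + 1e-9               # avoid division by zero
deflection = (mu1 - mu0) ** 2 / var0

top = np.argsort(deflection)[::-1][:10]               # keep the 10 best neurons
print("top-ranked neurons:", top)
```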
Decoding visual object categories from temporal correlations of ECoG signals.
Majima, Kei; Matsuo, Takeshi; Kawasaki, Keisuke; Kawai, Kensuke; Saito, Nobuhito; Hasegawa, Isao; Kamitani, Yukiyasu
2014-04-15
How visual object categories are represented in the brain is one of the key questions in neuroscience. Studies on low-level visual features have shown that relative timings or phases of neural activity between multiple brain locations encode information. However, whether such temporal patterns of neural activity are used in the representation of visual objects is unknown. Here, we examined whether and how visual object categories could be predicted (or decoded) from temporal patterns of electrocorticographic (ECoG) signals from the temporal cortex in five patients with epilepsy. We used temporal correlations between electrodes as input features, and compared the decoding performance with features defined by spectral power and phase from individual electrodes. While decoding accuracy using power or phase alone was significantly better than chance, correlations alone or correlations combined with power outperformed the other features. Decoding performance with correlations was degraded by shuffling the order of trials of the same category in each electrode, indicating that the relative time series between electrodes in each trial is critical. Analysis using a sliding time window revealed that decoding performance with correlations began to rise earlier than that with power. This earlier increase in performance was replicated by a model using phase differences to encode categories. These results suggest that activity patterns arising from interactions between multiple neuronal units carry additional information on visual object categories. Copyright © 2013 Elsevier Inc. All rights reserved.
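A minimal sketch of the feature construction described above: per-trial pairwise correlations between electrode time series used as the input features of a classifier. The data are synthetic and a nearest-centroid rule stands in for the paper's actual decoder.

```python
# Sketch: build per-trial feature vectors from pairwise correlations between
# electrodes, then classify with a simple nearest-centroid rule.
import numpy as np

def correlation_features(trial):
    """trial: (electrodes, samples) -> upper triangle of the correlation matrix."""
    c = np.corrcoef(trial)
    iu = np.triu_indices_from(c, k=1)
    return c[iu]

rng = np.random.default_rng(1)
n_trials, n_elec, n_samp = 80, 8, 200
labels = rng.integers(0, 2, n_trials)
trials = rng.standard_normal((n_trials, n_elec, n_samp))
# Inject a category-dependent coupling between electrodes 0 and 1.
for i in np.where(labels == 1)[0]:
    trials[i, 1] = 0.7 * trials[i, 0] + 0.3 * trials[i, 1]

X = np.array([correlation_features(t) for t in trials])
train = np.arange(n_trials) % 2 == 0             # even trials train, odd trials test
centroids = [X[train & (labels == k)].mean(axis=0) for k in (0, 1)]
pred = np.array([np.argmin([np.linalg.norm(x - c) for c in centroids])
                 for x in X[~train]])
print("test accuracy:", (pred == labels[~train]).mean())
```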
Dynamics of intracellular information decoding.
Kobayashi, Tetsuya J; Kamimura, Atsushi
2011-10-01
A variety of cellular functions are robust even to substantial intrinsic and extrinsic noise in intracellular reactions and the environment that could be strong enough to impair or limit them. In particular, of substantial importance is cellular decision-making in which a cell chooses a fate or behavior on the basis of information conveyed in noisy external signals. For robust decoding, the crucial step is filtering out the noise inevitably added during information transmission. As a minimal and optimal implementation of such an information decoding process, the autocatalytic phosphorylation and autocatalytic dephosphorylation (aPadP) cycle was recently proposed. Here, we analyze the dynamical properties of the aPadP cycle in detail. We describe the dynamical roles of the stationary and short-term responses in determining the efficiency of information decoding and clarify the optimality of the threshold value of the stationary response and its information-theoretical meaning. Furthermore, we investigate the robustness of the aPadP cycle against the receptor inactivation time and intrinsic noise. Finally, we discuss the relationship among information decoding with information-dependent actions, bet-hedging and network modularity.
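A toy illustration of the threshold behaviour described above, assuming a deliberately simplified rate law (phosphorylation autocatalyzed by the phosphorylated form, dephosphorylation autocatalyzed by the unphosphorylated form) rather than the paper's full aPadP model; under this assumption the stationary response switches at the threshold u* = k_d/k_p.

```python
# Toy autocatalytic phosphorylation/dephosphorylation cycle (an assumed,
# simplified rate law, not the paper's model):
#   dx/dt = x(1-x) * (k_p*u - k_d)
# so the stationary response switches at the threshold u* = k_d / k_p.
import numpy as np

def simulate(u_mean, k_p=1.0, k_d=0.5, noise=0.2, dt=0.01, T=50.0, seed=0):
    rng = np.random.default_rng(seed)
    x = 0.5                                      # start at the unstable midpoint
    for _ in range(int(T / dt)):
        u = max(u_mean + noise * rng.standard_normal(), 0.0)  # noisy input signal
        x += dt * x * (1.0 - x) * (k_p * u - k_d)
        x = min(max(x, 1e-6), 1.0 - 1e-6)        # keep x inside (0, 1)
    return x

for u in (0.2, 0.4, 0.6, 0.8):                   # threshold here is k_d/k_p = 0.5
    print(f"input {u:.1f} -> decoded state {simulate(u):.2f}")
```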
Pierce, Paul E.
1986-01-01
A hardware processor is disclosed which in the described embodiment is a memory mapped multiplier processor that can operate in parallel with a 16 bit microcomputer. The multiplier processor decodes the address bus to receive specific instructions so that in one access it can write and automatically perform single or double precision multiplication involving a number written to it with or without addition or subtraction with a previously stored number. It can also, on a single read command, automatically round and scale a previously stored number. The multiplier processor includes two concatenated 16 bit multiplier registers, two concatenated 16 bit multipliers, and four 16 bit product registers connected to an internal 16 bit data bus. A high level address decoder determines when the multiplier processor is being addressed and first and second low level address decoders generate control signals. In addition, certain low order address lines are used to carry uncoded control signals. First and second control circuits coupled to the decoders generate further control signals and generate a plurality of clocking pulse trains in response to the decoded and address control signals.
Gates, Louis
2018-04-01
The accompanying article introduces highly transparent grapheme-phoneme relationships embodied within a Periodic table of decoding cells, which arguably presents the quintessential transparent decoding elements. The study then folds these cells into one highly transparent but simply stated singularity generalization; this generalization unifies the decoding cells (97% transparency). Going deeper, the periodic table and singularity generalization together highlight the connectivity of the periodic cells. Moreover, these interrelated cells, coupled with the singularity generalization, clarify teaching targets and enable efficient learning of the letter-sound code. This singularity generalization, in turn, serves as a model for creating unified but easily stated subordinate generalizations for any one of the transparent cells or groups of cells shown within the tables. The article then expands the periodic cells into two tables of teacher-ready sample word lists: one table includes sample words for the basic and phonogram vowel cells, and the other table embraces word samples for the transparent consonant cells. The paper concludes with suggestions for teaching the cellular transparency embedded within reoccurring isolated words and running text to promote decoding automaticity of the periodic cells.
Potocki, Anna; Magnan, Annie; Ecalle, Jean
2015-01-01
Four groups of poor readers were identified among a population of students with learning disabilities attending a special class in secondary school: normal readers; specific poor decoders; specific poor comprehenders, and general poor readers (deficits in both decoding and comprehension). These students were then trained with a software program designed to encourage either their word decoding skills or their text comprehension skills. After 5 weeks of training, we observed that the students experiencing word reading deficits and trained with the decoding software improved primarily in the reading fluency task while those exhibiting comprehension deficits and trained with the comprehension software showed improved performance in listening and reading comprehension. But interestingly, the latter software also led to improved performance on the word recognition task. This result suggests that, for these students, training interventions focused at the text level and its comprehension might be more beneficial for reading in general (i.e., for the two components of reading) than word-level decoding trainings. Copyright © 2015 Elsevier Ltd. All rights reserved.
Low Cost SoC Design of H.264/AVC Decoder for Handheld Video Player
NASA Astrophysics Data System (ADS)
Wisayataksin, Sumek; Li, Dongju; Isshiki, Tsuyoshi; Kunieda, Hiroaki
We propose a low cost and stand-alone platform-based SoC for an H.264/AVC decoder, targeted at practical mobile applications such as a handheld video player. Both low cost and stand-alone solutions are particularly emphasized. The SoC, consisting of a RISC core and a decoder core, has advantages in terms of flexibility, testability and various I/O interfaces. For the decoder core design, the proposed H.264/AVC coprocessor in the SoC employs a new block pipelining scheme instead of a conventional macroblock or a hybrid one, which greatly reduces the size of the core and its pipelining buffer. In addition, the decoder schedule is optimized at the block level, which makes it easy to program. The core size is reduced to 138 KGate with 3.5 kbyte of memory. In our practical development, a single external SDRAM is sufficient for both the reference frame buffer and the display buffer. Various peripheral interfaces such as a compact flash, a digital broadcast receiver and an LCD driver are also provided on the chip.
White matter microstructure integrity in relation to reading proficiency.
Nikki Arrington, C; Kulesz, Paulina A; Juranek, Jenifer; Cirino, Paul T; Fletcher, Jack M
2017-11-01
Components of reading proficiency such as accuracy, fluency, and comprehension require the successful coordination of numerous, yet distinct, cortical regions. Underlying white matter tracts allow for communication among these regions. This study utilized unique residualized tract-based spatial statistics methodology to identify the relations of white matter microstructure integrity to three components of reading proficiency in 49 school-aged children with typically developing phonological decoding skills and 27 readers with poor decoding skills. Results indicated that measures of white matter integrity were differentially associated with components of reading proficiency. In both typical and poor decoders, reading comprehension correlated with measures of integrity of the right uncinate fasciculus; reading comprehension was also related to the left inferior longitudinal fasciculus in poor decoders. Also in poor decoders, word reading fluency was related to the right uncinate and left inferior fronto-occipital fasciculi. Word reading was unrelated to white matter integrity in either group. These findings expand our knowledge of the association between white matter integrity and different elements of reading proficiency. Copyright © 2017 Elsevier Inc. All rights reserved.
Continuous movement decoding using a target-dependent model with EMG inputs.
Sachs, Nicholas A; Corbett, Elaine A; Miller, Lee E; Perreault, Eric J
2011-01-01
Trajectory-based models that incorporate target position information have been shown to accurately decode reaching movements from bio-control signals, such as muscle (EMG) and cortical activity (neural spikes). One major hurdle in implementing such models for neuroprosthetic control is that they are inherently designed to decode single reaches from a position of origin to a specific target. Gaze direction can be used to identify appropriate targets; however, information regarding movement intent is needed to determine when a reach is meant to begin and when it has been completed. We used linear discriminant analysis to classify limb states into movement classes based on recorded EMG from a sparse set of shoulder muscles. We then used the detected state transitions to update target information in a mixture of Kalman filters that incorporated target position explicitly in the state, and used EMG activity to decode arm movements. Updating the target position initiated movement along new trajectories, allowing a sequence of appropriately timed single reaches to be decoded in series and enabling highly accurate continuous control.
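A minimal sketch of the state-classification step described above: a two-class Fisher linear discriminant on synthetic per-channel EMG features separating "rest" from "reach". The target-dependent mixture-of-Kalman-filters decoder itself is not reproduced.

```python
# Sketch of the state-classification step: Fisher linear discriminant on
# EMG feature vectors (e.g. per-channel RMS) labelling "rest" vs "reach".
# Synthetic data; the paper's mixture-of-Kalman-filters decoder is omitted.
import numpy as np

rng = np.random.default_rng(2)
n_per_class, n_channels = 150, 6
rest = rng.normal(0.2, 0.05, (n_per_class, n_channels))     # low EMG activity
reach = rng.normal(0.5, 0.10, (n_per_class, n_channels))    # elevated shoulder EMG
X = np.vstack([rest, reach])
y = np.r_[np.zeros(n_per_class), np.ones(n_per_class)]

# Fisher LDA: w = Sw^-1 (mu1 - mu0); threshold at the midpoint of projected means.
mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
Sw = np.cov(X[y == 0], rowvar=False) + np.cov(X[y == 1], rowvar=False)
w = np.linalg.solve(Sw, mu1 - mu0)
threshold = 0.5 * (X[y == 0] @ w).mean() + 0.5 * (X[y == 1] @ w).mean()

pred = (X @ w > threshold).astype(float)
print("training accuracy:", (pred == y).mean())
```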
Compression of Encrypted Images Using Set Partitioning In Hierarchical Trees Algorithm
NASA Astrophysics Data System (ADS)
Sarika, G.; Unnithan, Harikuttan; Peter, Smitha
2011-10-01
When it is desired to transmit redundant data over an insecure channel, it is customary to encrypt the data. For encrypted real world sources such as images, the use of Markov properties in the Slepian-Wolf decoder does not work well for gray-scale images. Here in this paper we propose a method for compression of an encrypted image. In the encoder section, the image is first encrypted and then it undergoes compression in resolution. The cipher function scrambles only the pixel values, but does not shuffle the pixel locations. After downsampling, each sub-image is encoded independently and the resulting syndrome bits are transmitted. The received image undergoes joint decryption and decompression in the decoder section. Using local statistics of the image, it is recovered at the decoder. Here the decoder gets only a lower resolution version of the image. In addition, this method provides only partial access to the current source at the decoder side, which improves the decoder's learning of the source statistics. The source dependency is exploited to improve the compression efficiency. This scheme provides better coding efficiency and less computational complexity.
NASA Astrophysics Data System (ADS)
Menz, Veera Katharina; Schaffelhofer, Stefan; Scherberger, Hansjörg
2015-10-01
Objective. In the last decade, multiple brain areas have been investigated with respect to their decoding capability of continuous arm or hand movements. So far, these studies have mainly focused on motor or premotor areas like M1 and F5. However, there is accumulating evidence that anterior intraparietal area (AIP) in the parietal cortex also contains information about continuous movement. Approach. In this study, we decoded 27 degrees of freedom representing complete hand and arm kinematics during a delayed grasping task from simultaneously recorded activity in areas M1, F5, and AIP of two macaque monkeys (Macaca mulatta). Main results. We found that all three areas provided decoding performances that lay significantly above chance. In particular, M1 yielded highest decoding accuracy followed by F5 and AIP. Furthermore, we provide support for the notion that AIP does not only code categorical visual features of objects to be grasped, but also contains a substantial amount of temporal kinematic information. Significance. This fact could be utilized in future developments of neural interfaces restoring hand and arm movements.
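As a hedged sketch of continuous kinematic decoding of the kind described above, the code below fits a ridge (regularized linear) decoder from synthetic binned firing rates to 27 kinematic degrees of freedom; it is a generic stand-in, not the decoding model used in the study.

```python
# Sketch: linear (ridge) decoding of continuous kinematics from binned firing
# rates. Synthetic data; a standard stand-in rather than the study's decoder.
import numpy as np

rng = np.random.default_rng(3)
n_bins, n_neurons, n_dof = 2000, 120, 27
kin = np.cumsum(rng.standard_normal((n_bins, n_dof)), axis=0)   # smooth kinematics
kin -= kin.mean(0)
W_true = rng.standard_normal((n_dof, n_neurons)) * 0.3
rates = kin @ W_true + rng.standard_normal((n_bins, n_neurons)) # noisy "firing rates"

train, test = slice(0, 1500), slice(1500, None)
lam = 10.0                                                      # ridge penalty
A = rates[train].T @ rates[train] + lam * np.eye(n_neurons)
B = rates[train].T @ kin[train]
W = np.linalg.solve(A, B)                                       # (neurons, dof)

pred = rates[test] @ W
r = [np.corrcoef(pred[:, d], kin[test, d])[0, 1] for d in range(n_dof)]
print("mean correlation across 27 DOF:", np.mean(r))
```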
Wang, Nancy X. R.; Olson, Jared D.; Ojemann, Jeffrey G.; Rao, Rajesh P. N.; Brunton, Bingni W.
2016-01-01
Fully automated decoding of human activities and intentions from direct neural recordings is a tantalizing challenge in brain-computer interfacing. Implementing Brain Computer Interfaces (BCIs) outside carefully controlled experiments in laboratory settings requires adaptive and scalable strategies with minimal supervision. Here we describe an unsupervised approach to decoding neural states from naturalistic human brain recordings. We analyzed continuous, long-term electrocorticography (ECoG) data recorded over many days from the brain of subjects in a hospital room, with simultaneous audio and video recordings. We discovered coherent clusters in high-dimensional ECoG recordings using hierarchical clustering and automatically annotated them using speech and movement labels extracted from audio and video. To our knowledge, this represents the first time techniques from computer vision and speech processing have been used for natural ECoG decoding. Interpretable behaviors were decoded from ECoG data, including moving, speaking and resting; the results were assessed by comparison with manual annotation. Discovered clusters were projected back onto the brain revealing features consistent with known functional areas, opening the door to automated functional brain mapping in natural settings. PMID:27148018
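A minimal sketch of the unsupervised step described above: Ward-linkage hierarchical clustering of synthetic high-dimensional feature vectors, with the dendrogram cut into a fixed number of clusters. The audio/video-based annotation of the discovered clusters is omitted.

```python
# Sketch of the unsupervised step: hierarchical (Ward) clustering of
# high-dimensional feature vectors, then cutting the dendrogram into a
# fixed number of clusters. Feature construction and annotation are omitted.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)
# Hypothetical features: 300 time windows x 64 spectral-power features,
# drawn from three latent behavioural states.
states = rng.integers(0, 3, 300)
centers = rng.standard_normal((3, 64)) * 2.0
X = centers[states] + rng.standard_normal((300, 64))

Z = linkage(X, method="ward")                 # agglomerative clustering
clusters = fcluster(Z, t=3, criterion="maxclust")
print("cluster sizes:", np.bincount(clusters)[1:])
```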
Ensemble cryo-EM elucidates the mechanism of translation fidelity
Loveland, Anna B.; Demo, Gabriel; Grigorieff, Nikolaus; Korostelev, Andrei A.
2017-01-01
Faithful gene translation depends on accurate decoding, whose structural mechanism remains a matter of debate. Ribosomes decode mRNA codons by selecting cognate aminoacyl-tRNAs delivered by EF-Tu. We present high-resolution structural ensembles of ribosomes with cognate or near-cognate aminoacyl-tRNAs delivered by EF-Tu. Both cognate and near-cognate tRNA anticodons explore the A site of an open 30S subunit, while inactive EF-Tu is separated from the 50S subunit. A transient conformation of decoding-center nucleotide G530 stabilizes the cognate codon-anticodon helix, initiating step-wise “latching” of the decoding center. The resulting 30S domain closure docks EF-Tu at the sarcin-ricin loop of the 50S subunit, activating EF-Tu for GTP hydrolysis and ensuing aminoacyl-tRNA accommodation. By contrast, near-cognate complexes fail to induce the G530 latch, thus favoring open 30S pre-accommodation intermediates with inactive EF-Tu. This work unveils long-sought structural differences between the pre-accommodation of cognate and near-cognate tRNA that elucidate the mechanism of accurate decoding. PMID:28538735
Decoding Intention at Sensorimotor Timescales
Salvaris, Mathew; Haggard, Patrick
2014-01-01
The ability to decode an individual's intentions in real time has long been a ‘holy grail’ of research on human volition. For example, a reliable method could be used to improve scientific study of voluntary action by allowing external probe stimuli to be delivered at different moments during development of intention and action. Several Brain Computer Interface applications have used motor imagery of repetitive actions to achieve this goal. These systems are relatively successful, but only if the intention is sustained over a period of several seconds; much longer than the timescales identified in psychophysiological studies for normal preparation for voluntary action. We have used a combination of sensorimotor rhythms and motor imagery training to decode intentions in a single-trial cued-response paradigm similar to those used in human and non-human primate motor control research. Decoding accuracy of over 0.83 was achieved with twelve participants. With this approach, we could decode intentions to move the left or right hand at sub-second timescales, both for choices instructed by an external stimulus and for free choices generated intentionally by the participant. The implications for volition are considered. PMID:24523855
NASA Technical Reports Server (NTRS)
Lin, Shu; Rhee, Dojun; Rajpal, Sandeep
1993-01-01
This report presents a low-complexity and high performance concatenated coding scheme for high-speed satellite communications. In this proposed scheme, the NASA Standard Reed-Solomon (RS) code over GF(2(exp 8)) is used as the outer code and the second-order Reed-Muller (RM) code of Hamming distance 8 is used as the inner code. The RM inner code has a very simple trellis structure and is decoded with the soft-decision Viterbi decoding algorithm. It is shown that the proposed concatenated coding scheme achieves an error performance which is comparable to that of the NASA TDRS concatenated coding scheme in which the NASA Standard rate-1/2 convolutional code of constraint length 7 and d sub free = 10 is used as the inner code. However, the proposed RM inner code has much smaller decoding complexity, less decoding delay, and much higher decoding speed. Consequently, the proposed concatenated coding scheme is suitable for reliable high-speed satellite communications, and it may be considered as an alternate coding scheme for the NASA TDRS system.
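To make the inner-code decoding concrete, the toy below soft-decision decodes a first-order RM(1,3) [8,4,4] code by brute-force correlation; the report's actual inner code is the larger second-order, distance-8 RM code decoded with a Viterbi search over its trellis, so this is only an illustration of the principle.

```python
# Toy illustration of soft-decision decoding of a Reed-Muller inner code.
# For brevity this uses the first-order RM(1,3) [8,4,4] code and brute-force
# correlation decoding rather than the report's second-order, distance-8 RM
# code with a Viterbi trellis search.
import itertools
import numpy as np

# Generator rows of RM(1,3): the all-ones vector and the coordinate functions
# x1, x2, x3 evaluated over all 8 points of GF(2)^3.
pts = np.array(list(itertools.product([0, 1], repeat=3)))
G = np.vstack([np.ones(8, dtype=int), pts.T])                    # shape (4, 8)
codebook = np.array([(np.array(m) @ G) % 2
                     for m in itertools.product([0, 1], repeat=4)])

def decode_soft(received):
    """received: 8 real samples (BPSK: bit 0 -> +1, bit 1 -> -1, plus noise)."""
    bpsk = 1 - 2 * codebook                          # map codewords to +/-1
    scores = bpsk @ received                         # correlation with each codeword
    return codebook[np.argmax(scores)]

rng = np.random.default_rng(5)
msg = np.array([1, 0, 1, 1])
tx = 1 - 2 * ((msg @ G) % 2)
rx = tx + 0.8 * rng.standard_normal(8)               # noisy channel
print("decoded codeword:    ", decode_soft(rx))
print("transmitted codeword:", (msg @ G) % 2)
```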
Bayesian population decoding of spiking neurons.
Gerwinn, Sebastian; Macke, Jakob; Bethge, Matthias
2009-01-01
The timing of action potentials in spiking neurons depends on the temporal dynamics of their inputs and contains information about temporal fluctuations in the stimulus. Leaky integrate-and-fire neurons constitute a popular class of encoding models, in which spike times depend directly on the temporal structure of the inputs. However, optimal decoding rules for these models have only been studied explicitly in the noiseless case. Here, we study decoding rules for probabilistic inference of a continuous stimulus from the spike times of a population of leaky integrate-and-fire neurons with threshold noise. We derive three algorithms for approximating the posterior distribution over stimuli as a function of the observed spike trains. In addition to a reconstruction of the stimulus we thus obtain an estimate of the uncertainty as well. Furthermore, we derive a 'spike-by-spike' online decoding scheme that recursively updates the posterior with the arrival of each new spike. We use these decoding rules to reconstruct time-varying stimuli represented by a Gaussian process from spike trains of single neurons as well as neural populations.
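A hedged sketch of the recursive ("spike-by-spike") posterior update idea: the code below assumes Poisson encoding neurons with known tuning curves and a static stimulus, which is much simpler than the paper's leaky integrate-and-fire model with threshold noise, but it shows how a posterior over a discretized stimulus is updated as spikes arrive.

```python
# Sketch of recursive Bayesian decoding: a posterior over a discretized
# stimulus value is updated bin by bin from population spike counts.
# Assumes Poisson encoding neurons with known tuning curves (a simplification,
# not the paper's leaky integrate-and-fire model).
import numpy as np

rng = np.random.default_rng(6)
stim_grid = np.linspace(-1, 1, 101)                  # candidate stimulus values
true_s = 0.3
n_neurons, dt, n_bins = 20, 0.01, 200

centers = np.linspace(-1, 1, n_neurons)
def rates(s):                                        # Gaussian tuning curves (Hz)
    return 5.0 + 40.0 * np.exp(-(s - centers) ** 2 / (2 * 0.3 ** 2))

log_post = np.zeros_like(stim_grid)                  # flat prior (log scale)
rate_table = np.array([rates(s) for s in stim_grid]) # (grid, neurons)

for _ in range(n_bins):
    counts = rng.poisson(rates(true_s) * dt)         # observed spikes this bin
    # Poisson log-likelihood of the counts under every candidate stimulus.
    log_post += (counts * np.log(rate_table * dt) - rate_table * dt).sum(axis=1)
    log_post -= log_post.max()                       # renormalize for stability

post = np.exp(log_post); post /= post.sum()
print("posterior mean:", (stim_grid * post).sum(), " true:", true_s)
```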
Decoding human mental states by whole-head EEG+fNIRS during category fluency task performance
NASA Astrophysics Data System (ADS)
Omurtag, Ahmet; Aghajani, Haleh; Onur Keles, Hasan
2017-12-01
Objective. Concurrent scalp electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS), which we refer to as EEG+fNIRS, promises greater accuracy than the individual modalities while remaining nearly as convenient as EEG. We sought to quantify the hybrid system’s ability to decode mental states and compare it with its unimodal components. Approach. We recorded from healthy volunteers taking the category fluency test and applied machine learning techniques to the data. Main results. EEG+fNIRS’s decoding accuracy was greater than that of its subsystems, partly due to the new type of neurovascular features made available by hybrid data. Significance. Availability of an accurate and practical decoding method has potential implications for medical diagnosis, brain-computer interface design, and neuroergonomics.
Systolic array processing of the sequential decoding algorithm
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Yao, K.
1989-01-01
A systolic array processing technique is applied to implementing the stack algorithm form of the sequential decoding algorithm. It is shown that sorting, a key function in the stack algorithm, can be efficiently realized by a special type of systolic arrays known as systolic priority queues. Compared to the stack-bucket algorithm, this approach is shown to have the advantages that the decoding always moves along the optimal path, that it has a fast and constant decoding speed and that its simple and regular hardware architecture is suitable for VLSI implementation. Three types of systolic priority queues are discussed: random access scheme, shift register scheme and ripple register scheme. The property of the entries stored in the systolic priority queue is also investigated. The results are applicable to many other basic sorting type problems.
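A software sketch of the stack algorithm with Python's heapq standing in for the systolic priority queue: a toy rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 octal) is decoded over a binary symmetric channel using the Fano metric. The systolic-array hardware realization itself is not modelled.

```python
# Software sketch of the stack algorithm; a heap stands in for the systolic
# priority queue. Toy rate-1/2, constraint-length-3 convolutional code
# (generators 7, 5 octal) over a binary symmetric channel, Fano metric.
import heapq
import math

def branch(state, u):
    """One encoder step: returns (output bits, next state)."""
    s1, s2 = state
    return (u ^ s1 ^ s2, u ^ s2), (u, s1)

def encode(bits):
    state, out = (0, 0), []
    for u in bits:
        o, state = branch(state, u)
        out.extend(o)
    return out

def stack_decode(received, n_bits, p=0.05, rate=0.5):
    good = math.log2(2 * (1 - p)) - rate             # Fano metric per matching bit
    bad = math.log2(2 * p) - rate                    # ...and per mismatching bit
    heap = [(-0.0, [], (0, 0))]                      # (negative metric, path, state)
    while heap:
        neg_m, path, state = heapq.heappop(heap)     # best partial path so far
        if len(path) == n_bits:
            return path
        for u in (0, 1):                             # extend by one information bit
            out, nxt = branch(state, u)
            rx = received[2 * len(path):2 * len(path) + 2]
            m = sum(good if o == r else bad for o, r in zip(out, rx))
            heapq.heappush(heap, (neg_m - m, path + [u], nxt))
    return None

msg = [1, 0, 1, 1, 0, 0, 1, 0]
rx = encode(msg)
rx[3] ^= 1                                           # one channel error
print("decoded:", stack_decode(rx, len(msg)))
print("message:", msg)
```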
Djordjevic, Ivan B; Vasic, Bane
2006-05-29
A maximum a posteriori probability (MAP) symbol decoding supplemented with iterative decoding is proposed as an effective means for suppression of intrachannel nonlinearities. The MAP detector, based on the Bahl-Cocke-Jelinek-Raviv algorithm, operates on the channel trellis, a dynamical model of intersymbol interference, and provides soft-decision outputs processed further in an iterative decoder. A dramatic performance improvement is demonstrated. The main reason is that the conventional maximum-likelihood sequence detector based on the Viterbi algorithm provides only hard-decision outputs, hence preventing soft iterative decoding. The proposed scheme operates very well in the presence of strong intrachannel intersymbol interference, when other advanced forward error correction schemes fail, and it is also suitable for a 40 Gb/s upgrade over existing 10 Gb/s infrastructure.
FEC decoder design optimization for mobile satellite communications
NASA Technical Reports Server (NTRS)
Roy, Ashim; Lewi, Leng
1990-01-01
A new telecommunications service for location determination via satellite is being proposed for the continental USA and Europe, which provides users with the capability to find the location of, and communicate from, a moving vehicle to a central hub and vice versa. This communications system is expected to operate in an extremely noisy channel in the presence of fading. In order to achieve high levels of data integrity, it is essential to employ forward error correcting (FEC) encoding and decoding techniques in such mobile satellite systems. A constraint length k = 7 FEC decoder has been implemented in a single chip for such systems. The single chip implementation of the maximum likelihood decoder helps to minimize the cost, size, and power consumption, and improves the bit error rate (BER) performance of the mobile earth terminal (MET).
Hybrid and concatenated coding applications.
NASA Technical Reports Server (NTRS)
Hofman, L. B.; Odenwalder, J. P.
1972-01-01
Results are presented of a study to evaluate the performance and implementation complexity of a concatenated and a hybrid coding system for moderate-speed deep-space applications. It is shown that with a total complexity of less than three times that of the basic Viterbi decoder, concatenated coding improves a constraint length 8 rate 1/3 Viterbi decoding system by 1.1 and 2.6 dB at bit error probabilities of 10(exp -4) and 10(exp -8), respectively. With a somewhat greater total complexity, the hybrid coding system is shown to obtain a 0.9-dB computational performance improvement over the basic rate 1/3 sequential decoding system. Although substantial, these complexities are much less than those required to achieve the same performances with more complex Viterbi or sequential decoder systems.
Coding and decoding in a point-to-point communication using the polarization of the light beam.
Kavehvash, Z; Massoumian, F
2008-05-10
A new technique for coding and decoding of optical signals through the use of polarization is described. In this technique the concept of coding is translated to polarization. In other words, coding is done in such a way that each code represents a unique polarization. This is done by implementing a binary pattern on a spatial light modulator in such a way that the reflected light has the required polarization. Decoding is done by the detection of the received beam's polarization. By linking the concept of coding to polarization we can use each of these concepts in measuring the other one, attaining some gains. In this paper the construction of a simple point-to-point communication where coding and decoding is done through polarization will be discussed.
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.
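The report's specific significance method is not reproduced here, but a standard exact (Clopper-Pearson) binomial confidence interval illustrates the underlying issue: a handful of observed decoding errors pins down the true error probability only very loosely.

```python
# Sketch: exact (Clopper-Pearson) confidence interval for an error probability
# estimated from very few observed decoding errors. A standard method used for
# illustration, not the specific procedure of the report.
from scipy.stats import beta

def clopper_pearson(errors, trials, conf=0.95):
    a = 1.0 - conf
    lo = 0.0 if errors == 0 else beta.ppf(a / 2, errors, trials - errors + 1)
    hi = 1.0 if errors == trials else beta.ppf(1 - a / 2, errors + 1, trials - errors)
    return lo, hi

# e.g. 3 decoding errors observed in 10 million simulated frames
lo, hi = clopper_pearson(3, 10_000_000)
print(f"point estimate 3e-07, 95% CI [{lo:.2e}, {hi:.2e}]")
```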
libgapmis: extending short-read alignments
2013-01-01
Background A wide variety of short-read alignment programmes have been published recently to tackle the problem of mapping millions of short reads to a reference genome, focusing on different aspects of the procedure such as time and memory efficiency, sensitivity, and accuracy. These tools allow for a small number of mismatches in the alignment; however, their ability to allow for gaps varies greatly, with many performing poorly or not allowing them at all. The seed-and-extend strategy is applied in most short-read alignment programmes. After aligning a substring of the reference sequence against the high-quality prefix of a short read--the seed--an important problem is to find the best possible alignment between a substring of the reference sequence succeeding the seed and the remaining low-quality suffix of the read--the extend step. The fact that the reads are rather short and that the gap occurrence frequency observed in various studies is rather low suggests that aligning (parts of) those reads with a single gap is in fact desirable. Results In this article, we present libgapmis, a library for extending pairwise short-read alignments. Apart from the standard CPU version, it includes ultrafast SSE- and GPU-based implementations. libgapmis is based on an algorithm computing a modified version of the traditional dynamic-programming matrix for sequence alignment. Extensive experimental results demonstrate that the functions of the CPU version provided in this library accelerate the computations by a factor of 20 compared to other programmes. The analogous SSE- and GPU-based implementations accelerate the computations by a factor of 6 and 11, respectively, compared to the CPU version. The library also provides the user the flexibility to split the read into fragments, based on the observed gap occurrence frequency and the length of the read, thereby allowing for a variable, but bounded, number of gaps in the alignment. Conclusions We present libgapmis, a library for extending pairwise short-read alignments. We show that libgapmis is better-suited and more efficient than existing algorithms for this task. The importance of our contribution is underlined by the fact that the provided functions may be seamlessly integrated into any short-read alignment pipeline. The open-source code of libgapmis is available at http://www.exelixis-lab.org/gapmis. PMID:24564250
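A brute-force reference for the single-gap "extend" problem described above (align the reference substring against the read suffix end-to-end, allowing exactly one run of gap characters in the shorter sequence); libgapmis itself uses an optimized dynamic-programming matrix plus SSE/GPU kernels, so this naive version only makes the problem statement concrete. The example sequences are hypothetical.

```python
# Brute-force reference for the single-gap extension problem: align two
# sequences end-to-end allowing exactly one run of gap characters in the
# shorter one, minimizing mismatches. Only meant to illustrate the problem;
# libgapmis solves it with an optimized dynamic-programming matrix.
def single_gap_align(a, b):
    # Ensure b is the shorter sequence; the single gap goes into b.
    if len(a) < len(b):
        a, b = b, a
    gap = len(a) - len(b)
    best = None
    for pos in range(len(b) + 1):                  # try every gap position in b
        padded = b[:pos] + "-" * gap + b[pos:]
        mism = sum(1 for x, y in zip(a, padded) if y != "-" and x != y)
        if best is None or mism < best[0]:
            best = (mism, padded)
    return best                                    # (mismatches, gapped alignment)

ref_suffix = "ACGTTAGCCA"                          # hypothetical reference window
read_suffix = "ACGAGCCA"                           # low-quality read suffix, 2 nt shorter
print(single_gap_align(ref_suffix, read_suffix))
```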
DNA MemoChip: Long-Term and High Capacity Information Storage and Select Retrieval
Wang, Fuzhou; Kream, Richard M.
2018-01-01
Over the course of history, human beings have never stopped seeking effective methods for information storage. From rocks to paper, and through the past several decades of using computer disks, USB sticks, and on to the thin silicon “chips” and “cloud” storage of today, it would seem that we have reached an era of efficiency for managing innumerable and ever-expanding data. Astonishingly, when tracing this technological path, one realizes that our ancient methods of informational storage far outlast paper (10,000 vs. 1,000 years, respectively), let alone the computer-based memory devices that only last, on average, 5 to 25 years. During this time of fast-paced information generation, it becomes increasingly difficult for current storage methods to retain such massive amounts of data, and to maintain appropriate speeds with which to retrieve it, especially when in demand by a large number of users. Others have proposed that DNA-based information storage provides a way forward for information retention as a result of its temporal stability. It is now evident that DNA represents a potentially economical and sustainable mechanism for storing information, as demonstrated by its decoding from a 700,000 year-old horse genome. The fact that the human genome is present in a cell, containing also the varied mitochondrial genome, indicates DNA’s great potential for large data storage in a ‘smaller’ space. PMID:29481548
Securing information display by use of visual cryptography.
Yamamoto, Hirotsugu; Hayasaki, Yoshio; Nishida, Nobuo
2003-09-01
We propose a secure display technique based on visual cryptography. The proposed technique ensures the security of visual information. The display employs a decoding mask based on visual cryptography. Without the decoding mask, the displayed information cannot be viewed. The viewing zone is limited by the decoding mask so that only one person can view the information. We have developed a set of encryption codes to maintain the designed viewing zone and have demonstrated a display that provides a limited viewing zone.
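A sketch of the classic Naor-Shamir (2,2) visual cryptography scheme with 2x2 subpixel expansion, illustrating how a decoding mask reveals the secret only when physically overlaid; the paper's display-specific encryption code set is not reproduced.

```python
# Classic (2,2) visual cryptography sketch: each secret pixel is expanded into
# a 2x2 block; one share (the decoding mask) is random, and the displayed share
# is identical to it for white pixels and complementary for black pixels.
# Physically stacking the shares (logical OR of black subpixels) reveals the
# secret. The paper's display-specific code set is not reproduced here.
import numpy as np

rng = np.random.default_rng(7)
secret = np.array([[0, 1, 0],
                   [1, 1, 1],
                   [0, 1, 0]])                       # 1 = black pixel

patterns = np.array([[[1, 0], [0, 1]],               # two complementary 2x2 patterns
                     [[0, 1], [1, 0]]])

h, w = secret.shape
share1 = np.zeros((2 * h, 2 * w), dtype=int)          # the decoding mask
share2 = np.zeros_like(share1)
for i in range(h):
    for j in range(w):
        k = rng.integers(0, 2)                        # random pattern choice
        share1[2*i:2*i+2, 2*j:2*j+2] = patterns[k]
        # white pixel: same pattern; black pixel: complementary pattern
        share2[2*i:2*i+2, 2*j:2*j+2] = patterns[k if secret[i, j] == 0 else 1 - k]

stacked = share1 | share2                             # overlaying the mask
# Black secret pixels become fully black 2x2 blocks; white ones stay half black.
print((stacked.reshape(h, 2, w, 2).sum(axis=(1, 3)) == 4).astype(int))
```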
Decoding algorithm for vortex communications receiver
NASA Astrophysics Data System (ADS)
Kupferman, Judy; Arnon, Shlomi
2018-01-01
Vortex light beams can provide a tremendous alphabet for encoding information. We derive a symbol decoding algorithm for a direct detection matrix detector vortex beam receiver using Laguerre Gauss (LG) modes, and develop a mathematical model of symbol error rate (SER) for this receiver. We compare SER as a function of signal to noise ratio (SNR) for our algorithm and for the Pearson correlation algorithm. To our knowledge, this is the first comprehensive treatment of a decoding algorithm of a matrix detector for an LG receiver.
Linear-time general decoding algorithm for the surface code
NASA Astrophysics Data System (ADS)
Darmawan, Andrew S.; Poulin, David
2018-05-01
A quantum error correcting protocol can be substantially improved by taking into account features of the physical noise process. We present an efficient decoder for the surface code which can account for general noise features, including coherences and correlations. We demonstrate that the decoder significantly outperforms the conventional matching algorithm on a variety of noise models, including non-Pauli noise and spatially correlated noise. The algorithm is based on an approximate calculation of the logical channel using a tensor-network description of the noisy state.
Investigation of the Use of Erasures in a Concatenated Coding Scheme
NASA Technical Reports Server (NTRS)
Kwatra, S. C.; Marriott, Philip J.
1997-01-01
A new method for declaring erasures in a concatenated coding scheme is investigated. This method is used with the rate 1/2 K = 7 convolutional code and the (255, 223) Reed Solomon code. Errors and erasures Reed Solomon decoding is used. The erasure method proposed uses a soft output Viterbi algorithm and information provided by decoded Reed Solomon codewords in a deinterleaving frame. The results show that a gain of 0.3 dB is possible using a minimum amount of decoding trials.
Multi-level trellis coded modulation and multi-stage decoding
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Wu, Jiantian; Lin, Shu
1990-01-01
Several constructions for multi-level trellis codes are presented and many codes with better performance than previously known codes are found. These codes provide a flexible trade-off between coding gain, decoding complexity, and decoding delay. New multi-level trellis coded modulation schemes using generalized set partitioning methods are developed for Quadrature Amplitude Modulation (QAM) and Phase Shift Keying (PSK) signal sets. New rotationally invariant multi-level trellis codes which can be combined with differential encoding to resolve phase ambiguity are presented.
Decoding the Disciplines as a Hermeneutic Practice
ERIC Educational Resources Information Center
Yeo, Michelle
2017-01-01
This chapter argues that expert practice is an inquiry that surfaces a hermeneutic relationship between theory, practice, and the world, with implications for new lines of questioning in the Decoding interview.