Adaptive decoding of convolutional codes
NASA Astrophysics Data System (ADS)
Hueske, K.; Geldmacher, J.; Götze, J.
2007-06-01
Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded with the Viterbi decoder. On the one hand, the Viterbi decoder is an optimum maximum likelihood decoder, i.e. it yields the most probable transmitted code sequence. On the other hand, the computational complexity of the algorithm depends only on the code used, not on the number of transmission errors. To reduce the complexity of the decoding process under good transmission conditions, an alternative syndrome-based decoder is presented. The reduction in complexity is achieved by two different approaches, syndrome zero-sequence deactivation and path metric equalization. The two approaches enable an easy adaptation of the decoding complexity to different transmission conditions, resulting in a trade-off between decoding complexity and error correction performance.
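As a concrete point of reference for the Viterbi baseline discussed above, here is a minimal hard-decision Viterbi decoder for a small rate-1/2, constraint-length-3 convolutional code (generators (7, 5) in octal). The code and all parameters are illustrative assumptions; this is not the paper's syndrome-based decoder:

```python
# Hard-decision Viterbi decoding for the rate-1/2, constraint-length-3
# convolutional code with generator polynomials (7, 5) in octal.
G = (0b111, 0b101)  # taps act on the 3-bit register [current, prev1, prev2]

def encode(bits):
    state = 0  # two previous input bits
    out = []
    for b in bits:
        reg = (b << 2) | state
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(received, nbits):
    INF = float("inf")
    metric = {0: 0}   # best Hamming path metric per state
    paths = {0: []}   # decoded input bits per state
    for t in range(nbits):
        r = received[2 * t: 2 * t + 2]
        new_metric, new_paths = {}, {}
        for state, m in metric.items():
            for b in (0, 1):
                reg = (b << 2) | state
                expect = [bin(reg & g).count("1") & 1 for g in G]
                cost = m + sum(x != y for x, y in zip(expect, r))
                ns = reg >> 1
                if cost < new_metric.get(ns, INF):
                    new_metric[ns] = cost
                    new_paths[ns] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    best = min(metric, key=metric.get)
    return paths[best]
```

Note that, as the abstract observes, the work per decoded bit here is fixed by the code (4 states times 2 branches per step) regardless of how many channel errors actually occurred, which is exactly the property the syndrome-based alternative exploits.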
FPGA implementation of low complexity LDPC iterative decoder
NASA Astrophysics Data System (ADS)
Verma, Shivani; Sharma, Sanjay
2016-07-01
Low-density parity-check (LDPC) codes, proposed by Gallager, emerged as a class of codes that can yield very good performance on the additive white Gaussian noise channel as well as on the binary symmetric channel. LDPC codes have gained considerable importance due to their capacity-achieving property and excellent performance on noisy channels. The belief propagation (BP) algorithm and its approximations, most notably min-sum, are popular iterative decoding algorithms used for LDPC and turbo codes. The trade-off between hardware complexity and decoding throughput is a critical factor in the implementation of a practical decoder. This article presents an introduction to LDPC codes and their various decoding algorithms, followed by the realisation of an LDPC decoder using a simplified message passing algorithm and a partially parallel decoder architecture. The simplified message passing algorithm is proposed as a trade-off between low decoding complexity and decoder performance; it greatly reduces the routing and check-node complexity of the decoder. The partially parallel decoder architecture offers high speed and reduced complexity. The improved design of the decoder achieves a maximum symbol throughput of 92.95 Mbps with a maximum of 18 decoding iterations. The article presents an implementation of a 9216-bit, rate-1/2, (3, 6) LDPC decoder on the Xilinx XC3SD3400A device from the Spartan-3A DSP family.
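To make the min-sum approximation mentioned above concrete, here is the check-node update in miniature: each outgoing message carries the product of the signs and the minimum magnitude of all the *other* incoming messages. A sketch of one step only; the article's simplified message passing algorithm is not reproduced here:

```python
# Min-sum check-node update: for each edge, combine the signs and take the
# minimum magnitude over the other incoming variable-to-check messages.
def check_node_min_sum(incoming):
    out = []
    for i in range(len(incoming)):
        others = incoming[:i] + incoming[i + 1:]
        sign = 1
        for v in others:
            if v < 0:
                sign = -sign
        out.append(sign * min(abs(v) for v in others))
    return out
```

This replaces the hyperbolic-tangent products of full BP with comparisons and sign logic, which is what makes min-sum attractive in hardware.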
Efficiently sphere-decodable physical layer transmission schemes for wireless storage networks
NASA Astrophysics Data System (ADS)
Lu, Hsiao-Feng Francis; Barreal, Amaro; Karpuk, David; Hollanti, Camilla
2016-12-01
Three transmission schemes over a new type of multiple-access channel (MAC) model with inter-source communication links are proposed and investigated in this paper. This new channel model is well motivated by, e.g., wireless distributed storage networks, where communication to repair a lost node takes place from helper nodes to a repairing node over a wireless channel. Since in many wireless networks nodes can come and go in an arbitrary manner, there must be an inherent capability of inter-node communication between every pair of nodes. Assuming that communication is possible between every pair of helper nodes, the newly proposed schemes are based on various smart time-sharing and relaying strategies. In other words, certain helper nodes are regarded as relays, thereby converting the conventional uncooperative multiple-access channel into a multiple-access relay channel (MARC). The diversity-multiplexing gain tradeoff (DMT) of the system, together with efficient sphere-decodability and low structural complexity in terms of the number of antennas required at each end, is used as the main design objective. While the optimal DMT for the new channel model remains fully open, it is shown that the proposed schemes outperform the DMT of the simple time-sharing protocol and, in some cases, even the optimal uncooperative MAC DMT. While a wireless distributed storage network is used as a motivating example throughout the paper, the MAC transmission techniques proposed here are completely general and as such applicable to any MAC communication with inter-source communication links.
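Sphere decodability, one of the design objectives above, can be illustrated with a toy depth-first sphere decoder that enumerates symbols level by level and prunes any branch whose partial distance already exceeds the best full solution found so far. The 2x2 channel and BPSK symbol set below are illustrative assumptions, unrelated to the paper's schemes:

```python
import numpy as np

def sphere_decode(y, H, symbols=(-1.0, 1.0)):
    """Depth-first sphere decoder for y = H s + n over a finite symbol set."""
    Q, R = np.linalg.qr(H)       # rotate so the distance is triangular
    z = Q.T @ y
    n = H.shape[1]
    best = {"s": None, "d": np.inf}

    def search(level, partial, dist):
        if dist >= best["d"]:
            return               # prune: already outside the current sphere
        if level < 0:
            best["s"], best["d"] = partial.copy(), dist
            return
        for sym in symbols:
            partial[level] = sym
            # residual at this level, given the symbols chosen at deeper levels
            r = z[level] - R[level, level:] @ partial[level:]
            search(level - 1, partial, dist + r * r)

    search(n - 1, np.zeros(n), 0.0)
    return best["s"]

# toy usage: 2x2 real channel, BPSK, mild noise
H = np.array([[1.0, 0.3], [0.2, 0.9]])
y = H @ np.array([1.0, -1.0]) + np.array([0.05, -0.02])
s_hat = sphere_decode(y, H)
```

The decoder always returns the maximum-likelihood point; the pruning only changes how much of the tree is visited, which is the source of its average-case efficiency.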
NASA Astrophysics Data System (ADS)
Lei, Ted Chih-Wei; Tseng, Fan-Shuo
2017-07-01
This paper addresses the problem of the high computational complexity of decoding in traditional Wyner-Ziv video coding (WZVC). The key focus is the migration of two traditionally computationally complex encoder algorithms, namely motion estimation and mode decision. In order to reduce the computational burden in this process, the proposed architecture adopts the partial boundary matching algorithm and four flexible types of block mode decision at the decoder. This approach does away with the need for motion estimation and mode decision at the encoder. The experimental results show that the proposed padding block-based WZVC not only decreases decoder complexity to approximately one hundredth of that of state-of-the-art DISCOVER decoding but also outperforms the DISCOVER codec by up to 3 to 4 dB.
Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation
Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel
2013-01-01
Distributed video coding (DVC) is rapidly increasing in popularity because it shifts complexity from the encoder to the decoder while, at least in theory, incurring no loss in compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, existing correlation estimation methods in DVC fall into two main types: pre-estimation, where estimation is completed before decoding, and on-the-fly (OTF) estimation, where the estimate is refined iteratively during decoding. As changes between frames may be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF, jointly with the decoding of the factor graph-based DVC code. Among approximate inference methods, EP generally offers a better trade-off between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance at significantly lower complexity than sampling methods. PMID:23750314
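The pre-estimation versus on-the-fly distinction above can be illustrated with a far simpler stand-in than EP: the WZ-SI residual is commonly modelled as Laplacian, and its scale parameter can be re-estimated as decoding reveals more residuals. Everything below (the Laplacian model, sample sizes, values) is an illustrative assumption, not the authors' EP method:

```python
import numpy as np

def laplacian_scale(residuals):
    # Maximum-likelihood estimate of the Laplacian scale b = E[|x|].
    return float(np.mean(np.abs(residuals)))

rng = np.random.default_rng(1)
true_b = 2.0
residual = rng.laplace(0.0, true_b, size=10000)  # stand-in WZ-SI residuals

# Pre-estimation: only a small pilot set is available before decoding starts.
b_pre = laplacian_scale(residual[:100])

# OTF refinement: the estimate improves as decoding reveals more residuals.
estimates = [laplacian_scale(residual[:n]) for n in (100, 1000, 10000)]
```

The point of the sketch is only that the OTF estimate converges toward the true correlation parameter as more data becomes available, which is the behaviour the EP scheme obtains jointly with decoding.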
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3
NASA Technical Reports Server (NTRS)
Lin, Shu
1998-01-01
Decoding algorithms based on the trellis representation of a code (block or convolutional) drastically reduce decoding complexity. The best known and most commonly used trellis-based decoding algorithm is the Viterbi algorithm, a maximum likelihood decoding algorithm. Convolutional codes with Viterbi decoding have been widely used for error control in digital communications over the last two decades. This chapter is concerned with the application of the Viterbi decoding algorithm to linear block codes. First, the Viterbi algorithm is presented. Then, optimum sectionalization of a trellis to minimize the computational complexity of a Viterbi decoder is discussed and an algorithm is presented. Some design issues for IC (integrated circuit) implementation of a Viterbi decoder are considered and discussed. Finally, a new decoding algorithm based on the principle of compare-select-add is presented. This new algorithm can be applied to both block and convolutional codes and is more efficient than the conventional Viterbi algorithm based on the add-compare-select principle. It is particularly efficient for rate-1/n antipodal convolutional codes and their high-rate punctured codes, reducing computational complexity by one third compared with the Viterbi algorithm.
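The trellis representation of a block code mentioned above can be sketched with the syndrome (Wolf) trellis of the (7,4) Hamming code: states are partial syndromes, and only paths that end in the zero syndrome correspond to codewords. A minimal soft-decision sketch under BPSK, not the chapter's sectionalized design:

```python
import numpy as np
from itertools import product

# Parity-check matrix of the (7,4) Hamming code; column j is the binary
# representation of j+1 (row 0 = least significant bit).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def wolf_trellis_decode(r):
    """Soft-decision Viterbi decoding on the syndrome trellis (BPSK: 0->+1)."""
    n = H.shape[1]
    INF = float("inf")
    metric = {0: 0.0}   # best squared distance of a path reaching syndrome s
    paths = {0: []}
    for i in range(n):
        col = int("".join(map(str, H[:, i])), 2)  # syndrome contribution of bit i
        new_m, new_p = {}, {}
        for s, m in metric.items():
            for bit in (0, 1):
                ns = s ^ (col if bit else 0)
                cost = m + (r[i] - (1 - 2 * bit)) ** 2
                if cost < new_m.get(ns, INF):
                    new_m[ns], new_p[ns] = cost, paths[s] + [bit]
        metric, paths = new_m, new_p
    return paths[0]  # zero final syndrome = valid codeword

# noiseless usage: decoding the BPSK image of a codeword recovers it exactly
decoded = wolf_trellis_decode(1.0 - 2.0 * np.array([1, 1, 0, 1, 0, 0, 1], dtype=float))
```

Because every codeword path and no non-codeword path survives to the zero-syndrome state, this search is exact maximum likelihood decoding, with at most 2^(n-k) = 8 states per depth.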
NASA Astrophysics Data System (ADS)
Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi
2017-12-01
We analyze the achievable information rates (AIRs) for coded modulation schemes with QAM constellations with both bit-wise and symbol-wise decoders, corresponding, respectively, to the case where a binary code is used in combination with a higher-order modulation under the bit-interleaved coded modulation (BICM) paradigm, and to the case where a nonbinary code over a field matched to the constellation size is used. In particular, we consider hard decision decoding, which is the preferable option for fiber-optic communication systems where decoding complexity is a concern. Recently, Liga et al. analyzed the AIRs for bit-wise and symbol-wise decoders considering what the authors called a "hard decision decoder" which, however, exploits soft information from the transition probabilities of the discrete-input, discrete-output channel resulting from hard detection. As such, the complexity of that decoder is essentially the same as that of a soft decision decoder. In this paper, we instead analyze the AIRs for the standard hard decision decoder, commonly used in practice, where decoding is based on the Hamming distance metric. We show that if standard hard decision decoding is used, bit-wise decoders yield significantly higher AIRs than symbol-wise decoders. As a result, contrary to the conclusion by Liga et al., binary decoders together with the BICM paradigm are preferable for spectrally efficient fiber-optic systems. We also design binary and nonbinary staircase codes and show that, in agreement with the AIRs, binary codes yield better performance.
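The bit-wise hard-decision AIR described above can be estimated numerically for a toy constellation: per bit level i, take AIR_i = 1 - h(p_i), where p_i is the hard-decision bit error rate of that level, and sum over levels. The sketch below uses Gray-mapped 4-PAM and Monte Carlo estimates; all parameters are illustrative and this is not the paper's exact computation:

```python
import numpy as np

def h2(p):
    """Binary entropy function, clipped for numerical safety."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

rng = np.random.default_rng(0)
levels = np.array([-3.0, -1.0, 1.0, 3.0])
gray = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])  # Gray label per level

sigma = 0.5
idx = rng.integers(0, 4, size=200000)
y = levels[idx] + sigma * rng.normal(size=idx.size)

det = np.abs(y[:, None] - levels[None, :]).argmin(axis=1)  # hard detection
tx_bits, rx_bits = gray[idx], gray[det]
p = (tx_bits != rx_bits).mean(axis=0)       # per-level bit error rates
air_bitwise = float(np.sum(1.0 - h2(p)))    # bit-wise hard-decision AIR
```

At this SNR the two bit levels see different error rates (the sign bit is better protected than the amplitude bit), which is exactly the per-level structure the BICM bit-wise decoder exploits.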
Multi-stage decoding for multi-level block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao
1991-01-01
Various types of multistage decoding for multilevel block modulation codes are discussed, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum-likelihood or bounded-distance. Error performance is analyzed for a memoryless additive channel under the various types of multistage decoding, and upper bounds on the probability of an incorrect decoding are derived. It was found that, if the component codes of a multilevel modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. It was also found that the difference in performance between suboptimum multistage soft-decision maximum likelihood decoding of a modulation code and single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10(exp -6). Multistage decoding of multilevel modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.
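The flavour of the upper bounds mentioned above can be seen in the elementary bound for bounded-distance decoding on a binary symmetric channel: a decoder that corrects up to t = (d-1)//2 errors can only fail when more than t errors occur. This is the generic textbook bound, not one of the paper's derived bounds:

```python
from math import comb

def bounded_distance_error_bound(n, d, p):
    """P(incorrect decoding) <= P(more than t channel errors), t = (d-1)//2."""
    t = (d - 1) // 2
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))
```

Increasing the minimum distance d of a component code tightens this bound, which is the lever multistage designs pull when assigning stronger codes to the less protected levels.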
Good Trellises for IC Implementation of Viterbi Decoders for Linear Block Codes
NASA Technical Reports Server (NTRS)
Moorthy, Hari T.; Lin, Shu; Uehara, Gregory T.
1997-01-01
This paper investigates trellis structures of linear block codes for the integrated circuit (IC) implementation of Viterbi decoders capable of achieving high decoding speed while satisfying a constraint on the structural complexity of the trellis in terms of the maximum number of states at any particular depth. Only uniform sectionalizations of the code trellis diagram are considered. An upper bound on the number of parallel and structurally identical (or isomorphic) subtrellises in a proper trellis for a code without exceeding the maximum state complexity of the minimal trellis of the code is first derived. Parallel structures of trellises with various section lengths for binary BCH and Reed-Muller (RM) codes of lengths 32 and 64 are analyzed. Next, the complexity of IC implementation of a Viterbi decoder based on an L-section trellis diagram for a code is investigated. A structural property of a Viterbi decoder called add-compare-select (ACS)-connectivity, which is related to state connectivity, is introduced. This parameter affects the complexity of wire-routing (interconnections within the IC). The effect of five parameters, namely: (1) effective computational complexity; (2) complexity of the ACS-circuit; (3) traceback complexity; (4) ACS-connectivity; and (5) branch complexity of a trellis diagram, on the very large scale integration (VLSI) complexity of a Viterbi decoder is investigated. It is shown that an IC implementation of a Viterbi decoder based on a nonminimal trellis requires less area and is capable of operation at higher speed than one based on the minimal trellis when the commonly used ACS-array architecture is considered.
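The maximum-state constraint discussed above is governed by the state-complexity profile of the minimal trellis, which for a small code can be computed directly from the dimensions of the past and future subcodes: s_i = k - k_past(i) - k_future(i). A brute-force sketch for the (7,4) Hamming code, chosen for size (not one of the paper's length-32/64 codes):

```python
import numpy as np
from itertools import product

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
n, k = 7, 4

def gf2_rank(vectors):
    """Rank over GF(2) using an MSB-indexed XOR basis."""
    basis = {}
    for v in vectors:
        while v:
            b = v.bit_length()
            if b not in basis:
                basis[b] = v
                break
            v ^= basis[b]
    return len(basis)

codewords = [c for c in product((0, 1), repeat=n) if not (H @ c % 2).any()]
to_int = lambda c: int("".join(map(str, c)), 2)

profile = []
for i in range(n + 1):
    past = [c for c in codewords if not any(c[i:])]    # supported on first i bits
    future = [c for c in codewords if not any(c[:i])]  # supported on last n-i bits
    s_i = k - gf2_rank(map(to_int, past)) - gf2_rank(map(to_int, future))
    profile.append(s_i)
```

The profile starts and ends at dimension 0 and never exceeds min(k, n-k); sectionalization, as studied in the paper, then trades the number of sections against the branch and state counts this profile dictates.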
NASA Astrophysics Data System (ADS)
Pan, Xiaolong; Liu, Bo; Zheng, Jianglong; Tian, Qinghua
2016-08-01
We propose and demonstrate a low-complexity Reed-Solomon-based low-density parity-check (RS-LDPC) code with an adaptive puncturing decoding algorithm for elastic optical transmission systems. Parts of the received codeword and the corresponding columns in the parity-check matrix can be punctured to reduce calculation complexity by adapting the parity-check matrix during the decoding process. The results show that the complexity of the proposed decoding algorithm is reduced by 30% compared with the regular RS-LDPC system. The optimized code rate of the RS-LDPC code can be obtained after five iterations.
Bandwidth efficient coding for satellite communications
NASA Technical Reports Server (NTRS)
Lin, Shu; Costello, Daniel J., Jr.; Miller, Warner H.; Morakis, James C.; Poland, William B., Jr.
1992-01-01
An error control coding scheme was devised to achieve large coding gain and high reliability by using coded modulation with reduced decoding complexity. To achieve a 3 to 5 dB coding gain with moderate reliability, the decoding complexity is quite modest. In fact, to achieve a 3 dB coding gain, the decoding is quite simple, whether trellis coded modulation or block coded modulation is used. However, to achieve coding gains exceeding 5 dB, the decoding complexity increases drastically, and the implementation of the decoder becomes very expensive and impractical. We propose the use of coded modulation in conjunction with concatenated (or cascaded) coding. A good short bandwidth-efficient modulation code is used as the inner code and a relatively powerful Reed-Solomon code is used as the outer code. With properly chosen inner and outer codes, a concatenated coded modulation scheme not only can achieve large coding gains and high reliability with good bandwidth efficiency but also can be practically implemented. This combination of coded modulation and concatenated coding offers a way of achieving the best of three worlds: reliability and coding gain, bandwidth efficiency, and decoding complexity.
Distributed Coding/Decoding Complexity in Video Sensor Networks
Cordeiro, Paulo J.; Assunção, Pedro
2012-01-01
Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large-scale environments, which include video coding, transmission, and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. A method to reduce the decoding complexity, suitable for system-on-chip implementation, is then proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and that its inclusion in the VSN infrastructure provides an additional level of complexity control functionality. PMID:22736972
Hybrid and concatenated coding applications.
NASA Technical Reports Server (NTRS)
Hofman, L. B.; Odenwalder, J. P.
1972-01-01
Results are presented of a study evaluating the performance and implementation complexity of concatenated and hybrid coding systems for moderate-speed deep-space applications. It is shown that, with a total complexity of less than three times that of the basic Viterbi decoder, concatenated coding improves a constraint-length-8, rate-1/3 Viterbi decoding system by 1.1 and 2.6 dB at bit error probabilities of 10^-4 and 10^-8, respectively. With a somewhat greater total complexity, the hybrid coding system is shown to obtain a 0.9-dB computational performance improvement over the basic rate-1/3 sequential decoding system. Although substantial, these complexities are much less than those required to achieve the same performance with more complex Viterbi or sequential decoder systems.
Decoding the Voluntourism Process: A Case Study of the Pay It Forward Tour
ERIC Educational Resources Information Center
Bailey, Andrew W.; Fernando, Irene K.
2011-01-01
"Voluntourism" refers to the use of "discretionary time and income to travel out of the sphere of regular activity to assist others in need" (McGehee & Santos, 2005, p. 760). These experiences have been shown to raise consciousness and increase interest in activism (McGehee, 2002; Wearing, 2001) and to build pro-social…
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
For long linear block codes, maximum likelihood decoding based on full code trellises would be very hard, if not impossible, to implement. In this case, we may wish to trade error performance for a reduction in decoding complexity. Suboptimum soft-decision decoding of a linear block code based on a low-weight subtrellis can be devised to provide an effective trade-off between error performance and decoding complexity. This chapter presents such a suboptimum decoding algorithm for linear block codes. The algorithm is iterative in nature and based on an optimality test. It has the following important features: (1) a simple method to generate a sequence of candidate codewords, one at a time, for testing; (2) a sufficient condition for testing a candidate codeword for optimality; and (3) a low-weight subtrellis search for finding the most likely (ML) codeword.
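The candidate-generation idea in feature (1) above resembles Chase-style decoding, which can be sketched as follows: flip subsets of the least reliable hard-decision bits, hard-decode each pattern, and keep the candidate closest to the received vector. The chapter's sufficient optimality condition and low-weight subtrellis search are not reproduced here, and the (7,4) Hamming code is an illustrative choice:

```python
import numpy as np
from itertools import combinations

# Parity-check matrix of the (7,4) Hamming code; column j encodes j+1 in
# binary (row 0 = least significant bit), so the syndrome value points at
# the single error position directly.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def chase_decode(r, n_flip=2):
    """Chase-style list decoding over BPSK observations r (+1 means bit 0)."""
    hard = (r < 0).astype(int)
    weak = np.argsort(np.abs(r))[:n_flip]   # least reliable positions
    best, best_d = None, np.inf
    for m in range(n_flip + 1):
        for pos in combinations(weak, m):
            cand = hard.copy()
            cand[list(pos)] ^= 1
            syn = int(np.dot([1, 2, 4], H @ cand % 2))
            if syn:
                cand[syn - 1] ^= 1          # single-error syndrome correction
            d = float(np.sum((r - (1 - 2 * cand)) ** 2))
            if d < best_d:
                best, best_d = cand.copy(), d
    return best
```

Every candidate the loop produces is a codeword, so the returned word is the best of a small list rather than of the full code, which is exactly the performance/complexity trade the chapter's algorithm refines with its optimality test.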
Deep Learning Methods for Improved Decoding of Linear Codes
NASA Astrophysics Data System (ADS)
Nachmani, Eliya; Marciano, Elad; Lugosch, Loren; Gross, Warren J.; Burshtein, David; Be'ery, Yair
2018-02-01
The problem of low complexity, close to optimal, channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly fewer parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close to optimal decoder of short BCH codes.
Viterbi decoding for satellite and space communication.
NASA Technical Reports Server (NTRS)
Heller, J. A.; Jacobs, I. M.
1971-01-01
Convolutional coding and Viterbi decoding, along with binary phase-shift-keyed modulation, are presented as an efficient system for reliable communication on power-limited satellite and space channels. Performance results, obtained theoretically and through computer simulation, are given for optimum short-constraint-length codes over a range of code constraint lengths and code rates. System efficiency is compared for hard receiver quantization and for 4- and 8-level soft quantization. The effects on performance of varying certain parameters relevant to decoder complexity and cost are examined. The quantitative performance degradation due to imperfect carrier phase coherence is evaluated and compared to that of an uncoded system. As an example of decoder performance versus complexity, a recently implemented 2-Mbit/sec, constraint-length-7 Viterbi decoder is discussed. Finally, a comparison is made between Viterbi and sequential decoding in terms of their suitability to various system requirements.
Robust pattern decoding in shape-coded structured light
NASA Astrophysics Data System (ADS)
Tang, Suming; Zhang, Xu; Song, Zhan; Song, Lifang; Zeng, Hai
2017-09-01
Decoding is a challenging and complex problem in coded structured light systems. In this paper, a robust pattern decoding method is proposed for shape-coded structured light, in which the pattern is designed as a grid with embedded geometrical shapes. Our decoding method makes advancements at three steps. First, a multi-template feature detection algorithm is introduced to detect the feature points at the intersections of pairs of orthogonal grid lines. Second, pattern element identification is modelled as a supervised classification problem, and a deep neural network is applied for the accurate classification of pattern elements. Beforehand, a training dataset is established containing a large number of pattern elements with various blurrings and distortions. Third, an error correction mechanism based on epipolar, coplanarity, and topological constraints is presented to reduce false matches. In the experiments, several complex objects, including a human hand, are chosen to test the accuracy and robustness of the proposed method. The experimental results show that our decoding method not only achieves high decoding accuracy but also exhibits strong robustness to surface color and complex textures.
On decoding of multi-level MPSK modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Gupta, Alok Kumar
1990-01-01
The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short-length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder by reducing the branch and path metrics, using a non-uniform floating-point to integer mapping scheme, is proposed and discussed. The simulation results of the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that soft-decision MSD, though suboptimum, reduces the decoding complexity drastically. Hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard-decision multistage decoding.
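As a rough illustration of the branch-metric mapping idea, the sketch below computes squared-Euclidean branch metrics for an 8-PSK symbol and then quantizes them to small integers with a non-uniform threshold table. The thresholds and integer levels here are invented for the example; they are not the paper's mapping scheme.

```python
import math

# Squared-Euclidean branch metrics for an 8-PSK symbol, followed by a
# hypothetical non-uniform floating-point-to-integer mapping, sketching the
# metric-compression idea from the abstract (thresholds are illustrative).

PSK8 = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
        for k in range(8)]

def branch_metrics(rx):
    """Squared Euclidean distance from the received point rx to each symbol."""
    return [(rx[0] - x) ** 2 + (rx[1] - y) ** 2 for (x, y) in PSK8]

def to_integer_metrics(metrics, levels=(0, 1, 2, 4, 8)):
    """Non-uniform quantization: normalize by the minimum metric, then map
    ranges of increasing width onto a few small integer levels."""
    m0 = min(metrics)
    ints = []
    for m in metrics:
        d = m - m0
        if d < 0.2:
            q = levels[0]
        elif d < 0.8:
            q = levels[1]
        elif d < 2.0:
            q = levels[2]
        elif d < 3.5:
            q = levels[3]
        else:
            q = levels[4]
        ints.append(q)
    return ints
```

Small integer metrics shrink the adder and comparator widths in the add-compare-select units, which is the hardware saving the abstract is after.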
A low-complexity Reed-Solomon decoder using new key equation solver
NASA Astrophysics Data System (ADS)
Xie, Jun; Yuan, Songxin; Tu, Xiaodong; Zhang, Chongfu
2006-09-01
This paper presents a low-complexity parallel Reed-Solomon (RS) (255,239) decoder architecture using a novel pipelined, variable-stage recursive Modified Euclidean (ME) algorithm for optical communication. A pipelined four-parallel syndrome generator is proposed. Time-multiplexing and resource-sharing schemes are used in the novel recursive ME algorithm to reduce the logic gate count. The new key equation solver can be shared by two decoder macros. A new Chien search cell which does not need initialization is also proposed. The proposed decoder can be used in devices operating at 2.5 Gb/s data rates. The decoder is implemented in an Altera Stratix II device. Resource utilization is reduced by about 40% compared to the conventional method.
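A plain serial reference for the syndrome step of RS(255,239) decoding is sketched below, assuming the commonly used GF(2^8) primitive polynomial 0x11D; the paper's contribution is a four-parallel pipelined hardware version of this same computation, which this sketch does not attempt to reproduce.

```python
# Textbook syndrome computation for RS(255,239) over GF(2^8), assuming the
# common primitive polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D). The 16
# syndromes S_j = r(alpha^j), j = 1..16, are all zero iff the received word
# is a codeword. Serial reference only, not the paper's parallel generator.

PRIM = 0x11D
EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= PRIM
for i in range(255, 512):       # duplicate table to avoid a modulo in gf_mul
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    """Multiplication in GF(2^8) via log/antilog tables."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def syndromes(received, n_syn=16):
    """S_j = r(alpha^j) evaluated by Horner's rule; received[0] is the
    coefficient of the highest power of x."""
    out = []
    for j in range(1, n_syn + 1):
        s = 0
        for byte in received:
            s = gf_mul(s, EXP[j]) ^ byte
        out.append(s)
    return out
```

In hardware, the four-parallel variant would fold four Horner steps into one clock cycle; the arithmetic per syndrome is otherwise identical.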
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1998-01-01
A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement maximum likelihood decoding (MLD) of a code with reduced decoding complexity. The best known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. In contrast, the trellis structure of linear block codes received little research attention for many years. There are two major reasons for this inactive period of research. First, most coding theorists at that time believed that block codes did not have a simple trellis structure like convolutional codes, and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible except for very short block codes. Second, since almost all linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters.
Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well-known methods for constructing long powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises, which include Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination and tail-biting. Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well-known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computational complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder. It then presents a new decoding algorithm for convolutional codes, named the Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword. This decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder. The decoding algorithms presented are the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA). Finally, the minimization of bit error probability in trellis-based MLD is discussed.
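As a small concrete instance of the block-code trellises this monograph develops, the sketch below decodes the (7,4) Hamming code by Viterbi search over its syndrome (Wolf) trellis: states are partial syndromes, and exactly the paths ending in the zero syndrome are codewords. The parity-check matrix and function names are our own illustrative choices, not taken from the text.

```python
# Viterbi decoding of the (7,4) Hamming code on its syndrome (Wolf) trellis.
# State after i sections = H[:, :i] applied to the partial word; surviving
# paths that reach syndrome 0 after 7 sections are the codewords.

H = [  # parity-check matrix; column i is the binary expansion of i+1
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
N = 7

def col(i):
    """Column i of H packed into an integer state label."""
    return sum(H[r][i] << r for r in range(len(H)))

def trellis_decode(rx):
    """Hard-decision ML decoding: the codeword nearest to rx in Hamming
    distance, found by Viterbi search over the 8-state syndrome trellis."""
    INF = float("inf")
    metric = {0: 0}
    path = {0: []}
    for i in range(N):
        nm, npaths = {}, {}
        for s, m in metric.items():
            for b in (0, 1):
                ns = s ^ (col(i) if b else 0)   # update partial syndrome
                cost = m + (b != rx[i])         # Hamming branch metric
                if cost < nm.get(ns, INF):
                    nm[ns] = cost
                    npaths[ns] = path[s] + [b]
        metric, path = nm, npaths
    return path[0]  # zero final syndrome => a valid codeword
```

The trellis has at most 2^(n-k) = 8 states, which is what makes Viterbi MLD of short block codes tractable despite the early pessimism the preface describes.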
Real-time minimal-bit-error probability decoding of convolutional codes
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1974-01-01
A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.
Real-time minimal bit error probability decoding of convolutional codes
NASA Technical Reports Server (NTRS)
Lee, L. N.
1973-01-01
A recursive procedure is derived for decoding of rate R=1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e. fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as in the inner coding system for concatenated coding.
On the reduced-complexity of LDPC decoders for ultra-high-speed optical transmission.
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2010-10-25
We propose two reduced-complexity (RC) LDPC decoders, which can be used in combination with large-girth LDPC codes to enable ultra-high-speed serial optical transmission. We show that the optimally attenuated RC min-sum algorithm performs only 0.46 dB (at a BER of 10^-9) worse than the conventional sum-product algorithm, while having lower storage memory requirements and much lower latency. We further study the use of RC LDPC decoding algorithms in multilevel coded modulation with coherent detection and show that with RC decoding algorithms we can achieve a net coding gain larger than 11 dB at BERs below 10^-9.
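The check-node update that distinguishes attenuated min-sum from sum-product can be sketched as follows. The attenuation factor 0.8 is a placeholder for illustration; the paper optimizes this value.

```python
# Check-node update for attenuated ("normalized") min-sum LDPC decoding:
# each outgoing message takes the sign product and the minimum magnitude of
# the OTHER incoming LLRs, scaled by an attenuation factor alpha. This
# replaces the hyperbolic-tangent computation of the sum-product algorithm.

def min_sum_check_update(llrs_in, alpha=0.8):
    """Outgoing check-to-variable messages from incoming variable-to-check
    LLRs at one check node (alpha = 0.8 is an illustrative choice)."""
    out = []
    for i in range(len(llrs_in)):
        others = llrs_in[:i] + llrs_in[i + 1:]
        sign = 1
        for v in others:
            if v < 0:
                sign = -sign
        mag = min(abs(v) for v in others)
        out.append(alpha * sign * mag)
    return out
```

Only comparisons and one multiply survive per message, which is where the storage and latency savings over sum-product come from.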
ERIC Educational Resources Information Center
Squires, Katie Ellen
2013-01-01
This study investigated the differential contribution of auditory-verbal and visuospatial working memory (WM) on decoding skills in second- and fifth-grade children identified with poor decoding. Thirty-two second-grade students and 22 fifth-grade students completed measures that assessed simple and complex auditory-verbal and visuospatial memory,…
Contini, Erika W; Wardle, Susan G; Carlson, Thomas A
2017-10-01
Visual object recognition is a complex, dynamic process. Multivariate pattern analysis methods, such as decoding, have begun to reveal how the brain processes complex visual information. Recently, temporal decoding methods for EEG and MEG have offered the potential to evaluate the temporal dynamics of object recognition. Here we review the contribution of M/EEG time-series decoding methods to understanding visual object recognition in the human brain. Consistent with the current understanding of the visual processing hierarchy, low-level visual features dominate decodable object representations early in the time-course, with more abstract representations related to object category emerging later. A key finding is that the time-course of object processing is highly dynamic and rapidly evolving, with limited temporal generalisation of decodable information. Several studies have examined the emergence of object category structure, and we consider to what degree category decoding can be explained by sensitivity to low-level visual features. Finally, we evaluate recent work attempting to link human behaviour to the neural time-course of object processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
Complementary Reliability-Based Decodings of Binary Linear Block Codes
NASA Technical Reports Server (NTRS)
Fossorier, Marc P. C.; Lin, Shu
1997-01-01
This correspondence presents a hybrid reliability-based decoding algorithm which combines the reprocessing method based on the most reliable basis and a generalized Chase-type algebraic decoder based on the least reliable positions. It is shown that reprocessing with a simple additional algebraic decoding effort achieves significant coding gain. For long codes, the order of reprocessing required to achieve asymptotic optimum error performance is reduced by approximately 1/3. This significantly reduces the computational complexity, especially for long codes. Also, a more efficient criterion for stopping the decoding process is derived based on the knowledge of the algebraic decoding solution.
NASA Technical Reports Server (NTRS)
Lin, Shu; Rhee, Dojun; Rajpal, Sandeep
1993-01-01
This report presents a low-complexity and high-performance concatenated coding scheme for high-speed satellite communications. In this proposed scheme, the NASA Standard Reed-Solomon (RS) code over GF(2(exp 8)) is used as the outer code and the second-order Reed-Muller (RM) code of Hamming distance 8 is used as the inner code. The RM inner code has a very simple trellis structure and is decoded with the soft-decision Viterbi decoding algorithm. It is shown that the proposed concatenated coding scheme achieves an error performance comparable to that of the NASA TDRS concatenated coding scheme, in which the NASA Standard rate-1/2 convolutional code of constraint length 7 and d sub free = 10 is used as the inner code. However, the proposed RM inner code has much smaller decoding complexity, less decoding delay, and much higher decoding speed. Consequently, the proposed concatenated coding scheme is suitable for reliable high-speed satellite communications, and it may be considered as an alternative coding scheme for the NASA TDRS system.
Bounds on Block Error Probability for Multilevel Concatenated Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Moorthy, Hari T.; Stojanovic, Diana
1996-01-01
Maximum likelihood decoding of long block codes is not feasible due to its large complexity. Some classes of codes are shown to be decomposable into multilevel concatenated codes (MLCC). For these codes, multistage decoding provides a good trade-off between performance and complexity. In this paper, we derive an upper bound on the probability of block error for MLCC. We use this bound to evaluate the difference in performance for different decompositions of some codes. The examples given show that a significant reduction in complexity can be achieved by increasing the number of decoding stages. The resulting performance degradation varies for different decompositions. A guideline is given for finding good m-level decompositions.
Multi-stage decoding of multi-level modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Costello, Daniel J., Jr.
1991-01-01
Various types of multi-stage decoding for multi-level modulation codes are investigated. It is shown that if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, it is shown that the difference in performance between suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and single-stage optimum soft-decision decoding of the code is very small: only a fraction of a dB loss in signal-to-noise ratio at a bit error rate (BER) of 10(exp -6).
Extracting duration information in a picture category decoding task using hidden Markov Models
NASA Astrophysics Data System (ADS)
Pfeiffer, Tim; Heinze, Nicolai; Frysch, Robert; Deouell, Leon Y.; Schoenfeld, Mircea A.; Knight, Robert T.; Rose, Georg
2016-04-01
Objective. Adapting classifiers for the purpose of brain signal decoding is a major challenge in brain-computer-interface (BCI) research. In a previous study we showed in principle that hidden Markov models (HMM) are a suitable alternative to the well-studied static classifiers. However, since we investigated a rather straightforward task, advantages from modeling of the signal could not be assessed. Approach. Here, we investigate a more complex data set in order to find out to what extent HMMs, as a dynamic classifier, can provide useful additional information. We show for a visual decoding problem that besides category information, HMMs can simultaneously decode picture duration without an additional training required. This decoding is based on a strong correlation that we found between picture duration and the behavior of the Viterbi paths. Main results. Decoding accuracies of up to 80% could be obtained for category and duration decoding with a single classifier trained on category information only. Significance. The extraction of multiple types of information using a single classifier enables the processing of more complex problems, while preserving good training results even on small databases. Therefore, it provides a convenient framework for online real-life BCI utilizations.
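The link between Viterbi paths and stimulus duration reported above can be illustrated with a toy discrete HMM: the dwell time of the decoded path in a "stimulus on" state tracks how long the high-evidence observations last. All model parameters and state names below are invented for the illustration.

```python
# Toy 2-state HMM ("off"/"on") with discrete emissions, decoded by the
# Viterbi algorithm. The dwell time of the Viterbi path in the "on" state
# serves as a duration estimate, mirroring the abstract's observation that
# Viterbi-path behavior carries duration information. Parameters are made up.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence for a discrete-emission HMM."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            best_prev = max(states, key=lambda p: V[t - 1][p] * trans_p[p][s])
            V[t][s] = V[t - 1][best_prev] * trans_p[best_prev][s] * emit_p[s][obs[t]]
            back[t][s] = best_prev
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

states = ("off", "on")
start_p = {"off": 0.9, "on": 0.1}
trans_p = {"off": {"off": 0.8, "on": 0.2}, "on": {"off": 0.2, "on": 0.8}}
emit_p = {"off": {"lo": 0.9, "hi": 0.1}, "on": {"lo": 0.2, "hi": 0.8}}

obs = ["lo", "lo", "hi", "hi", "hi", "hi", "lo", "lo"]
path = viterbi(obs, states, start_p, trans_p, emit_p)
duration = path.count("on")  # dwell time tracks the stimulus duration
```

No extra training is needed to read off the duration: it is a by-product of the same decoded path used for classification, which is the point the abstract makes.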
NP-hardness of decoding quantum error-correction codes
NASA Astrophysics Data System (ADS)
Hsieh, Min-Hsiu; Le Gall, François
2011-05-01
Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as for their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy to simplify decoding, since two different errors might not, and need not, be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum code is degenerate or nondegenerate. This finding implies that no efficient decoding algorithm exists for the general quantum decoding problem and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.
Word and Person Effects on Decoding Accuracy: A New Look at an Old Question
Gilbert, Jennifer K.; Compton, Donald L.; Kearns, Devin M.
2011-01-01
The purpose of this study was to extend the literature on decoding by bringing together two lines of research, namely person and word factors that affect decoding, using a crossed random-effects model. The sample was comprised of 196 English-speaking grade 1 students. A researcher-developed pseudoword list was used as the primary outcome measure. Because grapheme-phoneme correspondence (GPC) knowledge was treated as person and word specific, we are able to conclude that it is neither necessary nor sufficient for a student to know all GPCs in a word before accurately decoding the word. And controlling for word-specific GPC knowledge, students with lower phonemic awareness and slower rapid naming skill have lower predicted probabilities of correct decoding than counterparts with superior skills. By assessing a person-by-word interaction, we found that students with lower phonemic awareness have more difficulty applying knowledge of complex vowel graphemes compared to complex consonant graphemes when decoding unfamiliar words. Implications of the methodology and results are discussed in light of future research. PMID:21743750
A high speed sequential decoder
NASA Technical Reports Server (NTRS)
Lum, H., Jr.
1972-01-01
The performance and theory of operation of the High Speed Hard Decision Sequential Decoder are delineated. The decoder is a forward error correction system which is capable of accepting data from binary-phase-shift-keyed and quadriphase-shift-keyed modems at input data rates up to 30 megabits per second. Test results show that the decoder is capable of maintaining a composite error rate of 0.00001 at an input E sub b/N sub o of 5.6 dB. This performance has been obtained with minimum circuit complexity.
"ON ALGEBRAIC DECODING OF Q-ARY REED-MULLER AND PRODUCT REED-SOLOMON CODES"
DOE Office of Scientific and Technical Information (OSTI.GOV)
SANTHI, NANDAKISHORE
We consider a list decoding algorithm recently proposed by Pellikaan-Wu for q-ary Reed-Muller codes RM_q(ℓ, m, n) of length n ≤ q^m when ℓ ≤ q. A simple and easily accessible correctness proof is given which shows that this algorithm achieves a relative error-correction radius of τ ≤ 1 - √(ℓq^(m-1)/n). This is an improvement over the proof using the one-point algebraic-geometric decoding method given in. The described algorithm can be adapted to decode product Reed-Solomon codes. We then propose a new low-complexity recursive algebraic decoding algorithm for product Reed-Solomon codes and Reed-Muller codes. This algorithm achieves a relative error-correction radius of τ ≤ ∏_{i=1}^{m} (1 - √(k_i/q)). This algorithm is then proved to outperform the Pellikaan-Wu algorithm in both complexity and error-correction radius over a wide range of code rates.
Müller-Putz, G R; Schwarz, A; Pereira, J; Ofner, P
2016-01-01
In this chapter, we give an overview of the Graz-BCI research, from classic motor imagery detection to the decoding of complex movement intentions. We start by describing the classic motor imagery approach, its application in tetraplegic end users, and the significant improvements achieved using coadaptive brain-computer interfaces (BCIs). These strategies have the drawback of not mirroring the way one plans a movement. To achieve more natural control, and to reduce the training time, the movements decoded by the BCI need to be closely related to the user's intention. Within this natural control, we focus on the kinematic level, where movement direction and hand position or velocity can be decoded from noninvasive recordings. First, we review movement execution decoding studies, where we describe the decoding algorithms, their performance, and the associated features. Second, we describe the major findings in movement imagination decoding, where we emphasize the importance of estimating the sources of the discriminative features. Third, we introduce movement target decoding, which could allow the determination of the target without knowing the exact movement-by-movement details. Aside from the kinematic level, we also address the goal level, which contains relevant information on the upcoming action. Focusing on hand-object interaction and action context dependency, we discuss the possible impact of some recent neurophysiological findings on the future of BCI control. Ideally, goal and kinematic decoding together would allow an appropriate matching of the BCI to the end users' needs, overcoming the limitations of the classic motor imagery approach. © 2016 Elsevier B.V. All rights reserved.
Decoding DNA labels by melting curve analysis using real-time PCR.
Balog, József A; Fehér, Liliána Z; Puskás, László G
2017-12-01
Synthetic DNA has been used as an authentication code for a diverse number of applications. However, existing decoding approaches are based on either DNA sequencing or the determination of DNA length variations. Here, we present a simple alternative protocol for labeling different objects using a small number of short DNA sequences that differ in their melting points. Code amplification and decoding can be done in two steps using quantitative PCR (qPCR). To obtain a DNA barcode with high complexity, we defined 8 template groups, each having 4 different DNA templates, yielding 15^8 (>2.5 billion) combinations of different individual melting temperature (Tm) values and corresponding ID codes. The reproducibility and specificity of the decoding was confirmed by using the most complex template mixture, which had 32 different products in 8 groups with different Tm values. The industrial applicability of our protocol was also demonstrated by labeling a drone with an oil-based paint containing a predefined DNA code, which was then successfully decoded. The method presented here consists of a simple code system based on a small number of synthetic DNA sequences and a cost-effective, rapid decoding protocol using a few qPCR reactions, enabling a wide range of authentication applications.
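The code-space arithmetic behind the ">2.5 billion" figure can be checked directly: with 4 templates per group, any nonempty subset yields a distinct melting profile, so each of the 8 groups contributes 2^4 - 1 = 15 states and the total is 15^8. The nonempty-subset reading is our assumption about how the 15 per-group states arise.

```python
# Code-space size for the melting-curve labeling scheme described above:
# 8 groups x 4 templates per group; each group's state is a nonempty subset
# of its templates (our assumed reading), giving 15 states per group.

from itertools import combinations

TEMPLATES_PER_GROUP = 4
GROUPS = 8

# Count nonempty subsets of one group's templates explicitly.
per_group = sum(1 for k in range(1, TEMPLATES_PER_GROUP + 1)
                for _ in combinations(range(TEMPLATES_PER_GROUP), k))

total_codes = per_group ** GROUPS  # independent groups multiply
```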
The ribosome as an optimal decoder: a lesson in molecular recognition.
Savir, Yonatan; Tlusty, Tsvi
2013-04-11
The ribosome is a complex molecular machine that, in order to synthesize proteins, has to decode mRNAs by pairing their codons with matching tRNAs. Decoding is a major determinant of fitness and requires accurate and fast selection of correct tRNAs among many similar competitors. However, it is unclear whether the modern ribosome, and in particular its large conformational changes during decoding, are the outcome of adaptation to its task as a decoder or the result of other constraints. Here, we derive the energy landscape that provides optimal discrimination between competing substrates and thereby optimal tRNA decoding. We show that the measured landscape of the prokaryotic ribosome is sculpted in this way. This model suggests that conformational changes of the ribosome and tRNA during decoding are means to obtain an optimal decoder. Our analysis puts forward a generic mechanism that may be utilized broadly by molecular recognition systems. Copyright © 2013 Elsevier Inc. All rights reserved.
Spatial Lattice Modulation for MIMO Systems
NASA Astrophysics Data System (ADS)
Choi, Jiwook; Nam, Yunseo; Lee, Namyoon
2018-06-01
This paper proposes spatial lattice modulation (SLM), a spatial modulation method for multiple-input multiple-output (MIMO) systems. The key idea of SLM is to jointly exploit spatial, in-phase, and quadrature dimensions to modulate information bits into a multi-dimensional signal set that consists of lattice points. One major finding is that SLM achieves a higher spectral efficiency than the existing spatial modulation and spatial multiplexing methods for the MIMO channel under the constraint of M-ary pulse-amplitude-modulation (PAM) input signaling per dimension. In particular, it is shown that when the SLM signal set is constructed by using dense lattices, a significant signal-to-noise-ratio (SNR) gain, i.e., a nominal coding gain, is attainable compared to the existing methods. In addition, closed-form expressions for both the average mutual information and the average symbol-vector-error-probability (ASVEP) of generic SLM are derived under Rayleigh-fading environments. To reduce detection complexity, a low-complexity detection method for SLM, referred to as lattice sphere decoding, is developed by exploiting lattice theory. Simulation results verify the accuracy of the conducted analysis and demonstrate that the proposed SLM techniques achieve higher average mutual information and lower ASVEP than existing methods.
Method and system for efficient video compression with low-complexity encoder
NASA Technical Reports Server (NTRS)
Chen, Jun (Inventor); He, Dake (Inventor); Sheinin, Vadim (Inventor); Jagmohan, Ashish (Inventor); Lu, Ligang (Inventor)
2012-01-01
Disclosed are a method and system for video compression, wherein the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a video decoder, wherein the method for encoding includes the steps of: converting a source frame into a space-frequency representation; estimating conditional statistics of at least one vector of space-frequency coefficients; estimating encoding rates based on the said conditional statistics; and applying Slepian-Wolf codes with the said computed encoding rates. The preferred method for decoding includes the steps of: generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector; and performing Slepian-Wolf decoding of at least one source frequency vector based on the generated side information, the Slepian-Wolf code bits, and the encoder statistics.
Great circle solution to polarization-based quantum communication (QC) in optical fiber
Nordholt, Jane Elizabeth; Peterson, Charles Glen; Newell, Raymond Thorson; Hughes, Richard John
2016-03-15
Birefringence in optical fibers is compensated by applying polarization modulation at a receiver. Polarization modulation is applied so that a transmitted optical signal has states of polarization (SOPs) that are equally spaced on the Poincare sphere. Fiber birefringence encountered in propagation between a transmitter and a receiver rotates the great circle on the Poincare sphere that represents the polarization bases used for modulation. By adjusting received polarizations, polarization components of the received optical signal can be directed to corresponding detectors for decoding, regardless of the magnitude and orientation of the fiber birefringence. A transmitter can be configured to transmit in conjugate polarization bases whose SOPs can be represented as equidistant points on a great circle so that the received SOPs are mapped to equidistant points on a great circle and routed to corresponding detectors.
Shen, Laifa; Yu, Le; Yu, Xin-Yao; Zhang, Xiaogang; Lou, Xiong Wen David
2015-02-02
Despite the significant advancement in preparing metal oxide hollow structures, most approaches rely on template-based multistep procedures for tailoring the interior structure. In this work, we develop a new generally applicable strategy toward the synthesis of mixed-metal-oxide complex hollow spheres. Starting with metal glycerate solid spheres, we show that subsequent thermal annealing in air leads to the formation of complex hollow spheres of the resulting metal oxide. We demonstrate the concept by synthesizing highly uniform NiCo2O4 hollow spheres with a complex interior structure. With the small primary building nanoparticles, high structural integrity, complex interior architectures, and enlarged surface area, these unique NiCo2O4 hollow spheres exhibit superior electrochemical performances as advanced electrode materials for both lithium-ion batteries and supercapacitors. This approach can be an efficient self-templated strategy for the preparation of mixed-metal-oxide hollow spheres with complex interior structures and functionalities. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A high data rate universal lattice decoder on FPGA
NASA Astrophysics Data System (ADS)
Ma, Jing; Huang, Xinming; Kura, Swapna
2005-06-01
This paper presents the architecture design of a high data rate universal lattice decoder for MIMO channels on an FPGA platform. A Pohst-strategy-based lattice decoding algorithm is modified in this paper to reduce the complexity of the closest lattice point search. The data dependency of the improved algorithm is examined, and a parallel, pipelined architecture is developed with the iterative decoding function on the FPGA and the division-intensive channel matrix preprocessing on a DSP. Simulation results demonstrate that the improved lattice decoding algorithm provides a better bit error rate and fewer iterations compared with the original algorithm. The system prototype of the decoder shows that it supports data rates up to 7 Mbit/s on a Virtex2-1000 FPGA, which is about 8 times faster than the original algorithm on the FPGA platform and two orders of magnitude better than its implementation on a DSP platform.
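What a lattice decoder computes can be pinned down with a brute-force reference: given a basis B and a received vector y, find the integer coordinate vector z minimizing ||y - Bz||^2. Pohst-style sphere enumeration prunes this search with a shrinking radius; the bounded exhaustive scan below is only a correctness reference for a small 2-D case, with all names our own.

```python
# Brute-force closest-lattice-point search for a 2-D lattice: the quantity a
# sphere/lattice decoder computes, minus all the pruning. Columns of B are
# the basis vectors; z ranges over a bounded integer box.

def closest_lattice_point(B, y, radius=3):
    """Return (z, d) where z in [-radius, radius]^2 minimizes
    d = ||y - B z||^2 for the 2x2 basis B (columns are basis vectors)."""
    best, best_d = None, float("inf")
    for z0 in range(-radius, radius + 1):
        for z1 in range(-radius, radius + 1):
            p0 = B[0][0] * z0 + B[0][1] * z1
            p1 = B[1][0] * z0 + B[1][1] * z1
            d = (y[0] - p0) ** 2 + (y[1] - p1) ** 2
            if d < best_d:
                best_d, best = d, (z0, z1)
    return best, best_d
```

A real sphere decoder visits the same candidate set but abandons any partial coordinate assignment whose accumulated distance already exceeds the best radius found, which is the complexity reduction the paper targets.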
Real-time SHVC software decoding with multi-threaded parallel processing
NASA Astrophysics Data System (ADS)
Gudumasu, Srinivas; He, Yuwen; Ye, Yan; He, Yong; Ryu, Eun-Seok; Dong, Jie; Xiu, Xiaoyu
2014-09-01
This paper proposes a parallel decoding framework for scalable HEVC (SHVC). Various optimization technologies are implemented on the basis of the SHVC reference software SHM-2.0 to achieve real-time decoding speed for the two-layer spatial scalability configuration. SHVC decoder complexity is analyzed with profiling information. The decoding process at each layer and the up-sampling process are designed in parallel and scheduled by a high-level application task manager. Within each layer, multi-threaded decoding is applied to accelerate the layer decoding speed. Entropy decoding, reconstruction, and in-loop processing are pipeline-designed with multiple threads based on groups of coding tree units (CTUs). A group of CTUs is treated as a processing unit in each pipeline stage to achieve a better trade-off between parallelism and synchronization. The motion compensation, inverse quantization, and inverse transform modules are further optimized with SSE4 SIMD instructions. Simulations on a desktop with an Intel Core i7-2600 processor running at 3.4 GHz show that the parallel SHVC software decoder is able to decode 1080p spatial 2x at up to 60 fps (frames per second) and 1080p spatial 1.5x at up to 50 fps for bitstreams generated with the SHVC common test conditions in the JCT-VC standardization group. The decoding performance at various bitrates with different optimization technologies and different numbers of threads is compared in terms of decoding speed and resource usage, including processor and memory.
Klein, Mike E.; Zatorre, Robert J.
2015-01-01
In categorical perception (CP), continuous physical signals are mapped to discrete perceptual bins: mental categories not found in the physical world. CP has been demonstrated across multiple sensory modalities and, in audition, for certain over-learned speech and musical sounds. The neural basis of auditory CP, however, remains ambiguous, including its robustness in nonspeech processes and the relative roles of left/right hemispheres; primary/nonprimary cortices; and ventral/dorsal perceptual processing streams. Here, highly trained musicians listened to 2-tone musical intervals, which they perceive categorically while undergoing functional magnetic resonance imaging. Multivariate pattern analyses were performed after grouping sounds by interval quality (determined by frequency ratio between tones) or pitch height (perceived noncategorically, frequency ratios remain constant). Distributed activity patterns in spheres of voxels were used to determine sound sample identities. For intervals, significant decoding accuracy was observed in the right superior temporal and left intraparietal sulci, with smaller peaks observed homologously in contralateral hemispheres. For pitch height, no significant decoding accuracy was observed, consistent with the non-CP of this dimension. These results suggest that similar mechanisms are operative for nonspeech categories as for speech; espouse roles for 2 segregated processing streams; and support hierarchical processing models for CP. PMID:24488957
A reduced complexity highly power/bandwidth efficient coded FQPSK system with iterative decoding
NASA Technical Reports Server (NTRS)
Simon, M. K.; Divsalar, D.
2001-01-01
Based on a representation of FQPSK as a trellis-coded modulation, this paper investigates the potential improvement in power efficiency obtained from the application of simple outer codes to form a concatenated coding arrangement with iterative decoding.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McLaughlin, A.C.
1982-01-01
The paramagnetic divalent cation cobalt has large and well-understood effects on NMR signals from ligands bound in the first coordination sphere, i.e., inner-sphere ligands, and the authors have used these effects to identify divalent cation binding sites at the surface of phosphatidylserine membranes. ³¹P NMR results show that 13% of the bound cobalt ions are involved in inner-sphere complexes with the phosphodiester group, while ¹³C NMR results show that 54% of the bound cobalt ions are involved in unidentate inner sphere complexes with the carboxyl group. No evidence is found for cobalt binding to the carbonyl groups, but proton release studies suggest that 32% of the bound cobalt ions are involved in chelate complexes that contain both the carboxyl and the amine groups. All of the bound cobalt ions can thus be accounted for in terms of inner sphere complexes with the phosphodiester group or the carboxyl group. They suggest that the unidentate inner-sphere complex between cobalt and the carboxyl group of phosphatidylserine and the inner-sphere complex between cobalt and the phosphodiester group of phosphatidylserine provide reasonable models for complexes between alkaline earth cations and phosphatidylserine membranes.
An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).
Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling
2018-04-17
Aimed at the low energy consumption requirements of the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have low computational complexity, so that they consume only a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for the Green IoT.
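The core idea of the scheme (linear encoding and linear decoding, each a single matrix-vector product per block) can be sketched as follows. This is not the paper's method: the adaptive per-block measurement rate is replaced by a fixed rate, and the MMSE-learned projection matrix is replaced by the pseudo-inverse of the measurement matrix as a stand-in.

```python
import numpy as np

# Block-based compressive sensing with a purely linear decoder (sketch).
rng = np.random.default_rng(0)
B = 8                       # block size (B x B pixels)
m = 16                      # measurements per block (rate m / B**2 = 0.25)

phi = rng.standard_normal((m, B * B)) / np.sqrt(m)   # measurement matrix
decoder = np.linalg.pinv(phi)    # stand-in for the MMSE-learned projection

block = rng.random((B, B))             # a toy image block
y = phi @ block.ravel()                # encoder: one matrix-vector product
x_hat = (decoder @ y).reshape(B, B)    # decoder: one matrix-vector product
print(y.shape, x_hat.shape)            # (16,) (8, 8)
```

Because both ends are single matrix-vector products, the per-block cost is O(m·B²), which is what makes the scheme attractive for low-energy devices.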
Nonlinear decoding of a complex movie from the mammalian retina
Deny, Stéphane; Martius, Georg
2018-01-01
The retina is a paradigmatic system for studying sensory encoding: the transformation of light into spiking activity of ganglion cells. The inverse problem, where the stimulus is reconstructed from spikes, has received less attention, especially for complex stimuli that should be reconstructed “pixel-by-pixel”. We recorded around a hundred neurons from a dense patch in a rat retina and decoded movies of multiple small randomly-moving discs. We constructed nonlinear (kernelized and neural network) decoders that improved significantly over linear results. An important contribution to this was the ability of nonlinear decoders to reliably separate between neural responses driven by locally fluctuating light signals, and responses at locally constant light driven by spontaneous-like activity. This improvement crucially depended on the precise, non-Poisson temporal structure of individual spike trains, which originated in the spike-history dependence of neural responses. We propose a general principle by which downstream circuitry could discriminate between spontaneous and stimulus-driven activity based solely on higher-order statistical structure in the incoming spike trains. PMID:29746463
Multi-level trellis coded modulation and multi-stage decoding
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Wu, Jiantian; Lin, Shu
1990-01-01
Several constructions for multi-level trellis codes are presented and many codes with better performance than previously known codes are found. These codes provide a flexible trade-off between coding gain, decoding complexity, and decoding delay. New multi-level trellis coded modulation schemes using generalized set partitioning methods are developed for Quadrature Amplitude Modulation (QAM) and Phase Shift Keying (PSK) signal sets. New rotationally invariant multi-level trellis codes which can be combined with differential encoding to resolve phase ambiguity are presented.
High-Speed Soft-Decision Decoding of Two Reed-Muller Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Uehara, Gregory T.
1996-01-01
In this research, we have proposed the (64, 40, 8) subcode of the third-order Reed-Muller (RM) code to NASA for high-speed satellite communications. This RM subcode can be used either alone or as an inner code of a concatenated coding system with the NASA standard (255, 223, 33) Reed-Solomon (RS) code as the outer code to achieve high performance (or low bit-error rate) with reduced decoding complexity. It can also be used as a component code in a multilevel bandwidth efficient coded modulation system to achieve reliable bandwidth efficient data transmission. This report will summarize the key progress we have made toward achieving our eventual goal of implementing a decoder system based upon this code. In the first phase of study, we investigated the complexities of various sectionalized trellis diagrams for the proposed (64, 40, 8) RM subcode. We found a specific 8-trellis diagram for this code which requires the least decoding complexity with a high possibility of achieving a decoding speed of 600 M bits per second (Mbps). The combination of a large number of states and a high data rate will be made possible due to the utilization of a high degree of parallelism throughout the architecture. This trellis diagram will be presented and briefly described. In the second phase of study, which was carried out through the past year, we investigated circuit architectures to determine the feasibility of VLSI implementation of a high-speed Viterbi decoder based on this 8-section trellis diagram. We began to examine specific design and implementation approaches to implement a fully custom integrated circuit (IC) which will be a key building block for a decoder system implementation. The key results will be presented in this report. This report will be divided into three primary sections. 
First, we will briefly describe the system block diagram in which the proposed decoder is assumed to be operating and present some of the key architectural approaches being used to implement the system at high speed. Second, we will describe details of the 8-trellis diagram we found to best meet the trade-offs between chip and overall system complexity. The chosen approach implements the trellis for the (64, 40, 8) RM subcode with 32 independent sub-trellises. And third, we will describe results of our feasibility study on the implementation of such an IC chip in CMOS technology to implement one of these sub-trellises.
Testing interconnected VLSI circuits in the Big Viterbi Decoder
NASA Technical Reports Server (NTRS)
Onyszchuk, I. M.
1991-01-01
The Big Viterbi Decoder (BVD) is a powerful error-correcting hardware device for the Deep Space Network (DSN), in support of the Galileo and Comet Rendezvous Asteroid Flyby (CRAF)/Cassini Missions. Recently, a prototype was completed and run successfully at 400,000 or more decoded bits per second. This prototype is a complex digital system whose core arithmetic unit consists of 256 identical very large scale integration (VLSI) gate-array chips, 16 on each of 16 identical boards which are connected through a 28-layer, printed-circuit backplane using 4416 wires. Special techniques were developed for debugging, testing, and locating faults inside individual chips, on boards, and within the entire decoder. The methods are based upon hierarchical structure in the decoder, and require that chips or boards be wired themselves as Viterbi decoders. The basic procedure consists of sending a small set of known, very noisy channel symbols through a decoder, and matching observables against values computed by a software simulation. Also, tests were devised for finding open and short-circuited wires which connect VLSI chips on the boards and through the backplane.
Adsorption mechanisms of selenium oxyanions at the aluminum oxide/water interface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peak, Derek
2008-06-09
Sorption processes at the mineral/water interface typically control the mobility and bioaccessibility of many inorganic contaminants such as oxyanions. Selenium is an important micronutrient for human and animal health, but at elevated concentrations selenium toxicity is a concern. The objective of this study was to determine the bonding mechanisms of selenate (SeO₄²⁻) and selenite (SeO₃²⁻) on hydrous aluminum oxide (HAO) over a wide range of reaction pH using extended X-ray absorption fine structure (EXAFS) spectroscopy. Additionally, selenate adsorption on corundum (α-Al₂O₃) was studied to determine if adsorption mechanisms change as the aluminum oxide surface structure changes. The overall findings were that selenite forms a mixture of outer-sphere and inner-sphere bidentate-binuclear (corner-sharing) surface complexes on HAO, selenate forms primarily outer-sphere surface complexes on HAO, and on corundum selenate forms outer-sphere surface complexes at pH 3.5 but inner-sphere monodentate surface complexes at pH 4.5 and above. It is possible that the lack of inner-sphere complex formation at pH 3.5 is caused by changes in the corundum surface at low pH or secondary precipitate formation. The results are consistent with a structure-based reactivity for metal oxides, wherein hydrous metal oxides form outer-sphere complexes with sulfate and selenate, but inner-sphere monodentate surface complexes are formed between sulfate and selenate and α-Me₂O₃.
Decoding Ca2+ signals in plants
NASA Technical Reports Server (NTRS)
Sathyanarayanan, P. V.; Poovaiah, B. W.
2004-01-01
Different input signals create their own characteristic Ca2+ fingerprints. These fingerprints are distinguished by frequency, amplitude, duration, and number of Ca2+ oscillations. Ca(2+)-binding proteins and protein kinases decode these complex Ca2+ fingerprints through conformational coupling and covalent modifications of proteins. This decoding of signals can lead to a physiological response with or without changes in gene expression. In plants, Ca(2+)-dependent protein kinases and Ca2+/calmodulin-dependent protein kinases are involved in decoding Ca2+ signals into phosphorylation signals. This review summarizes the elements of conformational coupling and molecular mechanisms of regulation of the two groups of protein kinases by Ca2+ and Ca2+/calmodulin in plants.
On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, H.M.; Reed, I.S.
A new VLSI design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous paper is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and significant reduction in silicon area, therefore making it possible to build a pipeline Reed-Solomon decoder on a single VLSI chip.
An Optimized Three-Level Design of Decoder Based on Nanoscale Quantum-Dot Cellular Automata
NASA Astrophysics Data System (ADS)
Seyedi, Saeid; Navimipour, Nima Jafari
2018-03-01
Quantum-dot Cellular Automata (QCA) has been considered a potential successor to Complementary Metal-Oxide-Semiconductor (CMOS) technology because of its inherent advantages. Many QCA-based logic circuits with smaller feature size, improved operating frequency, and lower power consumption than CMOS have been offered. This technology operates on the basis of electron interactions inside quantum dots. Given the importance of an optimized decoder in any digital circuit, in this paper we design, implement, and simulate a new 2-to-4 decoder based on QCA with low delay, area, and complexity. The logic functionality of the 2-to-4 decoder is verified using the QCADesigner tool. The results show that the proposed QCA-based decoder performs well in terms of cell count, covered area, and time delay. Due to the lower clock pulse frequency, the proposed 2-to-4 decoder is helpful for building high-performance QCA-based sequential digital circuits.
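For reference, the Boolean function that any 2-to-4 decoder (QCA or CMOS) must realize is simple to state; the sketch below checks the one-hot truth table and is purely illustrative of the logic, not of the QCA cell layout:

```python
# Logic-level behaviour of a 2-to-4 decoder: for inputs (a, b) encoding the
# integer 2*a + b, exactly one of the four output lines goes high.
def decoder_2to4(a, b):
    return [int(not a and not b),   # line 0: a'b'
            int(not a and b),       # line 1: a'b
            int(a and not b),       # line 2: ab'
            int(a and b)]           # line 3: ab

for a in (0, 1):
    for b in (0, 1):
        print(a, b, decoder_2to4(a, b))
```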
Agarwal, Rahul; Chen, Zhe; Kloosterman, Fabian; Wilson, Matthew A; Sarma, Sridevi V
2016-07-01
Pyramidal neurons recorded from the rat hippocampus and entorhinal cortex, such as place and grid cells, have diverse receptive fields, which are either unimodal or multimodal. Spiking activity from these cells encodes information about the spatial position of a freely foraging rat. At fine timescales, a neuron's spike activity also depends significantly on its own spike history. However, due to limitations of current parametric modeling approaches, it remains a challenge to estimate complex, multimodal neuronal receptive fields while incorporating spike history dependence. Furthermore, efforts to decode the rat's trajectory in one- or two-dimensional space from hippocampal ensemble spiking activity have mainly focused on spike history-independent neuronal encoding models. In this letter, we address these two important issues by extending a recently introduced nonparametric neural encoding framework that allows modeling both complex spatial receptive fields and spike history dependencies. Using this extended nonparametric approach, we develop novel algorithms for decoding a rat's trajectory based on recordings of hippocampal place cells and entorhinal grid cells. Results show that both encoding and decoding models derived from our new method performed significantly better than state-of-the-art encoding and decoding models on 6 minutes of test data. In addition, our model's performance remains invariant to the apparent modality of the neuron's receptive field.
Parietal neural prosthetic control of a computer cursor in a graphical-user-interface task
NASA Astrophysics Data System (ADS)
Revechkis, Boris; Aflalo, Tyson NS; Kellis, Spencer; Pouratian, Nader; Andersen, Richard A.
2014-12-01
Objective. To date, the majority of Brain-Machine Interfaces have been used to perform simple tasks with sequences of individual targets in otherwise blank environments. In this study we developed a more practical and clinically relevant task that approximated modern computers and graphical user interfaces (GUIs). This task could be problematic given the known sensitivity of areas typically used for BMIs to visual stimuli, eye movements, decision-making, and attentional control. Consequently, we sought to assess the effect of a complex, GUI-like task on the quality of neural decoding. Approach. A male rhesus macaque monkey was implanted with two 96-channel electrode arrays in area 5d of the superior parietal lobule. The animal was trained to perform a GUI-like ‘Face in a Crowd’ task on a computer screen that required selecting one cued, icon-like, face image from a group of alternatives (the ‘Crowd’) using a neurally controlled cursor. We assessed whether the crowd affected decodes of intended cursor movements by comparing it to a ‘Crowd Off’ condition in which only the matching target appeared without alternatives. We also examined if training a neural decoder with the Crowd On rather than Off had any effect on subsequent decode quality. Main results. Despite the additional demands of working with the Crowd On, the animal was able to robustly perform the task under Brain Control. The presence of the crowd did not itself affect decode quality. Training the decoder with the Crowd On relative to Off had no negative influence on subsequent decoding performance. Additionally, the subject was able to gaze around freely without influencing cursor position. Significance. Our results demonstrate that area 5d recordings can be used for decoding in a complex, GUI-like task with free gaze. Thus, this area is a promising source of signals for neural prosthetics that utilize computing devices with GUI interfaces, e.g. personal computers, mobile devices, and tablet computers.
Universal Decoder for PPM of any Order
NASA Technical Reports Server (NTRS)
Moision, Bruce E.
2010-01-01
A recently developed algorithm for demodulation and decoding of a pulse-position-modulation (PPM) signal is suitable as a basis for designing a single hardware decoding apparatus capable of handling any PPM order. Hence, this algorithm offers advantages of greater flexibility and lower cost, in comparison with prior such algorithms, which necessitate the use of a distinct hardware implementation for each PPM order. In addition, in comparison with the prior algorithms, the present algorithm entails less complexity in decoding at large orders. An unavoidably lengthy presentation of background information, including definitions of terms, is prerequisite to a meaningful summary of this development. As an aid to understanding, the figure illustrates the relevant processes of coding, modulation, propagation, demodulation, and decoding. An M-ary PPM signal has M time slots per symbol period. A pulse (signifying 1) is transmitted during one of the time slots; no pulse (signifying 0) is transmitted during the other time slots. The information intended to be conveyed from the transmitting end to the receiving end of a radio or optical communication channel is a K-bit vector u. This vector is encoded by an (N,K) binary error-correcting code, producing an N-bit vector a. In turn, the vector a is subdivided into blocks of m = log2(M) bits and each such block is mapped to an M-ary PPM symbol. The resultant coding/modulation scheme can be regarded as equivalent to a nonlinear binary code. The binary vector of PPM symbols, x, is transmitted over a Poisson channel, such that there is obtained, at the receiver, a Poisson-distributed photon count characterized by a mean background count nb during no-pulse time slots and a mean signal-plus-background count of ns+nb during a pulse time slot. 
In the receiver, demodulation of the signal is effected in an iterative soft decoding process that involves consideration of relationships among photon counts and conditional likelihoods of m-bit vectors of coded bits. Inasmuch as the likelihoods of all the m-bit vectors of coded bits mapping to the same PPM symbol are correlated, the best performance is obtained when the joint m-bit conditional likelihoods are utilized. Unfortunately, the complexity of decoding, measured in the number of operations per bit, grows exponentially with m, and can thus become prohibitively expensive for large PPM orders. For a system required to handle multiple PPM orders, the cost is even higher because it is necessary to have separate decoding hardware for each order. This concludes the prerequisite background information. In the present algorithm, the decoding process as described above is modified by, among other things, introduction of an l-bit marginalizer sub-algorithm. The term "l-bit marginalizer" signifies that instead of m-bit conditional likelihoods, the decoder computes l-bit conditional likelihoods, where l is fixed. Fixing l, regardless of the value of m, makes it possible to use a single hardware implementation for any PPM order. One could minimize the decoding complexity and obtain an especially simple design by fixing l at 1, but this would entail some loss of performance. An intermediate solution is to fix l at some value, greater than 1, that may be less than or greater than m. This solution makes it possible to obtain the desired flexibility to handle any PPM order while compromising between complexity and loss of performance.
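The Poisson-channel slot statistics described above lead to a simple per-slot likelihood computation, sketched here (this illustrates only the slot-likelihood step, not the paper's l-bit marginalizer). For a slot count k, the log-likelihood ratio log P(k | pulse) − log P(k | no pulse) under Poisson means ns+nb and nb reduces to k·log((ns+nb)/nb) − ns, since the k! terms cancel:

```python
import math

def slot_llrs(counts, ns, nb):
    # Per-slot log-likelihood ratio of "pulse" vs "no pulse" under the
    # Poisson channel model: k * log((ns+nb)/nb) - ns.
    gain = math.log((ns + nb) / nb)
    return [k * gain - ns for k in counts]

def symbol_posteriors(counts, ns, nb):
    # Normalized likelihood that each of the M slots held the pulse
    # (softmax of the slot LLRs, computed stably).
    llrs = slot_llrs(counts, ns, nb)
    mx = max(llrs)
    w = [math.exp(l - mx) for l in llrs]
    s = sum(w)
    return [x / s for x in w]

# M = 4 slots; slot 2 has the largest photon count, so it is most likely.
post = symbol_posteriors([1, 0, 5, 1], ns=3.0, nb=0.5)
print(max(range(4), key=post.__getitem__))  # 2
```

These soft symbol posteriors are what an iterative decoder would then map to likelihoods over the underlying coded bits.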
Hardware Implementation of a MIMO Decoder Using Matrix Factorization Based Channel Estimation
NASA Astrophysics Data System (ADS)
Islam, Mohammad Tariqul; Numan, Mostafa Wasiuddin; Misran, Norbahiah; Ali, Mohd Alauddin Mohd; Singh, Mandeep
2011-05-01
This paper presents an efficient hardware realization of a multiple-input multiple-output (MIMO) wireless communication decoder that utilizes the available resources by adopting the technique of parallelism. The hardware is designed and implemented on a Xilinx Virtex™-4 XC4VLX60 field programmable gate array (FPGA) device in a modular approach which simplifies and eases hardware updates, and facilitates testing of the various modules independently. The decoder involves a proficient channel estimation module that employs matrix factorization on least squares (LS) estimation to reduce a full rank matrix into a simpler form in order to eliminate matrix inversion. This results in performance improvement and complexity reduction of the MIMO system. Performance evaluation of the proposed method is validated through MATLAB simulations, which indicate a 2 dB improvement in terms of SNR compared to LS estimation. Moreover, a complexity comparison is performed in terms of mathematical operations, which shows that the proposed approach appreciably outperforms LS estimation at a lower complexity and represents a good solution for channel estimation.
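The general idea of eliminating the explicit matrix inversion in LS channel estimation via a factorization can be sketched as follows (this is a generic QR-based illustration under assumed names, X for the pilot matrix and Y for the received signal, not the paper's specific factorization): instead of computing H_ls = (Xᴴ X)⁻¹ Xᴴ Y directly, factor X = QR and solve the triangular system by back-substitution.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 2)) + 1j * rng.standard_normal((8, 2))  # pilots
H = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))  # channel
Y = X @ H                                           # received (noiseless)

Q, R = np.linalg.qr(X)                  # factorize instead of inverting X^H X
H_hat = np.linalg.solve(R, Q.conj().T @ Y)   # triangular solve, no inverse
print(np.allclose(H_hat, H))            # True (exact in the noiseless case)
```

Avoiding the explicit inverse both reduces the operation count and improves numerical behavior, which is why factorization-based solvers map well to FPGA datapaths.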
Maximum-likelihood soft-decision decoding of block codes using the A* algorithm
NASA Technical Reports Server (NTRS)
Ekroot, L.; Dolinar, S.
1994-01-01
The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.
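A minimal sketch of the idea on a tiny (7,4) Hamming code follows (illustrative only: the article decodes codes up to length 64 and additionally orders bits by reliability, which is omitted here). The path metric is the BPSK correlation sum of r_i·(1−2c_i); the heuristic for undecided positions is the sum of |r_i|, which never underestimates the best completion, so the first completed codeword popped from the priority queue is maximum likelihood.

```python
import heapq
import numpy as np

# Systematic generator matrix of the (7,4) Hamming code.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])

def a_star_decode(r):
    n, k = 7, 4
    # tail[d] = sum of |r_i| over positions d..n-1: an admissible bound
    # on the contribution of the not-yet-decided positions.
    tail = [sum(abs(x) for x in r[d:]) for d in range(n + 1)]
    heap = [(-tail[0], ())]             # entries: (-(metric + bound), info bits)
    while heap:
        neg_f, u = heapq.heappop(heap)
        if len(u) == k:                 # first completed codeword popped is ML
            c = (np.array(u) @ G) % 2
            return np.array(u), c
        d = len(u)
        for bit in (0, 1):
            u2 = u + (bit,)
            if len(u2) == k:            # leaf: exact correlation metric
                c = (np.array(u2) @ G) % 2
                g = float(sum(r[i] * (1 - 2 * c[i]) for i in range(n)))
                heapq.heappush(heap, (-g, u2))
            else:                       # internal: partial metric + bound
                g = float(sum(r[i] * (1 - 2 * u2[i]) for i in range(d + 1)))
                heapq.heappush(heap, (-(g + tail[d + 1]), u2))

# Transmit u = (1,0,1,1) as BPSK (+1 for bit 0, -1 for bit 1) with mild noise.
c_true = (np.array([1, 0, 1, 1]) @ G) % 2
noise = [0.1, -0.2, 0.1, 0.0, -0.1, 0.2, 0.1]
r = [(1 - 2 * b) + e for b, e in zip(c_true, noise)]
u_hat, c_hat = a_star_decode(r)
print(u_hat, c_hat)  # [1 0 1 1] [1 0 1 1 0 1 0]
```

Because the bound is tight on reliable positions, the search typically expands far fewer nodes than the 2^k exhaustive enumeration while returning the same maximum-likelihood codeword.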
Highly efficient simulation environment for HDTV video decoder in VLSI design
NASA Astrophysics Data System (ADS)
Mao, Xun; Wang, Wei; Gong, Huimin; He, Yan L.; Lou, Jian; Yu, Lu; Yao, Qingdong; Pirsch, Peter
2002-01-01
With the increasing complexity of VLSI designs, such as an SoC (System on Chip) MPEG-2 video decoder with HDTV scalability in particular, simulation and verification of the full design, even at the behavioral level in HDL, often prove to be very slow and costly, and full verification is difficult to perform until late in the design process. They therefore become a bottleneck in the design of HDTV video decoders and strongly influence time-to-market. In this paper, the architecture of the hardware/software interface of an HDTV video decoder is studied, and a Hardware-Software Mixed Simulation (HSMS) platform is proposed to check and correct errors in the early design stages, based on the MPEG-2 video decoding algorithm. The application of HSMS to the target system can be achieved by employing several of the introduced approaches. These approaches speed up the simulation and verification task without decreasing performance.
Scalable printed electronics: an organic decoder addressing ferroelectric non-volatile memory.
Ng, Tse Nga; Schwartz, David E; Lavery, Leah L; Whiting, Gregory L; Russo, Beverly; Krusor, Brent; Veres, Janos; Bröms, Per; Herlogsson, Lars; Alam, Naveed; Hagel, Olle; Nilsson, Jakob; Karlsson, Christer
2012-01-01
Scalable circuits of organic logic and memory are realized using all-additive printing processes. A 3-bit organic complementary decoder is fabricated and used to read and write non-volatile, rewritable ferroelectric memory. The decoder-memory array is patterned by inkjet and gravure printing on flexible plastics. Simulation models for the organic transistors are developed, enabling circuit designs tolerant of the variations in printed devices. We explain the key design rules in fabrication of complex printed circuits and elucidate the performance requirements of materials and devices for reliable organic digital logic.
Markov source model for printed music decoding
NASA Astrophysics Data System (ADS)
Kopec, Gary E.; Chou, Philip A.; Maltz, David A.
1995-03-01
This paper describes a Markov source model for a simple subset of printed music notation. The model is based on the Adobe Sonata music symbol set and a message language of our own design. Chord imaging is the most complex part of the model. Much of the complexity follows from a rule of music typography that requires the noteheads for adjacent pitches to be placed on opposite sides of the chord stem. This rule leads to a proliferation of cases for other typographic details such as dot placement. We describe the language of message strings accepted by the model and discuss some of the imaging issues associated with various aspects of the message language. We also point out some aspects of music notation that appear problematic for a finite-state representation. Development of the model was greatly facilitated by the duality between image synthesis and image decoding. Although our ultimate objective was a music image model for use in decoding, most of the development proceeded by using the evolving model for image synthesis, since it is computationally far less costly to image a message than to decode an image.
Grynberg, Delphine; Maurage, Pierre; Nandrino, Jean-Louis
2017-04-01
Prior research has repeatedly shown that alcohol dependence is associated with a large range of impairments in psychological processes, which could lead to interpersonal deficits. Specifically, it has been suggested that these interpersonal difficulties are underpinned by reduced recognition and sharing of others' emotional states. However, this pattern of deficits remains to be clarified. This study thus aimed to investigate whether alcohol dependence is associated with impaired abilities in decoding contextual complex emotions and with altered sharing of others' emotions. Forty-one alcohol-dependent individuals (ADI) and 37 matched healthy individuals completed the Multifaceted Empathy Test, in which they were instructed to identify complex emotional states expressed by individuals in contextual scenes and to state to what extent they shared them. Compared to healthy individuals, ADI were impaired in identifying negative (Cohen's d = 0.75) and positive (Cohen's d = 0.46) emotional states but, conversely, presented preserved abilities in sharing others' emotional states. This study shows that alcohol dependence is characterized by an impaired ability to decode complex emotional states (both positive and negative), despite the presence of complementary contextual cues, but by preserved emotion-sharing. Therefore, these results extend earlier data describing an impaired ability to decode noncontextualized emotions toward contextualized and ecologically valid emotional states. They also indicate that some essential emotional competences such as emotion-sharing are preserved in alcohol dependence, thereby offering potential therapeutic levers. Copyright © 2017 by the Research Society on Alcoholism.
Iterative deep convolutional encoder-decoder network for medical image segmentation.
Jung Uk Kim; Hak Gu Kim; Yong Man Ro
2017-07-01
In this paper, we propose a novel medical image segmentation method using an iterative deep learning framework. We combine an iterative learning approach and an encoder-decoder network to improve segmentation results, which enables precise localization of the regions of interest (ROIs), including complex shapes or detailed textures of medical images, in an iterative manner. The proposed iterative deep convolutional encoder-decoder network consists of two main paths: a convolutional encoder path and a convolutional decoder path with iterative learning. Experimental results show that the proposed iterative deep learning framework is able to yield excellent medical image segmentation performance for various medical images. The effectiveness of the proposed method has been demonstrated by comparison with other state-of-the-art medical image segmentation methods.
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1977-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
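The layered structure described above can be sketched with toy component codes. In the Python below, a single-parity-check outer code and a 3x bit-repetition inner code stand in for the Reed-Solomon and unit-memory convolutional codes of the report (all names are illustrative); only the concatenation pipeline itself — outer encode, inner encode, channel, inner decode, outer check — is faithful:

```python
# Toy concatenated coding pipeline over bytes.
def outer_encode(data):
    """Append an XOR parity byte (stand-in for the RS outer code)."""
    parity = 0
    for byte in data:
        parity ^= byte
    return data + [parity]

def inner_encode(data):
    """Repeat each bit three times (stand-in for the inner conv. code)."""
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    return [b for b in bits for _ in range(3)]

def inner_decode(chan_bits):
    """Majority-vote each bit triple, then repack LSB-first bytes."""
    bits = [1 if sum(chan_bits[i:i + 3]) >= 2 else 0
            for i in range(0, len(chan_bits), 3)]
    return [sum(bits[j + i] << i for i in range(8))
            for j in range(0, len(bits), 8)]

def outer_check(blocks):
    """Detect residual errors via the parity byte; strip it on success."""
    parity = 0
    for byte in blocks:
        parity ^= byte
    return parity == 0, blocks[:-1]
```

The inner decoder cleans up most channel errors cheaply; the outer code catches the residue — the same division of labor as in the RS/convolutional system.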
NASA Technical Reports Server (NTRS)
Lee, L. N.
1976-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
Simultaneous real-time monitoring of multiple cortical systems.
Gupta, Disha; Jeremy Hill, N; Brunner, Peter; Gunduz, Aysegul; Ritaccio, Anthony L; Schalk, Gerwin
2014-10-01
Real-time monitoring of the brain is potentially valuable for performance monitoring, communication, training or rehabilitation. In natural situations, the brain performs a complex mix of various sensory, motor or cognitive functions. Thus, real-time brain monitoring would be most valuable if (a) it could decode information from multiple brain systems simultaneously, and (b) this decoding of each brain system were robust to variations in the activity of other (unrelated) brain systems. Previous studies showed that it is possible to decode some information from different brain systems in retrospect and/or in isolation. In our study, we set out to determine whether it is possible to simultaneously decode important information about a user from different brain systems in real time, and to evaluate the impact of concurrent activity in different brain systems on decoding performance. We study these questions using electrocorticographic signals recorded in humans. We first document procedures for generating stable decoding models given little training data, and then report their use for offline and for real-time decoding from 12 subjects (six for offline parameter optimization, six for online experimentation). The subjects engage in tasks that involve movement intention, movement execution and auditory functions, separately, and then simultaneously. Main Results: Our real-time results demonstrate that our system can identify intention and movement periods in single trials with an accuracy of 80.4% and 86.8%, respectively (where 50% would be expected by chance). Simultaneously, the decoding of the power envelope of an auditory stimulus resulted in an average correlation coefficient of 0.37 between the actual and decoded power envelopes. These decoders were trained separately and executed simultaneously in real time. This study yielded the first demonstration that it is possible to decode simultaneously the functional activity of multiple independent brain systems. 
Our comparison of univariate and multivariate decoding strategies, and our analysis of the influence of their decoding parameters, provides benchmarks and guidelines for future research on this topic.
Simultaneous Real-Time Monitoring of Multiple Cortical Systems
Gupta, Disha; Hill, N. Jeremy; Brunner, Peter; Gunduz, Aysegul; Ritaccio, Anthony L.; Schalk, Gerwin
2014-01-01
Objective Real-time monitoring of the brain is potentially valuable for performance monitoring, communication, training or rehabilitation. In natural situations, the brain performs a complex mix of various sensory, motor, or cognitive functions. Thus, real-time brain monitoring would be most valuable if (a) it could decode information from multiple brain systems simultaneously, and (b) this decoding of each brain system were robust to variations in the activity of other (unrelated) brain systems. Previous studies showed that it is possible to decode some information from different brain systems in retrospect and/or in isolation. In our study, we set out to determine whether it is possible to simultaneously decode important information about a user from different brain systems in real time, and to evaluate the impact of concurrent activity in different brain systems on decoding performance. Approach We study these questions using electrocorticographic (ECoG) signals recorded in humans. We first document procedures for generating stable decoding models given little training data, and then report their use for offline and for real-time decoding from 12 subjects (6 for offline parameter optimization, 6 for online experimentation). The subjects engage in tasks that involve movement intention, movement execution and auditory functions, separately, and then simultaneously. Main results Our real-time results demonstrate that our system can identify intention and movement periods in single trials with an accuracy of 80.4% and 86.8%, respectively (where 50% would be expected by chance). Simultaneously, the decoding of the power envelope of an auditory stimulus resulted in an average correlation coefficient of 0.37 between the actual and decoded power envelope. These decoders were trained separately and executed simultaneously in real time. 
Significance This study yielded the first demonstration that it is possible to decode simultaneously the functional activity of multiple independent brain systems. Our comparison of univariate and multivariate decoding strategies, and our analysis of the influence of their decoding parameters, provides benchmarks and guidelines for future research on this topic. PMID:25080161
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
The Viterbi algorithm is a very simple and efficient method of implementing maximum likelihood decoding. However, if we take advantage of the structural properties of a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis; only some special one-section trellises of relatively small state and branch complexities are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table, which contains the most likely codeword and its metric for a given received sequence r = (r_1, r_2, ..., r_n). This algorithm basically uses a divide-and-conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.
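For reference, the baseline that RMLD is compared against can be written compactly. This is a minimal hard-decision Viterbi decoder for the textbook rate-1/2, constraint-length-3 convolutional code with generators (7, 5) octal — an illustrative sketch, not the block-code trellises discussed in the abstract:

```python
# Rate-1/2, K=3 convolutional code; generator polynomials (7, 5) octal.
G = [0b111, 0b101]

def conv_encode(bits):
    """Encode a bit list; the state holds the two previous input bits."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state          # [b, b_prev, b_prev2]
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi(received):
    """Trellis search minimizing Hamming distance over the 4 states."""
    INF = float("inf")
    metrics = [0, INF, INF, INF]        # start in the all-zero state
    paths = [[] for _ in range(4)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_m, new_p = [INF] * 4, [None] * 4
        for s in range(4):
            if metrics[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                expected = [bin(reg & g).count("1") & 1 for g in G]
                m = metrics[s] + sum(x != y for x, y in zip(expected, r))
                nxt = reg >> 1
                if m < new_m[nxt]:
                    new_m[nxt], new_p[nxt] = m, paths[s] + [b]
        metrics, paths = new_m, new_p
    return paths[metrics.index(min(metrics))]
```

With a message terminated by two zero bits, the decoder recovers the input exactly on a clean channel and corrects isolated errors (this code's free distance is 5).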
2015-04-01
We and others have recently decoded a major conserved route that mTORC1 uses to control autophagy. These studies demonstrate that mTORC1 inactivates another kinase complex, and the project aims to further explore the use of novel small-molecule inhibitors of ULK1 to synergize with mTOR inhibitors to induce cell death.
On accuracy, privacy, and complexity in the identification problem
NASA Astrophysics Data System (ADS)
Beekhof, F.; Voloshynovskiy, S.; Koval, O.; Holotyak, T.
2010-02-01
This paper presents recent advances in the identification problem, taking into account the accuracy, complexity and privacy leak of different decoding algorithms. Using a model of different actors from the literature, we show that it is possible to use more accurate decoding algorithms that exploit reliability information without increasing the privacy leak relative to algorithms that only use binary information. Existing algorithms from the literature have been modified to take advantage of reliability information, and we show that a proposed branch-and-bound algorithm can outperform existing work, including the enhanced variants.
Circuit Design Approaches for Implementation of a Subtrellis IC for a Reed-Muller Subcode
NASA Technical Reports Server (NTRS)
Lin, Shu; Uehara, Gregory T.; Nakamura, Eric B.; Chu, Cecilia W. P.
1996-01-01
In this research, we have proposed the (64, 40, 8) subcode of the third-order Reed-Muller (RM) code to NASA for high-speed satellite communications. This RM subcode can be used either alone or as an inner code of a concatenated coding system with the NASA standard (255, 233, 33) Reed-Solomon (RS) code as the outer code to achieve high performance (or low bit-error rate) with reduced decoding complexity. It can also be used as a component code in a multilevel bandwidth efficient coded modulation system to achieve reliable bandwidth efficient data transmission. This report will summarize the key progress we have made toward achieving our eventual goal of implementing a decoder system based upon this code. In the first phase of study, we investigated the complexities of various sectionalized trellis diagrams for the proposed (64, 40, 8) RM subcode. We found a specific 8-trellis diagram for this code which requires the least decoding complexity with a high possibility of achieving a decoding speed of 600 Mbits per second (Mbps). The combination of a large number of states and a high data rate will be made possible due to the utilization of a high degree of parallelism throughout the architecture. This trellis diagram will be presented and briefly described. In the second phase of study which was carried out through the past year, we investigated circuit architectures to determine the feasibility of VLSI implementation of a high-speed Viterbi decoder based on this 8-section trellis diagram. We began to examine specific design and implementation approaches to implement a fully custom integrated circuit (IC) which will be a key building block for a decoder system implementation. The key results will be presented in this report. This report will be divided into three primary sections. 
First, we will briefly describe the system block diagram in which the proposed decoder is assumed to be operating and present some of the key architectural approaches being used to implement the system at high speed. Second, we will describe details of the 8-trellis diagram we found to best meet the trade-offs between chip and overall system complexity. The chosen approach implements the trellis for the (64, 40, 8) RM subcode with 32 independent sub-trellises. And third, we will describe results of our feasibility study on the implementation of such an IC chip in CMOS technology to implement one of these subtrellises.
Circuit Design Approaches for Implementation of a Subtrellis IC for a Reed-Muller Subcode
NASA Technical Reports Server (NTRS)
Lin, Shu; Uehara, Gregory T.; Nakamura, Eric B.; Chu, Cecilia W. P.
1996-01-01
In this research, we have proposed the (64, 40, 8) subcode of the third-order Reed-Muller (RM) code to NASA for high-speed satellite communications. This RM subcode can be used either alone or as an inner code of a concatenated coding system with the NASA standard (255, 233, 33) Reed-Solomon (RS) code as the outer code to achieve high performance (or low bit-error rate) with reduced decoding complexity. It can also be used as a component code in a multilevel bandwidth efficient coded modulation system to achieve reliable bandwidth efficient data transmission. This report will summarize the key progress we have made toward achieving our eventual goal of implementing a decoder system based upon this code. In the first phase of study, we investigated the complexities of various sectionalized trellis diagrams for the proposed (64, 40, 8) RM subcode. We found a specific 8-trellis diagram for this code which requires the least decoding complexity with a high possibility of achieving a decoding speed of 600 M bits per second (Mbps). The combination of a large number of states and a high data rate will be made possible due to the utilization of a high degree of parallelism throughout the architecture. This trellis diagram will be presented and briefly described. In the second phase of study which was carried out through the past year, we investigated circuit architectures to determine the feasibility of VLSI implementation of a high-speed Viterbi decoder based on this 8-section trellis diagram. We began to examine specific design and implementation approaches to implement a fully custom integrated circuit (IC) which will be a key building block for a decoder system implementation. The key results will be presented in this report. This report will be divided into three primary sections. 
First, we will briefly describe the system block diagram in which the proposed decoder is assumed to be operating and present some of the key architectural approaches being used to implement the system at high speed. Second, we will describe details of the 8-trellis diagram we found to best meet the trade-offs between chip and overall system complexity. The chosen approach implements the trellis for the (64, 40, 8) RM subcode with 32 independent sub-trellises. And third, we will describe results of our feasibility study on the implementation of such an IC chip in CMOS technology to implement one of these subtrellises.
Fast and Flexible Successive-Cancellation List Decoders for Polar Codes
NASA Astrophysics Data System (ADS)
Hashemi, Seyyed Ali; Condo, Carlo; Gross, Warren J.
2017-11-01
Polar codes have gained a significant amount of attention during the past few years and have been selected as a coding scheme for the next-generation mobile broadband standard. Among decoding schemes, successive-cancellation list (SCL) decoding provides a reasonable trade-off between error-correction performance and hardware implementation complexity when used to decode polar codes, at the cost of limited throughput. The simplified SCL (SSCL) and its extension SSCL-SPC increase the speed of decoding by removing redundant calculations when encountering particular information and frozen bit patterns (rate one and single parity check codes), while keeping the error-correction performance unaltered. In this paper, we improve SSCL and SSCL-SPC by proving that the list size imposes a specific number of bit estimations required to decode rate one and single parity check codes. Thus, the number of estimations can be limited while guaranteeing exactly the same error-correction performance as if all bits of the code were estimated. We call the new decoding algorithms Fast-SSCL and Fast-SSCL-SPC. Moreover, we show that the number of bit estimations in a practical application can be tuned to achieve desirable speed, while keeping the error-correction performance almost unchanged. Hardware architectures implementing both algorithms are then described and implemented: it is shown that our design can achieve 1.86 Gb/s throughput, higher than the best state-of-the-art decoders.
Scalable printed electronics: an organic decoder addressing ferroelectric non-volatile memory
Ng, Tse Nga; Schwartz, David E.; Lavery, Leah L.; Whiting, Gregory L.; Russo, Beverly; Krusor, Brent; Veres, Janos; Bröms, Per; Herlogsson, Lars; Alam, Naveed; Hagel, Olle; Nilsson, Jakob; Karlsson, Christer
2012-01-01
Scalable circuits of organic logic and memory are realized using all-additive printing processes. A 3-bit organic complementary decoder is fabricated and used to read and write non-volatile, rewritable ferroelectric memory. The decoder-memory array is patterned by inkjet and gravure printing on flexible plastics. Simulation models for the organic transistors are developed, enabling circuit designs tolerant of the variations in printed devices. We explain the key design rules in fabrication of complex printed circuits and elucidate the performance requirements of materials and devices for reliable organic digital logic. PMID:22900143
On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays
NASA Technical Reports Server (NTRS)
Shao, H. M.; Deutsch, L. J.; Reed, I. S.
1987-01-01
A new very large scale integration (VLSI) design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous article is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and significant reduction in silicon area.
On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays
NASA Technical Reports Server (NTRS)
Shao, Howard M.; Reed, Irving S.
1988-01-01
A new very large scale integration (VLSI) design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous article is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and significant reduction in silicon area.
Distributed Coding of Compressively Sensed Sources
NASA Astrophysics Data System (ADS)
Goukhshtein, Maxim
In this work we propose a new method for compressing multiple correlated sources with a very low-complexity encoder in the presence of side information. Our approach uses ideas from compressed sensing and distributed source coding. At the encoder, syndromes of the quantized compressively sensed sources are generated and transmitted. The decoder uses side information to predict the compressed sources. The predictions are then used to recover the quantized measurements via a two-stage decoding process consisting of bitplane prediction and syndrome decoding. Finally, guided by the structure of the sources and the side information, the sources are reconstructed from the recovered measurements. As a motivating example, we consider the compression of multispectral images acquired on board satellites, where resources, such as computational power and memory, are scarce. Our experimental results exhibit a significant improvement in the rate-distortion trade-off when compared against approaches with similar encoder complexity.
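The syndrome-plus-side-information step can be illustrated with a toy code. The sketch below uses a (7,4) Hamming code in place of the article's codes over quantized measurements: the encoder transmits only the syndrome of x, and the decoder combines it with a correlated prediction y (the side information):

```python
# Slepian-Wolf-style syndrome coding sketch with a (7,4) Hamming code.
import itertools

# Parity-check matrix whose columns are the binary expansions of 1..7.
H = [[(c >> r) & 1 for c in range(1, 8)] for r in range(3)]

def syndrome(v):
    return tuple(sum(h * x for h, x in zip(row, v)) % 2 for row in H)

def dsc_decode(s, y):
    """Recover x from its syndrome s and a correlated prediction y.

    Finds the lowest-weight error pattern e with syndrome(e) equal to
    syndrome(y) XOR s, then returns y XOR e. Succeeds whenever x and y
    differ in at most one position (the code's correction radius).
    """
    target = tuple(a ^ b for a, b in zip(syndrome(y), s))
    for w in range(len(y) + 1):
        for pos in itertools.combinations(range(len(y)), w):
            e = [1 if i in pos else 0 for i in range(len(y))]
            if syndrome(e) == target:
                return [a ^ b for a, b in zip(y, e)]
```

The encoder's rate advantage comes from sending only the 3-bit syndrome instead of the 7-bit source block; the prediction quality determines which code rate suffices.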
Word Decoding Development during Phonics Instruction in Children at Risk for Dyslexia.
Schaars, Moniek M H; Segers, Eliane; Verhoeven, Ludo
2017-05-01
In the present study, we examined the early word decoding development of 73 children at genetic risk of dyslexia and 73 matched controls. We conducted monthly curriculum-embedded word decoding measures during the first 5 months of phonics-based reading instruction followed by standardized word decoding measures halfway and by the end of first grade. In kindergarten, vocabulary, phonological awareness, lexical retrieval, and verbal and visual short-term memory were assessed. The results showed that the children at risk were less skilled in phonemic awareness in kindergarten. During the first 5 months of reading instruction, children at risk were less efficient in word decoding and the discrepancy increased over the months. In subsequent months, the discrepancy prevailed for simple words but increased for more complex words. Phonemic awareness and lexical retrieval predicted the reading development in children at risk and controls to the same extent. It is concluded that children at risk are behind their typical peers in word decoding development starting from the very beginning. Furthermore, it is concluded that the disadvantage increased during phonics instruction and that the same predictors underlie the development of word decoding in the two groups of children. Copyright © 2017 John Wiley & Sons, Ltd.
Strong and weak adsorptions of polyelectrolyte chains onto oppositely charged spheres
NASA Astrophysics Data System (ADS)
Cherstvy, A. G.; Winkler, R. G.
2006-08-01
We investigate the complexation of long thin polyelectrolyte (PE) chains with oppositely charged spheres. In the limit of strong adsorption, when strongly charged PE chains adopt a definite wrapped conformation on the sphere surface, we analytically solve the linear Poisson-Boltzmann equation and calculate the electrostatic potential and the energy of the complex. We discuss some biological applications of the obtained results. For weak adsorption, when a flexible weakly charged PE chain is localized next to the sphere in solution, we solve the Edwards equation for PE conformations in the Hulthén potential, which is used as an approximation for the screened Debye-Hückel potential of the sphere. We predict the critical conditions for PE adsorption. We find that the critical sphere charge density exhibits a distinctively different dependence on the Debye screening length than for PE adsorption onto a flat surface. We compare our findings with experimental measurements on complexation of various PEs with oppositely charged colloidal particles. We also present some numerical results of the coupled Poisson-Boltzmann and self-consistent field equation for PE adsorption in an assembly of oppositely charged spheres.
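The Hulthén potential is used above precisely because it mimics the screened Debye-Hückel (Yukawa) form while keeping the Edwards equation analytically solvable. A quick numeric comparison (unit charge prefactor; kappa is the inverse Debye screening length) shows how close the two are:

```python
import math

def yukawa(r, kappa=1.0):
    """Screened Debye-Hueckel (Yukawa) potential, unit charge prefactor."""
    return math.exp(-kappa * r) / r

def hulthen(r, kappa=1.0):
    """Hulthen potential: same short-range 1/r behavior, same
    exponential long-range decay, but exactly solvable."""
    return kappa * math.exp(-kappa * r) / (1.0 - math.exp(-kappa * r))
```

Both behave as 1/r at short range: at kappa*r = 0.01 they agree to better than 1%, while at kappa*r = 1 the Hulthén form is already roughly 60% larger, so the approximation is best inside the screening length.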
DOE Office of Scientific and Technical Information (OSTI.GOV)
CHERTKOV, MICHAEL; STEPANOV, MIKHAIL
2007-01-10
The authors discuss performance of Low-Density Parity-Check (LDPC) codes decoded by Linear Programming (LP) decoding at moderate and large Signal-to-Noise Ratios (SNR). Frame-Error-Rate (FER) dependence on SNR and the noise-space landscape of the coding/decoding scheme are analyzed by a combination of the previously introduced instanton/pseudo-codeword-search method and a new 'dendro' trick. To reduce the complexity of LP decoding for a code with high-degree checks, ≥ 5, they introduce its dendro-LDPC counterpart, that is, a code performing identically to the original one under Maximum-A-Posteriori (MAP) decoding but having reduced (down to three) check connectivity degree. Analyzing a number of popular LDPC codes and their dendro versions performing over the Additive White Gaussian Noise (AWGN) channel, they observed two qualitatively different regimes: (i) the error floor sets in early, at relatively low SNR, and (ii) FER decays faster with increasing SNR at moderate SNR than at the largest SNR. They explain these regimes in terms of the pseudo-codeword spectra of the codes.
Soft-Decision Decoding of Binary Linear Block Codes Based on an Iterative Search Algorithm
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Moorthy, H. T.
1997-01-01
This correspondence presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords one at a time using a set of test error patterns that are constructed based on the reliability information of the received symbols. When a candidate codeword is generated, it is tested based on an optimality condition. If it satisfies the optimality condition, then it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and search continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from the optimal maximum-likelihood decoding with a significant reduction in decoding complexity compared with the Viterbi decoding based on the full trellis diagram of the codes.
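The flavor of the test-error-pattern search can be sketched with a Chase-type decoder. The Python below uses a (7,4) Hamming code, with a brute-force nearest-codeword search standing in for the algebraic decoder, and omits the optimality test and trellis search stages of the scheme above:

```python
# Chase-type soft-decision decoding sketch on the (7,4) Hamming code.
import itertools

Gm = [[1, 0, 0, 0, 0, 1, 1],
      [0, 1, 0, 0, 1, 0, 1],
      [0, 0, 1, 0, 1, 1, 0],
      [0, 0, 0, 1, 1, 1, 1]]

def ham_encode(msg):
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*Gm)]

CODEBOOK = [ham_encode(m) for m in itertools.product([0, 1], repeat=4)]

def hard_decode(v):
    """Nearest codeword by Hamming distance (algebraic-decoder stand-in)."""
    return min(CODEBOOK, key=lambda c: sum(a != b for a, b in zip(c, v)))

def chase(soft, t=2):
    """Flip the t least-reliable hard decisions, decode each test
    pattern, keep the candidate best correlated with the soft input."""
    hard = [1 if s < 0 else 0 for s in soft]     # BPSK: +1 -> 0, -1 -> 1
    weakest = sorted(range(len(soft)), key=lambda i: abs(soft[i]))[:t]
    best, best_corr = None, float("-inf")
    for flips in itertools.product((0, 1), repeat=t):
        v = list(hard)
        for i, f in zip(weakest, flips):
            v[i] ^= f
        cand = hard_decode(v)
        corr = sum((1 - 2 * b) * s for b, s in zip(cand, soft))
        if corr > best_corr:
            best, best_corr = cand, corr
    return best
```

The reliability ordering is what keeps the candidate list short: only the symbols most likely to be wrong are ever flipped.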
Concatenated coding for low data rate space communications.
NASA Technical Reports Server (NTRS)
Chen, C. H.
1972-01-01
In deep space communications with distant planets, the data rate as well as the operating SNR may be very low. To maintain the error rate also at a very low level, it is necessary to use a sophisticated coding system (longer code) without excessive decoding complexity. The concatenated coding has been shown to meet such requirements in that the error rate decreases exponentially with the overall length of the code while the decoder complexity increases only algebraically. Three methods of concatenating an inner code with an outer code are considered. Performance comparison of the three concatenated codes is made.
A four-dimensional virtual hand brain-machine interface using active dimension selection.
Rouse, Adam G
2016-06-01
Brain-machine interfaces (BMI) traditionally rely on a fixed, linear transformation from neural signals to an output state-space. In this study, the assumption that a BMI must control a fixed, orthogonal basis set was challenged and a novel active dimension selection (ADS) decoder was explored. ADS utilizes a two-stage decoder, using neural signals both to (i) select an active dimension being controlled and (ii) control the velocity along the selected dimension. ADS decoding was tested in a monkey using 16 single units from premotor and primary motor cortex to successfully control a virtual hand avatar to move to eight different postures. Following training with the ADS decoder to control 2, 3, and then 4 dimensions, each emulating a grasp shape of the hand, performance reached 93% correct with a bit rate of 2.4 bits/s for eight targets. Selection of eight targets using ADS control was more efficient, as measured by bit rate, than either full four-dimensional control or computer-assisted one-dimensional control. ADS decoding allows a user to quickly and efficiently select different hand postures. This novel decoding scheme represents a potential method to reduce the complexity of high-dimension BMI control of the hand.
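A minimal sketch of the two-stage ADS rule follows. The weights here are random stand-ins (in the study both stages were fit to recorded premotor/motor activity), so only the select-then-drive structure is meaningful:

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_dims = 16, 4

# Random stand-ins for decoder weights fit to neural recordings.
W_select = rng.standard_normal((n_dims, n_units))  # stage (i): which dimension
w_speed = rng.standard_normal(n_units)             # stage (ii): 1-D velocity

def ads_decode(firing_rates):
    """Two-stage decode: pick the active dimension, then drive only it."""
    active = int(np.argmax(W_select @ firing_rates))
    velocity = float(w_speed @ firing_rates)
    out = np.zeros(n_dims)
    out[active] = velocity        # all non-selected dimensions stay fixed
    return active, out
```

Because only one dimension moves per time step, the user controls a 1-D velocity signal at any instant, which is what reduces the effective control complexity.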
Decoding power-spectral profiles from FMRI brain activities during naturalistic auditory experience.
Hu, Xintao; Guo, Lei; Han, Junwei; Liu, Tianming
2017-02-01
Recent studies have demonstrated a close relationship between computational acoustic features and neural brain activities, and have largely advanced our understanding of auditory information processing in the human brain. Along this line, we proposed a multidisciplinary study to examine whether power spectral density (PSD) profiles can be decoded from brain activities during naturalistic auditory experience. The study was performed on a high resolution functional magnetic resonance imaging (fMRI) dataset acquired when participants freely listened to the audio-description of the movie "Forrest Gump". Representative PSD profiles existing in the audio-movie were identified by clustering the audio samples according to their PSD descriptors. Support vector machine (SVM) classifiers were trained to differentiate the representative PSD profiles using corresponding fMRI brain activities. Based on PSD profile decoding, we explored how the neural decodability correlated to power intensity and frequency deviants. Our experimental results demonstrated that PSD profiles can be reliably decoded from brain activities. We also suggested a sigmoidal relationship between the neural decodability and power intensity deviants of PSD profiles. Our study in addition substantiates the feasibility and advantage of naturalistic paradigm for studying neural encoding of complex auditory information.
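The decoding pipeline can be sketched end to end with synthetic audio. Below, a coarse FFT-based PSD descriptor and a nearest-centroid classifier stand in for the article's PSD clustering and SVM decoding of fMRI activity (the band count and signals are arbitrary illustrative choices):

```python
import numpy as np

def psd_profile(x, n_bands=4):
    """Coarse PSD descriptor: mean periodogram power in n_bands bands."""
    p = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    return np.array([b.mean() for b in np.array_split(p, n_bands)])

def train_centroids(clips, labels):
    """Nearest-centroid model over PSD profiles (stand-in for the SVM)."""
    labels = np.asarray(labels)
    feats = np.array([psd_profile(c) for c in clips])
    return {k: feats[labels == k].mean(axis=0) for k in set(labels.tolist())}

def classify(clip, centroids):
    """Assign a clip to the representative PSD profile it is closest to."""
    f = psd_profile(clip)
    return min(centroids, key=lambda k: np.linalg.norm(f - centroids[k]))
```

In the study the classifier input is the fMRI response rather than the audio itself; the sketch only shows how representative PSD profiles partition stimuli into decodable classes.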
Electromagnetic Energy Localization and Characterization of Composites
2013-01-01
Prior work with time-domain EM analysis has treated regular shapes such as cubes, spheres, and regular polyhedrons, including [39] (spheres and a complex yet symmetric structure), [40] (spheres, crosses, cylinders, and polyhedrons), and [41] (spheres and cylinders); 3-D random mixtures have been studied using a frequency-domain finite-element method in [42] (polyhedrons) and [43], [44] (spheres). Such steady-state analyses are limited as they, for example, do not capture temporal effects.
NASA Astrophysics Data System (ADS)
Sachs, Nicholas A.; Ruiz-Torres, Ricardo; Perreault, Eric J.; Miller, Lee E.
2016-02-01
Objective. It is quite remarkable that brain machine interfaces (BMIs) can be used to control complex movements with fewer than 100 neurons. Success may be due in part to the limited range of dynamical conditions under which most BMIs are tested. Achieving high-quality control that spans these conditions with a single linear mapping will be more challenging. Even for simple reaching movements, existing BMIs must reduce the stochastic noise of neurons by averaging the control signals over time, instead of over the many neurons that normally control movement. This forces a compromise between a decoder with dynamics allowing rapid movement and one that allows postures to be maintained with little jitter. Our current work presents a method for addressing this compromise, which may also generalize to more highly varied dynamical situations, including movements with more greatly varying speed. Approach. We have developed a system that uses two independent Wiener filters as individual components in a single decoder, one optimized for movement, and the other for postural control. We computed an LDA classifier using the same neural inputs. The decoder combined the outputs of the two filters in proportion to the likelihood assigned by the classifier to each state. Main results. We have performed online experiments with two monkeys using this neural-classifier, dual-state decoder, comparing it to a standard, single-state decoder as well as to a dual-state decoder that switched states automatically based on the cursor’s proximity to a target. The performance of both monkeys using the classifier decoder was markedly better than that of the single-state decoder and comparable to the proximity decoder. Significance. We have demonstrated a novel strategy for dealing with the need to make rapid movements while also maintaining precise cursor control when approaching and stabilizing within targets. 
Further gains can undoubtedly be realized by optimizing the performance of the individual movement and posture decoders.
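The decoder-blending rule described in the abstract (two linear filters mixed in proportion to a classifier's state likelihood) can be sketched as follows. All weights and dimensions are invented for illustration, and a logistic posterior stands in for the paper's LDA classifier, which likewise assigns a state likelihood from the same neural inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 16                                       # hypothetical recorded units
W_move = rng.standard_normal((2, n_units)) * 0.10  # "movement" filter: higher gain
W_post = rng.standard_normal((2, n_units)) * 0.01  # "posture" filter: low gain, low jitter
w_clf  = rng.standard_normal(n_units) * 0.1        # toy classifier weights

def classifier_posterior(rates, w, b=0.0):
    """Logistic posterior P(movement | rates); a stand-in for the LDA classifier."""
    return 1.0 / (1.0 + np.exp(-(w @ rates + b)))

def dual_state_decode(rates):
    """Blend the movement and posture filter outputs in proportion to the
    classifier's likelihood for each state."""
    p_move = classifier_posterior(rates, w_clf)
    return p_move * (W_move @ rates) + (1.0 - p_move) * (W_post @ rates)

rates = rng.poisson(5.0, n_units).astype(float)    # one bin of toy firing rates
v = dual_state_decode(rates)                       # decoded 2-D cursor velocity
```

The soft blend avoids the hard switching of the proximity decoder: near the classifier's decision boundary the output transitions smoothly between the two filters.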
Efficient Decoding With Steady-State Kalman Filter in Neural Interface Systems
Malik, Wasim Q.; Truccolo, Wilson; Brown, Emery N.; Hochberg, Leigh R.
2011-01-01
The Kalman filter is commonly used in neural interface systems to decode neural activity and estimate the desired movement kinematics. We analyze a low-complexity Kalman filter implementation in which the filter gain is approximated by its steady-state form, computed offline before real-time decoding commences. We evaluate its performance using human motor cortical spike train data obtained from an intracortical recording array as part of an ongoing pilot clinical trial. We demonstrate that the standard Kalman filter gain converges to within 95% of the steady-state filter gain in 1.5 ± 0.5 s (mean ± s.d.). The difference in the intended movement velocity decoded by the two filters vanishes within 5 s, with a correlation coefficient of 0.99 between the two decoded velocities over the session length. We also find that the steady-state Kalman filter reduces the computational load (algorithm execution time) for decoding the firing rates of 25 ± 3 single units by a factor of 7.0 ± 0.9. We expect that the gain in computational efficiency will be much higher in systems with larger neural ensembles. The steady-state filter can thus provide substantial runtime efficiency at little cost in terms of estimation accuracy. This far more efficient neural decoding approach will facilitate the practical implementation of future large-dimensional, multisignal neural interface systems. PMID:21078582
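The offline computation of the steady-state gain amounts to iterating the discrete Riccati recursion until the Kalman gain stops changing; each real-time decoding step then needs only fixed matrix products. A minimal sketch with an invented two-state model (the paper's model and parameters are not reproduced here):

```python
import numpy as np

# Toy linear-Gaussian model (matrices are illustrative, not from the paper):
#   x_k = A x_{k-1} + w,  w ~ N(0, Q);    y_k = C x_k + v,  v ~ N(0, R)
A = np.array([[1.0, 0.1],
              [0.0, 0.95]])
C = np.array([[1.0, 0.0]])
Q = np.diag([0.01, 0.1])
R = np.array([[0.5]])

def steady_state_gain(A, C, Q, R, iters=1000):
    """Iterate the discrete Riccati recursion offline until the Kalman gain
    converges; real-time decoding then reuses this fixed gain."""
    n = A.shape[0]
    P = np.eye(n)
    for _ in range(iters):
        P_pred = A @ P @ A.T + Q
        K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
        P = (np.eye(n) - K @ C) @ P_pred
    return K

K_ss = steady_state_gain(A, C, Q, R)

def decode_step(x_est, y):
    """One decoding step with the precomputed gain: no covariance update,
    no matrix inversion, just predict and correct."""
    x_pred = A @ x_est
    return x_pred + K_ss @ (y - C @ x_pred)
```

Skipping the per-step covariance update and inversion is the source of the runtime saving the abstract reports; the cost is only the brief transient before the standard filter's gain would have converged.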
Self-assembled bionanostructures: proteins following the lead of DNA nanostructures
2014-01-01
Natural polymers are able to self-assemble into versatile nanostructures based on the information encoded into their primary structure. The structural richness of biopolymer-based nanostructures depends on the information content of building blocks and the available biological machinery to assemble and decode polymers with a defined sequence. Natural polypeptides comprise 20 amino acids with very different properties in comparison to only 4 structurally similar nucleotides, building elements of nucleic acids. Nevertheless the ease of synthesizing polynucleotides with selected sequence and the ability to encode the nanostructural assembly based on the two specific nucleotide pairs underlay the development of techniques to self-assemble almost any selected three-dimensional nanostructure from polynucleotides. Despite more complex design rules, peptides were successfully used to assemble symmetric nanostructures, such as fibrils and spheres. While earlier designed protein-based nanostructures used linked natural oligomerizing domains, recent design of new oligomerizing interaction surfaces and introduction of the platform for topologically designed protein fold may enable polypeptide-based design to follow the track of DNA nanostructures. The advantages of protein-based nanostructures, such as the functional versatility and cost effective and sustainable production methods provide strong incentive for further development in this direction. PMID:24491139
Hardware Implementation of Serially Concatenated PPM Decoder
NASA Technical Reports Server (NTRS)
Moision, Bruce; Hamkins, Jon; Barsoum, Maged; Cheng, Michael; Nakashima, Michael
2009-01-01
A prototype decoder for a serially concatenated pulse position modulation (SCPPM) code has been implemented in a field-programmable gate array (FPGA). At the time of this reporting, this is the first known hardware SCPPM decoder. The SCPPM coding scheme, conceived for free-space optical communications with both deep-space and terrestrial applications in mind, is an improvement of several dB over the conventional Reed-Solomon PPM scheme. The design of the FPGA SCPPM decoder is based on a turbo decoding algorithm that requires relatively low computational complexity while delivering error-rate performance within approximately 1 dB of channel capacity. The SCPPM encoder consists of an outer convolutional encoder, an interleaver, an accumulator, and an inner modulation encoder (more precisely, a mapping of bits to PPM symbols). Each code is describable by a trellis (a finite directed graph). The SCPPM decoder consists of an inner soft-in-soft-out (SISO) module, a de-interleaver, an outer SISO module, and an interleaver connected in a loop (see figure). Each SISO module applies the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm to compute a-posteriori bit log-likelihood ratios (LLRs) from a-priori LLRs by traversing the code trellis in forward and backward directions. The SISO modules iteratively refine the LLRs by passing the estimates between one another, much like the working of a turbine engine. Extrinsic information (the difference between the a-posteriori and a-priori LLRs) is exchanged rather than the a-posteriori LLRs to minimize undesired feedback. All computations are performed in the logarithmic domain, wherein multiplications are translated into additions, thereby reducing complexity and sensitivity to fixed-point implementation roundoff errors.
To lower the required memory for storing channel likelihood data and the amounts of data transfer between the decoder and the receiver, one can discard the majority of channel likelihoods, using only the remainder in operation of the decoder. This is accomplished in the receiver by transmitting only a subset consisting of the likelihoods that correspond to time slots containing the largest numbers of observed photons during each PPM symbol period. The assumed number of observed photons in the remaining time slots is set to the mean of a noise slot. In low background noise, the selection of a small subset in this manner results in only negligible loss. Other features of the decoder design to reduce complexity and increase speed include (1) quantization of metrics in an efficient procedure chosen to incur no more than a small performance loss and (2) the use of the max-star function that allows sum of exponentials to be computed by simple operations that involve only an addition, a subtraction, and a table lookup. Another prominent feature of the design is a provision for access to interleaver and de-interleaver memory in a single clock cycle, eliminating the multiple clock-cycle latency characteristic of prior interleaver and de-interleaver designs.
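The max-star function mentioned above is the Jacobian logarithm: it computes log(e^a + e^b) exactly as a maximum plus a correction term that in hardware reduces to an addition, a subtraction, and a table lookup. A minimal sketch:

```python
import math
from functools import reduce

def max_star(a, b):
    """Jacobian logarithm: log(exp(a) + exp(b)) computed as a max plus a small
    correction term; in hardware the correction becomes a table lookup."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

# Chaining max_star gives log-sum-exp over many terms, as needed when
# combining branch metrics in the log-domain BCJR recursions.
llrs = [0.5, -1.2, 2.0]
log_sum = reduce(max_star, llrs)
```

Because max-star is associative up to floating-point error, a sum of exponentials over any number of trellis branches reduces to a chain of these cheap two-input operations.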
Novel Quaternary Quantum Decoder, Multiplexer and Demultiplexer Circuits
NASA Astrophysics Data System (ADS)
Haghparast, Majid; Monfared, Asma Taheri
2017-05-01
Multiple-valued logic is a promising approach to reduce the width of reversible or quantum circuits. Moreover, quaternary logic is considered a good choice for future quantum computing technology, since it is well suited to the encoded realization of binary logic functions by grouping 2 bits together into quaternary values. The quaternary decoder, multiplexer, and demultiplexer are essential units of quaternary digital systems. In this paper, we first design a quantum realization of the quaternary decoder circuit using quaternary 1-qudit gates and quaternary Muthukrishnan-Stroud gates. We then present quantum realizations of quaternary multiplexer and demultiplexer circuits using the constructed quaternary decoder circuit and quaternary controlled Feynman gates. The suggested circuits have a lower quantum cost and hardware complexity than the existing designs currently used in quaternary digital systems. All the measures applied in this paper are based on the nanometric scale.
NASA Technical Reports Server (NTRS)
Kwatra, S. C.
1998-01-01
A large number of papers have been published attempting to give some analytical basis for the performance of Turbo-codes. It has been shown that performance improves with increased interleaver length. Also, procedures have been given to pick the best constituent recursive systematic convolutional codes (RSCCs). However, testing by computer simulation is still required to verify these results. This thesis begins by describing the encoding and decoding schemes used. Next, simulation results on several memory-4 RSCCs are shown. It is found that the best BER performance at low Eb/N0 is not given by the RSCCs that were found using the analytic techniques given so far. Next, the results are given from simulations using a smaller-memory RSCC for one of the constituent encoders. Significant reduction in decoding complexity is obtained with minimal loss in performance. Simulation results are then given for a rate 1/3 Turbo-code, with the result that this code performed as well as a rate 1/2 Turbo-code as measured by the distance from their respective Shannon limits. Finally, the results of simulations where an inaccurate noise variance measurement was used are given. From this it was observed that Turbo-decoding is fairly stable with regard to noise variance measurement.
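Simulation harnesses of the kind described here rest on a standard AWGN setup: map bits to BPSK, add noise scaled for a target Eb/N0, and count errors. The uncoded-BPSK sketch below shows only the noise scaling; the thesis's turbo encoder/decoder would be inserted around the channel, and all parameters here are illustrative.

```python
import math
import numpy as np

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bpsk_awgn_ber(ebno_db, n_bits=200_000, seed=0):
    """Monte-Carlo bit error rate of uncoded BPSK over AWGN. An encoder and
    decoder pair would wrap the channel in a coded simulation."""
    rng = np.random.default_rng(seed)
    ebno = 10.0 ** (ebno_db / 10.0)
    bits = rng.integers(0, 2, n_bits)
    tx = 1.0 - 2.0 * bits                  # map 0 -> +1, 1 -> -1 (unit energy)
    sigma = math.sqrt(1.0 / (2.0 * ebno))  # noise std for the target Eb/N0
    rx = tx + sigma * rng.standard_normal(n_bits)
    return float(np.mean((rx < 0) != bits))

ber = bpsk_awgn_ber(4.0)                   # should track Q(sqrt(2 Eb/N0))
```

The `sigma` line is also where a mismatched noise-variance measurement of the kind studied in the thesis would enter: the decoder would be handed an estimate of `sigma` that differs from the value used by the channel.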
Compression of Encrypted Images Using Set Partitioning In Hierarchical Trees Algorithm
NASA Astrophysics Data System (ADS)
Sarika, G.; Unnithan, Harikuttan; Peter, Smitha
2011-10-01
When it is desired to transmit redundant data over an insecure channel, it is customary to encrypt the data. For encrypted real-world sources such as images, the use of Markov properties in the Slepian-Wolf decoder does not work well for gray-scale images. In this paper we propose a method for compressing an encrypted image. In the encoder section, the image is first encrypted and then undergoes compression in resolution. The cipher function scrambles only the pixel values; it does not shuffle the pixel locations. After down-sampling, each sub-image is encoded independently and the resulting syndrome bits are transmitted. The received image undergoes joint decryption and decompression in the decoder section, and is recovered using the local statistics of the image. Here the decoder receives only a lower-resolution version of the image. In addition, this method provides partial access to the current source at the decoder side, which improves the decoder's learning of the source statistics. The source dependency is exploited to improve the compression efficiency. This scheme provides better coding efficiency and lower computational complexity.
Ensemble cryo-EM elucidates the mechanism of translation fidelity
Loveland, Anna B.; Demo, Gabriel; Grigorieff, Nikolaus; Korostelev, Andrei A.
2017-01-01
SUMMARY Faithful gene translation depends on accurate decoding, whose structural mechanism remains a matter of debate. Ribosomes decode mRNA codons by selecting cognate aminoacyl-tRNAs delivered by EF-Tu. We present high-resolution structural ensembles of ribosomes with cognate or near-cognate aminoacyl-tRNAs delivered by EF-Tu. Both cognate and near-cognate tRNA anticodons explore the A site of an open 30S subunit, while inactive EF-Tu is separated from the 50S subunit. A transient conformation of decoding-center nucleotide G530 stabilizes the cognate codon-anticodon helix, initiating step-wise “latching” of the decoding center. The resulting 30S domain closure docks EF-Tu at the sarcin-ricin loop of the 50S subunit, activating EF-Tu for GTP hydrolysis and ensuing aminoacyl-tRNA accommodation. By contrast, near-cognate complexes fail to induce the G530 latch, thus favoring open 30S pre-accommodation intermediates with inactive EF-Tu. This work unveils long-sought structural differences between the pre-accommodation of cognate and near-cognate tRNA that elucidate the mechanism of accurate decoding. PMID:28538735
High performance MPEG-audio decoder IC
NASA Technical Reports Server (NTRS)
Thorn, M.; Benbassat, G.; Cyr, K.; Li, S.; Gill, M.; Kam, D.; Walker, K.; Look, P.; Eldridge, C.; Ng, P.
1993-01-01
The emerging digital audio and video compression technology brings both an opportunity and a new challenge to IC design. The pervasive application of compression technology to consumer electronics will require high-volume, low-cost ICs and fast time to market of the prototypes and production units. At the same time, the algorithms used in the compression technology result in complex VLSI ICs. The conflicting challenges of algorithm complexity, low cost, and fast time to market have an impact on device architecture and design methodology. This paper presents the design of a dedicated, high-precision Moving Picture Experts Group (MPEG) audio decoder.
Sparsity-aware multiple relay selection in large multi-hop decode-and-forward relay networks
NASA Astrophysics Data System (ADS)
Gouissem, A.; Hamila, R.; Al-Dhahir, N.; Foufou, S.
2016-12-01
In this paper, we propose and investigate two novel techniques to perform multiple relay selection in large multi-hop decode-and-forward relay networks. The two proposed techniques exploit sparse signal recovery theory to select multiple relays using the orthogonal matching pursuit algorithm and outperform state-of-the-art techniques in terms of outage probability and computation complexity. To reduce the amount of collected channel state information (CSI), we propose a limited-feedback scheme where only a limited number of relays feedback their CSI. Furthermore, a detailed performance-complexity tradeoff investigation is conducted for the different studied techniques and verified by Monte Carlo simulations.
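The orthogonal matching pursuit step at the core of the proposed selection can be sketched generically: columns of a measurement matrix stand in for per-relay contributions, and OMP greedily selects the few that best explain the observation. The matrix sizes, sparsity level, and noiseless setup below are invented for illustration.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the k columns of A most
    correlated with the residual, re-fitting by least squares each round."""
    residual = y.astype(float)
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return support, coef

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))      # columns = per-relay "signatures" (toy)
x_true = np.zeros(20)
x_true[[3, 11]] = [2.0, -1.5]          # two relays actually worth selecting
y = A @ x_true                         # noiseless observation for clarity
support, coef = omp(A, y, 2)
```

The appeal for relay selection is the cost profile: each round needs one correlation over all candidates and one small least-squares fit, rather than an exhaustive search over relay subsets.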
Complex Waves on 1D, 2D, and 3D Periodic Arrays of Lossy and Lossless Magnetodielectric Spheres
2010-05-16
magnetic) dipole field. The radius of the spheres is denoted by a, and the relative permittivity and permeability of the spheres are denoted by εr and μr, respectively, where εr and μr are in general complex. We denote the separation between the centers of adjacent spheres by d, take the z axis...εr and μr become reciprocals), both εr and μr should approach the value of +1. However, a little thought and numerical examples
A four-dimensional virtual hand brain-machine interface using active dimension selection
NASA Astrophysics Data System (ADS)
Rouse, Adam G.
2016-06-01
Objective. Brain-machine interfaces (BMI) traditionally rely on a fixed, linear transformation from neural signals to an output state-space. In this study, the assumption that a BMI must control a fixed, orthogonal basis set was challenged and a novel active dimension selection (ADS) decoder was explored. Approach. ADS utilizes a two stage decoder by using neural signals to both (i) select an active dimension being controlled and (ii) control the velocity along the selected dimension. ADS decoding was tested in a monkey using 16 single units from premotor and primary motor cortex to successfully control a virtual hand avatar to move to eight different postures. Main results. Following training with the ADS decoder to control 2, 3, and then 4 dimensions, each emulating a grasp shape of the hand, performance reached 93% correct with a bit rate of 2.4 bits s-1 for eight targets. Selection of eight targets using ADS control was more efficient, as measured by bit rate, than either full four-dimensional control or computer assisted one-dimensional control. Significance. ADS decoding allows a user to quickly and efficiently select different hand postures. This novel decoding scheme represents a potential method to reduce the complexity of high-dimension BMI control of the hand.
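The reported figure of 2.4 bits/s for eight targets at 93% correct is consistent with the standard Wolpaw information-transfer-rate formula at roughly one selection per second; that this is exactly the paper's bit-rate measure is an assumption of this sketch.

```python
import math

def wolpaw_bits_per_selection(n_targets, p_correct):
    """Wolpaw information-transfer rate per selection, a standard BMI metric:
    log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    n, p = n_targets, p_correct
    if p >= 1.0:
        return math.log2(n)
    return math.log2(n) + p * math.log2(p) + (1.0 - p) * math.log2((1.0 - p) / (n - 1))

b = wolpaw_bits_per_selection(8, 0.93)   # about 2.44 bits per selection
```

Multiplying by the selection rate (selections per second) converts this to bits per second, which is how decoders with different target counts and accuracies can be compared on one axis.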
NASA Astrophysics Data System (ADS)
Jo, Hyunho; Sim, Donggyu
2014-06-01
We present a bitstream decoding processor for entropy decoding of variable-length-coding-based multiformat videos. Since most of the computational complexity of entropy decoders comes from bitstream accesses and the table look-up process, the developed bitstream processing unit (BsPU) has several designated instructions to access bitstreams and to minimize branch operations in the table look-up process. In addition, the instruction for bitstream access has the capability to remove emulation prevention bytes (EPBs) of H.264/AVC without initial delay, repeated memory accesses, or an additional buffer. Experimental results show that the proposed method for EPB removal achieves a speed-up of 1.23 times compared to the conventional EPB removal method. In addition, the BsPU achieves speed-ups of 5.6 and 3.5 times in entropy decoding of H.264/AVC and MPEG-4 Visual bitstreams, respectively, compared to an existing processor without designated instructions and a new table mapping algorithm. The BsPU is implemented on a Xilinx Virtex5 LX330 field-programmable gate array. MPEG-4 Visual (ASP, Level 5) and H.264/AVC (Main Profile, Level 4) bitstreams are processed in real time using the developed BsPU at a core clock speed under 250 MHz.
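EPB removal itself is simple to state: inside an H.264/AVC NAL unit, a 0x03 byte that follows two consecutive zero bytes is an escape inserted by the encoder and must be dropped during decoding. A byte-level software sketch (the BsPU performs this in hardware, in-line with bitstream access):

```python
def remove_epb(nal: bytes) -> bytes:
    """Strip H.264/AVC emulation-prevention bytes: a 0x03 that follows two
    consecutive zero bytes was inserted by the encoder and is dropped here.
    In a conforming stream, 0x03 after two zeros is always an escape, since
    any raw 0x000000-0x000003 pattern would itself have been escaped."""
    out = bytearray()
    zeros = 0
    for b in nal:
        if zeros >= 2 and b == 0x03:
            zeros = 0                 # drop the escape byte itself
            continue
        out.append(b)
        zeros = zeros + 1 if b == 0x00 else 0
    return bytes(out)
```

For example, `remove_epb(b"\x00\x00\x03\x01")` yields `b"\x00\x00\x01"`. A software loop like this must rescan every byte, which is exactly the overhead the dedicated instruction avoids.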
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-15
In order to improve the performance of the non-binary low-density parity-check (LDPC) hard-decision decoding algorithm and to reduce the complexity of decoding, a sum-of-the-magnitude hard-decision decoding algorithm based on loop update detection is proposed. This also helps ensure the reliability, stability, and high transmission rate of 5G mobile communication. The algorithm is based on the hard-decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitude is excluded from computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bits corresponding to the erroneous code word are flipped multiple times, searched in order of most likely error probability, to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB, respectively, at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel. Furthermore, the average number of decoding iterations is significantly reduced.
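As background, the plain binary hard-decision bit-flipping idea that such algorithms build on can be sketched on a toy code; the paper's algorithm is non-binary and adds channel-reliability weighting and loop update detection, none of which appears in this sketch.

```python
import numpy as np

# Parity-check matrix of the toy (7,4) Hamming code. The paper's codes are
# non-binary LDPC; this binary example shows only the bare flipping loop.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(H, r, max_iter=20):
    """Gallager-style hard-decision decoding: while the syndrome is nonzero,
    flip the bit participating in the most unsatisfied parity checks."""
    c = r.copy()
    for _ in range(max_iter):
        syndrome = (H @ c) % 2
        if not syndrome.any():
            return c, True              # all checks satisfied
        fails = H.T @ syndrome          # per-bit count of failed checks
        c[int(np.argmax(fails))] ^= 1
    return c, False
```

Weighted variants replace the integer `fails` count with a reliability-weighted score, which is where the soft channel information enters.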
Zhao, Yuanyuan; Fan, Haimei; Li, Wen; Bi, Lihua; Wang, Dejun; Wu, Lixin
2010-09-21
In this paper, we demonstrated a new convenient route for in situ fabrication of well-separated, small-sized WO(3) nanoparticles in silica spheres, through a predeposition of surfactant-encapsulated polyoxotungstates as the tungsten source, followed by a calcination process. In a typical procedure, selected polyoxotungstates with different charges were enwrapped with dioctadecyldimethylammonium cations through electrostatic interaction. Elemental analysis, thermogravimetric analysis, and spectral characterization confirmed the formation of the prepared complexes with the anticipated chemical structure. The complexes were then phase-transferred into aqueous solution that predissolved the surfactant cetyltrimethylammonium bromide, and finally incorporated into silica spheres through a joint sol-gel reaction with tetraethyl orthosilicate in a well-dispersed state, under the protection of an organic layer that shields the polyoxotungstates from the alkaline reaction conditions. Transmission electron microscopic images illustrated the well-dispersed WO(3) nanoparticles in the size range of ca. 2.2 nm in the silica spheres after calcination at 465 °C. The sizes of both the silica spheres and the WO(3) nanoparticles could be adjusted independently over a large range by changing the doping content. Meanwhile, the doped polyoxotungstate complexes acted as the template for the mesoporous structure in the silica spheres after calcination. Along with the increase of doping content and surfactant, the mesopore size changed little (2.0-2.9 nm), but the specific surface areas increased substantially. Importantly, the WO(3)-nanoparticle-doped silica spheres displayed an interesting photovoltaic property, which is favorable for the functionalization of these nanomaterials.
Sheng, Guodong; Yang, Shitong; Sheng, Jiang; Hu, Jun; Tan, Xiaoli; Wang, Xiangke
2011-09-15
Sequestration of Ni(II) on diatomite as a function of time, pH, and temperature was investigated by batch, XPS, and EXAFS techniques. The ionic strength-dependent sorption at pH < 7.0 was consistent with outer-sphere surface complexation, while the ionic strength-independent sorption at pH = 7.0-8.6 was indicative of inner-sphere surface complexation. EXAFS results indicated that the adsorbed Ni(II) consisted of ∼6 O at R(Ni-O) ≈ 2.05 Å. EXAFS analysis from the second shell suggested that three phenomena occurred at the diatomite/water interface: (1) outer-sphere and/or inner-sphere complexation; (2) dissolution of Si which is the rate limiting step during Ni uptake; and (3) extensive growth of surface (co)precipitates. Under acidic conditions, outer-sphere complexation is the main mechanism controlling Ni uptake, which is in good agreement with the macroscopic results. At contact time of 1 h or 1 day or pH = 7.0-8.0, surface coprecipitates occur concurrently with inner-sphere complexes on diatomite surface, whereas at contact time of 1 month or pH = 10.0, surface (co)precipitates dominate Ni uptake. Furthermore, surface loading increases with temperature increasing, and surface coprecipitates become the dominant mechanism at elevated temperature. The results are important to understand Ni interaction with minerals at the solid-water interface, which is helpful to evaluate the mobility of Ni(II) in the natural environment.
Nie, Zhe; Finck, Nicolas; Heberling, Frank; Pruessmann, Tim; Liu, Chunli; Lützenkirchen, Johannes
2017-04-04
Knowledge of the geochemical behavior of selenium and strontium is critical for the safe disposal of radioactive wastes. Goethite, one of the most thermodynamically stable and commonly occurring natural iron oxy-hydroxides, promisingly retains these elements. This work comprehensively studies the adsorption of Se(IV) and Sr(II) on goethite. Starting from electrokinetic measurements, the binary and ternary adsorption systems are investigated and systematically compared via batch experiments, EXAFS analysis, and CD-MUSIC modeling. Se(IV) forms bidentate inner-sphere surface complexes, while Sr(II) is assumed to form outer-sphere complexes at low and intermediate pH and inner-sphere complexes at high pH. Instead of a direct interaction between Se(IV) and Sr(II), our results indicate an electrostatically driven mutual enhancement of adsorption. Adsorption of Sr(II) is promoted by an average factor of 5 within the typical groundwater pH range from 6 to 8 for the concentration range studied here. However, the interaction between Se(IV) and Sr(II) at the surface is two-sided: Se(IV) promotes Sr(II) outer-sphere adsorption but competes for inner-sphere adsorption sites at high pH. The complexity of surfaces is highlighted by the inability of adsorption models to predict isoelectric points without additional constraints.
Decoding a wide range of hand configurations from macaque motor, premotor, and parietal cortices.
Schaffelhofer, Stefan; Agudelo-Toro, Andres; Scherberger, Hansjörg
2015-01-21
Despite recent advances in decoding cortical activity for motor control, the development of hand prosthetics remains a major challenge. To reduce the complexity of such applications, higher cortical areas that also represent motor plans rather than just the individual movements might be advantageous. We investigated the decoding of many grip types using spiking activity from the anterior intraparietal (AIP), ventral premotor (F5), and primary motor (M1) cortices. Two rhesus monkeys were trained to grasp 50 objects in a delayed task while hand kinematics and spiking activity from six implanted electrode arrays (total of 192 electrodes) were recorded. Offline, we determined 20 grip types from the kinematic data and decoded these hand configurations and the grasped objects with a simple Bayesian classifier. When decoding from AIP, F5, and M1 combined, the mean accuracy was 50% (using planning activity) and 62% (during motor execution) for predicting the 50 objects (chance level, 2%) and substantially larger when predicting the 20 grip types (planning, 74%; execution, 86%; chance level, 5%). When decoding from individual arrays, objects and grip types could be predicted well during movement planning from AIP (medial array) and F5 (lateral array), whereas M1 predictions were poor. In contrast, predictions during movement execution were best from M1, whereas F5 performed only slightly worse. These results demonstrate for the first time that a large number of grip types can be decoded from higher cortical areas during movement preparation and execution, which could be relevant for future neuroprosthetic devices that decode motor plans. Copyright © 2015 the authors.
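A "simple Bayesian classifier" over firing-rate vectors can be illustrated with Gaussian naive Bayes on synthetic data; whether the authors' classifier used exactly these Gaussianity and independence assumptions is not specified here, and the data below are invented.

```python
import numpy as np

class GripBayes:
    """Gaussian naive Bayes over firing-rate vectors: a plain stand-in for a
    simple Bayesian classifier of grip types."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu  = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes])
        self.logprior = np.log([np.mean(y == c) for c in self.classes])
        return self
    def predict(self, X):
        # log P(x | class) under per-unit independence, plus the log prior
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2.0 * np.pi * self.var)).sum(axis=-1)
        return self.classes[np.argmax(ll + self.logprior, axis=1)]

rng = np.random.default_rng(0)
# Two synthetic "grip types" with different mean firing-rate patterns
X = np.vstack([rng.normal(5.0, 1.0, (50, 12)),
               rng.normal(8.0, 1.0, (50, 12))])
y = np.repeat([0, 1], 50)
acc = np.mean(GripBayes().fit(X, y).predict(X) == y)
```

With class-conditional means and variances per unit, adding more grip classes only adds rows to `mu` and `var`, which is what makes this family of classifiers attractive for many-class decoding.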
NASA Astrophysics Data System (ADS)
Kashiwabara, Teruhiko; Takahashi, Yoshio; Marcus, Matthew A.; Uruga, Tomoya; Tanida, Hajime; Terada, Yasuko; Usui, Akira
2013-04-01
The tungsten (W) species in marine ferromanganese oxides were investigated by the wavelength-dispersive XAFS method. We found that the W species are in distorted Oh symmetry in natural ferromanganese oxides. The host phase of W is suggested to be Mn oxides by μ-XRF mapping. We also found that the W species form inner-sphere complexes in the hexavalent state and distorted Oh symmetry on synthetic ferrihydrite, goethite, hematite, and δ-MnO2. The molecular-scale information on W indicates that the negatively-charged WO42- ion mainly adsorbs on the negatively-charged Mn oxide phase in natural ferromanganese oxides due to the strong chemical interaction. In addition, preferential adsorption of lighter W isotopes is expected based on the molecular symmetry of the adsorbed species, implying the potential significance of W isotope systematics similar to Mo. Adsorption experiments of W on synthetic ferrihydrite and δ-MnO2 were also conducted. At higher equilibrium concentration, W exhibits behavior similar to Mo on δ-MnO2 due to their formation of inner-sphere complexes. On the other hand, W shows a much larger adsorption on ferrihydrite than Mo. This is due to the formation of inner- and outer-sphere complexes for W and Mo on ferrihydrite, respectively. Considering lower equilibrium concentrations such as in oxic seawater, however, the enrichment of W into natural ferromanganese oxides larger than that of Mo may be controlled by the different stabilities of their inner-sphere complexes on the Mn oxides. These two factors, (i) the stability of inner-sphere complexes on the Mn oxides and (ii) the mode of attachment on ferrihydrite (inner- or outer-sphere complex), are the causes of the different behaviors of W and Mo on the surface of the Fe/Mn (oxyhydr)oxides.
De Angelis, Vittoria; De Martino, Federico; Moerel, Michelle; Santoro, Roberta; Hausfeld, Lars; Formisano, Elia
2017-11-13
Pitch is a perceptual attribute related to the fundamental frequency (or periodicity) of a sound. So far, the cortical processing of pitch has been investigated mostly using synthetic sounds. However, the complex harmonic structure of natural sounds may require different mechanisms for the extraction and analysis of pitch. This study investigated the neural representation of pitch in human auditory cortex using model-based encoding and decoding analyses of high field (7 T) functional magnetic resonance imaging (fMRI) data collected while participants listened to a wide range of real-life sounds. Specifically, we modeled the fMRI responses as a function of the sounds' perceived pitch height and salience (related to the fundamental frequency and the harmonic structure respectively), which we estimated with a computational algorithm of pitch extraction (de Cheveigné and Kawahara, 2002). First, using single-voxel fMRI encoding, we identified a pitch-coding region in the antero-lateral Heschl's gyrus (HG) and adjacent superior temporal gyrus (STG). In these regions, the pitch representation model combining height and salience predicted the fMRI responses comparatively better than other models of acoustic processing and, in the right hemisphere, better than pitch representations based on height/salience alone. Second, we assessed with model-based decoding that multi-voxel response patterns of the identified regions are more informative of perceived pitch than the remainder of the auditory cortex. Further multivariate analyses showed that complementing a multi-resolution spectro-temporal sound representation with pitch produces a small but significant improvement to the decoding of complex sounds from fMRI response patterns. 
In sum, this work extends model-based fMRI encoding and decoding methods - previously employed to examine the representation and processing of acoustic sound features in the human auditory system - to the representation and processing of a relevant perceptual attribute such as pitch. Taken together, the results of our model-based encoding and decoding analyses indicate that the pitch of complex real-life sounds is extracted and processed in lateral HG/STG regions, at locations consistent with those reported in several previous fMRI studies using synthetic sounds. Within these regions, pitch-related sound representations reflect the modulatory combination of the height and the salience of the pitch percept.
On the reduced-complexity of LDPC decoders for beyond 400 Gb/s serial optical transmission
NASA Astrophysics Data System (ADS)
Djordjevic, Ivan B.; Xu, Lei; Wang, Ting
2010-12-01
Two reduced-complexity (RC) LDPC decoders are proposed, which can be used in combination with large-girth LDPC codes to enable beyond-400 Gb/s serial optical transmission. We show that the optimally attenuated RC min-sum algorithm performs only 0.45 dB worse than the conventional sum-product algorithm, while having lower storage-memory requirements and much lower latency. We further evaluate the proposed algorithms for use in beyond-400 Gb/s serial optical transmission in combination with a PolMUX 32-IPQ-based signal constellation and show that low BERs can be achieved at medium optical SNRs, while achieving a net coding gain above 11.4 dB.
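As context for the reduced-complexity decoders discussed, the following is a minimal sketch of the attenuated (normalized) min-sum check-node update, the usual building block of such RC decoders. The attenuation factor 0.8 and the example LLRs are illustrative choices, not values from the paper.

```python
import numpy as np

def check_node_update(llrs, alpha=0.8):
    """Attenuated min-sum: for each edge, the outgoing message is the
    product of the signs of all *other* incoming LLRs times the minimum
    of their magnitudes, scaled by an attenuation factor alpha."""
    llrs = np.asarray(llrs, dtype=float)
    signs = np.sign(llrs)
    total_sign = np.prod(signs)
    mags = np.abs(llrs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]   # two smallest magnitudes
    out = np.empty_like(llrs)
    for i in range(len(llrs)):
        # exclude edge i: use the second minimum if i holds the minimum
        others_min = min2 if i == order[0] else min1
        out[i] = alpha * total_sign * signs[i] * others_min
    return out

msgs = check_node_update([2.0, -0.5, 1.2, -3.0])
print(msgs)
```

Only two magnitude values (the two minima) and the running sign need to be stored per check node, which is the source of the memory and latency savings over the full sum-product update.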
Memory-efficient decoding of LDPC codes
NASA Technical Reports Server (NTRS)
Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon
2005-01-01
We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, our simulations show that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer operates with less than 0.1 dB quantization loss.
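A sketch of non-uniform message quantization of the kind described; the threshold and level values below are arbitrary illustrations (finer resolution near zero, coarser in the tails), not the mutual-information-optimized values from the paper.

```python
import numpy as np

# Hypothetical non-uniform 3-bit (8-level) quantizer for LLR messages.
levels = np.array([-8.0, -3.0, -1.2, -0.3, 0.3, 1.2, 3.0, 8.0])

def quantize(llr):
    """Map each LLR to the nearest of the 8 reconstruction levels."""
    llr = np.atleast_1d(np.asarray(llr, dtype=float))
    idx = np.argmin(np.abs(llr[:, None] - levels[None, :]), axis=1)
    return levels[idx]

q = quantize([0.05, -0.9, 5.0])
print(q)  # each input snaps to its nearest level
```

In a fixed-point decoder, messages would be stored as the 3-bit indices and the level table applied only where real-valued arithmetic is needed.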
Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields.
Yildiz, Izzet B; Mesgarani, Nima; Deneve, Sophie
2016-12-07
A primary goal of auditory neuroscience is to identify the sound features extracted and represented by auditory neurons. Linear encoding models, which describe neural responses as a function of the stimulus, have been primarily used for this purpose. Here, we provide theoretical arguments and experimental evidence in support of an alternative approach, based on decoding the stimulus from the neural response. We used a Bayesian normative approach to predict the responses of neurons detecting relevant auditory features, despite ambiguities and noise. We compared the model predictions to recordings from the primary auditory cortex of ferrets and found that: (1) the decoding filters of auditory neurons resemble the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynamics of responses better than a linear encoding model of similar complexity; and (3) the decoding model accounts for the accuracy with which the stimulus is represented in neural activity, whereas the linear encoding model performs very poorly. Most importantly, our model predicts that neuronal responses are fundamentally shaped by "explaining away," a divisive competition between alternative interpretations of the auditory scene. Neural responses in the auditory cortex are dynamic, nonlinear, and hard to predict. Traditionally, encoding models have been used to describe neural responses as a function of the stimulus. However, in addition to external stimulation, neural activity is strongly modulated by the responses of other neurons in the network. We hypothesized that auditory neurons aim to collectively decode their stimulus. In particular, a stimulus feature that is decoded (or explained away) by one neuron is not explained by another. We demonstrated that this novel Bayesian decoding model is better at capturing the dynamic responses of cortical neurons in ferrets.
Whereas the linear encoding model poorly reflects the selectivity of neurons, the decoding model can account for the strong nonlinearities observed in neural data.
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-01
In order to improve the performance of the hard decision decoding algorithm for non-binary low-density parity-check (LDPC) codes and to reduce decoding complexity, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed, which also supports the reliability, stability and high transmission rates required by 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate reliability, while the sum of the variable nodes’ (VN) magnitudes is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and a loop update detection algorithm is introduced. Bits of the erroneous codeword are flipped multiple times, searched in order of decreasing error probability, until the correct codeword is found. Simulation results show that one of the improved schemes outperforms the weighted symbol flipping (WSF) algorithm by about 2.2 dB and 2.35 dB, respectively, for different hexadecimal numbers at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel. Furthermore, the average number of decoding iterations is significantly reduced. PMID:29342963
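For context on the hard-decision family this algorithm belongs to, here is a minimal binary bit-flipping decoder in the Gallager style. The paper's algorithm is non-binary and adds reliability weighting and loop-update detection, all of which this sketch omits; the (7,4) Hamming parity-check matrix is only an illustrative example.

```python
import numpy as np

# Parity-check matrix of a (7,4) Hamming code (illustrative).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(y, H, max_iter=10):
    """Gallager-style bit flipping: repeatedly flip the bit involved
    in the largest number of unsatisfied parity checks."""
    y = y.copy()
    for _ in range(max_iter):
        syndrome = (H @ y) % 2
        if not syndrome.any():
            return y                        # all checks satisfied
        # count, per bit, how many failing checks it participates in
        fails = syndrome @ H
        y[np.argmax(fails)] ^= 1
    return y

codeword = np.zeros(7, dtype=int)               # all-zero codeword
received = codeword.copy(); received[2] ^= 1    # single bit error
decoded = bit_flip_decode(received, H)
print(decoded)
```

Symbol-flipping decoders such as WSF generalize this idea to non-binary symbols and weight the flipping metric by channel reliability, which is where the soft information in the abstract enters.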
NASA Technical Reports Server (NTRS)
Truong, T. K.; Hsu, I. S.; Eastman, W. L.; Reed, I. S.
1987-01-01
It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation needed to decode a Reed-Solomon (RS) code. A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained, simultaneously and simply, by the Euclidean algorithm only. With this improved technique the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.
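The Euclidean iteration at the heart of this decoder can be sketched on polynomials over GF(2), encoded here as integer bitmasks. A real RS decoder works over GF(2^m) and, per the abstract, initializes the iteration with the erasure locator and Forney syndrome polynomials; this toy keeps only the division/Bézout machinery and the degree-based stopping rule, and the example polynomials and bound t are arbitrary.

```python
def deg(p):            # degree of a GF(2)[x] polynomial (bitmask encoding)
    return p.bit_length() - 1

def pmul(a, b):        # carry-less (GF(2)) polynomial multiplication
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pdivmod(a, b):     # polynomial long division over GF(2)
    q = 0
    while a and deg(a) >= deg(b):
        shift = deg(a) - deg(b)
        q ^= 1 << shift
        a ^= b << shift
    return q, a

def euclid_stop(a, b, t):
    """Extended Euclidean algorithm on (a, b), stopping as soon as the
    remainder has degree < t -- the stopping rule used when solving the
    key equation (remainder ~ evaluator polynomial, t ~ error bound)."""
    r0, r1 = a, b
    s0, s1 = 1, 0      # Bezout coefficients tracking a
    t0, t1 = 0, 1      # Bezout coefficients tracking b
    while deg(r1) >= t:
        q, rem = pdivmod(r0, r1)
        r0, r1 = r1, rem
        s0, s1 = s1, s0 ^ pmul(q, s1)
        t0, t1 = t1, t0 ^ pmul(q, t1)
    return r1, s1, t1   # satisfies s1*a ^ t1*b = r1 over GF(2)

a, b = 0b1011011, 0b10011            # two example polynomials
r, s, t_ = euclid_stop(a, b, 2)
print(bin(r), bin(s), bin(t_))
```

The stopping rule deg(remainder) < t mirrors the key-equation solution: the final remainder plays the role of the errata evaluator polynomial, and the Bézout coefficient of the syndrome polynomial plays the role of the errata locator.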
More than meets the eye: the role of self-identity in decoding complex emotional states.
Stevenson, Michael T; Soto, José A; Adams, Reginald B
2012-10-01
Folk wisdom asserts that "the eyes are the window to the soul," and empirical science corroborates a prominent role for the eyes in the communication of emotion. Herein we examine variation in the ability to "read" the eyes of others as a function of social group membership, employing a widely used emotional state decoding task: "Reading the Mind in Eyes." Prior work with this task has documented impaired emotional state decoding across racial groups, with cross-race performance on par with that previously reported for autism spectrum disorders. The present study extended this work by examining the moderating role of social identity in such impairments. For college students more highly identified with their university, cross-race performance differences were not found for judgments of "same-school" eyes but remained for "rival-school" eyes. These findings suggest that impaired emotional state decoding across groups may be more amenable to remediation than previously realized.
NASA Astrophysics Data System (ADS)
Zhu, Yi-Jun; Liang, Wang-Feng; Wang, Chao; Wang, Wen-Ya
2017-01-01
In this paper, space-collaborative constellations (SCCs) for indoor multiple-input multiple-output (MIMO) visible light communication (VLC) systems are considered. Compared with traditional VLC MIMO techniques, such as repetition coding (RC), spatial modulation (SM) and spatial multiplexing (SMP), SCC achieves the minimum average optical power for a fixed minimum Euclidean distance. We present a unified SCC structure for 2×2 MIMO VLC systems and extend it to larger MIMO VLC systems with more transceivers. Specifically, for 2×2 MIMO VLC, a fast decoding algorithm is developed whose decoding complexity is almost linear in the square root of the SCC cardinality, and expressions for the symbol error rate of SCC are presented. In addition, bit mappings similar to Gray mapping are proposed for SCC. Computer simulations verify the fast decoding algorithm and the performance of SCC, and the results demonstrate that SCC generally outperforms RC, SM and SMP for indoor channels.
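The baseline against which such fast decoders are measured is exhaustive maximum-likelihood detection over the constellation. A minimal sketch follows; the toy 2×2 channel matrix and four-point intensity constellation are invented for illustration and are not the SCC construction itself.

```python
import numpy as np

# Illustrative 2x2 MIMO channel with nonnegative (optical intensity) gains.
H = np.array([[1.0, 0.3],
              [0.2, 0.9]])

# Toy transmit constellation: each row is one 2-D intensity symbol.
constellation = np.array([[0.0, 0.0],
                          [0.0, 1.0],
                          [1.0, 0.0],
                          [1.0, 1.0]])

def ml_detect(y, H, constellation):
    """Exhaustive ML detection: pick the symbol whose noiseless channel
    output is closest (in Euclidean distance) to the observation y."""
    outputs = constellation @ H.T           # noiseless received points
    d2 = np.sum((outputs - y) ** 2, axis=1)
    return int(np.argmin(d2))

tx_index = 2
y = H @ constellation[tx_index] + 0.01 * np.array([1.0, -1.0])  # small noise
detected = ml_detect(y, H, constellation)
print(detected)
```

Exhaustive search costs one distance computation per constellation point; the paper's contribution is a decoder whose cost grows roughly with the square root of that cardinality rather than with the cardinality itself.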
Kernel Temporal Differences for Neural Decoding
Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.
2015-01-01
We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal-difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatio-temporal information at reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it effectively learns the neural-state-to-action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement-learning brain-machine interfaces. PMID:25866504
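A minimal sketch of kernel-based temporal-difference value estimation in the spirit of KTD. This toy version is TD(0) on a deterministic chain with a Gaussian kernel; the eligibility traces, policy improvement, and high-dimensional neural-state inputs of the full KTD(λ) algorithm are omitted, and all parameter values are illustrative.

```python
import math

def kernel(x, c, sigma=0.5):
    """Gaussian (strictly positive definite) kernel."""
    return math.exp(-(x - c) ** 2 / (2 * sigma ** 2))

centers, weights = [], []          # growing kernel expansion of V

def value(x):
    return sum(w * kernel(x, c) for c, w in zip(centers, weights))

def td_update(x, r, x_next, terminal, alpha=0.2, gamma=0.9):
    """TD(0) step: add a kernel centered at x, weighted by the TD error."""
    target = r if terminal else r + gamma * value(x_next)
    delta = target - value(x)
    centers.append(x)
    weights.append(alpha * delta)

# Deterministic chain 0 -> 1 -> 2 -> 3 -> 4, reward 1 on reaching state 4.
for episode in range(200):
    for s in range(4):
        r = 1.0 if s + 1 == 4 else 0.0
        td_update(float(s), r, float(s + 1), terminal=(s + 1 == 4))

print(value(3.0), value(0.0))   # states near the reward are valued higher
```

The expansion grows by one kernel per observed state, which is why practical KTD implementations rely on sparsification to keep the computational cost compatible with real-time decoding.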
Toward enhancing the distributed video coder under a multiview video codec framework
NASA Astrophysics Data System (ADS)
Lee, Shih-Chieh; Chen, Jiann-Jone; Tsai, Yao-Hong; Chen, Chin-Hua
2016-11-01
The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be utilized to shift the encoder complexity to the decoder under the MVV coding framework, denoted as multiview distributed video coding (MDVC). We proposed to exploit both inter- and intraview video correlations to enhance side information (SI) and improve the MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) was proposed to yield a high-quality SI frame for better DVC reconstructed images. (2) The block transform coefficient properties, i.e., DCs and ACs, were exploited to design the priority rate control for the turbo code, such that the DVC decoding can be carried out with the fewest parity bits. In comparison, the proposed COMPETE method demonstrated lower time complexity while presenting better reconstructed video quality. Simulations show that the proposed COMPETE can reduce the time complexity of MVME by a factor of 1.29 to 2.56 compared to previous hybrid MVME methods, while the peak signal-to-noise ratios (PSNRs) of decoded video can be improved by 0.2 to 3.5 dB compared to H.264/AVC intracoding.
The trellis complexity of convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Lin, W.
1995-01-01
It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.
NASA Technical Reports Server (NTRS)
Lin, Shu (Principal Investigator); Uehara, Gregory T.; Nakamura, Eric; Chu, Cecilia W. P.
1996-01-01
The (64, 40, 8) subcode of the third-order Reed-Muller (RM) code is proposed for high-speed satellite communications. The RM subcode can be used either alone or as the inner code of a concatenated coding system, with the NASA-standard (255, 223, 33) Reed-Solomon (RS) code as the outer code, to achieve high performance (or low bit-error rate) with reduced decoding complexity. It can also be used as a component code in a multilevel bandwidth-efficient coded modulation system to achieve reliable bandwidth-efficient data transmission. The progress made toward implementing a decoder system based upon this code is summarized. The development of the prototype sub-trellis integrated circuit, with particular focus on the design methodology, is addressed.
Outer-sphere Pb(II) adsorbed at specific surface sites on single crystal α-alumina
Bargar, John R.; Towle, Steven N.; Brown, Gordon E.; Parks, George A.
1996-01-01
Solvated Pb(II) ions were found to adsorb as structurally well-defined outer-sphere complexes at specific sites on the α-Al2O3 (0001) single crystal surface, as determined by grazing-incidence X-ray absorption fine structure (GI-XAFS) measurements. The XAFS results suggest that the distance between Pb(II) adions and the alumina surface is approximately 4.2 Å. In contrast, Pb(II) adsorbs as more strongly bound inner-sphere complexes on α-Al2O3 (102). The difference in reactivities of the two alumina surfaces has implications for modeling surface complexation reactions of contaminants in natural environments, catalysis, and compositional sector zoning of oxide crystals.
Coset Codes Viewed as Terminated Convolutional Codes
NASA Technical Reports Server (NTRS)
Fossorier, Marc P. C.; Lin, Shu
1996-01-01
In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.
Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex
Logiaco, Laureline; Quilodran, René; Procyk, Emmanuel; Arleo, Angelo
2015-01-01
The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70–200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys’ behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. Altogether, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators. PMID:26266537
Gentaz, Edouard; Sprenger-Charolles, Liliane; Theurel, Anne; Colé, Pascale
2013-01-01
Background The literature suggests that a complex relationship exists between the three main skills involved in reading comprehension (decoding, listening comprehension and vocabulary) and that this relationship depends on at least three other factors: orthographic transparency, children’s grade level and socioeconomic status (SES). This study investigated the relative contribution of the predictors of reading comprehension in a longitudinal design (from beginning to end of the first grade) in 394 French children from low-SES families. Methodology/Principal findings Reading comprehension was measured at the end of the first grade using two tasks: one with short utterances and one with a medium-length narrative text. Accuracy in listening comprehension and vocabulary, and fluency of decoding skills, were measured at the beginning and end of the first grade. Accuracy in decoding skills was measured only at the beginning. Regression analyses showed that listening comprehension and decoding skills (accuracy and fluency) always significantly predicted reading comprehension. The contribution of decoding was greater when reading comprehension was assessed via the task using short utterances. Between the two assessments, the contribution of vocabulary, and especially of decoding skills, increased, while that of listening comprehension remained unchanged. Conclusion/Significance These results challenge the ‘simple view of reading’. They also have educational implications, since they show that it is possible to assess decoding and reading comprehension very early on in an orthography (French) that is less deep than English, even in low-SES children. These assessments, together with those of listening comprehension and vocabulary, may allow children at risk for reading difficulty to be identified early, and early remedial training, which is the most effective, to be set up for them. PMID:24250802
Hammer, Jiri; Fischer, Jörg; Ruescher, Johanna; Schulze-Bonhage, Andreas; Aertsen, Ad; Ball, Tonio
2013-01-01
In neuronal population signals, including the electroencephalogram (EEG) and electrocorticogram (ECoG), the low-frequency component (LFC) is particularly informative about motor behavior and can be used for decoding movement parameters for brain-machine interface (BMI) applications. An idea previously expressed, but as of yet not quantitatively tested, is that it is the LFC phase that is the main source of decodable information. To test this issue, we analyzed human ECoG recorded during a game-like, one-dimensional, continuous motor task with a novel decoding method suitable for unfolding magnitude and phase explicitly into a complex-valued, time-frequency signal representation, enabling quantification of the decodable information within the temporal, spatial and frequency domains and allowing disambiguation of the phase contribution from that of the spectral magnitude. The decoding accuracy based only on phase information was substantially (at least 2 fold) and significantly higher than that based only on magnitudes for position, velocity and acceleration. The frequency profile of movement-related information in the ECoG data matched well with the frequency profile expected when assuming a close time-domain correlate of movement velocity in the ECoG, e.g., a (noisy) “copy” of hand velocity. No such match was observed with the frequency profiles expected when assuming a copy of either hand position or acceleration. There was also no indication of additional magnitude-based mechanisms encoding movement information in the LFC range. Thus, our study contributes to elucidating the nature of the informative LFC of motor cortical population activity and may hence contribute to improve decoding strategies and BMI performance. PMID:24198757
Akram, Sahar; Presacco, Alessandro; Simon, Jonathan Z.; Shamma, Shihab A.; Babadi, Behtash
2015-01-01
The underlying mechanism of how the human brain solves the cocktail party problem is largely unknown. Recent neuroimaging studies, however, suggest salient temporal correlations between the auditory neural response and the attended auditory object. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects, we propose a decoding approach for tracking the attentional state while subjects are selectively listening to one of the two speech streams embedded in a competing-speaker environment. We develop a biophysically-inspired state-space model to account for the modulation of the neural response with respect to the attentional state of the listener. The constructed decoder is based on a maximum a posteriori (MAP) estimate of the state parameters via the Expectation Maximization (EM) algorithm. Using only the envelope of the two speech streams as covariates, the proposed decoder enables us to track the attentional state of the listener with a temporal resolution of the order of seconds, together with statistical confidence intervals. We evaluate the performance of the proposed model using numerical simulations and experimentally measured evoked MEG responses from the human brain. Our analysis reveals considerable performance gains provided by the state-space model in terms of temporal resolution, computational complexity and decoding accuracy. PMID:26436490
Surveying multidisciplinary aspects in real-time distributed coding for Wireless Sensor Networks.
Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio
2015-01-27
Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, "real-time" coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide a taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the interplay of the different theories.
Music models aberrant rule decoding and reward valuation in dementia
Clark, Camilla N; Golden, Hannah L; McCallion, Oliver; Nicholas, Jennifer M; Cohen, Miriam H; Slattery, Catherine F; Paterson, Ross W; Fletcher, Phillip D; Mummery, Catherine J; Rohrer, Jonathan D; Crutch, Sebastian J; Warren, Jason D
2018-01-01
Aberrant rule- and reward-based processes underpin abnormalities of socio-emotional behaviour in major dementias. However, these processes remain poorly characterized. Here we used music to probe rule decoding and reward valuation in patients with frontotemporal dementia (FTD) syndromes and Alzheimer’s disease (AD) relative to healthy age-matched individuals. We created short melodies that were either harmonically resolved (‘finished’) or unresolved (‘unfinished’); the task was to classify each melody as finished or unfinished (rule processing) and to rate its subjective pleasantness (reward valuation). Results were adjusted for elementary pitch and executive processing; neuroanatomical correlates were assessed using voxel-based morphometry. Relative to healthy older controls, patients with behavioural variant FTD showed impairments of both musical rule decoding and reward valuation; patients with semantic dementia showed impaired reward valuation but intact rule decoding; patients with AD showed impaired rule decoding but intact reward valuation; and patients with progressive non-fluent aphasia performed comparably to healthy controls. Grey matter associations with task performance were identified in anterior temporal, medial and lateral orbitofrontal cortices, previously implicated in computing diverse biological and non-biological rules and rewards. The processing of musical rules and reward distils cognitive and neuroanatomical mechanisms relevant to complex socio-emotional dysfunction in major dementias. PMID:29186630
Coarse-Scale Biases for Spirals and Orientation in Human Visual Cortex
Heeger, David J.
2013-01-01
Multivariate decoding analyses are widely applied to functional magnetic resonance imaging (fMRI) data, but there is controversy over their interpretation. Orientation decoding in primary visual cortex (V1) reflects coarse-scale biases, including an over-representation of radial orientations. But fMRI responses to clockwise and counter-clockwise spirals can also be decoded. Because these stimuli are matched for radial orientation, while differing in local orientation, it has been argued that fine-scale columnar selectivity for orientation contributes to orientation decoding. We measured fMRI responses in human V1 to both oriented gratings and spirals. Responses to oriented gratings exhibited a complex topography, including a radial bias that was most pronounced in the peripheral representation, and a near-vertical bias that was most pronounced near the foveal representation. Responses to clockwise and counter-clockwise spirals also exhibited coarse-scale organization, at the scale of entire visual quadrants. The preference of each voxel for clockwise or counter-clockwise spirals was predicted from the preferences of that voxel for orientation and spatial position (i.e., within the retinotopic map). Our results demonstrate a bias for local stimulus orientation that has a coarse spatial scale, is robust across stimulus classes (spirals and gratings), and suffices to explain decoding from fMRI responses in V1. PMID:24336733
Measurement of the Casimir Force between Two Spheres
NASA Astrophysics Data System (ADS)
Garrett, Joseph L.; Somers, David A. T.; Munday, Jeremy N.
2018-01-01
Complex interaction geometries offer a unique opportunity to modify the strength and sign of the Casimir force. However, measurements have traditionally been limited to sphere-plate or plate-plate configurations. Prior attempts to extend measurements to different geometries relied on either nanofabrication techniques that are limited to only a few materials or slight modifications of the sphere-plate geometry due to alignment difficulties of more intricate configurations. Here, we overcome this obstacle to present measurements of the Casimir force between two gold spheres using an atomic force microscope. Force measurements are alternated with topographical scans in the x-y plane to maintain alignment of the two spheres to within approximately 400 nm (~1% of the sphere radii). Our experimental results are consistent with Lifshitz's theory using the proximity force approximation (PFA), and corrections to the PFA are bounded using nine sphere-sphere and three sphere-plate measurements with spheres of varying radii.
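The proximity force approximation invoked above has a compact standard form for two spheres; as a reference sketch (the textbook expression, not taken from this abstract):

```latex
% Proximity force approximation (PFA) for two spheres of radii $R_1$, $R_2$
% at closest surface separation $d$: the sphere-sphere force follows from the
% parallel-plate Casimir energy per unit area $E_{pp}(d)$ via an effective radius.
F_{ss}(d) \;\approx\; 2\pi R_{\mathrm{eff}}\, E_{pp}(d),
\qquad
R_{\mathrm{eff}} \;=\; \frac{R_1 R_2}{R_1 + R_2}.
```

This is also why sphere-sphere corrections can be bounded against sphere-plate data: the plate limit is $R_2 \to \infty$, where $R_{\mathrm{eff}} \to R_1$.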
Leveled Reading and Engagement with Complex Texts
ERIC Educational Resources Information Center
Hastings, Kathryn
2016-01-01
The benefits of engaging with age-appropriate reading materials in classroom settings are numerous. For example, students' comprehension is developed as they acquire new vocabulary and concepts. The Common Core requires all students have daily opportunities to engage with "complex text" regardless of students' decoding levels. However,…
Accelerating a MPEG-4 video decoder through custom software/hardware co-design
NASA Astrophysics Data System (ADS)
Díaz, Jorge L.; Barreto, Dacil; García, Luz; Marrero, Gustavo; Carballo, Pedro P.; Núñez, Antonio
2007-05-01
In this paper we present a novel methodology to accelerate an MPEG-4 video decoder using software/hardware co-design for wireless DAB/DMB networks. Software support includes the services provided by the embedded kernel μC/OS-II, and the application tasks mapped to software. Hardware support includes several custom co-processors and a communication architecture with bridges to the main system bus and with a dual port SRAM. Synchronization among tasks is achieved at two levels, by a hardware protocol and by kernel level scheduling services. Our reference application is an MPEG-4 video decoder composed of several software functions and written using a special C++ library named CASSE. Profiling and design-space exploration techniques were previously applied to the Advanced Simple Profile (ASP) MPEG-4 decoder to determine the best HW/SW partition developed here. This research is part of the ARTEMI project and its main goal is the establishment of methodologies for the design of real-time complex digital systems using Programmable Logic Devices with embedded microprocessors as target technology and the design of multimedia systems for broadcasting networks as reference application.
A high throughput architecture for a low complexity soft-output demapping algorithm
NASA Astrophysics Data System (ADS)
Ali, I.; Wasenmüller, U.; Wehn, N.
2015-11-01
Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and therefore they are a part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high order modulation systems, and therefore low complexity demapping algorithms are indispensable in low power receivers. In the presence of multiple wireless communication standards where each standard defines multiple modulation schemes, there is a need to have an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is to achieve a very high throughput in double iterative systems, for instance, MIMO and Code-Aided Synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high throughput, flexible, and area efficient architecture. We describe architectures to execute the investigated algorithms. We implement these architectures on an FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture based on the best low-complexity algorithm identified, delivering a high throughput of 166 Msymbols/second for Gray mapped 16-QAM modulation on Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs and 2 DSP48Es.
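As an illustration of the low-complexity soft-output demapping family the article evaluates, a max-log LLR computation for Gray-mapped 16-QAM can be sketched as follows. The Gray labeling and the LLR sign convention (positive favors bit 0) are assumptions for this sketch, not the paper's exact choices:

```python
import numpy as np

AMPS = (-3.0, -1.0, 1.0, 3.0)                         # per-axis PAM amplitudes
GRAY = {-3.0: (0, 0), -1.0: (0, 1), 1.0: (1, 1), 3.0: (1, 0)}  # assumed labeling
SCALE = 1.0 / np.sqrt(10.0)                           # unit average symbol energy

def llrs_16qam(symbol, noise_var):
    """Return the four LLRs of a received complex 16-QAM symbol using the
    max-log approximation: LLR ~ (d1 - d0) / noise_var, where d0/d1 are the
    squared distances to the nearest constellation point carrying bit 0/1."""
    llrs = []
    for comp in (symbol.real, symbol.imag):           # I and Q decouple for Gray maps
        for bit in (0, 1):
            d0 = min((comp - a * SCALE) ** 2 for a in AMPS if GRAY[a][bit] == 0)
            d1 = min((comp - a * SCALE) ** 2 for a in AMPS if GRAY[a][bit] == 1)
            llrs.append((d1 - d0) / noise_var)
    return llrs
```

The I/Q decoupling is what makes the method low-complexity: each axis needs only a handful of comparisons rather than a search over all 16 points.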
Facet-Dependent Cr(VI) Adsorption of Hematite Nanocrystals.
Huang, Xiaopeng; Hou, Xiaojing; Song, Fahui; Zhao, Jincai; Zhang, Lizhi
2016-02-16
In this study, the adsorption process of Cr(VI) on the hematite facets was systematically investigated with synchrotron-based Cr K-edge extended X-ray absorption fine structure (EXAFS) spectroscopy, in situ attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectroscopy, density-functional theory calculation, and surface complexation models. Structural model fitting of EXAFS spectroscopy suggested that the Cr-Fe interatomic distances were 3.61 Å for the chromate-coordinated hematite nanoplates with exposed {001} facets, and 3.60 and 3.30 Å for the chromate-coordinated hematite nanorods with exposed {001} and {110} facets, which were characteristic of inner-sphere complexation. In situ ATR-FTIR spectroscopy analysis confirmed the presence of two inner-sphere surface complexes with C3ν and C2ν symmetry, while the C3ν and C2ν species were assigned to monodentate and bidentate inner-sphere surface complexes with average Cr-Fe interatomic distances of 3.60 and 3.30 Å, respectively. On the basis of these experimental and theoretical results, we concluded that HCrO4(-), the dominant Cr(VI) species, was adsorbed on the {001} and {110} facets in inner-sphere monodentate mononuclear and bidentate binuclear configurations, respectively. Moreover, the Cr(VI) adsorption performance of hematite facets was strongly dependent on the chromate complexes formed on the hematite facets.
Complex Chern-Simons from M5-branes on the squashed three-sphere
NASA Astrophysics Data System (ADS)
Córdova, Clay; Jafferis, Daniel L.
2017-11-01
We derive an equivalence between the (2,0) superconformal M5-brane field theory dimensionally reduced on a squashed three-sphere, and Chern-Simons theory with complex gauge group. In the reduction, the massless fermions obtain an action which is second order in derivatives and are reinterpreted as ghosts for gauge fixing the emergent non-compact gauge symmetry. A squashing parameter in the geometry controls the imaginary part of the complex Chern-Simons level.
Social Justice and Education in the Public and Private Spheres
ERIC Educational Resources Information Center
Power, Sally; Taylor, Chris
2013-01-01
This paper explores the complex relationship between social justice and education in the public and private spheres. The politics of education is often presented as a battle between left and right, the state and the market. In this representation, the public and the private spheres are neatly aligned on either side of the line of battle, and…
Spatial domain entertainment audio decompression/compression
NASA Astrophysics Data System (ADS)
Chan, Y. K.; Tam, Ka Him K.
2014-02-01
The ARM7 NEON processor with 128bit SIMD hardware accelerator requires a peak performance of 13.99 Mega Cycles per Second for MP3 stereo entertainment quality decoding. For similar compression bit rates, OGG and AAC are preferred over MP3. The Patent Cooperation Treaty Application dated 28/August/2012 describes an audio decompression scheme producing a sequence of interleaving "min to Max" and "Max to min" rising and falling segments. The number of interior audio samples bound by "min to Max" or "Max to min" can be {0|1|…|N} audio samples. The magnitudes of samples, including the bounding min and Max, are distributed as normalized constants within the [0, 1] range of the bounding magnitudes. The decompressed audio is then a "sequence of static segments" on a frame by frame basis. Some of these frames need to be post processed to elevate high frequencies. The post processing is compression efficiency neutral, and the additional decoding complexity is only a small fraction of the overall decoding complexity, without the need for extra hardware. Compression efficiency can be expected to be very high, as the source audio has been decimated and converted to a set of data with only "segment length and corresponding segment magnitude" attributes. The PCT describes how these two attributes are efficiently coded by the PCT innovative coding scheme. The PCT decoding efficiency is very high and decoding latency is essentially zero. Both hardware requirement and run time are at least an order of magnitude better than MP3 variants. A side benefit is ultra-low power consumption on mobile devices. The acid test of whether such a simplistic waveform representation can indeed reproduce authentic decompressed quality is benchmarked against OGG (aoTuv Beta 6.03) using three pairs of stereo audio frames and one broadcast-like voice audio frame, with each frame consisting of 2,028 samples at a 44.1 kHz sampling frequency.
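The decoder's core step, rebuilding samples from the stored "segment length and corresponding segment magnitude" attributes, can be sketched as follows. Interior samples are placed by linear interpolation between the bounding magnitudes here; the actual normalized-constant distribution (and the sharing of boundary samples between adjacent segments) is defined in the PCT application, not reproduced by this sketch:

```python
def reconstruct(segments):
    """Rebuild a waveform from segment attributes.

    segments: list of (length, start_mag, end_mag) tuples describing
    alternating "min to Max" and "Max to min" segments. Each segment's
    samples, including its bounding min and Max, are interpolated linearly
    between start_mag and end_mag (an assumed placement). Returns a flat
    list of decoded samples."""
    out = []
    for n, lo, hi in segments:
        for k in range(n):
            # max(n - 1, 1) avoids division by zero for single-sample segments
            out.append(lo + (hi - lo) * k / max(n - 1, 1))
    return out
```

Because the loop is a single multiply-add per sample with no transform, the near-zero decoding latency claimed above is plausible for this representation.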
Automatic detection and decoding of honey bee waggle dances.
Wario, Fernando; Wild, Benjamin; Rojas, Raúl; Landgraf, Tim
2017-01-01
The waggle dance is one of the most popular examples of animal communication. Forager bees direct their nestmates to profitable resources via a complex motor display. Essentially, the dance encodes the polar coordinates to the resource in the field. Unemployed foragers follow the dancer's movements and then search for the advertised spots in the field. Throughout the last decades, biologists have employed different techniques to measure key characteristics of the waggle dance and decode the information it conveys. Early techniques involved the use of protractors and stopwatches to measure the dance orientation and duration directly from the observation hive. Recent approaches employ digital video recordings and manual measurements on screen. However, manual approaches are very time-consuming. Most studies, therefore, regard only small numbers of animals in short periods of time. We have developed a system capable of automatically detecting, decoding and mapping communication dances in real-time. In this paper, we describe our recording setup, the image processing steps performed for dance detection and decoding and an algorithm to map dances to the field. The proposed system performs with a detection accuracy of 90.07%. The decoded waggle orientation has an average error of -2.92° (± 7.37°), well within the range of human error. To evaluate and exemplify the system's performance, a group of bees was trained to an artificial feeder, and all dances in the colony were automatically detected, decoded and mapped. The system presented here is the first of this kind made publicly available, including source code and hardware specifications. We hope this will foster quantitative analyses of the honey bee waggle dance.
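The dance-to-field mapping described above (the dance encodes polar coordinates to the resource) can be sketched with the classic convention: the waggle angle relative to vertical gives the bearing relative to the sun's azimuth, and waggle duration gives distance. The duration-to-distance calibration constant below is a hypothetical colony-specific value, not one reported in the paper:

```python
import math

METERS_PER_SECOND_OF_WAGGLE = 1300.0  # hypothetical colony calibration

def dance_to_field(waggle_angle_deg, waggle_duration_s, sun_azimuth_deg,
                   hive_x=0.0, hive_y=0.0):
    """Return (x, y) field coordinates of the advertised resource.
    Bearing is measured clockwise from north; the dance angle relative to
    vertical is added to the sun's current azimuth."""
    bearing = math.radians(sun_azimuth_deg + waggle_angle_deg)
    dist = METERS_PER_SECOND_OF_WAGGLE * waggle_duration_s
    return (hive_x + dist * math.sin(bearing),
            hive_y + dist * math.cos(bearing))
```

Note that mapping decoded dances to the field requires the sun's azimuth at the time of the dance, which is why the recording timestamp matters in an automated system.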
Delgutte, Bertrand
2015-01-01
At lower levels of sensory processing, the representation of a stimulus feature in the response of a neural population can vary in complex ways across different stimulus intensities, potentially changing the amount of feature-relevant information in the response. How higher-level neural circuits could implement feature decoding computations that compensate for these intensity-dependent variations remains unclear. Here we focused on neurons in the inferior colliculus (IC) of unanesthetized rabbits, whose firing rates are sensitive to both the azimuthal position of a sound source and its sound level. We found that the azimuth tuning curves of an IC neuron at different sound levels tend to be linear transformations of each other. These transformations could either increase or decrease the mutual information between source azimuth and spike count with increasing level for individual neurons, yet population azimuthal information remained constant across the absolute sound levels tested (35, 50, and 65 dB SPL), as inferred from the performance of a maximum-likelihood neural population decoder. We harnessed evidence of level-dependent linear transformations to reduce the number of free parameters in the creation of an accurate cross-level population decoder of azimuth. Interestingly, this decoder predicts monotonic azimuth tuning curves, broadly sensitive to contralateral azimuths, in neurons at higher levels in the auditory pathway. PMID:26490292
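A minimal version of the maximum-likelihood population decoder described above can be sketched under an independent-Poisson assumption; the tuning curves used in the example are synthetic, not data from the study:

```python
import numpy as np

def ml_decode(counts, tuning):
    """Maximum-likelihood decoding of azimuth from population spike counts,
    assuming independent Poisson firing.

    counts: (n_neurons,) observed spike counts for one trial.
    tuning: (n_neurons, n_azimuths) mean spike counts per candidate azimuth.
    Returns the index of the azimuth maximizing the Poisson log-likelihood."""
    rates = np.clip(tuning, 1e-9, None)            # guard against log(0)
    # Poisson log-likelihood per azimuth; the log-factorial term of the
    # counts is constant across azimuths and can be dropped.
    loglik = counts @ np.log(rates) - rates.sum(axis=0)
    return int(np.argmax(loglik))
```

A cross-level decoder in the spirit of the study would use one `tuning` matrix whose columns are tied across sound levels by the level-dependent linear transformations, reducing the number of free parameters.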
A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications
Revathy, M.; Saravanan, R.
2015-01-01
Low-density parity-check (LDPC) codes have been adopted in the latest digital video broadcasting, broadband wireless access (WiMax), and fourth-generation wireless standards. In this paper, we propose a highly efficient low-density parity-check (LDPC) decoder architecture for low power applications. This study also considers the design and analysis of the check node and variable node units and the Euclidean orthogonal generator in the LDPC decoder architecture. The Euclidean orthogonal generator is used to reduce the error rate of the proposed LDPC architecture and can be incorporated between the check and variable node architecture. The proposed decoder design is synthesized on the Xilinx 9.2i platform and simulated using Modelsim, targeted to 45 nm devices. The synthesis report proves that the proposed architecture greatly reduces power consumption and hardware utilization compared with different conventional architectures. PMID:26065017
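For context, the check-node update of the min-sum algorithm, the simplified message-passing family that low-complexity LDPC decoders such as this one build on, can be sketched as follows (a generic textbook form, not the authors' exact variant):

```python
def check_node_update(msgs):
    """Min-sum check-node update.

    msgs: list of incoming variable-to-check LLR messages on the edges of
    one check node. Returns one outgoing check-to-variable message per edge.
    Each output is extrinsic: it excludes the edge's own incoming message,
    taking the product of the signs and the minimum of the magnitudes of
    all the other incoming messages."""
    out = []
    for i in range(len(msgs)):
        others = msgs[:i] + msgs[i + 1:]
        sign = 1
        for m in others:
            if m < 0:
                sign = -sign
        out.append(sign * min(abs(m) for m in others))
    return out
```

Replacing the sum-product's hyperbolic-tangent computation with a sign/minimum pass is exactly what cuts the check node complexity that the abstract highlights.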
Callan, Daniel E.; Terzibas, Cengiz; Cassel, Daniel B.; Sato, Masa-aki; Parasuraman, Raja
2016-01-01
The goal of this research is to test the potential for neuroadaptive automation to improve response speed to a hazardous event by using a brain-computer interface (BCI) to decode perceptual-motor intention. Seven participants underwent four experimental sessions while brain activity was measured with magnetoencephalography. The first three sessions used a simple constrained task in which the participant was to pull back on the control stick to recover from a perturbation in attitude in one condition and to passively observe the perturbation in the other condition. The fourth session consisted of having to recover from a perturbation in attitude while piloting the plane through the Grand Canyon, constantly maneuvering to track over the river below. Independent component analysis was used on the first two sessions to extract artifacts and find an event-related component associated with the onset of the perturbation. These two sessions were used to train a decoder to classify trials in which the participant recovered from the perturbation (motor intention) vs. just passively viewing the perturbation. The BCI-decoder was tested on the third session of the same simple task and was found to significantly distinguish motor intention trials from passive viewing trials (mean = 69.8%). The same BCI-decoder was then used to test the fourth session on the complex task. The BCI-decoder significantly classified perturbation from no-perturbation trials (73.3%) with a significant time savings of 72.3 ms (response time reduced from 425.0 ms to 352.7 ms with the BCI-decoder). The BCI-decoder model of the best subject was shown to generalize for both performance and time savings to the other subjects.
The results of our off-line open loop simulation demonstrate that BCI based neuroadaptive automation has the potential to decode motor intention faster than manual control in response to a hazardous perturbation in flight attitude while ignoring ongoing motor and visual induced activity related to piloting the airplane. PMID:27199710
Face processing in chronic alcoholism: a specific deficit for emotional features.
Maurage, P; Campanella, S; Philippot, P; Martin, S; de Timary, P
2008-04-01
It is well established that chronic alcoholism is associated with a deficit in the decoding of emotional facial expression (EFE). Nevertheless, it is still unclear whether this deficit is specifically for emotions or due to a more general impairment in visual or facial processing. This study was designed to clarify this issue using multiple control tasks and the subtraction method. Eighteen patients suffering from chronic alcoholism and 18 matched healthy control subjects were asked to perform several tasks evaluating (1) Basic visuo-spatial and facial identity processing; (2) Simple reaction times; (3) Complex facial features identification (namely age, emotion, gender, and race). Accuracy and reaction times were recorded. Alcoholic patients had a preserved performance for visuo-spatial and facial identity processing, but their performance was impaired for visuo-motor abilities and for the detection of complex facial aspects. More importantly, the subtraction method showed that alcoholism is associated with a specific EFE decoding deficit, still present when visuo-motor slowing down is controlled for. These results offer a post hoc confirmation of earlier data showing an EFE decoding deficit in alcoholism by strongly suggesting a specificity of this deficit for emotions. This may have implications for clinical situations, where emotional impairments are frequently observed among alcoholic subjects.
Mapping of MPEG-4 decoding on a flexible architecture platform
NASA Astrophysics Data System (ADS)
van der Tol, Erik B.; Jaspers, Egbert G.
2001-12-01
In the field of consumer electronics, the advent of new features such as Internet, games, video conferencing, and mobile communication has triggered the convergence of television and computer technologies. This requires a generic media-processing platform that enables simultaneous execution of very diverse tasks such as high-throughput stream-oriented data processing and highly data-dependent irregular processing with complex control flows. As a representative application, this paper presents the mapping of a Main Visual profile MPEG-4 decoder for High-Definition (HD) video onto a flexible architecture platform. A stepwise approach is taken, going from the decoder application toward an implementation proposal. First, the application is decomposed into separate tasks with self-contained functionality, clear interfaces, and distinct characteristics. Next, a hardware-software partitioning is derived by analyzing the characteristics of each task, such as the amount of inherent parallelism, the throughput requirements, the complexity of control processing, and the reuse potential over different applications and different systems. Finally, a feasible implementation is proposed that includes, among others, a very-long-instruction-word (VLIW) media processor, one or more RISC processors, and some dedicated processors. The mapping study of the MPEG-4 decoder proves the flexibility and extensibility of the media-processing platform. This platform enables an effective HW/SW co-design yielding a high performance density.
Burkle, Frederick M
2018-02-01
Triage management remains a major challenge, especially in resource-poor settings such as war, complex humanitarian emergencies, and public health emergencies in developing countries. In triage it is often the disruption of physiology, not anatomy, that is critical, supporting triage methodology based on clinician-assessed physiological parameters as well as anatomy and mechanism of injury. In recent times, too many clinicians from developed countries have deployed to humanitarian emergencies without the physical exam skills needed to assess patients without the benefit of remotely fed electronic monitoring, laboratory, and imaging studies. In triage, inclusion of the once-widely accepted and collectively taught "art of decoding vital signs" with attention to their character and meaning may provide clues to a patient's physiological state, improving triage sensitivity. Attention to decoding vital signs is not a triage methodology of its own or a scoring system, but rather a skill set that supports existing triage methodologies. With unique triage management challenges being raised by an ever-changing variety of humanitarian crises, these once useful skill sets need to be revisited, understood, taught, and utilized by triage planners, triage officers, and teams as a necessary adjunct to physiologically based triage decision-making. (Disaster Med Public Health Preparedness. 2018;12:76-85).
Gallivan, Jason P.; Johnsrude, Ingrid S.; Randall Flanagan, J.
2016-01-01
Object-manipulation tasks (e.g., drinking from a cup) typically involve sequencing together a series of distinct motor acts (e.g., reaching toward, grasping, lifting, and transporting the cup) in order to accomplish some overarching goal (e.g., quenching thirst). Although several studies in humans have investigated the neural mechanisms supporting the planning of visually guided movements directed toward objects (such as reaching or pointing), only a handful have examined how manipulatory sequences of actions—those that occur after an object has been grasped—are planned and represented in the brain. Here, using event-related functional MRI and pattern decoding methods, we investigated the neural basis of real-object manipulation using a delayed-movement task in which participants first prepared and then executed different object-directed action sequences that varied either in their complexity or final spatial goals. Consistent with previous reports of preparatory brain activity in non-human primates, we found that activity patterns in several frontoparietal areas reliably predicted entire action sequences in advance of movement. Notably, we found that similar sequence-related information could also be decoded from pre-movement signals in object- and body-selective occipitotemporal cortex (OTC). These findings suggest that both frontoparietal and occipitotemporal circuits are engaged in transforming object-related information into complex, goal-directed movements. PMID:25576538
NASA Astrophysics Data System (ADS)
Miller, Robert E. (Robin)
2005-04-01
In acoustic spaces that are played as extensions of musical instruments, tonality is a major contributor to the experience of reality. Tonality is described as a process of integration in our consciousness, over the reverberation time of the room, of many sonic arrivals in three dimensions, each directionally coded in a learned response by the listener's unique head-related transfer function (HRTF). Preserving this complex 3D directionality is key to lifelike reproduction of a recording. Conventional techniques such as stereo or 5.1-channel surround sound position the listener at the apex of a triangle or the center of a circle, not the center of the sphere of lifelike hearing. A periphonic reproduction system for music and movie entertainment, Virtual Reality, and Training Simulation termed PerAmbio 3D/2D (Pat. pending) is described in theory and subjective tests; it captures the 3D sound field with a microphone array and transforms the periphonic signals into ordinary 6-channel media for either decoderless 2D replay on 5.1 systems, or lossless 3D replay with a decoder and five additional speakers. PerAmbio 3D/2D is described as a practical approach to preserving the spatial perception of reality, where the listening room and speakers disappear, leaving the acoustical impression of the original venue.
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Lin, Shu
2000-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
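The outer/inner interaction loop described above can be sketched as follows; the decoder objects are hypothetical stand-ins, not an implementation of the paper's system:

```python
def concatenated_decode(inner, outer, max_iters):
    """Interactive concatenated decoding loop.

    inner: turbo decoder exposing iterate() -> soft output (one iteration).
    outer: Reed-Solomon decoder exposing try_decode(soft) -> (ok, frame),
           performing reliability-based soft-decision decoding.
    After each inner iteration, the outer decoder attempts a decode; success
    terminates the inner iterations early, which is the source of the
    decoding-delay reduction."""
    for it in range(1, max_iters + 1):
        soft_out = inner.iterate()               # one turbo iteration
        ok, frame = outer.try_decode(soft_out)   # outer RS attempt
        if ok:
            return frame, it                     # outer success stops inner
    return None, max_iters                       # preset iteration limit hit
```

Under good channel conditions the outer decode typically succeeds after very few inner iterations, so the average iteration count (and hence delay) drops well below `max_iters`.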
An Interactive Concatenated Turbo Coding System
NASA Technical Reports Server (NTRS)
Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc
1999-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
Steric hindrance and the enhanced stability of light rare-earth elements in hydrothermal fluids
Mayanovic, Robert A.; Anderson, Alan J.; Bassett, William A.; Chou, I.-Ming
2009-01-01
A series of X-ray absorption spectroscopy (XAS) experiments was performed to determine the structure and stability of aqueous REE (La, Nd, Gd, and Yb) chloride complexes to 500 °C and 520 MPa. The REE3+ ions exhibit inner-sphere chloroaqua complexation with a steady increase of chloride coordination with increasing temperature in the 150 to 500 °C range. Furthermore, the degree of chloride coordination of REE3+ inner-sphere chloroaqua complexes decreases significantly from light to heavy REE. These results indicate that steric hindrance drives the reduction of chloride coordination of REE3+ inner-sphere chloroaqua complexes from light to heavy REE. This results in greater stability and preferential transport of light REE3+ over heavy REE3+ ions in saline hydrothermal fluids. Accordingly, the preferential mobility of light REE directly influences the relative abundance of REE in rocks and minerals and thus needs to be considered in geochemical modeling of petrogenetic and ore-forming processes affected by chloride-bearing hydrothermal fluids.
Müller, Katharina; Gröschel, Annett; Rossberg, André; Bok, Frank; Franzen, Carola; Brendler, Vinzenz; Foerstendorf, Harald
2015-02-17
Hematite plays a decisive role in regulating the mobility of contaminants in rocks and soils. The Np(V) reactions at the hematite-water interface were comprehensively investigated by a combined approach of in situ vibrational spectroscopy, X-ray absorption spectroscopy and surface complexation modeling. A variety of sorption parameters such as Np(V) concentration, pH, ionic strength, and the presence of bicarbonate was considered. Time-resolved IR spectroscopic sorption experiments at the iron oxide-water interface evidenced the formation of a single monomeric Np(V) inner-sphere sorption complex. EXAFS provided complementary information on bidentate edge-sharing coordination. In the presence of atmospherically derived bicarbonate, the formation of the bis-carbonato inner-sphere complex was confirmed, supporting previous EXAFS findings [1]. The obtained molecular structure allows more reliable surface complexation modeling of recent and future macroscopic data. Such confident modeling is mandatory for evaluating water contamination and for predicting the fate and migration of radioactive contaminants in the subsurface environment, as might occur in the vicinity of a radioactive waste repository or a reprocessing plant.
Monte Carlo simulation of Hamaker nanospheres coated with dipolar particles
NASA Astrophysics Data System (ADS)
Meyra, Ariel G.; Zarragoicoechea, Guillermo J.; Kuz, Victor A.
2012-01-01
Parallel tempering Monte Carlo simulation is carried out in systems of N attractive Hamaker spheres, each dressed with n dipolar particles able to move on the surface of the sphere. Different cluster configurations emerge for given values of the control parameters. The energy per sphere and the pair distribution functions of spheres and dipoles, as functions of temperature, density, external electric field, and/or the angular orientation of dipoles, are used to analyse the state of aggregation of the system. As a consequence of the non-central interaction, the model predicts complex structures such as self-assembly of spheres by a double crown of dipoles. This result could help in understanding some recent experiments in colloid science and biology.
NASA Technical Reports Server (NTRS)
Woo, Simon S.; Cheng, Michael K.
2011-01-01
The original Luby Transform (LT) coding scheme is extended to account for data transmissions where some information symbols in a message block are more important than others. Prioritized LT codes provide unequal error protection (UEP) of data on an erasure channel by modifying the original LT encoder. The prioritized algorithm improves high-priority data protection without penalizing low-priority data recovery. Moreover, low-latency decoding is also obtained for high-priority data due to fast encoding. Prioritized LT codes require only a slight change in the original encoding algorithm, and no changes at all at the decoder. Hence, with a small complexity increase in the LT encoder, improved UEP and low decoding latency for high-priority data can be achieved. LT encoding partitions a data stream into fixed-size message blocks, each with a constant number of information symbols. To generate a code symbol from the information symbols in a message, the Robust Soliton probability distribution is first applied in order to determine the number of information symbols to be used to compute the code symbol. Then, the specific information symbols are chosen uniformly at random from the message block. Finally, the selected information symbols are XORed to form the code symbol. The Prioritized LT code construction includes an additional restriction that code symbols formed by a relatively small number of XORed information symbols select some of these information symbols from the pool of high-priority data. Once high-priority data are fully covered, encoding continues with the conventional LT approach, where code symbols are generated by selecting information symbols from the entire message block across all priorities. Therefore, if code symbols derived from high-priority data experience an unusually high number of erasures, Prioritized LT codes can still reliably recover both high- and low-priority data.
This hybrid approach decides not only "how to encode" but also "what to encode" to achieve UEP. Another advantage of the priority encoding process is that the majority of high-priority data can be decoded sooner, since only a small number of code symbols are required to reconstruct high-priority data. This approach increases the likelihood that high-priority data are decoded before low-priority data. The Prioritized LT code scheme achieves an improvement in high-priority data decoding performance, as well as overall information recovery, without penalizing the decoding of low-priority data, assuming high-priority data make up no more than half of a message block. The cost is the additional complexity required in the encoder. If extra computational resources are available at the transmitter, image, voice, and video transmission quality in terrestrial and space communications can benefit from accurate use of redundancy in protecting data with varying priorities.
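The encoding steps described above (draw a degree, pick neighbour symbols, XOR them) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it substitutes the simpler Ideal Soliton distribution for the Robust Soliton, and assumes a degree threshold of 3 for the "relatively small number of XORed information symbols" rule.

```python
import random

def ideal_soliton(k):
    # Degree weights for d = 1..k.  The paper uses the Robust Soliton
    # distribution; the Ideal Soliton here is a simplified stand-in.
    return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def lt_encode_symbol(block, weights, hp_indices, covered, rng):
    """Generate one prioritized LT code symbol.

    Low-degree symbols draw one neighbour from the not-yet-covered
    high-priority pool until that pool is fully covered; afterwards
    encoding reverts to the conventional uniform LT choice.
    """
    k = len(block)
    degree = rng.choices(range(1, k + 1), weights=weights)[0]
    if degree <= 3 and not hp_indices <= covered:
        # prioritized restriction: one neighbour from uncovered HP data
        first = rng.choice(sorted(hp_indices - covered))
        rest = rng.sample([i for i in range(k) if i != first], degree - 1)
        neighbours = [first] + rest
    else:
        neighbours = rng.sample(range(k), degree)  # uniform over the block
    covered.update(set(neighbours) & hp_indices)
    symbol = 0
    for i in neighbours:
        symbol ^= block[i]                         # XOR the chosen symbols
    return symbol, neighbours
```

As the abstract notes, the decoder is unchanged: the code symbol and its neighbour list are exactly what a standard LT peeling decoder consumes.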
Raz, Gal; Svanera, Michele; Singer, Neomi; Gilam, Gadi; Cohen, Maya Bleich; Lin, Tamar; Admon, Roee; Gonen, Tal; Thaler, Avner; Granot, Roni Y; Goebel, Rainer; Benini, Sergio; Valente, Giancarlo
2017-12-01
Major methodological advancements have recently been made in the field of neural decoding, which is concerned with the reconstruction of mental content from neuroimaging measures. However, in the absence of a large-scale examination of the validity of decoding models across subjects and content, the extent to which these models can be generalized is not clear. This study addresses the challenge of producing generalizable decoding models, which allow the reconstruction of perceived audiovisual features from human functional magnetic resonance imaging (fMRI) data without prior training of the algorithm on the decoded content. We applied an adapted version of kernel ridge regression combined with temporal optimization on data acquired during film viewing (234 runs) to generate standardized brain models for sound loudness, speech presence, perceived motion, face-to-frame ratio, lightness, and color brightness. The prediction accuracies were tested on data collected from different subjects watching other movies, mainly in another scanner. Substantial and significant (Q_FDR < 0.05) correlations between the reconstructed and the original descriptors were found for the first three features (loudness, speech, and motion) in all of the 9 test movies (R¯ = 0.62, R¯ = 0.60, R¯ = 0.60, respectively), with high reproducibility of the predictors across subjects. The face ratio model produced significant correlations in 7 out of 8 movies (R¯ = 0.56). The lightness and brightness models did not show robustness (R¯ = 0.23, R¯ = 0). Further analysis of additional data (95 runs) indicated that loudness reconstruction veridicality can consistently reveal relevant group differences in musical experience. The findings point to the validity and generalizability of our loudness, speech, motion, and face ratio models for complex cinematic stimuli (as well as for music in the case of loudness).
While future research should further validate these models using controlled stimuli and explore the feasibility of extracting more complex models via this method, the reliability of our results indicates the potential usefulness of the approach and the resulting models in basic scientific and diagnostic contexts. Copyright © 2017 Elsevier Inc. All rights reserved.
More About Vector Adaptive/Predictive Coding Of Speech
NASA Technical Reports Server (NTRS)
Jedrey, Thomas C.; Gersho, Allen
1992-01-01
Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.
Menger, Marcus; Eckstein, Fritz; Porschke, Dietmar
2000-01-01
The dynamics of a hammerhead ribozyme was analyzed by measurements of fluorescence-detected temperature jump relaxation. The ribozyme was substituted at different positions by 2-aminopurine (2-AP) as fluorescence indicator; these substitutions do not inhibit catalysis. The general shape of relaxation curves reported from different positions of the ribozyme is very similar: a fast decrease of fluorescence, mainly due to physical quenching, is followed by a slower increase of fluorescence due to conformational relaxation. In most cases at least three relaxation time constants in the time range from a few microseconds to ~200 ms are required for fitting. Although the relaxation at different positions of the ribozyme is similar in general, suggesting a global type of ribozyme dynamics, a close examination reveals differences, indicating an individual local response. For example, 2-AP in a tetraloop reports mainly the local loop dynamics known from isolated loops, whereas 2-AP located at the core, e.g. at the cleavage site or its vicinity, also reports relatively large amplitudes of slower components of the ribozyme dynamics. A variant with an A→G substitution in domain II, resulting in an inactive form, leads to the appearance of a particularly slow relaxation process (τ ≈200 ms). Addition of Mg2+ ions induces a reduction of amplitudes and in most cases a general increase of time constants. Differences between the hammerhead variants are clearly demonstrated by subtraction of relaxation curves recorded under corresponding conditions. The changes induced in the relaxation response by Mg2+ are very similar to those induced by Ca2+. The relaxation data do not provide any evidence for formation of Mg2+-inner sphere complexes in hammerhead ribozymes, because a Mg2+-specific relaxation effect was not visible. 
However, a Mg2+-specific effect was found for a dodeca-riboadenylate substituted with 2-AP, showing that the fluorescence of 2-AP is able to indicate inner sphere complexation. Amplitudes and time constants show that the equilibrium constant of inner sphere complexation is 1.2, corresponding to 55% inner sphere state of the Mg2+ complexes; the rate constant 6.6 × 10^3 s^-1 for inner sphere complexation is relatively low and shows the existence of some barrier(s) on the way to inner sphere complexes. PMID:11071929
Liarokapis, Minas V; Artemiadis, Panagiotis K; Kyriakopoulos, Kostas J; Manolakos, Elias S
2013-09-01
A learning scheme based on random forests is used to discriminate between different reach-to-grasp movements in 3-D space, based on the myoelectric activity of human muscles of the upper arm and the forearm. Task specificity for motion decoding is introduced at two different levels: the subspace to move toward and the object to be grasped. The discrimination between the different reach-to-grasp strategies is accomplished with machine learning techniques for classification. The classification decision is then used to trigger an EMG-based task-specific motion decoding model. Task-specific models manage to outperform "general" models, providing better estimation accuracy. Thus, the proposed scheme takes advantage of a framework incorporating both a classifier and a regressor that cooperate advantageously in order to split the task space. The proposed learning scheme can easily be applied to a range of EMG-based interfaces that must operate in real time, providing data-driven capabilities for the multiclass problems that arise in complex everyday environments.
Kusano, Toshiki; Kurashige, Hiroki; Nambu, Isao; Moriguchi, Yoshiya; Hanakawa, Takashi; Wada, Yasuhiro; Osu, Rieko
2015-08-01
It has been suggested that resting-state brain activity reflects task-induced brain activity patterns. In this study, we examined whether neural representations of specific movements can be observed in the resting-state brain activity patterns of motor areas. First, we defined two regions of interest (ROIs) to examine brain activity associated with two different behavioral tasks. Using multi-voxel pattern analysis with regularized logistic regression, we designed a decoder to detect voxel-level neural representations corresponding to the tasks in each ROI. Next, we applied the decoder to resting-state brain activity. We found that the decoder discriminated resting-state neural activity with accuracy comparable to that associated with task-induced neural activity. The distribution of learned weighted parameters for each ROI was similar for resting-state and task-induced activities. Large weighted parameters were mainly located on conjunctive areas. Moreover, the accuracy of detection was higher than that for a decoder whose weights were randomly shuffled, indicating that the resting-state brain activity includes multi-voxel patterns similar to the neural representation for the tasks. Therefore, these results suggest that the neural representation of resting-state brain activity is more finely organized and more complex than conventionally considered.
Unified Theory for Decoding the Signals from X-Ray Fluorescence and X-Ray Diffraction of Mixtures.
Chung, Frank H
2017-05-01
For research and development or for solving technical problems, we often need to know the chemical composition of an unknown mixture, which is coded and stored in the signals of its X-ray fluorescence (XRF) and X-ray diffraction (XRD). X-ray fluorescence gives chemical elements, whereas XRD gives chemical compounds. The major problem in XRF and XRD analyses is the complex matrix effect. The conventional technique to deal with the matrix effect is to construct empirical calibration lines with standards for each element or compound sought, which is tedious and time-consuming. A unified theory of quantitative XRF analysis is presented here. The idea is to cancel the matrix effect mathematically. It turns out that the decoding equation for quantitative XRF analysis is identical to that for quantitative XRD analysis although the physics of XRD and XRF are fundamentally different. The XRD work has been published and practiced worldwide. The unified theory derives a new intensity-concentration equation of XRF, which is free from the matrix effect and valid for a wide range of concentrations. The linear decoding equation establishes a constant slope for each element sought, hence eliminating the work on calibration lines. The simple linear decoding equation has been verified by 18 experiments.
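The abstract does not reproduce the new intensity-concentration equation. As a hedged illustration of what a matrix-effect-free linear decoding looks like, the sketch below follows the form of Chung's earlier XRD "matrix-flushing" relation, x_i = (I_i/k_i) / Σ_j (I_j/k_j), in which each element or compound has a constant slope k_i and the normalization cancels the common matrix factor; the XRF equation in the paper may differ in detail.

```python
def decode_fractions(intensities, slopes):
    """Linear intensity -> concentration decoding sketch.

    Each component i has a constant slope k_i (its reference sensitivity);
    normalizing the ratios I_i/k_i cancels the common matrix-absorption
    factor, so no empirical calibration lines are needed.
    """
    ratios = [i / k for i, k in zip(intensities, slopes)]
    total = sum(ratios)
    return [r / total for r in ratios]
```

Because the result is a ratio of ratios, any overall scaling of the measured intensities (tube power, counting time) drops out as well.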
Six-coordinate manganese(3+) in catalysis by yeast manganese superoxide dismutase
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheng, Yuewei; Gralla, Edith Butler; Schumacher, Mikhail
Reduction of superoxide (O2^-) by manganese-containing superoxide dismutase occurs through either a 'prompt protonation' pathway or an 'inner-sphere' pathway, with the latter leading to formation of an observable Mn-peroxo complex. We recently reported that wild-type (WT) manganese superoxide dismutases (MnSODs) from Saccharomyces cerevisiae and Candida albicans are more gated toward the 'prompt protonation' pathway than human and bacterial MnSODs, and suggested that this could result from small structural changes in the second coordination sphere of manganese. We report here that substitution of a second-sphere residue, Tyr34, by phenylalanine (Y34F) causes the MnSOD from S. cerevisiae to react exclusively through the 'inner-sphere' pathway. At neutral pH, we observe, surprisingly, that protonation of the Mn-peroxo complex in the mutant yeast enzyme occurs through a fast pathway, leading to a putative six-coordinate Mn^3+ species, which actively oxidizes O2^- in the catalytic cycle. Upon increasing pH, the fast pathway is gradually replaced by a slow proton-transfer pathway, leading to the well-characterized five-coordinate Mn^3+. We propose and compare two hypothetical mechanisms for the mutant yeast enzyme, differing in the structure of the Mn-peroxo complex yet both involving formation of the active six-coordinate Mn^3+ and proton transfer from a second-sphere water molecule, which has substituted for the -OH of Tyr34, to the Mn-peroxo complex. Because both WT and mutant yeast MnSOD rest in the 2+ state and become six-coordinate when oxidized up from Mn^2+, six-coordinate Mn^3+ species could also actively function in the mechanism of WT yeast MnSODs.
Amokrane, S; Ayadim, A; Malherbe, J G
2005-11-01
A simple modification of the reference hypernetted chain (RHNC) closure of the multicomponent Ornstein-Zernike equations with bridge functions taken from Rosenfeld's hard-sphere bridge functional is proposed. Its main effect is to remedy the major limitation of the RHNC closure in the case of highly asymmetric mixtures--the wide domain of packing fractions in which it has no solution. The modified closure is also much faster, while being of similar complexity. This is achieved with a limited loss of accuracy, mainly for the contact value of the big sphere correlation functions. Comparison with simulation shows that inside the RHNC no-solution domain, it provides a good description of the structure, while being clearly superior to all the other closures used so far to study highly asymmetric mixtures. The generic nature of this closure and its good accuracy combined with a reduced no-solution domain open up the possibility to study the phase diagram of complex fluids beyond the hard-sphere model.
Syllabus for Weizmann Course: Earth System Science 101
NASA Technical Reports Server (NTRS)
Wiscombe, Warren J.
2011-01-01
This course aims for an understanding of Earth System Science and the interconnection of its various "spheres" (atmosphere, hydrosphere, etc.) by adopting the view that "the microcosm mirrors the macrocosm". We shall study a small set of microcosms, each residing primarily in one sphere but substantially involving at least one other, in order to illustrate the kinds of coupling that can occur and to gain a greater appreciation of the complexity of even the smallest Earth System Science phenomenon.
NASA Technical Reports Server (NTRS)
Dolinar, S.; Belongie, M.
1995-01-01
The Galileo low-gain antenna mission will be supported by a coding system that uses a (14,1/4) inner convolutional code concatenated with Reed-Solomon codes of four different redundancies. Decoding for this code is designed to proceed in four distinct stages of Viterbi decoding followed by Reed-Solomon decoding. In each successive stage, the Reed-Solomon decoder only tries to decode the highest redundancy codewords not yet decoded in previous stages, and the Viterbi decoder redecodes its data utilizing the known symbols from previously decoded Reed-Solomon codewords. A previous article analyzed a two-stage decoding option that was not selected by Galileo. The present article analyzes the four-stage decoding scheme and derives the near-optimum set of redundancies selected for use by Galileo. The performance improvements relative to one- and two-stage decoding systems are evaluated.
Research on lossless compression of true color RGB image with low time and space complexity
NASA Astrophysics Data System (ADS)
Pan, ShuLin; Xie, ChengJun; Xu, Lin
2008-12-01
Correlated redundancy in space and energy is eliminated using a DWT lifting scheme, and the complexity of the image is reduced by an algebraic transform among the RGB components. This paper proposes an improved Rice coding algorithm built on an enumerating DWT lifting scheme that fits images of any size through image renormalization. The algorithm codes and decodes the pixels of an image without backtracking. It supports LOCO-I and can also be applied to other coders/decoders. Simulation analysis indicates that the proposed method achieves high lossless compression. Compared with Lossless-JPG, PNG (Microsoft), PNG (Rene), PNG (Photoshop), PNG (Anix PicViewer), PNG (ACDSee), PNG (Ulead Photo Explorer), JPEG2000, PNG (KoDa Inc), SPIHT and JPEG-LS, the lossless compression ratio improves by 45%, 29%, 25%, 21%, 19%, 17%, 16%, 15%, 11%, 10.5% and 10%, respectively, on 24 RGB images provided by KoDa Inc. Running from main memory on a Pentium IV CPU at 2.20 GHz with 256 MB RAM, the proposed coder is about 21 times faster than SPIHT with an efficiency gain of roughly 166%, and the decoder is about 17 times faster than SPIHT with an efficiency gain of roughly 128%.
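The paper's exact algebraic RGB transform is not given in the abstract. As an example of the kind of integer, losslessly invertible decorrelation it refers to, here is the JPEG2000-style reversible color transform (an illustrative assumption, not the authors' transform):

```python
def rct_forward(r, g, b):
    # Integer, losslessly invertible decorrelation of RGB, in the style
    # of the JPEG2000 reversible color transform.  Shown as an example
    # of an "algebraic transform among the RGB components".
    y = (r + 2 * g + b) >> 2
    u = b - g
    v = r - g
    return y, u, v

def rct_inverse(y, u, v):
    g = y - ((u + v) >> 2)   # exact inverse thanks to floor arithmetic
    b = u + g
    r = v + g
    return r, g, b
```

Despite the floor division in the forward luma, the round trip is exact for all integer inputs, which is what makes such transforms usable in lossless pipelines.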
Iterative Demodulation and Decoding of Non-Square QAM
NASA Technical Reports Server (NTRS)
Li, Lifang; Divsalar, Dariush; Dolinar, Samuel
2004-01-01
It has been shown that a non-square (NS) 2^(2n+1)-ary (where n is a positive integer) quadrature amplitude modulation [NS 2^(2n+1)-QAM] has inherent memory that can be exploited to obtain coding gains. Moreover, it should not be necessary to build new hardware to realize these gains. The present scheme is a product of theoretical calculations directed toward reducing the computational complexity of decoding coded 2^(2n+1)-QAM. In the general case of 2^(2n+1)-QAM, the signal constellation is not square and it is impossible to have independent in-phase (I) and quadrature-phase (Q) mapping and demapping. However, independent I and Q mapping and demapping are desirable for reducing the complexity of computing the log likelihood ratio (LLR) between a bit and a received symbol (such computations are essential operations in iterative decoding). This is because in modulation schemes that include independent I and Q mapping and demapping, each bit of a signal point is involved in only one-dimensional mapping and demapping. As a result, the computation of the LLR is equivalent to that of a one-dimensional pulse amplitude modulation (PAM) system. Therefore, it is desirable to find a signal constellation that enables independent I and Q mapping and demapping for 2^(2n+1)-QAM.
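The PAM equivalence mentioned above can be made concrete. With independent I and Q mapping, each bit's LLR reduces to a one-dimensional sum over the PAM amplitudes; a minimal sketch for an AWGN channel follows (the level set and bit labeling are hypothetical, not the paper's constellation):

```python
import math

def pam_bit_llr(y, levels, labels, bit_idx, sigma2):
    """LLR of one bit from a received 1-D PAM amplitude y.

    `levels[i]` is the amplitude of symbol i and `labels[i]` its bit
    label (a tuple).  With independent I and Q mapping, each QAM bit
    reduces to exactly this one-dimensional computation.
    """
    num = sum(math.exp(-(y - a) ** 2 / (2 * sigma2))
              for a, lab in zip(levels, labels) if lab[bit_idx] == 0)
    den = sum(math.exp(-(y - a) ** 2 / (2 * sigma2))
              for a, lab in zip(levels, labels) if lab[bit_idx] == 1)
    return math.log(num / den)
```

For a square constellation the I and Q sums each run over only sqrt(M) levels instead of all M symbols, which is the complexity saving the abstract is after.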
NASA Astrophysics Data System (ADS)
Gunay, Omer; Ozsarac, Ismail; Kamisli, Fatih
2017-05-01
Video recording is an essential property of new generation military imaging systems. Playback of the stored video on the same device is also desirable, as it provides several operational benefits to end users. Two very important constraints for many military imaging systems, especially for hand-held devices and thermal weapon sights, are power consumption and size. To meet these constraints, it is essential to perform most of the processing applied to the video signal, such as preprocessing, compression, storage, decoding, playback and other system functions, on a single programmable chip, such as an FPGA, DSP, GPU or ASIC. In this work, H.264/AVC (Advanced Video Coding) compatible video compression, storage, decoding and playback blocks are efficiently designed and implemented on FPGA platforms using FPGA fabric and the Altera NIOS II soft processor. Many subblocks used in video encoding are reused during video decoding in order to save FPGA resources and power. Computationally complex blocks are designed in FPGA fabric, while blocks such as SD card write/read, H.264 syntax decoding and CAVLC decoding run on the NIOS processor to benefit from software flexibility. In addition, to keep power consumption low, the system was designed to require limited external memory access. The design was tested with a 640x480, 25 fps thermal camera on a CYCLONE V FPGA, Altera's lowest-power FPGA family, and consumes less than 40% of the CYCLONE V 5CEFA7 FPGA resources on average.
Radial Bias Is Not Necessary For Orientation Decoding
Pratte, Michael S.; Sy, Jocelyn L.; Swisher, Jascha D.; Tong, Frank
2015-01-01
Multivariate pattern analysis can be used to decode the orientation of a viewed grating from fMRI signals in early visual areas. Although some studies have reported identifying multiple sources of the orientation information that make decoding possible, a recent study argued that orientation decoding is only possible because of a single source: a coarse-scale retinotopically organized preference for radial orientations. Here we aim to resolve these discrepant findings. We show that there were subtle, but critical, experimental design choices that led to the erroneous conclusion that a radial bias is the only source of orientation information in fMRI signals. In particular, we show that the reliance on a fast temporal-encoding paradigm for spatial mapping can be problematic, as effects of space and time become conflated and lead to distorted estimates of a voxel’s orientation or retinotopic preference. When we implement minor changes to the temporal paradigm or to the visual stimulus itself, by slowing the periodic rotation of the stimulus or by smoothing its contrast-energy profile, we find significant evidence of orientation information that does not originate from radial bias. In an additional block-paradigm experiment where space and time were not conflated, we apply a formal model comparison approach and find that many voxels exhibit more complex tuning properties than predicted by radial bias alone or in combination with other known coarse-scale biases. Our findings support the conclusion that radial bias is not necessary for orientation decoding. In addition, our study highlights potential limitations of using temporal phase-encoded fMRI designs for characterizing voxel tuning properties. PMID:26666900
Huang, Liangfang; Wang, Wenmin; Wei, Xiaoqin; Wei, Haiyan
2015-04-23
The hydrosilylation of unsaturated carbon-heteroatom (C═O, C═N) bonds catalyzed by the high-valent rhenium(V)-dioxo complex ReO2I(PPh3)2 (1) was studied computationally to determine the underlying mechanism. Our calculations revealed that the ionic outer-sphere pathway, in which the organic substrate attacks the Si center in an η(1)-silane rhenium adduct to prompt the heterolytic cleavage of the Si-H bond, is the most energetically favorable process for the hydrosilylation of imines catalyzed by rhenium(V)-dioxo complex 1. The activation energy of the turnover-limiting step was calculated to be 22.8 kcal/mol with phenylmethanimine. This value is energetically more favorable than the [2 + 2] addition pathway by as much as 10.0 kcal/mol. Moreover, the ionic outer-sphere pathway competes with the [2 + 2] addition mechanism when rhenium(V)-dioxo complex 1 catalyzes the hydrosilylation of carbonyl compounds. Furthermore, an electron-donating group on the organic substrate induces higher activity, favoring the ionic outer-sphere mechanistic pathway. These findings highlight the unique features of high-valent transition-metal complexes as Lewis acids in activating the Si-H bond and catalyzing reduction reactions.
Steady Shear Viscosities of Two Hard Sphere Colloidal Dispersions
NASA Astrophysics Data System (ADS)
Cheng, Zhengdong; Chaikin, Paul M.; Phan, See-Eng; Russel, William B.; Zhu, Jixiang
1996-03-01
Though hard spheres have the simplest inter-particle potential, the many-body hydrodynamic interactions are complex, and the rheological properties of dispersions are not fully understood in the concentrated regime. We studied two model systems: colloidal poly(methyl methacrylate) spheres with a grafted layer of poly(12-hydroxystearic acid) (PMMA/PHSA), and spherical silica particles (PST-5, Nissan Chemical Industries, Ltd, Tokyo, Japan). Steady shear viscosities were measured with a Zimm viscometer. The high-shear relative viscosity of the dispersions compares well with other hard-sphere systems, but the low-shear relative viscosity of the PMMA/PHSA dispersions is η/η0 = 50 at φ = 0.5, higher than η/η0 = 22 for other hard-sphere systems, consistent with recently published data (Phys. Rev. Lett. 75 (1995) 958). Bare silica spheres are used to clarify the effect of the grafted layer. With the silica spheres, volume fraction can be determined independently of intrinsic viscosity measurements; also, more concentrated dispersions can be prepared.
Yang, Yuan; Quan, Nannan; Bu, Jingjing; Li, Xueping; Yu, Ningmei
2016-09-26
High-order modulation and demodulation technology can resolve the frequency conflict between wireless energy transmission and data communication. To achieve reliable wireless data communication based on high-order modulation for a visual prosthesis, this work proposes a Reed-Solomon (RS) error-correcting code (ECC) circuit built on differential amplitude and phase shift keying (DAPSK) soft demodulation. First, recognizing that the traditional division-based DAPSK soft demodulation algorithm is too complex for hardware implementation, an improved phase soft demodulation algorithm that reduces hardware complexity is put forward for the visual prosthesis. Based on this new algorithm, an improved RS soft decoding method is proposed, in which the combination of the Chase algorithm and hard decoding is used to achieve soft decoding. To meet the requirements of an implantable visual prosthesis, a method to calculate symbol-level reliability from the product of bit reliabilities is derived, which reduces the number of test vectors of the Chase algorithm. The proposed algorithms are verified by MATLAB simulation and FPGA experiments. In the MATLAB simulation, a biological channel attenuation model is included in the ECC circuit, and the data rate is 8 Mbps in both the MATLAB simulation and the FPGA experiments. MATLAB simulation results show that the improved phase soft demodulation algorithm saves hardware resources without losing bit error rate (BER) performance. Compared with the traditional demodulation circuit, the coding gain of the ECC circuit is improved by about 3 dB at the same BER of [Formula: see text]. The FPGA experimental results show that the system can correct data demodulation errors with the wireless coils 3 cm apart; the greater the distance, the higher the BER.
We then used a bit error rate analyzer to measure the BER of the demodulation circuit and of the RS ECC circuit at different coil distances. The experimental results show that, at the same coil distance, the RS ECC circuit has about an order of magnitude lower BER than the demodulation circuit alone, and therefore provides higher communication reliability. The improved phase soft demodulation and soft decoding algorithms proposed in this paper enable data communication that is more reliable than other demodulation systems, and also provide a useful reference for further study of visual prosthesis systems.
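The symbol-level reliability derived from bit reliabilities, which prunes the Chase test vectors, can be sketched as below. The abstract does not reproduce the exact definitions; the product-of-|LLR| form and the "perturb the t least reliable symbols" rule are assumptions consistent with standard Chase-type decoding.

```python
def symbol_reliabilities(bit_llrs, m):
    """Symbol-level reliability from bit LLRs, m bits per RS symbol.

    Sketches the abstract's 'multiplication of bit reliability' rule as
    a product of |LLR| values; the paper's exact definition may differ.
    """
    rel = []
    for i in range(0, len(bit_llrs), m):
        r = 1.0
        for llr in bit_llrs[i:i + m]:
            r *= abs(llr)
        rel.append(r)
    return rel

def chase_test_positions(rel, t):
    # Positions of the t least reliable symbols: the only places a
    # Chase-type decoder perturbs, keeping the test-vector count small.
    return sorted(range(len(rel)), key=lambda i: rel[i])[:t]
```

Working at the symbol level rather than the bit level is what shrinks the candidate set: only t symbol positions are perturbed before each hard-decoding pass.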
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gritsan, N.P.; Usov, O.M.; Shokhirev, N.V.
1986-07-01
The optical and ESR spectra have been examined for complexes of Cu(I) with various radicals containing various numbers of Cl- ions in the central-atom coordination sphere. The spin-Hamiltonian parameters have been determined for all these radical complexes, and the observed ESR spectra have been compared with those calculated with allowance for second-order effects. The observed values for the isotropic and anisotropic components of the hyperfine-interaction (HFI) constant from the central ion have been used to estimate the contributions from the 4s and 3d(z^2) orbitals of the copper ion to the unpaired-electron MO. Quantum-chemical calculations have been performed by the INDO method on the electronic structures and geometries of complexes formed by CH2OH with Cu(I) for various Cl- contents in the coordination sphere. The radical is coordinated via the π orbital on the carbon atom, and the stabilities of the radical complexes decrease as the number of Cl- ions in the coordination sphere increases. The CuCl4(3-) fragment adopts a geometry close to planar in the complex containing four Cl- ions.
NASA Astrophysics Data System (ADS)
Liu, Fei; Xu, Guanghua; Zhang, Qing; Liang, Lin; Liu, Dan
2015-11-01
As one of the Geometrical Product Specifications widely applied in industrial manufacturing and measurement, sphericity error comprehensively characterizes a 3D structure and reflects the machining quality of a spherical workpiece. With increasing demands on the motion performance of spherical parts, sphericity error is becoming an indispensable component in the evaluation of form error. However, the evaluation of sphericity error is still considered a complex mathematical issue, and research on the development of available models is lacking. In this paper, an intersecting chord method is first proposed to solve the minimum circumscribed sphere and maximum inscribed sphere evaluations of sphericity error. This new modelling method leverages chord relationships to replace the characteristic points, thereby significantly reducing the computational complexity and improving the computational efficiency. Using the intersecting chords to generate a virtual centre, the reference sphere in two concentric spheres is simplified as a space intersecting structure. The position of the virtual centre on the space intersecting structure is determined by characteristic chords, which may reduce the deviation between the virtual centre and the centre of the reference sphere. In addition, two experiments with real Cartesian-coordinate datasets are used to verify the effectiveness of the proposed method. The results indicate that the estimated errors agree closely with those of published methods, while the computational efficiency is improved, a notable advance for sphericity error evaluation.
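The intersecting chord construction itself is not detailed enough in the abstract to reproduce, but the quantities being evaluated can be illustrated with a common baseline: a linear least-squares sphere fit, followed by the radial band that bounds all measured points. This is an assumed sketch, not the authors' method:

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit.

    A sphere satisfies x^2 + y^2 + z^2 = 2a*x + 2b*y + 2c*z + d with
    centre (a, b, c) and d = r^2 - (a^2 + b^2 + c^2), which is linear
    in (a, b, c, d) and can be solved directly."""
    P = np.asarray(points, dtype=float)
    A = np.column_stack([2 * P, np.ones(len(P))])
    f = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, f, rcond=None)
    center, d = sol[:3], sol[3]
    radius = np.sqrt(d + center @ center)
    return center, radius

def sphericity_error(points, center):
    """Width of the radial band (max minus min distance to the centre)
    that encloses every sampled point."""
    r = np.linalg.norm(np.asarray(points, dtype=float) - center, axis=1)
    return r.max() - r.min()
```

The minimum circumscribed and maximum inscribed sphere criteria optimize different centres than the least-squares one; the paper's contribution is computing those centres efficiently via chord relationships rather than candidate characteristic points.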
Catalytic dimer nanomotors: continuum theory and microscopic dynamics.
Reigh, Shang Yik; Kapral, Raymond
2015-04-28
Synthetic chemically-powered motors with various geometries have potentially new applications involving dynamics on very small scales. Self-generated concentration and fluid flow fields, which depend on geometry, play essential roles in motor dynamics. Sphere-dimer motors, comprising linked catalytic and noncatalytic spheres, display more complex versions of such fields, compared to the often-studied spherical Janus motors. By making use of analytical continuum theory and particle-based simulations we determine the concentration fields, and both the complex structure of the near-field and point-force dipole nature of the far-field behavior of the solvent velocity field that are important for studies of collective motor motion. We derive the dependence of motor velocity on geometric factors such as sphere size and dimer bond length and, thus, show how to construct motors with specific characteristics.
NASA Technical Reports Server (NTRS)
Quir, Kevin J.; Gin, Jonathan W.; Nguyen, Danh H.; Nguyen, Huy; Nakashima, Michael A.; Moision, Bruce E.
2012-01-01
A decoder was developed that decodes a serial concatenated pulse position modulation (SCPPM) encoded information sequence. The decoder takes as input a sequence of four-bit log-likelihood ratios (LLRs) for each PPM slot in a codeword via a XAUI 10-Gb/s quad optical fiber interface. If the decoder is unavailable, it passes the LLRs on to the next decoder via an identical XAUI 10-Gb/s quad optical fiber interface. Otherwise, it decodes the sequence and outputs information bits through a 1-Gb/s Ethernet UDP/IP (User Datagram Protocol/Internet Protocol) interface. The throughput for a single decoder unit is 150 Mb/s at an average of four decoding iterations; by connecting a number of decoder units in series, a decoding rate equal to the aggregate rate is achieved. The unit is controlled through a 1-Gb/s Ethernet UDP/IP interface. This ground station decoder was developed to demonstrate a deep space optical communication link capability, and is unique in its scalable design, which achieves real-time SCPPM decoding at the aggregate data rate.
Information hiding techniques for infrared images: exploring the state-of-the art and challenges
NASA Astrophysics Data System (ADS)
Pomponiu, Victor; Cavagnino, Davide; Botta, Marco; Nejati, Hossein
2015-10-01
The proliferation of infrared technology and imaging systems enables a different perspective for tackling many computer vision problems in defense and security applications. Infrared images are widely used by law enforcement, Homeland Security and military organizations to achieve a significant advantage or situational awareness, and thus it is vital to protect these data against malicious attacks. Concurrently, sophisticated malware is being developed that can disrupt the security and integrity of these digital media. For instance, illegal distribution and manipulation are possible malicious attacks on digital objects. In this paper we explore a new layer of defense for the integrity of infrared images through information hiding techniques such as watermarking. In this context, we analyze the efficiency of several optimal decoding schemes for a watermark inserted into the Singular Value Decomposition (SVD) domain of IR images using an additive spread spectrum (SS) embedding framework. In order to use the singular values (SVs) of the IR images with SS embedding, we adopt several restrictions that ensure that the SVs maintain their statistics. For both the optimal maximum likelihood decoder and the sub-optimal decoders, we assume that the PDF of the SVs can be modeled by a Weibull distribution. Furthermore, we investigate the challenges involved in protecting and assuring the integrity of IR images, such as data complexity and the error probability behavior, i.e., the probability of detection and the probability of false detection, for the applied optimal decoders. By taking into account the efficiency and the auxiliary information necessary for decoding the watermark, we discuss the suitable decoder for various operating situations. Experiments on a large dataset of IR images show the imperceptibility and efficiency of the proposed scheme under various attack scenarios.
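Under the stated Weibull assumption on the SVs, a maximum likelihood decision between two antipodal watermark hypotheses can be sketched as follows. This is an illustrative model with assumed parameter names and an assumed additive rule v = s + gain*b*w; the paper's exact embedding may differ:

```python
import math

def weibull_logpdf(x, shape, scale):
    """Log-density of the Weibull distribution (support x > 0)."""
    if x <= 0:
        return float("-inf")
    z = x / scale
    return math.log(shape / scale) + (shape - 1) * math.log(z) - z ** shape

def ml_decode_bit(received, watermark, gain, shape, scale):
    """ML decision for the embedded bit b in {+1, -1}: compare the Weibull
    log-likelihood of the implied host samples under each hypothesis."""
    ll_plus = sum(weibull_logpdf(v - gain * w, shape, scale)
                  for v, w in zip(received, watermark))
    ll_minus = sum(weibull_logpdf(v + gain * w, shape, scale)
                   for v, w in zip(received, watermark))
    return +1 if ll_plus >= ll_minus else -1
```

The sub-optimal decoders discussed in the paper replace this exact likelihood comparison with cheaper statistics, trading error probability for complexity and for how much side information the decoder needs.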
Arai, Yuji; Moran, P B; Honeyman, B D; Davis, J A
2007-06-01
Np(V) surface speciation on hematite surfaces at pH 7-9 under pCO2 = 10^(-3.45) atm was investigated using X-ray absorption spectroscopy (XAS). In situ XAS analyses suggest that bis-carbonato inner-sphere and tris-carbonato outer-sphere ternary surface species coexist at the hematite-water interface at pH 7-8.8, and the fraction of outer-sphere species gradually increases from 27 to 54% as pH increases from 7 to 8.8. The results suggest that the heretofore unknown Np(V)-carbonato ternary surface species may be important in predicting the fate and transport of Np(V) in the subsurface environment down gradient of high-level nuclear waste repositories.
A novel parallel pipeline structure of VP9 decoder
NASA Astrophysics Data System (ADS)
Qin, Huabiao; Chen, Wu; Yi, Sijun; Tan, Yunfei; Yi, Huan
2018-04-01
To improve the efficiency of the VP9 decoder, a novel parallel pipeline structure is presented in this paper. According to the decoding workflow, the VP9 decoder can be divided into sub-modules, including entropy decoding, inverse quantization, inverse transform, intra prediction, inter prediction, deblocking and pixel adaptive compensation. By analyzing the computing time of each module, hotspot modules are located and the causes of the decoder's low efficiency can be found. A novel pipeline decoder structure is then designed using mixed parallel decoding methods of data division and function division. Experimental results show that this structure greatly improves the decoding efficiency of VP9.
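The function-division idea, where each sub-module runs concurrently on a different frame, can be sketched with a thread-per-stage pipeline. This is a toy model; the lambda stages below are stand-ins for the actual VP9 sub-modules:

```python
import queue
import threading

def stage_worker(fn, q_in, q_out):
    """One pipeline stage: apply fn to each item; forward the poison pill."""
    while True:
        item = q_in.get()
        if item is None:          # poison pill: shut down and propagate
            q_out.put(None)
            return
        q_out.put(fn(item))

def run_pipeline(frames, stage_fns):
    """Function-division pipeline: each decoding stage runs in its own
    thread, so frame i+1 can enter stage 1 while frame i is in stage 2."""
    qs = [queue.Queue() for _ in range(len(stage_fns) + 1)]
    threads = [threading.Thread(target=stage_worker, args=(f, qs[i], qs[i + 1]))
               for i, f in enumerate(stage_fns)]
    for t in threads:
        t.start()
    for fr in frames:
        qs[0].put(fr)
    qs[0].put(None)
    decoded = []
    while (item := qs[-1].get()) is not None:
        decoded.append(item)
    for t in threads:
        t.join()
    return decoded
```

In a real decoder the stage boundaries are chosen so the hotspot modules (found by profiling) are balanced against the rest, and data division splits independent tile or block rows across additional workers within a stage.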
Singer product apertures-A coded aperture system with a fast decoding algorithm
NASA Astrophysics Data System (ADS)
Byard, Kevin; Shutler, Paul M. E.
2017-06-01
A new type of coded aperture configuration that enables fast decoding of the coded aperture shadowgram data is presented. Because they are based on the products of incidence vectors generated from Singer difference sets, we call these Singer product apertures. For a range of aperture dimensions, we compare experimentally the performance of three decoding methods: standard decoding, induction decoding and direct vector decoding. In all cases the induction and direct vector methods are several orders of magnitude faster than the standard method, with direct vector decoding significantly faster than induction decoding. For apertures of the same dimensions, the speed advantage of direct vector decoding over induction decoding is greater for lower-throughput apertures.
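Standard correlation decoding against a difference-set mask can be illustrated in one dimension with the small (7, 3, 1) Singer set {1, 2, 4}. The sketch shows why decoding is exact, though this standard method is precisely what the paper's induction and direct vector methods outperform:

```python
import numpy as np

# Singer (7, 3, 1) perfect difference set {1, 2, 4}: every nonzero cyclic
# shift of the mask coincides with itself in exactly LAM = 1 positions,
# which is what makes correlation decoding exact.
N, K, LAM = 7, 3, 1
aperture = np.zeros(N, dtype=int)
aperture[[1, 2, 4]] = 1

def shadowgram(obj):
    """Detector image: circular convolution of the object with the mask."""
    return np.array([sum(obj[m] * aperture[(j - m) % N] for m in range(N))
                     for j in range(N)])

def standard_decode(shadow):
    """Standard decoding: correlate the shadowgram with the balanced
    array G = 2A - 1. For a perfect difference set the mask's
    autocorrelation is a delta plus a flat offset, so after removing the
    offset the object is recovered exactly."""
    G = 2 * aperture - 1
    raw = np.array([sum(shadow[(j + i) % N] * G[i] for i in range(N))
                    for j in range(N)])
    total = shadow.sum() / K                  # total object intensity
    return (raw + (K - 2 * LAM) * total) / (2 * (K - LAM))
```

The cost here is O(N^2) multiply-adds per decode; the induction and direct vector methods exploit the product structure of the Singer apertures to avoid most of this work.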
Decoding molecular interactions in microbial communities
Abreu, Nicole A.; Taga, Michiko E.
2016-01-01
Microbial communities govern numerous fundamental processes on earth. Discovering and tracking molecular interactions among microbes is critical for understanding how single species and complex communities impact their associated host or natural environment. While recent technological developments in DNA sequencing and functional imaging have led to new and deeper levels of understanding, we are limited now by our inability to predict and interpret the intricate relationships and interspecies dependencies within these communities. In this review, we highlight the multifaceted approaches investigators have taken within their areas of research to decode interspecies molecular interactions that occur between microbes. Understanding these principles can give us greater insight into ecological interactions in natural environments and within synthetic consortia. PMID:27417261
Generation of flower high-order Poincaré sphere laser beams from a spatial light modulator
NASA Astrophysics Data System (ADS)
Lu, T. H.; Huang, T. D.; Wang, J. G.; Wang, L. W.; Alfano, R. R.
2016-12-01
We propose and experimentally demonstrate a new complex laser beam with inhomogeneous polarization distributions mapping onto high-order Poincaré spheres (HOPSs). The complex laser mode is achieved by superposition of Laguerre-Gaussian modes and manifests exotic flower-like localization on intensity and phase profiles. A simple optical system is used to generate a polarization-variant distribution on the complex laser mode by superposition of orthogonal circular polarizations with opposite topological charges. Numerical analyses of the polarization distribution are consistent with the experimental results. The novel flower HOPS beams can act as a new light source for photonic applications.
NASA Astrophysics Data System (ADS)
Xing, Zhang-Fan; Greenberg, J. M.
1992-11-01
Results of an investigation of the analyticity of the complex extinction efficiency Q-tilde(ext) in different parameter domains are presented. In the size parameter domain, x = ωa/c, numerical Hilbert transforms are used to study the analyticity properties of Q-tilde(ext) for homogeneous spheres. Q-tilde(ext) is found to be analytic in the entire lower complex x-tilde-plane when the refractive index, m, is fixed as a real constant (pure scattering) or infinity (perfect conductor); poles, however, appear in the left side of the lower complex x-tilde-plane as m becomes complex. The mean extinction produced by an extended size distribution of particles may be conveniently and accurately approximated using only a few values of the complex extinction evaluated in the complex plane.
Gentaz, Edouard; Sprenger-Charolles, Liliane; Theurel, Anne
2015-01-01
Based on the assumption that good decoding skills constitute a bootstrapping mechanism for reading comprehension, the present study investigated the relative contribution of the former skill to the latter compared to that of three other predictors of reading comprehension (listening comprehension, vocabulary and phonemic awareness) in 392 French-speaking first graders from low SES families. This large sample was split into three groups according to their level of decoding skills assessed by pseudoword reading. Using a cutoff of 1 SD above or below the mean of the entire population, there were 63 good decoders, 267 average decoders and 62 poor decoders. 58% of the variance in reading comprehension was explained by our four predictors, with decoding skills proving to be the best predictor (12.1%, 7.3% for listening comprehension, 4.6% for vocabulary and 3.3% for phonemic awareness). Interaction between group versus decoding skills, listening comprehension and phonemic awareness accounted for significant additional variance (3.6%, 1.1% and 1.0%, respectively). The effects on reading comprehension of decoding skills and phonemic awareness were higher in poor and average decoders than in good decoders whereas listening comprehension accounted for more variance in good and average decoders than in poor decoders. Furthermore, the percentage of children with impaired reading comprehension skills was higher in the group of poor decoders (55%) than in the two other groups (average decoders: 7%; good decoders: 0%) and only 6 children (1.5%) had impaired reading comprehension skills with unimpaired decoding skills, listening comprehension or vocabulary. These results challenge the outcomes of studies on “poor comprehenders” by showing that, at least in first grade, poor reading comprehension is strongly linked to the level of decoding skills. PMID:25793519
Architecture for time or transform domain decoding of reed-solomon codes
NASA Technical Reports Server (NTRS)
Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor); Deutsch, Leslie J. (Inventor); Shao, Howard M. (Inventor)
1989-01-01
Two pipeline (255,223) RS decoders, one a time domain decoder and the other a transform domain decoder, use the same first part to develop an errata locator polynomial τ(x) and an errata evaluator polynomial A(x). Both the time domain decoder and the transform domain decoder have a modified GCD that uses an input multiplexer and an output demultiplexer to reduce the number of GCD cells required. The time domain decoder uses a Chien search and polynomial evaluator on the GCD outputs τ(x) and A(x) for the final decoding steps, while the transform domain decoder uses a transform error pattern algorithm operating on τ(x) and the initial syndrome computation S(x), followed by an inverse transform algorithm, for the final decoding steps prior to adding the received RS coded message to produce a decoded output message.
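The Chien search step named above, finding errata locations by evaluating τ(x) at successive powers of α, can be sketched in brute force over GF(2^8). This is a software illustration only, using the common primitive polynomial 0x11d; the patent describes a hardware cell architecture:

```python
# GF(2^8) log/antilog tables over the primitive polynomial
# x^8 + x^4 + x^3 + x^2 + 1 (0x11d), commonly used for RS(255, k) codes.
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
for i in range(255, 512):
    EXP[i] = EXP[i - 255]          # wrap-around avoids a mod in gf_mul

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def chien_search(locator):
    """Brute-force Chien search: evaluate the errata locator polynomial
    (coefficients in ascending powers of x) at alpha^-i for every
    position i; positions where it vanishes are the errata locations."""
    positions = []
    for i in range(255):
        x_inv = EXP[(255 - i) % 255]          # alpha^{-i}
        acc, xp = 0, 1
        for coeff in locator:
            acc ^= gf_mul(coeff, xp)
            xp = gf_mul(xp, x_inv)
        if acc == 0:
            positions.append(i)
    return positions
```

A hardware Chien search evaluates all 255 candidate positions with one constant-multiplier cell per polynomial coefficient, one position per clock, rather than looping as above.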
NASA Astrophysics Data System (ADS)
Kruk, Danuta; Kowalewski, Jozef
2002-07-01
This article describes paramagnetic relaxation enhancement (PRE) in systems with high electron spin, S, where there is molecular interaction between a paramagnetic ion and a ligand outside of the first coordination sphere. The new feature of our treatment is an improved handling of the electron-spin relaxation, making use of the Redfield theory. Following a common approach, a well-defined second coordination sphere is assumed, and the PRE contribution from these more distant and shorter-lived ligands is treated in a way similar to that used for the first coordination sphere. This model is called "ordered second sphere," OSS. In addition, we develop here a formalism similar to that of Hwang and Freed [J. Chem. Phys. 63, 4017 (1975)], but accounting for the electron-spin relaxation effects. We denote this formalism "diffuse second sphere," DSS. The description of the dynamics of the intermolecular dipole-dipole interaction is based on the Smoluchowski equation, with a potential of mean force related to the radial distribution function. We have used a finite-difference method to calculate numerically a correlation function for translational motion, taking into account the intermolecular forces leading to an arbitrary radial distribution of the ligand protons. The OSS and DSS models, including the Redfield description of the electron-spin relaxation, were used to interpret the PRE in an aqueous solution of a slowly rotating gadolinium (III) complex (S=7/2) bound to a protein.
Wang, Yan-Wu; Bian, Tao; Xiao, Jiang-Wen; Wen, Changyun
2015-10-01
This paper studies the global synchronization of complex dynamical networks (CDNs) under digital communication with limited bandwidth. To realize the digital communication, so-called uniform-quantizer-sets are introduced to quantize the states of nodes, which are then encoded and decoded by newly designed encoders and decoders. To meet the bandwidth constraint, a scaling function is utilized to guarantee that the quantizers have bounded inputs and thus achieve bounded real-time quantization levels. Moreover, a new type of vector norm is introduced to simplify the expression of the bandwidth limit. Through mathematical induction, a sufficient condition is derived to ensure global synchronization of the CDNs. The lower bound on the sum of the real-time quantization levels is analyzed for different cases. An optimization method is employed to relax the requirements on the network topology and to determine the minimum of this lower bound for each case. Simulation examples are also presented to illustrate the established results.
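The role of the scaling function, keeping the quantizer input bounded so a finite set of uniform levels suffices, can be sketched in one dimension. This is an assumed sketch of the general mechanism, not the paper's encoder/decoder pair:

```python
def encode_error(state, estimate, scale, levels):
    """Quantize the scaled estimation error with a uniform quantizer.
    Dividing by the scaling function keeps the quantizer input in
    [-1, 1], so a fixed, finite number of levels (finite bandwidth)
    always suffices."""
    err = (state - estimate) / scale
    err = max(-1.0, min(1.0, err))
    step = 2.0 / (levels - 1)
    return round((err + 1.0) / step)     # integer symbol in {0, ..., levels-1}

def decode_symbol(symbol, estimate, scale, levels):
    """Receiver side: reconstruct the state from the transmitted symbol
    and the shared estimate and scale."""
    step = 2.0 / (levels - 1)
    return estimate + scale * (symbol * step - 1.0)
```

In the paper the scale shrinks over time as the nodes synchronize, so the same number of levels yields ever finer resolution; both ends update the estimate and scale identically, which is why only the integer symbol needs to be transmitted.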
Feasibility of video codec algorithms for software-only playback
NASA Astrophysics Data System (ADS)
Rodriguez, Arturo A.; Morse, Ken
1994-05-01
Software-only video codecs can provide good playback performance in desktop computers with a 486 or 68040 CPU running at 33 MHz without special hardware assistance. Typically, playback of compressed video can be categorized into three tasks: the actual decoding of the video stream, color conversion, and the transfer of decoded video data from system RAM to video RAM. By current standards, good playback performance is the decoding and display of video streams of 320 by 240 (or larger) compressed frames at 15 (or greater) frames per second. Software-only video codecs have evolved by modifying and tailoring existing compression methodologies to suit video playback in desktop computers. In this paper we examine the characteristics used to evaluate software-only video codec algorithms, namely: image fidelity (i.e., image quality), bandwidth (i.e., compression), ease of decoding (i.e., playback performance), memory consumption, compression-to-decompression asymmetry, scalability, and delay. We discuss the tradeoffs among these variables and the compromises that can be made to achieve low numerical complexity for software-only playback. Frame-differencing approaches are described, since software-only video codecs typically employ them to enhance playback performance. To complement other papers that appear in this session of the Proceedings, we review methods derived from binary pattern image coding, since these methods are amenable to software-only playback. In particular, we introduce a novel approach called pixel distribution image coding.
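A frame-differencing scheme of the kind described, a full key frame followed by thresholded per-pixel deltas, can be sketched as follows. This is a toy codec for illustration, not any of the production codecs discussed:

```python
import numpy as np

def encode_frames(frames, threshold=4):
    """Frame differencing: the first frame is a key frame sent in full;
    each later frame sends only the pixels whose change against the
    decoder's reference exceeds a threshold."""
    ref = frames[0].astype(np.int16)
    stream = [("key", frames[0].copy())]
    for f in frames[1:]:
        changed = np.abs(f.astype(np.int16) - ref) > threshold
        stream.append(("delta", np.argwhere(changed), f[changed]))
        ref[changed] = f[changed]      # track what the decoder will hold
    return stream

def decode_frames(stream):
    """Rebuild the sequence by patching each delta onto the prior frame."""
    frames = [stream[0][1].copy()]
    for _, coords, values in stream[1:]:
        f = frames[-1].copy()
        f[tuple(coords.T)] = values
        frames.append(f)
    return frames
```

Decoding is cheap because untouched pixels require no work at all, which is exactly why frame differencing suits software-only playback; the threshold trades fidelity for a smaller delta list.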
Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C
2011-01-01
Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian distribution of errors, this approach is not optimal for performance. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.
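The MEE criterion described above can be written as gradient ascent on the information potential, the pairwise Gaussian kernel sum over windowed errors. The following is a minimal software sketch assuming a Gaussian kernel of width sigma, not the paper's FPGA design:

```python
import numpy as np

def gaussian(u, sigma):
    return np.exp(-u * u / (2.0 * sigma * sigma))

def mee_train(X, d, sigma=1.0, lr=0.5, window=8, epochs=100):
    """Adapt linear filter weights by gradient ascent on the information
    potential of the errors in a sliding window, i.e. the minimum error
    entropy (MEE) criterion."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for n in range(window, len(d)):
            Xi = X[n - window:n]
            e = d[n - window:n] - Xi @ w
            diff = e[:, None] - e[None, :]            # pairwise error gaps
            kern = gaussian(diff, sigma) * diff
            pair_x = Xi[:, None, :] - Xi[None, :, :]  # pairwise input gaps
            grad = (kern[:, :, None] * pair_x).sum(axis=(0, 1))
            w += lr * grad / (window ** 2 * sigma ** 2)
    return w
```

The double sum over window pairs is the computational burden the paper addresses: each pair's kernel term is independent of the others, so the blocks map naturally onto parallel FPGA hardware. Note that entropy ignores the error mean, so a practical motor decoder also fixes the output bias separately.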
Facial decoding in schizophrenia is underpinned by basic visual processing impairments.
Belge, Jan-Baptist; Maurage, Pierre; Mangelinckx, Camille; Leleux, Dominique; Delatte, Benoît; Constant, Eric
2017-09-01
Schizophrenia is associated with a strong deficit in the decoding of emotional facial expression (EFE). Nevertheless, it is still unclear whether this deficit is specific for emotions or due to a more general impairment for any type of facial processing. This study was designed to clarify this issue. Thirty patients suffering from schizophrenia and 30 matched healthy controls performed several tasks evaluating the recognition of both changeable (i.e. eyes orientation and emotions) and stable (i.e. gender, age) facial characteristics. Accuracy and reaction times were recorded. Schizophrenic patients presented a performance deficit (accuracy and reaction times) in the perception of both changeable and stable aspects of faces, without any specific deficit for emotional decoding. Our results demonstrate a generalized face recognition deficit in schizophrenic patients, probably caused by a perceptual deficit in basic visual processing. It seems that the deficit in the decoding of emotional facial expression (EFE) is not a specific deficit of emotion processing, but is at least partly related to a generalized perceptual deficit in lower-level perceptual processing, occurring before the stage of emotion processing, and underlying more complex cognitive dysfunctions. These findings should encourage future investigations to explore the neurophysiologic background of these generalized perceptual deficits, and stimulate a clinical approach focusing on more basic visual processing. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Decoding Grasping Movements from the Parieto-Frontal Reaching Circuit in the Nonhuman Primate.
Nelissen, Koen; Fiave, Prosper Agbesi; Vanduffel, Wim
2018-04-01
Prehension movements typically include a reaching phase, guiding the hand toward the object, and a grip phase, shaping the hand around it. The dominant view posits that these components rely upon largely independent parieto-frontal circuits: a dorso-medial circuit involved in reaching and a dorso-lateral circuit involved in grasping. However, mounting evidence suggests a more complex arrangement, with dorso-medial areas contributing to both reaching and grasping. To investigate the role of the dorso-medial reaching circuit in grasping, we trained monkeys to reach-and-grasp different objects in the dark and determined if hand configurations could be decoded from functional magnetic resonance imaging (MRI) responses obtained from the reaching and grasping circuits. Indicative of their established role in grasping, object-specific grasp decoding was found in anterior intraparietal (AIP) area, inferior parietal lobule area PFG and ventral premotor region F5 of the lateral grasping circuit, and primary motor cortex. Importantly, the medial reaching circuit also conveyed robust grasp-specific information, as evidenced by significant decoding in parietal reach regions (particular V6A) and dorsal premotor region F2. These data support the proposed role of dorso-medial "reach" regions in controlling aspects of grasping and demonstrate the value of complementing univariate with more sensitive multivariate analyses of functional MRI (fMRI) data in uncovering information coding in the brain.
Dash, Prasanta K; Rai, Rhitu
2016-01-01
The evolutionarily frozen, genetically sterile and globally iconic fruit banana remained untouched by the green revolution, and researchers still face intrinsic impediments to its varietal improvement. Recently, this wonder crop entered the genomics era with the decoding of the structural genome of the doubled haploid Pahang genotype (AA genome constitution) of Musa acuminata. Its complex genome, decoded by hybrid sequencing strategies, revealed a panoply of genes and transcription factors involved in the process of sucrose conversion that imparts sweetness to its fruit. Historically, banana has faced pandemic bacterial, fungal, and viral diseases and a multitude of abiotic stresses that have ruined the livelihoods of small and marginal farmers and destroyed commercial plantations. Decoding the structural genome of this climacteric fruit has given impetus to a deeper understanding of the repertoire of genes involved in disease resistance, the mechanism of dwarfing to develop an ideal plant type, the process of parthenocarpy, and fruit ripening for better fruit quality. Further, comparative genomics will usher in the integration of information from its decoded genome and other monocots into field applications in banana related, but not limited, to yield enhancement, food security, livelihood assurance, and energy sustainability. In this mini review, we discuss pre- and post-genomic discoveries and highlight accomplishments in structural genomics, genetic engineering and forward genetics, with an aim to target genes and transcription factors for translational research in banana.
Simplified microprocessor design for VLSI control applications
NASA Technical Reports Server (NTRS)
Cameron, K.
1991-01-01
A design technique for microprocessors combining the simplicity of reduced instruction set computers (RISC's) with the richer instruction sets of complex instruction set computers (CISC's) is presented. They utilize the pipelined instruction decode and datapaths common to RISC's. Instruction invariant data processing sequences which transparently support complex addressing modes permit the formulation of simple control circuitry. Compact implementations are possible since neither complicated controllers nor large register sets are required.
Chung, Kuo-Liang; Huang, Chi-Chao; Hsu, Tsu-Chun
2017-09-04
In this paper, we propose a novel adaptive chroma subsampling-binding and luma-guided (ASBLG) chroma reconstruction method for screen content images (SCIs). After receiving the decoded luma image and the subsampled chroma image from the decoder, a fast winner-first voting strategy is proposed to identify the chroma subsampling scheme used prior to compression. The decoded luma image is then subsampled with the identified scheme, as was done for the chroma image, so that an accurate correlation can be inferred between the subsampled decoded luma image and the decoded subsampled chroma image. Accordingly, an adaptive sliding window-based and luma-guided chroma reconstruction method is proposed, together with an analysis of its computational complexity. Two quality metrics are used: the color peak signal-to-noise ratio (CPSNR) of the reconstructed chroma images and SCIs, and the gradient-based structure similarity index (CGSS) of the reconstructed SCIs. Based on 26 typical test SCIs and 6 JCT-VC test screen content video sequences (SCVs), experiments show that on average, the CPSNR gains of all the reconstructed UV images by 4:2:0(A)-ASBLG, SCIs by 4:2:0(MPEG-B)-ASBLG, and SCVs by 4:2:0(A)-ASBLG are 2.1 dB, 1.87 dB, and 1.87 dB, respectively, when compared with the other combinations. In terms of CPSNR and CGSS, CSBILINEAR-ASBLG for the test SCIs and CSBICUBIC-ASBLG for the test SCVs outperform the existing state-of-the-art combinations, where CSBILINEAR and CSBICUBIC denote the luma-aware chroma subsampling schemes of Wang et al.
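The basic operations the method builds on, 4:2:0(A) averaging subsampling, a baseline reconstruction, and the CPSNR metric, can be sketched as follows; the ASBLG reconstruction itself is not reproduced here:

```python
import numpy as np

def subsample_420A(plane):
    """4:2:0(A): replace each 2x2 block by its average (dimensions even)."""
    h, w = plane.shape
    return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample_nearest(sub):
    """Baseline chroma reconstruction: replicate each subsampled value
    over its 2x2 block (nearest-neighbour upsampling)."""
    return np.repeat(np.repeat(sub, 2, axis=0), 2, axis=1)

def cpsnr(ref, rec, peak=255.0):
    """PSNR between a reference plane and its reconstruction; the paper's
    CPSNR pools this over the colour channels."""
    mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak * peak / mse)
```

ASBLG improves on such baselines by applying the identified subsampling scheme to the decoded luma plane and using the resulting luma-chroma correlation, within a sliding window, to steer each reconstructed chroma sample; this matters most for SCIs, whose sharp text edges are blurred badly by naive upsampling.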
Bednar, Adam; Boland, Francis M; Lalor, Edmund C
2017-03-01
The human ability to localize sound is essential for monitoring our environment and helps us to analyse complex auditory scenes. Although the acoustic cues mediating sound localization have been established, it remains unknown how these cues are represented in human cortex. In particular, it is still a point of contention whether binaural and monaural cues are processed by the same or distinct cortical networks. In this study, participants listened to a sequence of auditory stimuli from different spatial locations while we recorded their neural activity using electroencephalography (EEG). The stimuli were presented over a loudspeaker array, which allowed us to deliver realistic, free-field stimuli in both the horizontal and vertical planes. Using a multivariate classification approach, we showed that it is possible to decode sound source location from scalp-recorded EEG. Robust and consistent decoding was shown for stimuli that provide binaural cues (i.e. Left vs. Right stimuli). Decoding location when only monaural cues were available (i.e. Front vs. Rear and elevational stimuli) was successful for a subset of subjects and showed less consistency. Notably, the spatio-temporal pattern of EEG features that facilitated decoding differed based on the availability of binaural and monaural cues. In particular, we identified neural processing of binaural cues at around 120 ms post-stimulus and found that monaural cues are processed later between 150 and 200 ms. Furthermore, different spatial activation patterns emerged for binaural and monaural cue processing. These spatio-temporal dissimilarities suggest the involvement of separate cortical mechanisms in monaural and binaural acoustic cue processing. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
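A multivariate classification decoder of the kind used here can be sketched with a nearest-centroid classifier over multichannel feature vectors. This is a deliberately simple stand-in, since the abstract does not specify the paper's exact classifier:

```python
import numpy as np

def nearest_centroid_decode(X_train, y_train, X_test):
    """Decode the stimulus class (e.g. Left vs. Right source location)
    from EEG feature vectors: assign each test trial to the class whose
    training centroid is nearest in feature space."""
    classes = np.unique(y_train)
    centroids = np.array([X_train[y_train == c].mean(axis=0)
                          for c in classes])
    dist = ((X_test[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(dist, axis=1)]
```

In practice the features would be channel-by-time EEG samples, accuracy would be estimated with cross-validation against chance level, and the discriminative weights examined over time, which is how the paper localizes binaural processing near 120 ms and monaural processing at 150-200 ms.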
Mukherjee, Jhumpa; Lucas, Robie L.; Zart, Matthew K.; Powell, Douglas R.; Day, Victor W.; Borovik, A. S.
2013-01-01
Mononuclear iron(III) complexes with terminal hydroxo ligands are proposed to be important species in several metalloproteins, but they have been difficult to isolate in synthetic systems. Using a series of amidate/ureido tripodal ligands, we have prepared and characterized monomeric FeIIIOH complexes with similar trigonal-bipyramidal primary coordination spheres. Three anionic nitrogen donors define the trigonal plane, and the hydroxo oxygen atom is trans to an apical amine nitrogen atom. The complexes have varied secondary coordination spheres that are defined by intramolecular hydrogen bonds between the FeIIIOH unit and the urea NH groups. Structural trends were observed between the number of hydrogen bonds and the Fe–Ohydroxo bond distances: the more intramolecular hydrogen bonds there were, the longer the Fe–O bond became. Spectroscopic trends were also found, including an increase in the energy of the O–H vibrations with a decrease in the number of hydrogen bonds. However, the FeIII/II reduction potentials were constant throughout the series (∼2.0 V vs [Cp2Fe]0/+1), which is ascribed to a balancing of the primary and secondary coordination-sphere effects. PMID:18498155
The design plan of a VLSI single chip (255, 223) Reed-Solomon decoder
NASA Technical Reports Server (NTRS)
Hsu, I. S.; Shao, H. M.; Deutsch, L. J.
1987-01-01
The very large-scale integration (VLSI) architecture of a single-chip (255, 223) Reed-Solomon decoder for decoding both errors and erasures is described. A decoding failure detection capability is also included in this system, so that the decoder will recognize a failure to decode instead of introducing additional errors. This can happen whenever the received word contains too many errors and erasures for the code to correct. The number of transistors needed to implement this decoder is estimated at about 75,000 if the delay for the received message is not included. This is in contrast to the older transform decoding algorithm, which needs about 100,000 transistors; however, the transform decoder has a simpler architecture than the time-domain decoder. It is therefore possible to implement a single-chip (255, 223) Reed-Solomon decoder with today's VLSI technology. An implementation strategy for the decoder system is presented. This represents the first step in a plan to take advantage of advanced coding techniques to realize a 2.0 dB coding gain for future space missions.
Analysis of electromagnetic scattering by uniaxial anisotropic bispheres.
Li, Zheng-Jun; Wu, Zhen-Sen; Li, Hai-Ying
2011-02-01
Based on the generalized multiparticle Mie theory and the Fourier transformation approach, electromagnetic (EM) scattering of two interacting homogeneous uniaxial anisotropic spheres with parallel primary optical axes is investigated. By introducing the Fourier transformation, the EM fields in the uniaxial anisotropic spheres are expanded in terms of the spherical vector wave functions. The interactive scattering coefficients and the expansion coefficients of the internal fields are derived through the continuous boundary conditions on which the interaction of the bispheres is considered. Some selected calculations on the effects of the size parameter, the uniaxial anisotropic absorbing dielectric, and the sphere separation distance are described. The backward radar cross section of two uniaxial anisotropic spheres with a complex permittivity tensor changing with the sphere separation distance is numerically studied. The authors are hopeful that the work in this paper will help provide an effective calibration for further research on the scattering characteristic of an aggregate of anisotropic spheres or other shaped anisotropic particles.
The serial message-passing schedule for LDPC decoding algorithms
NASA Astrophysics Data System (ADS)
Liu, Mingshan; Liu, Shanshan; Zhou, Yuan; Jiang, Xue
2015-12-01
The conventional message-passing schedule for LDPC decoding algorithms is the so-called flooding schedule. It has the disadvantage that updated messages cannot be used until the next iteration, which reduces the convergence speed. To address this, the layered belief propagation (LBP) algorithm, based on a serial message-passing schedule, has been proposed. This paper briefly introduces the decoding principle of the LBP algorithm and then proposes two improved algorithms: the grouped serial decoding algorithm (Grouped LBP) and the semi-serial decoding algorithm. They improve the decoding speed of LBP while maintaining good decoding performance.
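The serial (layered) schedule described above can be sketched on a toy code. The (7,4) Hamming parity-check matrix below is an illustrative stand-in for a real LDPC matrix, and the min-sum check update is one common BP approximation; the point of the sketch is that posterior LLRs are refreshed after every layer, so later layers within the same iteration already see updated messages.

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])  # (7,4) Hamming code, a toy stand-in

def decode_layered_minsum(llr, H, max_iter=10):
    """Layered (serial) min-sum: each check row is one 'layer'. Posterior
    LLRs L are updated immediately after each layer, unlike the flooding
    schedule, where all updates wait until the end of the iteration."""
    m, n = H.shape
    L = np.asarray(llr, dtype=float).copy()
    R = np.zeros((m, n))                      # check-to-variable messages
    for _ in range(max_iter):
        for i in range(m):                    # one layer = one check node
            idx = np.flatnonzero(H[i])
            Q = L[idx] - R[i, idx]            # variable-to-check messages
            for k, j in enumerate(idx):
                others = np.delete(Q, k)
                R[i, j] = np.prod(np.sign(others)) * np.min(np.abs(others))
            L[idx] = Q + R[i, idx]            # immediate posterior update
        hard = (L < 0).astype(int)
        if not np.any(H @ hard % 2):          # syndrome check: stop early
            break
    return hard
```

On a received word with one weakly wrong LLR, the first layer already corrects the sign, so the syndrome check passes within a single iteration.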
Mapping visual stimuli to perceptual decisions via sparse decoding of mesoscopic neural activity.
Sajda, Paul
2010-01-01
In this talk I will describe our work investigating sparse decoding of neural activity, given a realistic mapping of the visual scene to neuronal spike trains generated by a model of primary visual cortex (V1). We use a linear decoder which imposes sparsity via an L1 norm. The decoder can be viewed as a decoding neuron (linear summation followed by a sigmoidal nonlinearity) in which there are relatively few non-zero synaptic weights. We find: (1) the best decoding performance is for a representation that is sparse in both space and time, (2) decoding of a temporal code results in better performance than a rate code and is also a better fit to the psychophysical data, (3) the number of neurons required for decoding increases monotonically as signal-to-noise in the stimulus decreases, with as few as 1% of the neurons required for decoding at the highest signal-to-noise levels, and (4) sparse decoding results in a more accurate decoding of the stimulus and is a better fit to psychophysical performance than a distributed decoding, for example one imposed by an L2 norm. We conclude that sparse coding is well-justified from a decoding perspective in that it results in a minimum number of neurons and maximum accuracy when sparse representations can be decoded from the neural dynamics.
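The sparsity mechanism, an L1 penalty on the decoder weights, can be sketched with a plain ISTA solver. This is a generic lasso fit on assumed toy data, not the talk's V1 decoding pipeline (which adds a sigmoidal output stage); it shows why the L1 norm leaves only a few non-zero "synaptic weights".

```python
import numpy as np

def ista_lasso(X, y, lam=0.1, step=None, n_iter=500):
    """L1-regularized linear decoder fit by iterative soft-thresholding.
    Minimizes 0.5*||Xw - y||^2 + lam*||w||_1."""
    n, d = X.shape
    if step is None:
        step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant
    w = np.zeros(d)
    for _ in range(n_iter):
        g = X.T @ (X @ w - y)                    # gradient of the data term
        w = w - step * g
        # soft-threshold: the step that zeroes out uninformative weights
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)
    return w
```

On synthetic data where a single "neuron" carries the signal, the fitted weight vector is exactly sparse: one large weight, the rest driven to zero.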
Solar Proton Transport within an ICRU Sphere Surrounded by a Complex Shield: Combinatorial Geometry
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2015-01-01
The 3DHZETRN code, with improved neutron and light ion (Z (is) less than 2) transport procedures, was recently developed and compared to Monte Carlo (MC) simulations using simplified spherical geometries. It was shown that 3DHZETRN agrees with the MC codes to the extent they agree with each other. In the present report, the 3DHZETRN code is extended to enable analysis in general combinatorial geometry. A more complex shielding structure with internal parts surrounding a tissue sphere is considered and compared against MC simulations. It is shown that even in the more complex geometry, 3DHZETRN agrees well with the MC codes and maintains a high degree of computational efficiency.
Image transmission system using adaptive joint source and channel decoding
NASA Astrophysics Data System (ADS)
Liu, Weiliang; Daut, David G.
2005-03-01
In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are then sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder. The log-likelihood ratios (LLR) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. That is, for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the non-source-controlled decoding method by up to 5 dB in terms of PSNR for various reconstructed images.
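A hedged sketch of the feedback step: the weighting function below is a hypothetical monotone choice standing in for the statistically designed one in the paper, keeping only its stated property (larger weight at lower channel SNR), and the treatment of flagged-erroneous bits is likewise an assumption.

```python
import numpy as np

def reweight_llrs(llr, known_correct, known_error, snr_db):
    """Adjust channel-decoder LLRs using source-decoder feedback.
    known_correct / known_error: bit positions the JPEG2000 error-resilience
    modes have validated or flagged. The weight w(SNR) is an assumed form,
    monotone decreasing in SNR, in (1, 5]."""
    w = 1.0 + 4.0 / (1.0 + 10 ** (snr_db / 10.0))
    llr = np.asarray(llr, dtype=float).copy()
    llr[known_correct] *= w           # boost confidence in validated bits
    llr[known_error] /= w             # attenuate bits flagged as erroneous
    return llr
```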
The "periodic table" of the genetic code: A new way to look at the code and the decoding process.
Komar, Anton A
2016-01-01
Henri Grosjean and Eric Westhof recently presented an information-rich, alternative view of the genetic code, which takes into account current knowledge of the decoding process, including the complex nature of interactions between mRNA, tRNA and rRNA that take place during protein synthesis on the ribosome, and it also better reflects the evolution of the code. The new asymmetrical circular genetic code has a number of advantages over the traditional codon table and the previous circular diagrams (with a symmetrical/clockwise arrangement of the U, C, A, G bases). Most importantly, all sequence co-variances can be visualized and explained based on the internal logic of the thermodynamics of codon-anticodon interactions.
Force spectroscopy of biomolecular folding and binding: theory meets experiment
NASA Astrophysics Data System (ADS)
Dudko, Olga
2015-03-01
Conformational transitions in biological macromolecules usually serve as the mechanism that brings biomolecules into their working shape and enables their biological function. Single-molecule force spectroscopy probes conformational transitions by applying force to individual macromolecules and recording their response, or "mechanical fingerprints," in the form of force-extension curves. However, how can we decode these fingerprints so that they reveal the kinetic barriers and the associated timescales of a biological process? I will present an analytical theory of the mechanical fingerprints of macromolecules. The theory is suitable for decoding such fingerprints to extract the barriers and timescales. The application of the theory will be illustrated through recent studies on protein-DNA interactions and the receptor-ligand complexes involved in blood clot formation.
Coding/decoding and reversibility of droplet trains in microfluidic networks.
Fuerstman, Michael J; Garstecki, Piotr; Whitesides, George M
2007-02-09
Droplets of one liquid suspended in a second, immiscible liquid move through a microfluidic device in which a channel splits into two branches that reconnect downstream. The droplets choose a path based on the number of droplets that occupy each branch. The interaction among droplets in the channels results in complex sequences of path selection. The linearity of the flow through the microchannels, however, ensures that the behavior of the system can be reversed. This reversibility makes it possible to encrypt and decrypt signals coded in the intervals between droplets. The encoding/decoding device is a functional microfluidic system that requires droplets to navigate a network in a precise manner without the use of valves, switches, or other means of external control.
A long constraint length VLSI Viterbi decoder for the DSN
NASA Technical Reports Server (NTRS)
Statman, J. I.; Zimmerman, G.; Pollara, F.; Collins, O.
1988-01-01
A Viterbi decoder, capable of decoding convolutional codes with constraint lengths up to 15, is under development for the Deep Space Network (DSN). The objective is to complete a prototype of this decoder by late 1990, and demonstrate its performance using the (15, 1/4) encoder in Galileo. The decoder is expected to provide 1 to 2 dB improvement in bit SNR, compared to the present (7, 1/2) code and existing Maximum Likelihood Convolutional Decoder (MCD). The decoder will be fully programmable for any code up to constraint length 15, and code rate 1/2 to 1/6. The decoder architecture and top-level design are described.
Decoding small surface codes with feedforward neural networks
NASA Astrophysics Data System (ADS)
Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen
2018-01-01
Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and can reach similar or better decoding performance than previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.
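The reduction from decoding to classification can be seen on the smallest possible example, a distance-3 bit-flip repetition code with stabilizers Z1Z2 and Z2Z3. This is a toy stand-in, not a surface code: the "training" below enumerates error probabilities to build the syndrome-to-correction map that a feedforward network would instead learn from samples.

```python
import itertools

def syndrome(err):
    """Parity checks of the 3-bit repetition code: (e0^e1, e1^e2)."""
    return (err[0] ^ err[1], err[1] ^ err[2])

def train_decoder(p=0.1):
    """For each syndrome, pick the most probable error pattern under an
    i.i.d. bit-flip channel with flip probability p. A neural decoder
    approximates exactly this syndrome -> correction classification."""
    best = {}
    for err in itertools.product((0, 1), repeat=3):
        prob = 1.0
        for e in err:
            prob *= p if e else (1 - p)
        s = syndrome(err)
        if s not in best or prob > best[s][0]:
            best[s] = (prob, err)
    return {s: err for s, (prob, err) in best.items()}

decoder = train_decoder()
```

With only four syndromes the full map fits in a table; the appeal of the neural network is that it keeps this constant-time lookup flavor at code distances where the table would be astronomically large.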
Parametric study of power absorption from electromagnetic waves by small ferrite spheres
NASA Technical Reports Server (NTRS)
Englert, Gerald W.
1989-01-01
Algebraic expressions in terms of elementary mathematical functions are derived for power absorption and dissipation by eddy currents and magnetic hysteresis in ferrite spheres. Skin depth is determined by using a variable inner radius in descriptive integral equations. Numerical results are presented for sphere diameters less than one wavelength. A generalized power absorption parameter for both eddy currents and hysteresis is expressed in terms of the independent parameters involving wave frequency, sphere radius, resistivity, and complex permeability. In general, the hysteresis phenomenon has a greater sensitivity to these independent parameters than do eddy currents over the ranges of independent parameters studied herein. Working curves are presented for obtaining power losses from input to the independent parameters.
Free Energy Calculations of Crystalline Hard Sphere Complexes Using Density Functional Theory
Gunawardana, K. G.S.H.; Song, Xueyu
2014-12-22
Recently developed fundamental measure density functional theory (FMT) is used to study binary hard sphere (HS) complexes in crystalline phases. By comparing the excess free energy, pressure, and phase diagram, we show that the fundamental measure functional yields good agreement with the available simulation results for AB, AB2, and AB13 crystals. Additionally, we use this functional to study the HS models of five binary crystals, Cu5Zr (C15b), Cu51Zr14 (β), Cu10Zr7 (φ), CuZr (B2), and CuZr2 (C11b), which are observed in the Cu-Zr system. The FMT functional gives a well-behaved minimum for most of the hard sphere crystal complexes in the two-dimensional Gaussian space, namely a crystalline phase. However, the current version of the FMT functional (White Bear) fails to give a stable minimum for the structure Cu10Zr7 (φ). We argue that the observed solid phases for the HS models of the Cu-Zr system are true thermodynamically stable phases and can be used as a reference system in perturbation calculations.
A study on adsorption mechanism of organoarsenic compounds on ferrihydrite by XAFS
NASA Astrophysics Data System (ADS)
Tanaka, M.; Takahashi, Y.; Yamaguchi, N.
2013-04-01
Anthropogenic organoarsenic compounds, which were used as agrochemicals, pesticides, and herbicides, are a potential source of arsenic pollution in water. The adsorption of arsenic onto mineral surfaces in soil may play an important role in determining arsenic distribution at the solid-water interface. However, the adsorption structures of organoarsenic compounds on iron (oxyhydr)oxides are not well known. In this study, extended X-ray absorption fine structure (EXAFS) spectroscopy was employed to determine the adsorption structures of methyl- and phenyl-substituted organoarsenic compounds (methylarsonic acid (MMA), dimethylarsinic acid (DMA), phenylarsonic acid (PAA), and diphenylarsinic acid (DPAA)) on ferrihydrite, which is a strong adsorbent of arsenic. The EXAFS analysis suggests the formation of inner-sphere surface complexes for all organoarsenic compounds on ferrihydrite, regardless of the organic functional groups and the number of substitutions. The As-Fe distances are around 3.27 Å, which suggests both mono- and bidentate inner-sphere complexes according to DFT calculations. The corresponding coordination numbers (CNs) are less than two, suggesting the coexistence of both inner-sphere complex structures.
Acquiring the Complex English Orthography: A Triliteracy Advantage?
ERIC Educational Resources Information Center
Kahn-Horwitz, Janina; Schwartz, Mila; Share, David
2011-01-01
The "script-dependence hypothesis" was tested through the examination of the impact of Russian and Hebrew literacy on English orthographic knowledge needed for spelling and decoding among fifth graders. We compared the performance of three groups: Russian-Hebrew-speaking emerging triliterates, Russian-Hebrew-speaking emerging biliterates who were…
High-throughput illumina strand-specific RNA sequencing library preparation
USDA-ARS?s Scientific Manuscript database
Conventional Illumina RNA-Seq does not have the resolution to decode the complex eukaryote transcriptome due to the lack of RNA polarity information. Strand-specific RNA sequencing (ssRNA-Seq) can overcome these limitations and as such is better suited for genome annotation, de novo transcriptome as...
How Can I Help My Struggling Readers?
ERIC Educational Resources Information Center
Duke, Nell K.; Pressley, Michael
2005-01-01
The reasons some children struggle with reading are as varied as the children themselves. From trouble decoding words to problems retaining information, reading difficulties are complex. All kids, says the International Reading Association, "have a right to instruction designed with their specific needs in mind." The question is how to identify…
Field Day: A Case Study examining scientists’ oral performance skills
USDA-ARS?s Scientific Manuscript database
Communication is a complex cyclic process wherein senders and receivers encode and decode information in an effort to reach a state of mutuality or mutual understanding. When the communication of scientific or technical information occurs in a public space, effective speakers follow a formula for co...
Eliminating ambiguity in digital signals
NASA Technical Reports Server (NTRS)
Weber, W. J., III
1979-01-01
A multiamplitude minimum shift keying (MAMSK) transmission system with differential encoding overcomes the phase-ambiguity problem associated with advanced digital-transmission techniques, with little or no penalty in transmission rate, error rate, or system complexity. The principle of the method is that, if signal points are properly encoded and decoded, bits are detected correctly regardless of phase ambiguities.
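The principle can be sketched in the binary case: differential (XOR) encoding makes the decoded data invariant to a wholesale inversion of the received stream, the binary analogue of a 180-degree phase ambiguity. MAMSK itself uses a multiamplitude signal set, so this is only the underlying idea, not the paper's system.

```python
def diff_encode(bits):
    """Transmit changes rather than absolute levels: e[i] = d[i] XOR e[i-1]."""
    out, prev = [], 0
    for b in bits:
        prev ^= b
        out.append(prev)
    return out

def diff_decode(enc, ref=0):
    """Recover d[i] = e[i] XOR e[i-1]; an overall inversion of the stream
    (and of the reference symbol) cancels out in the XOR of neighbors."""
    out, prev = [], ref
    for e in enc:
        out.append(e ^ prev)
        prev = e
    return out
```

Flipping every transmitted bit, as a phase ambiguity would, leaves the decoded data unchanged as long as the reference is flipped with it.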
Decoding the ecological function of accessory genome
USDA-ARS?s Scientific Manuscript database
Shiga toxin-producing Escherichia coli O157:H7 primarily resides in cattle asymptomatically, and can be transmitted to humans through food. A study by Lupolova et al applied a machine-learning approach to complex pan-genome information and predicted that only a small subset of bovine isolates have t...
How should spin-weighted spherical functions be defined?
NASA Astrophysics Data System (ADS)
Boyle, Michael
2016-09-01
Spin-weighted spherical functions provide a useful tool for analyzing tensor-valued functions on the sphere. A tensor field can be decomposed into complex-valued functions by taking contractions with tangent vectors on the sphere and the normal to the sphere. These component functions are usually presented as functions on the sphere itself, but this requires an implicit choice of distinguished tangent vectors with which to contract. Thus, we may more accurately say that spin-weighted spherical functions are functions of both a point on the sphere and a choice of frame in the tangent space at that point. The distinction becomes extremely important when transforming the coordinates in which these functions are expressed, because the implicit choice of frame will also transform. Here, it is proposed that spin-weighted spherical functions should be treated as functions on the spin or rotation groups, which simultaneously tracks the point on the sphere and the choice of tangent frame by rotating elements of an orthonormal basis. In practice, the functions simply take a quaternion argument and produce a complex value. This approach more cleanly reflects the geometry involved, and allows for a more elegant description of the behavior of spin-weighted functions. In this form, the spin-weighted spherical harmonics have simple expressions as elements of the Wigner 𝔇 representations, and transformations under rotation are simple. Two variants of the angular-momentum operator are defined directly in terms of the spin group; one is the standard angular-momentum operator L, while the other is shown to be related to the spin-raising operator ð.
Mollazadeh, Mohsen; Davidson, Adam G.; Schieber, Marc H.; Thakor, Nitish V.
2013-01-01
The performance of brain-machine interfaces (BMIs) that continuously control upper limb neuroprostheses may benefit from distinguishing periods of posture and movement so as to prevent inappropriate movement of the prosthesis. Few studies, however, have investigated how decoding behavioral states and detecting the transitions between posture and movement could be used autonomously to trigger a kinematic decoder. We recorded simultaneous neuronal ensemble and local field potential (LFP) activity from microelectrode arrays in primary motor cortex (M1) and dorsal (PMd) and ventral (PMv) premotor areas of two male rhesus monkeys performing a center-out reach-and-grasp task, while upper limb kinematics were tracked with a motion capture system with markers on the dorsal aspect of the forearm, hand, and fingers. A state decoder was trained to distinguish four behavioral states (baseline, reaction, movement, hold), while a kinematic decoder was trained to continuously decode hand end point position and 18 joint angles of the wrist and fingers. LFP amplitude most accurately predicted transition into the reaction (62%) and movement (73%) states, while spikes most accurately decoded arm, hand, and finger kinematics during movement. Using an LFP-based state decoder to trigger a spike-based kinematic decoder [r = 0.72, root mean squared error (RMSE) = 0.15] significantly improved decoding of reach-to-grasp movements from baseline to final hold, compared with either a spike-based state decoder combined with a spike-based kinematic decoder (r = 0.70, RMSE = 0.17) or a spike-based kinematic decoder alone (r = 0.67, RMSE = 0.17). Combining LFP-based state decoding with spike-based kinematic decoding may be a valuable step toward the realization of BMI control of a multifingered neuroprosthesis performing dexterous manipulation. PMID:23536714
NASA Astrophysics Data System (ADS)
Ridley, Moira K.; Hiemstra, Tjisse; van Riemsdijk, Willem H.; Machesky, Michael L.
2009-04-01
Acid-base reactivity and ion-interaction between mineral surfaces and aqueous solutions is most frequently investigated at the macroscopic scale as a function of pH. Experimental data are then rationalized by a variety of surface complexation models. These models are thermodynamically based, which in principle does not require a molecular picture. The models are typically calibrated to relatively simple solid-electrolyte solution pairs and may provide poor descriptions of complex multi-component mineral-aqueous solutions, including those found in natural environments. Surface complexation models may be improved by incorporating molecular-scale surface structural information to constrain the modeling efforts. Here, we apply a concise, molecularly-constrained surface complexation model to a diverse suite of surface titration data for rutile and thereby begin to address the complexity of multi-component systems. Primary surface charging curves in NaCl, KCl, and RbCl electrolyte media were fit simultaneously using a charge distribution (CD) and multisite complexation (MUSIC) model [Hiemstra T. and Van Riemsdijk W. H. (1996) A surface structural approach to ion adsorption: the charge distribution (CD) model. J. Colloid Interf. Sci. 179, 488-508], coupled with a Basic Stern layer description of the electric double layer. In addition, data for the specific interaction of Ca2+ and Sr2+ with rutile, in NaCl and RbCl media, were modeled. In recent developments, spectroscopy, quantum calculations, and molecular simulations have shown that electrolyte and divalent cations are principally adsorbed in various inner-sphere configurations on the rutile (110) surface [Zhang Z., Fenter P., Cheng L., Sturchio N. C., Bedzyk M. J., Předota M., Bandura A., Kubicki J., Lvov S. N., Cummings P. T., Chialvo A. A., Ridley M. K., Bénézeth P., Anovitz L., Palmer D. A., Machesky M. L. and Wesolowski D. J. 
(2004) Ion adsorption at the rutile-water interface: linking molecular and macroscopic properties. Langmuir 20, 4954-4969]. Our CD modeling results are consistent with these adsorbed configurations, provided adsorbed cation charge is allowed to be distributed between the surface (0-plane) and Stern plane (1-plane). Additionally, a complete description of our titration data required inclusion of outer-sphere binding, principally for Cl-, which was common to all solutions, but also for Rb+ and K+. These outer-sphere species were treated as point charges positioned at the Stern layer, and hence determined the Stern layer capacitance value. The modeling results demonstrate that a multi-component suite of experimental data can be successfully rationalized within a CD and MUSIC model using a Stern-based description of the EDL. Furthermore, the fitted CD values of the various inner-sphere complexes of the mono- and divalent ions can be linked to the microscopic structure of the surface complexes and other data found by spectroscopy as well as molecular dynamics (MD). For the Na+ ion, the fitted CD value points to the presence of bidentate inner-sphere complexation, as suggested by a recent MD study. Moreover, its MD dominance quantitatively agrees with the CD model prediction. For Rb+, the presence of a tetradentate complex, as found by spectroscopy, agreed well with the fitted CD, and its predicted presence was quantitatively in very good agreement with the amount found by spectroscopy.
Design of convolutional tornado code
NASA Astrophysics Data System (ADS)
Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu
2017-09-01
As a linear block code, the traditional tornado (tTN) code is inefficient in a burst-erasure environment, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code, which improves burst-erasure protection by applying the convolution property to the tTN code and reduces computational complexity by abrogating the multi-level structure. Simulation results show that the cTN code provides better packet-loss protection with lower computational complexity than the tTN code.
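The elementary operation underlying tornado-style erasure codes is an XOR parity over a group of packets, which lets any single erasure in the group be recovered. The sketch below shows only that building block, not the tTN or cTN constructions.

```python
def xor_parity(blocks):
    """Single XOR parity over a group of source packets (here, integers
    standing in for packet payloads)."""
    out = 0
    for b in blocks:
        out ^= b
    return out

def recover(blocks_with_none, parity):
    """Recover one erased packet (marked None) from the group's parity:
    XOR of the parity with all surviving packets restores the missing one."""
    missing = blocks_with_none.index(None)
    acc = parity
    for i, b in enumerate(blocks_with_none):
        if i != missing:
            acc ^= b
    out = list(blocks_with_none)
    out[missing] = acc
    return out
```

Tornado codes layer many such parities in a sparse graph; the multi-level structure the abstract mentions is what cTN replaces with a convolutional arrangement.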
Bayesian decoding using unsorted spikes in the rat hippocampus
Layton, Stuart P.; Chen, Zhe; Wilson, Matthew A.
2013-01-01
A fundamental task in neuroscience is to understand how neural ensembles represent information. Population decoding is a useful tool to extract information from neuronal populations based on the ensemble spiking activity. We propose a novel Bayesian decoding paradigm to decode unsorted spikes in the rat hippocampus. Our approach uses a direct mapping between spike waveform features and covariates of interest and avoids accumulation of spike sorting errors. Our decoding paradigm is nonparametric, encoding model-free for representing stimuli, and extracts information from all available spikes and their waveform features. We apply the proposed Bayesian decoding algorithm to a position reconstruction task for freely behaving rats based on tetrode recordings of rat hippocampal neuronal activity. Our detailed decoding analyses demonstrate that our approach is efficient and better utilizes the available information in the nonsortable hash than the standard sorting-based decoding algorithm. Our approach can be adapted to an online encoding/decoding framework for applications that require real-time decoding, such as brain-machine interfaces. PMID:24089403
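The classical sorted-spike version of Bayesian population decoding that this work generalizes can be sketched directly from the Poisson likelihood; the paper's contribution replaces sorted units with spike-waveform features, which this sketch does not attempt. The tuning curves below are assumed toy values.

```python
import numpy as np

def bayes_decode(counts, tuning, dt=1.0, prior=None):
    """Bayesian population decoding with independent Poisson units.
    counts[c]: spikes from cell c in the time bin.
    tuning[c, x]: expected firing rate of cell c at position bin x.
    Returns the posterior over position bins."""
    rates = tuning * dt
    # Poisson log-likelihood up to count-dependent constants
    log_like = counts @ np.log(rates) - rates.sum(axis=0)
    if prior is None:
        prior = np.full(tuning.shape[1], 1.0 / tuning.shape[1])
    log_post = log_like + np.log(prior)
    log_post -= log_post.max()         # stabilize before exponentiating
    post = np.exp(log_post)
    return post / post.sum()
```

With two cells tuned to opposite positions, spikes from either cell pull the posterior toward that cell's preferred location.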
Ma, Xuan; Ma, Chaolin; Huang, Jian; Zhang, Peng; Xu, Jiang; He, Jiping
2017-01-01
An extensive literature has described approaches for decoding upper limb kinematics or muscle activity from multichannel cortical spike recordings for brain machine interface (BMI) applications. However, similar work on the lower limb remains relatively scarce. We previously reported a system for training monkeys to perform visually guided stand and squat tasks. The current study, as a follow-up extension, investigates whether lower limb kinematics and muscle activity, characterized by electromyography (EMG) signals recorded while monkeys perform stand/squat movements, can be accurately decoded from neural spike trains in primary motor cortex (M1). Two monkeys were used in this study. Subdermal intramuscular EMG electrodes were implanted into 8 right leg/thigh muscles. With ample data collected from neurons over a large brain area, we performed a spike triggered average (SpTA) analysis and obtained a series of density contours which revealed the spatial distributions of the muscle-innervating neurons corresponding to each given muscle. Guided by these results, we identified the locations optimal for chronic electrode implantation and subsequently carried out chronic neural data recordings. A recursive Bayesian estimation framework was proposed for decoding EMG signals together with kinematics from M1 spike trains. Two specific algorithms were implemented: a standard Kalman filter and an unscented Kalman filter. For the latter, an artificial neural network was incorporated to deal with the nonlinearity in neural tuning. High correlation coefficients and signal-to-noise ratios between the predicted and the actual data were achieved for both EMG signals and kinematics on both monkeys. Higher decoding accuracy and a faster convergence rate could be achieved with the unscented Kalman filter.
These results demonstrate that lower limb EMG signals and kinematics during monkey stand/squat can be accurately decoded from a group of M1 neurons with the proposed algorithms. Our findings provide new insights for extending current BMI design concepts and techniques from the upper limb to lower limb applications. Brain-controlled exoskeletons, prostheses, and neuromuscular electrical stimulators for the lower limbs are expected to be developed, enabling subjects to manipulate complex biomechatronic devices with the mind in a more harmonized manner. PMID:28223914
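The standard Kalman filter used as the first decoding algorithm can be illustrated in its simplest scalar form (a toy random-walk kinematic state and made-up tuning/noise parameters, not the study's multichannel model):

```python
def kalman_decode(rates, h=2.0, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter: random-walk kinematic state x, observation z = h*x + noise.

    h, q, r are assumed tuning-gain, process-noise and observation-noise values.
    """
    x, p = x0, p0
    estimates = []
    for z in rates:
        p = p + q                       # predict: random-walk state model
        k = p * h / (h * h * p + r)     # Kalman gain
        x = x + k * (z - h * x)         # correct with the innovation z - h*x
        p = (1.0 - k * h) * p
        estimates.append(x)
    return estimates

# Noise-free sanity run: firing rates generated by a constant position x = 1.0
est = kalman_decode([2.0] * 50)
```

With a constant noise-free observation the estimate converges to the true state, illustrating the predict/correct recursion; the unscented variant replaces the linear observation z = h*x with a nonlinear tuning model.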
Dash, Prasanta K.; Rai, Rhitu
2016-01-01
Evolutionarily frozen, genetically sterile and globally iconic, the fruit “Banana” remained untouched by the green revolution and, as of today, researchers face intrinsic impediments to its varietal improvement. Recently, this wonder crop entered the genomics era with the decoding of the structural genome of the double haploid Pahang (AA genome constitution) genotype of Musa acuminata. Its complex genome, decoded by hybrid sequencing strategies, revealed a panoply of genes and transcription factors involved in the process of sucrose conversion that imparts sweetness to its fruit. Historically, banana has faced the wrath of pandemic bacterial, fungal, and viral diseases and a multitude of abiotic stresses that have ruined the livelihoods of small and marginal farmers and destroyed commercial plantations. Decoding the structural genome of this climacteric fruit has given impetus to a deeper understanding of the repertoire of genes involved in disease resistance, the mechanism of dwarfing to develop an ideal plant type, the process of parthenocarpy, and fruit ripening for better fruit quality. Further, comparative genomics will usher in the integration of information from its decoded genome and those of other monocots into field applications in banana related but not limited to yield enhancement, food security, livelihood assurance, and energy sustainability. In this mini review, we discuss pre- and post-genomic discoveries and highlight accomplishments in structural genomics, genetic engineering, and forward genetics, with an aim to target genes and transcription factors for translational research in banana. PMID:27833619
Code of Federal Regulations, 2010 CFR
2010-10-01
... decoders manufactured after August 1, 2003 must provide a means to permit the selective display and logging... upgrade their decoders on an optional basis to include a selective display and logging capability for EAS... decoders after February 1, 2004 must install decoders that provide a means to permit the selective display...
A real-time MPEG software decoder using a portable message-passing library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwong, Man Kam; Tang, P.T. Peter; Lin, Biquan
1995-12-31
We present a real-time MPEG software decoder that uses message-passing libraries such as MPL, p4 and MPI. The parallel MPEG decoder currently runs on the IBM SP system but can be easily ported to other parallel machines. This paper discusses our parallel MPEG decoding algorithm as well as the parallel programming environment under which it runs. Several technical issues are discussed, including balancing of decoding speed, memory limitations, I/O capacities, and optimization of MPEG decoding components. This project shows that a real-time portable software MPEG decoder is feasible on a general-purpose parallel machine.
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability. Therefore, MLD becomes suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes and multi-stage or iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
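The bitwise MAP rule described here can be illustrated by brute-force enumeration on a toy code (a sketch assuming a (3,1) repetition code, BPSK mapping, and an arbitrary noise variance; practical MAP decoders compute the same posteriors without enumerating all codewords):

```python
import math

# Hypothetical tiny example: the (3,1) repetition code over BPSK (+1/-1)
# on an AWGN channel with assumed noise variance sigma2.
codewords = [(0, 0, 0), (1, 1, 1)]
sigma2 = 0.5

def likelihood(r, c, sigma2):
    """Gaussian likelihood of received vector r given codeword c (BPSK: 0 -> +1, 1 -> -1)."""
    return math.exp(-sum((ri - (1 - 2 * ci)) ** 2 for ri, ci in zip(r, c)) / (2 * sigma2))

def map_bit_decisions(r):
    """Per-bit MAP decisions and posteriors P(b_i = 1 | r).

    Bitwise MAP: P(b_i = b | r) is proportional to the summed likelihoods of
    all codewords whose i-th bit equals b, unlike MLD which picks one codeword.
    """
    post1 = []
    for i in range(len(r)):
        num = sum(likelihood(r, c, sigma2) for c in codewords if c[i] == 1)
        den = sum(likelihood(r, c, sigma2) for c in codewords)
        post1.append(num / den)
    return [int(p > 0.5) for p in post1], post1

bits, post = map_bit_decisions((0.9, -0.2, 0.7))  # mildly noisy all-zero transmission
```

The returned posteriors are exactly the soft information the chapter refers to: they can be passed unquantized to a subsequent decoding stage instead of the hard decisions.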
Solar proton exposure of an ICRU sphere within a complex structure Part I: Combinatorial geometry.
Wilson, John W; Slaba, Tony C; Badavi, Francis F; Reddell, Brandon D; Bahadori, Amir A
2016-06-01
The 3DHZETRN code, with improved neutron and light ion (Z≤2) transport procedures, was recently developed and compared to Monte Carlo (MC) simulations using simplified spherical geometries. It was shown that 3DHZETRN agrees with the MC codes to the extent they agree with each other. In the present report, the 3DHZETRN code is extended to enable analysis in general combinatorial geometry. A more complex shielding structure with internal parts surrounding a tissue sphere is considered and compared against MC simulations. It is shown that even in the more complex geometry, 3DHZETRN agrees well with the MC codes and maintains a high degree of computational efficiency. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Nagaraj, Karuppiah; Senthil Murugan, Krishnan; Thangamuniyandi, Pilavadi; Sakthinathan, Subramanian
2015-05-01
The kinetics of the outer sphere electron transfer reactions of the surfactant cobalt(III) complex ions cis-[Co(en)2(C12H25NH2)2]3+ (1), cis-[Co(dp)2(C12H25NH2)2]3+ (2), cis-[Co(trien)(C12H25NH2)2]3+ (3), cis-[Co(bpy)2(C12H25NH2)2]3+ (4) and cis-[Co(phen)2(C12H25NH2)2]3+ (5) (en: ethylenediamine; dp: diaminopropane; trien: triethylenetetramine; bpy: 2,2′-bipyridyl; phen: 1,10-phenanthroline; C12H25NH2: dodecylamine) have been investigated with the Fe2+ ion in an ionic liquid (1-butyl-3-methylimidazolium bromide) medium at different temperatures (298, 303, 308, 313, 318 and 323 K) by spectrophotometry under pseudo-first-order conditions using an excess of the reductant. The reactions were found experimentally to be second order, and the electron transfer to proceed by an outer sphere mechanism. The second-order rate constant for the electron transfer reaction in the ionic liquid was found to increase with increasing concentration of each of these surfactant cobalt(III) complexes. Among the complexes (from the en to the phen ligand), the complex containing the phenanthroline ligand reacts fastest. Assuming an outer sphere mechanism, the results have been explained by the presence of aggregated structures containing cobalt(III) complexes at the surface of the ionic liquid, formed by the surfactant cobalt(III) complexes in the reaction medium. The activation parameters (enthalpy of activation ΔH‡ and entropy of activation ΔS‡) of the reaction have been calculated, substantiating the kinetics of the reaction.
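Activation parameters like those reported here are conventionally extracted from rate constants at several temperatures via a linear Eyring fit of ln(k/T) against 1/T. The sketch below uses synthetic rate constants at the same six temperatures; the ΔH‡ = 50 kJ/mol and ΔS‡ = -120 J/(mol K) values are made up for illustration, not the paper's results:

```python
import math

R, KB, H = 8.314, 1.380649e-23, 6.62607015e-34  # gas, Boltzmann, Planck constants

def eyring_fit(temps_K, rate_consts):
    """Least-squares Eyring fit: ln(k/T) = -dH/(R*T) + ln(KB/H) + dS/R.

    Returns (dH_activation in J/mol, dS_activation in J/(mol K)).
    """
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(k / T) for T, k in zip(temps_K, rate_consts)]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return -slope * R, (intercept - math.log(KB / H)) * R

# Synthetic data generated from dH = 50 kJ/mol, dS = -120 J/(mol K):
temps = [298.0, 303.0, 308.0, 313.0, 318.0, 323.0]
ks = [KB * T / H * math.exp(-50000.0 / (R * T) - 120.0 / R) for T in temps]
dH, dS = eyring_fit(temps, ks)
```

Because the synthetic data follow the Eyring equation exactly, the fit recovers the input parameters; with experimental rate constants the residuals of the same fit indicate the quality of the transition-state description.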
Cao, Zhiji; Balasubramanian, K
2009-10-28
Extensive ab initio calculations have been carried out to study the equilibrium structures, vibrational frequencies, and the nature of the chemical bonds of hydrated UO(2)(OH)(+), UO(2)(OH)(2), NpO(2)(OH), and PuO(2)(OH)(+) complexes that contain up to 21 water molecules in the first and second hydration spheres, in both aqueous solution and the gas phase. The structures have been further optimized by considering long-range solvent effects through a polarizable continuum dielectric model. The hydrolysis reaction Gibbs free energy of UO(2)(H(2)O)(5) (2+) is computed to be 8.11 kcal/mol at the MP2 level, in good agreement with experiments. Our results reveal that it is necessary to include water molecules bound to the complex in the first hydration sphere for proper treatment of the hydrated complex and the dielectric cavity, although water molecules in the second hydration sphere do not change the coordination complex. Structural reoptimization of the complex in a dielectric cavity seems inevitable to seek subtle structural variations in the solvent and to correlate with the observed spectra and thermodynamic properties in the aqueous environment. Our computations reveal dramatically different equilibrium structures in the gas phase and solution and also confirm the observed facile exchanges between the complex and bulk solvent. Complete active space multiconfiguration self-consistent field followed by multireference singles+doubles CI (MRSDCI) computations on smaller complexes confirm the predominantly single-configurational nature of these species and the validity of B3LYP and MP2 techniques for these complexes in their ground states.
Temperature-dependent and optimized thermal emission by spheres
NASA Astrophysics Data System (ADS)
Nguyen, K. L.; Merchiers, O.; Chapuis, P.-O.
2018-03-01
We investigate the temperature and size dependencies of thermal emission by homogeneous spheres as a function of their dielectric properties. Different power laws obtained in this work show that the emitted power can depart strongly from the usual fourth power of temperature given by Planck's law and from the square or the cube of the radius. We also show how to optimize the thermal emission by selecting permittivities leading to resonances, which allow for the so-called super-Planckian regime. These results will be useful as spheres, i.e. the simplest finite objects, are often considered as building blocks of more complex objects.
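The classical baseline that the reported power laws depart from is Stefan-Boltzmann emission from a sphere in the large-size (geometric-optics) limit, sketched here with assumed parameter values:

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def planck_sphere_power(radius_m, temp_k, emissivity=1.0):
    """Total power radiated by a gray sphere in the geometric-optics limit:
    P = emissivity * 4*pi*R^2 * sigma * T^4 (the R^2 and T^4 scalings that
    small resonant spheres can violate)."""
    return emissivity * 4.0 * math.pi * radius_m ** 2 * SIGMA * temp_k ** 4

p = planck_sphere_power(1e-6, 300.0)  # a 1-micron sphere at room temperature
```

For sub-wavelength spheres the actual emitted power must be computed from the dielectric function (e.g. via Mie theory), which is where the departures from both the R^2 and T^4 scalings discussed in the abstract arise.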
NASA Astrophysics Data System (ADS)
Uçar, İbrahim; Karabulut, Bünyamin; Bulut, Ahmet; Büyükgüngör, Orhan
2007-05-01
The (2-amino-4-methylpyrimidine)(pyridine-2,6-dicarboxylato)copper(II) monohydrate complex was synthesized and characterized by spectroscopic (IR, UV/Vis, EPR), thermal (TG/DTA) and electrochemical methods. X-ray structural analysis of the title complex revealed that the copper ion can be considered to have two coordination spheres. In the first coordination sphere the copper ion adopts a distorted square-planar geometry with a trans-N2O2 donor set; the metal ion is also weakly bonded to the amino nitrogen in the layer above and to the carboxylic oxygen in the layer underneath in the second coordination sphere. The second coordination environment of the copper ion is best described as a pseudo-octahedron. The powder EPR spectra of the Cu(II) complex at room and liquid-nitrogen temperatures were recorded. The calculated g and A parameters indicate that the paramagnetic centre is axially symmetric. The molecular orbital bond coefficients of the Cu(II) ion in the d9 state are also calculated using the EPR and optical absorption parameters. The cyclic voltammogram of the title complex, investigated in DMSO (dimethylsulfoxide) solution, exhibits only metal-centered electroactivity in the potential range -1.25 to 1.5 V versus the Ag/AgCl reference electrode.
On complexity of trellis structure of linear block codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1990-01-01
The trellis structure of linear block codes (LBCs) is discussed. The state and branch complexities of a trellis diagram (TD) for an LBC are investigated. The TD with the minimum number of states is said to be minimal. The branch complexity of a minimal TD for an LBC is expressed in terms of the dimensions of specific subcodes of the given code. Upper and lower bounds are then derived on the number of states of a minimal TD for an LBC, and it is shown that a cyclic (or shortened cyclic) code is the worst in terms of state complexity among the LBCs of the same length and dimension. Furthermore, it is shown that the structural complexity of a minimal TD for an LBC depends on the order of its bit positions. This fact suggests that an appropriate permutation of the bit positions of a code may result in an equivalent code with a much simpler minimal TD. Boolean polynomial representation of the codewords of an LBC is also considered; this representation helps in the study of the trellis structure of the code and is applied to construct minimal TDs. Particular emphasis is placed on the construction of minimal trellises for Reed-Muller codes and for the extended and permuted binary primitive BCH codes which contain Reed-Muller codes as subcodes. Finally, the structural complexity of minimal trellises for the extended, permuted, double-error-correcting BCH codes is analyzed and presented. It is shown that these codes have relatively simple trellis structure and hence can be decoded with the Viterbi decoding algorithm.
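The state complexity discussed here is computable directly from a generator matrix: at each section boundary i, the minimal trellis has state-space dimension s_i = rank(first i columns of G) + rank(remaining columns of G) - k. A small sketch over GF(2), using one common systematic generator matrix for the (7,4) Hamming code (an assumed example, not taken from the report):

```python
def gf2_rank(rows):
    """Rank over GF(2); each row is an integer bitmask (greedy XOR basis)."""
    basis = []
    for r in rows:
        for b in basis:
            r = min(r, r ^ b)  # clear b's leading bit from r if that shrinks r
        if r:
            basis.append(r)
    return len(basis)

def state_profile(G):
    """s_i = rank(G_left) + rank(G_right) - k at each of the n+1 trellis boundaries."""
    k, n = len(G), len(G[0])
    def rank_cols(cols):
        return gf2_rank(sum(row[c] << idx for idx, c in enumerate(cols)) for row in G)
    return [rank_cols(range(i)) + rank_cols(range(i, n)) - k for i in range(n + 1)]

# One common systematic generator matrix of the (7,4) Hamming code
G_hamming = [
    (1, 0, 0, 0, 0, 1, 1),
    (0, 1, 0, 0, 1, 0, 1),
    (0, 0, 1, 0, 1, 1, 0),
    (0, 0, 0, 1, 1, 1, 1),
]
profile = state_profile(G_hamming)  # peaks at 3, i.e. at most 2^3 = 8 states
```

Permuting the columns of G changes this profile, which is exactly the bit-position-ordering effect the abstract describes.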
NASA Astrophysics Data System (ADS)
Strathmann, Timothy J.; Myneni, Satish C. B.
2004-09-01
Aqueous solutions containing Ni(II) and a series of structurally related carboxylic acids were analyzed using attenuated total reflection Fourier transform infrared spectroscopy (ATR-FTIR) and Ni K-edge X-ray absorption fine structure spectroscopy (XAFS). XAFS spectra were also collected for solutions containing Ni 2+ and chelating ligands (ethylenediaminetetraacetic acid, nitrilotriacetic acid (NTA)) as well as soil fulvic acid. Limited spectral changes are observed for aqueous Ni(II) complexes with monocarboxylates (formate, acetate) and long-chain polycarboxylates (succinate, tricarballylate), where individual donor groups are separated by multiple bridging methylene groups. These spectral changes indicate weak interactions between Ni(II) and carboxylates, and the trends are similar to some earlier reports for crystalline Ni(II)-acetate solids, for which X-ray crystallography studies have indicated monodentate Ni(II)-carboxylate coordination. Nonetheless, electrostatic or outer-sphere coordination cannot be ruled out for these complexes. However, spectral changes observed for short-chain dicarboxylates (oxalate, malonate) and carboxylates that contain an alcohol donor group adjacent to one of the carboxylate groups (lactate, malate, citrate) demonstrate inner-sphere metal coordination by multiple donor groups. XAFS spectral fits of Ni(II) solutions containing soil fulvic acid are consistent with inner-sphere Ni(II) coordination by one or more carboxylate groups, but spectra are noisy and outer-sphere modes of coordination cannot be ruled out. These molecular studies refine our understanding of the interactions between carboxylates and weakly complexing divalent transition metals, such as Ni(II).
Tutorial on Reed-Solomon error correction coding
NASA Technical Reports Server (NTRS)
Geisel, William A.
1990-01-01
This tutorial attempts to provide a frank, step-by-step approach to Reed-Solomon (RS) error correction coding. RS encoding and RS decoding both with and without erasing code symbols are emphasized. There is no need to present rigorous proofs and extreme mathematical detail. Rather, the simple concepts of groups and fields, specifically Galois fields, are presented with a minimum of complexity. Before RS codes are presented, other block codes are presented as a technical introduction into coding. A primitive (15, 9) RS coding example is then completely developed from start to finish, demonstrating the encoding and decoding calculations and a derivation of the famous error-locator polynomial. The objective is to present practical information about Reed-Solomon coding in a manner such that it can be easily understood.
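The Galois-field machinery behind a (15, 9) RS example can be sketched as follows (a minimal illustration assuming the common primitive polynomial x^4 + x + 1; the tutorial's own construction may differ in detail). Addition in GF(2^4) is bitwise XOR; multiplication uses log/antilog tables built from powers of the primitive element alpha = x:

```python
PRIM_POLY = 0b10011  # p(x) = x^4 + x + 1, assumed primitive polynomial

def build_tables():
    """Antilog table alpha^i for i = 0..14, and the inverse (log) map."""
    antilog, log = [0] * 15, {}
    x = 1
    for i in range(15):
        antilog[i] = x
        log[x] = i
        x <<= 1
        if x & 0b10000:       # reduce modulo p(x) when the degree reaches 4
            x ^= PRIM_POLY
    return antilog, log

ANTILOG, LOG = build_tables()

def gf16_mul(a, b):
    """Multiply in GF(16): add logs modulo 15 (zero has no logarithm)."""
    if a == 0 or b == 0:
        return 0
    return ANTILOG[(LOG[a] + LOG[b]) % 15]
```

The 15 nonzero powers of alpha exhaust GF(16)\{0}, which is what makes a length-15 RS code over this field possible; RS encoding and the error-locator computation reduce to polynomial arithmetic built on these two operations.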
Evolution of Protein Synthesis from an RNA World
Noller, Harry F.
2012-01-01
SUMMARY Because of the molecular complexity of the ribosome and protein synthesis, it is a challenge to imagine how translation could have evolved from a primitive RNA World. Two specific suggestions are made here to help to address this, involving separate evolution of the peptidyl transferase and decoding functions. First, it is proposed that translation originally arose not to synthesize functional proteins, but to provide simple (perhaps random) peptides that bound to RNA, increasing its available structure space, and therefore its functional capabilities. Second, it is proposed that the decoding site of the ribosome evolved from a mechanism for duplication of RNA. This process involved homodimeric “duplicator RNAs,” resembling the anticodon arms of tRNAs, which directed ligation of trinucleotides in response to an RNA template. PMID:20610545
FPGA implementation of high-performance QC-LDPC decoder for optical communications
NASA Astrophysics Data System (ADS)
Zou, Ding; Djordjevic, Ivan B.
2015-01-01
Forward error correction is one of the key technologies enabling next-generation high-speed fiber-optic communications. Quasi-cyclic (QC) low-density parity-check (LDPC) codes have been considered one of the promising candidates due to their large coding gain and low implementation complexity. In this paper, we present our designed QC-LDPC code with girth 10 and 25% overhead based on a pairwise balanced design. By FPGA-based emulation, we demonstrate that the 5-bit soft-decision LDPC decoder can achieve an 11.8 dB net coding gain with no error floor at a BER of 10^-15, without using any outer code or post-processing method. We believe that the proposed single QC-LDPC code is a promising solution for 400 Gb/s optical communication systems and beyond.
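The quasi-cyclic structure that keeps implementation complexity low can be sketched as follows (illustrative base-matrix shifts and block size, not the paper's girth-10 design): the parity-check matrix H is tiled from cyclically shifted identity blocks, so hardware only needs to store one shift exponent per block:

```python
def circulant(B, shift):
    """B x B identity matrix with each row's 1 cyclically shifted right by `shift`."""
    return [[1 if (r + shift) % B == c else 0 for c in range(B)] for r in range(B)]

def qc_ldpc_H(base, B):
    """Expand a base matrix of shift exponents (-1 would mean an all-zero block) into H."""
    H = []
    for base_row in base:
        blocks = [circulant(B, s) if s >= 0 else [[0] * B for _ in range(B)]
                  for s in base_row]
        for r in range(B):
            H.append([bit for blk in blocks for bit in blk[r]])
    return H

BASE = [[0, 1, 2],
        [0, 2, 4]]      # arbitrary illustrative shift exponents
H = qc_ldpc_H(BASE, 5)  # a 10 x 15 regular parity-check matrix
```

Each block row/column of H is addressed by simple modular index arithmetic, which is why QC-LDPC decoders map so well onto partially parallel FPGA architectures.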
Kostal, Lubomir; Kobayashi, Ryota
2015-10-01
Information theory quantifies the ultimate limits on reliable information transfer by means of the channel capacity. However, the channel capacity is known to be an asymptotic quantity, assuming unlimited metabolic cost and computational power. We investigate a single-compartment Hodgkin-Huxley type neuronal model under the spike-rate coding scheme and address how the metabolic cost and the decoding complexity affect the optimal information transmission. We find that the sub-threshold stimulation regime, although attaining the smallest capacity, allows for the most efficient balance between the information transmission and the metabolic cost. Furthermore, we determine post-synaptic firing rate histograms that are optimal from the information-theoretic point of view, which enables the comparison of our results with experimental data. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Form + Theme + Context: Balancing Considerations for Meaningful Art Learning
ERIC Educational Resources Information Center
Sandell, Renee
2006-01-01
Today's students need visual literacy skills and knowledge that enable them to encode concepts as well as decode the meaning of society's images, ideas, and media of the past as well as the increasingly complex visual world. In this article, the author discusses how art teachers can help students understand the increasingly visual/material…
Efficient Bit-to-Symbol Likelihood Mappings
NASA Technical Reports Server (NTRS)
Moision, Bruce E.; Nakashima, Michael A.
2010-01-01
This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings, which represent a significant portion of the complexity of an error-correction-code decoder for high-order constellations. A recent hardware implementation of the algorithm yielded an 8-percent reduction in overall area relative to the prior design.
Maya: A Simulation of Mayan Civilization during the Seventh Century.
ERIC Educational Resources Information Center
Roth, Peter
This simulation allows students to explore the lives of the great rulers of the Mayan culture. Students learn the mysterious history of the Maya by decoding glyphs, investigating the unusual religion of the Maya, unraveling the complex Mayan calendar, and discovering the Mayan number system's secret meanings. Specific cooperation skills are taught…
The Most Frequent Metacognitive Strategies Used in Reading Comprehension among ESP Learners
ERIC Educational Resources Information Center
Khoshsima, Hooshang; Samani, Elham Amiri
2015-01-01
Reading strategies are plans for solving problems encountered during reading while learners are deeply engaged with the text. Comprehension is thus not a simple decoding of symbols, but a complex multidimensional process in which the learner draws on previous schemata, applying strategies consciously. In fact, metacognitive strategies are accessible…
Learner Variables Associated with Reading and Learning in a Hypertext Environment.
ERIC Educational Resources Information Center
Niederhauser, Dale S.; Shapiro, Amy
While many elements like character decoding, word recognition, comprehension, and others remain the same as in learning from traditional text, when learning from hypertext, a number of features that are unique to reading hypertext produce added complexity. It is these features that drive research on hypertext in education. There is a greater…
ERIC Educational Resources Information Center
Huang, Koongliang; Itoh, Kosuke; Kwee, Ingrid L.; Nakada, Tsutomu
2012-01-01
Japanese and Chinese share virtually identical morphographic characters invented in ancient China. Whereas modern Chinese retained the original morphographic functionality of these characters (hanzi), modern Japanese utilizes these characters (kanji) as complex syllabograms. This divergence provides a unique opportunity to systematically…
Maximum Likelihood Detection of Low Rate Repeat Codes in Frequency Hopped Systems
2013-04-01
Bounded-Angle Iterative Decoding of LDPC Codes
NASA Technical Reports Server (NTRS)
Dolinar, Samuel; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush
2009-01-01
Bounded-angle iterative decoding is a modified version of conventional iterative decoding, conceived as a means of reducing undetected-error rates for short low-density parity-check (LDPC) codes. For a given code, bounded-angle iterative decoding can be implemented by means of a simple modification of the decoder algorithm, without redesigning the code. Bounded-angle iterative decoding is based on a representation of received words and code words as vectors in an n-dimensional Euclidean space (where n is an integer).
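The geometric picture in this abstract can be made concrete (a minimal sketch assuming a BPSK signal mapping and an arbitrary 30-degree threshold; the actual decoder modification is more involved): the decision variable is the angle between the received vector and the candidate codeword's signal vector, and the decoder output is accepted only inside a bounded cone around it.

```python
import math

def angle_between(r, s):
    """Angle in radians between received vector r and signal vector s."""
    dot = sum(a * b for a, b in zip(r, s))
    norm = math.sqrt(sum(a * a for a in r)) * math.sqrt(sum(b * b for b in s))
    return math.acos(max(-1.0, min(1.0, dot / norm)))  # clamp for float safety

def accept(r, codeword_bits, theta_max):
    """Accept the decoded codeword only if r lies inside its bounded-angle cone."""
    s = [1.0 - 2.0 * b for b in codeword_bits]  # BPSK: 0 -> +1, 1 -> -1
    return angle_between(r, s) <= theta_max

ok = accept((0.9, 1.1, 0.8), (0, 0, 0), math.radians(30))
```

Rejecting decodings that fall outside the cone converts would-be undetected errors into detected failures, which is the trade the abstract describes for short LDPC codes.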
Pietzke, Matthias; Zasada, Christin; Mudrich, Susann; Kempa, Stefan
2014-01-01
Cellular metabolism is highly dynamic and continuously adjusts to the physiological program of the cell. The regulation of metabolism appears at all biological levels: (post-) transcriptional, (post-) translational, and allosteric. This regulatory information is expressed in the metabolome, but in a complex manner. To decode such complex information, new methods are needed in order to facilitate dynamic metabolic characterization at high resolution. Here, we describe pulsed stable isotope-resolved metabolomics (pSIRM) as a tool for the dynamic metabolic characterization of cellular metabolism. We have adapted gas chromatography-coupled mass spectrometric methods for metabolomic profiling and stable isotope-resolved metabolomics. In addition, we have improved robustness and reproducibility and implemented a strategy for the absolute quantification of metabolites. By way of example, we have applied this methodology to characterize the central carbon metabolism of a panel of cancer cell lines and to determine the mode of metabolic inhibition of glycolytic inhibitors on timescales ranging from minutes to hours. Using pSIRM, we observed that 2-deoxyglucose is a metabolic inhibitor, but does not directly act on the glycolytic cascade.
Energetics of codon-anticodon recognition on the small ribosomal subunit.
Almlöf, Martin; Andér, Martin; Aqvist, Johan
2007-01-09
Recent crystal structures of the small ribosomal subunit have made it possible to examine the detailed energetics of codon recognition on the ribosome by computational methods. The binding of cognate and near-cognate anticodon stem loops to the ribosome decoding center, with mRNA containing the Phe UUU and UUC codons, are analyzed here using explicit solvent molecular dynamics simulations together with the linear interaction energy (LIE) method. The calculated binding free energies are in excellent agreement with experimental binding constants and reproduce the relative effects of mismatches in the first and second codon position versus a mismatch at the wobble position. The simulations further predict that the Leu2 anticodon stem loop is about 10 times more stable than the Ser stem loop in complex with the Phe UUU codon. It is also found that the ribosome significantly enhances the intrinsic stability differences of codon-anticodon complexes in aqueous solution. Structural analysis of the simulations confirms the previously suggested importance of the universally conserved nucleotides A1492, A1493, and G530 in the decoding process.
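The LIE estimate itself is a simple linear combination of force-field interaction-energy averages. The sketch below uses commonly quoted LIE coefficients (alpha ~ 0.18, beta ~ 0.5) and made-up energy averages; it is not the parameterization or data of this study:

```python
def lie_binding_free_energy(d_vdw, d_el, alpha=0.18, beta=0.5, gamma=0.0):
    """LIE estimate: dG_bind ~= alpha*<dU_vdw> + beta*<dU_el> + gamma, where
    d_vdw and d_el are bound-minus-free averages of the ligand's van der Waals
    and electrostatic interaction energies (kcal/mol). Coefficients are
    illustrative defaults, not this paper's fitted values."""
    return alpha * d_vdw + beta * d_el + gamma

# Hypothetical averages for a cognate vs. a near-cognate anticodon stem loop:
dG_cognate = lie_binding_free_energy(-30.0, -10.0)  # kcal/mol
dG_near = lie_binding_free_energy(-25.0, -6.0)
```

The appeal of the method in this context is that only two ensemble averages per complex, obtained from the explicit-solvent MD trajectories, are needed to rank the codon-anticodon binding affinities.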
Iterative channel decoding of FEC-based multiple-description codes.
Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B
2012-03-01
Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.
High rate concatenated coding systems using bandwidth efficient trellis inner codes
NASA Technical Reports Server (NTRS)
Deng, Robert H.; Costello, Daniel J., Jr.
1989-01-01
High-rate concatenated coding systems with bandwidth-efficient trellis inner codes and Reed-Solomon (RS) outer codes are investigated for application in high-speed satellite communication systems. Two concatenated coding schemes are proposed. In one the inner code is decoded with soft-decision Viterbi decoding, and the outer RS code performs error-correction-only decoding (decoding without side information). In the other, the inner code is decoded with a modified Viterbi algorithm, which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, whereas branch metrics are used to provide reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. The two schemes have been proposed for high-speed data communication on NASA satellite channels. The rates considered are at least double those used in current NASA systems, and the results indicate that high system reliability can still be achieved.
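The benefit of erasing unreliable bits before the outer decoder follows from the standard RS errors-and-erasures bound, sketched below with the familiar (255, 223) parameters as an illustrative example (not the specific codes studied in the paper): a decoder that knows which symbols are suspect pays half the redundancy per erasure that it would per unflagged error.

```python
def rs_correctable(n, k, errors, erasures):
    """True if an (n, k) RS errors-and-erasures decoder can succeed.

    With minimum distance d = n - k + 1, decoding succeeds whenever
    2 * errors + erasures <= n - k.
    """
    return 2 * errors + erasures <= n - k

# Feasible small (errors, erasures) mixes for a (255, 223) RS code:
capability = [(t, e) for t in range(3) for e in range(3)
              if rs_correctable(255, 223, t, e)]
```

This is why the modified Viterbi algorithm's reliability output is valuable: each correctly flagged erasure stretches the outer code's correction budget compared with treating the same symbol as an error.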
Shape-controlled synthesis and properties of dandelion-like manganese sulfide hollow spheres
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Wei; State Key Laboratory of Powder Metallurgy, Central South University, Changsha, Hunan 410083; Chen, Gen
2012-09-15
Graphical abstract: Dandelion-like MnS hollow spheres assembled with nanorods can be synthesized in large quantities through a simple and convenient hydrothermal method under mild conditions, using soluble hydrated manganese chloride as the Mn source and L-cysteine as both a precipitator and complexing reagent. The dandelion-like MnS hollow spheres may have potential applications in microdevices and magnetic cells. Highlights: ► MnS hollow spheres assembled with nanorods could be synthesized. ► The morphologies and sizes of the final products could be controlled. ► A possible formation mechanism of the MnS hollow spheres is proposed. -- Abstract: Dandelion-like gamma-manganese(II) sulfide (MnS) hollow spheres assembled with nanorods have been prepared via a hydrothermal process in the presence of L-cysteine and polyvinylpyrrolidone (PVP). L-cysteine was employed not only as the sulfur source, but also as a coordinating reagent for the synthesis of the dandelion-like MnS hollow spheres. The morphology, structure and properties of the as-prepared products have been investigated in detail by X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), energy dispersive X-ray spectroscopy (EDS), selected area electron diffraction (SAED), high-resolution transmission electron microscopy (HRTEM) and photoluminescence spectroscopy (PL). The probable formation mechanism of the as-prepared MnS hollow spheres is discussed on the basis of the experimental results. This strategy may provide an effective method for the fabrication of other metal sulfide hollow spheres.
Efficient Decoding of Compressed Data.
ERIC Educational Resources Information Center
Bassiouni, Mostafa A.; Mukherjee, Amar
1995-01-01
Discusses the problem of enhancing the speed of Huffman decoding of compressed data. Topics addressed include the Huffman decoding tree; multibit decoding; binary string mapping problems; and algorithms for solving mapping problems. (22 references) (LRW)
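As a baseline for the multibit techniques the article discusses, bit-by-bit Huffman decoding walks the code tree one bit at a time (a minimal sketch with an assumed three-symbol prefix code, not the article's data):

```python
# Assumed prefix-free code for illustration: a = 0, b = 10, c = 11
CODE = {"a": "0", "b": "10", "c": "11"}

def build_tree(code):
    """Nested-dict decoding tree; leaves are the symbols themselves."""
    root = {}
    for sym, bits in code.items():
        node = root
        for b in bits[:-1]:
            node = node.setdefault(b, {})
        node[bits[-1]] = sym
    return root

def decode(bits, tree):
    """Bit-by-bit tree walk: emit a symbol at each leaf, then restart at the root."""
    out, node = [], tree
    for b in bits:
        node = node[b]
        if isinstance(node, str):
            out.append(node)
            node = tree
    return "".join(out)

TREE = build_tree(CODE)
msg = decode("010110", TREE)
```

Multibit (table-driven) decoding replaces this per-bit traversal with one lookup per k-bit chunk, each table entry recording the symbols emitted and the residual tree state, which is where the speedup comes from.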
A new VLSI architecture for a single-chip-type Reed-Solomon decoder
NASA Technical Reports Server (NTRS)
Hsu, I. S.; Truong, T. K.
1989-01-01
A new very large scale integration (VLSI) architecture for implementing Reed-Solomon (RS) decoders that can correct both errors and erasures is described. This new architecture implements a Reed-Solomon decoder by replication of a single VLSI chip. It is anticipated that this single-chip-type RS decoder approach will save substantial development and production costs; a cost reduction by a factor of four is estimated to be possible with the new architecture. Furthermore, the Reed-Solomon decoder is programmable between 8-bit and 10-bit symbol sizes. Therefore, both an 8-bit Consultative Committee for Space Data Systems (CCSDS) RS decoder and a 10-bit decoder are obtained at the same time, which, when concatenated with a (15, 1/6) Viterbi decoder, provide an additional 2.1-dB coding gain.
Deconstructing multivariate decoding for the study of brain function.
Hebart, Martin N; Baker, Chris I
2017-08-04
Multivariate decoding methods were developed originally as tools to enable accurate predictions in real-world applications. The realization that these methods can also be employed to study brain function has led to their widespread adoption in the neurosciences. However, prior to the rise of multivariate decoding, the study of brain function was firmly embedded in a statistical philosophy grounded on univariate methods of data analysis. In this way, multivariate decoding for brain interpretation grew out of two established frameworks: multivariate decoding for predictions in real-world applications, and classical univariate analysis based on the study and interpretation of brain activation. We argue that this led to two confusions, one reflecting a mixture of multivariate decoding for prediction or interpretation, and the other a mixture of the conceptual and statistical philosophies underlying multivariate decoding and classical univariate analysis. Here we attempt to systematically disambiguate multivariate decoding for the study of brain function from the frameworks it grew out of. After elaborating these confusions and their consequences, we describe six, often unappreciated, differences between classical univariate analysis and multivariate decoding. We then focus on how the common interpretation of what is signal and noise changes in multivariate decoding. Finally, we use four examples to illustrate where these confusions may impact the interpretation of neuroimaging data. We conclude with a discussion of potential strategies to help resolve these confusions in interpreting multivariate decoding results, including the potential departure from multivariate decoding methods for the study of brain function. Copyright © 2017. Published by Elsevier Inc.
Error-trellis Syndrome Decoding Techniques for Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
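The syndrome computation that such decoders start from can be sketched for a rate-1/2 code: with generator polynomials (g1, g2), the syndrome former (g2, g1) annihilates every valid codeword over GF(2), so an error-free received stream yields an all-zero syndrome and any nonzero syndrome exposes channel errors. The K=3 generator pair below is an illustrative choice, not the Wyner-Ash code treated in the article:

```python
# Syndrome computation for a rate-1/2 convolutional code.
# G1*G2 + G2*G1 = 0 over GF(2), so s = v1*g2 + v2*g1 vanishes on codewords.
G1, G2 = 0b111, 0b101  # standard K=3 generators (octal 7, 5) -- assumed

def conv_gf2(bits, g):
    """Truncated GF(2) convolution of a bit list with generator g."""
    out, state = [], 0
    for b in bits:
        state = ((state << 1) | b) & 0b111  # 3-bit window for K=3
        out.append(bin(state & g).count("1") % 2)
    return out

def encode(u):
    """Rate-1/2 encoding: two output streams per input stream."""
    return conv_gf2(u, G1), conv_gf2(u, G2)

def syndrome(v1, v2):
    """Zero iff (v1, v2) is a valid code sequence."""
    s1 = conv_gf2(v1, G2)
    s2 = conv_gf2(v2, G1)
    return [a ^ b for a, b in zip(s1, s2)]
```

This is the sense in which syndrome decoding can be cheap under good conditions: when the syndrome is all zero, no error-trellis search is needed at all.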
Directionally Interacting Spheres and Rods Form Ordered Phases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Wenyan; Mahynski, Nathan A.; Gang, Oleg
2017-05-10
The structures formed by mixtures of dissimilarly shaped nanoscale objects can significantly enhance our ability to produce nanoscale architectures. However, understanding their formation is a complex problem due to the interplay of geometric effects (entropy) and energetic interactions at the nanoscale. Spheres and rods are perhaps the most basic geometrical shapes and serve as convenient models of such dissimilar objects. The ordered phases formed by each of these individual shapes have already been explored, but, when mixed, spheres and rods have demonstrated only limited structural organization to date. We show using experiments and theory that the introduction of directional attractions between rod ends and isotropically interacting spherical nanoparticles (NPs) through DNA base pairing leads to the formation of ordered three-dimensional lattices. The spheres and rods arrange themselves in a complex alternating manner, where the spheres can form either a face-centered cubic (FCC) or hexagonal close-packed (HCP) lattice, or a disordered phase, as observed by in situ X-ray scattering. Increasing NP diameter at fixed rod length yields an initial transition from a disordered phase to the HCP crystal, energetically stabilized by rod-rod attraction across alternating crystal layers, as revealed by theory. In the limit of large NPs, the FCC structure is instead stabilized over the HCP by rod entropy. Thus, we propose that directionally specific attractions in mixtures of anisotropic and isotropic objects offer insight into unexplored self-assembly behavior of noncomplementary shaped particles.
The VLSI design of an error-trellis syndrome decoder for certain convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Jensen, J. M.; Hsu, I.-S.; Truong, T. K.
1986-01-01
A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example illustrating the very large scale integration (VLSI) architecture of such a decoder is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.
Systolic VLSI Reed-Solomon Decoder
NASA Technical Reports Server (NTRS)
Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.
1986-01-01
Decoder for digital communications provides high-speed, pipelined Reed-Solomon (RS) error-correction decoding of data streams. Principal new feature of proposed decoder is modification of Euclid greatest-common-divisor algorithm to avoid need for time-consuming computations of inverses of certain Galois-field quantities. Decoder architecture suitable for implementation on very-large-scale integrated (VLSI) chips with negative-channel metal-oxide/silicon circuitry.
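The inversionless idea can be sketched in one step of a polynomial Euclidean reduction: rather than dividing by a leading coefficient (which would require a field inverse), each step cross-multiplies by the two leading coefficients, which cancels the top term using only multiplications. Sketched here over a small prime field for readability; the article's decoder works in a Galois field of characteristic 2, and this is a generic illustration, not its exact recurrence:

```python
# One inversionless elimination step of Euclid's algorithm for
# polynomials over an assumed prime field (coefficients lowest-first).
P = 257

def degree(a):
    return len(a) - 1

def inversionless_gcd_step(a, b):
    """Return lc(b)*a - lc(a)*x^(deg a - deg b)*b, which cancels a's
    leading term without computing any field inverse."""
    da, db = degree(a), degree(b)
    lca, lcb = a[-1], b[-1]
    shift = da - db                      # assumes deg a >= deg b
    out = [(lcb * c) % P for c in a]     # scale a by lc(b)
    for i, c in enumerate(b):            # subtract lc(a) * x^shift * b
        out[i + shift] = (out[i + shift] - lca * c) % P
    while out and out[-1] == 0:          # drop the cancelled top terms
        out.pop()
    return out
```

Iterating such steps drives the remainder degree down exactly as ordinary Euclidean division would, which is why the modification preserves the decoder's key-equation solution while removing the inverse computations from the critical path.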
The VLSI design of error-trellis syndrome decoding for convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Jensen, J. M.; Truong, T. K.; Hsu, I. S.
1985-01-01
A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example illustrating the very large scale integration (VLSI) architecture of such a decoder is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.
A test of the role of the medial temporal lobe in single-word decoding.
Osipowicz, Karol; Rickards, Tyler; Shah, Atif; Sharan, Ashwini; Sperling, Michael; Kahn, Waseem; Tracy, Joseph
2011-01-15
The degree to which the MTL system contributes to effective language skills is not well delineated. We sought to determine if the MTL plays a role in single-word decoding in healthy, normal skilled readers. The experiment follows from the implications of the dual-process model of single-word decoding, which provides distinct predictions about the nature of MTL involvement. The paradigm utilized word (regular and irregularly spelled words) and pseudoword (phonetically regular) stimuli that differed in their demand for non-lexical as opposed to lexical decoding. The data clearly showed that the MTL system was not involved in single-word decoding in skilled, native English readers. Neither the hippocampus nor the MTL system as a whole showed significant activation during lexical or non-lexical based decoding. The results provide evidence that lexical and non-lexical decoding are implemented by distinct but overlapping neuroanatomical networks. Non-lexical decoding appeared most uniquely associated with cuneus and fusiform gyrus activation biased toward the left hemisphere. In contrast, lexical decoding appeared associated with right middle frontal and supramarginal, and bilateral cerebellar activation. Both these decoding operations appeared in the context of a shared widespread network of activations including bilateral occipital cortex and superior frontal regions. These activations suggest that the absence of MTL involvement in either lexical or non-lexical decoding appears likely a function of the skilled reading ability of our sample, such that whole-word recognition and retrieval processes do not utilize the declarative memory system, in the case of lexical decoding, and require only minimal analysis and recombination of the phonetic elements of a word, in the case of non-lexical decoding. Copyright © 2010 Elsevier Inc. All rights reserved.
2014-08-15
CAPE CANAVERAL, Fla. – The Kennedy Space Center Visitor Complex Spaceperson poses for a photo with Carver Middle School students and their teacher from Orlando, Florida, during the Zero Robotics finals competition at NASA Kennedy Space Center's Space Station Processing Facility in Florida. The team, members of the After School All-Stars, were regional winners and advanced to the final competition. For the competition, students designed software to control Synchronized Position Hold Engage and Reorient Experimental Satellites, or SPHERES, and competed with other teams locally. Zero Robotics is a robotics programming competition in which the robots are SPHERES. The competition starts online, where teams program the SPHERES to solve an annual challenge. After several phases of virtual competition in a simulation environment that mimics the real SPHERES, finalists are selected to compete in a live championship aboard the space station. Students compete to win a technically challenging game by programming their strategies into the SPHERES satellites. The programs are autonomous, and the students cannot control the satellites during the test. Photo credit: NASA/Daniel Casper
LDPC-based iterative joint source-channel decoding for JPEG2000.
Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane
2007-02-01
A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.
Kaplun, Marina; Nordin, Agneta; Persson, Per
2008-01-15
The structure of palladium(II) ethylenediaminetetraacetate (edta) in aqueous solutions and its adsorption on the surface of goethite (alpha-FeOOH) were studied using extended X-ray absorption fine structure spectroscopy and attenuated total reflection Fourier transform infrared spectroscopy. The obtained results show that in aqueous solutions, Pd-edta exists as a 1:1 complex, [Pd(edta)]2-, with edta acting as a quadridentate ligand. On the surface of goethite, [Pd(edta)]2- forms two different types of complexes over a pH range of 3.40-8.12. At pH < 5, [Pd(edta)]2- adsorbs as an outer-sphere species with possible hydrogen bonding. At higher pH values, the formation of inner-sphere complexes of the cation-type sets in after a cleavage of one glycinate ring and the formation of an (edta)Pd-O-Fe linkage.
Wright, Michael T.; Fram, Miranda S.; Belitz, Kenneth
2015-01-01
Concentrations of strontium, which exists primarily in a cationic form (Sr2+), were not significantly correlated with either groundwater age or pH. Strontium concentrations showed a strong positive correlation with total dissolved solids (TDS). Dissolved constituents, such as Sr, that interact with mineral surfaces through outer-sphere complexation become increasingly soluble with increasing TDS concentrations of groundwater. Boron concentrations also showed a significant positive correlation with TDS, indicating the B may interact to a large degree with mineral surfaces through outer-sphere complexation.
Belief propagation decoding of quantum channels by passing quantum messages
NASA Astrophysics Data System (ADS)
Renes, Joseph M.
2017-07-01
The belief propagation (BP) algorithm is a powerful tool in a wide range of disciplines from statistical physics to machine learning to computational biology, and is ubiquitous in decoding classical error-correcting codes. The algorithm works by passing messages between nodes of the factor graph associated with the code and enables efficient decoding of the channel, in some cases even up to the Shannon capacity. Here we construct the first BP algorithm which passes quantum messages on the factor graph and is capable of decoding the classical-quantum channel with pure state outputs. This gives explicit decoding circuits whose number of gates is quadratic in the code length. We also show that this decoder can be modified to work with polar codes for the pure state channel and as part of a decoder for transmitting quantum information over the amplitude damping channel. These represent the first explicit capacity-achieving decoders for non-Pauli channels.
Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique
NASA Astrophysics Data System (ADS)
Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi
Reducing power dissipation is a major challenge in applying LDPC code decoders to practical digital communication systems. In this paper, we propose a low-power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) an intermediate message-compression technique enables the decoder to reduce the required memory capacity and write power dissipation; (ii) a clock-gated shift-register-based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of these two techniques enables the decoder to reduce power dissipation while maintaining decoding throughput. Simulation results show that the proposed architecture improves power efficiency by up to 52% and 18% compared to decoders based on the overlapped schedule and the rapid convergence schedule, respectively, without the proposed techniques.
Greenhouse effect: temperature of a metal sphere surrounded by a glass shell and heated by sunlight
NASA Astrophysics Data System (ADS)
Nguyen, Phuc H.; Matzner, Richard A.
2012-01-01
We study the greenhouse effect on a model satellite consisting of a tungsten sphere surrounded by a thin spherical, concentric glass shell, with a small gap between the sphere and the shell. The system sits in vacuum and is heated by sunlight incident along the z-axis. This development is a generalization of the simple treatment of the greenhouse effect given by Kittel and Kroemer (1980 Thermal Physics (San Francisco: Freeman)) and can serve as a very simple model demonstrating the much more complex Earth greenhouse effect. Solution of the model problem provides an excellent pedagogical tool at the Junior/Senior undergraduate level.
Buffer management for sequential decoding. [block erasure probability reduction
NASA Technical Reports Server (NTRS)
Layland, J. W.
1974-01-01
Sequential decoding has been found to be an efficient means of communicating at low undetected error rates from deep space probes, but erasure, or computational overflow, remains a significant problem. Erasure of a block occurs when the decoder has not finished decoding that block at the time that it must be output. By drawing upon analogies in computer time sharing, this paper develops a buffer-management strategy which reduces the decoder idle time to a negligible level, and therefore improves the erasure probability of a sequential decoder. For a decoder with a speed advantage of ten and a buffer size of ten blocks, operating at an erasure rate of 0.01, use of this buffer-management strategy reduces the erasure rate to less than 0.0001.
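The erasure mechanism described — a decoder with a fixed speed advantage that must abandon a block once the backlog overflows its buffer — can be mimicked with a toy discrete-time simulation. The heavy-tailed (Pareto) effort distribution stands in for the well-known Pareto computation distribution of sequential decoding, but the drain model and every parameter here are illustrative assumptions, not the paper's analysis:

```python
import random

# Toy model of buffer overflow (erasure) in sequential decoding.
# One loop iteration = one block interval of real time.
def erasure_rate(speed=10, buf_blocks=10, n=20000, alpha=1.5, seed=1):
    rng = random.Random(seed)
    backlog = 0.0   # pending decoding work, measured in block-times
    erased = 0
    for _ in range(n):
        work = rng.paretovariate(alpha)     # heavy-tailed effort, >= 1
        backlog += work / speed             # real time the decoder needs
        backlog = max(0.0, backlog - 1.0)   # one block interval elapses
        if backlog > buf_blocks:            # deadline missed: erase
            erased += 1
            backlog = 0.0                   # crude flush after erasure
    return erased / n
```

Even this crude model reproduces the qualitative point of the record: erasures are driven by rare long searches, so a modest speed advantage plus a deeper buffer suppresses the erasure rate by orders of magnitude.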
NASA Astrophysics Data System (ADS)
Gupta, Neha; Parihar, Priyanka; Neema, Vaibhav
2018-04-01
Researchers have proposed many circuit techniques to reduce leakage power dissipation in memory cells. To reduce the overall power of a memory system, however, one must also address the input circuitry of the memory architecture, i.e. the row and column decoders. In this research work, a low-leakage-power, high-speed row and column decoder for memory array applications is designed and four new techniques are proposed. The cluster DECODER, body-bias DECODER, source-bias DECODER, and source-coupling DECODER designs are analyzed and compared for memory array application. Simulations comparing the design parameters of the different DECODERs were performed at the 180 nm GPDK technology node using the CADENCE tool. Simulation results show that the proposed source-bias DECODER circuit technique decreases leakage current by 99.92% and static energy by 99.92% at a supply voltage of 1.2 V. The proposed circuit also improves dynamic power dissipation by 5.69%, dynamic PDP/EDP by 65.03% and delay by 57.25% at a 1.2 V supply voltage.
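For context, the function such input circuitry implements can be sketched behaviorally: the high-order address bits drive a one-hot row decoder and the low-order bits a one-hot column decoder, so exactly one word line and one bit-line group are selected. The array geometry below is an arbitrary assumption, and this is a functional model only, not any of the article's circuit techniques:

```python
# Behavioral model of a memory-array row/column address decoder.
ROW_BITS, COL_BITS = 6, 4  # assumed 64-row x 16-column array

def decode_address(addr):
    """Split `addr` and return one-hot row and column select lines."""
    row = addr >> COL_BITS                 # high-order bits -> row
    col = addr & ((1 << COL_BITS) - 1)     # low-order bits -> column
    row_lines = [int(i == row) for i in range(1 << ROW_BITS)]
    col_lines = [int(i == col) for i in range(1 << COL_BITS)]
    return row_lines, col_lines
```

Because every access activates these decoders, their switching and leakage sit on the path of every read and write, which is why the article targets them for power reduction.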
Rajalingham, Rishi; Stacey, Richard Greg; Tsoulfas, Georgios; Musallam, Sam
2014-01-01
To restore movements to paralyzed patients, neural prosthetic systems must accurately decode patients' intentions from neural signals. Despite significant advancements, current systems are unable to restore complex movements. Decoding reward-related signals from the medial intraparietal area (MIP) could enhance prosthetic performance. However, the dynamics of reward sensitivity in MIP is not known. Furthermore, reward-related modulation in premotor areas has been attributed to behavioral confounds. Here we investigated the stability of reward encoding in MIP by assessing the effect of reward history on reward sensitivity. We recorded from neurons in MIP while monkeys performed a delayed-reach task under two reward schedules. In the variable schedule, an equal number of small- and large-rewards trials were randomly interleaved. In the constant schedule, one reward size was delivered for a block of trials. The memory period firing rate of most neurons in response to identical rewards varied according to schedule. Using systems identification tools, we attributed the schedule sensitivity to the dependence of neural activity on the history of reward. We did not find schedule-dependent behavioral changes, suggesting that reward modulates neural activity in MIP. Neural discrimination between rewards was less in the variable than in the constant schedule, degrading our ability to decode reach target and reward simultaneously. The effect of schedule was mitigated by adding Haar wavelet coefficients to the decoding model. This raises the possibility of multiple encoding schemes at different timescales and reinforces the potential utility of reward information for prosthetic performance. PMID:25008408
A Scalable Architecture of a Structured LDPC Decoder
NASA Technical Reports Server (NTRS)
Lee, Jason Kwok-San; Lee, Benjamin; Thorpe, Jeremy; Andrews, Kenneth; Dolinar, Sam; Hamkins, Jon
2004-01-01
We present a scalable decoding architecture for a certain class of structured LDPC codes. The codes are designed using a small (n,r) protograph that is replicated Z times to produce a decoding graph for a (Z x n, Z x r) code. Using this architecture, we have implemented a decoder for a (4096,2048) LDPC code on a Xilinx Virtex-II 2000 FPGA, and achieved decoding speeds of 31 Mbps with 10 fixed iterations. The implemented message-passing algorithm uses an optimized 3-bit non-uniform quantizer that operates with 0.2dB implementation loss relative to a floating point decoder.
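The message-passing rule that such hardware quantizes is typically min-sum, in which a check node returns to each edge the product of the signs and the minimum magnitude of the other incoming log-likelihood ratios. A floating-point sketch of that check-node update (the article's optimized 3-bit non-uniform quantizer is omitted):

```python
# Min-sum check-node update for LDPC decoding (generic sketch).
def check_node_min_sum(in_llrs):
    """Return the outgoing LLR on each edge of one check node:
    sign-product and minimum magnitude over the *other* inputs."""
    out = []
    for i in range(len(in_llrs)):
        others = in_llrs[:i] + in_llrs[i + 1:]
        sign = 1
        for v in others:
            if v < 0:
                sign = -sign
        out.append(sign * min(abs(v) for v in others))
    return out
```

Replacing the sum-product's hyperbolic-tangent computation with comparisons and sign flips is precisely what makes min-sum attractive for FPGA decoders like the one reported here.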
Multiuser signal detection using sequential decoding
NASA Astrophysics Data System (ADS)
Xie, Zhenhua; Rushforth, Craig K.; Short, Robert T.
1990-05-01
The application of sequential decoding to the detection of data transmitted over the additive white Gaussian noise channel by K asynchronous transmitters using direct-sequence spread-spectrum multiple access is considered. A modification of Fano's (1963) sequential-decoding metric, allowing the messages from a given user to be safely decoded if that user's Eb/N0 exceeds -1.6 dB, is presented. Computer simulation is used to evaluate the performance of a sequential decoder that uses this metric in conjunction with the stack algorithm. In many circumstances, the sequential decoder achieves results comparable to those obtained using the much more complicated optimal receiver.
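A stack decoder of the kind employed here can be sketched for the single-user case: partial paths live in a priority queue ordered by Fano metric, and the best path is repeatedly popped and extended until a full-length path emerges. The K=3 rate-1/2 convolutional code, binary-symmetric-channel crossover probability, and frame length are illustrative assumptions; the paper's multiuser metric is not reproduced:

```python
import heapq, math

# Stack-algorithm sequential decoder with the Fano metric (BSC sketch).
G1, G2 = 0b111, 0b101            # assumed K=3, rate-1/2 generators
P_ERR, RATE = 0.05, 0.5
GOOD = math.log2(2 * (1 - P_ERR)) - RATE   # per-bit metric, bit agrees
BAD = math.log2(2 * P_ERR) - RATE          # per-bit metric, bit differs

def branch(state, u):
    """One encoder step: new state and the two output bits."""
    s = ((state << 1) | u) & 0b111
    return s, (bin(s & G1).count("1") % 2, bin(s & G2).count("1") % 2)

def encode(bits):
    out, s = [], 0
    for u in bits:
        s, (c1, c2) = branch(s, u)
        out += [c1, c2]
    return out

def stack_decode(rx, n_bits):
    # heapq is a min-heap, so metrics are stored negated.
    stack = [(-0.0, 0, ())]       # (-metric, encoder state, decided bits)
    while stack:
        negm, state, path = heapq.heappop(stack)
        if len(path) == n_bits:   # best-metric full-length path wins
            return list(path)
        i = 2 * len(path)
        for u in (0, 1):
            s, (c1, c2) = branch(state, u)
            m = -negm
            m += GOOD if c1 == rx[i] else BAD
            m += GOOD if c2 == rx[i + 1] else BAD
            heapq.heappush(stack, (-m, s, path + (u,)))
```

As the abstract notes for the multiuser setting, the appeal is that effort tracks channel quality: on a clean frame the decoder extends essentially only the correct path.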
Visual perception as retrospective Bayesian decoding from high- to low-level features
Ding, Stephanie; Cueva, Christopher J.; Tsodyks, Misha; Qian, Ning
2017-01-01
When a stimulus is presented, its encoding is known to progress from low- to high-level features. How these features are decoded to produce perception is less clear, and most models assume that decoding follows the same low- to high-level hierarchy of encoding. There are also theories arguing for global precedence, reversed hierarchy, or bidirectional processing, but they are descriptive without quantitative comparison with human perception. Moreover, observers often inspect different parts of a scene sequentially to form overall perception, suggesting that perceptual decoding requires working memory, yet few models consider how working-memory properties may affect decoding hierarchy. We probed decoding hierarchy by comparing absolute judgments of single orientations and relative/ordinal judgments between two sequentially presented orientations. We found that lower-level, absolute judgments failed to account for higher-level, relative/ordinal judgments. However, when ordinal judgment was used to retrospectively decode memory representations of absolute orientations, striking aspects of absolute judgments, including the correlation and forward/backward aftereffects between two reported orientations in a trial, were explained. We propose that the brain prioritizes decoding of higher-level features because they are more behaviorally relevant, and more invariant and categorical, and thus easier to specify and maintain in noisy working memory, and that more reliable higher-level decoding constrains less reliable lower-level decoding. PMID:29073108
Complexation of carboxylate on smectite surfaces.
Liu, Xiandong; Lu, Xiancai; Zhang, Yingchun; Zhang, Chi; Wang, Rucheng
2017-07-19
We report a first principles molecular dynamics (FPMD) study of carboxylate complexation on clay surfaces. By taking acetate as a model carboxylate, we investigate its inner-sphere complexes adsorbed on clay edges (including (010) and (110) surfaces) and in interlayer space. Simulations show that acetate forms stable monodentate complexes on edge surfaces and a bidentate complex with Ca 2+ in the interlayer region. The free energy calculations indicate that the complexation on edge surfaces is slightly more stable than in interlayer space. By integrating pK a s and desorption free energies of Al coordinated water calculated previously (X. Liu, X. Lu, E. J. Meijer, R. Wang and H. Zhou, Geochim. Cosmochim. Acta, 2012, 81, 56-68; X. Liu, J. Cheng, M. Sprik, X. Lu and R. Wang, Geochim. Cosmochim. Acta, 2014, 140, 410-417), the pH dependence of acetate complexation has been revealed. It shows that acetate forms inner-sphere complexes on (110) in a very limited mildly acidic pH range while it can complex on (010) in the whole common pH range. The results presented in this study form a physical basis for understanding the geochemical processes involving clay-organics interactions.
Self-templating synthesis of hollow spheres of MOFs and their derived nanostructures.
Chuan Tan, Ying; Chun Zeng, Hua
2016-10-04
An aqueous one-pot self-templating synthesis method to prepare highly uniform ZIF-67 hollow spheres (ZIF-67-HS) and their transition metal-doped derivatives (M/ZIF-67-HS, M = Cu and/or Zn) was developed. Extension of this approach to another important class of MOFs (metal carboxylates; e.g., HKUST-1) and facile design of derived nanostructures with complex architectures were also achieved.
A Scope of Learner Behaviors in Reading.
ERIC Educational Resources Information Center
Lake Washington School District 414, Kirkland, WA.
This guide reflects the definition of reading as a complex intellectual act involving a variety of behaviors to decode and comprehend printed symbols and is intended to help the teacher be aware of all the skills within each dimension of the reading act. The reading skills are described in terms of learner behavior and a criterion reference is…
Zhang, Wanlin; Gao, Ning; Cui, Jiecheng; Wang, Chen; Wang, Shiqiang; Zhang, Guanxin; Dong, Xiaobiao
2017-01-01
By simultaneously exploiting the unique properties of ionic liquids and aggregation-induced emission (AIE) luminogens, as well as photonic structures, a novel customizable sensing system for multi-analytes was developed based on a single AIE-doped poly(ionic liquid) photonic sphere. It was found that due to the extraordinary multiple intermolecular interactions involved in the ionic liquid units, one single sphere could differentially interact with broader classes of analytes, thus generating response patterns with remarkable diversity. Moreover, the optical properties of both the AIE luminogen and photonic structure integrated in the poly(ionic liquid) sphere provide multidimensional signal channels for transducing the involved recognition process in a complementary manner and the acquisition of abundant and sufficient sensing information could be easily achieved on only one sphere sensor element. More importantly, the sensing performance of our poly(ionic liquid) photonic sphere is designable and customizable through a simple ion-exchange reaction and target-oriented multi-analyte sensing can be conveniently realized using a selective receptor species, such as counterions, showing great flexibility and extendibility. The power of our single sphere-based customizable sensing system was exemplified by the successful on-demand detection and discrimination of four multi-analyte challenge systems: all 20 natural amino acids, nine important phosphate derivatives, ten metal ions and three pairs of enantiomers. To further demonstrate the potential of our spheres for real-life application, 20 amino acids in human urine and their 26 unprecedented complex mixtures were also discriminated between by the single sphere-based array. PMID:28989662
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Athavale, M. M.; Lattime, S. B.; Braun, M. J.
1998-01-01
A videotape presentation of flow in a packed bed of spheres is provided. The flow experiment consisted of three principal elements: (1) an oil tunnel 76.2 mm by 76.2 mm in cross section, (2) a packed bed of spheres in regular and irregular arrays, and (3) a flow characterization methodology, either (a) full flow field tracking (FFFT) or (b) computational fluid dynamic (CFD) simulation. The refraction indices of the oil and the test array of spheres were closely matched, and the flow was seeded with aluminum oxide particles. Planar laser light provided a two-dimensional projection of the flow field, and a traverse simulated a three-dimensional image of the entire flow field. Light focusing and reflection rendered the spheres black, permitting visualization of the planar circular interfaces in both the axial and transverse directions. Flows were observed near the wall-sphere interface and within the set of spheres. The CFD model required that a representative section of a packed bed be formed and gridded, enclosing and cutting six spheres so that symmetry conditions could be imposed at all cross-boundaries. Simulations had to be made with the flow direction at right angles to that used in the experiments, however, to take advantage of flow symmetry. Careful attention to detail was required for proper gridding. The flow field was three-dimensional and complex to describe, yet the most prominent finding was flow threads, as computed in the representative 'cube' of spheres with face symmetry and conclusively demonstrated experimentally herein. Random packing and bed voids tended to disrupt the laminar flow, creating vortices.
Enhanced decoding for the Galileo S-band mission
NASA Technical Reports Server (NTRS)
Dolinar, S.; Belongie, M.
1993-01-01
A coding system under consideration for the Galileo S-band low-gain antenna mission is a concatenated system using a variable redundancy Reed-Solomon outer code and a (14,1/4) convolutional inner code. The 8-bit Reed-Solomon symbols are interleaved to depth 8, and the eight 255-symbol codewords in each interleaved block have redundancies 64, 20, 20, 20, 64, 20, 20, and 20, respectively (or equivalently, the codewords have 191, 235, 235, 235, 191, 235, 235, and 235 8-bit information symbols, respectively). This concatenated code is to be decoded by an enhanced decoder that utilizes a maximum likelihood (Viterbi) convolutional decoder; a Reed-Solomon decoder capable of processing erasures; an algorithm for declaring erasures in undecoded codewords based on known erroneous symbols in neighboring decodable words; a second Viterbi decoding operation (redecoding) constrained to follow only paths consistent with the known symbols from previously decodable Reed-Solomon codewords; and a second Reed-Solomon decoding operation using the output from the Viterbi redecoder and additional erasure declarations to the extent possible. It is estimated that this code and decoder can achieve a decoded bit error rate of 1 x 10(exp -7) at a concatenated code signal-to-noise ratio of 0.76 dB. By comparison, a threshold of 1.17 dB is required for a baseline coding system consisting of the same (14,1/4) convolutional code, a (255,223) Reed-Solomon code with constant redundancy 32 also interleaved to depth 8, a one-pass Viterbi decoder, and a Reed-Solomon decoder incapable of declaring or utilizing erasures. The relative gain of the enhanced system is thus 0.41 dB. It is predicted from analysis based on an assumption of infinite interleaving that the coding gain could be further improved by approximately 0.2 dB if four stages of Viterbi decoding and four levels of Reed-Solomon redundancy are permitted.
Confirmation of this effect and specification of the optimum four-level redundancy profile for depth-8 interleaving are currently in progress.
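The rate arithmetic behind the variable-redundancy profile above can be checked with a short sketch. The numbers are taken from the abstract; the function names are illustrative, not from the original system:

```python
# Sketch: overall rate of the depth-8 interleaved Reed-Solomon block
# described above, concatenated with the (14, 1/4) convolutional inner code.
# Redundancy values come from the abstract; helper names are ours.

REDUNDANCIES = [64, 20, 20, 20, 64, 20, 20, 20]  # per 255-symbol codeword
N = 255            # Reed-Solomon codeword length in 8-bit symbols
INNER_RATE = 1 / 4 # (14, 1/4) convolutional inner code

def rs_block_rate(redundancies, n=N):
    """Information fraction of one interleaved block."""
    info = sum(n - r for r in redundancies)  # 2*191 + 6*235 = 1792 symbols
    return info / (n * len(redundancies))    # 1792 / 2040

def concatenated_rate(redundancies):
    """Overall rate of the concatenated outer + inner code."""
    return rs_block_rate(redundancies) * INNER_RATE

print(round(rs_block_rate(REDUNDANCIES), 4))    # 0.8784
print(round(concatenated_rate(REDUNDANCIES), 4))  # 0.2196
```

So the average outer-code redundancy of the profile sits between the (255,191) and (255,235) extremes, slightly higher than the baseline (255,223) code's rate of 223/255.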
Complex patchy colloids shaped from deformable seed particles through capillary interactions.
Meester, V; Kraft, D J
2018-02-14
We investigate the mechanisms underlying the reconfiguration of random aggregates of spheres through capillary interactions, the so-called "colloidal recycling" method, to fabricate a wide variety of patchy particles. We explore the influence of capillary forces on clusters of deformable seed particles by systematically varying the crosslink density of the spherical seeds. Spheres with a poorly crosslinked polymer network strongly deform due to capillary forces and merge into large spheres. With increasing crosslink density and therefore rigidity, the shape of the spheres is increasingly preserved during reconfiguration, yielding patchy particles of well-defined shape for up to five spheres. In particular, we find that the aspect ratio between the length and width of dumbbells, L/W, increases with the crosslink density (cd) as L/W = B - A·exp(-cd/C). For clusters consisting of more than five spheres, the particle deformability furthermore determines the patch arrangement of the resulting particles. The reconfiguration pathway of clusters of six densely or poorly crosslinked seeds leads to octahedral and polytetrahedral shaped patchy particles, respectively. For seven particles, several geometries were obtained, with a preference for pentagonal dipyramids for the rigid spheres, while the soft spheres rarely arrived in these structures. Even larger clusters of over 15 particles form non-uniform, often aspherical shapes. We discuss how the reconfiguration pathway is largely influenced by confinement and geometric constraints. The key factor which dominates during reconfiguration depends on the deformability of the spherical seed particles.
Modulation Depth Estimation and Variable Selection in State-Space Models for Neural Interfaces
Hochberg, Leigh R.; Donoghue, John P.; Brown, Emery N.
2015-01-01
Rapid developments in neural interface technology are making it possible to record increasingly large signal sets of neural activity. Various factors such as asymmetrical information distribution and across-channel redundancy may, however, limit the benefit of high-dimensional signal sets, and the increased computational complexity may not yield corresponding improvement in system performance. High-dimensional system models may also lead to overfitting and lack of generalizability. To address these issues, we present a generalized modulation depth measure using the state-space framework that quantifies the tuning of a neural signal channel to relevant behavioral covariates. For a dynamical system, we develop computationally efficient procedures for estimating modulation depth from multivariate data. We show that this measure can be used to rank neural signals and select an optimal channel subset for inclusion in the neural decoding algorithm. We present a scheme for choosing the optimal subset based on model order selection criteria. We apply this method to neuronal ensemble spike-rate decoding in neural interfaces, using our framework to relate motor cortical activity with intended movement kinematics. With offline analysis of intracortical motor imagery data obtained from individuals with tetraplegia using the BrainGate neural interface, we demonstrate that our variable selection scheme is useful for identifying and ranking the most information-rich neural signals. We demonstrate that our approach offers several orders of magnitude lower complexity but virtually identical decoding performance compared to greedy search and other selection schemes. Our statistical analysis shows that the modulation depth of human motor cortical single-unit signals is well characterized by the generalized Pareto distribution. Our variable selection scheme has wide applicability in problems involving multisensor signal modeling and estimation in biomedical engineering systems. 
PMID:25265627
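The channel-ranking idea in the abstract above can be illustrated with a toy sketch: score each channel by how much of its variance a linear model of the behavioral covariate explains, then rank channels by that score. This is only a stand-in for the paper's state-space modulation-depth estimator; the R²-based score and all names below are our assumptions:

```python
import numpy as np

def rank_channels(spikes, kinematics):
    """Rank signal channels by the R^2 of a per-channel linear fit to a
    behavioral covariate, a crude proxy for modulation depth."""
    X = np.column_stack([kinematics, np.ones_like(kinematics)])
    scores = []
    for ch in range(spikes.shape[1]):
        y = spikes[:, ch]
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        scores.append(1 - resid.var() / y.var())
    scores = np.array(scores)
    return np.argsort(scores)[::-1], scores  # best channel first

# Toy data: channel 0 is strongly tuned, channel 2 weakly, channel 1 not at all.
rng = np.random.default_rng(0)
vel = rng.standard_normal(500)
spikes = np.column_stack([
    2.0 * vel + 0.1 * rng.standard_normal(500),
    rng.standard_normal(500),
    0.3 * vel + rng.standard_normal(500),
])
order, scores = rank_channels(spikes, vel)
print(order[0])  # channel 0 ranks first
```

A subset selected this way could then be compared across sizes with a model order criterion, as the paper does with its state-space framework.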
Encoding and Decoding Models in Cognitive Electrophysiology
Holdgraf, Christopher R.; Rieger, Jochem W.; Micheli, Cristiano; Martin, Stephanie; Knight, Robert T.; Theunissen, Frederic E.
2017-01-01
Cognitive neuroscience has seen rapid growth in the size and complexity of data recorded from the human brain as well as in the computational tools available to analyze this data. This data explosion has resulted in an increased use of multivariate, model-based methods for asking neuroscience questions, allowing scientists to investigate multiple hypotheses with a single dataset, to use complex, time-varying stimuli, and to study the human brain under more naturalistic conditions. These tools come in the form of “Encoding” models, in which stimulus features are used to model brain activity, and “Decoding” models, in which neural features are used to generate a stimulus output. Here we review the current state of encoding and decoding models in cognitive electrophysiology and provide a practical guide toward conducting experiments and analyses in this emerging field. Our examples focus on using linear models in the study of human language and audition. We show how to calculate auditory receptive fields from natural sounds as well as how to decode neural recordings to predict speech. The paper aims to be a useful tutorial to these approaches, and a practical introduction to using machine learning and applied statistics to build models of neural activity. The data analytic approaches we discuss may also be applied to other sensory modalities, motor systems, and cognitive systems, and we cover some examples in these areas. In addition, a collection of Jupyter notebooks is publicly available as a complement to the material covered in this paper, providing code examples and tutorials for predictive modeling in Python. The aim is to provide a practical understanding of predictive modeling of human brain data and to propose best-practices in conducting these analyses. PMID:29018336
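Linear encoding models of the kind reviewed above are commonly fit with ridge regression, mapping stimulus features to neural activity. A minimal closed-form sketch (our own illustration, not code from the paper's notebooks) is:

```python
import numpy as np

def fit_ridge(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X'X + alpha*I)^-1 X'y.
    In an encoding model, X holds stimulus features (e.g. spectrogram
    bins over time lags) and y the measured neural response."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ y)

# Synthetic check: recover known weights from noisy responses.
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 5))            # 1000 samples, 5 features
w_true = np.array([1.0, -2.0, 0.0, 0.5, 3.0]) # ground-truth receptive field
y = X @ w_true + 0.1 * rng.standard_normal(1000)
w_hat = fit_ridge(X, y, alpha=0.1)
print(np.allclose(w_hat, w_true, atol=0.1))   # True
```

The same fit run in the reverse direction, with neural features as X and stimulus values as y, gives the corresponding decoding model.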
Grootswagers, Tijl; Wardle, Susan G; Carlson, Thomas A
2017-04-01
Multivariate pattern analysis (MVPA) or brain decoding methods have become standard practice in analyzing fMRI data. Although decoding methods have been extensively applied in brain-computer interfaces, these methods have only recently been applied to time series neuroimaging data such as MEG and EEG to address experimental questions in cognitive neuroscience. In a tutorial style review, we describe a broad set of options to inform future time series decoding studies from a cognitive neuroscience perspective. Using example MEG data, we illustrate the effects that different options in the decoding analysis pipeline can have on experimental results where the aim is to "decode" different perceptual stimuli or cognitive states over time from dynamic brain activation patterns. We show that decisions made at both preprocessing (e.g., dimensionality reduction, subsampling, trial averaging) and decoding (e.g., classifier selection, cross-validation design) stages of the analysis can significantly affect the results. In addition to standard decoding, we describe extensions to MVPA for time-varying neuroimaging data including representational similarity analysis, temporal generalization, and the interpretation of classifier weight maps. Finally, we outline important caveats in the design and interpretation of time series decoding experiments.
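The time series decoding pipeline described above can be sketched end to end: train and cross-validate a classifier independently at each time point, yielding a decoding-accuracy time course. To stay self-contained this sketch uses a simple nearest-class-mean classifier rather than the classifiers compared in the review; all names and the toy data are ours:

```python
import numpy as np

def nearest_mean_acc(train_x, train_y, test_x, test_y):
    """Accuracy of a two-class nearest-class-mean classifier."""
    m0 = train_x[train_y == 0].mean(axis=0)
    m1 = train_x[train_y == 1].mean(axis=0)
    pred = (np.abs(test_x - m1).sum(axis=1) <
            np.abs(test_x - m0).sum(axis=1)).astype(int)
    return (pred == test_y).mean()

def decode_over_time(data, labels, n_folds=5):
    """data: (trials, channels, timepoints). Returns the cross-validated
    decoding accuracy at each timepoint."""
    n_trials, _, n_times = data.shape
    folds = np.arange(n_trials) % n_folds
    acc = np.zeros(n_times)
    for t in range(n_times):
        for f in range(n_folds):
            tr, te = folds != f, folds == f
            acc[t] += nearest_mean_acc(data[tr, :, t], labels[tr],
                                       data[te, :, t], labels[te])
    return acc / n_folds

# Toy data: the class difference appears only from timepoint 10 onward.
rng = np.random.default_rng(2)
labels = np.tile([0, 1], 50)                  # 100 trials, 2 conditions
data = rng.standard_normal((100, 8, 20))      # 8 channels, 20 timepoints
data[labels == 1, :, 10:] += 1.5
acc = decode_over_time(data, labels)
print(acc[:10].mean(), acc[10:].mean())       # ~chance early, high late
```

Design choices the review discusses, such as trial averaging, dimensionality reduction, and the cross-validation scheme, would all slot into this loop before or around the classifier call.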
NASA Astrophysics Data System (ADS)
Mirkovic, Bojana; Debener, Stefan; Jaeger, Manuela; De Vos, Maarten
2015-08-01
Objective. Recent studies have provided evidence that temporal envelope driven speech decoding from high-density electroencephalography (EEG) and magnetoencephalography recordings can identify the attended speech stream in a multi-speaker scenario. The present work replicated the previous high density EEG study and investigated the necessary technical requirements for practical attended speech decoding with EEG. Approach. Twelve normal hearing participants attended to one out of two simultaneously presented audiobook stories, while high density EEG was recorded. An offline iterative procedure eliminating those channels contributing the least to decoding provided insight into the necessary channel number and optimal cross-subject channel configuration. Aiming towards the future goal of near real-time classification with an individually trained decoder, the minimum duration of training data necessary for successful classification was determined by using a chronological cross-validation approach. Main results. Close replication of the previously reported results confirmed the method robustness. Decoder performance remained stable from 96 channels down to 25. Furthermore, for less than 15 min of training data, the subject-independent (pre-trained) decoder performed better than an individually trained decoder did. Significance. Our study complements previous research and provides information suggesting that efficient low-density EEG online decoding is within reach.
A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting
Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao
2014-01-01
We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813
Light-triggered self-assembly of triarylamine-based nanospheres
NASA Astrophysics Data System (ADS)
Moulin, Emilie; Niess, Frédéric; Fuks, Gad; Jouault, Nicolas; Buhler, Eric; Giuseppone, Nicolas
2012-10-01
Tailored triarylamine units modified with terpyridine ligands were coordinated to Zn2+ ions and characterized as discrete dimeric entities. Interestingly, when these complexes were subsequently irradiated with simple visible light in chloroform, they readily self-assembled into monodisperse spheres with a mean diameter of 160 nm. Electronic supplementary information (ESI) available: Synthetic procedures and products' characterization (2-4 and 6-9). 1H NMR titration of compound 6 by Zn(OTf)2 to form complex 7. Kinetic measurements by UV-Vis-NIR spectroscopy. Transmission electron microscopy imaging for complexes 8 and 9. UV-Vis-NIR for an Fe2+ analogue of complex 7. Dynamic light scattering and time autocorrelation function for self-assembly of complexes 7-9. Copies of 1H and 13C NMR spectra for compounds 2-4 and 6. See DOI: 10.1039/c2nr32168h
Decoding Facial Expressions: A New Test with Decoding Norms.
ERIC Educational Resources Information Center
Leathers, Dale G.; Emigh, Ted H.
1980-01-01
Describes the development and testing of a new facial meaning sensitivity test designed to determine how specialized are the meanings that can be decoded from facial expressions. Demonstrates the use of the test to measure a receiver's current level of skill in decoding facial expressions. (JMF)
Feedback for reinforcement learning based brain-machine interfaces using confidence metrics.
Prins, Noeline W; Sanchez, Justin C; Prasad, Abhishek
2017-06-01
For brain-machine interfaces (BMI) to be used in activities of daily living by paralyzed individuals, the BMI should be as autonomous as possible. One of the challenges is how the feedback is extracted and utilized in the BMI. Our long-term goal is to create autonomous BMIs that can utilize an evaluative feedback from the brain to update the decoding algorithm and use it intelligently in order to adapt the decoder. In this study, we show how to extract the necessary evaluative feedback from a biologically realistic (synthetic) source, use both the quantity and the quality of the feedback, and how that feedback information can be incorporated into a reinforcement learning (RL) controller architecture to maximize its performance. Motivated by the perception-action-reward cycle (PARC) in the brain which links reward for cognitive decision making and goal-directed behavior, we used a reward-based RL architecture named Actor-Critic RL as the model. Instead of using an error signal towards building an autonomous BMI, we envision using a reward signal from the nucleus accumbens (NAcc) which plays a key role in the linking of reward to motor behaviors. To deal with the complexity and non-stationarity of biological reward signals, we used a confidence metric to indicate the degree of feedback accuracy. This confidence was added to the Actor's weight update equation in the RL controller architecture. If the confidence was high (>0.2), the BMI decoder used this feedback to update its parameters. However, when the confidence was low, the BMI decoder ignored the feedback and did not update its parameters. The range between high confidence and low confidence was termed as the 'ambiguous' region. When the feedback was within this region, the BMI decoder updated its weight at a lower rate than when fully confident, which was decided by the confidence.
We used two biologically realistic models to generate synthetic data for MI (Izhikevich model) and NAcc (Humphries model) to validate the proposed controller architecture. In this work, we show how the overall performance of the BMI was improved by using a threshold close to the decision boundary to reject erroneous feedback. Additionally, we show the stability of the system improved when the feedback was used with a threshold. The result of this study is a step towards making BMIs autonomous. While our method is not fully autonomous, the results demonstrate that extensive training times necessary at the beginning of each BMI session can be significantly decreased. In our approach, decoder training time was limited to only 10 trials in the first BMI session. Subsequent sessions used previous session weights to initialize the decoder. We also present a method where the use of a threshold can be applied to any decoder with a feedback signal that is less than perfect so that erroneous feedback can be avoided and the stability of the system can be increased.
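The confidence-gating rule described above can be sketched as a small update function: a full update above the high-confidence threshold, no update below the low one, and a confidence-scaled update in the ambiguous region between them. The 0.2 figure comes from the abstract; the low threshold, the linear scaling, and all names are our assumptions:

```python
import numpy as np

HIGH, LOW = 0.2, 0.05  # HIGH follows the abstract's >0.2 figure; LOW is assumed

def gated_update(weights, gradient, confidence, lr=0.01):
    """Confidence-gated actor weight update (sketch).
    Full step when confident, no step when not, scaled in the
    'ambiguous' region in between."""
    if confidence > HIGH:
        scale = 1.0
    elif confidence < LOW:
        scale = 0.0                          # ignore unreliable feedback
    else:
        scale = (confidence - LOW) / (HIGH - LOW)  # slower learning
    return weights + lr * scale * gradient

w = np.zeros(3)
g = np.ones(3)
print(gated_update(w, g, confidence=0.5))   # full step
print(gated_update(w, g, confidence=0.01))  # no step
```

In the paper's architecture the gradient term would be the Actor's error-modulated update, with the confidence derived from the NAcc-based critic signal.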
Edge-Related Activity Is Not Necessary to Explain Orientation Decoding in Human Visual Cortex.
Wardle, Susan G; Ritchie, J Brendan; Seymour, Kiley; Carlson, Thomas A
2017-02-01
Multivariate pattern analysis is a powerful technique; however, a significant theoretical limitation in neuroscience is the ambiguity in interpreting the source of decodable information used by classifiers. This is exemplified by the continued controversy over the source of orientation decoding from fMRI responses in human V1. Recently Carlson (2014) identified a potential source of decodable information by modeling voxel responses based on the Hubel and Wiesel (1972) ice-cube model of visual cortex. The model revealed that activity associated with the edges of gratings covaries with orientation and could potentially be used to discriminate orientation. Here we empirically evaluate whether "edge-related activity" underlies orientation decoding from patterns of BOLD response in human V1. First, we systematically mapped classifier performance as a function of stimulus location using population receptive field modeling to isolate each voxel's overlap with a large annular grating stimulus. Orientation was decodable across the stimulus; however, peak decoding performance occurred for voxels with receptive fields closer to the fovea and overlapping with the inner edge. Critically, we did not observe the expected second peak in decoding performance at the outer stimulus edge as predicted by the edge account. Second, we evaluated whether voxels that contribute most to classifier performance have receptive fields that cluster in cortical regions corresponding to the retinotopic location of the stimulus edge. Instead, we find the distribution of highly weighted voxels to be approximately random, with a modest bias toward more foveal voxels. Our results demonstrate that edge-related activity is likely not necessary for orientation decoding. A significant theoretical limitation of multivariate pattern analysis in neuroscience is the ambiguity in interpreting the source of decodable information used by classifiers. 
For example, orientation can be decoded from BOLD activation patterns in human V1, even though orientation columns are at a finer spatial scale than 3T fMRI. Consequently, the source of decodable information remains controversial. Here we test the proposal that information related to the stimulus edges underlies orientation decoding. We map voxel population receptive fields in V1 and evaluate orientation decoding performance as a function of stimulus location in retinotopic cortex. We find orientation is decodable from voxels whose receptive fields do not overlap with the stimulus edges, suggesting edge-related activity does not substantially drive orientation decoding.
Tail Biting Trellis Representation of Codes: Decoding and Construction
NASA Technical Reports Server (NTRS)
Shao, Rose Y.; Lin, Shu; Fossorier, Marc
1999-01-01
This paper presents two new iterative algorithms for decoding linear codes based on their tail biting trellises, one unidirectional and the other bidirectional. Both algorithms are computationally efficient and achieve virtually optimum error performance with a small number of decoding iterations. They outperform all previous suboptimal decoding algorithms. The bidirectional algorithm also reduces decoding delay. Also presented in the paper is a method for constructing tail biting trellises for linear block codes.
Visual perception as retrospective Bayesian decoding from high- to low-level features.
Ding, Stephanie; Cueva, Christopher J; Tsodyks, Misha; Qian, Ning
2017-10-24
When a stimulus is presented, its encoding is known to progress from low- to high-level features. How these features are decoded to produce perception is less clear, and most models assume that decoding follows the same low- to high-level hierarchy of encoding. There are also theories arguing for global precedence, reversed hierarchy, or bidirectional processing, but they are descriptive without quantitative comparison with human perception. Moreover, observers often inspect different parts of a scene sequentially to form overall perception, suggesting that perceptual decoding requires working memory, yet few models consider how working-memory properties may affect decoding hierarchy. We probed decoding hierarchy by comparing absolute judgments of single orientations and relative/ordinal judgments between two sequentially presented orientations. We found that lower-level, absolute judgments failed to account for higher-level, relative/ordinal judgments. However, when ordinal judgment was used to retrospectively decode memory representations of absolute orientations, striking aspects of absolute judgments, including the correlation and forward/backward aftereffects between two reported orientations in a trial, were explained. We propose that the brain prioritizes decoding of higher-level features because they are more behaviorally relevant, and more invariant and categorical, and thus easier to specify and maintain in noisy working memory, and that more reliable higher-level decoding constrains less reliable lower-level decoding.
Pattern-oriented modeling of agent-based complex systems: Lessons from ecology
Grimm, Volker; Revilla, Eloy; Berger, Uta; Jeltsch, Florian; Mooij, Wolf M.; Railsback, Steven F.; Thulke, Hans-Hermann; Weiner, Jacob; Wiegand, Thorsten; DeAngelis, Donald L.
2005-01-01
Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.
Decoding and Encoding Facial Expressions in Preschool-Age Children.
ERIC Educational Resources Information Center
Zuckerman, Miron; Przewuzman, Sylvia J.
1979-01-01
Preschool-age children drew, decoded, and encoded facial expressions depicting five different emotions. Accuracy of drawing, decoding and encoding each of the five emotions was consistent across the three tasks; decoding ability was correlated with drawing ability among female subjects, but neither of these abilities was correlated with encoding…
Multichannel error correction code decoder
NASA Technical Reports Server (NTRS)
Wagner, Paul K.; Ivancic, William D.
1993-01-01
A brief overview of a processing satellite for a mesh very-small-aperture terminal (VSAT) communications network is provided. The multichannel error correction code (ECC) decoder system, the uplink signal generation and link simulation equipment, and the time-shared decoder are described. The testing is discussed. Applications of the time-shared decoder are recommended.
Colloidal alloys with preassembled clusters and spheres.
Ducrot, Étienne; He, Mingxin; Yi, Gi-Ra; Pine, David J
2017-06-01
Self-assembly is a powerful approach for constructing colloidal crystals, where spheres, rods or faceted particles can build up a myriad of structures. Nevertheless, many complex or low-coordination architectures, such as diamond, pyrochlore and other sought-after lattices, have eluded self-assembly. Here we introduce a new design principle based on preassembled components of the desired superstructure and programmed nearest-neighbour DNA-mediated interactions, which allows the formation of otherwise unattainable structures. We demonstrate the approach using preassembled colloidal tetrahedra and spheres, obtaining a class of colloidal superstructures, including cubic and tetragonal colloidal crystals, with no known atomic analogues, as well as percolating low-coordination diamond and pyrochlore sublattices never assembled before.
A software simulation study of a (255,223) Reed-Solomon encoder-decoder
NASA Technical Reports Server (NTRS)
Pollara, F.
1985-01-01
A set of software programs which simulates a (255,223) Reed-Solomon encoder/decoder pair is described. The transform decoder algorithm uses a modified Euclid algorithm, and closely follows the pipeline architecture proposed for the hardware decoder. Uncorrectable error patterns are detected by a simple test, and the inverse transform is computed by a finite field FFT. Numerical examples of the decoder operation are given for some test codewords, with and without errors. The use of the software package is briefly described.
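The finite-field arithmetic at the heart of any such Reed-Solomon encoder/decoder can be sketched with log/antilog tables over GF(256). This is a generic illustration, not the simulator's code; the primitive polynomial 0x11d below is a common choice and may differ from the one used for the (255,223) code described:

```python
# GF(256) multiplication and inversion via log/antilog tables, the core
# primitive of a Reed-Solomon codec. Primitive polynomial 0x11d
# (x^8 + x^4 + x^3 + x^2 + 1) is a common choice; the simulator
# described above may use a different field polynomial.

EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:          # reduce modulo the primitive polynomial
        x ^= 0x11D
for i in range(255, 512):  # duplicate the table so products need no modulo
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    """Multiply two field elements: a*b = antilog(log a + log b)."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def gf_inv(a):
    """Multiplicative inverse of a nonzero field element."""
    return EXP[255 - LOG[a]]

# Field sanity checks
assert gf_mul(7, gf_inv(7)) == 1
assert gf_mul(3, gf_mul(5, 9)) == gf_mul(gf_mul(3, 5), 9)
```

Polynomial evaluation, syndrome computation, and the Euclid-algorithm step mentioned in the abstract are all built from these two operations plus XOR for field addition.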
Error-trellis syndrome decoding techniques for convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1985-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
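The starting point of any syndrome decoder is the syndrome sequence itself. A minimal sketch, assuming a systematic rate-1/2 code with the illustrative parity polynomial g(D) = 1 + D + D^2 (not the Wyner-Ash code treated in the paper): the decoder re-encodes the received information stream and XORs the result with the received parity stream, so an all-zero syndrome indicates no detectable errors.

```python
# Syndrome computation sketch for a systematic rate-1/2 convolutional code
# (1, g(D)). The taps g(D) = 1 + D + D^2 are an assumed example.

def conv_parity(bits, taps=(1, 1, 1)):
    """Parity stream of a feed-forward encoder with the given taps."""
    state = [0] * (len(taps) - 1)
    out = []
    for b in bits:
        window = [b] + state
        out.append(sum(t & w for t, w in zip(taps, window)) % 2)
        state = [b] + state[:-1]
    return out

def syndrome(rx_info, rx_parity, taps=(1, 1, 1)):
    """XOR of re-encoded parity with received parity."""
    return [a ^ b for a, b in zip(conv_parity(rx_info, taps), rx_parity)]
```

A long run of zero syndromes localizes error-free stretches, which is what makes syndrome-based schemes cheap when the channel is good.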
HCV IRES domain IIb affects the configuration of coding RNA in the 40S subunit's decoding groove
Filbin, Megan E.; Kieft, Jeffrey S.
2011-01-01
Hepatitis C virus (HCV) uses a structured internal ribosome entry site (IRES) RNA to recruit the translation machinery to the viral RNA and begin protein synthesis without the ribosomal scanning process required for canonical translation initiation. Different IRES structural domains are used in this process, which begins with direct binding of the 40S ribosomal subunit to the IRES RNA and involves specific manipulation of the translational machinery. We have found that upon initial 40S subunit binding, the stem–loop domain of the IRES that contains the start codon unwinds and adopts a stable configuration within the subunit's decoding groove. This configuration depends on the sequence and structure of a different stem–loop domain (domain IIb) located far from the start codon in sequence, but spatially proximal in the IRES•40S complex. Mutation of domain IIb results in misconfiguration of the HCV RNA in the decoding groove that includes changes in the placement of the AUG start codon, and a substantial decrease in the ability of the IRES to initiate translation. Our results show that two distal regions of the IRES are structurally communicating at the initial step of 40S subunit binding and suggest that this is an important step in driving protein synthesis. PMID:21606179
Software package for performing experiments about the convolutionally encoded Voyager 1 link
NASA Technical Reports Server (NTRS)
Cheng, U.
1989-01-01
A software package enabling engineers to conduct experiments to determine the actual performance of long constraint-length convolutional codes over the Voyager 1 communication link directly from the Jet Propulsion Laboratory (JPL) has been developed. Using this software, engineers are able to enter test data from the Laboratory in Pasadena, California. The software encodes the data and then sends the encoded data to a personal computer (PC) at the Goldstone Deep Space Complex (GDSC) over telephone lines. The encoded data are sent to the transmitter by the PC at GDSC. The received data, after being echoed back by Voyager 1, are first sent to the PC at GDSC, and then are sent back to the PC at the Laboratory over telephone lines for decoding and further analysis. All of these operations are fully integrated and are completely automatic. Engineers can control the entire software system from the Laboratory. The software encoder and the hardware decoder interface were developed for other applications, and have been modified appropriately for integration into the system so that their existence is transparent to the users. This software provides: (1) data entry facilities, (2) communication protocol for telephone links, (3) data displaying facilities, (4) integration with the software encoder and the hardware decoder, and (5) control functions.
A comparative study of prebiotic and present day translational models
NASA Technical Reports Server (NTRS)
Rein, R.; Raghunathan, G.; Mcdonald, J.; Shibata, M.; Srinivasan, S.
1986-01-01
It is generally recognized that understanding the molecular basis of primitive translation is a fundamental step in developing a theory of the origin of life. However, even in modern molecular biology, the mechanism for the decoding of messenger RNA triplet codons into the amino acid sequence of a protein on the ribosome is understood incompletely. Most of the proposed models for prebiotic translation lack not only experimental support but also careful theoretical scrutiny of their compatibility with well-understood stereochemical and energetic principles of nucleic acid structure, molecular recognition principles, and the chemistry of peptide bond formation. The present studies are concerned with comparative structural modelling and mechanistic simulation of decoding apparatuses ranging from those proposed for prebiotic conditions to the ones involved in modern biology. Any primitive decoding machinery based on nucleic acids and proteins, and most likely the modern-day system, has to satisfy certain geometrical constraints. The charged aminoacyl and peptidyl termini of successive adaptors have to be adjacent in space in order to satisfy the stereochemical requirements for amide bond formation. Simultaneously, the same adaptors have to recognize successive codons on the messenger. This translational complex has to be realized by components that obey nucleic acid conformational principles, stabilities, and specificities. This generalized condition greatly restricts the number of acceptable adaptor structures.
Decoding the genome with an integrative analysis tool: combinatorial CRM Decoder.
Kang, Keunsoo; Kim, Joomyeong; Chung, Jae Hoon; Lee, Daeyoup
2011-09-01
The identification of genome-wide cis-regulatory modules (CRMs) and characterization of their associated epigenetic features are fundamental steps toward the understanding of gene regulatory networks. Although integrative analysis of available genome-wide information can provide new biological insights, the lack of novel methodologies has become a major bottleneck. Here, we present a comprehensive analysis tool called combinatorial CRM decoder (CCD), which utilizes the publicly available information to identify and characterize genome-wide CRMs in a species of interest. CCD first defines a set of the epigenetic features which is significantly associated with a set of known CRMs as a code called 'trace code', and subsequently uses the trace code to pinpoint putative CRMs throughout the genome. Using 61 genome-wide data sets obtained from 17 independent mouse studies, CCD successfully catalogued ∼12 600 CRMs (five distinct classes) including polycomb repressive complex 2 target sites as well as imprinting control regions. Interestingly, we discovered that ∼4% of the identified CRMs belong to at least two different classes named 'multi-functional CRM', suggesting their functional importance for regulating spatiotemporal gene expression. From these examples, we show that CCD can be applied to any potential genome-wide datasets and therefore will shed light on unveiling genome-wide CRMs in various species.
Lysaker, Paul H; Leonhardt, Bethany L; Brüne, Martin; Buck, Kelly D; James, Alison; Vohs, Jenifer; Francis, Michael; Hamm, Jay A; Salvatore, Giampaolo; Ringer, Jamie M; Dimaggio, Giancarlo
2014-09-30
While many with schizophrenia spectrum disorders experience difficulties understanding the feelings of others, little is known about the psychological antecedents of these deficits. To explore these issues we examined whether deficits in mental state decoding, mental state reasoning and metacognitive capacity predict performance on an emotion recognition task. Participants were 115 adults with a schizophrenia spectrum disorder and 58 adults with substance use disorders but no history of a diagnosis of psychosis who completed the Eyes and Hinting Tests. Metacognitive capacity was assessed using the Metacognitive Assessment Scale Abbreviated and emotion recognition was assessed using the Bell Lysaker Emotion Recognition Test. Results revealed that the schizophrenia patients performed more poorly than controls on tests of emotion recognition, mental state decoding, mental state reasoning and metacognition. Lesser capacities for mental state decoding, mental state reasoning and metacognition were all uniquely related to emotion recognition within the schizophrenia group, even after controlling for neurocognition and symptoms in a stepwise multiple regression. Results suggest that deficits in emotion recognition in schizophrenia may partly result from a combination of impairments in the ability to judge the cognitive and affective states of others and difficulties forming complex representations of self and others.
Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation
Oliva, Aude
2017-01-01
Abstract Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals. PMID:28451630
Khaĭrulina, Iu S; Molotkov, M V; Bulygin, K N; Graĭfer, D M; Ven'iaminova, A G; Karpova, G G
2008-01-01
Protein S15 is a characteristic component of the mammalian 80S ribosome that neighbors the mRNA codon at the decoding site and the downstream triplets. In this study we determined the S15 protein fragments located close to mRNA positions +4 to +12 with respect to the first nucleotide of the P site codon on the human ribosome. For cross-linking to ribosomal protein S15, a set of mRNAs was used that contained the triplet UUU/UUC at the 5'-termini and a perfluorophenyl azide-modified uridine in position 3' of this triplet. The locations of the mRNA analogues on the ribosome were governed by tRNAPhe cognate to the UUU/UUC triplet targeted to the P site. Cross-linked S15 protein was isolated from complexes of 80S ribosomes with tRNAPhe and mRNA analogues that had been irradiated with mild UV light, with subsequent cleavage with CNBr, which splits the polypeptide chain after methionines. Analysis of the modified oligopeptides resulting from the cleavage revealed that in all cases the cross-linking site was located in the C-terminal fragment 111-145 of protein S15, indicating that this fragment is involved in formation of the decoding site of the eukaryotic ribosome.
High data rate Reed-Solomon encoding and decoding using VLSI technology
NASA Technical Reports Server (NTRS)
Miller, Warner; Morakis, James
1987-01-01
Presented is an implementation of a Reed-Solomon encoder and decoder that is 16-symbol error correcting; each symbol is 8 bits. This Reed-Solomon (RS) code is an efficient error correcting code that the National Aeronautics and Space Administration (NASA) will use in future space communications missions. A Very Large Scale Integration (VLSI) implementation of the encoder and decoder accepts data rates up to 80 Mbps. A total of seven chips are needed for the decoder (four of the seven decoding chips are customized using 3-micron Complementary Metal Oxide Semiconductor (CMOS) technology) and one chip is required for the encoder. The decoder operates with the symbol clock being the system clock for the chip set. Approximately 1.65 billion Galois Field (GF) operations per second are achieved with the decoder chip set and 640 MOPS are achieved with the encoder chip.
The basis of orientation decoding in human primary visual cortex: fine- or coarse-scale biases?
Maloney, Ryan T
2015-01-01
Orientation signals in human primary visual cortex (V1) can be reliably decoded from the multivariate pattern of activity as measured with functional magnetic resonance imaging (fMRI). The precise underlying source of these decoded signals (whether by orientation biases at a fine or coarse scale in cortex) remains a matter of some controversy, however. Freeman and colleagues (J Neurosci 33: 19695-19703, 2013) recently showed that the accuracy of decoding of spiral patterns in V1 can be predicted by a voxel's preferred spatial position (the population receptive field) and its coarse orientation preference, suggesting that coarse-scale biases are sufficient for orientation decoding. Whether they are also necessary for decoding remains an open question, and one with implications for the broader interpretation of multivariate decoding results in fMRI studies.
Emotion Decoding and Incidental Processing Fluency as Antecedents of Attitude Certainty.
Petrocelli, John V; Whitmire, Melanie B
2017-07-01
Previous research demonstrates that attitude certainty influences the degree to which an attitude changes in response to persuasive appeals. In the current research, decoding emotions from facial expressions and incidental processing fluency, during attitude formation, are examined as antecedents of both attitude certainty and attitude change. In Experiment 1, participants who decoded anger or happiness during attitude formation expressed greater attitude certainty and showed more resistance to persuasion than participants who decoded sadness. By manipulating the emotion decoded, the diagnosticity of the processing fluency experienced during emotion decoding, and the gaze direction of the social targets, Experiment 2 suggests that the link between emotion decoding and attitude certainty results from incidental processing fluency. Experiment 3 demonstrated that fluency in processing irrelevant stimuli influences attitude certainty, which in turn influences resistance to persuasion. Implications for appraisal-based accounts of attitude formation and attitude change are discussed.
ERIC Educational Resources Information Center
van Schalkwyk, Gerrit I.; Marin, Carla E.; Ortiz, Mayra; Rolison, Max; Qayyum, Zheala; McPartland, James C.; Lebowitz, Eli R.; Volkmar, Fred R.; Silverman, Wendy K.
2017-01-01
Social media holds promise as a technology to facilitate social engagement, but may displace offline social activities. Adolescents with ASD are well suited to capitalize on the unique features of social media, which requires less decoding of complex social information. In this cross-sectional study, we assessed social media use, anxiety and…
ERIC Educational Resources Information Center
Thomas-Tate, Shurita; Connor, Carol McDonald; Johnson, Lakeisha
2013-01-01
Reading comprehension, defined as the active extraction and construction of meaning from all kinds of text, requires children to fluently decode and understand what they are reading. Basic processes underlying reading comprehension are complex and call on the oral language system and a conscious understanding of this system, i.e., metalinguistic…
Reading Strategies in a L2: A Study on Machine Translation
ERIC Educational Resources Information Center
Karnal, Adriana Riess; Pereira, Vera Vanmacher
2015-01-01
This article aims at understanding cognitive strategies which are involved in reading academic texts in English as a L2/FL. Specifically, we focus on reading comprehension when a text is read either using Google translator or not. From this perspective we must consider the reading process in its complexity not only as a decoding process. We follow…
Decoding Children's Expressions of Affect.
ERIC Educational Resources Information Center
Feinman, Joel A.; Feldman, Robert S.
Mothers' ability to decode the emotional expressions of their male and female children was compared to the decoding ability of non-mothers. Happiness, sadness, fear and anger were induced in children in situations that varied in terms of spontaneous and role-played encoding modes. It was hypothesized that mothers would be more accurate decoders of…
Decoding Area Studies and Interdisciplinary Majors: Building a Framework for Entry-Level Students
ERIC Educational Resources Information Center
MacPherson, Kristina Ruth
2015-01-01
Decoding disciplinary expertise for novices is increasingly part of the undergraduate curriculum. But how might area studies and other interdisciplinary programs, which require integration of courses from multiple disciplines, decode expertise in a similar fashion? Additionally, as a part of decoding area studies and interdisciplines, how might a…
47 CFR 11.12 - Two-tone Attention Signal encoder and decoder.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 1 2011-10-01 2011-10-01 false Two-tone Attention Signal encoder and decoder... SYSTEM (EAS) General § 11.12 Two-tone Attention Signal encoder and decoder. Existing two-tone Attention Signal encoder and decoder equipment type accepted for use as Emergency Broadcast System equipment under...
47 CFR 11.12 - Two-tone Attention Signal encoder and decoder.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Two-tone Attention Signal encoder and decoder... SYSTEM (EAS) General § 11.12 Two-tone Attention Signal encoder and decoder. Existing two-tone Attention Signal encoder and decoder equipment type accepted for use as Emergency Broadcast System equipment under...
Sequential Syndrome Decoding of Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
The algebraic structure of convolutional codes is reviewed, and sequential syndrome decoding is applied to these codes. These concepts are then used to realize, by example, actual sequential decoding using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be utilized to sequentially find the minimum-weight error sequence.
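The stack algorithm itself can be sketched with the standard (unmodified) Fano metric on a toy rate-1/2, K = 3 code with generators (7, 5) octal over a binary symmetric channel; the code, the crossover probability, and the hard-decision metric are illustrative assumptions, not the paper's minimum-weight-error search.

```python
# Stack-algorithm sketch for sequential decoding over a binary symmetric
# channel (BSC) using the standard Fano metric: keep a stack (here a heap)
# of partial paths ordered by metric and always extend the best one.
import heapq
from math import log2

G = (0b111, 0b101)               # generator taps for the (7, 5) code
RATE, P = 0.5, 0.05              # code rate and BSC crossover probability
GOOD = log2(2 * (1 - P)) - RATE  # Fano bit metric when bits agree
BAD = log2(2 * P) - RATE         # Fano bit metric when bits disagree

def encode_bit(state, bit):
    """One trellis step: emit two code bits, shift the 2-bit state."""
    reg = (bit << 2) | state
    out = tuple(bin(reg & g).count("1") % 2 for g in G)
    return out, reg >> 1

def stack_decode(received):
    """received: list of 2-bit tuples; returns the decoded info bits."""
    heap = [(0.0, (), 0)]        # (-metric, info bits so far, state)
    while heap:
        neg_m, info, state = heapq.heappop(heap)
        if len(info) == len(received):
            return list(info)
        for bit in (0, 1):
            out, nstate = encode_bit(state, bit)
            m = sum(GOOD if o == r else BAD
                    for o, r in zip(out, received[len(info)]))
            heapq.heappush(heap, (neg_m - m, info + (bit,), nstate))
```

Because the metric of the correct path keeps growing while wrong paths pay the large BAD penalty, the search usually touches far fewer nodes than a full Viterbi sweep when the channel is good.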
Soltani, Amanallah; Roslan, Samsilah
2013-03-01
Reading decoding ability is a fundamental skill for acquiring the word-specific orthographic information necessary for skilled reading. Decoding ability and its underlying phonological processing skills have been heavily investigated among typically developing students. However, the issue has rarely been examined among students with intellectual disability, who commonly suffer from reading decoding problems. This study aimed to determine the contributions of phonological awareness, phonological short-term memory, and rapid automated naming, as three well-known phonological processing skills, to decoding ability among 60 participants with mild intellectual disability of unspecified origin, ranging from 15 to 23 years old. The results of the correlation analysis revealed that all three aspects of phonological processing are significantly correlated with decoding ability. Furthermore, a series of hierarchical regression analyses indicated that, after controlling for the effect of IQ, phonological awareness and rapid automated naming are two distinct sources of decoding ability, while phonological short-term memory contributes significantly to decoding ability under the realm of phonological awareness.
45 Gb/s low complexity optical front-end for soft-decision LDPC decoders.
Sakib, Meer Nazmus; Moayedi, Monireh; Gross, Warren J; Liboiron-Ladouceur, Odile
2012-07-30
In this paper a low-complexity and energy-efficient 45 Gb/s soft-decision optical front-end to be used with soft-decision low-density parity-check (LDPC) decoders is demonstrated. The results show that the optical front-end exhibits net coding gains of 7.06 and 9.62 dB at post-forward-error-correction bit error rates of 10^-7 and 10^-12 for the long-block-length LDPC(32768,26803) code. The gain over a hard-decision front-end is 1.9 dB for this code. It is shown that the soft-decision circuit can also be used as a 2-bit flash-type analog-to-digital converter (ADC) in conjunction with equalization schemes. At a bit rate of 15 Gb/s, using RS(255,239), LDPC(672,336), (672,504), (672,588), and (1440,1344) with a 6-tap finite impulse response (FIR) equalizer results in optical power savings of 3, 5, 7, 9.5 and 10.5 dB, respectively. The 2-bit flash ADC consumes only 2.71 W at 32 GSamples/s. At 45 GSamples/s the power consumption is estimated to be 4.95 W.
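The 2-bit soft-decision idea can be sketched: three comparator thresholds partition each received sample into four confidence regions, which a soft-decision LDPC decoder consumes as log-likelihood ratios. The thresholds and LLR magnitudes below are illustrative assumptions, not the demonstrated circuit's calibrated values.

```python
# 2-bit "flash" soft-decision sketch: three comparators split the sample
# range into four regions (strong 0, weak 0, weak 1, strong 1).

def quantize_2bit(sample, vref=1.0):
    """Return the 2-bit region index (0..3) for an analog sample."""
    thresholds = (-vref / 2, 0.0, vref / 2)   # three comparators
    return sum(sample > t for t in thresholds)

# Illustrative LLR assigned to each region for a min-sum style decoder.
LLR = {0: -4.0, 1: -1.0, 2: 1.0, 3: 4.0}
```

The extra "weak/strong" distinction is exactly what a hard-decision slicer throws away, and is the source of the soft-decision coding-gain advantage reported in the abstract.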
Mapping the coupled role of structure and materials in mechanics of platelet-matrix composites
NASA Astrophysics Data System (ADS)
Farzanian, Shafee; Shahsavari, Rouzbeh
2018-03-01
Despite significant progress on understanding and mimicking the delicate nano/microstructure of biomaterials such as nacre, decoding the indistinguishable merger of materials and structures in controlling the trade-off in mechanical properties has long been an engineering pursuit. Herein, we focus on an archetype platelet-matrix composite and perform ∼400 nonlinear finite element simulations to decode the complex interplay between various structural features and material characteristics in conferring the balance of mechanical properties. We study various combinatorial models expressed by four key dimensionless parameters, i.e. characteristic platelet length, matrix plasticity, platelet dissimilarity, and overlap offset, whose effects are all condensed in a new unifying parameter, defined as the multiplication of strength, toughness, and stiffness over composite volume. This parameter, which is maximized at a critical characteristic length, controls the transition from intrinsic toughening (matrix-plasticity driven, without crack growth) to extrinsic toughening phenomena involving progressive crack propagation. This finding, combined with various abstract volumetric and radar plots, will not only shed light on decoupling the complex roles of structure and materials in mechanical performance and their trends, but also provide important guidelines for designing lightweight staggered platelet-matrix composites while ensuring the best balance of their mechanical properties.
Grasp movement decoding from premotor and parietal cortex.
Townsend, Benjamin R; Subasi, Erk; Scherberger, Hansjörg
2011-10-05
Despite recent advances in harnessing cortical motor-related activity to control computer cursors and robotic devices, the ability to decode and execute different grasping patterns remains a major obstacle. Here we demonstrate a simple Bayesian decoder for real-time classification of grip type and wrist orientation in macaque monkeys that uses higher-order planning signals from anterior intraparietal cortex (AIP) and ventral premotor cortex (area F5). Real-time decoding was based on multiunit signals, which had similar tuning properties to cells in previous single-unit recording studies. Maximum decoding accuracy for two grasp types (power and precision grip) and five wrist orientations was 63% (chance level, 10%). Analysis of decoder performance showed that grip type decoding was highly accurate (90.6%), with most errors occurring during orientation classification. In a subsequent off-line analysis, we found small but significant performance improvements (mean, 6.25 percentage points) when using an optimized spike-sorting method (superparamagnetic clustering). Furthermore, we observed significant differences in the contributions of F5 and AIP for grasp decoding, with F5 being better suited for classification of the grip type and AIP contributing more toward decoding of object orientation. However, optimum decoding performance was maximal when using neural activity simultaneously from both areas. Overall, these results highlight quantitative differences in the functional representation of grasp movements in AIP and F5 and represent a first step toward using these signals for developing functional neural interfaces for hand grasping.
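A hedged sketch of the kind of Bayesian classifier the study describes: a Poisson naive-Bayes decoder that learns per-channel mean spike counts for each condition (e.g. grip type × wrist orientation) from training trials and assigns a test trial to the condition with the highest summed log-likelihood. The data layout and the two-channel example are hypothetical; the study's actual decoder and features may differ.

```python
# Poisson naive-Bayes classification of multiunit spike-count vectors.
# The constant log(k!) term is dropped since it is the same for every
# condition and cannot change the argmax.
import math

def train(trials):
    """trials: {condition: list of per-channel spike-count vectors}."""
    return {c: [sum(col) / len(vecs) + 1e-6 for col in zip(*vecs)]
            for c, vecs in trials.items()}

def classify(rates, counts):
    """Assign a spike-count vector to the most likely condition."""
    def loglik(means):
        return sum(k * math.log(m) - m for k, m in zip(counts, means))
    return max(rates, key=lambda c: loglik(rates[c]))
```

Because each condition's likelihood is a simple sum over channels, this style of decoder is cheap enough to run in real time, which is what closed-loop grasp decoding requires.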
A Low-Complexity and High-Performance 2D Look-Up Table for LDPC Hardware Implementation
NASA Astrophysics Data System (ADS)
Chen, Jung-Chieh; Yang, Po-Hui; Lain, Jenn-Kaie; Chung, Tzu-Wen
In this paper, we propose a low-complexity, high-efficiency two-dimensional look-up table (2D LUT) for carrying out the sum-product algorithm in the decoding of low-density parity-check (LDPC) codes. Instead of employing adders for the core operation when updating check node messages, in the proposed scheme, the main term and correction factor of the core operation are successfully merged into a compact 2D LUT. Simulation results indicate that the proposed 2D LUT not only attains close-to-optimal bit error rate performance but also enjoys a low complexity advantage that is suitable for hardware implementation.
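The core operation being tabulated is the check-node "box-plus" of two LLRs: a min-sum main term plus a correction factor. Below is a sketch of the merging idea in Python, with an illustrative quantization step and table size; the paper's fixed-point format is not reproduced here.

```python
# Exact box-plus vs. its 2D look-up approximation: quantizing |x| and |y|
# merges the min-sum main term and the correction factor into one table,
# replacing the correction-step adders with a single lookup.
import math

def boxplus(x, y):
    """Exact core operation: ln((1 + e^(x+y)) / (e^x + e^y))."""
    sign = math.copysign(1.0, x) * math.copysign(1.0, y)
    main = min(abs(x), abs(y))
    corr = (math.log1p(math.exp(-abs(x + y)))
            - math.log1p(math.exp(-abs(x - y))))
    return sign * (main + corr)

STEP, N = 0.25, 32  # assumed quantizer step and table dimension
LUT = [[boxplus(i * STEP, j * STEP) for j in range(N)] for i in range(N)]

def boxplus_lut(x, y):
    """Approximate box-plus: one 2D table lookup on quantized magnitudes."""
    sign = math.copysign(1.0, x) * math.copysign(1.0, y)
    i = min(N - 1, int(round(abs(x) / STEP)))
    j = min(N - 1, int(round(abs(y) / STEP)))
    return sign * LUT[i][j]
```

Since the table is indexed only by magnitudes, the sign logic stays outside the LUT, keeping the table a quarter of the size it would otherwise need to be.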
Protecting the proteome: Eukaryotic cotranslational quality control pathways
2014-01-01
The correct decoding of messenger RNAs (mRNAs) into proteins is an essential cellular task. The translational process is monitored by several quality control (QC) mechanisms that recognize defective translation complexes in which ribosomes are stalled on substrate mRNAs. Stalled translation complexes occur when defects in the mRNA template, the translation machinery, or the nascent polypeptide arrest the ribosome during translation elongation or termination. These QC events promote the disassembly of the stalled translation complex and the recycling and/or degradation of the individual mRNA, ribosomal, and/or nascent polypeptide components, thereby clearing the cell of improper translation products and defective components of the translation machinery. PMID:24535822
An Improved Unscented Kalman Filter Based Decoder for Cortical Brain-Machine Interfaces.
Li, Simin; Li, Jie; Li, Zheng
2016-01-01
Brain-machine interfaces (BMIs) seek to connect brains with machines or computers directly, for application in areas such as prosthesis control. For this application, the accuracy of the decoding of movement intentions is crucial. We aim to improve accuracy by designing a better encoding model of primary motor cortical activity during hand movements and combining this with decoder engineering refinements, resulting in a new unscented Kalman filter based decoder, UKF2, which improves upon our previous unscented Kalman filter decoder, UKF1. The new encoding model includes novel acceleration magnitude, position-velocity interaction, and target-cursor-distance features (the decoder does not require target position as input, it is decoded). We add a novel probabilistic velocity threshold to better determine the user's intent to move. We combine these improvements with several other refinements suggested by others in the field. Data from two Rhesus monkeys indicate that the UKF2 generates offline reconstructions of hand movements (mean CC 0.851) significantly more accurately than the UKF1 (0.833) and the popular position-velocity Kalman filter (0.812). The encoding model of the UKF2 could predict the instantaneous firing rate of neurons (mean CC 0.210), given kinematic variables and past spiking, better than the encoding models of these two decoders (UKF1: 0.138, p-v Kalman: 0.098). In closed-loop experiments where each monkey controlled a computer cursor with each decoder in turn, the UKF2 facilitated faster task completion (mean 1.56 s vs. 2.05 s) and higher Fitts's Law bit rate (mean 0.738 bit/s vs. 0.584 bit/s) than the UKF1. These results suggest that the modeling and decoder engineering refinements of the UKF2 improve decoding performance. We believe they can be used to enhance other decoders as well.
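For orientation, the position-velocity Kalman filter used as a baseline above can be sketched for a single movement axis. All matrices below (dynamics A, linear encoding model H, noise covariances) are illustrative assumptions, not values fitted to the monkey data.

```python
# Position-velocity Kalman filter decoder sketch: firing rates are modeled
# as a linear function of the kinematic state, and the filter alternates
# predict and update steps to estimate [position, velocity].
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 0.9]])   # [pos, vel] state transition
W = np.diag([0.01, 0.1])                 # process noise covariance
H = np.array([[0.5, 1.2], [0.3, -0.8]])  # firing rates = H @ [pos, vel]
Q = np.eye(2) * 0.01                     # observation noise covariance

def kf_decode(rates):
    """Decode a sequence of firing-rate vectors into kinematic states."""
    x, P = np.zeros(2), np.eye(2)
    out = []
    for z in rates:
        x, P = A @ x, A @ P @ A.T + W                 # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Q)  # Kalman gain
        x = x + K @ (z - H @ x)                       # update with rates
        P = (np.eye(2) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)
```

The UKF decoders in the abstract replace the linear observation model with a nonlinear encoding model (and the unscented transform), but the predict/update skeleton is the same.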
Pseudoinverse Decoding Process in Delay-Encoded Synthetic Transmit Aperture Imaging.
Gong, Ping; Kolios, Michael C; Xu, Yuan
2016-09-01
Recently, we proposed a new method, delay-encoded STA (DE-STA) imaging, to improve the signal-to-noise ratio of the prebeamformed radio-frequency data in synthetic transmit aperture (STA) imaging. In the decoding process of DE-STA, the equivalent STA data were obtained by directly inverting the coding matrix. This is usually regarded as an ill-posed problem, especially under high noise levels. The pseudoinverse (PI) is usually used instead to obtain a more stable inversion. In this paper, we apply singular value decomposition to the coding matrix to compute the PI. Our numerical studies demonstrate that the singular values of the coding matrix have a special distribution: all the values are the same except for the first and last ones. We compare the PI in two cases: the complete PI (CPI), where all the singular values are kept, and the truncated PI (TPI), where the last and smallest singular value is discarded. The PI (both CPI and TPI) DE-STA processes are tested against noise with both numerical simulations and experiments. The CPI and TPI restore the signals stably, and the noise mainly affects the prebeamformed signals corresponding to the first transmit channel. The difference in overall envelope-detected beamformed image quality between the CPI and TPI is negligible, demonstrating that DE-STA is a relatively stable encoding and decoding technique. Also, based on the special distribution of the singular values of the coding matrix, we propose a new efficient decoding formula based on the conjugate transpose of the coding matrix. We also compare the computational complexity of the direct inverse and the new formula.
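The truncated-pseudoinverse idea can be sketched with an SVD: zero out the smallest singular value so the weakest mode does not amplify noise. The 4×4 diagonal matrix below is an illustrative stand-in with one near-degenerate mode, not the DE-STA coding matrix.

```python
# Truncated pseudoinverse (TPI) via SVD: invert only the well-conditioned
# singular values, discarding the smallest one(s).
import numpy as np

def truncated_pinv(M, drop=1):
    """Pseudoinverse of M with the `drop` smallest singular values zeroed."""
    U, s, Vt = np.linalg.svd(M)
    keep = len(s) - drop
    s_inv = np.zeros_like(s)
    s_inv[:keep] = 1.0 / s[:keep]
    return Vt.T @ np.diag(s_inv) @ U.T

CODING = np.diag([3.0, 2.0, 2.0, 0.01])  # toy matrix, one weak mode
```

Components lying in the kept subspace are recovered exactly, while the full pseudoinverse would multiply noise in the weak mode by 1/0.01 = 100.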
NASA Technical Reports Server (NTRS)
Lahmeyer, Charles R. (Inventor)
1987-01-01
A Reed-Solomon decoder with dedicated hardware for five sequential algorithms was designed with overall pipelining by memory swapping between input, processing and output memories, and internal pipelining through the five algorithms. The code definition used in decoding is specified by a keyword received with each block of data so that a number of different code formats may be decoded by the same hardware.
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Mo, C. D.
1978-01-01
An empirical study of the performance of the Viterbi decoders in bursty channels was carried out and an improved algebraic decoder for nonsystematic codes was developed. The hybrid algorithm was simulated for the (2,1), k = 7 code on a computer using 20 channels having various error statistics, ranging from pure random error to pure bursty channels. The hybrid system outperformed both the algebraic and the Viterbi decoders in every case, except the 1% random error channel where the Viterbi decoder had one bit less decoding error.
Large-Constraint-Length, Fast Viterbi Decoder
NASA Technical Reports Server (NTRS)
Collins, O.; Dolinar, S.; Hsu, In-Shek; Pollara, F.; Olson, E.; Statman, J.; Zimmerman, G.
1990-01-01
Scheme for efficient interconnection makes VLSI design feasible. Concept for fast Viterbi decoder provides for processing of convolutional codes of constraint length K up to 15 and rates of 1/2 to 1/6. Fully parallel (but bit-serial) architecture developed for decoder of K = 7 implemented in single dedicated VLSI circuit chip. Contains six major functional blocks. VLSI circuits perform branch metric computations, add-compare-select operations, and then store decisions in traceback memory. Traceback processor reads appropriate memory locations and puts out decoded bits. Used as building block for decoders of larger K.
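The branch-metric computation, add-compare-select, and traceback stages described above can be illustrated with a toy software decoder for the rate-1/2, K = 3 convolutional code with generators (7, 5) octal (far smaller than the K = 7 to 15 designs discussed; path histories are kept as Python lists rather than a traceback memory):

```python
def conv_encode(bits, K=3, gens=(0b111, 0b101)):
    """Rate-1/len(gens) convolutional encoder, flushed with K-1 zeros."""
    state = 0
    out = []
    for b in bits + [0] * (K - 1):
        state = ((state << 1) | b) & ((1 << K) - 1)
        for g in gens:
            out.append(bin(state & g).count('1') & 1)
    return out

def viterbi_decode(sym, K=3, gens=(0b111, 0b101)):
    """Hard-decision Viterbi decoder (Hamming branch metric)."""
    n_states = 1 << (K - 1)
    INF = float('inf')
    pm = [0.0] + [INF] * (n_states - 1)      # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    nsym = len(gens)
    for t in range(0, len(sym), nsym):
        r = sym[t:t + nsym]
        new_pm = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if pm[s] == INF:
                continue
            for b in (0, 1):
                full = (s << 1) | b           # K-bit register contents
                ns = full & (n_states - 1)    # next state = last K-1 bits
                # Branch metric: Hamming distance to the expected symbols
                exp = [bin(full & g).count('1') & 1 for g in gens]
                bm = sum(e != x for e, x in zip(exp, r))
                m = pm[s] + bm
                if m < new_pm[ns]:            # add-compare-select
                    new_pm[ns] = m
                    new_paths[ns] = paths[s] + [b]
        pm, paths = new_pm, new_paths
    best = min(range(n_states), key=lambda s: pm[s])
    dec = paths[best]
    return dec[:len(dec) - (K - 1)]           # strip the flush bits
```

This code has free distance 5, so any single channel-bit error per constraint span is corrected.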
Locating and decoding barcodes in fuzzy images captured by smart phones
NASA Astrophysics Data System (ADS)
Deng, Wupeng; Hu, Jiwei; Liu, Quan; Lou, Ping
2017-07-01
With the growth of barcodes in commercial use, the demand for detecting barcodes with smart phones has become increasingly pressing. The low quality of barcode images captured by mobile phones often degrades decoding and recognition rates. This paper focuses on locating and decoding EAN-13 barcodes in fuzzy images. We present a more accurate locating algorithm based on segment length and a highly fault-tolerant algorithm for decoding barcodes. Unlike existing approaches, the location algorithm is based on the edge-segment length of EAN-13 barcodes, while the decoding algorithm tolerates fuzzy regions in the barcode image. Experiments performed on damaged, contaminated, and scratched digital images give quite promising results for EAN-13 barcode location and decoding.
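The final digit of an EAN-13 symbol is a checksum, which a decoder can use to reject misreads arising from fuzzy regions. A minimal validity check:

```python
def ean13_is_valid(code):
    """Verify the EAN-13 checksum: digits in odd positions (1st, 3rd, ...)
    have weight 1, even positions weight 3; the weighted digit sum must be
    divisible by 10."""
    if len(code) != 13 or not code.isdigit():
        return False
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(code))
    return total % 10 == 0
```

A fault-tolerant decoder can enumerate candidate digit assignments for the fuzzy region and keep only those that satisfy this checksum.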
Qiu, Cheng-Wei; Li, Le-Wei; Yeo, Tat-Soon; Zouhdi, Saïd
2007-02-01
Vector potential formulation and parametric studies of electromagnetic scattering problems of a sphere characterized by rotationally symmetric anisotropy are studied. Both epsilon and mu tensors are considered herein, and four elementary parameters are utilized to specify the material properties in the structure. The field representations can be obtained in terms of two potentials, and both TE (TM) modes (with respect to r) inside (outside) the sphere can be derived and expressed in terms of a series of fractional-order (in a real or complex number) Riccati-Bessel functions. The effects due to either the electric anisotropy ratio (Ae=epsilont/epsilonr) or the magnetic anisotropy ratio (Am=mut/mur) on the radar cross section (RCS) are considered, and the hybrid effects due to both Ae and Am are also examined extensively. It is found that material anisotropy significantly affects the scattering behavior of three-dimensional dielectric objects. For absorbing spheres, however, Ae or Am no longer plays as significant a role as in lossless dielectric spheres, and the anisotropic dependence of the RCS values is found to be predictable. The hybrid effects of Ae and Am are considered for absorbing spheres as well, and it is found that the RCS can be greatly reduced by controlling the material parameters. Details of the theoretical treatment and numerical results are presented.
Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More
NASA Technical Reports Server (NTRS)
Kou, Yu; Lin, Shu; Fossorier, Marc
1999-01-01
Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported, and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high-rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite-geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit-flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
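Gallager's hard-decision bit-flipping rule mentioned above is simple to state: flip the bits that participate in the largest number of unsatisfied parity checks, then repeat. A sketch for an arbitrary binary parity-check matrix H (exercised below on a small Hamming code for brevity, not a finite-geometry LDPC code):

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=50):
    """Gallager hard-decision bit flipping: repeatedly flip the bits
    involved in the most unsatisfied parity checks."""
    x = y.copy()
    for _ in range(max_iter):
        syn = H @ x % 2
        if not syn.any():
            return x, True                # all parity checks satisfied
        # Count the unsatisfied checks touching each bit
        fails = syn @ H
        x = np.where(fails == fails.max(), x ^ 1, x)
    return x, False
```

For the cyclic finite-geometry codes in the paper, H would be built from the incidence vectors of lines in the geometry; any H with the right structure plugs into the same routine.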
NASA Astrophysics Data System (ADS)
Sharma, Gaurav; Friedenberg, David A.; Annetta, Nicholas; Glenn, Bradley; Bockbrader, Marcie; Majstorovic, Connor; Domas, Stephanie; Mysiw, W. Jerry; Rezai, Ali; Bouton, Chad
2016-09-01
Neuroprosthetic technology has been used to restore cortical control of discrete (non-rhythmic) hand movements in a paralyzed person. However, cortical control of rhythmic movements, which originate in the brain but are coordinated by Central Pattern Generator (CPG) neural networks in the spinal cord, has not been demonstrated previously. Here we demonstrate an artificial neural bypass technology that decodes cortical activity and emulates spinal cord CPG function, allowing volitional rhythmic hand movement. The technology uses a combination of signals recorded from the brain, machine-learning algorithms to decode the signals, a numerical model of a CPG network, and a neuromuscular electrical stimulation system to evoke rhythmic movements. Using the neural bypass, a quadriplegic participant was able to initiate, sustain, and switch between rhythmic and discrete finger movements, using his thoughts alone. These results have implications in advancing neuroprosthetic technology to restore complex movements in people living with paralysis.
Frame Synchronization Without Attached Sync Markers
NASA Technical Reports Server (NTRS)
Hamkins, Jon
2011-01-01
We describe a method to synchronize codeword frames without making use of attached synchronization markers (ASMs). Instead, the synchronizer identifies the code structure present in the received symbols by operating the decoder for a handful of iterations at each possible symbol offset and forming an appropriate metric. This method is computationally more complex and does not perform as well as frame synchronizers that utilize an ASM; nevertheless, the new synchronizer acquires frame synchronization in about two seconds when using a 600 kbps software decoder, and would take about 15 milliseconds on prototype hardware. It also eliminates the need for the ASMs, which is an attractive feature for short uplink codes whose coding gain would be diminished by the overhead of ASM bits. The lack of ASMs would also simplify clock distribution for the AR4JA low-density parity-check (LDPC) codes and adds a small amount to the coding gain as well (up to 0.2 dB).
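The idea of scoring each candidate offset by how code-like the resulting window looks can be sketched with a toy block code, using syndrome weight in place of running decoder iterations (the article's actual metric is formed from LDPC decoder iterations; this simplification is ours):

```python
import numpy as np

def find_frame_offset(H, stream, n):
    """Score every candidate offset by the number of unsatisfied parity
    checks of the length-n window starting there; the true codeword
    boundary gives the lowest score (zero for a noiseless stream)."""
    scores = []
    for off in range(len(stream) - n + 1):
        word = stream[off:off + n]
        scores.append(int((H @ word % 2).sum()))
    return int(np.argmin(scores)), scores
```

With noisy symbols the minimum is no longer exactly zero, which is why a soft decoder-derived metric and several iterations are used in practice.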
CBL-CIPK network for calcium signaling in higher plants
NASA Astrophysics Data System (ADS)
Luan, Sheng
Plants sense their environment by signaling mechanisms involving calcium. Calcium signals are encoded by a complex set of parameters and decoded by a large number of proteins, including the more recently discovered CBL-CIPK network. The calcium-binding CBL proteins specifically interact with a family of protein kinases, the CIPKs, and regulate the activity and subcellular localization of these kinases, leading to the modification of kinase substrates. This represents a paradigm shift compared to the calcium signaling mechanisms of yeast and animals. One example of CBL-CIPK signaling pathways is the low-potassium response of Arabidopsis roots. When grown in low-K medium, plants develop a stronger K-uptake capacity, adapting to the low-K condition. Recent studies show that the increased K-uptake is caused by activation of a specific K-channel by the CBL-CIPK network. A working model for this regulatory pathway will be discussed in the context of calcium coding and decoding processes.
A flood map based DOI decoding method for block detector: a GATE simulation study.
Shi, Han; Du, Dong; Su, Zhihong; Peng, Qiyu
2014-01-01
Positron Emission Tomography (PET) systems using detectors with Depth of Interaction (DOI) capabilities can achieve higher spatial resolution and better image quality than those without DOI. To date, however, most DOI methods are not cost-efficient for a whole-body PET system. In this paper, we present a DOI decoding method based on the flood map for low-cost conventional block detectors with a four-PMT readout. Using this method, the DOI information can be extracted directly from the DOI-related crystal-spot deformation in the flood map. GATE simulations are carried out to validate the method, confirming a DOI sorting accuracy of 85.27%. We therefore conclude that this method has the potential to be applied in conventional detectors to achieve a reasonable DOI measurement without dramatically increasing the complexity and cost of an entire PET system.
NASA Astrophysics Data System (ADS)
Makrakis, Dimitrios; Mathiopoulos, P. Takis
A maximum likelihood sequential decoder for the reception of digitally modulated signals with single or multiamplitude constellations transmitted over a multiplicative, nonselective fading channel is derived. It is shown that its structure consists of a combination of envelope, multiple differential, and coherent detectors. The outputs of each of these detectors are jointly processed by means of an algorithm. This algorithm is presented in a recursive form. The derivation of the new receiver is general enough to accommodate uncoded as well as coded (e.g., trellis-coded) schemes. Performance evaluation results for a reduced-complexity trellis-coded QPSK system have demonstrated that the proposed receiver dramatically reduces the error floors caused by fading. At Eb/N0 = 20 dB the new receiver structure results in bit-error-rate reductions of more than three orders of magnitude compared to a conventional Viterbi receiver, while being reasonably simple to implement.
NASA Astrophysics Data System (ADS)
Palma, V.; Carli, M.; Neri, A.
2011-02-01
In this paper a multi-view distributed video coding scheme for mobile applications is presented. Specifically, a new fusion technique between temporal and spatial side information in the Zernike moment domain is proposed. Distributed video coding introduces a flexible architecture that enables the design of very low-complexity video encoders compared to their traditional counterparts. The main goal of our work is to generate at the decoder the side information that optimally blends temporal and inter-view data. Multi-view distributed coding performance strongly depends on the quality of the side information built at the decoder. To improve this quality, a spatial view compensation/prediction in the Zernike moment domain is applied. Spatial and temporal motion activity are fused together to obtain the overall side information. The proposed method has been evaluated by rate-distortion performance for different inter-view and temporal estimation quality conditions.
Kubicki, James D; Halada, Gary P; Jha, Prashant; Phillips, Brian L
2009-01-01
Background Quantum mechanical calculations were performed on a variety of uranium species representing U(VI), U(V), U(IV), U-carbonates, U-phosphates, U-oxalates, U-catecholates, U-phosphodiesters, U-phosphorylated N-acetyl-glucosamine (NAG), and U-2-Keto-3-doxyoctanoate (KDO) with explicit solvation by H2O molecules. These models represent major U species in natural waters and complexes on bacterial surfaces. The model results are compared to observed EXAFS, IR, Raman and NMR spectra. Results Agreement between experiment and theory is acceptable in most cases, and the reasons for discrepancies are discussed. Calculated Gibbs free energies are used to constrain which configurations are most likely to be stable under circumneutral pH conditions. Reduction of U(VI) to U(IV) is examined for the U-carbonate and U-catechol complexes. Conclusion Results on the potential energy differences between U(V)- and U(IV)-carbonate complexes suggest that the cause of slower disproportionation in this system is electrostatic repulsion between UO2 [CO3]35- ions that must approach one another to form U(VI) and U(IV) rather than a change in thermodynamic stability. Calculations on U-catechol species are consistent with the observation that UO22+ can oxidize catechol and form quinone-like species. In addition, outer-sphere complexation is predicted to be the most stable for U-catechol interactions based on calculated energies and comparison to 13C NMR spectra. Outer-sphere complexes (i.e., ion pairs bridged by water molecules) are predicted to be comparable in Gibbs free energy to inner-sphere complexes for a model carboxylic acid. Complexation of uranyl to phosphorus-containing groups in extracellular polymeric substances is predicted to favor phosphonate groups, such as that found in phosphorylated NAG, rather than phosphodiesters, such as those in nucleic acids. PMID:19689800
Radar Imaging of Spheres in 3D using MUSIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chambers, D H; Berryman, J G
2003-01-21
We have shown that multiple spheres can be imaged by linear and planar EM arrays using only one component of polarization. The imaging approach involves calculating the SVD of the scattering response matrix, selecting a subset of singular values that represents noise, and evaluating the MUSIC functional. The noise threshold applied to the spectrum of singular values for optimal performance is typically around 1%. The resulting signal subspace includes more than one singular value per sphere. The presence of reflections from the ground improves height localization, even for a linear array parallel to the ground. However, the interference between direct and reflected energy modulates the field, creating periodic nulls that can obscure targets in typical images. These nulls are largely eliminated by normalizing the MUSIC functional with the broadside beam pattern of the array. The resulting images show excellent localization for 1 and 2 spheres. The performance for the 3-sphere configurations is complicated by shadowing effects and the greater range of the 3rd sphere in case 2. Two of the three spheres are easily located by MUSIC, but the third is difficult to distinguish from other local maxima of the complex imaging functional. Improvement is seen when the linear array is replaced with a planar array, which increases the effective aperture height. Further analysis of the singular values and their relationship to modes of scattering from the spheres, as well as better ways to exploit polarization, should improve performance. Work along these lines is currently being pursued by the authors.
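The SVD/noise-subspace machinery described above is easiest to illustrate in the classic narrowband direction-finding setting (a simplification of the report's imaging functional; the half-wavelength uniform linear array and steering model here are our assumptions):

```python
import numpy as np

def music_spectrum(R, n_src, angles, d=0.5):
    """MUSIC pseudospectrum for an N-element uniform linear array with
    element spacing d (in wavelengths). R: sample covariance of array
    snapshots; n_src: assumed number of sources; angles in radians."""
    N = R.shape[0]
    # Noise subspace = singular vectors beyond the signal subspace
    U, s, _ = np.linalg.svd(R)
    En = U[:, n_src:]
    spec = []
    for th in angles:
        a = np.exp(-2j * np.pi * d * np.arange(N) * np.sin(th))
        # Steering vectors orthogonal to the noise subspace give peaks
        spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spec)
```

In the report's setting the response matrix comes from multistatic scattering data and the "steering vectors" are Green's-function vectors to candidate image points, but the subspace projection is the same.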
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campione, Salvatore; Capolino, Filippo
2016-01-25

In this study, we investigate the effect of array packing and electromagnetic coupling between spheres on wave propagation in a three-dimensional (3D) lattice of microspheres with large permittivity that exhibit strong magnetic polarizability. We report on the complex wavenumber of Bloch waves in the lattice when each sphere is assumed to possess both electric and magnetic dipoles and full electromagnetic coupling is accounted for. While for small material-filling fractions we always determine one dominant mode with low attenuation constant, the same does not happen for large filling fractions when electromagnetic coupling is included. In the latter case we observe two dominant modes with low attenuation constant, dominant in different frequency ranges. The filling-fraction threshold at which two dominant modes appear varies for different metamaterial constituents, as proven by considering spheres made of either titanium dioxide or lead telluride. As further confirmation of our findings, we retrieve the complex propagation constant of the dominant mode(s) via a field-fitting procedure employing two sets of waves (direct and reflected) pertaining to two distinct modes, strengthening the evidence for two distinct dominant modes at increasing filling fractions. However, given that only one mode, with transverse polarization, is dominant and able to propagate inside the lattice at any given frequency, we are able to accurately treat the metamaterial, which is known to exhibit artificial magnetism, as a homogeneous material with effective parameters such as the refractive index. Results clearly show that accounting for both electric and magnetic scattering processes in evaluating all electromagnetic inter-sphere couplings is essential for a proper description of electromagnetic propagation in such lattices.
Gentle, A R; Smith, G B
2014-10-20
Accurate solar and visual transmittances of materials whose surfaces or internal structures are complex are often not easily amenable to standard procedures with laboratory-based spectrophotometers and integrating spheres. Localized "hot spots" of intensity are common in such materials, so data from small samples are unreliable. A novel device and simple protocols have been developed and have undergone validation testing. Simultaneous solar and visible transmittance and reflectance data have been acquired for skylight components and multilayer polycarbonate roof panels. The pyranometer and lux-sensor setups also directly yield "light coolness" in lumens/watt. Sample areas must be large and, although mainly in sheet form, some testing has been done on curved panels. The instrument, its operation, and the simple calculations used are described. Results on a subset of diffuse and partially diffuse materials with no hot spots have been cross-checked using 150 mm integrating spheres with a spectrophotometer and the Air Mass 1.5 spectrum. Indications are that results are as good as or better than with such spheres for transmittance, but reflectance techniques need refinement for some sample types.
NASA Astrophysics Data System (ADS)
Sumlin, Benjamin J.; Heinson, William R.; Chakrabarty, Rajan K.
2018-01-01
The complex refractive index m = n + ik of a particle is an intrinsic property which cannot be directly measured; it must be inferred from its extrinsic properties such as the scattering and absorption cross-sections. Bohren and Huffman called this approach "describing the dragon from its tracks", since the inversion of Lorenz-Mie theory equations is intractable without the use of computers. This article describes PyMieScatt, an open-source module for Python that contains functionality for solving the inverse problem for complex m using extensive optical and physical properties as input, and calculating regions where valid solutions may exist within the error bounds of laboratory measurements. Additionally, the module has comprehensive capabilities for studying homogeneous and coated single spheres, as well as ensembles of homogeneous spheres with user-defined size distributions, making it a complete tool for studying the optical behavior of spherical particles.
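The forward Lorenz-Mie calculation that such inversions repeatedly evaluate can be sketched with the standard Bohren-Huffman recurrences (a standalone illustration of the physics; PyMieScatt's own routines and naming differ):

```python
import numpy as np

def mie_q(m, x):
    """Extinction and scattering efficiencies of a homogeneous sphere with
    complex refractive index m and size parameter x = pi * D / lambda,
    via the Bohren-Huffman a_n, b_n recurrences."""
    nmax = int(x + 4 * x ** (1 / 3) + 2)          # Wiscombe series cutoff
    mx = m * x
    # Logarithmic derivative D_n(mx) by downward recurrence (with headroom)
    nstar = nmax + 15
    D = np.zeros(nstar + 1, dtype=complex)
    for n in range(nstar, 0, -1):
        D[n - 1] = n / mx - 1.0 / (D[n] + n / mx)
    qext = qsca = 0.0
    psi0, psi1 = np.cos(x), np.sin(x)             # psi_{-1}, psi_0
    chi0, chi1 = -np.sin(x), np.cos(x)            # chi_{-1}, chi_0
    for n in range(1, nmax + 1):
        # Riccati-Bessel functions by upward recurrence
        psi = (2 * n - 1) / x * psi1 - psi0
        chi = (2 * n - 1) / x * chi1 - chi0
        xi, xi1 = psi - 1j * chi, psi1 - 1j * chi1
        da = D[n] / m + n / x
        db = D[n] * m + n / x
        a = (da * psi - psi1) / (da * xi - xi1)
        b = (db * psi - psi1) / (db * xi - xi1)
        qext += (2 * n + 1) * (a + b).real
        qsca += (2 * n + 1) * (abs(a) ** 2 + abs(b) ** 2)
        psi0, psi1 = psi1, psi
        chi0, chi1 = chi1, chi
    return 2 / x ** 2 * qext, 2 / x ** 2 * qsca
```

Inverting for m then amounts to searching this forward map for values consistent with measured cross-sections, which is exactly where solution regions and error bounds become important.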
Surface complexation model for multisite adsorption of copper(II) onto kaolinite
NASA Astrophysics Data System (ADS)
Peacock, Caroline L.; Sherman, David M.
2005-08-01
We measured the adsorption of Cu(II) onto kaolinite from pH 3-7 at constant ionic strength. EXAFS spectra show that Cu(II) adsorbs as (CuO4Hn)^(n-6) and binuclear (Cu2O6Hn)^(n-8) inner-sphere complexes on variable-charge ≡AlOH sites and as Cu^2+ on ion-exchangeable ≡X-H+ sites. Sorption isotherms and EXAFS spectra show that surface precipitates have not formed at least up to pH 6.5. Inner-sphere complexes are bound to the kaolinite surface by corner-sharing with two or three edge-sharing Al(O,OH)6 polyhedra. Our interpretation of the EXAFS data is supported by ab initio (density functional theory) geometries of analog clusters simulating Cu complexes on the {110} and {010} crystal edges and at the ditrigonal cavity sites on the {001}. Having identified the bidentate (≡AlOH)2Cu(OH)2^0, tridentate (≡Al3O(OH)2)Cu2(OH)3^0, and ≡X-Cu^2+ surface complexes, the experimental copper(II) adsorption data can be fit to the reactions
Validity of the two-level model for Viterbi decoder gap-cycle performance
NASA Technical Reports Server (NTRS)
Dolinar, S.; Arnold, S.
1990-01-01
A two-level model has previously been proposed for approximating the performance of a Viterbi decoder which encounters data received with periodically varying signal-to-noise ratio. Such cyclically gapped data is obtained from the Very Large Array (VLA), either operating as a stand-alone system or arrayed with Goldstone. This approximate model predicts that the decoder error rate will vary periodically between two discrete levels with the same period as the gap cycle. It further predicts that the length of the gapped portion of the decoder error cycle for a constraint length K decoder will be about K-1 bits shorter than the actual duration of the gap. The two-level model for Viterbi decoder performance with gapped data is subjected to detailed validation tests. Curves showing the cyclical behavior of the decoder error burst statistics are compared with the simple square-wave cycles predicted by the model. The validity of the model depends on a parameter often considered irrelevant in the analysis of Viterbi decoder performance, the overall scaling of the received signal or the decoder's branch-metrics. Three scaling alternatives are examined: optimum branch-metric scaling and constant branch-metric scaling combined with either constant noise-level scaling or constant signal-level scaling. The simulated decoder error cycle curves roughly verify the accuracy of the two-level model for both the case of optimum branch-metric scaling and the case of constant branch-metric scaling combined with constant noise-level scaling. However, the model is not accurate for the case of constant branch-metric scaling combined with constant signal-level scaling.
NASA Astrophysics Data System (ADS)
Shimoda, Kentaro; Nagasaka, Yasuo; Chao, Zenas C.; Fujii, Naotaka
2012-06-01
Brain-machine interface (BMI) technology captures brain signals to enable control of prosthetic or communication devices with the goal of assisting patients who have limited or no ability to perform voluntary movements. Decoding of inherent information in brain signals to interpret the user's intention is one of main approaches for developing BMI technology. Subdural electrocorticography (sECoG)-based decoding provides good accuracy, but surgical complications are one of the major concerns for this approach to be applied in BMIs. In contrast, epidural electrocorticography (eECoG) is less invasive, thus it is theoretically more suitable for long-term implementation, although it is unclear whether eECoG signals carry sufficient information for decoding natural movements. We successfully decoded continuous three-dimensional hand trajectories from eECoG signals in Japanese macaques. A steady quantity of information of continuous hand movements could be acquired from the decoding system for at least several months, and a decoding model could be used for ˜10 days without significant degradation in accuracy or recalibration. The correlation coefficients between observed and predicted trajectories were lower than those for sECoG-based decoding experiments we previously reported, owing to a greater degree of chewing artifacts in eECoG-based decoding than is found in sECoG-based decoding. As one of the safest invasive recording methods available, eECoG provides an acceptable level of performance. With the ease of replacement and upgrades, eECoG systems could become the first-choice interface for real-life BMI applications.
Recent advances in coding theory for near error-free communications
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.
1991-01-01
Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.
Fast transform decoding of nonsystematic Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Truong, T. K.; Cheung, K.-M.; Reed, I. S.; Shiozaki, A.
1989-01-01
A Reed-Solomon (RS) code is considered to be a special case of a redundant residue polynomial (RRP) code, and a fast transform decoding algorithm to correct both errors and erasures is presented. This decoding scheme is an improvement of the decoding algorithm for the RRP code suggested by Shiozaki and Nishida, and can be realized readily on very large scale integration chips.
Polar Coding with CRC-Aided List Decoding
2015-08-01
TECHNICAL REPORT 2087, August 2015. Polar Coding with CRC-Aided List Decoding. David Wasserman. Approved... list decoding. RESULTS: Our simulation results show that polar coding can produce results very similar to the FEC used in the Digital Video... standard. RECOMMENDATIONS: In any application for which the DVB-S2 FEC is considered, polar coding with CRC-aided list decoding with N = 65536
Decoding position, velocity, or goal: does it matter for brain-machine interfaces?
Marathe, A R; Taylor, D M
2011-04-01
Arm end-point position, end-point velocity, and the intended final location or 'goal' of a reach have all been decoded from cortical signals for use in brain-machine interface (BMI) applications. These different aspects of arm movement can be decoded from the brain and used directly to control the position, velocity, or movement goal of a device. However, these decoded parameters can also be remapped to control different aspects of movement, such as using the decoded position of the hand to control the velocity of a device. People easily learn to use the position of a joystick to control the velocity of an object in a videogame. Similarly, in BMI systems, the position, velocity, or goal of a movement could be decoded from the brain and remapped to control some other aspect of device movement. This study evaluates how easily people make transformations between position, velocity, and reach goal in BMI systems. It also evaluates how different amounts of decoding error impact on device control with and without these transformations. Results suggest some remapping options can significantly improve BMI control. This study provides guidance on what remapping options to use when various amounts of decoding error are present.
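The position-to-velocity remapping discussed above is the joystick analogy made literal: the decoded hand position, offset from a neutral center point, is treated as a velocity command and integrated into device position. A hypothetical sketch (function name, gain, and time step are illustrative):

```python
def remap_position_to_velocity(decoded_pos, center, gain, dt=0.05):
    """Joystick-style remapping: decoded position offsets from a neutral
    center become velocity commands, integrated into device position."""
    device = [0.0, 0.0]
    traj = []
    for p in decoded_pos:
        vx = gain * (p[0] - center[0])
        vy = gain * (p[1] - center[1])
        # Integrate the commanded velocity over one decoder time step
        device = [device[0] + vx * dt, device[1] + vy * dt]
        traj.append(tuple(device))
    return traj
```

Holding the decoded position off-center thus produces sustained device motion, which is why this remapping tolerates static decoding offsets better than direct position control.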
Encoder-Decoder Optimization for Brain-Computer Interfaces
Merel, Josh; Pianto, Donald M.; Cunningham, John P.; Paninski, Liam
2015-01-01
Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages. PMID:26029919
Decoding position, velocity, or goal: Does it matter for brain-machine interfaces?
NASA Astrophysics Data System (ADS)
Marathe, A. R.; Taylor, D. M.
2011-04-01
Arm end-point position, end-point velocity, and the intended final location or 'goal' of a reach have all been decoded from cortical signals for use in brain-machine interface (BMI) applications. These different aspects of arm movement can be decoded from the brain and used directly to control the position, velocity, or movement goal of a device. However, these decoded parameters can also be remapped to control different aspects of movement, such as using the decoded position of the hand to control the velocity of a device. People easily learn to use the position of a joystick to control the velocity of an object in a videogame. Similarly, in BMI systems, the position, velocity, or goal of a movement could be decoded from the brain and remapped to control some other aspect of device movement. This study evaluates how easily people make transformations between position, velocity, and reach goal in BMI systems. It also evaluates how different amounts of decoding error impact device control with and without these transformations. Results suggest some remapping options can significantly improve BMI control. This study provides guidance on what remapping options to use when various amounts of decoding error are present.
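The position-to-velocity remapping idea can be sketched in a few lines. Everything here is an illustrative assumption (1-D state, the gain, the time step); it is not the study's task design.

```python
# A decoded parameter (here a 1-D "position") can drive the device position
# directly, or be remapped joystick-style to command the device *velocity*.
DT = 0.05          # control-loop time step (s)
GAIN = 2.0         # velocity gain for the remapped mode

def position_mode(decoded, device_pos):
    """Decoded value sets device position directly."""
    return decoded

def velocity_mode(decoded, device_pos):
    """Decoded value is remapped to a velocity command (joystick-style)."""
    return device_pos + GAIN * decoded * DT

# Holding a constant decoded offset: position mode jumps there once, while
# velocity mode keeps drifting, like holding a joystick off-centre.
pos_p = pos_v = 0.0
for _ in range(20):
    pos_p = position_mode(0.5, pos_p)
    pos_v = velocity_mode(0.5, pos_v)
print(pos_p, pos_v)  # 0.5 vs. 20 * 2.0 * 0.5 * 0.05 = 1.0
```

The same decoded signal produces qualitatively different device behaviour under the two mappings, which is why decoding error interacts differently with each.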
Improved HDRG decoders for qudit and non-Abelian quantum error correction
NASA Astrophysics Data System (ADS)
Hutter, Adrian; Loss, Daniel; Wootton, James R.
2015-03-01
Hard-decision renormalization group (HDRG) decoders are an important class of decoding algorithms for topological quantum error correction. Due to their versatility, they have been used to decode systems with fractal logical operators, color codes, qudit topological codes, and non-Abelian systems. In this work, we develop a method of performing HDRG decoding which combines strengths of existing decoders and further improves upon them. In particular, we increase the minimal number of errors necessary for a logical error in a system of linear size L from Θ(L^{2/3}) to Ω(L^{1-ε}) for any ε > 0. We apply our algorithm to decoding D(Z_d) quantum double models and a non-Abelian anyon model with Fibonacci-like fusion rules, and show that it indeed significantly outperforms previous HDRG decoders. Furthermore, we provide the first study of continuous error correction with imperfect syndrome measurements for the D(Z_d) quantum double models. The parallelized runtime of our algorithm is poly(log L) for the perfect measurement case. In the continuous case with imperfect syndrome measurements, the averaged runtime is O(1) for Abelian systems, while continuous error correction for non-Abelian anyons stays an open problem.
NASA Astrophysics Data System (ADS)
Liu, Leibo; Chen, Yingjie; Yin, Shouyi; Lei, Hao; He, Guanghui; Wei, Shaojun
2014-07-01
A VLSI architecture for entropy decoder, inverse quantiser and predictor is proposed in this article. This architecture is used for decoding video streams of three standards on a single chip, i.e. H.264/AVC, AVS (China National Audio Video coding Standard) and MPEG2. The proposed scheme is called MPMP (Macro-block-Parallel based Multilevel Pipeline), which is intended to improve the decoding performance to satisfy the real-time requirements while maintaining a reasonable area and power consumption. Several techniques, such as slice level pipeline, MB (Macro-Block) level pipeline, MB level parallel, etc., are adopted. Input and output buffers for the inverse quantiser and predictor are shared by the decoding engines for H.264, AVS and MPEG2, therefore effectively reducing the implementation overhead. Simulation shows that decoding process consumes 512, 435 and 438 clock cycles per MB in H.264, AVS and MPEG2, respectively. Owing to the proposed techniques, the video decoder can support H.264 HP (High Profile) 1920 × 1088@30fps (frame per second) streams, AVS JP (Jizhun Profile) 1920 × 1088@41fps streams and MPEG2 MP (Main Profile) 1920 × 1088@39fps streams when exploiting a 200 MHz working frequency.
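The claimed frame rates can be sanity-checked from the reported cycle counts: a 1920×1088 frame contains (1920/16) × (1088/16) = 8160 macro-blocks, so the achievable rate at 200 MHz is clock / (cycles-per-MB × MBs-per-frame).

```python
# Quick arithmetic check of the decoder throughput claims in the abstract.
CLOCK_HZ = 200e6
MBS_PER_FRAME = (1920 // 16) * (1088 // 16)   # 8160 macro-blocks per frame

for std, cycles, claimed_fps in [("H.264", 512, 30), ("AVS", 435, 41), ("MPEG2", 438, 39)]:
    fps = CLOCK_HZ / (cycles * MBS_PER_FRAME)
    print(f"{std}: {fps:.1f} fps achievable (claimed {claimed_fps} fps)")
    assert fps >= claimed_fps   # each claim is consistent with the cycle counts
```

All three claimed rates sit below the cycle-count ceiling, so the numbers in the abstract are internally consistent.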
Welding at the Kennedy Space Center.
NASA Technical Reports Server (NTRS)
Clautice, W. E.
1973-01-01
Brief description of the nature of the mechanical equipment at a space launch complex from a welding viewpoint, including an identification of the major welding applications used in the construction of this complex. The role played by welding in the ground support equipment is noted, including the welded structures and systems required in the vehicle assembly building, the mobile launchers, transporters, mobile service structure, launch pad and launch site, the propellants system, the pneumatics system, and the environmental control system. The welding processes used at the Kennedy Space Center are reviewed, and a particularly detailed account is given of the design and fabrication of the liquid hydrogen and liquid oxygen storage spheres and piping. Finally, the various methods of testing and inspecting the storage spheres are cited.
Motion Direction Biases and Decoding in Human Visual Cortex
Wang, Helena X.; Merriam, Elisha P.; Freeman, Jeremy
2014-01-01
Functional magnetic resonance imaging (fMRI) studies have relied on multivariate analysis methods to decode visual motion direction from measurements of cortical activity. Above-chance decoding has been commonly used to infer the motion-selective response properties of the underlying neural populations. Moreover, patterns of reliable response biases across voxels that underlie decoding have been interpreted to reflect maps of functional architecture. Using fMRI, we identified a direction-selective response bias in human visual cortex that: (1) predicted motion-decoding accuracy; (2) depended on the shape of the stimulus aperture rather than the absolute direction of motion, such that response amplitudes gradually decreased with distance from the stimulus aperture edge corresponding to motion origin; and (3) was present in V1, V2, V3, but not evident in MT+, explaining the higher motion-decoding accuracies reported previously in early visual cortex. These results demonstrate that fMRI-based motion decoding has little or no dependence on the underlying functional organization of motion selectivity. PMID:25209297
Harlaar, Nicole; Kovas, Yulia; Dale, Philip S.; Petrill, Stephen A.; Plomin, Robert
2013-01-01
Although evidence suggests that individual differences in reading and mathematics skills are correlated, this relationship has typically only been studied in relation to word decoding or global measures of reading. It is unclear whether mathematics is differentially related to word decoding and reading comprehension. The current study examined these relationships at both a phenotypic and etiological level in a population-based cohort of 5162 twin pairs at age 12. Multivariate genetic analyses of latent phenotypic factors of mathematics, word decoding and reading comprehension revealed substantial genetic and shared environmental correlations among all three domains. However, the phenotypic and genetic correlations between mathematics and reading comprehension were significantly greater than between mathematics and word decoding. Independent of mathematics, there was also evidence for genetic and nonshared environmental links between word decoding and reading comprehension. These findings indicate that word decoding and reading comprehension have partly distinct relationships with mathematics in the middle school years. PMID:24319294
Jones, Michael N.
2017-01-01
A central goal of cognitive neuroscience is to decode human brain activity—that is, to infer mental processes from observed patterns of whole-brain activation. Previous decoding efforts have focused on classifying brain activity into a small set of discrete cognitive states. To attain maximal utility, a decoding framework must be open-ended, systematic, and context-sensitive—that is, capable of interpreting numerous brain states, presented in arbitrary combinations, in light of prior information. Here we take steps towards this objective by introducing a probabilistic decoding framework based on a novel topic model—Generalized Correspondence Latent Dirichlet Allocation—that learns latent topics from a database of over 11,000 published fMRI studies. The model produces highly interpretable, spatially-circumscribed topics that enable flexible decoding of whole-brain images. Importantly, the Bayesian nature of the model allows one to “seed” decoder priors with arbitrary images and text—enabling researchers, for the first time, to generate quantitative, context-sensitive interpretations of whole-brain patterns of brain activity. PMID:29059185
Soft-output decoding algorithms in iterative decoding of turbo codes
NASA Technical Reports Server (NTRS)
Benedetto, S.; Montorsi, G.; Divsalar, D.; Pollara, F.
1996-01-01
In this article, we present two versions of a simplified maximum a posteriori decoding algorithm. The algorithms work in a sliding window form, like the Viterbi algorithm, and can thus be used to decode continuously transmitted sequences obtained by parallel concatenated codes, without requiring code trellis termination. A heuristic explanation is also given of how to embed the maximum a posteriori algorithms into the iterative decoding of parallel concatenated codes (turbo codes). The performances of the two algorithms are compared on the basis of a powerful rate 1/3 parallel concatenated code. Basic circuits are proposed for implementing the simplified a posteriori decoding algorithm using lookup tables, together with two further approximations (linear and threshold) that eliminate the need for lookup tables at a very small performance penalty.
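The lookup-table and the two further approximations concern the correction term of the Jacobian logarithm used in log-domain MAP decoding. A minimal sketch follows; the table step, size, and linear coefficients are typical values from the literature, not necessarily the paper's.

```python
import math

def max_star_exact(a, b):
    """Jacobian logarithm: ln(e^a + e^b) = max(a,b) + ln(1 + e^-|a-b|)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

# Lookup-table version: quantize |a-b| and precompute the correction term.
TABLE_STEP = 0.125
TABLE = [math.log1p(math.exp(-i * TABLE_STEP)) for i in range(64)]

def max_star_lut(a, b):
    idx = min(int(abs(a - b) / TABLE_STEP), len(TABLE) - 1)
    return max(a, b) + TABLE[idx]

# Linear approximation of the correction term (illustrative coefficients;
# slope * cutoff ~ ln 2 so the fit is exact at |a-b| = 0).
def max_star_linear(a, b, slope=0.25, cutoff=2.77):
    d = abs(a - b)
    return max(a, b) + (slope * (cutoff - d) if d < cutoff else 0.0)

# Threshold ("max-log") approximation: drop the correction entirely.
def max_star_threshold(a, b):
    return max(a, b)

a, b = 1.3, 0.9
print(max_star_exact(a, b), max_star_lut(a, b),
      max_star_linear(a, b), max_star_threshold(a, b))
```

Replacing the exact correction with the table, linear, or threshold form trades a small loss in metric accuracy for progressively cheaper hardware, which is the trade-off the abstract quantifies.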
The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates
NASA Technical Reports Server (NTRS)
Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush
2008-01-01
We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.
A Bridge to Coordination Isomer Selection in Lanthanide(III) DOTA-tetraamide Complexes
Vipond, Jeff; Woods, Mark; Zhao, Piyu; Tircso, Gyula; Ren, Jimin; Bott, Simon G.; Ogrin, Doug; Kiefer, Garry E.; Kovacs, Zoltan; Sherry, A. Dean
2008-01-01
Interest in macrocyclic lanthanide complexes such as DOTA is driven largely by interest in their use as contrast agents for MRI. The lanthanide tetraamide derivatives of DOTA have shown considerable promise as PARACEST agents, taking advantage of the slow water exchange kinetics of this class of complex. We postulated that water exchange in these tetraamide complexes could be slowed even further by introducing a group to sterically encumber the space above the water coordination site, thereby hindering the departure and approach of water molecules to the complex. The ligand 8O2-bridged-DOTAM was synthesized in 34% yield from cyclen. It was found that the lanthanide complexes of this ligand did not possess a water molecule in the inner coordination sphere of the bound lanthanide. The crystal structure of the ytterbium complex revealed that distortions to the coordination sphere were induced by the steric constraints imposed on the complex by the bridging unit. The extent of the distortion was found to increase with increasing ionic radius of the lanthanide ion, eventually resulting in a complete loss of symmetry in the complex. Because this ligand system is bicyclic, the conformation of each ring in the system is constrained by that of the other; as a consequence, inclusion of the bridging unit means that only a twisted square antiprismatic coordination geometry is observed for complexes of 8O2-bridged-DOTAM. PMID:17295475
Numerical and analytical bounds on threshold error rates for hypergraph-product codes
NASA Astrophysics Data System (ADS)
Kovalev, Alexey A.; Prabhakar, Sanjay; Dumer, Ilya; Pryadko, Leonid P.
2018-06-01
We study analytically and numerically decoding properties of finite-rate hypergraph-product quantum low density parity-check codes obtained from random (3,4)-regular Gallager codes, with a simple model of independent X and Z errors. Several nontrivial lower and upper bounds for the decodable region are constructed analytically by analyzing the properties of the homological difference, equal to minus the logarithm of the maximum-likelihood decoding probability for a given syndrome. Numerical results include an upper bound for the decodable region from specific heat calculations in associated Ising models and a minimum-weight decoding threshold of approximately 7%.
A new LDPC decoding scheme for PDM-8QAM BICM coherent optical communication system
NASA Astrophysics Data System (ADS)
Liu, Yi; Zhang, Wen-bo; Xi, Li-xia; Tang, Xian-feng; Zhang, Xiao-guang
2015-11-01
A new log-likelihood ratio (LLR) message estimation method is proposed for the polarization-division multiplexing eight quadrature amplitude modulation (PDM-8QAM) bit-interleaved coded modulation (BICM) optical communication system. The formulation of the posterior probability is theoretically analyzed, and the way to reduce the pre-decoding bit error rate (BER) of the low-density parity-check (LDPC) decoder for PDM-8QAM constellations is presented. Simulation results show that the new scheme outperforms the traditional one: the post-decoding BER is reduced to 50% of that of the traditional decoding algorithm.
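The baseline max-log LLR estimator that such improved schemes are measured against can be sketched as follows. To stay compact the example uses a Gray-mapped QPSK constellation rather than 8QAM; the formula extends unchanged to any labelled constellation, and all parameters here are illustrative.

```python
import numpy as np

# Gray-mapped QPSK points (unit average energy) and their bit labels.
points = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
bits = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])

def max_log_llr(r, sigma2):
    """Max-log LLR per bit: (min |r-s|^2 over bit=1 points
    minus min over bit=0 points) / sigma2; positive favours bit 0."""
    d2 = np.abs(r - points) ** 2
    return [(d2[bits[:, k] == 1].min() - d2[bits[:, k] == 0].min()) / sigma2
            for k in range(bits.shape[1])]

llrs = max_log_llr(0.9 + 0.8j, sigma2=0.5)
print(llrs)   # both positive: bit 0 more likely in each position
```

Computing the exact posterior instead of the max-log shortcut, and tailoring it to the non-square 8QAM constellation, is the kind of refinement the abstract's new estimation method targets.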
Decoding complex flow-field patterns in visual working memory.
Christophel, Thomas B; Haynes, John-Dylan
2014-05-01
There has been a long history of research on visual working memory. Whereas early studies have focused on the role of lateral prefrontal cortex in the storage of sensory information, this has been challenged by research in humans that has directly assessed the encoding of perceptual contents, pointing towards a role of visual and parietal regions during storage. In a previous study we used pattern classification to investigate the storage of complex visual color patterns across delay periods. This revealed coding of such contents in early visual and parietal brain regions. Here we aim to investigate whether the involvement of visual and parietal cortex is also observable for other types of complex, visuo-spatial pattern stimuli. Specifically, we used a combination of fMRI and multivariate classification to investigate the retention of complex flow-field stimuli defined by the spatial patterning of motion trajectories of random dots. Subjects were trained to memorize the precise spatial layout of these stimuli and to retain this information during an extended delay. We used a multivariate decoding approach to identify brain regions where spatial patterns of activity encoded the memorized stimuli. Content-specific memory signals were observable in motion sensitive visual area MT+ and in posterior parietal cortex that might encode spatial information in a modality independent manner. Interestingly, we also found information about the memorized visual stimulus in somatosensory cortex, suggesting a potential crossmodal contribution to memory. Our findings thus indicate that working memory storage of visual percepts might be distributed across unimodal, multimodal and even crossmodal brain regions.
ERIC Educational Resources Information Center
Academy for Educational Development, 2007
2007-01-01
An area of particular concern in adolescent literacy is comprehension of informational text: many students can successfully decode words without actually being able to understand the texts they read. As they progress through school, they have to read increasingly complex texts but receive little if any explicit instruction to help them. Beyond the…
Properties of a certain stochastic dynamical system, channel polarization, and polar codes
NASA Astrophysics Data System (ADS)
Tanaka, Toshiyuki
2010-06-01
A new family of codes, called polar codes, has recently been proposed by Arikan. Polar codes are of theoretical importance because they are provably capacity achieving with low-complexity encoding and decoding. We first discuss basic properties of a certain stochastic dynamical system, on the basis of which properties of channel polarization and polar codes are reviewed, with emphasis on our recent results.
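For the binary erasure channel the stochastic dynamical system in question takes a particularly simple form: one polarization step sends the Bhattacharyya parameter z to z² or to 2z − z², each with probability 1/2. The sketch below (all parameters illustrative) iterates this recursion and checks that the fraction of nearly noiseless subchannels approaches the channel capacity.

```python
import random

random.seed(1)

def polarize(z0, steps, trials):
    """Fraction of subchannels driven to a near-zero Bhattacharyya parameter."""
    good = 0
    for _ in range(trials):
        z = z0
        for _ in range(steps):
            # One polarization step: the "plus" channel improves to z^2,
            # the "minus" channel worsens to 2z - z^2, equally likely.
            z = z * z if random.random() < 0.5 else 2 * z - z * z
        if z < 1e-6:
            good += 1
    return good / trials

# For the BEC the fraction of nearly noiseless channels approaches the
# capacity 1 - z0 as the number of steps grows.
frac = polarize(0.5, steps=20, trials=20000)
print(frac)
```

That the good fraction converges to capacity, with only this doubling recursion as the encoder/decoder workhorse, is the sense in which polar codes are provably capacity achieving at low complexity.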
NASA Technical Reports Server (NTRS)
Cartier, D. E.
1973-01-01
A convolutional coding theory is given for the IME and the Heliocentric spacecraft. The amount of coding gain needed by the mission is determined. Recommendations are given for an encoder/decoder system to provide the gain along with an evaluation of the impact of the system on the space network in terms of costs and complexity.
Complete Decoding and Reporting of Aviation Routine Weather Reports (METARs)
NASA Technical Reports Server (NTRS)
Lui, Man-Cheung Max
2014-01-01
Aviation Routine Weather Report (METAR) provides surface weather information at and around observation stations, including airport terminals. These weather observations are used by pilots for flight planning and by air traffic service providers for managing departure and arrival flights. The METARs are also an important source of weather data for Air Traffic Management (ATM) analysts and researchers at NASA and elsewhere. These researchers use METAR to correlate severe weather events with local or national air traffic actions that restrict air traffic, as one example. A METAR is made up of multiple groups of coded text, each with a specific standard coding format. These groups of coded text are located in two sections of a report: Body and Remarks. The coded text groups in a U.S. METAR are intended to follow the coding standards set by National Oceanic and Atmospheric Administration (NOAA). However, manual data entry and edits made by a human report observer may result in coded text elements that do not follow the standards, especially in the Remarks section. And contrary to the standards, some significant weather observations are noted only in the Remarks section and not in the Body section of the reports. While human readers can infer the intended meaning of non-standard coding of weather conditions, doing so with a computer program is far more challenging. However, such programmatic pre-processing is necessary to enable efficient and faster database query when researchers need to perform any significant historical weather analysis. Therefore, to support such analysis, a computer algorithm was developed to identify groups of coded text anywhere in a report and to perform subsequent decoding in software. The algorithm considers common deviations from the standards and data entry mistakes made by observers. The implemented software was used to decode 12 million reports, and the decoding process was able to completely interpret 99.93% of the reports.
This document presents the deviations from the standards and the decoding algorithm. Storing all decoded data in a database allows users to quickly query a large amount of data and to perform data mining on the data. Users can specify complex query criteria not only on date or airport but also on weather condition. This document also describes the design of a database schema for storing the decoded data, and a Data Warehouse web application that allows users to perform reporting and analysis on the decoded data. Finally, this document presents a case study correlating dust storms reported in METARs from the Phoenix International airport with Ground Stops issued by the Air Traffic Control System Command Center (ATCSCC). Blowing widespread dust is one of the weather conditions reported when a dust storm occurs. By querying the database, 294 METARs were found to report blowing widespread dust at the Phoenix airport, and 41 of them reported such condition only in the Remarks section of the reports. When METAR is a data source for an ATM research, it is important to include weather conditions not only from the Body section but also from the Remarks section of METARs.
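The group-matching idea behind such a decoder can be sketched with a few regular expressions. This is a deliberately minimal illustration: the patterns cover only a handful of standard Body groups and none of the observer deviations or Remarks handling that the full algorithm addresses, and the sample report is invented.

```python
import re

# Illustrative regexes for a few standard METAR groups.
GROUPS = {
    "station":    re.compile(r"^[A-Z]{4}$"),
    "time":       re.compile(r"^(\d{2})(\d{2})(\d{2})Z$"),
    "wind":       re.compile(r"^(\d{3}|VRB)(\d{2,3})(G\d{2,3})?KT$"),
    "visibility": re.compile(r"^(\d{1,2})SM$"),
    "temp_dew":   re.compile(r"^(M?\d{2})/(M?\d{2})$"),
}

def decode_metar(report):
    """Match each whitespace token against the known group patterns."""
    decoded = {}
    for token in report.split():
        for name, pattern in GROUPS.items():
            if name not in decoded and pattern.match(token):
                decoded[name] = token
                break
    return decoded

# "BLDU" (blowing widespread dust) is left unmatched here; a real decoder
# also recognizes present-weather groups like this one.
m = decode_metar("KPHX 121651Z 28012G25KT 10SM BLDU 31/M02 A2992")
print(m)
```

Even this toy version shows why robust decoding is delicate: "BLDU" is four capital letters and would collide with the station pattern if matching order and already-seen groups were not tracked.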
A Systolic VLSI Design of a Pipeline Reed-Solomon Decoder
NASA Technical Reports Server (NTRS)
Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.; Reed, I. S.
1984-01-01
A pipeline structure of a transform decoder similar to a systolic array was developed to decode Reed-Solomon (RS) codes. An important ingredient of this design is a modified Euclidean algorithm for computing the error locator polynomial. The computation of inverse field elements is completely avoided in this modification of Euclid's algorithm. The new decoder is regular and simple, and naturally suitable for VLSI implementation.
A VLSI design of a pipeline Reed-Solomon decoder
NASA Technical Reports Server (NTRS)
Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.; Reed, I. S.
1985-01-01
A pipeline structure of a transform decoder similar to a systolic array was developed to decode Reed-Solomon (RS) codes. An important ingredient of this design is a modified Euclidean algorithm for computing the error locator polynomial. The computation of inverse field elements is completely avoided in this modification of Euclid's algorithm. The new decoder is regular and simple, and naturally suitable for VLSI implementation.
Coding/decoding two-dimensional images with orbital angular momentum of light.
Chu, Jiaqi; Li, Xuefeng; Smithwick, Quinn; Chu, Daping
2016-04-01
We investigate encoding and decoding of two-dimensional information using the orbital angular momentum (OAM) of light. Spiral phase plates and phase-only spatial light modulators are used in encoding and decoding of OAM states, respectively. We show that off-axis points and spatial variables encoded with a given OAM state can be recovered through decoding with the corresponding complementary OAM state.
2016-07-06
The work reported in this paper is a part of on-going studies to clarify how and to what extent soil electromagnetic properties affect the...metallic sphere buried in a non-conducting soil half-space with frequency-dependent complex magnetic susceptibility. The sphere is chosen as a simple...prototype for the small metal parts in low-metal landmines, while soil with dispersive magnetic susceptibility is a good model for some soils that are
Magnetic zero-modes, vortices and Cartan geometry
NASA Astrophysics Data System (ADS)
Ross, Calum; Schroers, Bernd J.
2018-04-01
We exhibit a close relation between vortex configurations on the 2-sphere and magnetic zero-modes of the Dirac operator on R^3 which obey an additional nonlinear equation. We show that both are best understood in terms of the geometry induced on the 3-sphere via pull-back of the round geometry with bundle maps of the Hopf fibration. We use this viewpoint to deduce a manifestly smooth formula for square-integrable magnetic zero-modes in terms of two homogeneous polynomials in two complex variables.
To sort or not to sort: the impact of spike-sorting on neural decoding performance.
Todorova, Sonia; Sadtler, Patrick; Batista, Aaron; Chase, Steven; Ventura, Valérie
2014-10-01
Brain-computer interfaces (BCIs) are a promising technology for restoring motor ability to paralyzed patients. Spiking-based BCIs have successfully been used in clinical trials to control multi-degree-of-freedom robotic devices. Current implementations of these devices require a lengthy spike-sorting step, which is an obstacle to moving this technology from the lab to the clinic. A viable alternative is to avoid spike-sorting, treating all threshold crossings of the voltage waveform on an electrode as coming from one putative neuron. It is not known, however, how much decoding information might be lost by ignoring spike identity. We present a full analysis of the effects of spike-sorting schemes on decoding performance. Specifically, we compare how well two common decoders, the optimal linear estimator and the Kalman filter, reconstruct the arm movements of non-human primates performing reaching tasks, when receiving input from various sorting schemes. The schemes we tested included: using threshold crossings without spike-sorting; expert-sorting discarding the noise; expert-sorting, including the noise as if it were another neuron; and automatic spike-sorting using waveform features. We also decoded from a joint statistical model for the waveforms and tuning curves, which does not involve an explicit spike-sorting step. Discarding the threshold crossings that cannot be assigned to neurons degrades decoding: no spikes should be discarded. Decoding based on spike-sorted units outperforms decoding based on electrode voltage crossings: spike-sorting is useful. The four waveform-based spike-sorting methods tested here yield similar decoding efficiencies: a fast and simple method is competitive. Decoding using the joint waveform and tuning model shows promise but is not consistently superior. Our results indicate that simple automated spike-sorting performs as well as the more computationally or manually intensive methods used here.
Even basic spike-sorting adds value to the low-threshold waveform-crossing methods often employed in BCI decoding.
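Why sorting can help an optimal linear estimator is easy to illustrate with a toy model: two units on one electrode with opposite tuning largely cancel when their counts are pooled. The two-neuron rate model and every number below are invented for illustration and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model: two neurons on one electrode, oppositely tuned to a 1-D
# "arm velocity" x. Pooling their threshold crossings discards the
# identity information that spike-sorting preserves.
T = 4000
x = rng.normal(size=T)
n1 = 5.0 + 4.0 * x + rng.normal(size=T)      # unit 1: positively tuned
n2 = 5.0 - 3.0 * x + rng.normal(size=T)      # unit 2: negatively tuned

def ole_mse(features, target):
    """Optimal linear estimator via least squares; returns decoding MSE."""
    F = np.column_stack([features, np.ones(len(target))])   # bias column
    w, *_ = np.linalg.lstsq(F, target, rcond=None)
    return np.mean((F @ w - target) ** 2)

sorted_mse = ole_mse(np.column_stack([n1, n2]), x)   # spike-sorted units
pooled_mse = ole_mse((n1 + n2)[:, None], x)          # unsorted crossings
print(f"sorted: {sorted_mse:.3f}  pooled: {pooled_mse:.3f}")
```

In this construction the pooled signal retains only the residual tuning 4 − 3 = 1, so the sorted decode is markedly more accurate; the study's empirical point is that real data sit between this worst case and the no-benefit case.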
To sort or not to sort: the impact of spike-sorting on neural decoding performance
NASA Astrophysics Data System (ADS)
Todorova, Sonia; Sadtler, Patrick; Batista, Aaron; Chase, Steven; Ventura, Valérie
2014-10-01
Objective. Brain-computer interfaces (BCIs) are a promising technology for restoring motor ability to paralyzed patients. Spiking-based BCIs have successfully been used in clinical trials to control multi-degree-of-freedom robotic devices. Current implementations of these devices require a lengthy spike-sorting step, which is an obstacle to moving this technology from the lab to the clinic. A viable alternative is to avoid spike-sorting, treating all threshold crossings of the voltage waveform on an electrode as coming from one putative neuron. It is not known, however, how much decoding information might be lost by ignoring spike identity. Approach. We present a full analysis of the effects of spike-sorting schemes on decoding performance. Specifically, we compare how well two common decoders, the optimal linear estimator and the Kalman filter, reconstruct the arm movements of non-human primates performing reaching tasks, when receiving input from various sorting schemes. The schemes we tested included: using threshold crossings without spike-sorting; expert-sorting discarding the noise; expert-sorting, including the noise as if it were another neuron; and automatic spike-sorting using waveform features. We also decoded from a joint statistical model for the waveforms and tuning curves, which does not involve an explicit spike-sorting step. Main results. Discarding the threshold crossings that cannot be assigned to neurons degrades decoding: no spikes should be discarded. Decoding based on spike-sorted units outperforms decoding based on electrode voltage crossings: spike-sorting is useful. The four waveform-based spike-sorting methods tested here yield similar decoding efficiencies: a fast and simple method is competitive. Decoding using the joint waveform and tuning model shows promise but is not consistently superior. Significance.
Our results indicate that simple automated spike-sorting performs as well as the more computationally or manually intensive methods used here. Even basic spike-sorting adds value to the low-threshold waveform-crossing methods often employed in BCI decoding.
Code of Federal Regulations, 2011 CFR
2011-10-01
... time periods expire. (4) Display and logging. A visual message shall be developed from any valid header... input. (8) Decoder Programming. Access to decoder programming shall be protected by a lock or other...
Sayer, Alon Haim; Blum, Eliav; Major, Dan Thomas; Vardi-Kilshtain, Alexandra; Levi Hevroni, Bosmat; Fischer, Bilha
2015-04-28
Although involved in various physiological functions, nucleoside bis-phosphate analogues and their metal-ion complexes have been scarcely studied. Hence, here, we explored the solution conformation of 2′-deoxyadenosine- and 2′-deoxyguanosine-3′,5′-bisphosphates, 3 and 4, d(pNp), as well as their Zn(2+)/Mg(2+) binding sites and binding-modes (i.e. inner- vs. outer-sphere coordination), acidity constants, stability constants of their Zn(2+)/Mg(2+) complexes, and their species distribution. Analogues 3 and 4, in solution, adopted a predominant Southern ribose conformer (ca. 84%), gg conformation around C4'-C5' and C5'-O5' bonds, and glycosidic angle in the anti-region (213-270°). (1)H- and (31)P-NMR experiments indicated that Zn(2+)/Mg(2+) ions coordinated to P5' and P3' groups of 3 and 4 but not to the N7 nitrogen atom. Analogues 3 and 4 formed ca. 100-fold more stable complexes with Zn(2+) vs. Mg(2+) ions. Complexes of 3 and 4 with Mg(2+) at physiological pH were formed in minute amounts (11% and 8%, respectively) vs. Zn(2+) complexes (46% and 44%). Stability constants of Zn(2+)/Mg(2+) complexes of analogues 3 and 4 (log KML(M) = 4.65-4.75/2.63-2.79, respectively) were similar to those of the corresponding complexes of ADP and GDP (log KML(M) = 4.72-5.10/2.95-3.16, respectively). Based on the above findings, we hypothesized that the unexpectedly low log K values of Zn(2+)-d(pNp) as compared to Zn(2+)-NDP complexes are possibly due to formation of outer-sphere coordination in the Zn(2+)-d(pNp) complex vs. inner-sphere in the NDP-Zn(2+) complex, in addition to loss of chelation to the N7 nitrogen atom in Zn(2+)-d(pNp). Indeed, explicit solvent molecular dynamics simulations of 1 and 3 for 100 ns supported this hypothesis.
On the error probability of general tree and trellis codes with applications to sequential decoding
NASA Technical Reports Server (NTRS)
Johannesson, R.
1973-01-01
An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.
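The stack algorithm used in those simulations can be sketched compactly. The code below is an illustrative toy, not the paper's experimental setup: it uses the rate-1/2, memory-2 convolutional code with generators (7,5) octal, a binary symmetric channel with an assumed crossover probability, the standard Fano branch metric, and an invented short message.

```python
import heapq
import math

P = 0.05          # assumed BSC crossover probability
RATE = 0.5
GOOD = math.log2(2 * (1 - P)) - RATE     # Fano metric when a bit matches
BAD = math.log2(2 * P) - RATE            # Fano metric when a bit differs

def encode_branch(state, bit):
    """One trellis branch of the (7,5) code: (new_state, (out1, out2))."""
    s1, s2 = state
    out = (bit ^ s1 ^ s2, bit ^ s2)
    return (bit, s1), out

def stack_decode(received):
    """Stack (ZJ) sequential decoder; received is a list of bit pairs."""
    L = len(received)
    stack = [(0.0, 0, (0, 0), ())]       # heapq is a min-heap: negate metrics
    while True:
        neg_metric, depth, state, bits = heapq.heappop(stack)
        if depth == L:                   # best path reached full length
            return list(bits)
        for bit in (0, 1):               # extend the best path both ways
            new_state, out = encode_branch(state, bit)
            m = sum(GOOD if o == r else BAD
                    for o, r in zip(out, received[depth]))
            heapq.heappush(stack, (neg_metric - m, depth + 1,
                                   new_state, bits + (bit,)))

# Encode a short message (two zero tail bits), flip one channel bit, decode.
msg = [1, 0, 1, 1, 0, 0, 0]
state, coded = (0, 0), []
for b in msg:
    state, out = encode_branch(state, b)
    coded.append(out)
coded[2] = (coded[2][0] ^ 1, coded[2][1])   # one transmission error
print(stack_decode(coded))
```

The number of nodes the heap explores grows when the channel is noisy, which is exactly the computational variability that motivates the error-probability analysis above.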
Nicola, Wilten; Tripp, Bryan; Scott, Matthew
2016-01-01
A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks. PMID:26973503
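The optimization route that these analytic decoders bypass can be sketched compactly: the standard NEF decoder solves a regularized least-squares problem over sampled firing rates. The sketch below is illustrative only; the rectified-linear tuning curves, random gains/intercepts, and the regularization constant are editorial assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 200                          # neurons, evaluation points
x = np.linspace(-1.0, 1.0, T)           # represented scalar variable

# Hypothetical rectified-linear tuning curves with random heterogeneity
gains = rng.uniform(0.5, 2.0, N)
intercepts = rng.uniform(-0.9, 0.9, N)
encoders = rng.choice([-1.0, 1.0], N)
A = np.maximum(0.0, gains[:, None] *
               (encoders[:, None] * x[None, :] - intercepts[:, None]))

# Regularized least-squares decoder for the identity function f(x) = x:
#   d = (A A^T + lam I)^{-1} A x
lam = 0.1 * N
d = np.linalg.solve(A @ A.T + lam * np.eye(N), A @ x)
x_hat = d @ A                           # decoded estimate of x
rmse = np.sqrt(np.mean((x_hat - x) ** 2))
print(rmse)
```

The matrix inversion here is the step the abstract refers to as expensive for large N; the scale-invariant decoders replace it with a closed-form expression.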
Horikawa, Tomoyasu; Kamitani, Yukiyasu
2017-01-01
Dreaming is generally thought to be generated by spontaneous brain activity during sleep with patterns common to waking experience. This view is supported by a recent study demonstrating that dreamed objects can be predicted from brain activity during sleep using statistical decoders trained with stimulus-induced brain activity. However, it remains unclear whether and how visual image features associated with dreamed objects are represented in the brain. In this study, we used a deep neural network (DNN) model for object recognition as a proxy for hierarchical visual feature representation, and DNN features for dreamed objects were analyzed with brain decoding of fMRI data collected during dreaming. The decoders were first trained with stimulus-induced brain activity labeled with the feature values of the stimulus image from multiple DNN layers. The decoders were then used to decode DNN features from the dream fMRI data, and the decoded features were compared with the averaged features of each object category calculated from a large-scale image database. We found that the feature values decoded from the dream fMRI data positively correlated with those associated with dreamed object categories at mid- to high-level DNN layers. Using the decoded features, the dreamed object category could be identified at above-chance levels by matching them to the averaged features for candidate categories. The results suggest that dreaming recruits hierarchical visual feature representations associated with objects, which may support phenomenal aspects of dream experience.
Visual coding with a population of direction-selective neurons.
Fiscella, Michele; Franke, Felix; Farrow, Karl; Müller, Jan; Roska, Botond; da Silveira, Rava Azeredo; Hierlemann, Andreas
2015-10-01
The brain decodes the visual scene from the action potentials of ∼20 retinal ganglion cell types. Among the retinal ganglion cells, direction-selective ganglion cells (DSGCs) encode motion direction. Several studies have focused on the encoding or decoding of motion direction by recording multiunit activity, mainly in the visual cortex. In this study, we simultaneously recorded from all four types of ON-OFF DSGCs of the rabbit retina using a microelectronics-based high-density microelectrode array (HDMEA) and decoded their concerted activity using probabilistic and linear decoders. Furthermore, we investigated how the modification of stimulus parameters (velocity, size, angle of moving object) and the use of different tuning curve fits influenced decoding precision. Finally, we simulated ON-OFF DSGC activity, based on real data, in order to understand how tuning curve widths and the angular distribution of the cells' preferred directions influence decoding performance. We found that probabilistic decoding strategies outperformed, on average, linear methods and that decoding precision was robust to changes in stimulus parameters such as velocity. The removal of noise correlations among cells, by random shuffling trials, caused a drop in decoding precision. Moreover, we found that tuning curves are broad in order to minimize large errors at the expense of a higher average error, and that the retinal direction-selective system would not substantially benefit, on average, from having more than four types of ON-OFF DSGCs or from a perfect alignment of the cells' preferred directions. Copyright © 2015 the American Physiological Society.
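As a toy illustration of probabilistic decoding from four direction-selective cell types, one can maximize the Poisson log-likelihood over candidate directions given von Mises tuning curves. All parameters below (tuning width, peak and baseline rates) are hypothetical, not the rabbit-retina values used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

prefs = np.deg2rad(np.array([0.0, 90.0, 180.0, 270.0]))  # 4 ON-OFF DSGC types
r_max, r_base, kappa = 30.0, 2.0, 1.0    # peak rate, baseline, tuning width

def rates(theta):
    """Von Mises tuning curve of each cell type for direction theta (rad)."""
    return r_base + r_max * np.exp(kappa * (np.cos(theta - prefs) - 1.0))

theta_true = np.deg2rad(90.0)
counts = rng.poisson(rates(theta_true))  # one trial of spike counts

# Maximum-likelihood decoding on a 1-degree grid; Poisson log-likelihood
# up to a constant: sum_i [ n_i log f_i(theta) - f_i(theta) ]
grid = np.deg2rad(np.arange(0.0, 360.0, 1.0))
f = np.stack([rates(t) for t in grid])           # shape (360, 4)
loglik = (counts * np.log(f)).sum(axis=1) - f.sum(axis=1)
theta_hat = grid[np.argmax(loglik)]

err = np.rad2deg(abs(np.angle(np.exp(1j * (theta_hat - theta_true)))))
print(err)
```

A linear (population-vector) decoder would instead sum preferred-direction vectors weighted by counts; the likelihood-based estimate is what the abstract's "probabilistic decoding strategies" generalize.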
Absorption and scattering by fractal aggregates and by their equivalent coated spheres
NASA Astrophysics Data System (ADS)
Kandilian, Razmig; Heng, Ri-Liang; Pilon, Laurent
2015-01-01
This paper demonstrates that the absorption and scattering cross-sections and the asymmetry factor of randomly oriented fractal aggregates of spherical monomers can be rapidly estimated as those of coated spheres with equivalent volume and average projected area. This was established for fractal aggregates with fractal dimension ranging from 2.0 to 3.0 and composed of up to 1000 monodisperse or polydisperse monomers with a wide range of size parameter and relative complex index of refraction. This equivalent coated sphere approximation was able to capture the effects of both multiple scattering and shading among constituent monomers on the integral radiation characteristics of the aggregates. It was shown to be superior to the Rayleigh-Debye-Gans approximation and to the equivalent coated sphere approximation proposed by Latimer. However, the scattering matrix element ratios of equivalent coated spheres featured large angular oscillations caused by internal reflection in the coating which were not observed in those of the corresponding fractal aggregates. Finally, the scattering phase function and the scattering matrix elements of aggregates with large monomer size parameter were found to have unique features that could be used in remote sensing applications.
NASA Astrophysics Data System (ADS)
Wu, Chia-Hua; Lee, Suiang-Shyan; Lin, Ja-Chen
2017-06-01
This all-in-one hiding method creates two transparencies that offer several decoding options: visual decoding with or without a translation flip, and computer decoding. In visual decoding, two less-important (or fake) binary secret images, S1 and S2, can be revealed. S1 is viewed by directly stacking the two transparencies. S2 is viewed by flipping one transparency and translating the other to a specified coordinate before stacking. Finally, important/true secret files can be decrypted by a computer using information extracted from the transparencies. The encoding process that hides this information combines translated-flip visual cryptography, block types, polynomial-style sharing, and a linear congruential generator. A thief who obtained both transparencies, which are stored in distinct places, would still need the key values used in computer decoding to break through after viewing S1 and/or S2 by stacking. Because computer decoding is entirely different from stacking decoding, the thief might simply try other kinds of stacking and eventually give up searching for more secrets. Unlike traditional image hiding that uses images as host media, our method hides fine gray-level images in binary transparencies; thus, our host media are transparencies. Comparisons and analysis are provided.
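The stacking-based part of such schemes builds on classic visual cryptography. Below is a minimal (2,2) visual secret sharing sketch, not the translated-flip scheme of the paper: each secret pixel expands to two subpixels per transparency, and stacking (pixelwise OR) makes black pixels fully dark while white pixels stay half dark.

```python
import numpy as np

rng = np.random.default_rng(3)

secret = np.array([[1, 0, 1],
                   [0, 1, 0]])          # 1 = black, 0 = white
h, w = secret.shape

# Each secret pixel becomes a 1x2 subpixel pair on each transparency.
# White pixel: both shares get the same random pair (stack = half dark).
# Black pixel: complementary pairs (stack = fully dark).
pairs = np.array([[1, 0], [0, 1]])
share1 = np.zeros((h, 2 * w), dtype=int)
share2 = np.zeros((h, 2 * w), dtype=int)
for i in range(h):
    for j in range(w):
        pat = pairs[rng.integers(2)]
        share1[i, 2 * j: 2 * j + 2] = pat
        share2[i, 2 * j: 2 * j + 2] = 1 - pat if secret[i, j] else pat

stacked = share1 | share2               # physically: stacking = pixelwise OR
darkness = stacked.reshape(h, w, 2).sum(axis=2)   # 2 = black pixel, 1 = white
print(darkness)
```

Each share alone is a uniformly random pattern, which is why the computer-decoded true secrets in the paper must be carried by additional structure rather than by either transparency in isolation.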
Multiscale decoding for reliable brain-machine interface performance over time.
Hsieh, Han-Lin; Wong, Yan T.; Pesaran, Bijan; Shanechi, Maryam M.
2017-07-01
Recordings from invasive implants can degrade over time, resulting in a loss of spiking activity for some electrodes. For brain-machine interfaces (BMI), such a signal degradation lowers control performance. Achieving reliable performance over time is critical for BMI clinical viability. One approach to improve BMI longevity is to simultaneously use spikes and other recording modalities such as local field potentials (LFP), which are more robust to signal degradation over time. We have developed a multiscale decoder that can simultaneously model the different statistical profiles of multi-scale spike/LFP activity (discrete spikes vs. continuous LFP). This decoder can also run at multiple time-scales (millisecond for spikes vs. tens of milliseconds for LFP). Here, we validate the multiscale decoder for estimating the movement of 7 major upper-arm joint angles in a non-human primate (NHP) during a 3D reach-to-grasp task. The multiscale decoder uses motor cortical spike/LFP recordings as its input. We show that the multiscale decoder can improve decoding accuracy by adding information from LFP to spikes, while running at the fast millisecond time-scale of the spiking activity. Moreover, this improvement is achieved using relatively few LFP channels, demonstrating the robustness of the approach. These results suggest that using multiscale decoders has the potential to improve the reliability and longevity of BMIs.
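The idea of fusing a fast and a slow recording modality can be caricatured with a scalar Kalman filter that updates on every millisecond step with a "fast" stream and only every tenth step with a "slow" stream. This toy sketch uses linear-Gaussian observations for both streams, unlike the paper's point-process spike model, and all noise parameters are assumed values.

```python
import numpy as np

rng = np.random.default_rng(2)
steps, slow_every = 2000, 10            # 1 ms steps; slow stream every 10 ms

# Latent kinematic state: a slow random walk (toy stand-in for a joint angle)
x = np.cumsum(rng.normal(0.0, 0.05, steps))

y_fast = x + rng.normal(0.0, 1.0, steps)   # noisy fast stream, every step
y_slow = x + rng.normal(0.0, 0.3, steps)   # cleaner slow stream, subsampled

Q, R_fast, R_slow = 0.05 ** 2, 1.0 ** 2, 0.3 ** 2
x_hat, P = 0.0, 1.0
est = np.empty(steps)
for t in range(steps):
    P += Q                                  # predict (random-walk dynamics)
    K = P / (P + R_fast)                    # always fuse the fast stream
    x_hat += K * (y_fast[t] - x_hat)
    P *= 1.0 - K
    if t % slow_every == 0:                 # fuse the slow stream when present
        K = P / (P + R_slow)
        x_hat += K * (y_slow[t] - x_hat)
        P *= 1.0 - K
    est[t] = x_hat

rmse = np.sqrt(np.mean((est - x) ** 2))
print(rmse)
```

The filter runs at the fast time-scale throughout and simply skips the slow-stream update between arrivals, which is the structural point the multiscale decoder exploits.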
Decoding the Semantic Content of Natural Movies from Human Brain Activity
Huth, Alexander G.; Lee, Tyler; Nishimoto, Shinji; Bilenko, Natalia Y.; Vu, An T.; Gallant, Jack L.
2016-01-01
One crucial test for any quantitative model of the brain is to show that the model can be used to accurately decode information from evoked brain activity. Several recent neuroimaging studies have decoded the structure or semantic content of static visual images from human brain activity. Here we present a decoding algorithm that makes it possible to decode detailed information about the object and action categories present in natural movies from human brain activity signals measured by functional MRI. Decoding is accomplished using a hierarchical logistic regression (HLR) model that is based on labels that were manually assigned from the WordNet semantic taxonomy. This model makes it possible to simultaneously decode information about both specific and general categories, while respecting the relationships between them. Our results show that we can decode the presence of many object and action categories from averaged blood-oxygen level-dependent (BOLD) responses with a high degree of accuracy (area under the ROC curve > 0.9). Furthermore, we used this framework to test whether semantic relationships defined in the WordNet taxonomy are represented the same way in the human brain. This analysis showed that hierarchical relationships between general categories and atypical examples, such as organism and plant, did not seem to be reflected in representations measured by BOLD fMRI. PMID:27781035
On the decoding process in ternary error-correcting output codes.
Escalera, Sergio; Pujol, Oriol; Radeva, Petia
2010-01-01
A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-Correcting Output Codes (ECOC) represent a successful framework for dealing with these types of problems. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows a given classifier to ignore some classes. However, there are no proper studies that analyze the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies over a set of UCI Machine Learning Repository data sets and on a real traffic sign categorization problem. The experimental results show that, following the new decoding strategies, the performance of the ECOC design is significantly improved.
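The bias introduced by the zero symbol is easy to reproduce: a Hamming-style distance charges every class one unit per "do not care" position, so classes with many zeros are penalized for dichotomizers they never trained on. A small sketch follows; the coding matrix, the classifier outputs, and the zero-aware normalization are illustrative stand-ins, not the decoding measures proposed in the paper.

```python
import numpy as np

# Ternary coding matrix: rows = classes, entries in {-1, 0, +1};
# 0 means the class was ignored ("do not care") by that dichotomizer.
M = np.array([[+1,  0,  0,  0],
              [-1, +1, +1, +1],
              [ 0, -1, -1, +1]])

y = np.array([+1.0, +0.8, +0.9, +0.7])     # hypothetical dichotomizer outputs
s = np.sign(y)

# Hamming-style decoding: a zero position contributes |0 - (+/-1)| = 1,
# so class 0 pays 3 units just for positions it never trained on.
d_naive = np.sum(np.abs(M - s), axis=1)

# Zero-aware decoding: count disagreement only over trained positions,
# normalized by how many there are.
mask = M != 0
d_aware = np.array([np.abs(M[i, mask[i]] - s[mask[i]]).sum() / mask[i].sum()
                    for i in range(M.shape[0])])
print(d_naive, d_aware)
```

Here the Hamming-style distance prefers class 1 (distance 2 vs. 3), while the zero-aware measure prefers class 0, whose trained positions all agree; the two measures disagree purely because of the zero symbol.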
NASA Technical Reports Server (NTRS)
Layland, J. W.
1974-01-01
An approximate analysis of the effect of a noisy carrier reference on the performance of sequential decoding is presented. The analysis uses previously developed techniques for evaluating noisy reference performance for medium-rate uncoded communications, adapted to sequential decoding for data rates of 8 to 2048 bits/s. In estimating the 10^-4 deletion probability thresholds for Helios, the model agrees with experimental data to within the experimental tolerances. The computational problem involved in sequential decoding, carrier loop effects, the main characteristics of the medium-rate model, modeled decoding performance, and perspectives on future work are discussed.
State-space decoding of primary afferent neuron firing rates
NASA Astrophysics Data System (ADS)
Wagenaar, J. B.; Ventura, V.; Weber, D. J.
2011-02-01
Kinematic state feedback is important for neuroprostheses to generate stable and adaptive movements of an extremity. State information, represented in the firing rates of populations of primary afferent (PA) neurons, can be recorded at the level of the dorsal root ganglia (DRG). Previous work in cats showed the feasibility of using DRG recordings to predict the kinematic state of the hind limb using reverse regression. Although accurate decoding results were attained, reverse regression does not make efficient use of the information embedded in the firing rates of the neural population. In this paper, we present decoding results based on state-space modeling, and show that it is a more principled and more efficient method for decoding the firing rates in an ensemble of PA neurons. In particular, we show that we can extract confounded information from neurons that respond to multiple kinematic parameters, and that including velocity components in the firing rate models significantly increases the accuracy of the decoded trajectory. We show that, on average, state-space decoding is twice as efficient as reverse regression for decoding joint and endpoint kinematics.
Utilizing sensory prediction errors for movement intention decoding: A new methodology
Nakamura, Keigo; Ando, Hideyuki
2018-01-01
We propose a new methodology for decoding movement intentions of humans. This methodology is motivated by the well-documented ability of the brain to predict sensory outcomes of self-generated and imagined actions using so-called forward models. We propose to subliminally stimulate the sensory modality corresponding to a user’s intended movement, and decode a user’s movement intention from his electroencephalography (EEG), by decoding for prediction errors—whether the sensory prediction corresponding to a user’s intended movement matches the subliminal sensory stimulation we induce. We tested our proposal in a binary wheelchair turning task in which users thought of turning their wheelchair either left or right. We stimulated their vestibular system subliminally, toward either the left or the right direction, using a galvanic vestibular stimulator and show that the decoding for prediction errors from the EEG can radically improve movement intention decoding performance. We observed an 87.2% median single-trial decoding accuracy across tested participants, with zero user training, within 96 ms of the stimulation, and with no additional cognitive load on the users because the stimulation was subliminal. PMID:29750195
Corrected Four-Sphere Head Model for EEG Signals.
Næss, Solveig; Chintaluri, Chaitanya; Ness, Torbjørn V; Dale, Anders M; Einevoll, Gaute T; Wójcik, Daniel K
2017-01-01
The EEG signal is generated by electrical brain cell activity, often described in terms of current dipoles. By applying EEG forward models we can compute the contribution from such dipoles to the electrical potential recorded by EEG electrodes. Forward models are key both for generating understanding and intuition about the neural origin of EEG signals as well as inverse modeling, i.e., the estimation of the underlying dipole sources from recorded EEG signals. Different models of varying complexity and biological detail are used in the field. One such analytical model is the four-sphere model which assumes a four-layered spherical head where the layers represent brain tissue, cerebrospinal fluid (CSF), skull, and scalp, respectively. While conceptually clear, the mathematical expression for the electric potentials in the four-sphere model is cumbersome, and we observed that the formulas presented in the literature contain errors. Here, we derive and present the correct analytical formulas with a detailed derivation. A useful application of the analytical four-sphere model is that it can serve as ground truth to test the accuracy of numerical schemes such as the Finite Element Method (FEM). We performed FEM simulations of the four-sphere head model and showed that they were consistent with the corrected analytical formulas. For future reference we provide scripts for computing EEG potentials with the four-sphere model, both by means of the correct analytical formulas and numerical FEM simulations.
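A standard building block for such forward models is the current-dipole potential in an infinite homogeneous conductor, V = p·R / (4πσ|R|³). The sketch below implements only this homogeneous-medium formula, not the four-sphere formulas derived in the paper; the conductivity and geometry are toy values chosen for illustration.

```python
import numpy as np

def dipole_potential(r_e, r_d, p, sigma=0.3):
    """Potential of a current dipole p (A*m) at r_d, seen at electrode r_e,
    in an infinite homogeneous conductor of conductivity sigma (S/m):
    V = p . R / (4 pi sigma |R|^3), with R = r_e - r_d."""
    R = np.asarray(r_e, float) - np.asarray(r_d, float)
    dist = np.linalg.norm(R)
    return float(np.dot(p, R) / (4.0 * np.pi * sigma * dist**3))

p = np.array([0.0, 0.0, 1e-8])                    # 10 nA*m dipole along +z
v_above = dipole_potential([0.0, 0.0, 0.09], [0.0, 0.0, 0.08], p)  # radial
v_side = dipole_potential([0.09, 0.0, 0.0], [0.0, 0.0, 0.08], p)   # lateral
print(v_above, v_side)
```

The layered four-sphere solution corrects this baseline for the conductivity jumps at the CSF, skull, and scalp boundaries; testing a numerical scheme against the homogeneous formula first is a common sanity check before comparing with the full analytical model.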
Decoding the direction of imagined visual motion using 7 T ultra-high field fMRI
Emmerling, Thomas C.; Zimmermann, Jan; Sorger, Bettina; Frost, Martin A.; Goebel, Rainer
2016-01-01
There is a long-standing debate about the neurocognitive implementation of mental imagery. One form of mental imagery is the imagery of visual motion, which is of interest due to its naturalistic and dynamic character. However, so far only the mere occurrence rather than the specific content of motion imagery was shown to be detectable. In the current study, the application of multi-voxel pattern analysis to high-resolution functional data of 12 subjects acquired with ultra-high field 7 T functional magnetic resonance imaging allowed us to show that imagery of visual motion can indeed activate the earliest levels of the visual hierarchy, but the extent thereof varies highly between subjects. Our approach enabled classification not only of complex imagery, but also of its actual contents, in that the direction of imagined motion out of four options was successfully identified in two thirds of the subjects and with accuracies of up to 91.3% in individual subjects. A searchlight analysis confirmed the local origin of decodable information in striate and extra-striate cortex. These high-accuracy findings not only shed new light on a central question in vision science on the constituents of mental imagery, but also show for the first time that the specific sub-categorical content of visual motion imagery is reliably decodable from brain imaging data on a single-subject level. PMID:26481673
ERIC Educational Resources Information Center
Steacy, Laura M.; Elleman, Amy M.; Lovett, Maureen W.; Compton, Donald L.
2016-01-01
In English, gains in decoding skill do not map directly onto increases in word reading. However, beyond the Self-Teaching Hypothesis, little is known about the transfer of decoding skills to word reading. In this study, we offer a new approach to testing specific decoding elements on transfer to word reading. To illustrate, we modeled word-reading…
Comparison of memory thresholds for planar qudit geometries
NASA Astrophysics Data System (ADS)
Marks, Jacob; Jochym-O'Connor, Tomas; Gheorghiu, Vlad
2017-11-01
We introduce and analyze a new type of decoding algorithm called general color clustering, based on renormalization group methods, to be used in qudit color codes. The performance of this decoder is analyzed under a generalized bit-flip error model, and is used to obtain the first memory threshold estimates for qudit 6-6-6 color codes. The proposed decoder is compared with similar decoding schemes for qudit surface codes as well as the current leading qubit decoders for both sets of codes. We find that, as with surface codes, clustering performs sub-optimally for qubit color codes, giving a threshold of 5.6% compared to the 8.0% obtained through surface projection decoding methods. However, the threshold rate increases by up to 112% for large qudit dimensions, plateauing around 11.9%. All the analysis is performed using QTop, new open-source software for simulating and visualizing topological quantum error-correcting codes.
NASA Astrophysics Data System (ADS)
Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong
2014-09-01
In general, context-based adaptive variable length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to unstructured variable length coding tables (VLCTs), consuming a significant number of memory accesses. Heavy memory access causes high power consumption and time delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding that uses a program in place of all the VLCTs. The codeword that would otherwise be read from the VLCTs can be obtained without any table look-up or memory access. The experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm outperforms conventional CAVLC decoding approaches, such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.
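CAVLC's coeff_token tables are irregular, but H.264's other entropy-coding family, Exp-Golomb, illustrates the table-free principle the abstract alludes to: the codeword structure is regular enough to be decoded arithmetically with no table in memory. A minimal sketch follows (the bitstream contents are made up for the example; this decodes ue(v) syntax elements, not CAVLC coefficient tokens).

```python
def read_ue(bits, pos):
    """Decode one unsigned Exp-Golomb codeword starting at bits[pos].
    Structure: z leading zeros, a 1, then z info bits; value = 2^z - 1 + info.
    Purely arithmetic: no lookup table is touched."""
    zeros = 0
    while bits[pos + zeros] == 0:
        zeros += 1
    info = 0
    for k in range(zeros):
        info = (info << 1) | bits[pos + zeros + 1 + k]
    return (1 << zeros) - 1 + info, pos + 2 * zeros + 1

# A made-up bitstream holding the codewords for the values 5, 0 and 1
stream = [0, 0, 1, 1, 0,   1,   0, 1, 0]
v1, p = read_ue(stream, 0)
v2, p = read_ue(stream, p)
v3, p = read_ue(stream, p)
print(v1, v2, v3)
```

Replacing a memory-resident table with a short computation like this is the trade the proposed method makes for the far less regular CAVLC tables.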
Error-correction coding for digital communications
NASA Astrophysics Data System (ADS)
Clark, G. C., Jr.; Cain, J. B.
This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
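Syndrome decoding, one of the techniques the book covers, is compactly illustrated by the (7,4) Hamming code: with the parity-check columns ordered as the binary numbers 1 through 7, the syndrome of a received word directly names the erroneous bit position. A minimal sketch (the codeword and error position are arbitrary examples):

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is the binary
# representation of j (MSB in row 0), so a syndrome names the error position.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def syndrome_decode(r):
    """Correct at most one bit error in a received 7-bit word."""
    s = H @ r % 2
    pos = int(4 * s[0] + 2 * s[1] + s[2])   # read the syndrome as 0..7
    if pos:                                  # nonzero syndrome: flip that bit
        r = r.copy()
        r[pos - 1] ^= 1
    return r

c = np.array([0, 0, 1, 0, 1, 1, 0])         # a valid codeword: H @ c = 0 (mod 2)
r = c.copy()
r[4] ^= 1                                   # single-bit channel error, position 5
decoded = syndrome_decode(r)
print(decoded)
```

The same pattern, compute a syndrome, look up (or here, compute) the implied error, scales to the larger algebraic codes and the syndrome decoding of convolutional codes treated later in the book.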
Trapped in the coordination sphere: Nitrate ion transfer driven by the cerium(III/IV) redox couple
Ellis, Ross J.; Bera, Mrinal K.; Reinhart, Benjamin; ...
2016-11-07
Redox-driven ion transfer between phases underpins many biological and technological processes, including industrial separation of ions. Here we investigate the electrochemical transfer of nitrate anions between oil and water phases, driven by the reduction and oxidation of cerium coordination complexes in oil phases. We find that the coordination environment around the cerium cation has a pronounced impact on the overall redox potential, particularly with regard to the number of coordinated nitrate anions. Our results suggest a new fundamental mechanism for tuning ion transfer between phases: 'trapping' the migrating ion inside the coordination sphere of a redox-active complex. This presents a new route for controlling anion transfer in electrochemically driven separation applications.
Postnikova, G B; Shekhovtsova, E A
2016-12-01
In addition to reversible O2 binding, respiratory proteins of the globin family, hemoglobin (Hb) and myoglobin (Mb), participate in redox reactions with various metal complexes, including biologically significant ones, such as those of copper and iron. HbO2 and MbO2 are present in cells in large amounts and, as redox agents, can contribute to maintaining cell redox state and resisting oxidative stress. Divalent copper complexes with high redox potentials (E0, 200-600 mV) and high stability constants, such as [Cu(phen)2]2+, [Cu(dmphen)2]2+, and CuDTA, oxidize ferrous heme proteins by the simple outer-sphere electron transfer mechanism through overlapping π-orbitals of the heme and the copper complex. Weaker oxidants, such as Cu2+, CuEDTA, CuNTA, CuCit, CuATP, and CuHis (E0 ≤ 100-150 mV), react with HbO2 and MbO2 through preliminary binding to the protein with substitution of the metal ligands with protein groups and subsequent intramolecular electron transfer in the complex (the site-specific outer-sphere electron transfer mechanism). Oxidation of HbO2 and MbO2 by potassium ferricyanide and Fe(3+) complexes with NTA, EDTA, CDTA, ATP, 2,3-DPG, citrate, and pyrophosphate PPi proceeds mainly through the simple outer-sphere electron transfer mechanism via the exposed heme edge. According to Marcus theory, the rate of this reaction correlates with the difference in redox potentials of the reagents and their self-exchange rates. For charged reagents, the reaction may be preceded by their nonspecific binding to the protein due to electrostatic interactions. The reactions of LbO2 with carboxylate Fe complexes, unlike its reactions with ferricyanide, occur via the site-specific outer-sphere electron transfer mechanism, even though the same reagents oxidize the structurally similar MbO2 and cytochrome b5 via the simple outer-sphere electron transfer mechanism.
Of particular biological interest is HbO2 and MbO2 transformation into met-forms in the presence of small amounts of metal ions or complexes (catalysis), which, until recently, had been demonstrated only for copper compounds with intermediate redox potentials. The main contribution to the reaction rate comes from copper binding to the "inner" histidines, His97 (0.66 nm from the heme), which forms a hydrogen bond with the heme propionate COO- group, and the distal His64. The affinity of both histidines for copper is much lower than that of the surface histidine residues, and they are inaccessible for modification with chemical reagents. However, it was found recently that the high-potential Fe(3+) complex potassium ferricyanide (400 mV), at 5 to 20% of the molar protein concentration, can be an efficient catalyst of MbO2 oxidation into metMb. The catalytic process includes binding of the ferrocyanide anion in the region of the His119 residue due to the presence there of a large positive local electrostatic potential and the existence of a "pocket" formed by Lys16, Ala19, Asp20, and Arg118 that is sufficient to accommodate [Fe(CN)6]4-. Fast, proton-assisted reoxidation of the bound ferrocyanide by oxygen (which is required for completion of the catalytic cycle), unlike slow [Fe(CN)6]4- oxidation in solution, is provided by the optimal location of the neighboring protonated His113 and His116, as occurs in an enzyme active site.
Peter, Beate
2018-01-01
In a companion study, adults with dyslexia and adults with a probable history of childhood apraxia of speech showed evidence of difficulty with processing sequential information during nonword repetition, multisyllabic real word repetition, and nonword decoding. Results suggested that some errors arose in visual encoding during nonword reading; at all levels of processing, but especially short-term memory storage/retrieval, during nonword repetition; and in motor planning and programming during complex real word repetition. To further investigate the role of short-term memory, a participant with short-term memory impairment (MI) was recruited. MI was confirmed by poor performance on a sentence repetition task and three nonword repetition tasks, all of which have a high short-term memory load, whereas typical performance was observed on tests of reading, spelling, and static verbal knowledge, all with low short-term memory loads. Experimental results show error-free performance during multisyllabic real word repetition but high counts of sequence errors, especially migrations and assimilations, during nonword repetition, supporting short-term memory as a locus of the sequential processing deficit during nonword repetition. Results are also consistent with the hypothesis that during complex real word repetition, short-term memory is bypassed as the word is recognized and retrieved from long-term memory prior to producing the word.
Construction of type-II QC-LDPC codes with fast encoding based on perfect cyclic difference sets
NASA Astrophysics Data System (ADS)
Li, Ling-xiang; Li, Hai-bing; Li, Ji-bi; Jiang, Hua
2017-09-01
To address the problems that the encoding complexity of quasi-cyclic low-density parity-check (QC-LDPC) codes is high and that the minimum distance is not large enough, which degrades error-correction performance, new irregular type-II QC-LDPC codes based on perfect cyclic difference sets (CDSs) are constructed. The parity check matrices of these type-II QC-LDPC codes consist of zero matrices (weight 0), circulant permutation matrices (CPMs, weight 1), and circulant matrices with weight 2 (W2CMs). Introducing W2CMs into the parity check matrices makes it possible to achieve a larger minimum distance, which improves the error-correction performance of the codes. The Tanner graphs of these codes contain no cycles of length 4, so the codes have excellent decoding convergence characteristics. In addition, because the parity check matrices have a quasi-dual diagonal structure, a fast encoding algorithm can reduce the encoding complexity effectively. Simulation results show that the new type-II QC-LDPC codes achieve better error-correction performance and exhibit no error floor over the additive white Gaussian noise (AWGN) channel with sum-product algorithm (SPA) iterative decoding.
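The block structure described above can be illustrated with a small sketch. This is my own toy construction, not the paper's codes: a type-II parity check matrix is assembled from p x p circulant blocks of weight 0, 1 (CPMs), or 2 (W2CMs), with shifts taken here, purely for illustration, from the perfect cyclic difference set {0, 1, 3} mod 7.

```python
import numpy as np

def cpm(p, shift):
    """p x p circulant permutation matrix: the identity cyclically
    shifted right by `shift` columns (weight-1 circulant)."""
    return np.roll(np.eye(p, dtype=int), shift, axis=1)

def w2cm(p, s1, s2):
    """Weight-2 circulant matrix: superposition of two CPMs with
    distinct shifts, so every row and column has weight 2."""
    assert s1 % p != s2 % p
    return cpm(p, s1) + cpm(p, s2)

p = 7
D = [0, 1, 3]  # perfect cyclic difference set mod 7 (illustrative choice)

# A tiny 2 x 3 array of circulant blocks: one zero block, CPMs, one W2CM.
H = np.block([
    [np.zeros((p, p), dtype=int), cpm(p, D[1]), w2cm(p, D[1], D[2])],
    [cpm(p, D[0]),                cpm(p, D[2]), np.zeros((p, p), dtype=int)],
])
```

Row and column weights of the blocks directly give the variable- and check-node degrees of the Tanner graph.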
Real-time distributed video coding for 1K-pixel visual sensor networks
NASA Astrophysics Data System (ADS)
Hanca, Jan; Deligiannis, Nikos; Munteanu, Adrian
2016-07-01
Many applications in visual sensor networks (VSNs) demand the low-cost wireless transmission of video data. In this context, distributed video coding (DVC) has proven its potential to achieve state-of-the-art compression performance while maintaining low computational complexity of the encoder. Despite their proven capabilities, current DVC solutions overlook hardware constraints, and this renders them unsuitable for practical implementations. This paper introduces a DVC architecture that offers highly efficient wireless communication in real-world VSNs. The design takes into account the severe computational and memory constraints imposed by practical implementations on low-resolution visual sensors. We study performance-complexity trade-offs for feedback-channel removal, propose learning-based techniques for rate allocation, and investigate various simplifications of side information generation yielding real-time decoding. The proposed system is evaluated against H.264/AVC intra, Motion-JPEG, and our previously designed DVC prototype for low-resolution visual sensors. Extensive experimental results on various data show significant improvements in multiple configurations. The proposed encoder achieves real-time performance on a 1k-pixel visual sensor mote. Real-time decoding is performed on a Raspberry Pi single-board computer or a low-end notebook PC. To the best of our knowledge, the proposed codec is the first practical DVC deployment on low-resolution VSNs.
Multiformat decoder for a DSP-based IP set-top box
NASA Astrophysics Data System (ADS)
Pescador, F.; Garrido, M. J.; Sanz, C.; Juárez, E.; Samper, D.; Antoniello, R.
2007-05-01
Internet Protocol set-top boxes (IP STBs) based on single-processor architectures have recently been introduced in the market. In this paper, the implementation of an MPEG-4 SP/ASP video decoder for a multi-format IP STB based on a TMS320DM641 DSP is presented. An initial decoder for the PC platform was fully tested and ported to the DSP. Starting from this code, an optimization process achieved a 90% speedup, enabling real-time MPEG-4 SP/ASP decoding. The MPEG-4 decoder has been integrated into an IP STB and tested in a real environment using DVD movies and TV channels, with excellent results.
NASA Astrophysics Data System (ADS)
Bross, Benjamin; Alvarez-Mesa, Mauricio; George, Valeri; Chi, Chi Ching; Mayer, Tobias; Juurlink, Ben; Schierl, Thomas
2013-09-01
The new High Efficiency Video Coding standard (HEVC) was finalized in January 2013. Compared to its predecessor H.264/MPEG-4 AVC, this new international standard is able to reduce the bitrate by 50% for the same subjective video quality. This paper investigates decoder optimizations that are needed to achieve real-time HEVC software decoding on a mobile processor. It is shown that real-time HEVC decoding up to high-definition video is feasible using the processor's instruction extensions, while decoding 4K ultra-high-definition video in real time requires additional parallel processing. For parallel processing, a picture-level parallel approach was chosen because it is generic and does not require bitstreams with special indication.
Approximate maximum likelihood decoding of block codes
NASA Technical Reports Server (NTRS)
Greenberger, H. J.
1979-01-01
Approximate maximum likelihood decoding algorithms, based upon selecting a small set of candidate code words with the aid of the estimated probability of error of each received symbol, can give performance close to optimum with a reasonable amount of computation. By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed that can decode codes with better performance than those presently in use, yet without requiring an unreasonable amount of computation. The discussion of the details and tradeoffs of presently known efficient optimum and near-optimum decoding algorithms leads naturally to the one which embodies the best features of all of them.
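The candidate-list idea can be made concrete with a toy sketch. This is my own illustration in the style of Chase decoding, not the scheme developed in the paper: for a (7,4) Hamming code, flip subsets of the least reliable hard-decision bits, run an algebraic hard decoder on each test pattern, and keep the candidate codeword with the best soft correlation metric.

```python
import itertools

# Parity check matrix of the (7,4) Hamming code; column j is the
# binary representation of j+1, so the syndrome names the error position.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def syndrome_decode(bits):
    """Bounded-distance decoder: corrects one error."""
    s = [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]
    pos = s[0] + 2 * s[1] + 4 * s[2]
    out = list(bits)
    if pos:
        out[pos - 1] ^= 1
    return out

def chase_decode(r, t=2):
    """Approximate ML decoding of soft values r (bit 0 -> +1, bit 1 -> -1):
    flip subsets of the t least reliable hard bits, hard-decode each test
    pattern, keep the candidate maximizing sum((-1)^c[i] * r[i])."""
    hard = [1 if x < 0 else 0 for x in r]
    weak = sorted(range(len(r)), key=lambda i: abs(r[i]))[:t]
    best, best_metric = None, -float("inf")
    for k in range(t + 1):
        for flips in itertools.combinations(weak, k):
            pattern = list(hard)
            for i in flips:
                pattern[i] ^= 1
            cand = syndrome_decode(pattern)
            metric = sum((1 - 2 * c) * x for c, x in zip(cand, r))
            if metric > best_metric:
                best, best_metric = cand, metric
    return best
```

With two low-reliability errors, hard decoding alone fails but the candidate list still recovers the transmitted codeword.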
NASA Astrophysics Data System (ADS)
Luo, Hong-Wei; Chen, Jie-Jie; Sheng, Guo-Ping; Su, Ji-Hu; Wei, Shi-Qiang; Yu, Han-Qing
2014-11-01
Interactions between metals and activated sludge microorganisms substantially affect the speciation, immobilization, transport, and bioavailability of trace heavy metals in biological wastewater treatment plants. In this study, the interaction of Cu(II), a typical heavy metal, onto activated sludge microorganisms was studied in-depth using a multi-technique approach. The complexing structure of Cu(II) on microbial surface was revealed by X-ray absorption fine structure (XAFS) and electron paramagnetic resonance (EPR) analysis. EPR spectra indicated that Cu(II) was held in inner-sphere surface complexes of octahedral coordination with tetragonal distortion of axial elongation. XAFS analysis further suggested that the surface complexation between Cu(II) and microbial cells was the distorted inner-sphere coordinated octahedra containing four short equatorial bonds and two elongated axial bonds. To further validate the results obtained from the XAFS and EPR analysis, density functional theory calculations were carried out to explore the structural geometry of the Cu complexes. These results are useful to better understand the speciation, immobilization, transport, and bioavailability of metals in biological wastewater treatment plants.
Miniaturization of flight deflection measurement system
NASA Technical Reports Server (NTRS)
Fodale, Robert (Inventor); Hampton, Herbert R. (Inventor)
1990-01-01
A flight deflection measurement system is disclosed, including a hybrid-microchip receiver/decoder. The hybrid microchip decoder is mounted piggyback on the miniaturized receiver and forms an integral unit with it. The flight deflection measurement system employing the miniaturized receiver/decoder can be used in a wind tunnel. In particular, because of its small size, the miniaturized receiver/decoder can be employed in a spin measurement system while retaining already established control surface actuation functions.
Overview of Decoding across the Disciplines
ERIC Educational Resources Information Center
Boman, Jennifer; Currie, Genevieve; MacDonald, Ron; Miller-Young, Janice; Yeo, Michelle; Zettel, Stephanie
2017-01-01
In this chapter we describe the Decoding the Disciplines Faculty Learning Community at Mount Royal University and how Decoding has been used in new and multidisciplinary ways in the various teaching, curriculum, and research projects that are presented in detail in subsequent chapters.
Maximum likelihood decoding analysis of accumulate-repeat-accumulate codes
NASA Technical Reports Server (NTRS)
Abbasfar, A.; Divsalar, D.; Yao, K.
2004-01-01
In this paper, the performance of repeat-accumulate codes with maximum likelihood (ML) decoding is analyzed and compared to random codes by means of very tight bounds. Some simple codes are shown to perform very close to the Shannon limit with maximum likelihood decoding.
NASA Astrophysics Data System (ADS)
Choi, Hoseok; Lee, Jeyeon; Park, Jinsick; Lee, Seho; Ahn, Kyoung-ha; Kim, In Young; Lee, Kyoung-Min; Jang, Dong Pyo
2018-02-01
Objective. In arm movement BCIs (brain-computer interfaces), unimanual control has been much more extensively studied than its bimanual counterpart. However, it is well known that the bimanual brain state differs from the unimanual one. Conventional methodology used in unimanual studies does not take the brain state into consideration and therefore appears insufficient for decoding bimanual movements. In this paper, we propose the use of a two-staged (effector-then-trajectory) decoder, which combines the classification of movement conditions with a hand-trajectory prediction algorithm for unimanual and bimanual movements, for application in real-world BCIs. Approach. Two micro-electrode patches (32 channels) were inserted over the dura mater of the left and right hemispheres of two rhesus monkeys, covering the motor-related cortex, for epidural electrocorticography (ECoG). Six motion sensors (inertial measurement units) were used to record the movement signals. The monkeys performed three types of arm movement task: left unimanual, right unimanual, and bimanual. To decode these movements, we used a two-staged decoder that combines an effector classifier for four states (left unimanual, right unimanual, bimanual movement, and stationary) with a regression-based movement predictor. Main results. Using this approach, we successfully decoded both arm positions with the proposed decoder. The results showed that decoding performance for bimanual movements was improved compared to the conventional method, which does not consider the effector, and that decoding performance was significant and stable over a period of four months. In addition, we demonstrated the feasibility of epidural ECoG signals, which provided an adequate level of decoding accuracy. Significance. These results provide evidence that brain signals differ depending on the movement conditions or effectors.
Thus, the two-staged method could be useful for generalizing BCIs to both unimanual and bimanual operation in human applications and in various neuroprosthetics fields.
Clusterless Decoding of Position From Multiunit Activity Using A Marked Point Process Filter
Deng, Xinyi; Liu, Daniel F.; Kay, Kenneth; Frank, Loren M.; Eden, Uri T.
2016-01-01
Point process filters have been applied successfully to decode neural signals and track neural dynamics. Traditionally, these methods assume that multiunit spiking activity has already been correctly spike-sorted. As a result, these methods are not appropriate for situations where sorting cannot be performed with high precision, such as real-time decoding for brain-computer interfaces. As the unsupervised spike-sorting problem remains unsolved, we took an alternative approach that takes advantage of recent insights about clusterless decoding. Here we present a new point process decoding algorithm that does not require multiunit signals to be sorted into individual units. We use the theory of marked point processes to construct a function that characterizes the relationship between a covariate of interest (in this case, the location of a rat on a track) and features of the spike waveforms. In our example, we use tetrode recordings, and the marks represent a four-dimensional vector of the maximum amplitudes of the spike waveform on each of the four electrodes. In general, the marks may represent any features of the spike waveform. We then use Bayes' rule to estimate spatial location from hippocampal neural activity. We validate our approach with a simulation study and with experimental data recorded in the hippocampus of a rat moving through a linear environment. Our decoding algorithm accurately reconstructs the rat's position from unsorted multiunit spiking activity. We then compare the quality of our decoding algorithm to that of a traditional spike-sorting and decoding algorithm. Our analyses show that the proposed decoding algorithm performs as well as or better than algorithms based on sorted single-unit activity. These results provide a path toward accurate real-time decoding of spiking patterns that could be used to carry out content-specific manipulations of population activity in hippocampus or elsewhere in the brain. PMID:25973549
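A heavily simplified sketch of the clusterless idea follows. This is my own toy, with scalar marks instead of the paper's four-dimensional tetrode amplitudes, and it ignores occupancy and the no-spike term of the full point-process likelihood: the joint intensity over position and mark is estimated with Gaussian kernels around the encoding spikes, and Bayes' rule then scores candidate position bins by the likelihood of the observed marks.

```python
import math

def gauss(d, sigma):
    """Unnormalised Gaussian kernel."""
    return math.exp(-0.5 * (d / sigma) ** 2)

def decode_position(test_marks, enc_pos, enc_marks, bins, sx=5.0, sm=0.5):
    """MAP position estimate from unsorted spikes: for each candidate
    position bin, sum (in log) the kernel-density joint intensity of
    (position, mark) over the encoding spikes, for each observed mark."""
    best_bin, best_lp = bins[0], -float("inf")
    for x in bins:
        lp = 0.0
        for m in test_marks:
            lam = sum(gauss(x - xi, sx) * gauss(m - mi, sm)
                      for xi, mi in zip(enc_pos, enc_marks)) + 1e-300
            lp += math.log(lam)
        if lp > best_lp:
            best_bin, best_lp = x, lp
    return best_bin

# Hypothetical encoding data: spikes with marks near 1 fire at position 0,
# spikes with marks near 3 fire at position 50.
enc_pos = [0.0, 0.0, 0.0, 50.0, 50.0, 50.0]
enc_marks = [1.0, 1.1, 0.9, 3.0, 3.1, 2.9]
```

Observed marks near 3 therefore decode to the 50 cm bin, without any spike sorting.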
Decoding-Accuracy-Based Sequential Dimensionality Reduction of Spatio-Temporal Neural Activities
NASA Astrophysics Data System (ADS)
Funamizu, Akihiro; Kanzaki, Ryohei; Takahashi, Hirokazu
Performance of a brain-machine interface (BMI) critically depends on the selection of input data, because information embedded in the neural activities is highly redundant. In addition, properly selected input data with reduced dimension improve the generalization ability of decoding and decrease computational effort, both of which are significant advantages for clinical applications. In the present paper, we propose a sequential dimensionality reduction (SDR) algorithm that effectively extracts motor/sensory-related spatio-temporal neural activities. The algorithm gradually reduces the input data dimension by dropping neural data spatio-temporally, while preserving decoding accuracy as much as possible. A support vector machine (SVM) was used as the decoder, and tone-induced neural activities in rat auditory cortices were decoded into the test tone frequencies. SDR reduced the input data dimension to a quarter and significantly improved the accuracy of decoding novel data. Moreover, the spatio-temporal neural activity patterns selected by SDR resulted in significantly higher accuracy than high-spike-rate patterns or conventionally used spatial patterns. These results suggest that the proposed algorithm can improve the generalization ability and decrease the computational effort of decoding.
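The greedy spirit of SDR can be sketched as follows. This is a toy stand-in: leave-one-out nearest-centroid accuracy replaces the paper's SVM decoder, and the data are made up. At each step the feature whose removal costs the least accuracy is dropped, and the loop stops when every removal would hurt.

```python
def loo_accuracy(X, y, feats):
    """Leave-one-out nearest-centroid classification accuracy using
    only the feature indices in `feats`."""
    correct = 0
    for i in range(len(X)):
        cents = {}
        for c in set(y):
            rows = [X[j] for j in range(len(X)) if j != i and y[j] == c]
            cents[c] = [sum(r[f] for r in rows) / len(rows) for f in feats]
        pred = min(cents, key=lambda c: sum(
            (X[i][f] - m) ** 2 for f, m in zip(feats, cents[c])))
        correct += (pred == y[i])
    return correct / len(X)

def sdr(X, y):
    """Greedy backward elimination: repeatedly drop the feature whose
    removal least degrades (or most improves) decoding accuracy."""
    feats = list(range(len(X[0])))
    while len(feats) > 1:
        base = loo_accuracy(X, y, feats)
        trials = [(loo_accuracy(X, y, [f for f in feats if f != g]), g)
                  for g in feats]
        best_acc, drop = max(trials)
        if best_acc < base:
            break
        feats = [f for f in feats if f != drop]
    return feats
```

On synthetic data where feature 0 separates the classes and feature 1 is noise, SDR discards the noise dimension and decoding of held-out samples improves.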
Influence of incident angle on the decoding in laser polarization encoding guidance
NASA Astrophysics Data System (ADS)
Zhou, Muchun; Chen, Yanru; Zhao, Qi; Xin, Yu; Wen, Hongyuan
2009-07-01
Dynamic detection of polarization states is very important for laser polarization-coding guidance systems. In this paper, a dynamic polarization decoding and detection system for laser polarization-coding guidance was designed. The detection process for normally incident polarized light is analyzed with Jones matrices, showing that the system can effectively detect changes in polarization. The influence of non-normally incident light on the performance of the polarization decoding and detection system is then studied; the analysis shows that changes in incident angle negatively affect the measurement results, mainly through second-order birefringence and polarization-sensitivity effects generated in the phase-delay plate and the polarizing beam-splitter prism. Combined with the Fresnel formulas, the decoding errors of linearly, elliptically, and circularly polarized light entering the detector at different incident angles are calculated; the results show that the decoding errors increase with the incident angle. The decoding errors depend on the geometric parameters and material refractive indices of the wave plate and the polarizing beam-splitter prism, and can be reduced by using a thin low-order wave plate. Simulation of the detection of polarized light at different incident angles confirmed these conclusions.
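The normal-incidence analysis can be sketched with Jones calculus. This is a generic illustration, not the paper's actual optical layout: a quarter-wave plate with its fast axis at 45° followed by a polarizing beam splitter maps the two circular polarizations onto the two output ports, while horizontal linear input splits evenly.

```python
import math

# Jones matrix of a quarter-wave plate with fast axis at 45 degrees
# (up to an overall phase).
QWP45 = [[0.5 + 0.5j, 0.5 - 0.5j],
         [0.5 - 0.5j, 0.5 + 0.5j]]

def port_intensities(v):
    """Pass a Jones vector through the QWP, then an ideal polarizing
    beam splitter: the transmitted port sees |Ex|^2, the reflected
    port |Ey|^2."""
    ex = QWP45[0][0] * v[0] + QWP45[0][1] * v[1]
    ey = QWP45[1][0] * v[0] + QWP45[1][1] * v[1]
    return abs(ex) ** 2, abs(ey) ** 2

s = 1 / math.sqrt(2)
rcp = (s, s * 1j)    # right circular polarization
lcp = (s, -s * 1j)   # left circular polarization
lin = (1.0, 0.0)     # horizontal linear polarization
```

Comparing the two port intensities thus decodes the polarization state; at oblique incidence the wave-plate retardance and splitter extinction deviate from the ideal values assumed here, which is the error source the paper quantifies.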
Online decoding of object-based attention using real-time fMRI.
Niazi, Adnan M; van den Broek, Philip L C; Klanke, Stefan; Barth, Markus; Poel, Mannes; Desain, Peter; van Gerven, Marcel A J
2014-01-01
Visual attention is used to selectively filter relevant information depending on current task demands and goals. Visual attention is called object-based attention when it is directed to coherent forms or objects in the visual field. This study used real-time functional magnetic resonance imaging for moment-to-moment decoding of attention to spatially overlapping objects belonging to two different object categories. First, a whole-brain classifier was trained on pictures of faces and places. Subjects then saw transparently overlapped pictures of a face and a place and attended to only one of them while ignoring the other. The category of the attended object, face or place, was decoded on a scan-by-scan basis using the previously trained decoder. The decoder performed at 77.6% accuracy, indicating that despite competing bottom-up sensory input, object-based visual attention biased neural patterns towards those of the attended object. Furthermore, a comparison between different classification approaches indicated that the representation of faces and places is distributed rather than focal. This implies that real-time decoding of object-based attention requires a multivariate decoding approach that can detect these distributed patterns of cortical activity. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Building Bridges from the Decoding Interview to Teaching Practice
ERIC Educational Resources Information Center
Pettit, Jennifer; Rathburn, Melanie; Calvert, Victoria; Lexier, Roberta; Underwood, Margot; Gleeson, Judy; Dean, Yasmin
2017-01-01
This chapter describes a multidisciplinary faculty self-study about reciprocity in service-learning. The study began with each coauthor participating in a Decoding interview. We describe how Decoding combined with collaborative self-study had a positive impact on our teaching practice.
An extended Reed Solomon decoder design
NASA Technical Reports Server (NTRS)
Chen, J.; Owsley, P.; Purviance, J.
1991-01-01
It has previously been shown that Reed-Solomon (RS) codes can correct errors beyond the Singleton and Rieger bounds with an arbitrarily small probability of miscorrection. That is, an (n,k) RS code can correct more than (n-k)/2 errors. An implementation of such an RS decoder is presented in this paper. An existing RS decoder, the AHA4010, is utilized in this work. This decoder is especially useful for error patterns consisting of a long burst plus some random errors.
Neural Decoder for Topological Codes
NASA Astrophysics Data System (ADS)
Torlai, Giacomo; Melko, Roger G.
2017-07-01
We present an algorithm for error correction in topological codes that exploits modern machine learning techniques. Our decoder is constructed from a stochastic neural network called a Boltzmann machine, of the type extensively used in deep learning. We provide a general prescription for the training of the network and a decoding strategy that is applicable to a wide variety of stabilizer codes with very little specialization. We demonstrate the neural decoder numerically on the well-known two-dimensional toric code with phase-flip errors.
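The decoding task such a network learns can be illustrated in miniature. This is my own toy stand-in, not the paper's Boltzmann machine: for a 3-qubit repetition code against phase flips, the stabilizer syndrome is computed from neighbouring-qubit parities, and the decoder (here an exact maximum-likelihood lookup, which a trained neural network would approximate) maps each syndrome to its most probable error.

```python
from itertools import product

def syndrome(err):
    """Stabilizer checks of the 3-qubit repetition code: parities of
    neighbouring qubits. `err` holds 0/1 phase-flip indicators."""
    return (err[0] ^ err[1], err[1] ^ err[2])

p = 0.1  # assumed independent flip probability per qubit
table = {}
for err in product((0, 1), repeat=3):
    prob = 1.0
    for e in err:
        prob *= p if e else (1 - p)
    s = syndrome(err)
    # keep the most probable error consistent with each syndrome
    if s not in table or prob > table[s][1]:
        table[s] = (err, prob)

def decode(s):
    """Maximum-likelihood error for syndrome s."""
    return table[s][0]
```

On the toric code the syndrome-to-error map is far too large to tabulate, which is exactly why the paper trains a stochastic neural network to represent it.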
Modeling the binding of fulvic acid by goethite: the speciation of adsorbed FA molecules
NASA Astrophysics Data System (ADS)
Filius, Jeroen D.; Meeussen, Johannes C. L.; Lumsdon, David G.; Hiemstra, Tjisse; van Riemsdijk, Willem H.
2003-04-01
Under natural conditions, the adsorption of ions at the solid-water interface may be strongly influenced by the adsorption of organic matter. In this paper, we describe the adsorption of fulvic acid (FA) by metal(hydr)oxide surfaces with a heterogeneous surface complexation model, the ligand and charge distribution (LCD) model. The model is a self-consistent combination of the nonideal competitive adsorption (NICA) equation and the CD-MUSIC model. The LCD model can describe simultaneously the concentration, pH, and salt dependency of the adsorption with a minimum of only three adjustable parameters. Furthermore, the model predicts the coadsorption of protons accurately for an extended range of conditions. Surface speciation calculations show that almost all hydroxyl groups of the adsorbed FA molecules are involved in outer sphere complexation reactions. The carboxylic groups of the adsorbed FA molecule form inner and outer sphere complexes. Furthermore, part of the carboxylate groups remain noncoordinated and deprotonated.
Hagen, Ferry; Lumbsch, H Thorsten; Arsic Arsenijevic, Valentina; Badali, Hamid; Bertout, Sebastien; Billmyre, R Blake; Bragulat, M Rosa; Cabañes, F Javier; Carbia, Mauricio; Chakrabarti, Arunaloke; Chaturvedi, Sudha; Chaturvedi, Vishnu; Chen, Min; Chowdhary, Anuradha; Colom, Maria-Francisca; Cornely, Oliver A; Crous, Pedro W; Cuétara, Maria S; Diaz, Mara R; Espinel-Ingroff, Ana; Fakhim, Hamed; Falk, Rama; Fang, Wenjie; Herkert, Patricia F; Ferrer Rodríguez, Consuelo; Fraser, James A; Gené, Josepa; Guarro, Josep; Idnurm, Alexander; Illnait-Zaragozi, María-Teresa; Khan, Ziauddin; Khayhan, Kantarawee; Kolecka, Anna; Kurtzman, Cletus P; Lagrou, Katrien; Liao, Wanqing; Linares, Carlos; Meis, Jacques F; Nielsen, Kirsten; Nyazika, Tinashe K; Pan, Weihua; Pekmezovic, Marina; Polacheck, Itzhack; Posteraro, Brunella; de Queiroz Telles, Flavio; Romeo, Orazio; Sánchez, Manuel; Sampaio, Ana; Sanguinetti, Maurizio; Sriburee, Pojana; Sugita, Takashi; Taj-Aldeen, Saad J; Takashima, Masako; Taylor, John W; Theelen, Bart; Tomazin, Rok; Verweij, Paul E; Wahyuningsih, Retno; Wang, Ping; Boekhout, Teun
2017-01-01
Cryptococcosis is a major fungal disease caused by members of the Cryptococcus gattii and Cryptococcus neoformans species complexes. After more than 15 years of molecular genetic and phenotypic studies and much debate, a proposal for a taxonomic revision was made. The two varieties within C. neoformans were raised to species level, and the same was done for five genotypes within C. gattii. In a recent perspective (K. J. Kwon-Chung et al., mSphere 2:e00357-16, 2017, https://doi.org/10.1128/mSphere.00357-16), it was argued that this taxonomic proposal was premature and lacked consensus in the community. Although the authors of the perspective recognized the existence of genetic diversity, they preferred the informal nomenclature "C. neoformans species complex" and "C. gattii species complex." Here we highlight the advantage of recognizing these seven species, as ignoring them will impede the deciphering of further biologically and clinically relevant differences between them, which may in turn delay future clinical advances.
Complex Refractive Index of Ice Fog at a Radio Wavelength of 3 mm
1974-10-01
mRNA 3' of the A site bound codon is located close to protein S3 on the human 80S ribosome.
Molotkov, Maxim V; Graifer, Dmitri M; Popugaeva, Elena A; Bulygin, Konstantin N; Meschaninova, Maria I; Ven'yaminova, Aliya G; Karpova, Galina G
2006-07-01
Ribosomal proteins neighboring the mRNA downstream of the codon bound at the decoding site of human 80S ribosomes were identified using three sets of mRNA analogues that contained a UUU triplet at the 5' terminus and a perfluorophenylazide cross-linker at guanosine, adenosine or uridine residues placed at various locations 3' of this triplet. The positions of modified mRNA nucleotides on the ribosome were governed by tRNA(Phe) cognate to the UUU triplet targeted to the P site. Upon mild UV-irradiation, the mRNA analogues cross-linked preferentially to the 40S subunit, to the proteins and to a lesser extent to the 18S rRNA. Cross-linked nucleotides of 18S rRNA were identified previously. In the present study, it is shown that among the proteins the main target for cross-linking with all the mRNA analogues tested was protein S3 (homologous to prokaryotic S3, S3p); minor cross-linking to protein S2 (S5p) was also detected. Both proteins cross-linked to mRNA analogues in the ternary complexes as well as in the binary complexes (without tRNA). In the ternary complexes protein S15 (S19p) also cross-linked, the yield of the cross-link decreased significantly when the modified nucleotide moved from position +5 to position +12 with respect to the first nucleotide of the P site bound codon. In several ternary complexes minor cross-linking to protein S30 was likewise detected. The results of this study indicate that S3 is a key protein at the mRNA binding site neighboring mRNA downstream of the codon at the decoding site in the human ribosome.
Oya, Hiroyuki; Howard, Matthew A.; Adolphs, Ralph
2008-01-01
Faces are processed by a neural system with distributed anatomical components, but the roles of these components remain unclear. A dominant theory of face perception postulates independent representations of invariant aspects of faces (e.g., identity) in ventral temporal cortex including the fusiform gyrus, and changeable aspects of faces (e.g., emotion) in lateral temporal cortex including the superior temporal sulcus. Here we recorded neuronal activity directly from the cortical surface in 9 neurosurgical subjects undergoing epilepsy monitoring while they viewed static and dynamic facial expressions. Applying novel decoding analyses to the power spectrogram of electrocorticograms (ECoG) from over 100 contacts in ventral and lateral temporal cortex, we found better representation of both invariant and changeable aspects of faces in ventral than lateral temporal cortex. Critical information for discriminating faces from geometric patterns was carried by power modulations between 50 and 150 Hz. For both static and dynamic face stimuli, we obtained higher decoding performance in ventral than lateral temporal cortex. For discriminating fearful from happy expressions, critical information was carried by power modulations between 60 and 150 Hz and below 30 Hz, and was again better decoded in ventral than lateral temporal cortex. Task-relevant attention improved decoding accuracy by more than 10% across a wide frequency range in ventral, but not at all in lateral, temporal cortex. Spatial searchlight decoding showed that decoding performance was highest around the middle fusiform gyrus. Finally, we found that the right hemisphere, in general, showed superior decoding to the left hemisphere. Taken together, our results challenge the dominant model of independent representation of invariant and changeable aspects of faces: information about both face attributes was better decoded from a single region in the middle fusiform gyrus. PMID:19065268
Moraitou, Despina; Papantoniou, Georgia; Gkinopoulos, Theofilos; Nigritinou, Magdalini
2013-09-01
Although the ability to recognize emotions through bodily and facial muscular movements is vital to everyday life, numerous studies have found that older adults are less adept at identifying emotions than younger adults. The message gleaned from research has been one of greater decline in abilities to recognize specific negative emotions than positive ones. At the same time, these results raise methodological issues with regard to different modalities in which emotion decoding is measured. The main aim of the present study is to identify the pattern of age differences in the ability to decode basic emotions from naturalistic visual emotional displays. The sample comprised a total of 208 adults from Greece, aged from 18 to 86 years. Participants were examined using the Emotion Evaluation Test, which is the first part of a broader audiovisual tool, The Awareness of Social Inference Test. The Emotion Evaluation Test was designed to examine a person's ability to identify six emotions and discriminate these from neutral expressions, as portrayed dynamically by professional actors. The findings indicate that decoding of basic emotions occurs along the broad affective dimension of uncertainty, and a basic step in emotion decoding involves recognizing whether information presented is emotional or not. Age was found to negatively affect the ability to decode basic negatively valenced emotions as well as pleasant surprise. Happiness decoding is the only ability that was found well-preserved with advancing age. The main conclusion drawn from the study is that the pattern in which emotion decoding from visual cues is affected by normal ageing depends on the rate of uncertainty, which either is related to decoding difficulties or is inherent to a specific emotion. © 2013 The Authors. Psychogeriatrics © 2013 Japanese Psychogeriatric Society.
Decoding Individual Finger Movements from One Hand Using Human EEG Signals
Gonzalez, Jania; Ding, Lei
2014-01-01
Brain computer interface (BCI) is an assistive technology which decodes neurophysiological signals generated by the human brain and translates them into control signals for external devices, e.g., wheelchairs. One problem challenging noninvasive BCI technologies is the limited number of control dimensions obtainable from decoding movements of, mainly, large body parts, e.g., the upper and lower limbs. It has been reported that complicated dexterous functions, i.e., finger movements, can be decoded from electrocorticography (ECoG) signals, while it remains unclear whether noninvasive electroencephalography (EEG) signals also carry sufficient information to decode the same type of movements. In the present study, phenomena of broadband power increase and low-frequency-band power decrease were observed in EEG when EEG power spectra were decomposed by principal component analysis (PCA). These movement-related spectral structures and their changes caused by finger movements in EEG are consistent with observations in a previous ECoG study, as well as with the results from ECoG data in the present study. An average decoding accuracy of 77.11% over all subjects was obtained in classifying each pair of fingers from one hand, using movement-related spectral changes as features decoded with a support vector machine (SVM) classifier. The average decoding accuracy in three epilepsy patients using ECoG data was 91.28% with similarly obtained features and the same classifier. Both EEG and ECoG decoding accuracies are significantly higher than the empirical guessing level (51.26%) in all subjects (p<0.05). The present study suggests that movement-related spectral changes in EEG are similar to those in ECoG, and demonstrates the feasibility of discriminating finger movements from one hand using EEG. These findings are promising for facilitating the development of BCIs with rich control signals using noninvasive technologies. PMID:24416360
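The PCA decomposition of power spectra mentioned above can be sketched generically, using synthetic data rather than the study's recordings: the leading principal component of trial-by-frequency log-power captures the dominant spectral change, e.g. a broadband shift, and its per-trial scores can then feed a classifier such as an SVM.

```python
import numpy as np

def spectral_pcs(log_power, k=1):
    """PCA via SVD of mean-centred trial-by-frequency log-power.
    Returns per-trial component scores and the spectral loadings."""
    X = log_power - log_power.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T, Vt[:k]

# Synthetic example: two "movement" trials with a uniform broadband
# power increase, two "rest" trials without it.
trials = np.array([[1.0, 1.0, 1.0, 1.0],
                   [1.0, 1.0, 1.0, 1.0],
                   [0.0, 0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0, 0.0]])
scores, loadings = spectral_pcs(trials)
```

The first-component scores separate the two trial types, which is the kind of feature the study's SVM classifier then discriminates.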
Multivariate pattern analysis for MEG: A comparison of dissimilarity measures.
Guggenmos, Matthias; Sterzer, Philipp; Cichy, Radoslaw Martin
2018-06-01
Multivariate pattern analysis (MVPA) methods such as decoding and representational similarity analysis (RSA) are growing rapidly in popularity for the analysis of magnetoencephalography (MEG) data. However, little is known about the relative performance and characteristics of the specific dissimilarity measures used to describe differences between evoked activation patterns. Here we used a multisession MEG data set to qualitatively characterize a range of dissimilarity measures and to quantitatively compare them with respect to decoding accuracy (for decoding) and between-session reliability of representational dissimilarity matrices (for RSA). We tested dissimilarity measures from a range of classifiers (Linear Discriminant Analysis - LDA, Support Vector Machine - SVM, Weighted Robust Distance - WeiRD, Gaussian Naïve Bayes - GNB) and distances (Euclidean distance, Pearson correlation). In addition, we evaluated three key processing choices: 1) preprocessing (noise normalisation, removal of the pattern mean), 2) weighting decoding accuracies by decision values, and 3) computing distances in three different partitioning schemes (non-cross-validated, cross-validated, within-class-corrected). Four main conclusions emerged from our results. First, appropriate multivariate noise normalization substantially improved decoding accuracies and the reliability of dissimilarity measures. Second, LDA, SVM and WeiRD yielded high peak decoding accuracies and nearly identical time courses. Third, while using decoding accuracies for RSA was markedly less reliable than continuous distances, this disadvantage was ameliorated by decision-value-weighting of decoding accuracies. Fourth, the cross-validated Euclidean distance provided unbiased distance estimates and highly replicable representational dissimilarity matrices. 
Overall, we strongly advise the use of multivariate noise normalisation as a general preprocessing step, recommend LDA, SVM and WeiRD as classifiers for decoding and highlight the cross-validated Euclidean distance as a reliable and unbiased default choice for RSA. Copyright © 2018 Elsevier Inc. All rights reserved.
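The cross-validated Euclidean distance recommended above can be sketched in a few lines: patterns are compared across independent data splits, so measurement noise does not inflate the distance estimate. The data here are synthetic.

```python
import numpy as np

def cv_euclidean_sq(a1, b1, a2, b2):
    """Cross-validated squared Euclidean distance between conditions A and B.

    a1, b1: trial-averaged patterns from data split 1
    a2, b2: trial-averaged patterns from data split 2
    Because the two splits carry independent noise, the estimate is unbiased:
    its expected value is 0 when the true patterns are identical, whereas the
    plain (non-cross-validated) distance is always positive under noise.
    """
    return float((a1 - b1) @ (a2 - b2))

rng = np.random.default_rng(1)
true = rng.normal(0, 1, 50)            # identical true pattern for A and B
ds = []
for _ in range(2000):
    # two independent noisy measurements per condition and split
    a1, b1 = true + rng.normal(0, 1, 50), true + rng.normal(0, 1, 50)
    a2, b2 = true + rng.normal(0, 1, 50), true + rng.normal(0, 1, 50)
    ds.append(cv_euclidean_sq(a1, b1, a2, b2))

mean_d = float(np.mean(ds))
print(f"mean cvED^2 for identical patterns: {mean_d:.3f}")  # close to 0
```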
47 CFR 79.103 - Closed caption decoder requirements for apparatus.
Code of Federal Regulations, 2014 CFR
2014-10-01
... RADIO SERVICES ACCESSIBILITY OF VIDEO PROGRAMMING Apparatus § 79.103 Closed caption decoder requirements... video programming transmitted simultaneously with sound, if such apparatus is manufactured in the United... with built-in closed caption decoder circuitry or capability designed to display closed-captioned video...
High-speed architecture for the decoding of trellis-coded modulation
NASA Technical Reports Server (NTRS)
Osborne, William P.
1992-01-01
Since 1971, when the Viterbi Algorithm was introduced as the optimal method of decoding convolutional codes, improvements in circuit technology, especially VLSI, have steadily increased its speed and practicality. Trellis-Coded Modulation (TCM) combines convolutional coding with higher-level modulation (a non-binary source alphabet) to provide forward error correction and spectral efficiency. For binary codes, the current state-of-the-art is a 64-state Viterbi decoder on a single CMOS chip, operating at a data rate of 25 Mbps. Recently, there has been interest in increasing the speed of the Viterbi Algorithm by improving the decoder architecture or by simplifying the algorithm itself. Designs employing new architectural techniques are now in existence; however, these techniques are currently applied to simpler binary codes, not to TCM. The purpose of this report is to discuss TCM architectural considerations in general, and to present the design, at the logic gate level, of a specific TCM decoder which applies these considerations to achieve high-speed decoding.
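A minimal sketch of the Viterbi Algorithm the report builds on, for the textbook rate-1/2, constraint-length-3 convolutional code (generator polynomials 7 and 5 octal). This is a toy software illustration of maximum-likelihood trellis decoding, not the high-speed TCM architecture discussed above.

```python
G1, G2 = 0b111, 0b101                  # generator polynomials (7, 5 octal)

def encode(bits):
    """Rate-1/2 convolutional encoder, constraint length 3, terminated."""
    state, out = 0, []
    for b in bits + [0, 0]:            # two flush bits terminate the trellis
        reg = (b << 2) | state
        out += [bin(reg & G1).count("1") & 1, bin(reg & G2).count("1") & 1]
        state = reg >> 1
    return out

def viterbi(received):
    """Hard-decision Viterbi decoding: minimum-Hamming-distance trellis path."""
    INF = 10**9
    metric = {0: 0, 1: INF, 2: INF, 3: INF}   # path metric per trellis state
    paths = {s: [] for s in metric}
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = {s: INF for s in metric}
        new_paths = {}
        for state in metric:
            if metric[state] >= INF:          # unreachable state
                continue
            for b in (0, 1):                  # hypothesize next input bit
                reg = (b << 2) | state
                exp = [bin(reg & G1).count("1") & 1, bin(reg & G2).count("1") & 1]
                m = metric[state] + (exp[0] != r[0]) + (exp[1] != r[1])
                nxt = reg >> 1
                if m < new_metric[nxt]:       # keep the survivor path
                    new_metric[nxt], new_paths[nxt] = m, paths[state] + [b]
        metric, paths = new_metric, new_paths
    best = min(metric, key=metric.get)
    return paths[best][:-2]            # strip the flush bits

msg = [1, 0, 1, 1, 0, 0, 1, 0]
noisy = encode(msg)
noisy[5] ^= 1                          # inject a single channel error
print("corrected:", viterbi(noisy) == msg)
```

The code's free distance of 5 lets the decoder correct any single bit error in the received sequence, which the injected error demonstrates.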
Modified Dynamic Decode-and-Forward Relaying Protocol for Type II Relay in LTE-Advanced and Beyond
Nam, Sung Sik; Alouini, Mohamed-Slim; Choi, Seyeong
2016-01-01
In this paper, we propose a modified dynamic decode-and-forward (MoDDF) relaying protocol to meet the critical requirements for user equipment (UE) relays in next-generation cellular systems (e.g., LTE-Advanced and beyond). The proposed MoDDF realizes the fast jump-in relaying and the sequential decoding with an application of random codeset to encoding and re-encoding process at the source and the multiple UE relays, respectively. A subframe-by-subframe decoding based on the accumulated (or buffered) messages is employed to achieve energy, information, or mixed combining. Finally, possible early termination of decoding at the end user can lead to the higher spectral efficiency and more energy saving by reducing the frequency of redundant subframe transmission and decoding. These attractive features eliminate the need of directly exchanging control messages between multiple UE relays and the end user, which is an important prerequisite for the practical UE relay deployment. PMID:27898712
Aroudi, Ali; Doclo, Simon
2017-07-01
To decode auditory attention from single-trial EEG recordings in an acoustic scenario with two competing speakers, a least-squares method has been recently proposed. This method however requires the clean speech signals of both the attended and the unattended speaker to be available as reference signals. Since in practice only the binaural signals consisting of a reverberant mixture of both speakers and background noise are available, in this paper we explore the potential of using these (unprocessed) signals as reference signals for decoding auditory attention in different acoustic conditions (anechoic, reverberant, noisy, and reverberant-noisy). In addition, we investigate whether it is possible to use these signals instead of the clean attended speech signal for filter training. The experimental results show that using the unprocessed binaural signals for filter training and for decoding auditory attention is feasible with relatively high decoding performance, although for most acoustic conditions the decoding performance is significantly lower than when using the clean speech signals.
A power-efficient communication system between brain-implantable devices and external computers.
Yao, Ning; Lee, Heung-No; Chang, Cheng-Chun; Sclabassi, Robert J; Sun, Mingui
2007-01-01
In this paper, we propose a power efficient communication system for linking a brain-implantable device to an external system. For battery powered implantable devices, the processor and the transmitter power should be reduced in order to both conserve battery power and reduce the health risks associated with transmission. To accomplish this, a joint source-channel coding/decoding system is devised. Low-density generator matrix (LDGM) codes are used in our system due to their low encoding complexity. The power cost for signal processing within the implantable device is greatly reduced by avoiding explicit source encoding. Raw data which is highly correlated is transmitted. At the receiver, a Markov chain source correlation model is utilized to approximate and capture the correlation of raw data. A turbo iterative receiver algorithm is designed which connects the Markov chain source model to the LDGM decoder in a turbo-iterative way. Simulation results show that the proposed system can save up to 1 to 2.5 dB on transmission power.
Image acquisition system using on sensor compressed sampling technique
NASA Astrophysics Data System (ADS)
Gupta, Pravir Singh; Choi, Gwan Seong
2018-01-01
Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system and a simplified image sensor pixel design to be used in the system so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings as it not only cuts the raw data rate but also reduces transistor count per pixel; decreases pixel size; increases fill factor; simplifies analog-to-digital converter, JPEG encoder, and JPEG decoder design; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23% to 65%.
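The compressed-sensing acquisition described above can be illustrated end to end: a sparse signal is measured with a random matrix using a quarter of the usual number of samples and recovered by orthogonal matching pursuit. The dimensions are illustrative, and the paper's sensor-level pixel hardware is not modeled.

```python
import numpy as np

def omp(Phi, y, k):
    """Recover a k-sparse x from y = Phi @ x by greedy support selection."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                   # ambient dim, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
Phi = rng.normal(0, 1 / np.sqrt(m), (m, n))   # random measurement matrix
y = Phi @ x                            # 4x fewer samples than signal entries

x_rec = omp(Phi, y, k)
err = float(np.linalg.norm(x_rec - x))
print("reconstruction error:", err)
```

In the noiseless, exactly sparse case shown here, recovery from m = n/4 random measurements is essentially exact, which is the data-rate saving the paper exploits at the sensor.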
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rey, D.; Ryan, W.; Ross, M.
A method for more efficiently utilizing the frequency bandwidth allocated for data transmission is presented. Current space and range communication systems use modulation and coding schemes that transmit 0.5 to 1.0 bits per second per Hertz of radio frequency bandwidth. The goal of this LDRD project is to increase the bandwidth utilization by employing advanced digital communications techniques, with little or no increase in the transmit power, which is usually very limited on airborne systems. Teaming with New Mexico State University, an implementation of trellis coded modulation (TCM), a coding and modulation scheme pioneered by Ungerboeck, was developed for this application and simulated on a computer. TCM provides a means for reliably transmitting data while simultaneously increasing bandwidth efficiency. The penalty is increased receiver complexity. In particular, the trellis decoder requires high-speed, application-specific digital signal processing (DSP) chips. A system solution based on the QualComm Viterbi decoder and the Graychip DSP receiver chips is presented.
Sharma, Gaurav; Friedenberg, David A.; Annetta, Nicholas; Glenn, Bradley; Bockbrader, Marcie; Majstorovic, Connor; Domas, Stephanie; Mysiw, W. Jerry; Rezai, Ali; Bouton, Chad
2016-01-01
Neuroprosthetic technology has been used to restore cortical control of discrete (non-rhythmic) hand movements in a paralyzed person. However, cortical control of rhythmic movements, which originate in the brain but are coordinated by Central Pattern Generator (CPG) neural networks in the spinal cord, has not been demonstrated previously. Here we demonstrate an artificial neural bypass technology that decodes cortical activity and emulates spinal cord CPG function, allowing volitional rhythmic hand movement. The technology uses a combination of signals recorded from the brain, machine-learning algorithms to decode the signals, a numerical model of a CPG network, and a neuromuscular electrical stimulation system to evoke rhythmic movements. Using the neural bypass, a quadriplegic participant was able to initiate, sustain, and switch between rhythmic and discrete finger movements, using his thoughts alone. These results have implications for advancing neuroprosthetic technology to restore complex movements in people living with paralysis. PMID:27658585
A real-time inverse quantised transform for multi-standard with dynamic resolution support
NASA Astrophysics Data System (ADS)
Sun, Chi-Chia; Lin, Chun-Ying; Zhang, Ce
2016-06-01
In this paper, a real-time configurable intellectual property (IP) core is presented for the image/video decoding process, compatible with both the MPEG-4 Visual standard and the H.264/AVC standard. The unified inverse quantised transform can perform both the inverse quantised discrete cosine transform and the inverse quantised integer transform using only shift and add operations. Meanwhile, the COordinate Rotation DIgital Computer (CORDIC) iterations and compensation steps are adjustable, trading video compression quality against data throughput. The implementations are embedded in the publicly available XviD codec 1.2.2 for MPEG-4 Visual and in the H.264/AVC reference software JM 16.1, where the experimental results show that the balance between computational complexity and video compression quality is retained. Finally, FPGA synthesis results show that the proposed IP core has low hardware cost and provides real-time performance for Full HD and 4K-2K video decoding.
Voluntary Enhancement of Neural Signatures of Affiliative Emotion Using fMRI Neurofeedback
Moll, Jorge; Weingartner, Julie H.; Bado, Patricia; Basilio, Rodrigo; Sato, João R.; Melo, Bruno R.; Bramati, Ivanei E.; de Oliveira-Souza, Ricardo; Zahn, Roland
2014-01-01
In Ridley Scott’s film “Blade Runner”, empathy-detection devices are employed to measure affiliative emotions. Despite recent neurocomputational advances, it is unknown whether brain signatures of affiliative emotions, such as tenderness/affection, can be decoded and voluntarily modulated. Here, we employed multivariate voxel pattern analysis and real-time fMRI to address this question. We found that participants were able to use visual feedback based on decoded fMRI patterns as a neurofeedback signal to increase brain activation characteristic of tenderness/affection relative to pride, an equally complex control emotion. Such improvement was not observed in a control group performing the same fMRI task without neurofeedback. Furthermore, the neurofeedback-driven enhancement of tenderness/affection-related distributed patterns was associated with local fMRI responses in the septohypothalamic area and frontopolar cortex, regions previously implicated in affiliative emotion. This demonstrates that humans can voluntarily enhance brain signatures of tenderness/affection, unlocking new possibilities for promoting prosocial emotions and countering antisocial behavior. PMID:24847819
Structural basis for 16S ribosomal RNA cleavage by the cytotoxic domain of colicin E3.
Ng, C Leong; Lang, Kathrin; Meenan, Nicola Ag; Sharma, Amit; Kelley, Ann C; Kleanthous, Colin; Ramakrishnan, V
2010-10-01
The toxin colicin E3 targets the 30S subunit of bacterial ribosomes and cleaves a phosphodiester bond in the decoding center. We present the crystal structure of the 70S ribosome in complex with the cytotoxic domain of colicin E3 (E3-rRNase). The structure reveals how the rRNase domain of colicin binds to the A site of the decoding center in the 70S ribosome and cleaves the 16S ribosomal RNA (rRNA) between A1493 and G1494. The cleavage mechanism involves the concerted action of conserved residues Glu62 and His58 of the cytotoxic domain of colicin E3. These residues activate the 16S rRNA for 2' OH-induced hydrolysis. Conformational changes observed for E3-rRNase, 16S rRNA and helix 69 of 23S rRNA suggest that a dynamic binding platform is required for colicin E3 binding and function.
NASA Technical Reports Server (NTRS)
2007-01-01
On April 24, a group traveling with Diamond Tours visited StenniSphere, the visitor center at NASA John C. Stennis Space Center in South Mississippi. The trip marked Diamond Tours' return to StenniSphere since Hurricane Katrina struck the Gulf Coast on Aug. 29, 2005. About 25 business professionals from Georgia enjoyed the day's tour of America's largest rocket engine test complex, along with the many displays and exhibits at the museum. Before Hurricane Katrina, the nationwide company brought more than 1,000 visitors to StenniSphere each month. That contributed to more than 100,000 visitors from around the world touring the space center each year. In past years StenniSphere's visitor relations specialists booked Diamond Tours two or three times a week, averaging 40 to 50 people per visit. SSC was established in the 1960s to test the huge engines for the Saturn V moon rockets. Now 40 years later, the center tests every main engine for the space shuttle. SSC will soon begin testing the rocket engines that will power spacecraft carrying Americans back to the moon and on to Mars. For more information or to book a tour, visit http://www.nasa.gov/centers/stennis/home/index.html and click on the StenniSphere logo; or call 800-237-1821 or 228-688-2370.
Hard decoding algorithm for optimizing thresholds under general Markovian noise
NASA Astrophysics Data System (ADS)
Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond
2017-04-01
Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.
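The core idea of most-likely-error ("hard") decoding tuned to a given noise model can be illustrated classically with the 3-bit repetition code. The paper's treatment of general CPTP channels, twirling, and transversal gates is far richer; the asymmetric channel below is an assumed toy model used only to show why optimizing the decoder to the noise matters.

```python
from itertools import product

def syndrome(err):
    """Parity checks on bit pairs (0,1) and (1,2) of the 3-bit repetition code."""
    return (err[0] ^ err[1], err[1] ^ err[2])

def best_error(s, p):
    """Most likely error pattern consistent with syndrome s, given per-bit
    flip probabilities p (the 'hard decoding' step, optimized to the channel)."""
    def prob(e):
        out = 1.0
        for ei, pi in zip(e, p):
            out *= pi if ei else 1 - pi
        return out
    candidates = [e for e in product((0, 1), repeat=3) if syndrome(e) == s]
    return max(candidates, key=prob)

# Asymmetric noise model (assumed): bit 2 is much noisier than bits 0 and 1.
p = (0.01, 0.01, 0.30)
# Syndrome (0, 1) is consistent with errors {001, 110}; a decoder optimized
# for this channel blames the noisy third bit rather than two rare flips.
print(best_error((0, 1), p))           # -> (0, 0, 1)
```

A decoder optimized for depolarizing (symmetric) noise would weight both candidates by Hamming weight alone; matching the decoder to the actual channel is what lowers failure rates, which is the effect the paper quantifies for full quantum codes.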
ERIC Educational Resources Information Center
Gates, Louis
2018-01-01
The accompanying article introduces highly transparent grapheme-phoneme relationships embodied within a Periodic table of decoding cells, which arguably presents the quintessential transparent decoding elements. The study then folds these cells into one highly transparent but simply stated singularity generalization--this generalization unifies…
Oppositional Decoding as an Act of Resistance.
ERIC Educational Resources Information Center
Steiner, Linda
1988-01-01
Argues that contributors to the "No Comment" feature of "Ms." magazine are engaging in oppositional decoding and speculates on why this is a satisfying group process. Also notes such decoding presents another challenge to the idea that mass media has the same effect on all audiences. (SD)
Adsorption of selenium by amorphous iron oxyhydroxide and manganese dioxide
Balistrieri, L.S.; Chao, T.T.
1990-01-01
This work compares and models the adsorption of selenium and other anions on a neutral to alkaline surface (amorphous iron oxyhydroxide) and an acidic surface (manganese dioxide). Selenium adsorption on these oxides is examined as a function of pH, particle concentration, oxidation state, and competing anion concentration in order to assess how these factors might influence the mobility of selenium in the environment. The data indicate that 1) amorphous iron oxyhydroxide has a greater affinity for selenium than manganese dioxide, 2) selenite [Se(IV)] adsorption increases with decreasing pH and increasing particle concentration and is stronger than selenate [Se(VI)] adsorption on both oxides, and 3) selenate does not adsorb on manganese dioxide. The relative affinities of selenate and selenite for the oxides and the lack of adsorption of selenate on a strongly acidic surface suggest that selenate forms outer-sphere complexes while selenite forms inner-sphere complexes with the surfaces. The data also indicate that the competition sequence of other anions with respect to selenite adsorption at pH 7.0 is phosphate > silicate > molybdate > fluoride > sulfate on amorphous iron oxyhydroxide and molybdate ≈ phosphate > silicate > fluoride > sulfate on manganese dioxide. The adsorption of phosphate, molybdate, and silicate on these oxides as a function of pH indicates that the competition sequences reflect the relative affinities of these anions for the surfaces. The Triple Layer surface complexation model is used to provide a quantitative description of these observations and to assess the importance of surface site heterogeneity in anion adsorption. The modeling results suggest that selenite forms binuclear, inner-sphere complexes with amorphous iron oxyhydroxide and monodentate, inner-sphere complexes with manganese dioxide, and that selenate forms outer-sphere, monodentate complexes with amorphous iron oxyhydroxide.
The heterogeneity of the oxide surface sites is reflected in decreasing equilibrium constants for selenite with increasing adsorption density, and both experimental observations and modeling results suggest that manganese dioxide has fewer sites of higher energy for selenite adsorption than amorphous iron oxyhydroxide. Modeling and interpreting the adsorption of phosphate, molybdate, and silicate on the oxides are made difficult by the lack of constraint in choosing surface species and the fact that equally good fits can be obtained with different surface species. Finally, predictions of anion competition using the model results from single-adsorbate systems are not very successful because the model does not account for surface site heterogeneity. Selenite adsorption data from a multi-adsorbate system could be fit if the equilibrium constant for selenite is decreased with increasing anion adsorption density. © 1990.
Equilibrium location for spherical DNA and toroidal cyclodextrin
NASA Astrophysics Data System (ADS)
Sarapat, Pakhapoom; Baowan, Duangkamon; Hill, James M.
2018-05-01
Cyclodextrin comprises a ring structure composed of glucose molecules with an ability to form complexes of certain substances within its central cavity. The compound can be utilised for various applications including food, textiles, cosmetics, pharmaceutics, and gene delivery. In gene transfer, the possibility of forming complexes depends upon the interaction energy between cyclodextrin and DNA molecules which here are modelled as a torus and a sphere, respectively. Our proposed model is derived using the continuum approximation together with the Lennard-Jones potential, and the total interaction energy is obtained by integrating over both the spherical and toroidal surfaces. The results suggest that the DNA prefers to be symmetrically situated about 1.2 Å above the centre of the cyclodextrin to minimise its energy. Furthermore, an optimal configuration can be determined for any given size of torus and sphere.
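The continuum calculation described above can be sketched numerically: the total Lennard-Jones interaction energy between a spherical surface and a toroidal surface, evaluated by direct quadrature over both surfaces for a given offset of the sphere along the torus axis. The geometry and LJ parameters below are illustrative stand-ins, not the paper's DNA/cyclodextrin values.

```python
import numpy as np

def lj(r, eps=1.0, sigma=1.0):
    """Lennard-Jones potential between surface elements a distance r apart."""
    return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

def torus_points(R=5.0, a=1.0, nu=40, nv=20):
    """Quadrature points and area weights on a torus (ring radius R, tube radius a)."""
    u, v = np.meshgrid(np.linspace(0, 2 * np.pi, nu, endpoint=False),
                       np.linspace(0, 2 * np.pi, nv, endpoint=False))
    pts = np.c_[((R + a * np.cos(v)) * np.cos(u)).ravel(),
                ((R + a * np.cos(v)) * np.sin(u)).ravel(),
                (a * np.sin(v)).ravel()]
    w = (a * (R + a * np.cos(v)) * (2 * np.pi / nu) * (2 * np.pi / nv)).ravel()
    return pts, w

def sphere_points(r=1.5, n=400):
    """Near-uniform points on a sphere (Fibonacci lattice), equal area weights."""
    i = np.arange(n)
    phi = np.pi * (3 - np.sqrt(5)) * i
    z = 1 - 2 * (i + 0.5) / n
    rho = np.sqrt(1 - z ** 2)
    pts = r * np.c_[rho * np.cos(phi), rho * np.sin(phi), z]
    return pts, np.full(n, 4 * np.pi * r ** 2 / n)

def energy(offset):
    """Total surface-surface LJ energy with the sphere centre at (0, 0, offset)."""
    tp, tw = torus_points()
    sp, sw = sphere_points()
    sp = sp + np.array([0.0, 0.0, offset])
    d = np.linalg.norm(tp[:, None, :] - sp[None, :, :], axis=-1)
    return float(np.sum(tw[:, None] * sw[None, :] * lj(d)))

print("E(centre) =", energy(0.0), " E(far) =", energy(100.0))
```

Scanning `energy(offset)` over a range of offsets and taking the minimum gives the equilibrium location; the paper instead evaluates the surface integrals analytically, which is what makes its result exact.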
Throughput Optimization Via Adaptive MIMO Communications
2006-05-30
End-to-end MATLAB packet simulation platform. * Low-density parity-check codes (LDPCC). * Field trials with Silvus DSP MIMO testbed. * High mobility... incorporate advanced LDPC (low-density parity-check) codes. Realizing that the power of LDPC codes comes at the price of decoder complexity, we also... Channel coding: binary convolutional code or LDPC; packet length: 0 to 2^16-1 bytes; coding rate: 1/2, 2/3, 3/4, 5/6; MIMO channel training length: 0 to 4 symbols.
2012-05-01
field-programmable gate array (FPGA) uses digital signal processing (DSP) algorithms to decode echo-location information from the backscattered signal ...characterizing and understanding of the physical properties of the BST and PZT thin films. Using microwave reflection spectroscopy, the complex...acoustic data, , would be encoded in the reflected MW signal by means of phase modulation (PM). By using high-Q resonators as the reactive
Preservation of organic matter in marine sediments by inner-sphere interactions with reactive iron
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barber, Andrew; Brandes, Jay; Leri, Alessandra
Interactions between organic matter and mineral matrices are critical to the preservation of soil and sediment organic matter. In addition to clay minerals, Fe(III) oxide particles have recently been shown to be responsible for the protection and burial of a large fraction of sedimentary organic carbon (OC). Through a combination of synchrotron X-ray techniques and high-resolution images of intact sediment particles, we assessed the mechanism of interaction between OC and iron, as well as the composition of organic matter co-localized with ferric iron. We present scanning transmission X-ray microscopy images at the Fe L3 and C K1 edges showing that the organic matter co-localized with Fe(III) consists primarily of C=C, C=O and C-OH functional groups. Coupling the co-localization results to iron K-edge X-ray absorption spectroscopy fitting results allowed us to quantify the relative contribution of OC-complexed Fe to the total sediment iron and reactive iron pools, showing that 25-62% of total reactive iron is directly associated with OC through inner-sphere complexation in coastal sediments, as much as four times more than in low-OC deep-sea sediments. Direct inner-sphere complexation between OC and iron oxides (Fe-O-C) is responsible for transferring a large quantity of reduced OC to the sedimentary sink, which could otherwise be oxidized back to CO2.
The Adsorption of Cd(II) on Manganese Oxide Investigated by Batch and Modeling Techniques.
Huang, Xiaoming; Chen, Tianhu; Zou, Xuehua; Zhu, Mulan; Chen, Dong; Pan, Min
2017-09-28
Manganese (Mn) oxide is a ubiquitous metal oxide in environmental systems. The adsorption of Cd(II) on Mn oxide as a function of adsorption time, pH, ionic strength, temperature, and initial Cd(II) concentration was investigated by batch techniques. The adsorption kinetics showed that the adsorption of Cd(II) on Mn oxide can be satisfactorily simulated by a pseudo-second-order kinetic model with high correlation coefficients (R² > 0.999). The adsorption of Cd(II) on Mn oxide significantly decreased with increasing ionic strength at pH < 5.0, whereas Cd(II) adsorption was independent of ionic strength at pH > 6.0, which indicated that outer-sphere and inner-sphere surface complexation dominated the adsorption of Cd(II) on Mn oxide at pH < 5.0 and pH > 6.0, respectively. The maximum adsorption capacity of Mn oxide for Cd(II) calculated from the Langmuir model was 104.17 mg/g at pH 6.0 and 298 K. The thermodynamic parameters showed that the adsorption of Cd(II) on Mn oxide was an endothermic and spontaneous process. According to the results of surface complexation modeling, the adsorption of Cd(II) on Mn oxide can be satisfactorily simulated by ion exchange sites (X₂Cd) at low pH and inner-sphere surface complexation sites (SOCd⁺ and (SO)₂CdOH⁻ species) at high pH. These findings play an important role in understanding the fate and transport of heavy metals at the water-mineral interface.
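The two models invoked in the abstract above reduce to one-line functions: the pseudo-second-order kinetic law and the Langmuir isotherm. The maximum capacity q_max is taken from the abstract (104.17 mg/g at pH 6.0 and 298 K); the rate constant and affinity constant below are assumed values for illustration only.

```python
def pseudo_second_order(t, qe, k):
    """Adsorbed amount q(t) under pseudo-second-order kinetics.
    qe = equilibrium capacity (mg/g), k = rate constant (g/(mg*min))."""
    return k * qe**2 * t / (1 + k * qe * t)

def langmuir(C, qmax, K):
    """Equilibrium uptake q(C) for monolayer (Langmuir) adsorption.
    qmax = maximum capacity (mg/g), K = affinity constant (L/mg)."""
    return qmax * K * C / (1 + K * C)

qmax = 104.17          # mg/g, from the study
K = 0.05               # L/mg, assumed for illustration
# Half of the maximum capacity is reached exactly when K*C = 1:
print(langmuir(1 / K, qmax, K))        # -> 52.085
```

Fitting these two functions to measured q(t) and q(C) data is what yields the reported R² > 0.999 and the 104.17 mg/g capacity.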
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Zhizhang; Ilton, Eugene S.; Prange, Micah P.
Classical molecular dynamics (MD) simulations were used to study the interactions of up to 2 M NaCl and NaNO3 aqueous solutions with the presumed inert boehmite (010) and gibbsite (001) surfaces. The force field parameters used in these simulations were validated against density functional theory calculations of Na+ and Cl- hydrated complexes adsorbed at the boehmite (010) surface. In all the classical MD simulations, and regardless of the ionic strength or the nature of the anion, Na+ ions were found to preferentially form inner-sphere complexes over outer-sphere complexes at the aluminum (oxy)hydroxide surfaces, adsorbing closer to the surface than both water molecules and anions. In contrast, Cl- ions were distributed almost equally between inner- and outer-sphere positions. The resulting asymmetry in adsorption strengths offers molecular-scale evidence for the observed isoelectric point (IEP) shift to higher pH at high ionic strength for aluminum (oxy)hydroxides. As such, the MD simulations also provided clear evidence against the assumption that the basal surfaces of boehmite and gibbsite are inert to background electrolytes. Finally, the MD simulations indicated that, although the adsorption behavior of Na+ in NaNO3 and NaCl solutions was similar, the different affinities of NO3- and Cl- for the aluminum (oxy)hydroxide surfaces might have macroscopic consequences, such as differences in the sensitivity of the IEP to the electrolyte concentration.
On the Effect of Sphere-Overlap on Super Coarse-Grained Models of Protein Assemblies
NASA Astrophysics Data System (ADS)
Degiacomi, Matteo T.
2018-05-01
Ion mobility mass spectrometry (IM/MS) can provide structural information on intact protein complexes. Such data, including connectivity and collision cross sections (CCS) of assemblies' subunits, can in turn be used as a guide to produce representative super coarse-grained models. These models consist of ensembles of overlapping spheres, each representing a protein subunit. A model is considered plausible if the CCS and sphere-overlap levels of its subunits fall within predetermined confidence intervals. While the former is determined by experimental error, the latter is based on a statistical analysis of a range of protein dimers. Here, we first propose a new expression to describe the overlap between two spheres. We then analyze the effect of specific overlap cutoff choices on the precision and accuracy of super coarse-grained models. Finally, we propose a method to determine overlap cutoff levels on a per-case basis, based on collected CCS data, and show that it can be applied to the characterization of the assembly topology of symmetrical homo-multimers.
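As a concrete baseline for the notion of sphere-overlap discussed above, the intersection (lens) volume of two spheres has a standard closed form. The paper proposes its own overlap expression; this classical formula is shown only to make the quantity being thresholded tangible.

```python
import math

def lens_volume(r1, r2, d):
    """Volume of intersection of two spheres with radii r1, r2, centres d apart."""
    if d >= r1 + r2:                    # no overlap
        return 0.0
    if d <= abs(r1 - r2):               # smaller sphere entirely inside
        r = min(r1, r2)
        return 4 / 3 * math.pi * r**3
    # standard spherical-lens formula for partial overlap
    return (math.pi * (r1 + r2 - d)**2 *
            (d**2 + 2 * d * (r1 + r2) - 3 * (r1**2 + r2**2) + 6 * r1 * r2)) / (12 * d)

# Two unit spheres with centres one radius apart overlap in a 5*pi/12 lens:
print(lens_volume(1, 1, 1))            # -> 1.3089969...
```

Dividing the lens volume by the volume of the smaller sphere gives a normalized overlap in [0, 1], the kind of quantity a cutoff can be applied to.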
NASA Astrophysics Data System (ADS)
Diestra Cruz, Heberth Alexander
The Green's functions integral technique is used to determine the conduction heat transfer temperature field in flat plates, circular plates, and solid spheres with sawtooth heat generating sources. In all cases the boundary temperature is specified (Dirichlet condition) and the thermal conductivity is constant. The method of images is used to find the Green's function in infinite solids, semi-infinite solids, infinite quadrants, circular plates, and solid spheres. The sawtooth heat generation source has been modeled using the Dirac delta function and the Heaviside step function. The use of Green's functions allows the temperature distribution to be obtained in the form of an integral, which avoids the convergence problems of infinite series. For the infinite solid and the sphere the temperature distribution is three-dimensional, while in the cases of the semi-infinite solid, infinite quadrant, and circular plate the distribution is two-dimensional. The method used in this work is superior to other methods because it obtains elegant analytical or quasi-analytical solutions to complex heat conduction problems with less computational effort and greater accuracy than fully numerical methods.
Code of Federal Regulations, 2014 CFR
2014-10-01
...: (1) Inputs. Decoders must have the capability to receive at least two audio inputs from EAS... externally, at least two minutes of audio or text messages. A decoder manufactured without an internal means to record and store audio or text must be equipped with a means (such as an audio or digital jack...
Code of Federal Regulations, 2013 CFR
2013-10-01
...: (1) Inputs. Decoders must have the capability to receive at least two audio inputs from EAS... externally, at least two minutes of audio or text messages. A decoder manufactured without an internal means to record and store audio or text must be equipped with a means (such as an audio or digital jack...
Code of Federal Regulations, 2012 CFR
2012-10-01
...: (1) Inputs. Decoders must have the capability to receive at least two audio inputs from EAS... externally, at least two minutes of audio or text messages. A decoder manufactured without an internal means to record and store audio or text must be equipped with a means (such as an audio or digital jack...
Hands-On Decoding: Guidelines for Using Manipulative Letters
ERIC Educational Resources Information Center
Pullen, Paige Cullen; Lane, Holly B.
2016-01-01
Manipulative objects have long been an essential tool in the development of mathematics knowledge and skills. A growing body of evidence suggests using manipulative letters for decoding practice is also an effective method for teaching reading, particularly in improving the phonological and decoding skills of students at risk for reading…
The Contribution of Attentional Control and Working Memory to Reading Comprehension and Decoding
ERIC Educational Resources Information Center
Arrington, C. Nikki; Kulesz, Paulina A.; Francis, David J.; Fletcher, Jack M.; Barnes, Marcia A.
2014-01-01
Little is known about how specific components of working memory, namely, attentional processes including response inhibition, sustained attention, and cognitive inhibition, are related to reading decoding and comprehension. The current study evaluated the relations of reading comprehension, decoding, working memory, and attentional control in…
ERIC Educational Resources Information Center
Gregg, Noel; Hoy, Cheri; Flaherty, Donna Ann; Norris, Peggy; Coleman, Christopher; Davis, Mark; Jordan, Michael
2005-01-01
The vast majority of students with learning disabilities at the postsecondary level demonstrate reading decoding, reading fluency, and writing deficits. Identification of valid and reliable psychometric measures for documenting decoding and spelling disabilities at the postsecondary level is critical for determining appropriate accommodations. The…
Coding for reliable satellite communications
NASA Technical Reports Server (NTRS)
Lin, S.
1984-01-01
Several error control coding techniques for reliable satellite communications were investigated to find algorithms for fast decoding of Reed-Solomon codes in terms of the dual basis. The decoding of the (255,223) Reed-Solomon code, which is used as the outer code in the concatenated TDRSS decoder, was of particular concern.
Molecular Simulation of Cesium Adsorption at the Basal Surface of Phyllosilicate Minerals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerisit, Sebastien N.; Okumura, Masahiko; Rosso, Kevin M.
2016-08-16
A better understanding of the thermodynamics of radioactive cesium uptake at the surfaces of phyllosilicate minerals is needed to understand mechanisms of its selective adsorption and help guide the development of practical and inexpensive decontamination techniques. In this work, molecular dynamics simulations were carried out to determine the thermodynamics of adsorption of Cs + at the basal surface of six 2:1 phyllosilicate minerals, namely pyrophyllite, illite, muscovite, phlogopite, celadonite, and margarite. These minerals were selected to isolate the effects of the magnitude of the permanent layer charge (≤ 2), its location (tetrahedral versus octahedral sheet), and the structure of the octahedral sheet (dioctahedral versus trioctahedral). Good agreement was obtained with experiment in terms of the hydration free energy of Cs + and the structure and thermodynamics of Cs + adsorption at the muscovite basal surface, for which published data were available for comparison. With the exception of pyrophyllite, which did not exhibit an inner-sphere free energy minimum, all phyllosilicate minerals showed similar behavior with respect to Cs + adsorption; notably, Cs + adsorption was predominantly inner-sphere whereas outer-sphere adsorption was very weak with the simulations predicting the formation of an extended outer-sphere complex. For a given location of the layer charge, the free energy of adsorption as an inner-sphere complex was found to vary linearly with the magnitude of the layer charge. For a given location and magnitude of the layer charge, adsorption at phlogopite (trioctahedral sheet structure) was much less favorable than at muscovite (dioctahedral sheet structure) due to the electrostatic repulsion between the adsorbed Cs + and the hydrogen atom of the hydroxyl group directly below the six-membered siloxane ring cavity.
For a given magnitude of the layer charge and structure of the octahedral sheet, adsorption at celadonite (layer charge located in the octahedral sheet) was favored over muscovite (layer charge located in the tetrahedral sheet) due to the increased distance with surface potassium ions.
A (31,15) Reed-Solomon Code for large memory systems
NASA Technical Reports Server (NTRS)
Lim, R. S.
1979-01-01
This paper describes the encoding and the decoding of a (31,15) Reed-Solomon Code for multiple-burst error correction for large memory systems. The decoding procedure consists of four steps: (1) syndrome calculation, (2) error-location polynomial calculation, (3) error-location numbers calculation, and (4) error values calculation. The principal features of the design are the use of a hardware shift register for both high-speed encoding and syndrome calculation, and the use of a commercially available (31,15) decoder for decoding Steps 2, 3 and 4.
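The four decoding steps above start from syndrome computation over GF(2^5), the symbol field of a (31,15) code. Below is a minimal Python sketch of Step 1, assuming the primitive polynomial x^5 + x^2 + 1 (one common choice; the paper does not state its field representation):

```python
# GF(2^5) exp/log tables; x^5 + x^2 + 1 is an assumed primitive polynomial.
PRIM = 0b100101
EXP, LOG = [0] * 31, [0] * 32
x = 1
for i in range(31):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0b100000:
        x ^= PRIM

def syndromes(received, t=8):
    """Step 1: evaluate the received polynomial r(x) at alpha^1 .. alpha^(2t).

    All-zero syndromes indicate a valid codeword; otherwise Steps 2-4
    (error-locator polynomial, error locations, error values) follow.
    """
    out = []
    for j in range(1, 2 * t + 1):
        s = 0
        for i, c in enumerate(received):
            if c:
                s ^= EXP[(LOG[c] + i * j) % 31]
        out.append(s)
    return out
```

For the (31,15) code, n − k = 16 check symbols give t = 8, hence 16 syndromes.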
Information encoder/decoder using chaotic systems
Miller, Samuel Lee; Miller, William Michael; McWhorter, Paul Jackson
1997-01-01
The present invention discloses a chaotic system-based information encoder and decoder that operates according to a relationship defining a chaotic system. Encoder input signals modify the dynamics of the chaotic system comprising the encoder. The modifications result in chaotic, encoder output signals that contain the encoder input signals encoded within them. The encoder output signals are then capable of secure transmissions using conventional transmission techniques. A decoder receives the encoder output signals (i.e., decoder input signals) and inverts the dynamics of the encoding system to directly reconstruct the original encoder input signals.
Information encoder/decoder using chaotic systems
Miller, S.L.; Miller, W.M.; McWhorter, P.J.
1997-10-21
The present invention discloses a chaotic system-based information encoder and decoder that operates according to a relationship defining a chaotic system. Encoder input signals modify the dynamics of the chaotic system comprising the encoder. The modifications result in chaotic, encoder output signals that contain the encoder input signals encoded within them. The encoder output signals are then capable of secure transmissions using conventional transmission techniques. A decoder receives the encoder output signals (i.e., decoder input signals) and inverts the dynamics of the encoding system to directly reconstruct the original encoder input signals. 32 figs.
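The patent text above does not disclose a specific chaotic system, but the encode/decode idea can be illustrated with a generic additive-masking sketch built on the logistic map; the function names and the map itself are illustrative stand-ins, not the patented design:

```python
def chaotic_stream(seed, n, r=3.99):
    """Iterate the logistic map x <- r*x*(1-x); fully reproducible from the seed."""
    x, out = seed, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(x)
    return out

def encode(message, seed=0.3141):
    """Mask each input sample with the chaotic signal (the 'encoder output')."""
    return [m + c for m, c in zip(message, chaotic_stream(seed, len(message)))]

def decode(signal, seed=0.3141):
    """A receiver with the matched map and seed regenerates the chaos and removes it."""
    return [s - c for s, c in zip(signal, chaotic_stream(seed, len(signal)))]
```

The security of such toy masking is poor; the point is only that a receiver holding the matched dynamics and seed can regenerate the chaotic signal and recover the original input.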
Node synchronization schemes for the Big Viterbi Decoder
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Swanson, L.; Arnold, S.
1992-01-01
The Big Viterbi Decoder (BVD), currently under development for the DSN, includes three separate algorithms to acquire and maintain node and frame synchronization. The first measures the number of decoded bits between two consecutive renormalization operations (renorm rate), the second detects the presence of the frame marker in the decoded bit stream (bit correlation), while the third searches for an encoded version of the frame marker in the encoded input stream (symbol correlation). A detailed account of the operation of the three methods is given, together with a performance comparison.
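The third method (symbol correlation) can be illustrated with a hard-decision sliding correlator; the actual BVD presumably operates on soft symbols, so this sketch only shows the search-for-best-offset idea:

```python
def marker_offset(stream, marker):
    """Slide the known marker along a hard-decision symbol stream and
    return (best_offset, matches) for the position of maximum agreement."""
    best = (0, -1)
    for off in range(len(stream) - len(marker) + 1):
        matches = sum(a == b for a, b in zip(marker, stream[off:off + len(marker)]))
        if matches > best[1]:
            best = (off, matches)
    return best
```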
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Cabral, Hermano A.; He, Jiali
1997-01-01
Bootstrap Hybrid Decoding (BHD) (Jelinek and Cocke, 1971) is a coding/decoding scheme that adds extra redundancy to a set of convolutionally encoded codewords and uses this redundancy to provide reliability information to a sequential decoder. Theoretical results indicate that bit error probability performance (BER) of BHD is close to that of Turbo-codes, without some of their drawbacks. In this report we study the use of the Multiple Stack Algorithm (MSA) (Chevillat and Costello, Jr., 1977) as the underlying sequential decoding algorithm in BHD, which makes possible an iterative version of BHD.
A comparison of VLSI architectures for time and transform domain decoding of Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Hsu, I. S.; Truong, T. K.; Deutsch, L. J.; Satorius, E. H.; Reed, I. S.
1988-01-01
It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial needed to decode a Reed-Solomon (RS) code. It is shown that this algorithm can be used for both time and transform domain decoding by replacing its initial conditions with the Forney syndromes and the erasure locator polynomial. By this means both the errata locator polynomial and the errata evaluator polynomial can be obtained with the Euclidean algorithm. With these ideas, both time and transform domain Reed-Solomon decoders for correcting errors and erasures are simplified and compared. As a consequence, the architectures of Reed-Solomon decoders for correcting both errors and erasures can be made more modular, regular, simple, and naturally suitable for VLSI implementation.
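In RS decoding, the Euclidean algorithm is run on the pair (x^(2t), S(x)) and halted as soon as the remainder degree drops below t; the final remainder yields the errata evaluator and the accumulated Bezout coefficient the errata locator. A sketch of this "partial" extended GCD over an illustrative prime field GF(101) (real decoders work over GF(2^m); the prime field is chosen here purely for readability):

```python
P = 101  # illustrative prime field; real RS decoders use GF(2^m)

def deg(p):
    return len(p) - 1  # p[i] holds the coefficient of x^i, no trailing zeros

def poly_divmod(a, b):
    """Quotient and remainder of a / b over GF(P)."""
    a = a[:]
    q = [0] * max(1, len(a) - len(b) + 1)
    inv = pow(b[-1], -1, P)
    while len(a) >= len(b) and any(a):
        d = len(a) - len(b)
        c = a[-1] * inv % P
        q[d] = c
        for i, bc in enumerate(b):
            a[i + d] = (a[i + d] - c * bc) % P
        while len(a) > 1 and a[-1] == 0:
            a.pop()
    return q, a

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ac in enumerate(a):
        for j, bc in enumerate(b):
            out[i + j] = (out[i + j] + ac * bc) % P
    return out

def partial_euclid(a, b, t):
    """Run the Euclidean algorithm on (a, b) until deg(remainder) < t.

    For RS decoding, a = x^(2t) and b = S(x); the final remainder acts as
    the errata evaluator and the Bezout coefficient as the errata locator.
    """
    r0, r1, u0, u1 = a, b, [0], [1]
    while deg(r1) >= t:
        q, r = poly_divmod(r0, r1)
        prod = poly_mul(q, u1)
        u2 = [((u0[i] if i < len(u0) else 0) - (prod[i] if i < len(prod) else 0)) % P
              for i in range(max(len(u0), len(prod)))]
        while len(u2) > 1 and u2[-1] == 0:
            u2.pop()
        r0, r1, u0, u1 = r1, r, u1, u2
    return u1, r1  # (locator-like, evaluator-like)
```

The loop preserves the Bezout invariant u_k(x)·b(x) ≡ r_k(x) (mod a(x)), which is exactly the key equation solved in errata decoding.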
NASA Astrophysics Data System (ADS)
Lapotre, Vianney; Gogniat, Guy; Baghdadi, Amer; Diguet, Jean-Philippe
2017-12-01
The proliferation of connected devices goes along with a large variety of applications and traffic types with diverse requirements. Accompanying this connectivity evolution, recent years have seen considerable evolution of wireless communication standards in the domains of mobile telephone networks, local/wide wireless area networks, and Digital Video Broadcasting (DVB). In this context, intensive research has been conducted to provide flexible turbo decoders targeting high throughput, multi-mode, multi-standard operation, and power efficiency. However, flexible turbo decoder implementations have seldom considered dynamic reconfiguration issues in this context, which requires high-speed configuration switching. Starting from this assessment, this paper proposes the first solution that allows frame-by-frame run-time configuration management of a multi-processor turbo decoder without compromising decoding performance.
Convolutional coding at 50 Mbps for the Shuttle Ku-band return link
NASA Technical Reports Server (NTRS)
Batson, B. H.; Huth, G. K.
1976-01-01
Error correcting coding is required for the 50 Mbps data link from the Shuttle Orbiter through the Tracking and Data Relay Satellite System (TDRSS) to the ground because of severe power limitations. Convolutional coding has been chosen because the decoding algorithms (sequential and Viterbi) provide significant coding gains at the required bit error probability of 10^-6 and can be implemented at 50 Mbps with moderate hardware. While a 50 Mbps sequential decoder has been built, the highest data rate achieved for a Viterbi decoder is 10 Mbps. Thus, five multiplexed 10 Mbps Viterbi decoders must be used to provide a 50 Mbps data rate. This paper discusses the tradeoffs which were considered when selecting the multiplexed Viterbi decoder approach for this application.
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Kasami, T.; Fujiwara, T.; Lin, S.
1986-01-01
In this paper, a concatenated coding scheme for error control in data communications is presented and analyzed. In this scheme, the inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. Probability of undetected error (or decoding error) of the proposed scheme is derived. An efficient method for computing this probability is presented. Throughput efficiency of the proposed error control scheme incorporated with a selective-repeat ARQ retransmission strategy is also analyzed. Three specific examples are presented. One of the examples is proposed for error control in the NASA Telecommand System.
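The accept/retransmit rule described above is simple enough to state directly; a sketch, with illustrative names:

```python
def arq_decision(inner_decoding_succeeded, outer_code_detects_errors):
    """Accept/retransmit rule of the concatenated scheme: the inner code
    corrects and detects, the outer code only detects; a retransmission is
    requested if either stage reports a problem."""
    if not inner_decoding_succeeded or outer_code_detects_errors:
        return "retransmit"
    return "accept"
```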
Khorasani, Abed; Heydari Beni, Nargess; Shalchyan, Vahid; Daliri, Mohammad Reza
2016-10-21
Local field potential (LFP) signals recorded by intracortical microelectrodes implanted in primary motor cortex can be used as a highly informative input for decoding of motor functions. Recent studies show that different kinematic parameters such as position and velocity can be inferred from multiple LFP signals as precisely as from spiking activity; however, continuous decoding of the force magnitude from the LFP signals in freely moving animals has remained an open problem. Here, we trained three rats to press a force sensor to get a drop of water as a reward. A 16-channel micro-wire array was implanted in the primary motor cortex of each trained rat, and the obtained LFP signals were used for decoding of the continuous values recorded by the force sensor. The average coefficient of correlation and the coefficient of determination between decoded and actual force signals were r = 0.66 and R² = 0.42, respectively. We found that LFP signals in the gamma frequency bands (30-120 Hz) contributed most to the trained decoding model. This study suggests the feasibility of using a low number of LFP channels for continuous force decoding in freely moving animals, resembling BMI systems in real-life applications.
Electrophysiological difference between mental state decoding and mental state reasoning.
Cao, Bihua; Li, Yiyuan; Li, Fuhong; Li, Hong
2012-06-29
Previous studies have explored the neural mechanism of Theory of Mind (ToM), but the neural correlates of its two components, mental state decoding and mental state reasoning, remain unclear. In the present study, participants were presented with various photographs, showing an actor looking at 1 of 2 objects, either with a happy or an unhappy expression. They were asked to either decode the emotion of the actor (mental state decoding task), predict which object would be chosen by the actor (mental state reasoning task), or judge at which object the actor was gazing (physical task), while scalp potentials were recorded. Results showed that (1) the reasoning task elicited an earlier N2 peak than the decoding task did over the prefrontal scalp sites; and (2) during the late positive component (240-440 ms), the reasoning task elicited a more positive deflection than the other two tasks did at the prefrontal scalp sites. In addition, neither the decoding task nor the reasoning task showed a left/right hemisphere difference. These findings imply that mental state reasoning differs from mental state decoding early (210 ms) after stimulus onset, and that the prefrontal lobe is the neural basis of mental state reasoning. Copyright © 2012 Elsevier B.V. All rights reserved.
Reading skills of students with speech sound disorders at three stages of literacy development.
Skebo, Crysten M; Lewis, Barbara A; Freebairn, Lisa A; Tag, Jessica; Avrich Ciesla, Allison; Stein, Catherine M
2013-10-01
The relationship between phonological awareness, overall language, vocabulary, and nonlinguistic cognitive skills to decoding and reading comprehension was examined for students at 3 stages of literacy development (i.e., early elementary school, middle school, and high school). Students with histories of speech sound disorders (SSD) with and without language impairment (LI) were compared to students without histories of SSD or LI (typical language; TL). In a cross-sectional design, students ages 7;0 (years;months) to 17;9 completed tests that measured reading, language, and nonlinguistic cognitive skills. For the TL group, phonological awareness predicted decoding at early elementary school, and overall language predicted reading comprehension at early elementary school and both decoding and reading comprehension at middle school and high school. For the SSD-only group, vocabulary predicted both decoding and reading comprehension at early elementary school, and overall language predicted both decoding and reading comprehension at middle school and decoding at high school. For the SSD and LI group, overall language predicted decoding at all 3 literacy stages and reading comprehension at early elementary school and middle school, and vocabulary predicted reading comprehension at high school. Although similar skills contribute to reading across the age span, the relative importance of these skills changes with children's literacy stages.
Reading Skills of Students With Speech Sound Disorders at Three Stages of Literacy Development
Skebo, Crysten M.; Lewis, Barbara A.; Freebairn, Lisa A.; Tag, Jessica; Ciesla, Allison Avrich; Stein, Catherine M.
2015-01-01
Purpose The relationship between phonological awareness, overall language, vocabulary, and nonlinguistic cognitive skills to decoding and reading comprehension was examined for students at 3 stages of literacy development (i.e., early elementary school, middle school, and high school). Students with histories of speech sound disorders (SSD) with and without language impairment (LI) were compared to students without histories of SSD or LI (typical language; TL). Method In a cross-sectional design, students ages 7;0 (years; months) to 17;9 completed tests that measured reading, language, and nonlinguistic cognitive skills. Results For the TL group, phonological awareness predicted decoding at early elementary school, and overall language predicted reading comprehension at early elementary school and both decoding and reading comprehension at middle school and high school. For the SSD-only group, vocabulary predicted both decoding and reading comprehension at early elementary school, and overall language predicted both decoding and reading comprehension at middle school and decoding at high school. For the SSD and LI group, overall language predicted decoding at all 3 literacy stages and reading comprehension at early elementary school and middle school, and vocabulary predicted reading comprehension at high school. Conclusion Although similar skills contribute to reading across the age span, the relative importance of these skills changes with children’s literacy stages. PMID:23833280
Optimizations of a Hardware Decoder for Deep-Space Optical Communications
NASA Technical Reports Server (NTRS)
Cheng, Michael K.; Nakashima, Michael A.; Moision, Bruce E.; Hamkins, Jon
2007-01-01
The National Aeronautics and Space Administration has developed a capacity approaching modulation and coding scheme that comprises a serial concatenation of an inner accumulate pulse-position modulation (PPM) and an outer convolutional code [or serially concatenated PPM (SCPPM)] for deep-space optical communications. Decoding of this code uses the turbo principle. However, due to the nonbinary property of SCPPM, a straightforward application of classical turbo decoding is very inefficient. Here, we present various optimizations applicable in hardware implementation of the SCPPM decoder. More specifically, we feature a Super Gamma computation to efficiently handle parallel trellis edges, a pipeline-friendly 'maxstar top-2' circuit that reduces the max-only approximation penalty, a low-latency cyclic redundancy check circuit for window-based decoders, and a high-speed algorithmic polynomial interleaver that leads to memory savings. Using the featured optimizations, we implement a 6.72 megabits-per-second (Mbps) SCPPM decoder on a single field-programmable gate array (FPGA). Compared to the current data rate of 256 kilobits per second from Mars, the SCPPM coded scheme represents a throughput increase of more than twenty-sixfold. Extension to a 50-Mbps decoder on a board with multiple FPGAs follows naturally. We show through hardware simulations that the SCPPM coded system can operate within 1 dB of the Shannon capacity at nominal operating conditions.
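The 'maxstar top-2' idea can be sketched as follows: the exact Jacobian logarithm max*(a, b) = max(a, b) + log(1 + e^(-|a-b|)) is approximated over many inputs by keeping only the two largest, which recovers most of the max-only approximation penalty. The function names below are illustrative, not the paper's circuit:

```python
import math

def max_star(values):
    """Exact Jacobian logarithm: log(sum(exp(v))) computed pairwise."""
    acc = values[0]
    for v in values[1:]:
        hi, d = max(acc, v), abs(acc - v)
        acc = hi + math.log1p(math.exp(-d))
    return acc

def max_star_top2(values):
    """Approximate log-sum-exp from only the two largest inputs: cheaper
    than the full pairwise chain, tighter than the max-only shortcut."""
    second, first = sorted(values)[-2:]
    return first + math.log1p(math.exp(-(first - second)))
```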
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.
1986-01-01
High rate concatenated coding systems with trellis inner codes and Reed-Solomon (RS) outer codes for application in satellite communication systems are considered. Two types of inner codes are studied: high rate punctured binary convolutional codes, which result in overall effective information rates between 1/2 and 1 bit per channel use; and bandwidth efficient signal space trellis codes, which can achieve overall effective information rates greater than 1 bit per channel use. Channel capacity calculations with and without side information were performed for the concatenated coding system. Two concatenated coding schemes are investigated. In Scheme 1, the inner code is decoded with the Viterbi algorithm and the outer RS code performs error correction only (decoding without side information). In Scheme 2, the inner code is decoded with a modified Viterbi algorithm which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, while branch metrics are used to provide the reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. These two schemes are proposed for use on NASA satellite channels. Results indicate that high system reliability can be achieved with little or no bandwidth expansion.
NASA Astrophysics Data System (ADS)
Challis, R. E.; Tebbutt, J. S.; Holmes, A. K.
1998-12-01
The aim of this paper is to present a unified approach to the calculation of the complex wavenumber for a randomly distributed ensemble of homogeneous isotropic spheres suspended in a homogeneous isotropic continuum. Three classical formulations of the diffraction problem for a compression wave incident on a single particle are reviewed; the first is for liquid particles in a liquid continuum (Epstein and Carhart), the second for solid or liquid particles in a liquid continuum (Allegra and Hawley), and the third for solid particles in a solid continuum (Ying and Truell). Equivalences between these formulations are demonstrated and it is shown that the Allegra and Hawley formulation can be adapted to provide a basis for calculation in all three regimes. The complex wavenumber that results from an ensemble of such scatterers is treated using the formulations of Foldy (simple forward scattering), Waterman and Truell, and Lloyd and Berry (multiple scattering). The analysis is extended to provide an approximation for the case of a distribution of particle sizes in the mixture. A number of experimental measurements using a broadband spectrometric technique (reported elsewhere) to obtain the attenuation coefficient and phase velocity as functions of frequency are presented for various mixtures of differing contrasts in physical properties between phases in order to provide a comparison with theory. The materials used were aqueous suspensions of polystyrene spheres, silica spheres, iron spheres, a pigment (AHR), droplets of 1-bromohexadecane, and a suspension of talc particles in a cured epoxy resin.
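For reference, the single- and multiple-scattering effective-wavenumber expressions mentioned above take the following standard forms for scatterer number density n_0 and far-field scattering amplitudes f(0) (forward) and f(π) (backward); these are quoted from the multiple-scattering literature, not derived in the abstract:

```latex
\left(\frac{K}{k}\right)^2 = 1 + \frac{4\pi n_0 f(0)}{k^2}
\qquad \text{(Foldy)}

\left(\frac{K}{k}\right)^2 =
\left[1 + \frac{2\pi n_0 f(0)}{k^2}\right]^2
- \left[\frac{2\pi n_0 f(\pi)}{k^2}\right]^2
\qquad \text{(Waterman--Truell)}
```

Here k is the wavenumber of the continuum and the imaginary part of K gives the attenuation coefficient of the suspension.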
Arai, Y.; McBeath, M.; Bargar, J.R.; Joye, J.; Davis, J.A.
2006-01-01
Macro- and molecular-scale knowledge of uranyl (U(VI)) partitioning reactions with soil/sediment mineral components is important in predicting U(VI) transport processes in the vadose zone and aquifers. In this study, U(VI) reactivity and surface speciation on a poorly crystalline aluminosilicate mineral, synthetic imogolite, were investigated using batch adsorption experiments, X-ray absorption spectroscopy (XAS), and surface complexation modeling. U(VI) uptake on imogolite surfaces was greatest at pH ~7-8 (I = 0.1 M NaNO3 solution, suspension density = 0.4 g/L, [U(VI)]i = 0.01-30 μM, equilibration with air). Uranyl uptake decreased with increasing sodium nitrate concentration in the range from 0.02 to 0.5 M. XAS analyses show that two U(VI) inner-sphere (bidentate mononuclear coordination on outer-wall aluminol groups) and one outer-sphere surface species are present on the imogolite surface, and the distribution of the surface species is pH dependent. At pH 8.8, bis-carbonato inner-sphere and tris-carbonato outer-sphere surface species are present. At pH 7, bis- and non-carbonato inner-sphere surface species co-exist, and the fraction of bis-carbonato species increases slightly with increasing I (0.1-0.5 M). At pH 5.3, U(VI) non-carbonato bidentate mononuclear surface species predominate (69%). A triple layer surface complexation model was developed with surface species that are consistent with the XAS analyses and macroscopic adsorption data. The proton stoichiometry of surface reactions was determined from both the pH dependence of U(VI) adsorption data in pH regions of surface species predominance and from bond-valence calculations. The bis-carbonato species required a distribution of surface charge between the surface and β charge planes in order to be consistent with both the spectroscopic and macroscopic adsorption data.
This research indicates that U(VI)-carbonato ternary species on poorly crystalline aluminosilicate mineral surfaces may be important in controlling U(VI) mobility in low-temperature geochemical environments over a wide pH range (~5-9), even at the partial pressure of carbon dioxide of ambient air (pCO2 = 10^-3.45 atm). © 2006 Elsevier Inc. All rights reserved.
SATO, Osamu
2012-01-01
Various molecular magnetic compounds whose magnetic properties can be controlled by external stimuli have been developed, including electrochemically, photochemically, and chemically tunable bulk magnets as well as a phototunable antiferromagnetic phase of single chain magnet. In addition, we present tunable paramagnetic mononuclear complexes ranging from spin crossover complexes and valence tautomeric complexes to Co complexes in which orbital angular momentum can be switched. Furthermore, we recently developed several switchable clusters and one-dimensional coordination polymers. The switching of magnetic properties can be achieved by modulating metals, ligands, and molecules/ions in the second sphere of the complexes. PMID:22728438
Deciphering the glycosaminoglycan code with the help of microarrays.
de Paz, Jose L; Seeberger, Peter H
2008-07-01
Carbohydrate microarrays have become a powerful tool to elucidate the biological role of complex sugars. Microarrays are particularly useful for the study of glycosaminoglycans (GAGs), a key class of carbohydrates. The high-throughput chip format enables rapid screening of large numbers of potential GAG sequences produced via a complex biosynthesis while consuming very little sample. Here, we briefly highlight the most recent advances involving GAG microarrays built with synthetic or naturally derived oligosaccharides. These chips are powerful tools for characterizing GAG-protein interactions and determining structure-activity relationships for specific sequences. Thereby, they contribute to decoding the information contained in specific GAG sequences.
Results of medical studies during long-term manned flights on the orbital Salyut-6 and Soyuz complex
NASA Technical Reports Server (NTRS)
Yegorov, A. D. (Compiler)
1979-01-01
Results of tests made on the crews of the Salyut-6 and Soyuz complex are presented. The basic results of studies made before, during and after 96-day and 140-day flights are presented in 5 sections: characteristics of flight conditions in the orbital complex; the cardiovascular system; the motor sphere and vestibular analyzer; biochemical, hematologic and immunologic studies; and recovery measures in the readaptation period.
1978-09-12
the population. Only a socialist, planned economy can cope with such problems. However, the increasing complexity of the tasks faced by... the development of systems allowing man-machine dialogue does not decrease, but rather increases, the complexity of the systems involved, simply... shifting the complexity to another sphere, where it is invisible to the human utilizing the system. Figures 5; references 3: 2 Russian, 1 Western.
The role of the medial temporal limbic system in processing emotions in voice and music.
Frühholz, Sascha; Trost, Wiebke; Grandjean, Didier
2014-12-01
Subcortical brain structures of the limbic system, such as the amygdala, are thought to decode the emotional value of sensory information. Recent neuroimaging studies, as well as lesion studies in patients, have shown that the amygdala is sensitive to emotions in voice and music. Similarly, the hippocampus, another part of the temporal limbic system (TLS), is responsive to vocal and musical emotions, but its specific roles in emotional processing from music and especially from voices have been largely neglected. Here we review recent research on vocal and musical emotions, and outline commonalities and differences in the neural processing of emotions in the TLS in terms of emotional valence, emotional intensity and arousal, as well as in terms of acoustic and structural features of voices and music. We summarize the findings in a neural framework including several subcortical and cortical functional pathways between the auditory system and the TLS. This framework proposes that some vocal expressions might already receive a fast emotional evaluation via a subcortical pathway to the amygdala, whereas cortical pathways to the TLS are thought to be equally used for vocal and musical emotions. While the amygdala might be specifically involved in a coarse decoding of the emotional value of voices and music, the hippocampus might process more complex vocal and musical emotions, and might have an important role especially for the decoding of musical emotions by providing memory-based and contextual associations. Copyright © 2014 Elsevier Ltd. All rights reserved.
A method for decoding the neurophysiological spike-response transform.
Stern, Estee; García-Crescioni, Keyla; Miller, Mark W; Peskin, Charles S; Brezina, Vladimir
2009-11-15
Many physiological responses elicited by neuronal spikes (intracellular calcium transients, synaptic potentials, muscle contractions) are built up of discrete, elementary responses to each spike. However, the spikes occur in trains of arbitrary temporal complexity, and each elementary response not only sums with previous ones, but can itself be modified by the previous history of the activity. A basic goal in system identification is to characterize the spike-response transform in terms of a small number of functions (the elementary response kernel and additional kernels or functions that describe the dependence on previous history) that will predict the response to any arbitrary spike train. Here we do this by developing further and generalizing the "synaptic decoding" approach of Sen et al. (1996). Given the spike times in a train and the observed overall response, we use least-squares minimization to construct the best estimated response and, at the same time, best estimates of the elementary response kernel and the other functions that characterize the spike-response transform. We avoid the need for any specific initial assumptions about these functions by using techniques of mathematical analysis and linear algebra that allow us to solve simultaneously for all of the numerical function values treated as independent parameters. The functions are such that they may be interpreted mechanistically. We examine the performance of the method as applied to synthetic data. We then use the method to decode real synaptic and muscle contraction transforms.
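The simplest special case of this estimation problem (pure linear superposition, no history dependence) reduces to ordinary least squares on a spike-triggered design matrix. A sketch under that assumption; the names and the NumPy-based formulation are illustrative, not the authors' implementation:

```python
import numpy as np

def estimate_kernel(spike_times, response, klen):
    """Least-squares estimate of the elementary per-spike response kernel,
    assuming the response is a pure linear superposition of per-spike
    kernels (no modification by previous activity)."""
    T = len(response)
    X = np.zeros((T, klen))
    for t0 in spike_times:
        for j in range(klen):
            if t0 + j < T:
                X[t0 + j, j] += 1.0  # each spike contributes a shifted kernel copy
    kernel, *_ = np.linalg.lstsq(X, np.asarray(response, dtype=float), rcond=None)
    return kernel
```

With history-dependent modulation, the same least-squares framework is extended with additional unknown functions, which is the generalization the paper develops.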