Sample records for large coding gains

  1. Bandwidth efficient coding for satellite communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Costello, Daniel J., Jr.; Miller, Warner H.; Morakis, James C.; Poland, William B., Jr.

    1992-01-01

    An error control coding scheme was devised to achieve large coding gain and high reliability by using coded modulation with reduced decoding complexity. To achieve a 3 to 5 dB coding gain and moderate reliability, the decoding complexity is quite modest. In fact, to achieve a 3 dB coding gain, the decoding complexity is quite simple, no matter whether trellis coded modulation or block coded modulation is used. However, to achieve coding gains exceeding 5 dB, the decoding complexity increases drastically, and the implementation of the decoder becomes very expensive and impractical. The use of coded modulation in conjunction with concatenated (or cascaded) coding is proposed. A good short bandwidth efficient modulation code is used as the inner code and a relatively powerful Reed-Solomon code is used as the outer code. With properly chosen inner and outer codes, a concatenated coded modulation scheme not only can achieve large coding gains and high reliability with good bandwidth efficiency but also can be practically implemented. This combination of coded modulation and concatenated coding offers a way of achieving the best of three worlds: reliability and coding gain, bandwidth efficiency, and decoding complexity.

  2. Evaluation of large girth LDPC codes for PMD compensation by turbo equalization.

    PubMed

    Minkov, Lyubomir L; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Kueppers, Franko

    2008-08-18

    Large-girth quasi-cyclic LDPC codes have been experimentally evaluated for use in PMD compensation by turbo equalization for a 10 Gb/s NRZ optical transmission system, observing one sample per bit. The net effective coding gain improvement of the girth-10, rate-0.906 code of length 11936 over the maximum a posteriori probability (MAP) detector, for a differential group delay of 125 ps, is 6.25 dB at a BER of 10^-6. The girth-10 LDPC code of rate 0.8 outperforms the girth-10 code of rate 0.906 by 2.75 dB, and provides a net effective coding gain improvement of 9 dB at the same BER. It is experimentally determined that girth-10 LDPC codes of length around 15000 approach the channel capacity limit to within 1.25 dB.
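
    As a reading aid (not part of the record), here is a minimal sketch of how a net effective coding gain figure of this kind is commonly computed against an uncoded BPSK/AWGN reference, assuming uncoded BER = 0.5*erfc(sqrt(Eb/N0)); the function names and example numbers are illustrative only.

```python
# Minimal sketch: net coding gain (NCG) on an uncoded BPSK/AWGN reference.
# Assumptions (not from the record): uncoded BER = 0.5*erfc(sqrt(Eb/N0)),
# and the code reaches the target output BER at a known pre-FEC input BER.
# Function names and example numbers are illustrative.
import math
from scipy.special import erfcinv

def ebn0_db_for_ber(ber):
    """Eb/N0 (dB) needed by uncoded BPSK on AWGN to reach a given BER."""
    return 10.0 * math.log10(erfcinv(2.0 * ber) ** 2)

def net_coding_gain_db(target_ber, pre_fec_ber, code_rate):
    """Gross gain at the target BER minus the rate penalty 10*log10(1/R)."""
    gross = ebn0_db_for_ber(target_ber) - ebn0_db_for_ber(pre_fec_ber)
    return gross + 10.0 * math.log10(code_rate)

# Example: a rate-0.8 code that corrects a pre-FEC BER of 2e-2 down to 1e-6.
print(round(net_coding_gain_db(1e-6, 2e-2, 0.8), 2))
```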

  3. Optical LDPC decoders for beyond 100 Gbits/s optical transmission.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2009-05-01

    We present an optical low-density parity-check (LDPC) decoder suitable for implementation above 100 Gbits/s, which provides large coding gains when based on large-girth LDPC codes. We show that a basic building block, the probabilities multiplier circuit, can be implemented using a Mach-Zehnder interferometer, and we propose a corresponding probabilistic-domain sum-product algorithm (SPA). We perform simulations of a fully parallel implementation employing girth-10 LDPC codes and the proposed SPA. The girth-10 LDPC(24015,19212) code of rate 0.8 outperforms the BCH(128,113)xBCH(256,239) turbo-product code of rate 0.82 by 0.91 dB (for binary phase-shift keying at 100 Gbits/s and a bit error rate of 10^-9), and provides a net effective coding gain of 10.09 dB.
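
    As an illustration only (not the authors' optical circuit), the probability-domain check-node update at the heart of a sum-product LDPC decoder is Gallager's parity rule, built from products of (1 - 2p) terms; this is the kind of probability multiplication that the paper maps onto interferometers.

```python
# Minimal sketch of the probability-domain sum-product check-node update
# (Gallager's parity rule).  Illustrative only; not the paper's circuit.
def check_node_update(p_in):
    """p_in[i]: probability that incoming bit i equals 1.
    Returns, for each edge j, the extrinsic probability that bit j is 1,
    given that the parity of the remaining edges must be even."""
    out = []
    for j in range(len(p_in)):
        prod = 1.0
        for i, p in enumerate(p_in):
            if i != j:
                prod *= 1.0 - 2.0 * p
        out.append((1.0 - prod) / 2.0)
    return out

print(check_node_update([0.1, 0.2, 0.3]))
```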

  4. pycola: N-body COLA method code

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin; Eisenstein, Daniel J.; Wandelt, Benjamin D.; Zaldarriaga, Matias

    2015-09-01

    pycola is a multithreaded Python/Cython N-body code implementing the Comoving Lagrangian Acceleration (COLA) method in the temporal and spatial domains, which trades accuracy at small scales to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing. The COLA method achieves its speed by calculating the large-scale dynamics exactly using LPT while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos.
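
    For orientation (my paraphrase, not the authors' notation), the COLA idea can be written schematically as evolving only the residual displacement about the LPT trajectory,

    \[ \frac{d^{2}}{dt^{2}}\,\delta\mathbf{x} \;=\; -\nabla\Phi \;-\; \frac{d^{2}}{dt^{2}}\,\mathbf{x}_{\mathrm{LPT}}, \qquad \mathbf{x} \;=\; \mathbf{x}_{\mathrm{LPT}} + \delta\mathbf{x}, \]

    so the large-scale motion is carried analytically by the LPT trajectory while the particle-mesh force only has to correct the small scales.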

  5. FPGA implementation of high-performance QC-LDPC decoder for optical communications

    NASA Astrophysics Data System (ADS)

    Zou, Ding; Djordjevic, Ivan B.

    2015-01-01

    Forward error correction is one of the key technologies enabling next-generation high-speed fiber optical communications. Quasi-cyclic (QC) low-density parity-check (LDPC) codes have been considered one of the promising candidates due to their large coding gain and low implementation complexity. In this paper, we present our designed QC-LDPC code with girth 10 and 25% overhead based on pairwise balanced design. By FPGA-based emulation, we demonstrate that the 5-bit soft-decision LDPC decoder can achieve an 11.8 dB net coding gain with no error floor at a BER of 10^-15, without using any outer code or post-processing method. We believe that the proposed single QC-LDPC code is a promising solution for 400 Gb/s optical communication systems and beyond.

  6. Coding gains and error rates from the Big Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Onyszchuk, I. M.

    1991-01-01

    A prototype hardware Big Viterbi Decoder (BVD) was completed for an experiment with the Galileo Spacecraft. Searches for new convolutional codes, studies of Viterbi decoder hardware designs and architectures, mathematical formulations, decompositions of the deBruijn graph into identical and hierarchical subgraphs, and very large scale integration (VLSI) chip design are just a few examples of tasks completed for this project. The BVD bit error rates (BER), measured from hardware and software simulations, are plotted as a function of the bit signal to noise ratio E_b/N_0 on the additive white Gaussian noise channel. Using the constraint length 15, rate 1/4, experimental convolutional code for the Galileo mission, the BVD gains 1.5 dB over the NASA standard (7,1/2) Maximum Likelihood Convolutional Decoder (MCD) at a BER of 0.005. At this BER, the same gain results when the (255,223) NASA standard Reed-Solomon decoder is used, which yields a word error rate of 2.1 x 10^-8 and a BER of 1.4 x 10^-9. The (15, 1/6) code to be used by the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini missions yields 1.7 dB of coding gain. These gains are measured with respect to symbols input to the BVD and increase with decreasing BER. Also, 8-bit input symbol quantization makes the BVD resistant to demodulated signal-level variations. Because the experimental codes require higher bandwidth than the NASA (7,1/2) code, these gains are offset by about 0.1 dB of expected additional receiver losses. Coding gains of several decibels are possible by compressing all spacecraft data.
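
    For readers unfamiliar with Viterbi decoding, the sketch below is a toy hard-decision decoder for a small rate-1/2 convolutional code; it illustrates the add-compare-select recursion that a decoder like the BVD performs, scaled down from constraint length 15 to 3. The generators and message are illustrative; this is not the Galileo code or the BVD architecture.

```python
# Toy hard-decision Viterbi decoder for a rate-1/2, K=3 convolutional code
# (generators 7,5 octal).  Purely illustrative.
G = [0b111, 0b101]            # generator polynomials
K = 3                         # constraint length
NSTATES = 1 << (K - 1)

def encode_step(state, bit):
    reg = (bit << (K - 1)) | state
    out = tuple(bin(reg & g).count("1") & 1 for g in G)
    return out, reg >> 1      # coded output pair, next state

def viterbi(received):
    """received: list of 2-bit tuples; returns the maximum likelihood bits."""
    INF = float("inf")
    metric = [0.0] + [INF] * (NSTATES - 1)        # start in the all-zero state
    paths = [[] for _ in range(NSTATES)]
    for r in received:
        new_metric, new_paths = [INF] * NSTATES, [None] * NSTATES
        for s in range(NSTATES):
            if metric[s] == INF:
                continue
            for bit in (0, 1):
                out, ns = encode_step(s, bit)
                m = metric[s] + sum(a != b for a, b in zip(out, r))  # add
                if m < new_metric[ns]:                               # compare-select
                    new_metric[ns], new_paths[ns] = m, paths[s] + [bit]
        metric, paths = new_metric, new_paths
    return paths[min(range(NSTATES), key=lambda s: metric[s])]

# Round trip: encode a short message, flip one channel bit, decode.
msg, state, channel = [1, 0, 1, 1, 0, 0], 0, []
for b in msg:
    out, state = encode_step(state, b)
    channel.append(out)
channel[2] = (channel[2][0] ^ 1, channel[2][1])    # single channel error
print(viterbi(channel))    # expected to recover [1, 0, 1, 1, 0, 0]
```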

  7. COLAcode: COmoving Lagrangian Acceleration code

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin V.

    2016-02-01

    COLAcode is a serial particle mesh-based N-body code illustrating the COLA (COmoving Lagrangian Acceleration) method; it solves for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). It differs from standard N-body code by trading accuracy at small-scales to gain computational speed without sacrificing accuracy at large scales. This is useful for generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing; such catalogs are needed to perform detailed error analysis for ongoing and future surveys of LSS.

  8. Ultrahigh Error Threshold for Surface Codes with Biased Noise

    NASA Astrophysics Data System (ADS)

    Tuckett, David K.; Bartlett, Stephen D.; Flammia, Steven T.

    2018-02-01

    We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara, and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is, in fact, at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.

  9. Performance Analysis of a New Coded TH-CDMA Scheme in Dispersive Infrared Channel with Additive Gaussian Noise

    NASA Astrophysics Data System (ADS)

    Hamdi, Mazda; Kenari, Masoumeh Nasiri

    2013-06-01

    We consider a time-hopping based multiple access scheme introduced in [1] for communication over dispersive infrared links, and evaluate its performance for correlator and matched filter receivers. In the investigated time-hopping code division multiple access (TH-CDMA) method, the transmitter benefits from a low-rate convolutional encoder. In this method, the bit interval is divided into N_c chips, and the output of the encoder, along with a PN sequence assigned to the user, determines the position of the chip in which the optical pulse is transmitted. We evaluate the multiple access performance of the system for the correlation receiver considering background noise, which is modeled as white Gaussian noise due to its large intensity. For the correlation receiver, the results show that for a fixed processing gain, at high transmit power, where the multiple access interference has the dominant effect, the performance improves with the coding gain. But at low transmit power, where an increase of the coding gain leads to a decrease of the chip time, and consequently to more corruption due to channel dispersion, there exists an optimum value for the coding gain. However, for the matched filter, the performance always improves with the coding gain. The results show that the matched filter receiver outperforms the correlation receiver in the considered cases. Our results show that, for the same bandwidth and bit rate, the proposed system outperforms other multiple access techniques, such as conventional CDMA and time hopping.

  10. Multi-stage decoding for multi-level block modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao

    1991-01-01

    Various types of multistage decoding for multilevel block modulation codes, in which the decoding of a component code at each stage can be either soft decision or hard decision, maximum likelihood or bounded distance, are discussed. Error performance of the codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. It was found that, if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. It was also found that the difference in performance between the suboptimum multi-stage soft decision maximum likelihood decoding of a modulation code and the single stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multi-stage decoding of multi-level modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.

  11. Multi-stage decoding for multi-level block modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1991-01-01

    In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance. Error performance of the codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computation results, we find that, if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multi-stage decoding of multi-level modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.

  12. Multi-stage decoding of multi-level modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Costello, Daniel J., Jr.

    1991-01-01

    Various types of multi-stage decoding for multi-level modulation codes are investigated. It is shown that if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. Particularly, it is shown that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum soft-decision decoding of the code is very small, only a fraction of a dB loss in signal to noise ratio at a bit error rate (BER) of 10^-6.

  13. A bandwidth efficient coding scheme for the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Pietrobon, Steven S.; Costello, Daniel J., Jr.

    1991-01-01

    As a demonstration of the performance capabilities of trellis codes using multidimensional signal sets, a Viterbi decoder was designed. The choice of code was based on two factors. The first factor was its application as a possible replacement for the coding scheme currently used on the Hubble Space Telescope (HST). The HST at present uses the rate 1/3, nu = 6 (with 2^nu = 64 states) convolutional code with Binary Phase Shift Keying (BPSK) modulation. With the modulator restricted to 3 Msym/s, this implies a data rate of only 1 Mbit/s, since the bandwidth efficiency is K = 1/3 bit/sym. This is a very bandwidth inefficient scheme, although the system has the advantage of simplicity and large coding gain. The basic requirement from NASA was for a scheme with as large a K as possible. Since a satellite channel was being used, 8PSK modulation was selected. This allows a K of between 2 and 3 bit/sym. The next influencing factor was INTELSAT's intention of transmitting the SONET 155.52 Mbit/s standard data rate over the 72 MHz transponders on its satellites. This requires a bandwidth efficiency of around 2.5 bit/sym. A Reed-Solomon block code is used as an outer code to give very low bit error rates (BER). A 16-state, rate 5/6, 2.5 bit/sym, 4D-8PSK trellis code was selected. This code has reasonable complexity and has a coding gain of 4.8 dB compared to uncoded 8PSK (2). This trellis code also has the advantage that it is 45 deg rotationally invariant. This means that the decoder needs only to synchronize to one of the two naturally mapped 8PSK signals in the signal set.
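
    For orientation, the bandwidth-efficiency arithmetic quoted in the abstract works out as

    \[ R_b = K \cdot R_s = \tfrac{1}{3}\ \mathrm{bit/sym} \times 3\ \mathrm{Msym/s} = 1\ \mathrm{Mbit/s}, \qquad R_s = \frac{155.52\ \mathrm{Mbit/s}}{2.5\ \mathrm{bit/sym}} \approx 62.2\ \mathrm{Msym/s}, \]

    so the SONET rate fits within a 72 MHz transponder once the efficiency is raised to about 2.5 bit/sym.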

  14. Sub-Selective Quantization for Learning Binary Codes in Large-Scale Image Search.

    PubMed

    Li, Yeqing; Liu, Wei; Huang, Junzhou

    2018-06-01

    Recently, with the explosive growth of visual content on the Internet, large-scale image search has attracted intensive attention. It has been shown that mapping high-dimensional image descriptors to compact binary codes can lead to considerable efficiency gains in both storage and similarity computation. However, most existing methods still suffer from expensive training devoted to large-scale binary code learning. To address this issue, we propose a sub-selection based matrix manipulation algorithm, which can significantly reduce the computational cost of code learning. As case studies, we apply the sub-selection algorithm to several popular quantization techniques, including cases using linear and nonlinear mappings. Crucially, we can justify the resulting sub-selective quantization by proving its theoretical properties. Extensive experiments are carried out on three image benchmarks with up to one million samples, corroborating the efficacy of the sub-selective quantization method in terms of image retrieval.

  15. Pseudo-orthogonal frequency coded wireless SAW RFID temperature sensor tags.

    PubMed

    Saldanha, Nancy; Malocha, Donald C

    2012-08-01

    SAW sensors are ideal for various wireless, passive multi-sensor applications because they are small, rugged, radiation hard, and offer a wide range of material choices for operation over broad temperature ranges. The readable distance of a tag in a multi-sensor environment is dependent on the insertion loss of the device and the processing gain of the system. Single-frequency code division multiple access (CDMA) tags that are used in high-volume commercial applications must have universal coding schemes and large numbers of codes. The use of a large number of bits at the common center frequency to achieve sufficient code diversity in CDMA tags necessitates reflector banks with >30 dB loss. Orthogonal frequency coding is a spread-spectrum approach that employs frequency and time diversity to achieve enhanced tag properties. The use of orthogonal frequency coded (OFC) SAW tags reduces adjacent reflector interactions for low insertion loss, increased range, complex coding, and system processing gain. This work describes a SAW tag-sensor platform that reduces device loss by implementing long reflector banks with optimized spectral coding. This new pseudo-OFC (POFC) coding is defined and contrasted with the previously defined OFC coding scheme. Auto- and cross-correlation properties of the chips and their relation to reflectivity per strip and reflector length are discussed. Results at 250 MHz of 8-chip OFC and POFC SAW tags will be compared. The key parameters of insertion loss, cross-correlation, and autocorrelation of the two types of frequency-coded tags will be analyzed, contrasted, and discussed. It is shown that coded reflector banks can be achieved with near-zero loss and still maintain good coding properties. Experimental results and results predicted by the coupling of modes model are presented for varying reflector designs and codes. A prototype 915-MHz POFC sensor tag is used as a wireless temperature sensor and the results are shown.

  16. Visual attention mitigates information loss in small- and large-scale neural codes

    PubMed Central

    Sprague, Thomas C; Saproo, Sameer; Serences, John T

    2015-01-01

    The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires processing sensory signals in a manner that protects information about relevant stimuli from degradation. Such selective processing – or selective attention – is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding. PMID:25769502

  17. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1989-01-01

    Two aspects of the work for NASA are examined: the construction of multi-dimensional phase modulation trellis codes and a performance analysis of these codes. A complete list of all the best trellis codes for use with phase modulation is given. LxMPSK signal constellations are included for M = 4, 8, and 16 and L = 1, 2, 3, and 4. Spectral efficiencies range from 1 bit/channel symbol (equivalent to rate 1/2 coded QPSK) to 3.75 bits/channel symbol (equivalent to 15/16 coded 16-PSK). The parity check polynomials, rotational invariance properties, free distance, path multiplicities, and coding gains are given for all codes. These codes are considered to be the best candidates for implementation of a high speed decoder for satellite transmission. The design of a hardware decoder for one of these codes, viz., the 16-state 3x8-PSK code with free distance 4.0 and coding gain 3.75 dB, is discussed. An exhaustive simulation study of the multi-dimensional phase modulation trellis codes is also included. This study was motivated by the fact that the coding gains quoted for almost all codes found in the literature are in fact only asymptotic coding gains, i.e., the coding gain at very high signal to noise ratios (SNRs) or very low BER. These asymptotic coding gains can be obtained directly from a knowledge of the free distance of the code. On the other hand, real coding gains at BERs in the range of 10^-2 to 10^-6, where these codes are most likely to operate in a concatenated system, must be determined by simulation.
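
    For reference, the asymptotic coding gain mentioned here is the quantity that follows directly from the free distance; for a trellis code over an uncoded reference constellation it is commonly written as

    \[ \gamma_{\infty} \;=\; 10\log_{10}\!\left(\frac{d_{\mathrm{free}}^{2}/E_{\mathrm{coded}}}{d_{\min}^{2}/E_{\mathrm{uncoded}}}\right)\ \mathrm{dB}, \]

    which is why it can be read off from the code's distance properties, whereas real coding gains at BERs of 10^-2 to 10^-6 must be obtained by simulation.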

  18. There is no MacWilliams identity for convolutional codes. [transmission gain comparison]

    NASA Technical Reports Server (NTRS)

    Shearer, J. B.; Mceliece, R. J.

    1977-01-01

    An example is provided of two convolutional codes that have the same transmission gain but whose dual codes do not. This shows that no analog of the MacWilliams identity for block codes can exist relating the transmission gains of a convolutional code and its dual.
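
    For contrast, the identity that does hold for linear block codes relates the weight enumerator W_C of a binary [n,k] code C to that of its dual:

    \[ W_{C^{\perp}}(x, y) \;=\; \frac{1}{|C|}\, W_{C}(x + y,\; x - y). \]

    The record above shows that no analogous relation can tie the transmission gain of a convolutional code to that of its dual.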

  19. Large-Signal Klystron Simulations Using KLSC

    NASA Astrophysics Data System (ADS)

    Carlsten, B. E.; Ferguson, P.

    1997-05-01

    We describe a new, 2-1/2 dimensional, klystron-simulation code, KLSC. This code has a sophisticated input cavity model for calculating the klystron gain with arbitrary input cavity matching and tuning, and is capable of modeling coupled output cavities. We will discuss the input and output cavity models, and present simulation results from a high-power, S-band design. We will use these results to explore tuning issues with coupled output cavities.

  20. Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing

    NASA Technical Reports Server (NTRS)

    Ozguner, Fusun

    1996-01-01

    Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall execution time T_par of the application is dependent on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
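
    The dependence on the sequential fraction is Amdahl's law: with a sequential fraction s of the work and p processors,

    \[ T_{\mathrm{par}} \;=\; T_{\mathrm{seq}}\left(s + \frac{1-s}{p}\right), \qquad \mathrm{speedup} \;=\; \frac{1}{\,s + (1-s)/p\,} \;\le\; \frac{1}{s}, \]

    so even a modest sequential fraction caps the achievable speedup regardless of processor count.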

  1. Visual attention mitigates information loss in small- and large-scale neural codes.

    PubMed

    Sprague, Thomas C; Saproo, Sameer; Serences, John T

    2015-04-01

    The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires that sensory signals are processed in a manner that protects information about relevant stimuli from degradation. Such selective processing--or selective attention--is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, thereby providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. ForTrilinos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Katherine J; Johnson, Seth R; Prokopenko, Andrey V

    'ForTrilinos' is related to The Trilinos Project, which contains a large and growing collection of solver capabilities that can utilize next-generation platforms, in particular scalable multicore, manycore, accelerator and heterogeneous systems. Trilinos is primarily written in C++, including its user interfaces. While C++ is advantageous for gaining access to the latest programming environments, it limits Trilinos usage via Fortran. Several ad hoc translation interfaces exist to enable Fortran usage of Trilinos, but none of these interfaces is general-purpose or written for reusable and sustainable external use. 'ForTrilinos' provides a seamless pathway for large and complex Fortran-based codes to access Trilinos without C/C++ interface code. This access includes Fortran versions of Kokkos abstractions for code execution and data management.

  3. HASEonGPU-An adaptive, load-balanced MPI/GPU-code for calculating the amplified spontaneous emission in high power laser media

    NASA Astrophysics Data System (ADS)

    Eckert, C. H. J.; Zenker, E.; Bussmann, M.; Albach, D.

    2016-10-01

    We present an adaptive Monte Carlo algorithm for computing the amplified spontaneous emission (ASE) flux in laser gain media pumped by pulsed lasers. With the design of high power lasers in mind, which require large size gain media, we have developed the open source code HASEonGPU that is capable of utilizing multiple graphic processing units (GPUs). With HASEonGPU, time to solution is reduced to minutes on a medium size GPU cluster of 64 NVIDIA Tesla K20m GPUs and excellent speedup is achieved when scaling to multiple GPUs. Comparison of simulation results to measurements of ASE in Yb3+:YAG ceramics shows perfect agreement.

  4. Effects of internal gain assumptions in building energy calculations

    NASA Astrophysics Data System (ADS)

    Christensen, C.; Perkins, R.

    1981-01-01

    The utilization of direct solar gains in buildings can be affected by operating profiles, such as schedules for internal gains, thermostat controls, and ventilation rates. Building energy analysis methods use various assumptions about these profiles. The effects of typical internal gain assumptions in energy calculations are described. Heating and cooling loads from simulations using the DOE 2.1 computer code are compared for various internal gain inputs: typical hourly profiles, constant average profiles, and zero gain profiles. Prototype single-family-detached and multifamily-attached residential units are studied with various levels of insulation and infiltration. Small detached commercial buildings and attached zones in large commercial buildings are studied with various levels of internal gains. The results indicate that calculations of annual heating and cooling loads are sensitive to internal gains, but in most cases are relatively insensitive to hourly variations in internal gains.

  5. An Extended Duopoly Game.

    ERIC Educational Resources Information Center

    Eckalbar, John C.

    2002-01-01

    Illustrates how principles and intermediate microeconomic students can gain an understanding for strategic price setting by playing a relatively large oligopoly game. Explains that the game extends to a continuous price space and outlines appropriate applications. Offers the Mathematica code to instructors so that the assumptions of the game can…

  6. LDPC coded OFDM over the atmospheric turbulence channel.

    PubMed

    Djordjevic, Ivan B; Vasic, Bane; Neifeld, Mark A

    2007-05-14

    Low-density parity-check (LDPC) coded optical orthogonal frequency division multiplexing (OFDM) is shown to significantly outperform LDPC coded on-off keying (OOK) over the atmospheric turbulence channel in terms of both coding gain and spectral efficiency. In the regime of strong turbulence at a bit-error rate of 10^-5, the coding gain improvement of the LDPC coded single-side band unclipped-OFDM system with 64 sub-carriers is larger than the coding gain of the LDPC coded OOK system by 20.2 dB for quadrature-phase-shift keying (QPSK) and by 23.4 dB for binary-phase-shift keying (BPSK).

  7. Image sensor system with bio-inspired efficient coding and adaptation.

    PubMed

    Okuno, Hirotsugu; Yagi, Tetsuya

    2012-08-01

    We designed and implemented an image sensor system equipped with three bio-inspired coding and adaptation strategies: logarithmic transform, local average subtraction, and feedback gain control. The system comprises a field-programmable gate array (FPGA), a resistive network, and active pixel sensors (APS), whose light intensity-voltage characteristics are controllable. The system employs multiple time-varying reset voltage signals for APS in order to realize multiple logarithmic intensity-voltage characteristics, which are controlled so that the entropy of the output image is maximized. The system also employs local average subtraction and gain control in order to obtain images with an appropriate contrast. The local average is calculated by the resistive network instantaneously. The designed system was successfully used to obtain appropriate images of objects that were subjected to large changes in illumination.
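
    As a rough illustration (not the authors' FPGA design), the sketch below strings together the three stages named in the abstract: a logarithmic transform, local average subtraction, and a crude feedback gain control standing in for the entropy-maximizing control loop; all parameter names and the control rule are assumptions.

```python
# Minimal sketch of the three bio-inspired stages: logarithmic transform,
# local average subtraction, and a crude feedback gain control.
# Illustrative only; parameters and the control rule are stand-ins.
import numpy as np
from scipy.ndimage import uniform_filter

def bio_inspired_frame(image, gain=1.0, neighborhood=7, target_level=0.5):
    log_img = np.log1p(gain * image.astype(float))        # compressive transform
    local_avg = uniform_filter(log_img, size=neighborhood)
    contrast = log_img - local_avg                         # local average subtraction
    # feedback: nudge the gain so the mean output level drifts toward a target
    next_gain = gain * (1.0 + 0.1 * (target_level - log_img.mean()))
    return contrast, next_gain

frame = np.random.rand(64, 64) * 255.0                     # synthetic input frame
out, g = bio_inspired_frame(frame)
print(out.shape, round(g, 3))
```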

  8. Partial Adaptation of Obtained and Observed Value Signals Preserves Information about Gains and Losses

    PubMed Central

    Baddeley, Michelle; Tobler, Philippe N.; Schultz, Wolfram

    2016-01-01

    Given that the range of rewarding and punishing outcomes of actions is large but neural coding capacity is limited, efficient processing of outcomes by the brain is necessary. One mechanism to increase efficiency is to rescale neural output to the range of outcomes expected in the current context, and process only experienced deviations from this expectation. However, this mechanism comes at the cost of not being able to discriminate between unexpectedly low losses when times are bad versus unexpectedly high gains when times are good. Thus, too much adaptation would result in disregarding information about the nature and absolute magnitude of outcomes, preventing learning about the longer-term value structure of the environment. Here we investigate the degree of adaptation in outcome coding brain regions in humans, for directly experienced outcomes and observed outcomes. We scanned participants while they performed a social learning task in gain and loss blocks. Multivariate pattern analysis showed two distinct networks of brain regions adapt to the most likely outcomes within a block. Frontostriatal areas adapted to directly experienced outcomes, whereas lateral frontal and temporoparietal regions adapted to observed social outcomes. Critically, in both cases, adaptation was incomplete and information about whether the outcomes arose in a gain block or a loss block was retained. Univariate analysis confirmed incomplete adaptive coding in these regions but also detected nonadapting outcome signals. Thus, although neural areas rescale their responses to outcomes for efficient coding, they adapt incompletely and keep track of the longer-term incentives available in the environment. SIGNIFICANCE STATEMENT Optimal value-based choice requires that the brain precisely and efficiently represents positive and negative outcomes. One way to increase efficiency is to adapt responding to the most likely outcomes in a given context. However, too strong adaptation would result in loss of precise representation (e.g., when the avoidance of a loss in a loss-context is coded the same as receipt of a gain in a gain-context). We investigated an intermediate form of adaptation that is efficient while maintaining information about received gains and avoided losses. We found that frontostriatal areas adapted to directly experienced outcomes, whereas lateral frontal and temporoparietal regions adapted to observed social outcomes. Importantly, adaptation was intermediate, in line with influential models of reference dependence in behavioral economics. PMID:27683899

  9. Computer algorithm for coding gain

    NASA Technical Reports Server (NTRS)

    Dodd, E. E.

    1974-01-01

    Development of a computer algorithm for coding gain for use in an automated communications link design system. Using an empirical formula which defines coding gain as used in space communications engineering, an algorithm is constructed on the basis of available performance data for nonsystematic convolutional encoding with soft-decision (eight-level) Viterbi decoding.
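
    The quantity behind such an algorithm is the standard definition of coding gain: at a fixed target bit error rate,

    \[ G_{\mathrm{coding}}\ (\mathrm{dB}) \;=\; \left(\frac{E_b}{N_0}\right)_{\mathrm{uncoded}}(\mathrm{dB}) \;-\; \left(\frac{E_b}{N_0}\right)_{\mathrm{coded}}(\mathrm{dB}), \]

    i.e., the reduction in required E_b/N_0 that the code provides at that error rate; the empirical formula mentioned in the abstract approximates this quantity from the available performance data for Viterbi-decoded convolutional codes.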

  10. Magnet system optimization for segmented adaptive-gap in-vacuum undulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitegi, C., E-mail: ckitegi@bnl.gov; Chubar, O.; Eng, C.

    2016-07-27

    Segmented Adaptive Gap in-vacuum Undulator (SAGU), in which different segments have different gaps and periods, promises a considerable spectral performance gain over a conventional undulator with uniform gap and period. According to calculations, this gain can be comparable to the gain achievable with a superior undulator technology (e.g. a room-temperature in-vacuum hybrid SAGU would perform as a cryo-cooled hybrid in-vacuum undulator with uniform gap and period). However, for reaching the high spectral performance, the SAGU magnetic design has to include compensation of the kicks experienced by the electron beam at segment junctions because of the different deflection parameter values in the segments. We show that such compensation can to a large extent be accomplished by using a passive correction; however, simple correction coils are nevertheless required as well to reach perfect compensation over the whole SAGU tuning range. Magnetic optimizations performed with the Radia code, and the resulting undulator radiation spectra calculated using the SRW code, demonstrating the possibility of nearly perfect correction, are presented.

  11. On the reduced-complexity of LDPC decoders for ultra-high-speed optical transmission.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2010-10-25

    We propose two reduced-complexity (RC) LDPC decoders, which can be used in combination with large-girth LDPC codes to enable ultra-high-speed serial optical transmission. We show that the optimally attenuated RC min-sum algorithm performs only 0.46 dB (at a BER of 10^-9) worse than the conventional sum-product algorithm, while having lower storage memory requirements and much lower latency. We further study the use of RC LDPC decoding algorithms in multilevel coded modulation with coherent detection and show that with RC decoding algorithms we can achieve a net coding gain larger than 11 dB at BERs below 10^-9.
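
    For context, the attenuated (normalized) min-sum check-node update that reduced-complexity decoders rely on replaces the sum-product's tanh-domain product with a sign/minimum rule scaled by an attenuation factor; the sketch below is illustrative and the attenuation value is not the paper's optimum.

```python
# Minimal sketch of an attenuated (normalized) min-sum check-node update.
# Illustrative only; alpha here is not the paper's optimized value.
def min_sum_check_update(llrs_in, alpha=0.8):
    """llrs_in: incoming log-likelihood ratios at one check node.
    Returns the extrinsic LLR sent back on each edge."""
    out = []
    for j in range(len(llrs_in)):
        others = [l for i, l in enumerate(llrs_in) if i != j]
        sign = 1.0
        for l in others:
            sign *= 1.0 if l >= 0 else -1.0
        out.append(alpha * sign * min(abs(l) for l in others))
    return out

print(min_sum_check_update([2.3, -0.7, 1.1, -4.0]))
```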

  12. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1991-01-01

    Shannon's capacity bound shows that coding can achieve large reductions in the required signal to noise ratio per information bit (E_b/N_0, where E_b is the energy per bit and N_0/2 is the double-sided noise density) in comparison to uncoded schemes. For bandwidth efficiencies of 2 bit/sym or greater, these improvements were obtained through the use of Trellis Coded Modulation and Block Coded Modulation. A method of obtaining these high efficiencies using multidimensional Multiple Phase Shift Keying (MPSK) and Quadrature Amplitude Modulation (QAM) signal sets with trellis coding is described. These schemes have advantages in decoding speed, phase transparency, and coding gain in comparison to other trellis coding schemes. Finally, a general parity check equation for rotationally invariant trellis codes is introduced from which non-linear codes for two dimensional MPSK and QAM signal sets are found. These codes are fully transparent to all rotations of the signal set.
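
    The capacity bound referred to here is Shannon's AWGN formula; in the wideband limit it fixes the familiar floor on the required energy per bit,

    \[ C \;=\; B \log_{2}\!\left(1 + \frac{S}{N}\right), \qquad \left(\frac{E_b}{N_0}\right)_{\min} \;\xrightarrow{\;B \to \infty\;}\; \ln 2 \;\approx\; -1.59\ \mathrm{dB}, \]

    and the coded-modulation schemes described above are ways of approaching that bound at spectral efficiencies of 2 bit/sym and higher.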

  13. Extensive intron gain in the ancestor of placental mammals

    PubMed Central

    2011-01-01

    Background Genome-wide studies of intron dynamics in mammalian orthologous genes have found convincing evidence for loss of introns but very little for intron turnover. Similarly, large-scale analysis of intron dynamics in a few vertebrate genomes has identified only intron losses and no gains, indicating that intron gain is an extremely rare event in vertebrate evolution. These studies suggest that the intron-rich genomes of vertebrates do not allow intron gain. The aim of this study was to search for evidence of de novo intron gain in domesticated genes from an analysis of their exon/intron structures. Results A phylogenomic approach has been used to analyse all domesticated genes in mammals and chordates that originated from the coding parts of transposable elements. Gain of introns in domesticated genes has been reconstructed on well established mammalian, vertebrate and chordate phylogenies, and examined as to where and when the gain events occurred. The locations, sizes and amounts of de novo introns gained in the domesticated genes during the evolution of mammals and chordates has been analyzed. A significant amount of intron gain was found only in domesticated genes of placental mammals, where more than 70 cases were identified. De novo gained introns show clear positional bias, since they are distributed mainly in 5' UTR and coding regions, while 3' UTR introns are very rare. In the coding regions of some domesticated genes up to 8 de novo gained introns have been found. Intron densities in Eutheria-specific domesticated genes and in older domesticated genes that originated early in vertebrates are lower than those for normal mammalian and vertebrate genes. Surprisingly, the majority of intron gains have occurred in the ancestor of placentals. Conclusions This study provides the first evidence for numerous intron gains in the ancestor of placental mammals and demonstrates that adequate taxon sampling is crucial for reconstructing intron evolution. The findings of this comprehensive study slightly challenge the current view on the evolutionary stasis in intron dynamics during the last 100 - 200 My. Domesticated genes could constitute an excellent system on which to analyse the mechanisms of intron gain in placental mammals. Reviewers: this article was reviewed by Dan Graur, Eugene V. Koonin and Jürgen Brosius. PMID:22112745

  14. Enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding for four-level holographic data storage systems

    NASA Astrophysics Data System (ADS)

    Kong, Gyuyeol; Choi, Sooyong

    2017-09-01

    An enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding is proposed for four-level holographic data storage systems. While the previous four-ary modulation codes focus on preventing maximum two-dimensional intersymbol interference patterns, the proposed four-ary modulation code aims at maximizing the coding gains for better bit error rate performances. For achieving significant coding gains from the four-ary modulation codes, we design a new 2/3 four-ary modulation code in order to enlarge the free distance on the trellis through extensive simulation. The free distance of the proposed four-ary modulation code is extended from 1.21 to 2.04 compared with that of the conventional four-ary modulation code. The simulation result shows that the proposed four-ary modulation code has more than 1 dB gains compared with the conventional four-ary modulation code.

  15. Colour cyclic code for Brillouin distributed sensors

    NASA Astrophysics Data System (ADS)

    Le Floch, Sébastien; Sauser, Florian; Llera, Miguel; Rochat, Etienne

    2015-09-01

    For the first time, a colour cyclic coding (CCC) is theoretically and experimentally demonstrated for Brillouin optical time-domain analysis (BOTDA) distributed sensors. Compared to traditional intensity-modulated cyclic codes, the code presents an additional gain of √2 while keeping the same number of sequences as for a colour coding. A comparison with a standard BOTDA sensor is realized and validates the theoretical coding gain.

  16. A comparison of the Cray-2 performance before and after the installation of memory pseudo-banking

    NASA Technical Reports Server (NTRS)

    Schmickley, Ronald D.; Bailey, David H.

    1987-01-01

    A suite of 13 large Fortran benchmark codes was run on a Cray-2 configured with memory pseudo-banking circuits, and floating point operation rates were measured for each under a variety of system load configurations. These were compared with similar flop measurements taken on the same system before installation of the pseudo-banking. A useful memory access efficiency parameter was defined and calculated for both sets of performance rates, allowing a crude quantitative measure of the improvement in efficiency due to pseudo-banking. Programs were categorized as either highly scalar (S) or highly vectorized (V) and either memory-intensive or register-intensive, giving 4 categories: S-memory, S-register, V-memory, and V-register. Using flop rates as a simple quantifier of these 4 categories, a scatter plot of efficiency gain vs Mflops roughly illustrates the improvement in floating point processing speed due to pseudo-banking. On the Cray-2 system tested, this improvement ranged from 1 percent for S-memory codes to about 12 percent for V-memory codes. No significant gains were made for V-register codes, which was to be expected.

  17. Overview of NASA Multi-dimensional Stirling Convertor Code Development and Validation Effort

    NASA Technical Reports Server (NTRS)

    Tew, Roy C.; Cairelli, James E.; Ibrahim, Mounir B.; Simon, Terrence W.; Gedeon, David

    2002-01-01

    A NASA grant has been awarded to Cleveland State University (CSU) to develop a multi-dimensional (multi-D) Stirling computer code with the goals of improving loss predictions and identifying component areas for improvements. The University of Minnesota (UMN) and Gedeon Associates are teamed with CSU. Development of test rigs at UMN and CSU and validation of the code against test data are part of the effort. The one-dimensional (1-D) Stirling codes used for design and performance prediction do not rigorously model regions of the working space where abrupt changes in flow area occur (such as manifolds and other transitions between components). Certain hardware experiences have demonstrated large performance gains by varying manifold and heat exchanger designs to improve flow distributions in the heat exchangers. 1-D codes were not able to predict these performance gains. An accurate multi-D code should improve understanding of the effects of area changes along the main flow axis, the sensitivity of performance to slight changes in internal geometry, and, in general, the understanding of various internal thermodynamic losses. The commercial CFD-ACE code has been chosen for development of the multi-D code. This 2-D/3-D code has highly developed pre- and post-processors, and moving boundary capability. Preliminary attempts at validation of CFD-ACE models of the MIT gas spring and "two space" test rigs were encouraging. Also, CSU's simulations of the UMN oscillating-flow rig compare well with flow visualization results from UMN. A complementary Department of Energy (DOE) Regenerator Research effort is aiding in the development of regenerator matrix models that will be used in the multi-D Stirling code. This paper reports on the progress and challenges of this effort.

  18. Recent advances in multiview distributed video coding

    NASA Astrophysics Data System (ADS)

    Dufaux, Frederic; Ouaret, Mourad; Ebrahimi, Touradj

    2007-04-01

    We consider dense networks of surveillance cameras capturing overlapped images of the same scene from different viewing directions, such a scenario being referred to as multi-view. Data compression is paramount in such a system due to the large amount of captured data. In this paper, we propose a Multi-view Distributed Video Coding approach. It allows for low complexity / low power consumption at the encoder side, and the exploitation of inter-view correlation without communications among the cameras. We introduce a combination of temporal intra-view side information and homography inter-view side information. Simulation results show both the improvement of the side information, as well as a significant gain in terms of coding efficiency.

  19. Gain-adaptive vector quantization for medium-rate speech coding

    NASA Technical Reports Server (NTRS)

    Chen, J.-H.; Gersho, A.

    1985-01-01

    A class of adaptive vector quantizers (VQs) that can dynamically adjust the 'gain' of codevectors according to the input signal level is introduced. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ coding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered and several different gain estimators are compared and evaluated. An approach to optimizing the design of gain estimators is introduced. Some of the more obvious techniques for achieving gain adaptation are substantially less effective than the use of optimized gain estimators. A novel design technique that is needed to generate the appropriate gain-normalized codebook for the vector quantizer is introduced. Experimental results show that a significant gain in segmental SNR can be obtained over nonadaptive VQ with a negligible increase in complexity.
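
    To make the encode/decode flow concrete, here is a minimal sketch of forward gain-adaptive VQ as described above: estimate a gain per input vector, quantize the gain-normalized vector against a unit-level codebook, and rescale at the decoder. The RMS gain estimator and random codebook are assumptions for illustration; in practice the gain itself would also be quantized.

```python
# Minimal sketch of forward gain-adaptive vector quantization.
# Illustrative only: RMS gain estimator, random toy codebook.
import numpy as np

def encode(vec, codebook, eps=1e-8):
    gain = np.sqrt(np.mean(vec ** 2)) + eps                # simple RMS gain estimate
    idx = int(np.argmin(np.sum((codebook - vec / gain) ** 2, axis=1)))
    return idx, gain

def decode(idx, gain, codebook):
    return gain * codebook[idx]                            # rescale at the receiver

codebook = np.random.randn(16, 8)                          # toy 16-entry, dim-8 codebook
x = 5.0 * np.random.randn(8)                               # high-level input vector
i, g = encode(x, codebook)
print(np.linalg.norm(x - decode(i, g, codebook)))          # reconstruction error
```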

  20. Spherical hashing: binary code embedding with hyperspheres.

    PubMed

    Heo, Jae-Pil; Lee, Youngwoon; He, Junfeng; Chang, Shih-Fu; Yoon, Sung-Eui

    2015-11-01

    Many binary code embedding schemes have been actively studied recently, since they can provide efficient similarity search and compact data representations suitable for handling large scale image databases. Existing binary code embedding techniques encode high-dimensional data by using hyperplane-based hashing functions. In this paper we propose a novel hypersphere-based hashing function, spherical hashing, to map more spatially coherent data points into a binary code compared to hyperplane-based hashing functions. We also propose a new binary code distance function, spherical Hamming distance, tailored for our hypersphere-based binary coding scheme, and design an efficient iterative optimization process to achieve both balanced partitioning for each hash function and independence between hashing functions. Furthermore, we generalize spherical hashing to support various similarity measures defined by kernel functions. Our extensive experiments show that our spherical hashing technique significantly outperforms state-of-the-art techniques based on hyperplanes across various benchmarks with sizes ranging from one to 75 million GIST, BoW and VLAD descriptors. The performance gains are consistent and large, up to 100 percent improvement over the second-best among the tested methods. These results confirm the unique merits of using hyperspheres to encode proximity regions in high-dimensional spaces. Finally, our method is intuitive and easy to implement.
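
    The core idea is easy to sketch: each bit of the binary code records whether a point falls inside one hypersphere. The snippet below illustrates only that assignment step; the random centers and fixed radii are stand-ins for the optimized pivots the paper learns, not the full method.

```python
# Minimal sketch of hypersphere-based hash assignment.
# Centers and radii here are random stand-ins, not learned pivots.
import numpy as np

def spherical_hash(points, centers, radii):
    """points: (n, d); centers: (c, d); radii: (c,). Returns (n, c) bit codes."""
    dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    return (dists <= radii[None, :]).astype(np.uint8)

rng = np.random.default_rng(0)
pts = rng.normal(size=(5, 16))
centers = rng.normal(size=(8, 16))
radii = np.full(8, 4.0)
print(spherical_hash(pts, centers, radii))
```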

  1. Distributed Adaptive Binary Quantization for Fast Nearest Neighbor Search.

    PubMed

    Xianglong Liu; Zhujin Li; Cheng Deng; Dacheng Tao

    2017-11-01

    Hashing has proved to be an attractive technique for fast nearest neighbor search over big data. Compared with projection-based hashing methods, prototype-based ones have stronger power to generate discriminative binary codes for data with complex intrinsic structure. However, existing prototype-based methods, such as spherical hashing and K-means hashing, still suffer from ineffective coding that utilizes the complete binary codes in a hypercube. To address this problem, we propose an adaptive binary quantization (ABQ) method that learns a discriminative hash function with prototypes associated with small unique binary codes. Our alternating optimization adaptively discovers the prototype set and the code set of a varying size in an efficient way, which together robustly approximate the data relations. Our method can be naturally generalized to the product space for long hash codes, and enjoys fast training that is linear in the number of training samples. We further devise a distributed framework for large-scale learning, which can significantly speed up the training of ABQ in the distributed environments that are now widely deployed in many areas. The extensive experiments on four large-scale (up to 80 million) data sets demonstrate that our method significantly outperforms state-of-the-art hashing methods, with relative performance gains of up to 58.84%.

  2. Modeling chemical gradients in sediments under losing and gaining flow conditions: The GRADIENT code

    NASA Astrophysics Data System (ADS)

    Boano, Fulvio; De Falco, Natalie; Arnon, Shai

    2018-02-01

    Interfaces between sediments and water bodies often represent biochemical hotspots for nutrient reactions and are characterized by steep concentration gradients of different reactive solutes. Vertical profiles of these concentrations are routinely collected to obtain information on nutrient dynamics, and simple codes have been developed to analyze these profiles and determine the magnitude and distribution of reaction rates within sediments. However, existing publicly available codes do not consider the potential contribution of water flow in the sediments to nutrient transport, and their applications to field sites with significant water-borne nutrient fluxes may lead to large errors in the estimated reaction rates. To fill this gap, the present work presents GRADIENT, a novel algorithm to evaluate distributions of reaction rates from observed concentration profiles. GRADIENT is a Matlab code that extends a previously published framework to include the role of nutrient advection, and provides robust estimates of reaction rates in sediments with significant water flow. This work discusses the theoretical basis of the method and shows its performance by comparing the results to a series of synthetic data and to laboratory experiments. The results clearly show that in systems with losing or gaining fluxes, the inclusion of such fluxes is critical for estimating local and overall reaction rates in sediments.
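
    Schematically (my summary, not the code's documentation), the balance that such a profile-fitting code inverts is a steady one-dimensional advection-diffusion-reaction equation for the pore-water concentration C(z),

    \[ D\,\frac{d^{2}C}{dz^{2}} \;-\; q\,\frac{dC}{dz} \;+\; R(z) \;=\; 0, \]

    where q is the vertical (losing or gaining) water flux and R(z) the depth-dependent reaction rate; neglecting the advective term when q is significant is what produces the large errors in estimated rates noted above.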

  3. Side-information-dependent correlation channel estimation in hash-based distributed video coding.

    PubMed

    Deligiannis, Nikos; Barbarien, Joeri; Jacobs, Marc; Munteanu, Adrian; Skodras, Athanassios; Schelkens, Peter

    2012-04-01

    In the context of low-cost video encoding, distributed video coding (DVC) has recently emerged as a potential candidate for uplink-oriented applications. This paper builds on a concept of correlation channel (CC) modeling, which expresses the correlation noise as being statistically dependent on the side information (SI). Compared with classical side-information-independent (SII) noise modeling adopted in current DVC solutions, it is theoretically proven that side-information-dependent (SID) modeling improves the Wyner-Ziv coding performance. Anchored in this finding, this paper proposes a novel algorithm for online estimation of the SID CC parameters based on already decoded information. The proposed algorithm enables bit-plane-by-bit-plane successive refinement of the channel estimation leading to progressively improved accuracy. Additionally, the proposed algorithm is included in a novel DVC architecture that employs a competitive hash-based motion estimation technique to generate high-quality SI at the decoder. Experimental results corroborate our theoretical gains and validate the accuracy of the channel estimation algorithm. The performance assessment of the proposed architecture shows remarkable and consistent coding gains over a germane group of state-of-the-art distributed and standard video codecs, even under strenuous conditions, i.e., large groups of pictures and highly irregular motion content.

  4. Performance of the OVERFLOW-MLP and LAURA-MLP CFD Codes on the NASA Ames 512 CPU Origin System

    NASA Technical Reports Server (NTRS)

    Taft, James R.

    2000-01-01

    The shared memory Multi-Level Parallelism (MLP) technique, developed last year at NASA Ames, has been very successful in dramatically improving the performance of important NASA CFD codes. This new and very simple parallel programming technique was first inserted into the OVERFLOW production CFD code in FY 1998. The OVERFLOW-MLP code's parallel performance scaled linearly to 256 CPUs on the NASA Ames 256 CPU Origin 2000 system (steger). Overall performance exceeded 20.1 GFLOP/s, or about 4.5x the performance of a dedicated 16 CPU C90 system. All of this was achieved without any major modification to the original vector based code. The OVERFLOW-MLP code is now in production on the in-house Origin systems as well as being used offsite at commercial aerospace companies. Partially as a result of this work, NASA Ames has purchased a new 512 CPU Origin 2000 system to further test the limits of parallel performance for NASA codes of interest. This paper presents the performance obtained from the latest optimization efforts on this machine for the LAURA-MLP and OVERFLOW-MLP codes. The Langley Aerothermodynamics Upwind Relaxation Algorithm (LAURA) code is a key simulation tool in the development of the next generation shuttle, interplanetary reentry vehicles, and nearly all "X" plane development. This code sustains about 4-5 GFLOP/s on a dedicated 16 CPU C90. At this rate, expected workloads would require over 100 C90 CPU years of computing over the next few calendar years. It is not feasible to expect that this would be affordable or available to the user community. Dramatic performance gains on cheaper systems are needed. This code is expected to be perhaps the largest consumer of NASA Ames compute cycles per run in the coming year. The OVERFLOW CFD code is extensively used in the government and commercial aerospace communities to evaluate new aircraft designs. It is one of the largest consumers of NASA supercomputing cycles and large simulations of highly resolved full aircraft are routinely undertaken. Typical large problems might require 100s of Cray C90 CPU hours to complete. The dramatic performance gains with the 256 CPU steger system are exciting. Obtaining results in hours instead of months is revolutionizing the way in which aircraft manufacturers are looking at future aircraft simulation work. Figure 2 below is a current state of the art plot of OVERFLOW-MLP performance on the 512 CPU Lomax system. As can be seen, the chart indicates that OVERFLOW-MLP continues to scale linearly with CPU count up to 512 CPUs on a large 35 million point full aircraft RANS simulation. At this point performance is such that a fully converged simulation of 2500 time steps is completed in less than 2 hours of elapsed time. Further work over the next few weeks will improve the performance of this code even further. The LAURA code has been converted to the MLP format as well. This code is currently being optimized for the 512 CPU system. Performance statistics indicate that the goal of 100 GFLOP/s will be achieved by year's end. This amounts to 20x the 16 CPU C90 result and strongly demonstrates the viability of the new parallel systems rapidly solving very large simulations in a production environment.

  5. 75 FR 53019 - Proposed Collection; Comment Request for Regulation Project

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-30

    ... soliciting comments concerning an existing regulation, REG-147144-06, (TD 9446) Section 1.367(a)-8, Gain...: Gain Recognition Agreements With Respect to Certain Transfers of Stock or Securities by United States... Internal Revenue Code (Code) concerning gain recognition agreements filed by United States persons with...

  6. A new code for Galileo

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1988-01-01

    Over the past six to eight years, an extensive research effort was conducted to investigate advanced coding techniques which promised to yield more coding gain than is available with current NASA standard codes. The delay in Galileo's launch due to the temporary suspension of the shuttle program provided the Galileo project with an opportunity to evaluate the possibility of including some version of the advanced codes as a mission enhancement option. A study was initiated last summer to determine if substantial coding gain was feasible for Galileo and, if so, to recommend a suitable experimental code for use as a switchable alternative to the current NASA-standard code. The Galileo experimental code study resulted in the selection of a code with constraint length 15 and rate 1/4. The code parameters were chosen to optimize performance within cost and risk constraints consistent with retrofitting the new code into the existing Galileo system design and launch schedule. The particular code was recommended after a very limited search among good codes with the chosen parameters. It will theoretically yield about 1.5 dB enhancement under idealizing assumptions relative to the current NASA-standard code at Galileo's desired bit error rates. This ideal predicted gain includes enough cushion to meet the project's target of at least 1 dB enhancement under real, non-ideal conditions.
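
    For reference, the coding gain quoted here is the usual figure of merit: the reduction in required Eb/N0, at a fixed bit error rate, relative to an uncoded (or baseline) system. A minimal sketch of that bookkeeping for uncoded BPSK follows; the 4.0 dB "coded" requirement is a made-up placeholder used only to show the subtraction, not the performance of the Galileo code.

        from math import erfc, sqrt

        def uncoded_bpsk_ber(ebn0_db):
            """BER of uncoded BPSK on the AWGN channel at the given Eb/N0 in dB."""
            ebn0 = 10.0 ** (ebn0_db / 10.0)
            return 0.5 * erfc(sqrt(ebn0))

        target_ber = 1e-5
        # scan a fine Eb/N0 grid for the first value meeting the target BER
        uncoded_req_db = next(x / 100.0 for x in range(0, 1500)
                              if uncoded_bpsk_ber(x / 100.0) <= target_ber)

        coded_req_db = 4.0  # hypothetical requirement of some coded system at the same BER
        print(f"uncoded BPSK needs ~{uncoded_req_db:.2f} dB Eb/N0 at BER {target_ber:g}")
        print(f"coding gain of the hypothetical code: {uncoded_req_db - coded_req_db:.2f} dB")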

  7. Implementation of generalized quantum measurements: Superadditive quantum coding, accessible information extraction, and classical capacity limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takeoka, Masahiro; Fujiwara, Mikio; Mizuno, Jun

    2004-05-01

    Quantum-information theory predicts that when the transmission resource is doubled in quantum channels, the amount of information transmitted can be increased more than twice by quantum-channel coding technique, whereas the increase is at most twice in classical information theory. This remarkable feature, the superadditive quantum-coding gain, can be implemented by appropriate choices of code words and corresponding quantum decoding which requires a collective quantum measurement. Recently, an experimental demonstration was reported [M. Fujiwara et al., Phys. Rev. Lett. 90, 167906 (2003)]. The purpose of this paper is to describe our experiment in detail. Particularly, a design strategy of quantum-collective decoding in physical quantum circuits is emphasized. We also address the practical implication of the gain on communication performance by introducing the quantum-classical hybrid coding scheme. We show how the superadditive quantum-coding gain, even in a small code length, can boost the communication performance of conventional coding techniques.

  8. Stochastic Gain Degradation in III-V Heterojunction Bipolar Transistors due to Single Particle Displacement Damage

    DOE PAGES

    Vizkelethy, Gyorgy; Bielejec, Edward S.; Aguirre, Brandon A.

    2017-11-13

    As device dimensions decrease, single displacement effects are becoming more important. We measured the gain degradation in III-V Heterojunction Bipolar Transistors due to single particles using a heavy ion microbeam. Two devices with different sizes were irradiated with various ion species ranging from oxygen to gold to study the effect of the irradiation ion mass on the gain change. From the single steps in the inverse gain (which is proportional to the number of defects) we calculated Cumulative Distribution Functions to help determine design margins. The displacement process was modeled using the Marlowe Binary Collision Approximation (BCA) code. The entire structure of the device was modeled and the defects in the base-emitter junction were counted to be compared to the experimental results. While we found good agreement for the large device, we had to modify our model to reach reasonable agreement for the small device.

  9. Stochastic Gain Degradation in III-V Heterojunction Bipolar Transistors due to Single Particle Displacement Damage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vizkelethy, Gyorgy; Bielejec, Edward S.; Aguirre, Brandon A.

    As device dimensions decrease, single displacement effects are becoming more important. We measured the gain degradation in III-V Heterojunction Bipolar Transistors due to single particles using a heavy ion microbeam. Two devices with different sizes were irradiated with various ion species ranging from oxygen to gold to study the effect of the irradiation ion mass on the gain change. From the single steps in the inverse gain (which is proportional to the number of defects) we calculated Cumulative Distribution Functions to help determine design margins. The displacement process was modeled using the Marlowe Binary Collision Approximation (BCA) code. The entire structure of the device was modeled and the defects in the base-emitter junction were counted to be compared to the experimental results. While we found good agreement for the large device, we had to modify our model to reach reasonable agreement for the small device.

  10. An overview of new video coding tools under consideration for VP10: the successor to VP9

    NASA Astrophysics Data System (ADS)

    Mukherjee, Debargha; Su, Hui; Bankoski, James; Converse, Alex; Han, Jingning; Liu, Zoe; Xu, Yaowu

    2015-09-01

    Google started an open source project, entitled the WebM Project, in 2010 to develop royalty-free video codecs for the web. The present generation codec developed in the WebM project, called VP9, was finalized in mid-2013 and is currently being served extensively by YouTube, resulting in billions of views per day. Even though adoption of VP9 outside Google is still in its infancy, the WebM project has already embarked on an ambitious project to develop a next edition codec VP10 that achieves at least a generational bitrate reduction over the current generation codec VP9. Although the project is still in early stages, a set of new experimental coding tools have already been added to baseline VP9 to achieve modest coding gains over a large enough test set. This paper provides a technical overview of these coding tools.

  11. Multiple Independent File Parallel I/O with HDF5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, M. C.

    2016-07-13

    The HDF5 library has supported the I/O requirements of HPC codes at Lawrence Livermore National Labs (LLNL) since the late 90's. In particular, HDF5 used in the Multiple Independent File (MIF) parallel I/O paradigm has supported LLNL codes' scalable I/O requirements and has recently been gainfully used at scales as large as O(10^6) parallel tasks.

  12. Performance of Trellis Coded 256 QAM super-multicarrier modem VLSI's for SDH interface outage-free digital microwave radio

    NASA Astrophysics Data System (ADS)

    Aikawa, Satoru; Nakamura, Yasuhisa; Takanashi, Hitoshi

    1994-02-01

    This paper describes the performance of an outage-free SDH (Synchronous Digital Hierarchy) interface 256 QAM modem. An outage-free DMR (Digital Microwave Radio) is achieved by a high coding gain trellis coded SPORT QAM and Super Multicarrier modem. A new frame format and its associated circuits connect the outage-free modem to the SDH interface. The newly designed VLSI's are key devices for developing the modem. As an overall modem performance, BER (bit error rate) characteristics and equipment signatures are presented. A coding gain of 4.7 dB (at a BER of 10(exp -4)) is obtained using SPORT 256 QAM and Viterbi decoding. This coding gain is realized by trellis coding as well as by increasing the transmission rate. The roll-off factor is decreased to maintain the same frequency occupation and modulation level as an ordinary SDH 256 QAM modem.

  13. Coded Excitation Plane Wave Imaging for Shear Wave Motion Detection

    PubMed Central

    Song, Pengfei; Urban, Matthew W.; Manduca, Armando; Greenleaf, James F.; Chen, Shigao

    2015-01-01

    Plane wave imaging has greatly advanced the field of shear wave elastography thanks to its ultrafast imaging frame rate and the large field-of-view (FOV). However, plane wave imaging also has decreased penetration due to lack of transmit focusing, which makes it challenging to use plane waves for shear wave detection in deep tissues and in obese patients. This study investigated the feasibility of implementing coded excitation in plane wave imaging for shear wave detection, with the hypothesis that coded ultrasound signals can provide superior detection penetration and shear wave signal-to-noise-ratio (SNR) compared to conventional ultrasound signals. Both phase encoding (Barker code) and frequency encoding (chirp code) methods were studied. A first phantom experiment showed an approximate penetration gain of 2-4 cm for the coded pulses. Two subsequent phantom studies showed that all coded pulses outperformed the conventional short imaging pulse by providing superior sensitivity to small motion and robustness to weak ultrasound signals. Finally, an in vivo liver case study on an obese subject (Body Mass Index = 40) demonstrated the feasibility of using the proposed method for in vivo applications, and showed that all coded pulses could provide higher SNR shear wave signals than the conventional short pulse. These findings indicate that by using coded excitation shear wave detection, one can benefit from the ultrafast imaging frame rate and large FOV provided by plane wave imaging while preserving good penetration and shear wave signal quality, which is essential for obtaining robust shear elasticity measurements of tissue. PMID:26168181
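
    A minimal sketch of the pulse-compression step that underlies coded excitation, here with a Barker-13 phase code and a matched filter applied to a synthetic noisy trace; the trace length, noise level, and echo delay are arbitrary assumptions, not the study's imaging parameters.

        import numpy as np

        rng = np.random.default_rng(1)

        barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

        # Received trace: the coded pulse buried in noise at a known delay.
        n, delay = 400, 150
        trace = rng.normal(0.0, 1.0, n)
        trace[delay:delay + barker13.size] += barker13

        # Pulse compression = correlation with the transmitted code (matched filter).
        compressed = np.correlate(trace, barker13, mode="same")
        peak = int(np.argmax(np.abs(compressed)))

        print("expected peak index:", delay + barker13.size // 2, "found:", peak)
        print("theoretical SNR gain: 10*log10(13) =", round(10 * np.log10(13), 2), "dB")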

  14. Capacity Maximizing Constellations

    NASA Technical Reports Server (NTRS)

    Barsoum, Maged; Jones, Christopher

    2010-01-01

    Some non-traditional signal constellations have been proposed for transmission of data over the Additive White Gaussian Noise (AWGN) channel using such channel-capacity-approaching codes as low-density parity-check (LDPC) or turbo codes. Computational simulations have shown performance gains of more than 1 dB over traditional constellations. These gains could be translated to bandwidth-efficient communications, variously, over longer distances, using less power, or using smaller antennas. The proposed constellations have been used in a bit-interleaved coded modulation system employing state-of-the-art LDPC codes. In computational simulations, these constellations were shown to afford performance gains over traditional constellations as predicted by the gap between the parallel decoding capacity of the constellations and the Gaussian capacity.
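
    The gap mentioned at the end of the abstract can be estimated numerically. The sketch below Monte Carlo-estimates the (joint) constellation-constrained mutual information of an equiprobable constellation on the complex AWGN channel and compares it with the unconstrained Gaussian capacity; QPSK and the 5 dB SNR are placeholder choices, and the parallel-decoding (bit-level) capacity discussed in the paper would require a per-bit variant of the same computation.

        import numpy as np

        rng = np.random.default_rng(2)

        def constellation_mi(points, snr_db, n=200_000):
            """Monte Carlo estimate of I(X;Y) for equiprobable points on a complex AWGN channel."""
            points = points / np.sqrt(np.mean(np.abs(points) ** 2))  # unit average symbol energy
            sigma2 = 10.0 ** (-snr_db / 10.0)                        # total complex noise variance
            x = rng.choice(points, n)
            noise = np.sqrt(sigma2 / 2.0) * (rng.normal(size=n) + 1j * rng.normal(size=n))
            y = x + noise
            d2 = np.abs(y[:, None] - points[None, :]) ** 2
            num = np.exp(-np.abs(y - x) ** 2 / sigma2)
            den = np.exp(-d2 / sigma2).sum(axis=1)
            return np.log2(points.size) - np.mean(np.log2(den / num))

        qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
        snr_db = 5.0
        print("QPSK constrained capacity ~", round(constellation_mi(qpsk, snr_db), 3), "bits/symbol")
        print("Gaussian capacity         =", round(np.log2(1 + 10 ** (snr_db / 10)), 3), "bits/symbol")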

  15. Solving large scale structure in ten easy steps with COLA

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J.

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small-scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_solar/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_solar/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.

  16. A co-designed equalization, modulation, and coding scheme

    NASA Technical Reports Server (NTRS)

    Peile, Robert E.

    1992-01-01

    The commercial impact and technical success of Trellis Coded Modulation seems to illustrate that, if Shannon's capacity is going to be neared, the modulation and coding of an analogue signal ought to be viewed as an integrated process. More recent work has focused on going beyond the gains obtained for Additive White Gaussian Noise and has tried to combine the coding/modulation with adaptive equalization. The motive is to gain similar advances on less perfect or idealized channels.

  17. Diversity-optimal power loading for intensity modulated MIMO optical wireless communications.

    PubMed

    Zhang, Yan-Yu; Yu, Hong-Yi; Zhang, Jian-Kang; Zhu, Yi-Jun

    2016-04-18

    In this paper, we consider the design of a space code for an intensity modulated direct detection multi-input-multi-output optical wireless communication (IM/DD MIMO-OWC) system, in which channel coefficients are independent and non-identically log-normal distributed, with variances and means known at the transmitter and channel state information available at the receiver. Utilizing the existing space code design criterion for IM/DD MIMO-OWC with a maximum likelihood (ML) detector, we design a diversity-optimal space code (DOSC) that maximizes both large-scale diversity and small-scale diversity gains and prove that the spatial repetition code (RC) with a diversity-optimized power allocation is diversity-optimal among all the high dimensional nonnegative space code schemes under a commonly used optical power constraint. In addition, we show that one of the significant advantages of the DOSC is to allow low-complexity ML detection. Simulation results indicate that in high signal-to-noise ratio (SNR) regimes, our proposed DOSC significantly outperforms RC, which is the best space code currently available for such a system.

  18. Neural Population Coding of Multiple Stimuli

    PubMed Central

    Ma, Wei Ji

    2015-01-01

    In natural scenes, objects generally appear together with other objects. Yet, theoretical studies of neural population coding typically focus on the encoding of single objects in isolation. Experimental studies suggest that neural responses to multiple objects are well described by linear or nonlinear combinations of the responses to constituent objects, a phenomenon we call stimulus mixing. Here, we present a theoretical analysis of the consequences of common forms of stimulus mixing observed in cortical responses. We show that some of these mixing rules can severely compromise the brain's ability to decode the individual objects. This cost is usually greater than the cost incurred by even large reductions in the gain or large increases in neural variability, explaining why the benefits of attention can be understood primarily in terms of a stimulus selection, or demixing, mechanism rather than purely as a gain increase or noise reduction mechanism. The cost of stimulus mixing becomes even higher when the number of encoded objects increases, suggesting a novel mechanism that might contribute to set size effects observed in myriad psychophysical tasks. We further show that a specific form of neural correlation and heterogeneity in stimulus mixing among the neurons can partially alleviate the harmful effects of stimulus mixing. Finally, we derive simple conditions that must be satisfied for unharmful mixing of stimuli. PMID:25740513

  19. Constrained coding for the deep-space optical channel

    NASA Technical Reports Server (NTRS)

    Moision, B.; Hamkins, J.

    2002-01-01

    In this paper, we demonstrate a class of low-complexity modulation codes satisfying the (d,k) constraint that offer throughput gains over M-PPM on the order of 10-15%, which translate into SNR gains of 0.4-0.6 dB.

  20. Telemetry coding study for the international magnetosphere explorers, mother/daughter and heliocentric missions. Volume 2: Final report

    NASA Technical Reports Server (NTRS)

    Cartier, D. E.

    1973-01-01

    A convolutional coding theory is given for the IME and the Heliocentric spacecraft. The amount of coding gain needed by the mission is determined. Recommendations are given for an encoder/decoder system to provide the gain along with an evaluation of the impact of the system on the space network in terms of costs and complexity.

  1. Concatenated Coding Using Trellis-Coded Modulation

    NASA Technical Reports Server (NTRS)

    Thompson, Michael W.

    1997-01-01

    In the late seventies and early eighties a technique known as Trellis Coded Modulation (TCM) was developed for providing spectrally efficient error correction coding. Instead of adding redundant information in the form of parity bits, redundancy is added at the modulation stage, thereby increasing bandwidth efficiency. A digital communications system can be designed to use bandwidth-efficient multilevel/phase modulation such as Amplitude Shift Keying (ASK), Phase Shift Keying (PSK), Differential Phase Shift Keying (DPSK) or Quadrature Amplitude Modulation (QAM). Performance gain can be achieved by increasing the number of signals over the corresponding uncoded system to compensate for the redundancy introduced by the code. A considerable amount of research and development has been devoted toward developing good TCM codes for severely bandlimited applications. More recently, the use of TCM for satellite and deep space communications applications has received increased attention. This report describes the general approach of using a concatenated coding scheme that features TCM and RS coding. Results have indicated that substantial (6-10 dB) performance gains can be achieved with this approach with comparatively little bandwidth expansion. Since all of the bandwidth expansion is due to the RS code, we see that TCM-based concatenated coding results in roughly 10-50% bandwidth expansion, compared to 70-150% expansion for similar concatenated schemes that use convolutional codes. We stress that combined coding and modulation optimization is important for achieving performance gains while maintaining spectral efficiency.

  2. The effect of a redundant color code on an overlearned identification task

    NASA Technical Reports Server (NTRS)

    Obrien, Kevin

    1992-01-01

    The possibility of finding redundancy gains with overlearned tasks was examined using a paradigm varying familiarity with the stimulus set. Redundant coding in a multidimensional stimulus was demonstrated to result in increased identification accuracy and decreased latency of identification when compared to stimuli varying on only one dimension. The advantages attributable to redundant coding are referred to as redundancy gain and were found for a variety of stimulus dimension combinations, including the use of hue or color as one of the dimensions. Factors that have affected redundancy gain include the discriminability of the levels of one stimulus dimension and the level of stimulus-to-response association. The results demonstrated that response time is in part a function of familiarity, but no effect of redundant color coding was demonstrated. Implications of research on coding in identification tasks for display design are discussed.

  3. Parallelization of the TRIGRS model for rainfall-induced landslides using the message passing interface

    USGS Publications Warehouse

    Alvioli, M.; Baum, R.L.

    2016-01-01

    We describe a parallel implementation of TRIGRS, the Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Model for the timing and distribution of rainfall-induced shallow landslides. We have parallelized the four time-demanding execution modes of TRIGRS, namely both the saturated and unsaturated model with finite and infinite soil depth options, within the Message Passing Interface framework. In addition to new features of the code, we outline details of the parallel implementation and show the performance gain with respect to the serial code. Results are obtained both on commercial hardware and on a high-performance multi-node machine, showing the different limits of applicability of the new code. We also discuss the implications for the application of the model on large-scale areas and as a tool for real-time landslide hazard monitoring.
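
    A generic sketch of the scatter/compute/gather pattern commonly used for this kind of grid-based MPI parallelization; it uses mpi4py, and the cell count and the per-cell computation below are stand-ins, not the TRIGRS infiltration or slope-stability equations or its actual decomposition.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        N_CELLS = 1_000_000  # grid cells of the study area (illustrative)

        if rank == 0:
            # stand-in for per-cell inputs (slope, soil depth, rainfall, ...)
            inputs = np.random.default_rng(0).random(N_CELLS)
            chunks = np.array_split(inputs, size)
        else:
            chunks = None

        local = comm.scatter(chunks, root=0)

        # stand-in for the per-cell computation (e.g., a factor-of-safety evaluation)
        local_out = np.sqrt(local) + 1.0

        gathered = comm.gather(local_out, root=0)
        if rank == 0:
            result = np.concatenate(gathered)
            print("cells processed:", result.size, "on", size, "ranks")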

  4. Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding

    NASA Astrophysics Data System (ADS)

    Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.

    2016-03-01

    In this paper, we investigate joint design of quasi-cyclic low-density-parity-check (QC-LDPC) codes for coded cooperation system with joint iterative decoding in the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles including both type I and type II are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation well combines cooperation gain and channel coding gain, and outperforms the coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than those of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
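
    A small sketch of the structural condition being enforced: a Tanner graph contains a cycle of length four exactly when two rows of the parity-check matrix share ones in two or more columns. The toy matrices below are illustrative only, not the jointly designed source/relay codes.

        import numpy as np
        from itertools import combinations

        def has_length4_cycle(H):
            """True if two rows of H overlap in at least two columns (a length-4 cycle)."""
            H = np.asarray(H, dtype=int)
            for r1, r2 in combinations(range(H.shape[0]), 2):
                if np.count_nonzero(H[r1] & H[r2]) >= 2:
                    return True
            return False

        H_with_cycle = [[1, 1, 0, 0],
                        [1, 1, 0, 1],
                        [0, 0, 1, 1]]
        H_without    = [[1, 1, 0, 0],
                        [1, 0, 1, 0],
                        [0, 1, 0, 1]]
        print(has_length4_cycle(H_with_cycle), has_length4_cycle(H_without))  # True False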

  5. LDPC-coded orbital angular momentum (OAM) modulation for free-space optical communication.

    PubMed

    Djordjevic, Ivan B; Arabaci, Murat

    2010-11-22

    An orbital angular momentum (OAM) based LDPC-coded modulation scheme suitable for use in FSO communication is proposed. We demonstrate that the proposed scheme can operate under the strong atmospheric turbulence regime and enable 100 Gb/s optical transmission while employing 10 Gb/s components. Both binary and nonbinary LDPC-coded OAM modulations are studied. In addition to providing better BER performance, the nonbinary LDPC-coded modulation reduces overall decoder complexity and latency. The nonbinary LDPC-coded OAM modulation provides a net coding gain of 9.3 dB at the BER of 10(-8). The maximum-ratio combining scheme outperforms the corresponding equal-gain combining scheme by almost 2.5 dB.

  6. On the reduced-complexity of LDPC decoders for beyond 400 Gb/s serial optical transmission

    NASA Astrophysics Data System (ADS)

    Djordjevic, Ivan B.; Xu, Lei; Wang, Ting

    2010-12-01

    Two reduced-complexity (RC) LDPC decoders are proposed, which can be used in combination with large-girth LDPC codes to enable beyond 400 Gb/s serial optical transmission. We show that the optimally attenuated RC min-sum algorithm performs only 0.45 dB worse than the conventional sum-product algorithm, while having lower storage memory requirements and much lower latency. We further evaluate the proposed algorithms for use in beyond 400 Gb/s serial optical transmission in combination with PolMUX 32-IPQ-based signal constellation and show that low BERs can be achieved for medium optical SNRs, while achieving the net coding gain above 11.4 dB.
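
    A minimal sketch of the two check-node update rules being compared: the exact sum-product rule and the attenuated (normalized) min-sum approximation with a scaling factor. The input LLRs and the 0.8 attenuation factor are arbitrary assumptions, not the paper's optimized value.

        import numpy as np

        def check_node_spa(llrs):
            """Exact sum-product check-node update: output LLR for each edge from the others."""
            t = np.tanh(np.asarray(llrs, dtype=float) / 2.0)
            return np.array([2.0 * np.arctanh(np.prod(np.delete(t, i)))
                             for i in range(len(llrs))])

        def check_node_min_sum(llrs, alpha=0.8):
            """Attenuated min-sum approximation: sign product times scaled minimum magnitude."""
            llrs = np.asarray(llrs, dtype=float)
            out = []
            for i in range(len(llrs)):
                others = np.delete(llrs, i)
                out.append(alpha * np.prod(np.sign(others)) * np.min(np.abs(others)))
            return np.array(out)

        llrs = [1.2, -0.4, 2.5, -3.1]
        print("sum-product:", np.round(check_node_spa(llrs), 3))
        print("min-sum    :", np.round(check_node_min_sum(llrs), 3))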

  7. Capacity achieving nonbinary LDPC coded non-uniform shaping modulation for adaptive optical communications.

    PubMed

    Lin, Changyu; Zou, Ding; Liu, Tao; Djordjevic, Ivan B

    2016-08-08

    A mutual information inspired nonbinary coded modulation design with non-uniform shaping is proposed. Instead of traditional power of two signal constellation sizes, we design 5-QAM, 7-QAM and 9-QAM constellations, which can be used in adaptive optical networks. The non-uniform shaping and LDPC code rate are jointly considered in the design, which results in a scheme with better performance at the same SNR values. The matched nonbinary (NB) LDPC code is used for this scheme, which further improves the coding gain and the overall performance. We analyze both coding performance and system SNR performance. We show that the proposed NB LDPC-coded 9-QAM has more than 2 dB gain in symbol SNR compared to traditional LDPC-coded star-8-QAM. On the other hand, the proposed NB LDPC-coded 5-QAM and 7-QAM have even better performance than LDPC-coded QPSK.

  8. Solving large scale structure in ten easy steps with COLA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J., E-mail: stassev@cfa.harvard.edu, E-mail: matiasz@ias.edu, E-mail: deisenstein@cfa.harvard.edu

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small-scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_sun/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.

  9. RAMICS: trainable, high-speed and biologically relevant alignment of high-throughput sequencing reads to coding DNA

    PubMed Central

    Wright, Imogen A.; Travers, Simon A.

    2014-01-01

    The challenge presented by high-throughput sequencing necessitates the development of novel tools for accurate alignment of reads to reference sequences. Current approaches focus on using heuristics to map reads quickly to large genomes, rather than generating highly accurate alignments in coding regions. Such approaches are, thus, unsuited for applications such as amplicon-based analysis and the realignment phase of exome sequencing and RNA-seq, where accurate and biologically relevant alignment of coding regions is critical. To facilitate such analyses, we have developed a novel tool, RAMICS, that is tailored to mapping large numbers of sequence reads to short lengths (<10 000 bp) of coding DNA. RAMICS utilizes profile hidden Markov models to discover the open reading frame of each sequence and aligns to the reference sequence in a biologically relevant manner, distinguishing between genuine codon-sized indels and frameshift mutations. This approach facilitates the generation of highly accurate alignments, accounting for the error biases of the sequencing machine used to generate reads, particularly at homopolymer regions. Performance improvements are gained through the use of graphics processing units, which increase the speed of mapping through parallelization. RAMICS substantially outperforms all other mapping approaches tested in terms of alignment quality while maintaining highly competitive speed performance. PMID:24861618

  10. Finite-SNR analysis for partial relaying cooperation with channel coding and opportunistic relay selection

    NASA Astrophysics Data System (ADS)

    Vu, Thang X.; Duhamel, Pierre; Chatzinotas, Symeon; Ottersten, Bjorn

    2017-12-01

    This work studies the performance of a cooperative network which consists of two channel-coded sources, multiple relays, and one destination. To achieve high spectral efficiency, we assume that a single time slot is dedicated to relaying. Conventional network-coded-based cooperation (NCC) selects the best relay which uses network coding to serve the two sources simultaneously. The bit error rate (BER) performance of NCC with channel coding, however, is still unknown. In this paper, we firstly study the BER of NCC via a closed-form expression and analytically show that NCC only achieves diversity of order two regardless of the number of available relays and the channel code. Secondly, we propose a novel partial relaying-based cooperation (PARC) scheme to improve the system diversity in the finite signal-to-noise ratio (SNR) regime. In particular, closed-form expressions for the system BER and diversity order of PARC are derived as a function of the operating SNR value and the minimum distance of the channel code. We analytically show that the proposed PARC achieves full (instantaneous) diversity order in the finite SNR regime, given that an appropriate channel code is used. Finally, numerical results verify our analysis and demonstrate a large SNR gain of PARC over NCC in the SNR region of interest.

  11. Spatial Tuning Shifts Increase the Discriminability and Fidelity of Population Codes in Visual Cortex

    PubMed Central

    2017-01-01

    Selective visual attention enables organisms to enhance the representation of behaviorally relevant stimuli by altering the encoding properties of single receptive fields (RFs). Yet we know little about how the attentional modulations of single RFs contribute to the encoding of an entire visual scene. Addressing this issue requires (1) measuring a group of RFs that tile a continuous portion of visual space, (2) constructing a population-level measurement of spatial representations based on these RFs, and (3) linking how different types of RF attentional modulations change the population-level representation. To accomplish these aims, we used fMRI to characterize the responses of thousands of voxels in retinotopically organized human cortex. First, we found that the response modulations of voxel RFs (vRFs) depend on the spatial relationship between the RF center and the visual location of the attended target. Second, we used two analyses to assess the spatial encoding quality of a population of voxels. We found that attention increased fine spatial discriminability and representational fidelity near the attended target. Third, we linked these findings by manipulating the observed vRF attentional modulations and recomputing our measures of the fidelity of population codes. Surprisingly, we discovered that attentional enhancements of population-level representations largely depend on position shifts of vRFs, rather than changes in size or gain. Our data suggest that position shifts of single RFs are a principal mechanism by which attention enhances population-level representations in visual cortex. SIGNIFICANCE STATEMENT Although changes in the gain and size of RFs have dominated our view of how attention modulates visual information codes, such hypotheses have largely relied on the extrapolation of single-cell responses to population responses. Here we use fMRI to relate changes in single voxel receptive fields (vRFs) to changes in population-level representations. We find that vRF position shifts contribute more to population-level enhancements of visual information than changes in vRF size or gain. This finding suggests that position shifts are a principal mechanism by which spatial attention enhances population codes for relevant visual information. This poses challenges for labeled line theories of information processing, suggesting that downstream regions likely rely on distributed inputs rather than single neuron-to-neuron mappings. PMID:28242794

  12. Implementation of an object oriented track reconstruction model into multiple LHC experiments*

    NASA Astrophysics Data System (ADS)

    Gaines, Irwin; Gonzalez, Saul; Qian, Sijin

    2001-10-01

    An Object Oriented (OO) model (Gaines et al., 1996; 1997; Gaines and Qian, 1998; 1999) for track reconstruction by the Kalman filtering method has been designed for high energy physics experiments at high luminosity hadron colliders. The model has been coded in the C++ programming language and has been successfully implemented into the OO computing environments of both the CMS (1994) and ATLAS (1994) experiments at the future Large Hadron Collider (LHC) at CERN. We shall report: how the OO model was adapted, with largely the same code, to different scenarios and serves the different reconstruction aims in different experiments (i.e. the level-2 trigger software for ATLAS and the offline software for CMS); how the OO model has been incorporated into different OO environments with a similar integration structure (demonstrating the ease of re-use of OO programs); what the OO model's performance is, including execution time, memory usage, track-finding efficiency, ghost rate, etc.; and additional physics performance based on use of the OO tracking model. We shall also mention the experience and lessons learned from the implementation of the OO model into the general OO software framework of the experiments. In summary, our practice shows that the OO technology really makes the software development and the integration issues straightforward and convenient; this may be particularly beneficial for the general non-computer-professional physicists.
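
    Independently of the experiments' C++ framework, the recursion at the heart of Kalman-filter track reconstruction can be sketched with a one-dimensional constant-velocity model; the state model, noise levels, and measurements below are toy assumptions, not the CMS or ATLAS tracking geometry.

        import numpy as np

        # State [position, slope]; one position measurement per detector layer.
        F = np.array([[1.0, 1.0], [0.0, 1.0]])   # propagation between layers
        H = np.array([[1.0, 0.0]])               # we measure position only
        Q = 1e-4 * np.eye(2)                     # process noise (multiple-scattering stand-in)
        R = np.array([[0.05]])                   # measurement noise

        x, P = np.zeros(2), np.eye(2)
        true_state = np.array([0.0, 0.3])
        rng = np.random.default_rng(3)

        for layer in range(10):
            true_state = F @ true_state
            z = H @ true_state + rng.normal(0.0, np.sqrt(R[0, 0]))

            # predict to the next layer
            x = F @ x
            P = F @ P @ F.T + Q
            # update with the new hit
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(2) - K @ H) @ P

        print("estimated slope:", round(float(x[1]), 3), " true slope:", true_state[1])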

  13. A real-time chirp-coded imaging system with tissue attenuation compensation.

    PubMed

    Ramalli, A; Guidi, F; Boni, E; Tortoli, P

    2015-07-01

    In ultrasound imaging, pulse compression methods based on the transmission (TX) of long coded pulses and matched receive filtering can be used to improve the penetration depth while preserving the axial resolution (coded-imaging). The performance of most of these methods is affected by the frequency dependent attenuation of tissue, which causes mismatch of the receiver filter. This, together with the involved additional computational load, has probably so far limited the implementation of pulse compression methods in real-time imaging systems. In this paper, a real-time low-computational-cost coded-imaging system operating on the beamformed and demodulated data received by a linear array probe is presented. The system has been implemented by extending the firmware and the software of the ULA-OP research platform. In particular, pulse compression is performed by exploiting the computational resources of a single digital signal processor. Each image line is produced in less than 20 μs, so that, e.g., 192-line frames can be generated at up to 200 fps. Although the system may work with a large class of codes, this paper has been focused on the test of linear frequency modulated chirps. The new system has been used to experimentally investigate the effects of tissue attenuation so that the design of the receive compression filter can be accordingly guided. Tests made with different chirp signals confirm that, although the attainable compression gain in attenuating media is lower than the theoretical value expected for a given TX Time-Bandwidth product (BT), good SNR gains can be obtained. For example, by using a chirp signal having BT=19, a 13 dB compression gain has been measured. By adapting the frequency band of the receiver to the band of the received echo, the signal-to-noise ratio and the penetration depth have been further increased, as shown by real-time tests conducted on phantoms and in vivo. In particular, a 2.7 dB SNR increase has been measured through a novel attenuation compensation scheme, which only requires shifting the demodulation frequency by 1 MHz. The proposed method is characterized by its simplicity and ease of implementation. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    PubMed

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap based algorithm was used to simulate the probability distribution of the efficiency gain estimates and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
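
    A generic sketch of the bootstrap idea used here to attach a confidence interval to an efficiency-gain estimate; the synthetic heavy-tailed per-history scores, the simplified efficiency definition, and the percentile interval (rather than the shortest interval of the study) are all assumptions made for illustration.

        import numpy as np

        rng = np.random.default_rng(4)

        # Synthetic per-history scores for a conventional and a correlated-sampling run;
        # log-normal tails mimic a few large-weight photons dominating the variance.
        conv = rng.lognormal(0.0, 1.0, 20_000)
        corr = rng.lognormal(0.0, 0.4, 20_000)

        def efficiency(scores):
            # efficiency ~ 1 / (relative variance x computing time); unit time per history assumed
            rel_var = np.var(scores, ddof=1) / np.mean(scores) ** 2
            return 1.0 / (rel_var * scores.size)

        gain_hat = efficiency(corr) / efficiency(conv)

        boot = []
        for _ in range(2000):
            i = rng.integers(0, conv.size, conv.size)   # resample histories with replacement
            j = rng.integers(0, corr.size, corr.size)
            boot.append(efficiency(corr[j]) / efficiency(conv[i]))
        lo, hi = np.percentile(boot, [2.5, 97.5])

        print(f"efficiency gain {gain_hat:.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")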

  15. Multi-phase SPH modelling of violent hydrodynamics on GPUs

    NASA Astrophysics Data System (ADS)

    Mokos, Athanasios; Rogers, Benedict D.; Stansby, Peter K.; Domínguez, José M.

    2015-11-01

    This paper presents the acceleration of multi-phase smoothed particle hydrodynamics (SPH) using a graphics processing unit (GPU) enabling large numbers of particles (10-20 million) to be simulated on just a single GPU card. With novel hardware architectures such as a GPU, the optimum approach to implement a multi-phase scheme presents some new challenges. Many more particles must be included in the calculation and there are very different speeds of sound in each phase with the largest speed of sound determining the time step. This requires efficient computation. To take full advantage of the hardware acceleration provided by a single GPU for a multi-phase simulation, four different algorithms are investigated: conditional statements, binary operators, separate particle lists and an intermediate global function. Runtime results show that the optimum approach needs to employ separate cell and neighbour lists for each phase. The profiler shows that this approach leads to a reduction in both memory transactions and arithmetic operations giving significant runtime gains. The four different algorithms are compared to the efficiency of the optimised single-phase GPU code, DualSPHysics, for 2-D and 3-D simulations which indicate that the multi-phase functionality has a significant computational overhead. A comparison with an optimised CPU code shows a speed up of an order of magnitude over an OpenMP simulation with 8 threads and two orders of magnitude over a single thread simulation. A demonstration of the multi-phase SPH GPU code is provided by a 3-D dam break case impacting an obstacle. This shows better agreement with experimental results than an equivalent single-phase code. The multi-phase GPU code enables a convergence study to be undertaken on a single GPU with a large number of particles that otherwise would have required large high performance computing resources.

  16. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

    The latest high efficiency video coding (HEVC) standard significantly increases the encoding complexity for improving its coding efficiency. Due to the limited computational capability of handheld devices, complexity constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme that is chosen through offline statistics is developed at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, average gains of 0.63 and 0.17 dB in BD-PSNR are observed for 18 sequences when the target complexity is around 40%.

  17. FPGA-based rate-adaptive LDPC-coded modulation for the next generation of optical communication systems.

    PubMed

    Zou, Ding; Djordjevic, Ivan B

    2016-09-05

    In this paper, we propose a rate-adaptive FEC scheme based on LDPC codes together with its software reconfigurable unified FPGA architecture. By FPGA emulation, we demonstrate that the proposed class of rate-adaptive LDPC codes based on shortening with an overhead from 25% to 42.9% provides a coding gain ranging from 13.08 dB to 14.28 dB at a post-FEC BER of 10(-15) for BPSK transmission. In addition, the proposed rate-adaptive LDPC coding combined with higher-order modulations has been demonstrated, including QPSK, 8-QAM, 16-QAM, 32-QAM, and 64-QAM, which covers a wide range of signal-to-noise ratios. Furthermore, we apply unequal error protection by employing different LDPC codes on different bits in 16-QAM and 64-QAM, which results in an additional 0.5 dB gain compared to conventional LDPC coded modulation with the same code rate of the corresponding LDPC code.
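
    The rate adaptation by shortening can be illustrated with any systematic linear block code: known zeros are placed in part of the information block, the word is encoded, and the zero positions are never transmitted, lowering the rate from k/n to (k-s)/(n-s). The (7,4) Hamming code below is only a stand-in for the LDPC codes used in the paper.

        import numpy as np

        # Systematic generator matrix of a (7,4) Hamming code, G = [I | P].
        G = np.array([[1, 0, 0, 0, 1, 1, 0],
                      [0, 1, 0, 0, 1, 0, 1],
                      [0, 0, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])
        n, k = G.shape[1], G.shape[0]

        def encode_shortened(info_bits, s):
            """Shorten by s bits: prepend s known zeros, encode, drop them before transmission."""
            assert len(info_bits) == k - s
            u = np.concatenate([np.zeros(s, dtype=int), np.asarray(info_bits, dtype=int)])
            c = u @ G % 2
            return np.delete(c, np.arange(s))  # the known-zero positions are never sent

        for s in range(3):
            cw = encode_shortened(np.ones(k - s, dtype=int), s)
            print(f"shortened by {s}: transmit {cw.size} bits, rate {(k - s) / (n - s):.3f}")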

  18. Hyper-responsivity to losses in the anterior insula during economic choice scales with depression severity.

    PubMed

    Engelmann, J B; Berns, G S; Dunlop, B W

    2017-12-01

    Commonly observed distortions in decision-making among patients with major depressive disorder (MDD) may emerge from impaired reward processing and cognitive biases toward negative events. There is substantial theoretical support for the hypothesis that MDD patients overweight potential losses compared with gains, though the neurobiological underpinnings of this bias are uncertain. Twenty-one unmedicated patients with MDD were compared with 25 healthy controls (HC) using functional magnetic resonance imaging (fMRI) together with an economic decision-making task over mixed lotteries involving probabilistic gains and losses. Region-of-interest analyses evaluated neural signatures of gain and loss coding within a core network of brain areas known to be involved in valuation (anterior insula, caudate nucleus, ventromedial prefrontal cortex). Usable fMRI data were available for 19 MDD and 23 HC subjects. Anterior insula signal showed negative coding of losses (gain > loss) in HC subjects consistent with previous findings, whereas MDD subjects demonstrated significant reversals in these associations (loss > gain). Moreover, depression severity further enhanced the positive coding of losses in anterior insula, ventromedial prefrontal cortex, and caudate nucleus. The hyper-responsivity to losses displayed by the anterior insula of MDD patients was paralleled by a reduced influence of gain, but not loss, stake size on choice latencies. Patients with MDD demonstrate a significant shift from negative to positive coding of losses in the anterior insula, revealing the importance of this structure in value-based decision-making in the context of emotional disturbances.

  19. On decoding of multi-level MPSK modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Gupta, Alok Kumar

    1990-01-01

    The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder by reducing the branch metric and path metric, using a non-uniform floating-point to integer mapping scheme, is proposed and discussed. The simulation results of the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that the soft-decision MSD reduces the decoding complexity drastically and it is suboptimum. The hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard decision multistage decoding.

  20. A novel QC-LDPC code based on the finite field multiplicative group for optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Xu, Liang; Tong, Qing-zhen

    2013-09-01

    A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) code is proposed based on the finite field multiplicative group, which has easier construction, more flexible code-length and code-rate adjustment, and lower encoding/decoding complexity. Moreover, a regular QC-LDPC(5334,4962) code is constructed. The simulation results show that the constructed QC-LDPC(5334,4962) code can gain better error correction performance under the condition of the additive white Gaussian noise (AWGN) channel with the iterative decoding sum-product algorithm (SPA). At a bit error rate (BER) of 10(-6), the net coding gain (NCG) of the constructed QC-LDPC(5334,4962) code is 1.8 dB, 0.9 dB and 0.2 dB more than that of the classic RS(255,239) code in ITU-T G.975, the LDPC(32640,30592) code in ITU-T G.975.1 and the SCG-LDPC(3969,3720) code constructed by the random method, respectively. So it is more suitable for optical communication systems.
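
    A generic sketch of the quasi-cyclic structure involved: an exponent matrix is expanded into circulant permutation matrices to form the parity-check matrix. The exponents below are taken from powers of an element modulo a small prime purely for illustration; they are not the construction or the parameters of the QC-LDPC(5334,4962) code.

        import numpy as np

        def circulant(shift, z):
            """z-by-z identity matrix cyclically shifted by 'shift' columns."""
            return np.roll(np.eye(z, dtype=int), shift, axis=1)

        def expand(exponents, z):
            """Replace each exponent with a circulant permutation matrix to build H."""
            return np.vstack([np.hstack([circulant(int(e), z) for e in row])
                              for row in exponents])

        # Illustrative exponent matrix from a multiplicative group: powers of a = 3 modulo p = 7.
        a, p = 3, 7
        E = np.array([[pow(a, i + j, p) for j in range(4)] for i in range(2)])
        H = expand(E % (p - 1), p - 1)

        print("exponent matrix:")
        print(E)
        print("parity-check matrix shape:", H.shape)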

  1. Flexible high speed codec

    NASA Technical Reports Server (NTRS)

    Boyd, R. W.; Hartman, W. F.

    1992-01-01

    The project's objective is to develop an advanced high speed coding technology that provides substantial coding gains with limited bandwidth expansion for several common modulation types. The resulting technique is applicable to several continuous and burst communication environments. Decoding provides a significant gain with hard decisions alone and can utilize soft decision information when available from the demodulator to increase the coding gain. The hard decision codec will be implemented using a single application specific integrated circuit (ASIC) chip. It will be capable of coding and decoding as well as some formatting and synchronization functions at data rates up to 300 megabits per second (Mb/s). Code rate is a function of the block length and can vary from 7/8 to 15/16. Length of coded bursts can be any multiple of 32 that is greater than or equal to 256 bits. Coding may be switched in or out on a burst by burst basis with no change in the throughput delay. Reliability information in the form of 3-bit (8-level) soft decisions, can be exploited using applique circuitry around the hard decision codec. This applique circuitry will be discrete logic in the present contract. However, ease of transition to LSI is one of the design guidelines. Discussed here is the selected coding technique. Its application to some communication systems is described. Performance with 4, 8, and 16-ary Phase Shift Keying (PSK) modulation is also presented.

  2. Experience in highly parallel processing using DAP

    NASA Technical Reports Server (NTRS)

    Parkinson, D.

    1987-01-01

    Distributed Array Processors (DAP) have been in day to day use for ten years and a large amount of user experience has been gained. The profile of user applications is similar to that of the Massively Parallel Processor (MPP) working group. Experience has shown that contrary to expectations, highly parallel systems provide excellent performance on so-called dirty problems such as the physics part of meteorological codes. The reasons for this observation are discussed. The arguments against replacing bit processors with floating point processors are also discussed.

  3. RAMICS: trainable, high-speed and biologically relevant alignment of high-throughput sequencing reads to coding DNA.

    PubMed

    Wright, Imogen A; Travers, Simon A

    2014-07-01

    The challenge presented by high-throughput sequencing necessitates the development of novel tools for accurate alignment of reads to reference sequences. Current approaches focus on using heuristics to map reads quickly to large genomes, rather than generating highly accurate alignments in coding regions. Such approaches are, thus, unsuited for applications such as amplicon-based analysis and the realignment phase of exome sequencing and RNA-seq, where accurate and biologically relevant alignment of coding regions is critical. To facilitate such analyses, we have developed a novel tool, RAMICS, that is tailored to mapping large numbers of sequence reads to short lengths (<10 000 bp) of coding DNA. RAMICS utilizes profile hidden Markov models to discover the open reading frame of each sequence and aligns to the reference sequence in a biologically relevant manner, distinguishing between genuine codon-sized indels and frameshift mutations. This approach facilitates the generation of highly accurate alignments, accounting for the error biases of the sequencing machine used to generate reads, particularly at homopolymer regions. Performance improvements are gained through the use of graphics processing units, which increase the speed of mapping through parallelization. RAMICS substantially outperforms all other mapping approaches tested in terms of alignment quality while maintaining highly competitive speed performance. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  4. OpenGeoSys: Performance-Oriented Computational Methods for Numerical Modeling of Flow in Large Hydrogeological Systems

    NASA Astrophysics Data System (ADS)

    Naumov, D.; Fischer, T.; Böttcher, N.; Watanabe, N.; Walther, M.; Rink, K.; Bilke, L.; Shao, H.; Kolditz, O.

    2014-12-01

    OpenGeoSys (OGS) is a scientific open source code for numerical simulation of thermo-hydro-mechanical-chemical processes in porous and fractured media. Its basic concept is to provide a flexible numerical framework for solving multi-field problems for applications in geoscience and hydrology, e.g. for CO2 storage applications, geothermal power plant forecast simulation, salt water intrusion, water resources management, etc. Advances in computational mathematics have revolutionized the variety and nature of the problems that can be addressed by environmental scientists and engineers nowadays, and intensive code development in recent years now enables the solution of much larger numerical problems and applications. However, solving environmental processes along the water cycle at large scales, e.g. for complete catchments or reservoirs, remains a computationally challenging task. Therefore, we started a new OGS code development with focus on execution speed and parallelization. In the new version, a local data structure concept improves the instruction and data cache performance by a tight bundling of data with an element-wise numerical integration loop. Dedicated analysis methods enable the investigation of memory-access patterns in the local and global assembler routines, which leads to further data structure optimization for an additional performance gain. The concept is presented together with a technical code analysis of the recent development and a large case study including transient flow simulation in the unsaturated / saturated zone of the Thuringian Syncline, Germany. The analysis is performed on a high-resolution mesh (up to 50M elements) with embedded fault structures.

  5. Rate adaptive multilevel coded modulation with high coding gain in intensity modulation direct detection optical communication

    NASA Astrophysics Data System (ADS)

    Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao

    2018-02-01

    A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on fixed code length and a corresponding decoding scheme is proposed. The RA-MLC scheme combines multilevel coding and modulation technology with a binary linear block code at the transmitter. Bits division, coding, optional interleaving, and modulation are carried out by the preset rule, then transmitted through a standard single mode fiber span equal to 100 km. The receiver improves the accuracy of decoding by means of soft information passing through different layers, which enhances the performance. Simulations are carried out in an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduced the number of decoders by 72% and realized 22-rate adaptation without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER=1E-3.

  6. A Golay complementary TS-based symbol synchronization scheme in variable rate LDPC-coded MB-OFDM UWBoF system

    NASA Astrophysics Data System (ADS)

    He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin

    2015-09-01

    In this paper, a Golay complementary training sequence (TS)-based symbol synchronization scheme is proposed and experimentally demonstrated in a multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system with a variable rate low-density parity-check (LDPC) code. Meanwhile, the coding gain and spectral efficiency in the variable rate LDPC-coded MB-OFDM UWBoF system are investigated. By utilizing the non-periodic auto-correlation property of the Golay complementary pair, the start point of the LDPC-coded MB-OFDM UWB signal can be estimated accurately. After 100 km standard single-mode fiber (SSMF) transmission, at a bit error rate of 1×10(-3), the experimental results show that the short block length 64QAM-LDPC coding provides a coding gain of 4.5 dB, 3.8 dB and 2.9 dB for a code rate of 62.5%, 75% and 87.5%, respectively.
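
    The synchronization property being exploited can be checked in a few lines: for a Golay complementary pair the aperiodic autocorrelations sum to a single spike with zero sidelobes, so correlating the received samples against both sequences yields an unambiguous timing metric. The length-64 pair below comes from the standard recursive construction and only illustrates the property; it is not the training sequence used in the experiment.

        import numpy as np

        def golay_pair(m):
            """Length-2**m Golay complementary pair via the standard recursive construction."""
            a, b = np.array([1.0]), np.array([1.0])
            for _ in range(m):
                a, b = np.concatenate([a, b]), np.concatenate([a, -b])
            return a, b

        def acf(x):
            # aperiodic autocorrelation at all lags
            return np.correlate(x, x, mode="full")

        a, b = golay_pair(6)            # length-64 complementary pair
        total = acf(a) + acf(b)
        zero_lag = a.size - 1

        print("peak at zero lag:", total[zero_lag])                             # 2 * 64 = 128
        print("largest sidelobe:", np.max(np.abs(np.delete(total, zero_lag))))  # exactly 0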

  7. Summary of Pressure Gain Combustion Research at NASA

    NASA Technical Reports Server (NTRS)

    Perkins, H. Douglas; Paxson, Daniel E.

    2018-01-01

    NASA has undertaken a systematic exploration of many different facets of pressure gain combustion over the last 25 years in an effort to exploit the inherent thermodynamic advantage of pressure gain combustion over the constant pressure combustion process used in most aerospace propulsion systems. Applications as varied as small-scale UAV's, rotorcraft, subsonic transports, hypersonics and launch vehicles have been considered. In addition to studying pressure gain combustor concepts such as wave rotors, pulse detonation engines, pulsejets, and rotating detonation engines, NASA has studied inlets, nozzles, ejectors and turbines which must also process unsteady flow in an integrated propulsion system. Other design considerations such as acoustic signature, combustor material life and heat transfer that are unique to pressure gain combustors have also been addressed in NASA research projects. In addition to a wide range of experimental studies, a number of computer codes, from 0-D up through 3-D, have been developed or modified to specifically address the analysis of unsteady flow fields. Loss models have also been developed and incorporated into these codes that improve the accuracy of performance predictions and decrease computational time. These codes have been validated numerous times across a broad range of operating conditions, and it has been found that once validated for one particular pressure gain combustion configuration, these codes are readily adaptable to the others. All in all, the documentation of this work has encompassed approximately 170 NASA technical reports, conference papers and journal articles to date. These publications are very briefly summarized herein, providing a single point of reference for all of NASA's pressure gain combustion research efforts. This documentation does not include the significant contributions made by NASA research staff to the programs of other agencies, universities, industrial partners and professional society committees through serving as technical advisors, technical reviewers and research consultants.

  8. On transform coding tools under development for VP10

    NASA Astrophysics Data System (ADS)

    Parker, Sarah; Chen, Yue; Han, Jingning; Liu, Zoe; Mukherjee, Debargha; Su, Hui; Wang, Yongzhe; Bankoski, Jim; Li, Shunyao

    2016-09-01

    Google started the WebM Project in 2010 to develop open source, royalty-free video codecs designed specifically for media on the Web. The second-generation codec released by the WebM project, VP9, is currently served by YouTube and enjoys billions of views per day. Realizing the need for even greater compression efficiency to cope with the growing demand for video on the web, the WebM team embarked on an ambitious project to develop a next-edition codec, VP10, that achieves at least a generational improvement in coding efficiency over VP9. Starting from VP9, a set of new experimental coding tools have already been added to VP10 to achieve decent coding gains. Subsequently, Google joined a consortium of major tech companies called the Alliance for Open Media to jointly develop a new codec, AV1. As a result, the VP10 effort is largely expected to merge with AV1. In this paper, we focus primarily on new tools in VP10 that improve coding of the prediction residue using transform coding techniques. Specifically, we describe tools that increase the flexibility of available transforms, allowing the codec to handle a more diverse range of residue structures. Results are presented on a standard test set.
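    As a rough illustration of what more flexible transform selection buys, the sketch below tries every combination of row and column transforms on a residue block and keeps the combination whose coefficients are cheapest to code, with the L1 norm standing in for a rate estimate. The DCT-II/DST-II toy set and the cost proxy are assumptions for illustration; they are not VP10's actual transform kernels or rate-distortion cost.

    ```python
    # Toy per-block transform selection sketch (assumed transform set and cost proxy).
    import numpy as np
    from itertools import product
    from scipy.fft import dct, dst

    TRANSFORMS = {"dct": lambda x, axis: dct(x, type=2, axis=axis, norm="ortho"),
                  "dst": lambda x, axis: dst(x, type=2, axis=axis, norm="ortho")}

    def best_transform(block):
        """Return (cost, row_transform, col_transform) minimizing the L1 coefficient cost."""
        best = None
        for rname, cname in product(TRANSFORMS, repeat=2):
            coeffs = TRANSFORMS[cname](TRANSFORMS[rname](block, axis=1), axis=0)
            cost = np.abs(coeffs).sum()            # stand-in for a true rate estimate
            if best is None or cost < best[0]:
                best = (cost, rname, cname)
        return best

    residue = np.outer(np.linspace(-1, 1, 8), np.ones(8))   # smooth vertical ramp residue
    print(best_transform(residue)[1:])
    ```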

  9. Functional interrogation of non-coding DNA through CRISPR genome editing

    PubMed Central

    Canver, Matthew C.; Bauer, Daniel E.; Orkin, Stuart H.

    2017-01-01

    Methodologies to interrogate non-coding regions have lagged behind coding regions despite comprising the vast majority of the genome. However, the rapid evolution of clustered regularly interspaced short palindromic repeats (CRISPR)-based genome editing has provided a multitude of novel techniques for laboratory investigation including significant contributions to the toolbox for studying non-coding DNA. CRISPR-mediated loss-of-function strategies rely on direct disruption of the underlying sequence or repression of transcription without modifying the targeted DNA sequence. CRISPR-mediated gain-of-function approaches similarly benefit from methods to alter the targeted sequence through integration of customized sequence into the genome as well as methods to activate transcription. Here we review CRISPR-based loss- and gain-of-function techniques for the interrogation of non-coding DNA. PMID:28288828

  10. Cooperative optimization and their application in LDPC codes

    NASA Astrophysics Data System (ADS)

    Chen, Ke; Rong, Jian; Zhong, Xiaochun

    2008-10-01

    Cooperative optimization is a new method for finding global optima of complicated functions of many variables. The proposed algorithm belongs to the class of message-passing algorithms and has solid theoretical foundations. It can achieve good coding gains over the sum-product algorithm for LDPC codes. For the (6561, 4096) LDPC code, the proposed algorithm achieves a 2.0 dB gain over the sum-product algorithm at a BER of 4×10-7. The decoding complexity of the proposed algorithm is lower than that of the sum-product algorithm; furthermore, it achieves a much lower error floor than the sum-product algorithm for Eb/No above 1.8 dB.

  11. Segregated and integrated coding of reward and punishment in the cingulate cortex.

    PubMed

    Fujiwara, Juri; Tobler, Philippe N; Taira, Masato; Iijima, Toshio; Tsutsui, Ken-Ichiro

    2009-06-01

    Reward and punishment have opposite affective value but are both processed by the cingulate cortex. However, it is unclear whether the positive and negative affective values of monetary reward and punishment are processed by separate or common subregions of the cingulate cortex. We performed a functional magnetic resonance imaging study using a free-choice task and compared cingulate activations for different levels of monetary gain and loss. Gain-specific activation (increasing activation for increasing gain, but no activation change in relation to loss) occurred mainly in the anterior part of the anterior cingulate and in the posterior cingulate cortex. Conversely, loss-specific activation (increasing activation for increasing loss, but no activation change in relation to gain) occurred between these areas, in the middle and posterior part of the anterior cingulate. Integrated coding of gain and loss (increasing activation throughout the full range, from biggest loss to biggest gain) occurred in the dorsal part of the anterior cingulate, at the border with the medial prefrontal cortex. Finally, unspecific activation increases to both gains and losses (increasing activation to increasing gains and increasing losses, possibly reflecting attention) occurred in dorsal and middle regions of the cingulate cortex. Together, these results suggest separate and common coding of monetary reward and punishment in distinct subregions of the cingulate cortex. Further meta-analysis suggested that the presently found reward- and punishment-specific areas overlapped with those processing positive and negative emotions, respectively.

  12. Functional interrogation of non-coding DNA through CRISPR genome editing.

    PubMed

    Canver, Matthew C; Bauer, Daniel E; Orkin, Stuart H

    2017-05-15

    Methodologies to interrogate non-coding regions have lagged behind coding regions despite comprising the vast majority of the genome. However, the rapid evolution of clustered regularly interspaced short palindromic repeats (CRISPR)-based genome editing has provided a multitude of novel techniques for laboratory investigation including significant contributions to the toolbox for studying non-coding DNA. CRISPR-mediated loss-of-function strategies rely on direct disruption of the underlying sequence or repression of transcription without modifying the targeted DNA sequence. CRISPR-mediated gain-of-function approaches similarly benefit from methods to alter the targeted sequence through integration of customized sequence into the genome as well as methods to activate transcription. Here we review CRISPR-based loss- and gain-of-function techniques for the interrogation of non-coding DNA. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Calculation of the small scale self-focusing ripple gain spectrum for the CYCLOPS laser system: a status report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fleck, J.A. Jr.; Morris, J.R.; Thompson, P.F.

    1976-10-01

    The FLAC code (Fourier Laser Amplifier Code) was used to simulate the CYCLOPS laser system up to the third B-module and to calculate the maximum ripple gain spectrum. The model of this portion of CYCLOPS consists of 33 segments that correspond to 20 optical elements (simulation of the cell requires 2 segments and 12 external air spaces). (MHR)

  14. A novel design of optical CDMA system based on TCM and FFH

    NASA Astrophysics Data System (ADS)

    Fang, Jun-Bin; Xu, Zhi-Hai; Huang, Hong-bin; Zheng, Liming; Chen, Shun-er; Liu, Wei-ping

    2005-02-01

    For application in a Passive Optical Network (PON), a novel OCDMA system design is proposed in this paper. The scheme includes two key components: a new OCDMA encoder/decoder system based on TCM and FFH, and an improved Optical Line Terminal (OLT) receiving system whose anti-interference performance is enhanced by the use of a Long Period Fiber Grating (LPFG). In the encoder/decoder system, a Trellis Coded Modulation (TCM) encoder is placed in front of the FFH modulator. The original signal is first encoded by the TCM encoder, and the redundant code output by the TCM encoder is then mapped onto one of the FFH modulation signal subsets for transmission. On the receiver (decoder) side, the transmitted signal is demodulated through FFH and decoded by a trellis decoder. Because TCM provides high coding gain without increasing the transmission bandwidth or reducing the transmission rate, it is used to improve bit error performance and reduce multi-user interference. In the OLT receiving system, an EDFA and the LPFG are placed in front of the decoder to obtain excellent gain flatness over a large bandwidth, and an Optical Hard Limiter (OHL) is also deployed to improve detection performance, which greatly enhances the anti-interference performance of the receiving system. Software simulations of the system performance are used for further analysis and verification. The work in this paper provides a valuable reference for further research.

  15. Convolutional coding combined with continuous phase modulation

    NASA Technical Reports Server (NTRS)

    Pizzi, S. V.; Wilson, S. G.

    1985-01-01

    Background theory and specific coding designs for combined coding/modulation schemes utilizing convolutional codes and continuous-phase modulation (CPM) are presented. In this paper the case of r = 1/2 coding onto a 4-ary CPM is emphasized, with short-constraint length codes presented for continuous-phase FSK, double-raised-cosine, and triple-raised-cosine modulation. Coding buys several decibels of coding gain over the Gaussian channel, with an attendant increase of bandwidth. Performance comparisons in the power-bandwidth tradeoff with other approaches are made.

  16. Multi-level trellis coded modulation and multi-stage decoding

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Wu, Jiantian; Lin, Shu

    1990-01-01

    Several constructions for multi-level trellis codes are presented and many codes with better performance than previously known codes are found. These codes provide a flexible trade-off between coding gain, decoding complexity, and decoding delay. New multi-level trellis coded modulation schemes using generalized set partitioning methods are developed for Quadrature Amplitude Modulation (QAM) and Phase Shift Keying (PSK) signal sets. New rotationally invariant multi-level trellis codes which can be combined with differential encoding to resolve phase ambiguity are presented.

  17. Video analysis for insight and coding: Examples from tutorials in introductory physics

    NASA Astrophysics Data System (ADS)

    Scherr, Rachel E.

    2009-12-01

    The increasing ease of video recording offers new opportunities to create richly detailed records of classroom activities. These recordings, in turn, call for research methodologies that balance generalizability with interpretive validity. This paper shares methodology for two practices of video analysis: (1) gaining insight into specific brief classroom episodes and (2) developing and applying a systematic observational protocol for a relatively large corpus of video data. These two aspects of analytic practice are illustrated in the context of a particular research interest but are intended to serve as general suggestions.

  18. Beam-dynamics codes used at DARHT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekdahl, Jr., Carl August

    Several beam simulation codes are used to help gain a better understanding of beam dynamics in the DARHT LIAs. The most notable of these fall into the following categories: for beam production – the Tricomp Trak orbit tracking code and the LSP particle-in-cell (PIC) code; for beam transport and acceleration – the XTR static envelope and centroid code, the LAMDA time-resolved envelope and centroid code, and the LSP-Slice PIC code; for coasting-beam transport to target – the LAMDA time-resolved envelope code and the LSP-Slice PIC code. These codes are also being used to inform the design of Scorpius.

  19. Improved Gain Microstrip Patch Antenna

    DTIC Science & Technology

    2015-08-06

    Improved Gain Microstrip Patch Antenna. David A. Tonn, Naval Undersea Warfare Center Division, Newport, 1176 Howell St., Code 00L. Statement of Government Interest and description fragments: the invention relates to a patch antenna having increased gain, and to an apparatus for increasing the gain and bandwidth of an existing microstrip patch antenna.

  20. Partial Overhaul and Initial Parallel Optimization of KINETICS, a Coupled Dynamics and Chemistry Atmosphere Model

    NASA Technical Reports Server (NTRS)

    Nguyen, Howard; Willacy, Karen; Allen, Mark

    2012-01-01

    KINETICS is a coupled dynamics and chemistry atmosphere model that is data intensive and computationally demanding. The potential performance gain from using a supercomputer motivates the adaptation from a serial version to a parallelized one. Although the initial parallelization had been done, bottlenecks caused by an abundance of communication calls between processors led to an unfavorable drop in performance. Before starting on the parallel optimization process, a partial overhaul was required because a large emphasis was placed on streamlining the code for user convenience and revising the program to accommodate the new supercomputers at Caltech and JPL. After the first round of optimizations, the partial runtime was reduced by a factor of 23; however, performance gains are dependent on the size of the data, the number of processors requested, and the computer used.

  1. Coding Gains for Rank Decoding

    DTIC Science & Technology

    1990-02-01

    Report documentation page (fragments): U.S. Army Laboratory Command, Ballistic Research Laboratory, Aberdeen Proving Ground, Maryland 21005-5066; distribution unlimited. Contents include: 1. Soft Decision Concepts; 2. Coding Gain.

  2. Frame Synchronization Without Attached Sync Markers

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2011-01-01

    We describe a method to synchronize codeword frames without making use of attached synchronization markers (ASMs). Instead, the synchronizer identifies the code structure present in the received symbols by operating the decoder for a handful of iterations at each possible symbol offset and forming an appropriate metric. This method is computationally more complex and does not perform as well as frame synchronizers that utilize an ASM; nevertheless, the new synchronizer acquires frame synchronization in about two seconds when using a 600 kbps software decoder, and would take about 15 milliseconds on prototype hardware. It also eliminates the need for the ASMs, which is an attractive feature for short uplink codes whose coding gain would be diminished by the overhead of ASM bits. The lack of ASMs would also simplify clock distribution for the AR4JA low-density parity-check (LDPC) codes and adds a small amount to the coding gain as well (up to 0.2 dB).
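    A minimal sketch of the search is given below. The function names are illustrative, and the metric here is simply the fraction of parity checks satisfied by hard decisions at each hypothesized offset, rather than the few soft decoder iterations used in the actual synchronizer; the structure of the search over offsets is the same.

    ```python
    # Sketch of ASM-less frame synchronization by scoring the code structure at each offset.
    import numpy as np

    def parity_metric(soft_symbols, H):
        """Fraction of parity checks satisfied by hard decisions on one codeword."""
        hard = (soft_symbols < 0).astype(int)      # BPSK mapping: +1 -> 0, -1 -> 1
        return np.mean((H @ hard) % 2 == 0)

    def synchronize(rx, H, n):
        """Score every candidate symbol offset and return the best-scoring one."""
        scores = [parity_metric(rx[k:k + n], H) for k in range(rx.size - n + 1)]
        return int(np.argmax(scores))
    ```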

  3. A joint source-channel distortion model for JPEG compressed images.

    PubMed

    Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C

    2006-06-01

    The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-code modulation, and run-length coding is included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.

  4. Perspectives of the optical coherence tomography community on code and data sharing

    NASA Astrophysics Data System (ADS)

    Lurie, Kristen L.; Mistree, Behram F. T.; Ellerbee, Audrey K.

    2015-03-01

    As optical coherence tomography (OCT) grows to be a mature and successful field, it is important for the research community to develop a stronger practice of sharing code and data. A prolific culture of sharing can enable new and emerging laboratories to enter the field, allow research groups to gain new exposure and notoriety, and enable benchmarking of new algorithms and methods. Our long-term vision is to build tools to facilitate a stronger practice of sharing within this community. In line with this goal, our first aim was to understand the perceptions and practices of the community with respect to sharing research contributions (i.e., as code and data). We surveyed 52 members of the OCT community using an online polling system. Our main findings indicate that while researchers infrequently share their code and data, they are willing to contribute their research resources to a shared repository, and they believe that such a repository would benefit both their research and the OCT community at large. We plan to use the results of this survey to design a platform targeted to the OCT research community - an effort that ultimately aims to facilitate a more prolific culture of sharing.

  5. Multidimensional Trellis Coded Phase Modulation Using a Multilevel Concatenation Approach. Part 2; Codes for AWGN and Fading Channels

    NASA Technical Reports Server (NTRS)

    Rajpal, Sandeep; Rhee, DoJun; Lin, Shu

    1997-01-01

    In this paper, we use the previously proposed construction technique to construct multidimensional trellis coded modulation (TCM) codes for both the additive white Gaussian noise (AWGN) and fading channels. Analytical performance bounds and simulation results show that these codes perform very well and achieve significant coding gains over uncoded reference modulation systems. In addition, the proposed technique can be used to construct codes which have a performance/decoding-complexity advantage over codes listed in the literature.

  6. Quantifying the mechanisms of domain gain in animal proteins.

    PubMed

    Buljan, Marija; Frankish, Adam; Bateman, Alex

    2010-01-01

    Protein domains are protein regions that are shared among different proteins and are frequently functionally and structurally independent from the rest of the protein. Novel domain combinations have a major role in evolutionary innovation. However, the relative contributions of the different molecular mechanisms that underlie domain gains in animals are still unknown. By using animal gene phylogenies we were able to identify a set of high confidence domain gain events and by looking at their coding DNA investigate the causative mechanisms. Here we show that the major mechanism for gains of new domains in metazoan proteins is likely to be gene fusion through joining of exons from adjacent genes, possibly mediated by non-allelic homologous recombination. Retroposition and insertion of exons into ancestral introns through intronic recombination are, in contrast to previous expectations, only minor contributors to domain gains and have accounted for less than 1% and 10% of high confidence domain gain events, respectively. Additionally, exonization of previously non-coding regions appears to be an important mechanism for addition of disordered segments to proteins. We observe that gene duplication has preceded domain gain in at least 80% of the gain events. The interplay of gene duplication and domain gain demonstrates an important mechanism for fast neofunctionalization of genes.

  7. Chopper-stabilized phase detector

    NASA Technical Reports Server (NTRS)

    Hopkins, P. M.

    1978-01-01

    A phase-detector circuit for binary-tracking loops and other binary-data acquisition systems minimizes the effects of drift, gain imbalance, and voltage offset in the detector circuitry. The input signal passes simultaneously through two channels, where it is mixed with early and late codes that are alternately switched between channels. Code switching is synchronized with polarity switching of each channel's detector output so that each channel uses each detector half the time. The net result is that DC offset errors are canceled, and the effect of gain imbalance is simply a change in sensitivity.

  8. 26 CFR 1.341-7 - Certain sales of stock of consenting corporations.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... section 341(f)(1) and (5)— (i) The term sale means a sale or exchange of stock at a gain, but only if such gain would be recognized as long-term capital gain were section 341 not a part of the Code. Thus, a... no gain on the transaction, or if the sale or exchange gives rise to ordinary income under a...

  9. 26 CFR 1.341-7 - Certain sales of stock of consenting corporations.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... section 341(f)(1) and (5)— (i) The term sale means a sale or exchange of stock at a gain, but only if such gain would be recognized as long-term capital gain were section 341 not a part of the Code. Thus, a... no gain on the transaction, or if the sale or exchange gives rise to ordinary income under a...

  10. The design plan of a VLSI single chip (255, 223) Reed-Solomon decoder

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.; Shao, H. M.; Deutsch, L. J.

    1987-01-01

    The very large-scale integration (VLSI) architecture of a single chip (255, 223) Reed-Solomon decoder for decoding both errors and erasures is described. A decoding failure detection capability is also included in this system so that the decoder will recognize a failure to decode instead of introducing additional errors. This could happen whenever the received word contains too many errors and erasures for the code to correct. The number of transistors needed to implement this decoder is estimated at about 75,000 if the delay for the received message is not included. This is in contrast to the older transform decoding algorithm, which needs about 100,000 transistors. However, the transform decoder is simpler in architecture than the time decoder. It is therefore possible to implement a single chip (255, 223) Reed-Solomon decoder with today's VLSI technology. An implementation strategy for the decoder system is presented. This represents the first step in a plan to take advantage of advanced coding techniques to realize a 2.0 dB coding gain for future space missions.

  11. Computation of Sensitivity Derivatives of Navier-Stokes Equations using Complex Variables

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.

    2004-01-01

    Accurate computation of sensitivity derivatives is becoming an important item in Computational Fluid Dynamics (CFD) because of recent emphasis on using nonlinear CFD methods in aerodynamic design, optimization, stability, and control-related problems. Several techniques are available to compute gradients or sensitivity derivatives of desired flow quantities or cost functions with respect to selected independent (design) variables. Perhaps the oldest and most common method is to use straightforward finite differences for the evaluation of sensitivity derivatives. Although very simple, this method is prone to errors associated with the choice of step sizes and can be cumbersome for geometric variables. The cost per design variable for computing sensitivity derivatives with central differencing is at least equal to the cost of three full analyses, but is usually much larger in practice due to the difficulty in choosing step sizes. Another approach gaining popularity is the use of Automatic Differentiation software (such as ADIFOR) to process the source code, which in turn can be used to evaluate the sensitivity derivatives of preselected functions with respect to chosen design variables. In principle, this approach is also very straightforward and quite promising. The main drawback is the large memory requirement, because memory use increases linearly with the number of design variables. ADIFOR software can also be cumbersome for large CFD codes and has not yet reached a full maturity level for production codes, especially in parallel computing environments.
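    For readers unfamiliar with the complex-variable technique referred to in the title, the toy example below (a generic scalar function, not the Navier-Stokes solver) shows why it sidesteps the step-size dilemma of finite differences: the derivative appears in the imaginary part of f(x + ih), so there is no subtractive cancellation and the step h can be made extremely small.

    ```python
    # Complex-step derivative vs. central difference on a scalar test function.
    import numpy as np

    def complex_step(f, x, h=1e-30):
        return np.imag(f(x + 1j * h)) / h

    def central_diff(f, x, h=1e-6):
        return (f(x + h) - f(x - h)) / (2 * h)

    f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)
    print(complex_step(f, 0.5))   # accurate to machine precision
    print(central_diff(f, 0.5))   # limited by the truncation / round-off trade-off
    ```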

  12. 26 CFR 1.341-7 - Certain sales of stock of consenting corporations.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... of section 341(f)(1) and (5)— (i) The term sale means a sale or exchange of stock at a gain, but only if such gain would be recognized as long-term capital gain were section 341 not a part of the Code... there is no gain on the transaction, or if the sale or exchange gives rise to ordinary income under a...

  13. 26 CFR 1.341-7 - Certain sales of stock of consenting corporations.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... of section 341(f)(1) and (5)— (i) The term sale means a sale or exchange of stock at a gain, but only if such gain would be recognized as long-term capital gain were section 341 not a part of the Code... there is no gain on the transaction, or if the sale or exchange gives rise to ordinary income under a...

  14. 26 CFR 1.341-7 - Certain sales of stock of consenting corporations.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... of section 341(f)(1) and (5)— (i) The term sale means a sale or exchange of stock at a gain, but only if such gain would be recognized as long-term capital gain were section 341 not a part of the Code... there is no gain on the transaction, or if the sale or exchange gives rise to ordinary income under a...

  15. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    NASA Astrophysics Data System (ADS)

    Guillemot, Christine; Siohan, Pierre

    2005-12-01

    Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.

  16. Quality of experience enhancement of high efficiency video coding video streaming in wireless packet networks using multiple description coding

    NASA Astrophysics Data System (ADS)

    Boumehrez, Farouk; Brai, Radhia; Doghmane, Noureddine; Mansouri, Khaled

    2018-01-01

    Recently, video streaming has attracted much attention and interest due to its capability to process and transmit large volumes of data. We propose a quality of experience (QoE) model relying on a high efficiency video coding (HEVC) encoder adaptation scheme, in turn based on multiple description coding (MDC), for video streaming. The main contributions of the paper are: (1) a performance evaluation of the new and emerging video coding standard HEVC/H.265, based on varying quantization parameter (QP) values for different video contents to deduce their influence on the transmitted sequence; (2) an investigation of QoE support for multimedia applications in wireless networks, in which we inspect the impact of packet loss on the QoE of transmitted video sequences; (3) an HEVC encoder parameter adaptation scheme based on MDC, modeled with the encoder parameter and an objective QoE model. A comparative study revealed that the proposed MDC approach is effective for improving transmission, with a peak signal-to-noise ratio (PSNR) gain of about 2 to 3 dB. Results show that a good choice of QP value can compensate for transmission channel effects and improve received video quality, although HEVC/H.265 is also sensitive to packet loss. The obtained results show the efficiency of our proposed method in terms of PSNR and mean opinion score.

  17. Characterization of LDPC-coded orbital angular momentum modes transmission and multiplexing over a 50-km fiber.

    PubMed

    Wang, Andong; Zhu, Long; Chen, Shi; Du, Cheng; Mo, Qi; Wang, Jian

    2016-05-30

    Mode-division multiplexing over fibers has attracted increasing attention over the last few years as a potential solution to further increase fiber transmission capacity. In this paper, we demonstrate the viability of orbital angular momentum (OAM) modes transmission over a 50-km few-mode fiber (FMF). By analyzing mode properties of eigen modes in an FMF, we study the inner mode group differential modal delay (DMD) in FMF, which may influence the transmission capacity in long-distance OAM modes transmission and multiplexing. To mitigate the impact of large inner mode group DMD in long-distance fiber-based OAM modes transmission, we use low-density parity-check (LDPC) codes to increase the system reliability. By evaluating the performance of LDPC-coded single OAM mode transmission over 50-km fiber, significant coding gains of >4 dB, 8 dB and 14 dB are demonstrated for 1-Gbaud, 2-Gbaud and 5-Gbaud quadrature phase-shift keying (QPSK) signals, respectively. Furthermore, in order to verify and compare the influence of DMD in long-distance fiber transmission, single OAM mode transmission over 10-km FMF is also demonstrated in the experiment. Finally, we experimentally demonstrate OAM multiplexing and transmission over a 50-km FMF using LDPC-coded 1-Gbaud QPSK signals to compensate the influence of mode crosstalk and DMD in the 50 km FMF.

  18. 78 FR 13401 - Proposed Collection; Comment Request For Regulation Project

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-27

    ... must generally file a gain recognition agreement with the IRS in order to defer gain on a Code section... for the proper performance of the functions of the agency, including whether the information shall...

  19. Enhanced decoding for the Galileo low-gain antenna mission: Viterbi redecoding with four decoding stages

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Belongie, M.

    1995-01-01

    The Galileo low-gain antenna mission will be supported by a coding system that uses a (14,1/4) inner convolutional code concatenated with Reed-Solomon codes of four different redundancies. Decoding for this code is designed to proceed in four distinct stages of Viterbi decoding followed by Reed-Solomon decoding. In each successive stage, the Reed-Solomon decoder only tries to decode the highest redundancy codewords not yet decoded in previous stages, and the Viterbi decoder redecodes its data utilizing the known symbols from previously decoded Reed-Solomon codewords. A previous article analyzed a two-stage decoding option that was not selected by Galileo. The present article analyzes the four-stage decoding scheme and derives the near-optimum set of redundancies selected for use by Galileo. The performance improvements relative to one- and two-stage decoding systems are evaluated.

  20. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    NASA Astrophysics Data System (ADS)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N^2) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.

  1. On the error statistics of Viterbi decoding and the performance of concatenated codes

    NASA Technical Reports Server (NTRS)

    Miller, R. L.; Deutsch, L. J.; Butman, S. A.

    1981-01-01

    Computer simulation results are presented on the performance of convolutional codes of constraint lengths 7 and 10 concatenated with the (255, 223) Reed-Solomon code (a proposed NASA standard). These results indicate that as much as 0.8 dB can be gained by concatenating this Reed-Solomon code with a (10, 1/3) convolutional code, instead of the (7, 1/2) code currently used by the DSN. A mathematical model of Viterbi decoder burst-error statistics is developed and is validated through additional computer simulations.

  2. 75 FR 5858 - Proposed Collection; Comment Request for Notice 97-64

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-04

    ... (Applying Section 1(h) to Capital Gain Dividends of RICs and REITs). DATES: Written comments should be... Code (Applying Section 1(h) to Capital Gain Dividends of RICs and REITs). OMB Number: 1545-1565. Notice... capital gain dividends. Current Actions: There are no changes being made to the notice at this time. This...

  3. Optimal dynamic remapping of parallel computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Reynolds, Paul F., Jr.

    1987-01-01

    A large class of computations is characterized by a sequence of phases, with phase changes occurring unpredictably. The decision problem of remapping workload to processors in a parallel computation is considered for the case where the utility of remapping and the future behavior of the workload are uncertain; execution requirements are stable during a given phase but may change radically between phases. For these problems, a workload assignment generated for one phase may hinder performance during the next phase. This problem is treated formally for a probabilistic model of computation with at most two phases. The fundamental problem of balancing the expected remapping performance gain against the delay cost is addressed. Stochastic dynamic programming is used to show that the remapping decision policy minimizing the expected running time of the computation has an extremely simple structure. Because the gain may not be predictable, the performance of a heuristic policy that does not require estimation of the gain is examined. The heuristic method's feasibility is demonstrated by its use in an adaptive fluid dynamics code on a multiprocessor. The results suggest that, except in extreme cases, the remapping decision problem is essentially that of dynamically determining whether gain can be achieved by remapping after a phase change. The results also suggest that this heuristic is applicable to computations with more than two phases.
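    The trade-off at the heart of the decision problem can be sketched as follows, with an assumed cost model and hypothetical names: remap only if the estimated per-step gain from rebalancing, accumulated over the expected remaining steps of the phase, exceeds the remapping delay. The paper's own heuristic notably avoids estimating the gain; this sketch only illustrates the gain-versus-delay comparison itself.

    ```python
    # Sketch of the gain-versus-delay comparison behind the remapping decision
    # (assumed cost model and names; not the paper's policy or heuristic).
    def should_remap(step_times, remap_cost, expected_remaining_steps):
        """step_times: most recent per-processor busy times for one step."""
        worst = max(step_times)                      # current step time (slowest processor)
        ideal = sum(step_times) / len(step_times)    # perfectly balanced step time
        gain_per_step = worst - ideal
        return gain_per_step * expected_remaining_steps > remap_cost

    # Example: 4 processors, one badly overloaded, 200 steps expected in this phase.
    print(should_remap([1.0, 1.1, 0.9, 2.0], remap_cost=25.0,
                       expected_remaining_steps=200))    # -> True
    ```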

  4. Design of ACM system based on non-greedy punctured LDPC codes

    NASA Astrophysics Data System (ADS)

    Lu, Zijun; Jiang, Zihong; Zhou, Lin; He, Yucheng

    2017-08-01

    In this paper, an adaptive coded modulation (ACM) scheme based on rate-compatible LDPC (RC-LDPC) codes is designed. The RC-LDPC codes are constructed by a non-greedy puncturing method, which shows good performance in the high-code-rate region. Moreover, an incremental redundancy scheme for the LDPC-based ACM system over the AWGN channel is proposed. Under this scheme, code rates vary from 2/3 to 5/6 and the complexity of the ACM system is reduced. Simulations show that the proposed ACM system obtains increasingly significant coding gain together with higher throughput.
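    The bookkeeping behind a rate-compatible punctured family can be sketched as follows: one mother LDPC encoder serves every rate, higher rates are obtained by not transmitting some parity bits, and the receiver re-inserts zero LLRs (erasures) at the punctured positions before decoding. The pattern below simply drops trailing parity bits of an assumed systematic codeword; it is not the paper's non-greedy puncturing pattern.

    ```python
    # Rate-compatible puncturing bookkeeping sketch (assumed systematic mother code).
    import numpy as np

    def puncture(codeword, k, target_rate):
        """Keep all k info bits, drop trailing parity bits to reach target_rate."""
        n_tx = int(round(k / target_rate))
        return codeword[:n_tx], codeword.size        # transmitted bits, mother length

    def depuncture(llrs_rx, n_mother):
        """Pad received LLRs with zeros (erasures) back to the mother code length."""
        return np.concatenate([llrs_rx, np.zeros(n_mother - llrs_rx.size)])

    k, n = 400, 600                                  # mother code of rate 2/3 (toy sizes)
    cw = np.random.default_rng(0).integers(0, 2, n)
    tx, n_mother = puncture(cw, k, target_rate=5/6)  # transmit 480 of the 600 bits
    llrs = depuncture(2.0 * (1 - 2 * tx), n_mother)  # BPSK LLRs, zeros where punctured
    print(tx.size, llrs.size)                        # 480 600
    ```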

  5. New coding advances for deep space communications

    NASA Technical Reports Server (NTRS)

    Yuen, Joseph H.

    1987-01-01

    Advances made in error-correction coding for deep space communications are described. The code believed to be the best is a (15, 1/6) convolutional code, with maximum likelihood decoding; when it is concatenated with a 10-bit Reed-Solomon code, it achieves a bit error rate of 10 to the -6th, at a bit SNR of 0.42 dB. This code outperforms the Voyager code by 2.11 dB. The use of source statistics in decoding convolutionally encoded Voyager images from the Uranus encounter is investigated, and it is found that a 2 dB decoding gain can be achieved.

  6. A Subsonic Aircraft Design Optimization With Neural Network and Regression Approximators

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.; Haller, William J.

    2004-01-01

    The Flight-Optimization-System (FLOPS) code encountered difficulty in analyzing a subsonic aircraft. The limitation made the design optimization problematic. The deficiencies have been alleviated through use of neural network and regression approximations. The insight gained from using the approximators is discussed in this paper. The FLOPS code is reviewed. Analysis models are developed and validated for each approximator. The regression method appears to hug the data points, while the neural network approximation follows a mean path. For an analysis cycle, the approximate model required milliseconds of central processing unit (CPU) time versus seconds by the FLOPS code. Performance of the approximators was satisfactory for aircraft analysis. A design optimization capability has been created by coupling the derived analyzers to the optimization test bed CometBoards. The approximators were efficient reanalysis tools in the aircraft design optimization. Instability encountered in the FLOPS analyzer was eliminated. The convergence characteristics were improved for the design optimization. The CPU time required to calculate the optimum solution, measured in hours with the FLOPS code was reduced to minutes with the neural network approximation and to seconds with the regression method. Generation of the approximators required the manipulation of a very large quantity of data. Design sensitivity with respect to the bounds of aircraft constraints is easily generated.

  7. Enhanced decoding for the Galileo S-band mission

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Belongie, M.

    1993-01-01

    A coding system under consideration for the Galileo S-band low-gain antenna mission is a concatenated system using a variable redundancy Reed-Solomon outer code and a (14,1/4) convolutional inner code. The 8-bit Reed-Solomon symbols are interleaved to depth 8, and the eight 255-symbol codewords in each interleaved block have redundancies 64, 20, 20, 20, 64, 20, 20, and 20, respectively (or equivalently, the codewords have 191, 235, 235, 235, 191, 235, 235, and 235 8-bit information symbols, respectively). This concatenated code is to be decoded by an enhanced decoder that utilizes a maximum likelihood (Viterbi) convolutional decoder; a Reed-Solomon decoder capable of processing erasures; an algorithm for declaring erasures in undecoded codewords based on known erroneous symbols in neighboring decodable words; a second Viterbi decoding operation (redecoding) constrained to follow only paths consistent with the known symbols from previously decodable Reed-Solomon codewords; and a second Reed-Solomon decoding operation using the output from the Viterbi redecoder and additional erasure declarations to the extent possible. It is estimated that this code and decoder can achieve a decoded bit error rate of 1 x 10(exp -7) at a concatenated code signal-to-noise ratio of 0.76 dB. By comparison, a threshold of 1.17 dB is required for a baseline coding system consisting of the same (14,1/4) convolutional code, a (255,223) Reed-Solomon code with constant redundancy 32 also interleaved to depth 8, a one-pass Viterbi decoder, and a Reed-Solomon decoder incapable of declaring or utilizing erasures. The relative gain of the enhanced system is thus 0.41 dB. It is predicted from analysis based on an assumption of infinite interleaving that the coding gain could be further improved by approximately 0.2 dB if four stages of Viterbi decoding and four levels of Reed-Solomon redundancy are permitted. Confirmation of this effect and specification of the optimum four-level redundancy profile for depth-8 interleaving are currently in progress.

  8. Distributed polar-coded OFDM based on Plotkin's construction for half duplex wireless communication

    NASA Astrophysics Data System (ADS)

    Umar, Rahim; Yang, Fengfan; Mughal, Shoaib; Xu, HongJun

    2018-07-01

    A Plotkin-based polar-coded orthogonal frequency division multiplexing (P-PC-OFDM) scheme is proposed and its bit error rate (BER) performance over additive white Gaussian noise (AWGN), frequency-selective Rayleigh, Rician and Nakagami-m fading channels has been evaluated. The considered Plotkin's construction possesses a parallel split in its structure, which motivated us to extend the proposed P-PC-OFDM scheme to a coded cooperative scenario. Since the relay's effective collaboration has always been pivotal in the design of cooperative communication, an efficient selection criterion for choosing the information bits is incorporated at the relay node. To assess the BER performance of the proposed cooperative scheme, we have also upgraded the conventional polar-coded cooperative scheme in the context of OFDM as an appropriate benchmark. Monte Carlo simulation results reveal that the proposed Plotkin-based polar-coded cooperative OFDM scheme convincingly outperforms the conventional polar-coded cooperative OFDM scheme by 0.5-0.6 dB over the AWGN channel. This prominent gain in BER performance is made possible by the bit-selection criterion and the joint successive cancellation decoding adopted at the relay and the destination nodes, respectively. Furthermore, the proposed coded cooperative schemes outperform their corresponding non-cooperative schemes by a gain of 1 dB under identical conditions.

  9. Bit selection using field drilling data and mathematical investigation

    NASA Astrophysics Data System (ADS)

    Momeni, M. S.; Ridha, S.; Hosseini, S. J.; Meyghani, B.; Emamian, S. S.

    2018-03-01

    A drilling process cannot be completed without a drill bit, so bit selection is an important task in the drilling optimization process. Selecting a bit is an important issue in planning and designing a well, simply because the drill bit accounts for a large share of the total drilling cost. To perform this task, a back-propagation ANN model is developed, trained on several wells using drilling bit records from offset wells. In this project, two ANN models are developed: one to predict the IADC bit code and one to predict the ROP. In Stage 1, the IADC bit code is predicted from all the given field data; the output is the targeted IADC bit code. In Stage 2, the predicted ROP values are obtained using the IADC bit code from Stage 1. In Stage 3, the predicted ROP value is fed back into the data set to obtain the predicted IADC bit code; the output is the predicted IADC bit code. Thus, in the end, there are two models that provide the predicted ROP values and the predicted IADC bit code values.
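    A minimal sketch of the two-stage workflow is given below, using synthetic data and assumed feature names (the field data, network architecture, and training settings of the study are not reproduced): one back-propagation network predicts the IADC bit code from drilling parameters, and a second network predicts ROP with the predicted code appended as an extra input.

    ```python
    # Two-stage back-propagation ANN sketch for bit selection (synthetic data).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    X = rng.random((200, 4))                    # e.g. depth, WOB, RPM, mud weight (assumed)
    iadc = rng.integers(111, 547, 200).astype(float)   # IADC bit codes (synthetic targets)
    rop = rng.random(200) * 30                  # rate of penetration, m/h (synthetic targets)

    stage1 = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, iadc)
    X2 = np.column_stack([X, stage1.predict(X)])        # append the predicted IADC code
    stage2 = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X2, rop)

    print("predicted ROP for the first record:", stage2.predict(X2[:1])[0])
    ```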

  10. Development of 1D Liner Compression Code for IDL

    NASA Astrophysics Data System (ADS)

    Shimazu, Akihisa; Slough, John; Pancotti, Anthony

    2015-11-01

    A 1D liner compression code is developed to model liner implosion dynamics in the Inductively Driven Liner Experiment (IDL), where an FRC plasmoid is compressed via inductively driven metal liners. The driver circuit, magnetic field, joule heating, and liner dynamics calculations are performed at each time step in sequence to couple these effects in the code. To obtain more realistic magnetic field results for a given drive coil geometry, 2D and 3D effects are incorporated into the 1D field calculation through the use of a correction-factor table lookup approach. The commercial low-frequency electromagnetic field solver ANSYS Maxwell 3D is used to solve the magnetic field profile for static liner conditions at various liner radii in order to derive correction factors for the 1D field calculation in the code. The liner dynamics results from the code are verified to be in good agreement with the results from the commercial explicit dynamics solver ANSYS Explicit Dynamics and with a previous liner experiment. The developed code is used to optimize the capacitor bank and driver coil design for better energy transfer and coupling. FRC gain calculations are also performed using the liner compression data from the code for the conceptual design of the reactor-sized system for fusion energy gains.
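    The correction-factor lookup described above can be pictured with the small sketch below. The radii and factors here are made-up placeholders; in the actual code the table is filled from ANSYS Maxwell 3D solutions at several static liner radii, and the 1D field is scaled by the interpolated factor at each time step.

    ```python
    # Correction-factor table lookup sketch (placeholder table values).
    import numpy as np

    # Precomputed table: liner radius [m] vs. (3D field / 1D field) ratio (assumed values).
    table_radius = np.array([0.02, 0.04, 0.06, 0.08, 0.10])
    table_factor = np.array([0.78, 0.84, 0.90, 0.94, 0.97])

    def corrected_field(b_1d, radius):
        """Scale the ideal 1D field by the geometric correction interpolated from the table."""
        return b_1d * np.interp(radius, table_radius, table_factor)

    print(corrected_field(b_1d=12.0, radius=0.05))   # toy field value in tesla
    ```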

  11. 49 CFR 178.905 - Large Packaging identification codes.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Large Packaging identification codes. 178.905... FOR PACKAGINGS Large Packagings Standards § 178.905 Large Packaging identification codes. Large packaging code designations consist of: two numerals specified in paragraph (a) of this section; followed by...

  12. A forward error correction technique using a high-speed, high-rate single chip codec

    NASA Astrophysics Data System (ADS)

    Boyd, R. W.; Hartman, W. F.; Jones, Robert E.

    The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible utilizing applique hardware in conjunction with the hard-decision decoder. Expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at 10-5 bit-error rate with phase-shift-keying modulation and additive Gaussian white noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32 n data bits followed by 32 overhead bits.

  13. Temporal parallelization of edge plasma simulations using the parareal algorithm and the SOLPS code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samaddar, Debasmita; Coster, D. P.; Bonnin, X.

    We show that numerical modelling of edge plasma physics may be successfully parallelized in time. The parareal algorithm has been employed for this purpose and the SOLPS code package coupling the B2.5 finite-volume fluid plasma solver with the kinetic Monte-Carlo neutral code Eirene has been used as a test bed. The complex dynamics of the plasma and neutrals in the scrape-off layer (SOL) region makes this a unique application. It is demonstrated that a significant computational gain (more than an order of magnitude) may be obtained with this technique. The use of the IPS framework for event-based parareal implementation optimizes resource utilization and has been shown to significantly contribute to the computational gain.
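    For readers unfamiliar with parareal, the toy scalar-ODE sketch below (not SOLPS/B2.5-Eirene) shows the essential structure: a cheap coarse propagator G drives a sequential correction sweep, while the expensive fine propagator F is applied independently, and hence in parallel, to every time slice of the previous iterate.

    ```python
    # Toy parareal iteration for du/dt = lam * u (illustrative propagators only).
    import numpy as np

    lam = -1.0

    def F(u, dt, n_sub=100):                 # fine propagator: many small Euler steps
        for _ in range(n_sub):
            u = u + (dt / n_sub) * lam * u
        return u

    def G(u, dt):                            # coarse propagator: a single Euler step
        return u + dt * lam * u

    T, N, u0 = 2.0, 20, 1.0
    dt = T / N
    U = np.empty(N + 1); U[0] = u0
    for n in range(N):                       # initial coarse sweep
        U[n + 1] = G(U[n], dt)

    for k in range(5):                       # parareal iterations
        F_prev = np.array([F(U[n], dt) for n in range(N)])   # parallelizable over slices
        G_prev = np.array([G(U[n], dt) for n in range(N)])
        for n in range(N):                   # cheap sequential correction sweep
            U[n + 1] = G(U[n], dt) + F_prev[n] - G_prev[n]

    print("parareal:", U[-1], "exact:", np.exp(lam * T))
    ```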

  14. Temporal parallelization of edge plasma simulations using the parareal algorithm and the SOLPS code

    DOE PAGES

    Samaddar, Debasmita; Coster, D. P.; Bonnin, X.; ...

    2017-07-31

    We show that numerical modelling of edge plasma physics may be successfully parallelized in time. The parareal algorithm has been employed for this purpose and the SOLPS code package coupling the B2.5 finite-volume fluid plasma solver with the kinetic Monte-Carlo neutral code Eirene has been used as a test bed. The complex dynamics of the plasma and neutrals in the scrape-off layer (SOL) region makes this a unique application. It is demonstrated that a significant computational gain (more than an order of magnitude) may be obtained with this technique. The use of the IPS framework for event-based parareal implementation optimizes resource utilization and has been shown to significantly contribute to the computational gain.

  15. Uplink Coding

    NASA Technical Reports Server (NTRS)

    Pollara, Fabrizio; Hamkins, Jon; Dolinar, Sam; Andrews, Ken; Divsalar, Dariush

    2006-01-01

    This viewgraph presentation reviews uplink coding. The purpose and goals of the briefing are to (1) show a plan for using uplink coding and describe its benefits; (2) define possible solutions and their applicability to different types of uplink, including emergency uplink; (3) concur with our conclusions so we can embark on a plan to use the proposed uplink system; (4) identify the need for the development of appropriate technology and its infusion into the DSN; and (5) gain advocacy to implement uplink coding in flight projects. Action Item EMB04-1-14 -- Show a plan for using uplink coding, including showing where it is useful or not (include discussion of emergency uplink coding).

  16. The Plasma Simulation Code: A modern particle-in-cell code with patch-based load-balancing

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Fox, William; Abbott, Stephen; Ahmadi, Narges; Maynard, Kristofor; Wang, Liang; Ruhl, Hartmut; Bhattacharjee, Amitava

    2016-08-01

    This work describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different order particle shape functions. We review the basic components of the particle-in-cell method as well as the computational architecture of the PSC code that allows support for modular algorithms and data structure in the code. We then describe and analyze in detail a distinguishing feature of PSC: patch-based load balancing using space-filling curves which is shown to lead to major efficiency gains over unbalanced methods and a previously used simpler balancing method.
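    A minimal sketch of patch-based balancing along a space-filling curve is given below, using a 2D Morton (Z-order) curve and a simple prefix-sum cut; PSC's actual curve handling, cost model, and data structures are not reproduced. Ordering patches along the curve keeps spatially nearby patches on the same rank, while the cut positions adapt to per-patch load.

    ```python
    # Space-filling-curve load balancing sketch (Morton order + prefix-sum cuts).
    import numpy as np

    def morton_index(ix, iy, bits=10):
        """Interleave the bits of (ix, iy) to get the Z-order (Morton) index."""
        z = 0
        for b in range(bits):
            z |= ((ix >> b) & 1) << (2 * b) | ((iy >> b) & 1) << (2 * b + 1)
        return z

    def balance(patches, loads, n_ranks):
        """patches: list of (ix, iy); loads: per-patch cost; returns the rank of each patch."""
        order = np.argsort([morton_index(ix, iy) for ix, iy in patches])
        target = np.sum(loads) / n_ranks
        ranks, acc, rank = np.empty(len(patches), dtype=int), 0.0, 0
        for p in order:                       # walk the curve, cutting at ~equal load
            if acc >= target and rank < n_ranks - 1:
                rank, acc = rank + 1, 0.0
            ranks[p] = rank
            acc += loads[p]
        return ranks

    patches = [(ix, iy) for ix in range(8) for iy in range(8)]
    loads = np.ones(64); loads[:8] = 5.0      # a few expensive patches
    print(np.bincount(balance(patches, loads, n_ranks=4)))   # patches per rank
    ```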

  17. Investigating the use of quick response codes in the gross anatomy laboratory.

    PubMed

    Traser, Courtney J; Hoffman, Leslie A; Seifert, Mark F; Wilson, Adam B

    2015-01-01

    The use of quick response (QR) codes within undergraduate university courses is on the rise, yet literature concerning their use in medical education is scant. This study examined student perceptions on the usefulness of QR codes as learning aids in a medical gross anatomy course, statistically analyzed whether this learning aid impacted student performance, and evaluated whether performance could be explained by the frequency of QR code usage. Question prompts and QR codes tagged on cadaveric specimens and models were available for four weeks as learning aids to medical (n = 155) and doctor of physical therapy (n = 39) students. Each QR code provided answers to posed questions in the form of embedded text or hyperlinked web pages. Students' perceptions were gathered using a formative questionnaire and practical examination scores were used to assess potential gains in student achievement. Overall, students responded positively to the use of QR codes in the gross anatomy laboratory as 89% (57/64) agreed the codes augmented their learning of anatomy. The users' most noticeable objection to using QR codes was the reluctance to bring their smartphones into the gross anatomy laboratory. A comparison between the performance of QR code users and non-users was found to be nonsignificant (P = 0.113), and no significant gains in performance (P = 0.302) were observed after the intervention. Learners welcomed the implementation of QR code technology in the gross anatomy laboratory, yet this intervention had no apparent effect on practical examination performance. © 2014 American Association of Anatomists.

  18. 76 FR 66012 - Partner's Distributive Share

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-25

    [email protected] . SUPPLEMENTARY INFORMATION: Background Subchapter K is intended to permit taxpayers to... Revenue Code provides that a partner's distributive share of income, gain, loss, deduction, or credit... that a partner's distributive share of income, gain, loss, deduction, or credit (or item thereof) shall...

  19. Multiple Trellis Coded Modulation (MTCM): An MSAT-X report

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Simon, M. K.

    1986-01-01

    Conventional trellis coding outputs one channel symbol per trellis branch. The notion of multiple trellis coding is introduced wherein more than one channel symbol per trellis branch is transmitted. It is shown that the combination of multiple trellis coding with M-ary modulation yields a performance gain with symmetric signal set comparable to that previously achieved only with signal constellation asymmetry. The advantage of multiple trellis coding over the conventional trellis coded asymmetric modulation technique is that the potential for code catastrophe associated with the latter has been eliminated with no additional cost in complexity (as measured by the number of states in the trellis diagram).
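
    A toy sketch of the multiplicity idea (not the MSAT-X code design): each trellis branch consumes several input bits and emits more than one channel symbol. The 4-state encoder and branch labels below are arbitrary placeholders chosen only to illustrate the structure:

```python
# Toy illustration (not the MSAT-X design): a 4-state trellis encoder that
# outputs TWO QPSK symbols per branch (multiplicity k = 2), the defining
# feature of multiple trellis coded modulation.
import numpy as np

QPSK = np.exp(1j * np.pi / 2 * np.arange(4))   # 4-point constellation

# branch_symbols[state][input] -> (next_state, (sym_index_1, sym_index_2))
# The labels below are arbitrary placeholders, not an optimized assignment.
branch_symbols = {
    s: {u: ((s + u) % 4, ((s + u) % 4, (s + 3 * u) % 4)) for u in range(4)}
    for s in range(4)
}

def mtcm_encode(bits):
    """Encode pairs of input bits; each trellis branch emits 2 QPSK symbols."""
    assert len(bits) % 2 == 0
    state, out = 0, []
    for i in range(0, len(bits), 2):
        u = 2 * bits[i] + bits[i + 1]          # 2 input bits per branch
        state, (a, b) = branch_symbols[state][u]
        out.extend([QPSK[a], QPSK[b]])         # multiplicity-2 output
    return np.array(out)

print(mtcm_encode([1, 0, 0, 1, 1, 1]))
```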

  20. The Educational and Moral Significance of the American Chemical Society's The Chemist's Code of Conduct

    NASA Astrophysics Data System (ADS)

    Bruton, Samuel V.

    2003-05-01

    While the usefulness of the case study method in teaching research ethics is frequently emphasized, less often noted is the educational value of professional codes of ethics. Much can be gained by having students examine codes and reflect on their significance. This paper argues that codes such as the American Chemical Society‘s The Chemist‘s Code of Conduct are an important supplement to the use of cases and describes one way in which they can be integrated profitably into a class discussion of research ethics.

  1. Toward Ultraintense Compact RBS Pump for Recombination 3.4 nm Laser via OFI

    NASA Astrophysics Data System (ADS)

    Suckewer, S.; Ren, J.; Li, S.; Lou, Y.; Morozov, A.; Turnbull, D.; Avitzour, Y.

    In our presentation we overview the progress we have made in developing a new ultrashort and ultraintense laser system based on a Raman backscattering (RBS) amplifier/compressor, from the time of the 10th XRL Conference in Berlin to the present 11th XRL Conference in Belfast. One of the main objectives of the RBS laser system development is to use it for pumping a recombination X-ray laser on the transition to the ground state of CVI ions at 3.4 nm. Using an elaborate computer code, the processes of optical field ionization, electron energy distribution, and recombination were calculated. It was shown that in the very early stage of recombination, when the electron energy distribution is strongly non-Maxwellian, high gain on the transition from the first excited level n=2 to the ground level m=1 can be generated. By adding a large amount of hydrogen gas to the initial gas containing carbon atoms (e.g., methane, CH4), the calculated gain reached values up to 150-200 cm-1. Taking into account this very encouraging result, we have proceeded with the arrangement of the experimental setup. We will present the observation of plasma channels and measurements of the electron density distribution required for generation of gain at 3.4 nm.

  2. 34 CFR 668.6 - Reporting and disclosure requirements for programs that prepare students for gainful employment...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Classification of Instructional Program (CIP) code of that program; and (C) If the student completed a program during the award year— (1) The name and CIP code of that program, and the date the student completed the... program, by name and CIP code, offered by the institution under § 668.8(c)(3) or (d), the total number of...

  3. 34 CFR 668.6 - Reporting and disclosure requirements for programs that prepare students for gainful employment...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Classification of Instructional Program (CIP) code of that program; and (C) If the student completed a program during the award year— (1) The name and CIP code of that program, and the date the student completed the... program, by name and CIP code, offered by the institution under § 668.8(c)(3) or (d), the total number of...

  4. 34 CFR 668.6 - Reporting and disclosure requirements for programs that prepare students for gainful employment...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Classification of Instructional Program (CIP) code of that program; and (C) If the student completed a program during the award year— (1) The name and CIP code of that program, and the date the student completed the... program, by name and CIP code, offered by the institution under § 668.8(c)(3) or (d), the total number of...

  5. 34 CFR 668.6 - Reporting and disclosure requirements for programs that prepare students for gainful employment...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Classification of Instructional Program (CIP) code of that program; and (C) If the student completed a program during the award year— (1) The name and CIP code of that program, and the date the student completed the... program, by name and CIP code, offered by the institution under § 668.8(c)(3) or (d), the total number of...

  6. CubiCal: Suite for fast radio interferometric calibration

    NASA Astrophysics Data System (ADS)

    Kenyon, J. S.; Smirnov, O. M.; Grobler, T. L.; Perkins, S. J.

    2018-05-01

    CubiCal implements several accelerated gain solvers which exploit complex optimization for fast radio interferometric gain calibration. The code can be used for both direction-independent and direction-dependent self-calibration. CubiCal is implemented in Python and Cython, and multiprocessing is fully supported.
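
    For illustration, a minimal direction-independent gain solve of the alternating least-squares (StefCal-style) family can be written in a few lines. This is a sketch under simplifying assumptions (per-antenna diagonal gains, noiseless data), not CubiCal's actual solver:

```python
# Minimal sketch of direction-independent gain calibration in the
# alternating least-squares (StefCal) family; CubiCal's actual solvers use
# complex optimization and also support direction-dependent terms.
import numpy as np

def solve_gains(V_obs, V_model, niter=100):
    """Solve V_obs[p,q] ~= g[p] * V_model[p,q] * conj(g[q]) for per-antenna g."""
    nant = V_obs.shape[0]
    g = np.ones(nant, dtype=complex)
    for _ in range(niter):
        g_new = np.empty_like(g)
        for p in range(nant):
            y = V_model[p, :] * np.conj(g)        # model visibilities given current g
            g_new[p] = np.vdot(y, V_obs[p, :]) / np.vdot(y, y)
        g = 0.5 * (g + g_new)                     # damping helps convergence
    return g

# Toy usage with noiseless simulated data and known gains.
rng = np.random.default_rng(1)
nant = 7
g_true = (1 + 0.2 * rng.standard_normal(nant)) * np.exp(1j * 0.3 * rng.standard_normal(nant))
M = rng.standard_normal((nant, nant)) + 1j * rng.standard_normal((nant, nant))
V_model = M + M.conj().T                          # Hermitian, like real visibilities
V_obs = np.outer(g_true, np.conj(g_true)) * V_model
g_est = solve_gains(V_obs, V_model)
print("max |g| error:", np.max(np.abs(np.abs(g_est) - np.abs(g_true))))  # should be near zero
```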

  7. Channel-capacity gain in entanglement-assisted communication protocols based exclusively on linear optics, single-photon inputs, and coincidence photon counting

    DOE PAGES

    Lougovski, P.; Uskov, D. B.

    2015-08-04

    Entanglement can effectively increase communication channel capacity as evidenced by dense coding that predicts a capacity gain of 1 bit when compared to entanglement-free protocols. However, dense coding relies on Bell states and when implemented using photons the capacity gain is bounded by 0.585 bits due to one's inability to discriminate between the four optically encoded Bell states. In this research we study the following question: Are there alternative entanglement-assisted protocols that rely only on linear optics, coincidence photon counting, and separable single-photon input states and at the same time provide a greater capacity gain than 0.585 bits? In this study, we show that besides the Bell states there is a class of bipartite four-mode two-photon entangled states that facilitate an increase in channel capacity. We also discuss how the proposed scheme can be generalized to the case of two-photon N-mode entangled states for N=6,8.

  8. Parallel design of JPEG-LS encoder on graphics processing units

    NASA Astrophysics Data System (ADS)

    Duan, Hao; Fang, Yong; Huang, Bormin

    2012-01-01

    With recent technical advances in graphics processing units (GPUs), GPUs have outperformed CPUs in terms of compute capability and memory bandwidth. Many successful GPU applications to high performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression which utilizes adaptive context modeling and run-length coding to improve compression ratio. However, adaptive context modeling causes data dependency among adjacent pixels and the run-length coding has to be performed in a sequential way. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on an NVIDIA GPU using the compute unified device architecture (CUDA) programming technology. We use the block parallel strategy, as well as such CUDA techniques as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks, each with 64×64 pixels, we gain the best GPU performance with 26.3x speedup over its original CPU code.
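
    The block-parallel strategy itself can be illustrated without a GPU: split the image into independent 64×64 tiles and compress each tile in parallel. The sketch below uses zlib as a stand-in lossless coder and CPU processes instead of CUDA threads, so it only demonstrates the data decomposition, not the paper's encoder:

```python
# Illustration of the block-parallel decomposition only: independent 64x64
# tiles compressed in parallel, with zlib as a stand-in lossless coder and
# CPU processes instead of CUDA threads (this is not the JPEG-LS encoder).
import zlib
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def split_blocks(img, bs=64):
    """Return a list of bs x bs tiles (edge tiles zero-padded to full size)."""
    H, W = img.shape
    tiles = []
    for y in range(0, H, bs):
        for x in range(0, W, bs):
            tile = np.zeros((bs, bs), dtype=img.dtype)
            sub = img[y:y + bs, x:x + bs]
            tile[:sub.shape[0], :sub.shape[1]] = sub
            tiles.append(tile)
    return tiles

def compress_block(tile):
    return zlib.compress(tile.tobytes(), level=6)

if __name__ == "__main__":
    # Smooth synthetic "band" so the stand-in coder has something to exploit.
    img = (np.add.outer(np.arange(512), np.arange(614)) % 256).astype(np.uint8)
    blocks = split_blocks(img, bs=64)
    with ProcessPoolExecutor() as pool:           # blocks carry no data dependency
        payloads = list(pool.map(compress_block, blocks))
    ratio = img.nbytes / sum(len(p) for p in payloads)
    print(f"{len(blocks)} blocks, compression ratio {ratio:.2f}")
```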

  9. A mixture of sparse coding models explaining properties of face neurons related to holistic and parts-based processing

    PubMed Central

    2017-01-01

    Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models. PMID:28742816

  10. A mixture of sparse coding models explaining properties of face neurons related to holistic and parts-based processing.

    PubMed

    Hosoya, Haruo; Hyvärinen, Aapo

    2017-07-01

    Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models.
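
    The core idea of a category-specific mixture of sparse coding models can be caricatured as follows (a greatly simplified sketch, not the authors' hierarchical Bayesian network): learn one sparse dictionary per category and assign a new input to the submodel that reconstructs it best. The synthetic "face"/"object" patches below are assumptions for illustration only:

```python
# Greatly simplified sketch of a two-submodel sparse coding mixture
# (not the authors' hierarchical model): one dictionary per category,
# with category assignment by best sparse reconstruction.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)

def make_patches(n, kind):
    """Toy stand-ins for 'face' vs 'object' patch statistics."""
    t = np.linspace(0, 1, 64)
    base = np.sin(2 * np.pi * 3 * t) if kind == "face" else np.sign(np.sin(2 * np.pi * 7 * t))
    return base + 0.3 * rng.standard_normal((n, 64))

train_a, train_b = make_patches(200, "face"), make_patches(200, "object")
dict_a = MiniBatchDictionaryLearning(n_components=20, alpha=1.0, random_state=0).fit(train_a)
dict_b = MiniBatchDictionaryLearning(n_components=20, alpha=1.0, random_state=0).fit(train_b)

def assign(x):
    """Pick the submodel with the smaller sparse-reconstruction error."""
    errs = []
    for d in (dict_a, dict_b):
        code = d.transform(x[None, :])
        recon = code @ d.components_
        errs.append(np.sum((x - recon) ** 2))
    return "face" if errs[0] < errs[1] else "object"

test = make_patches(10, "face")
print([assign(x) for x in test])
```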

  11. Measurement Techniques for Clock Jitter

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin; Schlesinger, Adam

    2012-01-01

    NASA is in the process of modernizing its communications infrastructure to accompany the development of a Crew Exploration Vehicle (CEV) to replace the shuttle. With this effort comes the opportunity to infuse more advanced coded modulation techniques, including low-density parity-check (LDPC) codes that offer greater coding gains than the current capability. However, in order to take full advantage of these codes, the ground segment receiver synchronization loops must be able to operate at a lower signal-to-noise ratio (SNR) than supported by equipment currently in use.

  12. Investigation of the Use of Erasures in a Concatenated Coding Scheme

    NASA Technical Reports Server (NTRS)

    Kwatra, S. C.; Marriott, Philip J.

    1997-01-01

    A new method for declaring erasures in a concatenated coding scheme is investigated. This method is used with the rate 1/2 K = 7 convolutional code and the (255, 223) Reed Solomon code. Errors and erasures Reed Solomon decoding is used. The erasure method proposed uses a soft output Viterbi algorithm and information provided by decoded Reed Solomon codewords in a deinterleaving frame. The results show that a gain of 0.3 dB is possible using a minimum amount of decoding trials.
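
    A toy sketch of the mechanism (the reliability statistics, threshold, and declaration rule below are illustrative assumptions, not the paper's SOVA-based procedure): symbols whose soft-output reliability falls below a threshold are declared erasures, and the Reed-Solomon errors-and-erasures condition 2E + S <= d_min - 1 determines decodability:

```python
# Toy sketch of reliability-based erasure declaration for an errors-and-
# erasures RS(255,223) outer decoder; reliabilities and thresholds here are
# illustrative stand-ins, not the paper's SOVA-derived metrics.
import numpy as np

N, K = 255, 223
D_MIN = N - K + 1            # RS minimum distance = 33

def decodable(num_errors, num_erasures):
    """Errors-and-erasures condition: 2E + S <= d_min - 1."""
    return 2 * num_errors + num_erasures <= D_MIN - 1

rng = np.random.default_rng(0)
reliability = rng.gamma(shape=2.0, scale=1.0, size=N)   # stand-in soft-output metric
threshold = 0.4

erased = reliability < threshold                          # declared erasures
true_bad = rng.random(N) < 0.06                           # symbols actually wrong
errors = np.count_nonzero(true_bad & ~erased)             # undetected symbol errors
erasures = np.count_nonzero(erased)

print(f"erasures={erasures}, residual errors={errors}, "
      f"decodable={decodable(errors, erasures)}")
```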

  13. Code Sharing and Collaboration: Experiences from the Scientist's Expert Assistant Project and their Relevance to the Virtual Observatory

    NASA Technical Reports Server (NTRS)

    Jones, Jeremy; Grosvenor, Sandy; Wolf, Karl; Li, Connie; Koratkar, Anuradha; Powers, Edward I. (Technical Monitor)

    2001-01-01

    In the Virtual Observatory (VO), software tools will perform the functions that have traditionally been performed by physical observatories and their instruments. These tools will not be adjuncts to VO functionality but will make up the very core of the VO. Consequently, the tradition of observatory and system independent tools serving a small user base is not valid for the VO. For the VO to succeed, we must improve software collaboration and code sharing between projects and groups. A significant goal of the Scientist's Expert Assistant (SEA) project has been promoting effective collaboration and code sharing between groups. During the past three years, the SEA project has been developing prototypes for new observation planning software tools and strategies. Initially funded by the Next Generation Space Telescope, parts of the SEA code have since been adopted by the Space Telescope Science Institute. SEA has also supplied code for SOFIA, the SIRTF planning tools, and the JSky Open Source Java library. The potential benefits of sharing code are clear. The recipient gains functionality for considerably less cost. The provider gains additional developers working with their code. If enough users groups adopt a set of common code and tools, defacto standards can emerge (as demonstrated by the success of the FITS standard). Code sharing also raises a number of challenges related to the management of the code. In this talk, we will review our experiences with SEA - both successes and failures - and offer some lessons learned that may promote further successes in collaboration and re-use.

  14. Code Sharing and Collaboration: Experiences From the Scientist's Expert Assistant Project and Their Relevance to the Virtual Observatory

    NASA Technical Reports Server (NTRS)

    Korathkar, Anuradha; Grosvenor, Sandy; Jones, Jeremy; Li, Connie; Mackey, Jennifer; Neher, Ken; Obenschain, Arthur F. (Technical Monitor)

    2001-01-01

    In the Virtual Observatory (VO), software tools will perform the functions that have traditionally been performed by physical observatories and their instruments. These tools will not be adjuncts to VO functionality but will make up the very core of the VO. Consequently, the tradition of observatory and system independent tools serving a small user base is not valid for the VO. For the VO to succeed, we must improve software collaboration and code sharing between projects and groups. A significant goal of the Scientist's Expert Assistant (SEA) project has been promoting effective collaboration and code sharing among groups. During the past three years, the SEA project has been developing prototypes for new observation planning software tools and strategies. Initially funded by the Next Generation Space Telescope, parts of the SEA code have since been adopted by the Space Telescope Science Institute. SEA has also supplied code for the SIRTF (Space Infrared Telescope Facility) planning tools, and the JSky Open Source Java library. The potential benefits of sharing code are clear. The recipient gains functionality for considerably less cost. The provider gains additional developers working with their code. If enough users groups adopt a set of common code and tools, de facto standards can emerge (as demonstrated by the success of the FITS standard). Code sharing also raises a number of challenges related to the management of the code. In this talk, we will review our experiences with SEA--both successes and failures, and offer some lessons learned that might promote further successes in collaboration and re-use.

  15. Reliability and coverage analysis of non-repairable fault-tolerant memory systems

    NASA Technical Reports Server (NTRS)

    Cox, G. W.; Carroll, B. D.

    1976-01-01

    A method was developed for the construction of probabilistic state-space models for nonrepairable systems. Models were developed for several systems which achieved reliability improvement by means of error coding, modularized sparing, massive replication, and other fault-tolerant techniques. From these models, sets of reliability and coverage equations for the systems were derived. Comparative analyses of the systems were performed using these equation sets. In addition, the effects of varying subunit reliabilities on system reliability and coverage were described. The results of these analyses indicated that a significant gain in system reliability may be achieved by use of combinations of modularized sparing, error coding, and software error control. For sufficiently reliable system subunits, this gain may far exceed the reliability gain achieved by use of massive replication techniques, yet result in a considerable saving in system cost.
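
    The flavour of the resulting equations can be shown with textbook closed forms for three simple configurations; the failure rate below is an assumed illustrative value and perfect fault coverage is assumed:

```python
# Sketch comparing textbook reliability expressions of the kind such
# state-space models yield: a simplex unit, a unit with one standby spare
# (perfect coverage assumed), and triple modular redundancy (TMR).
import numpy as np

lam = 1e-4          # assumed failure rate per hour for a single unit
t = np.linspace(0, 20000, 5)

R_simplex = np.exp(-lam * t)
R_spared  = np.exp(-lam * t) * (1 + lam * t)        # standby spare, ideal switch
R_tmr     = 3 * np.exp(-2 * lam * t) - 2 * np.exp(-3 * lam * t)

for ti, rs, rp, rt in zip(t, R_simplex, R_spared, R_tmr):
    print(f"t={ti:7.0f} h  simplex={rs:.4f}  spared={rp:.4f}  TMR={rt:.4f}")
```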

  16. MHD thrust vectoring of a rocket engine

    NASA Astrophysics Data System (ADS)

    Labaune, Julien; Packan, Denis; Tholin, Fabien; Chemartin, Laurent; Stillace, Thierry; Masson, Frederic

    2016-09-01

    In this work, the possibility to use magnetohydrodynamics (MHD) to vector the thrust of a solid propellant rocket engine exhaust is investigated. Using a magnetic field for vectoring offers a mass gain and a reusability advantage compared to standard gimbaled, elastomer-joint systems. Analytical and numerical models were used to evaluate the flow deviation with a 1 Tesla magnetic field inside the nozzle. The fluid flow in the resistive MHD approximation is calculated using the KRONOS code from ONERA, coupling the hypersonic CFD platform CEDRE and the electrical code SATURNE from EDF. A critical parameter of these simulations is the electrical conductivity, which was evaluated using a set of equilibrium calculations with 25 species. Two models were used: local thermodynamic equilibrium and frozen flow. In both cases, chlorine captures a large fraction of the free electrons, limiting the electrical conductivity to a value inadequate for thrust vectoring applications. However, when using chlorine-free propellants with 1% in mass of alkali, an MHD thrust vectoring of several degrees was obtained.

  17. Performance Analysis, Design Considerations, and Applications of Extreme-Scale In Situ Infrastructures

    DOE PAGES

    Ayachit, Utkarsh; Bauer, Andrew; Duque, Earl P. N.; ...

    2016-11-01

    A key trend facing extreme-scale computational science is the widening gap between computational and I/O rates, and the challenge that follows is how to best gain insight from simulation data when it is increasingly impractical to save it to persistent storage for subsequent visual exploration and analysis. One approach to this challenge is centered around the idea of in situ processing, where visualization and analysis processing is performed while data is still resident in memory. Our paper examines several key design and performance issues related to the idea of in situ processing at extreme scale on modern platforms: scalability, overhead, performance measurement and analysis, comparison and contrast with a traditional post hoc approach, and interfacing with simulation codes. We illustrate these principles in practice with studies, conducted on large-scale HPC platforms, that include a miniapplication and multiple science application codes, one of which demonstrates in situ methods in use at greater than 1M-way concurrency.

  18. Multidimensional modulation for next-generation transmission systems

    NASA Astrophysics Data System (ADS)

    Millar, David S.; Koike-Akino, Toshiaki; Kojima, Keisuke; Parsons, Kieran

    2017-01-01

    Recent research in multidimensional modulation has shown great promise in long reach applications. In this work, we will investigate the origins of this gain, the different approaches to multidimensional constellation design, and different performance metrics for coded modulation. We will also discuss the reason that such coded modulation schemes seem to have limited application at shorter distances, and the potential for other coded modulation schemes in future transmission systems.

  19. Independent evolution of genomic characters during major metazoan transitions.

    PubMed

    Simakov, Oleg; Kawashima, Takeshi

    2017-07-15

    Metazoan evolution encompasses a vast evolutionary time scale spanning over 600 million years. Our ability to infer ancestral metazoan characters, both morphological and functional, is limited by our understanding of the nature and evolutionary dynamics of the underlying regulatory networks. Increasing coverage of metazoan genomes enables us to identify the evolutionary changes of the relevant genomic characters such as the loss or gain of coding sequences, gene duplications, micro- and macro-synteny, and non-coding element evolution in different lineages. In this review we describe recent advances in our understanding of ancestral metazoan coding and non-coding features, as deduced from genomic comparisons. Some genomic changes such as innovations in gene and linkage content occur at different rates across metazoan clades, suggesting some level of independence among genomic characters. While their contribution to biological innovation remains largely unclear, we review recent literature about certain genomic changes that do correlate with changes to specific developmental pathways and metazoan innovations. In particular, we discuss the origins of the recently described pharyngeal cluster which is conserved across deuterostome genomes, and highlight different genomic features that have contributed to the evolution of this group. We also assess our current capacity to infer ancestral metazoan states from gene models and comparative genomics tools and elaborate on the future directions of metazoan comparative genomics relevant to evo-devo studies. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  20. GPU Linear Algebra Libraries and GPGPU Programming for Accelerating MOPAC Semiempirical Quantum Chemistry Calculations.

    PubMed

    Maia, Julio Daniel Carvalho; Urquiza Carvalho, Gabriel Aires; Mangueira, Carlos Peixoto; Santana, Sidney Ramos; Cabral, Lucidio Anjos Formiga; Rocha, Gerd B

    2012-09-11

    In this study, we present some modifications in the semiempirical quantum chemistry MOPAC2009 code that accelerate single-point energy calculations (1SCF) of medium-size (up to 2500 atoms) molecular systems using GPU coprocessors and multithreaded shared-memory CPUs. Our modifications consisted of using a combination of highly optimized linear algebra libraries for both CPU (LAPACK and BLAS from Intel MKL) and GPU (MAGMA and CUBLAS) to hasten time-consuming parts of MOPAC such as the pseudodiagonalization, full diagonalization, and density matrix assembling. We have shown that it is possible to obtain large speedups just by using CPU serial linear algebra libraries in the MOPAC code. As a special case, we show a speedup of up to 14 times for a methanol simulation box containing 2400 atoms and 4800 basis functions, with even greater gains in performance when using multithreaded CPUs (2.1 times in relation to the single-threaded CPU code using linear algebra libraries) and GPUs (3.8 times). This degree of acceleration opens new perspectives for modeling larger structures which appear in inorganic chemistry (such as zeolites and MOFs), biochemistry (such as polysaccharides, small proteins, and DNA fragments), and materials science (such as nanotubes and fullerenes). In addition, we believe that this parallel (GPU-GPU) MOPAC code will make it feasible to use semiempirical methods in lengthy molecular simulations using both hybrid QM/MM and QM/QM potentials.

  1. An FPGA design of generalized low-density parity-check codes for rate-adaptive optical transport networks

    NASA Astrophysics Data System (ADS)

    Zou, Ding; Djordjevic, Ivan B.

    2016-02-01

    Forward error correction (FEC) is one of the key technologies enabling the next-generation high-speed fiber optical communications. In this paper, we propose a rate-adaptive scheme using a class of generalized low-density parity-check (GLDPC) codes with a Hamming code as the local code. We show that with the proposed unified GLDPC decoder architecture, variable net coding gains (NCGs) can be achieved with no error floor at BER down to 10^-15, making it a viable solution in the next-generation high-speed fiber optical communications.

  2. 26 CFR 1.1012-1 - Basis of property.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... (relating to gain or loss on the disposition of property), subchapter C (relating to corporate distributions and adjustments), subchapter K (relating to partners and partnerships), and subchapter P (relating to capital gains and losses), chapter 1 of the code. (b) Real estate taxes as part of cost. In computing the...

  3. 26 CFR 1.755-1 - Rules for allocation of basis.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... (CONTINUED) INCOME TAXES (CONTINUED) Provisions Common to Part II, Subchapter K, Chapter 1 of the Code § 1...) property (capital gain property), and any other property of the partnership (ordinary income property). For purposes of this section, properties and potential gain treated as unrealized receivables under section 751...

  4. 26 CFR 1.755-1 - Rules for allocation of basis.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... (CONTINUED) INCOME TAXES (CONTINUED) Provisions Common to Part II, Subchapter K, Chapter 1 of the Code § 1...) property (capital gain property), and any other property of the partnership (ordinary income property). For purposes of this section, properties and potential gain treated as unrealized receivables under section 751...

  5. 26 CFR 1.1012-1 - Basis of property.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... (relating to gain or loss on the disposition of property), subchapter C (relating to corporate distributions and adjustments), subchapter K (relating to partners and partnerships), and subchapter P (relating to capital gains and losses), chapter 1 of the code. (b) Real estate taxes as part of cost. In computing the...

  6. 26 CFR 1.755-1 - Rules for allocation of basis.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... (CONTINUED) INCOME TAXES (CONTINUED) Provisions Common to Part II, Subchapter K, Chapter 1 of the Code § 1...) property (capital gain property), and any other property of the partnership (ordinary income property). For purposes of this section, properties and potential gain treated as unrealized receivables under section 751...

  7. 26 CFR 1.755-1 - Rules for allocation of basis.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... (CONTINUED) INCOME TAXES (CONTINUED) Provisions Common to Part II, Subchapter K, Chapter 1 of the Code § 1...) property (capital gain property), and any other property of the partnership (ordinary income property). For purposes of this section, properties and potential gain treated as unrealized receivables under section 751...

  8. Mobile, hybrid Compton/coded aperture imaging for detection, identification and localization of gamma-ray sources at stand-off distances

    NASA Astrophysics Data System (ADS)

    Tornga, Shawn R.

    The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO) with the goal of detection, identification, and localization of weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile truck-based, hybrid gamma-ray imaging system able to quickly detect, identify, and localize radiation sources at standoff distances through improved sensitivity while minimizing the false alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities: coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals 5x5x2 in3 each, arranged in a random coded aperture mask array (CA), followed by 30 position sensitive NaI bars each 24x2.5x3 in3 called the detection array (DA). The CA array acts as both a coded aperture mask and scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton scattered events and coded aperture events. In this thesis, the developed coded aperture, Compton, and hybrid imaging algorithms will be described along with their performance. It will be shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either alone. Since the TMI is a moving system, peripheral data, such as Global Positioning System (GPS) and Inertial Navigation System (INS) data, must also be incorporated. A method of adapting static imaging algorithms to a moving platform has been developed. Also, algorithms were developed in parallel with detector hardware, through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4). Simulations have been well validated against measured data. Results of image reconstruction algorithms at various speeds and distances will be presented as well as localization capability. Utilizing imaging information will show signal-to-noise gains over spectroscopic algorithms alone.

  9. A novel concatenated code based on the improved SCG-LDPC code for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Xie, Ya; Wang, Lin; Huang, Sheng; Wang, Yong

    2013-01-01

    Based on the optimization and improvement of the construction method for the systematically constructed Gallager (SCG) (4, k) code, a novel SCG low-density parity-check (SCG-LDPC) (3969,3720) code suitable for optical transmission systems is constructed. A novel SCG-LDPC (6561,6240) code with a code rate of 95.1% is constructed by increasing the length of the SCG-LDPC (3969,3720) code, so that the code rate of the LDPC codes can better meet the high requirements of optical transmission systems. A novel concatenated code is then constructed by concatenating the SCG-LDPC(6561,6240) code and the BCH(127,120) code with a code rate of 94.5%. The simulation results and analyses show that the net coding gain (NCG) of the BCH(127,120)+SCG-LDPC(6561,6240) concatenated code is 2.28 dB and 0.48 dB higher than those of the classic RS(255,239) code and the SCG-LDPC(6561,6240) code, respectively, at a bit error rate (BER) of 10-7.
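
    As a reminder of how a net coding gain (NCG) figure of this kind is computed, the sketch below evaluates the standard Q-factor-based definition for a hard-decision reference channel; the pre-FEC threshold BER and the overall code rate used here are assumed illustrative values, not numbers from the paper:

```python
# Sketch of how an NCG number is computed from a hard-decision reference
# channel (Q-factor convention). The pre-FEC threshold BER and the overall
# code rate below are assumed illustrative values, not the paper's.
import numpy as np
from scipy.special import erfcinv

def ncg_db(ber_out, ber_in, rate):
    """NCG = Q(uncoded @ ber_out) - Q(uncoded @ ber_in) + 10*log10(rate), in dB."""
    q_out = np.sqrt(2) * erfcinv(2 * ber_out)   # Q-factor needed without coding
    q_in = np.sqrt(2) * erfcinv(2 * ber_in)     # Q-factor at the FEC threshold
    return 20 * np.log10(q_out) - 20 * np.log10(q_in) + 10 * np.log10(rate)

rate = (6240 / 6561) * (120 / 127)   # assumed overall rate of the concatenation
print(f"NCG ~ {ncg_db(1e-7, 2e-3, rate):.2f} dB at an assumed pre-FEC BER of 2e-3")
```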

  10. Spherical rotation orientation indication for HEVC and JEM coding of 360 degree video

    NASA Astrophysics Data System (ADS)

    Boyce, Jill; Xu, Qian

    2017-09-01

    Omnidirectional (or "360 degree") video, representing a panoramic view of a spherical 360° ×180° scene, can be encoded using conventional video compression standards, once it has been projection mapped to a 2D rectangular format. Equirectangular projection format is currently used for mapping 360 degree video to a rectangular representation for coding using HEVC/JEM. However, video in the top and bottom regions of the image, corresponding to the "north pole" and "south pole" of the spherical representation, is significantly warped. We propose to perform spherical rotation of the input video prior to HEVC/JEM encoding in order to improve the coding efficiency, and to signal parameters in a supplemental enhancement information (SEI) message that describe the inverse rotation process recommended to be applied following HEVC/JEM decoding, prior to display. Experiment results show that up to 17.8% bitrate gain (using the WS-PSNR end-to-end metric) can be achieved for the Chairlift sequence using HM16.15 and 11.9% gain using JEM6.0, and an average gain of 2.9% for HM16.15 and 2.2% for JEM6.0.
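
    A minimal numpy sketch of the pre-encoding step, rotating an equirectangular frame on the sphere with a nearest-neighbour remap (codec integration and SEI signalling are not modelled here, and the yaw/pitch values are arbitrary):

```python
# Minimal sketch of rotating an equirectangular frame on the sphere before
# encoding (nearest-neighbour remap only; not the HM/JEM tool chain).
import numpy as np

def rotate_equirect(frame, yaw_deg, pitch_deg):
    """Return the frame re-sampled after a yaw/pitch rotation of the sphere."""
    H, W = frame.shape[:2]
    # Output pixel grid -> longitude/latitude
    lon = (np.arange(W) + 0.5) / W * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(H) + 0.5) / H * np.pi
    lon, lat = np.meshgrid(lon, lat)
    # To unit vectors on the sphere
    v = np.stack([np.cos(lat) * np.cos(lon),
                  np.cos(lat) * np.sin(lon),
                  np.sin(lat)], axis=-1)
    yaw, pitch = np.deg2rad(yaw_deg), np.deg2rad(pitch_deg)
    Rz = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                   [np.sin(yaw),  np.cos(yaw), 0],
                   [0, 0, 1]])
    Ry = np.array([[np.cos(pitch), 0, np.sin(pitch)],
                   [0, 1, 0],
                   [-np.sin(pitch), 0, np.cos(pitch)]])
    v_src = v @ (Rz @ Ry)        # inverse rotation: where each output pixel came from
    lon_s = np.arctan2(v_src[..., 1], v_src[..., 0])
    lat_s = np.arcsin(np.clip(v_src[..., 2], -1, 1))
    j = np.clip(((lon_s + np.pi) / (2 * np.pi) * W).astype(int), 0, W - 1)
    i = np.clip(((np.pi / 2 - lat_s) / np.pi * H).astype(int), 0, H - 1)
    return frame[i, j]

frame = np.arange(180 * 360).reshape(180, 360) % 255   # toy frame
print(rotate_equirect(frame, yaw_deg=30, pitch_deg=10).shape)
```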

  11. Learning Gains from a Recurring "Teach and Question" Homework Assignment in a General Biology Course: Using Reciprocal Peer Tutoring Outside Class.

    PubMed

    Bailey, E G; Baek, D; Meiling, J; Morris, C; Nelson, N; Rice, N S; Rose, S; Stockdale, P

    2018-06-01

    Providing students with one-on-one interaction with instructors is a big challenge in large courses. One solution is to have students interact with their peers during class. Reciprocal peer tutoring (RPT) is a more involved interaction that requires peers to alternate the roles of "teacher" and "student." Theoretically, advantages for peer tutoring include the verbalization and questioning of information and the scaffolded exploration of material through social and cognitive interaction. Studies on RPT vary in their execution, but most require elaborate planning and take up valuable class time. We tested the effectiveness of a "teach and question" (TQ) assignment that required student pairs to engage in RPT regularly outside class. A quasi-experimental design was implemented: one section of a general biology course completed TQ assignments, while another section completed a substitute assignment requiring individuals to review course material. The TQ section outperformed the other section by ∼6% on exams. Session recordings were coded to investigate correlation between TQ quality and student performance. Asking more questions was the characteristic that best predicted exam performance, and this was more predictive than most aspects of the course. We propose the TQ as an easy assignment to implement with large performance gains.

  12. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    PubMed

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes, derived from multiple component codes. We then show that several recently proposed classes of LDPC codes such as convolutional and spatially-coupled codes can be described using the concept of GLDPC coding, which indicates that the GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaption, to adjust the error correction strength depending on the optical channel conditions.

  13. Long Non-Coding RNAs in Multiple Myeloma

    PubMed Central

    Ronchetti, Domenica; Taiana, Elisa; Vinci, Cristina; Neri, Antonino

    2018-01-01

    Multiple myeloma (MM) is an incurable disease caused by the malignant proliferation of bone marrow plasma cells, whose pathogenesis remains largely unknown. Although a large fraction of the genome is actively transcribed, most of the transcripts do not serve as templates for proteins and are referred to as non-coding RNAs (ncRNAs), broadly divided into short and long transcripts on the basis of a 200-nucleotide threshold. Short ncRNAs, especially microRNAs, have crucial roles in virtually all types of cancer, including MM, and have gained importance in cancer diagnosis and prognosis, predicting the response to therapy and, notably, as innovative therapeutic targets. Long ncRNAs (lncRNAs) are a very heterogeneous group, involved in many physiological cellular and genomic processes as well as in carcinogenesis, cancer metastasis, and invasion. LncRNAs are aberrantly expressed in various types of cancers, including hematological malignancies, showing either oncogenic or tumor suppressive functions. However, the mechanisms of the related disease-causing events are not yet revealed in most cases. Besides emerging as key players in cancer initiation and progression, lncRNAs possess many interesting features as biomarkers of diagnostic and prognostic importance and, possibly, as druggable molecules of therapeutic utility. This review focuses on the role of lncRNAs in the pathogenesis of MM and summarizes the recent literature. PMID:29389884

  14. Coding for spread spectrum packet radios

    NASA Technical Reports Server (NTRS)

    Omura, J. K.

    1980-01-01

    Packet radios are often expected to operate in a radio communication network environment where there tend to be man-made interference signals. To combat such interference, spread spectrum waveforms are being considered for some applications. The use of convolutional coding with Viterbi decoding to further improve the performance of spread spectrum packet radios is examined. At a bit error rate of 10^-5, performance improvements of 4 dB to 5 dB can easily be achieved with such coding without any change in data rate or spread spectrum bandwidth. This coding gain is more dramatic in an interference environment.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lougovski, P.; Uskov, D. B.

    Entanglement can effectively increase communication channel capacity as evidenced by dense coding that predicts a capacity gain of 1 bit when compared to entanglement-free protocols. However, dense coding relies on Bell states and when implemented using photons the capacity gain is bounded by 0.585 bits due to one's inability to discriminate between the four optically encoded Bell states. In this research we study the following question: Are there alternative entanglement-assisted protocols that rely only on linear optics, coincidence photon counting, and separable single-photon input states and at the same time provide a greater capacity gain than 0.585 bits? In this study, we show that besides the Bell states there is a class of bipartite four-mode two-photon entangled states that facilitate an increase in channel capacity. We also discuss how the proposed scheme can be generalized to the case of two-photon N-mode entangled states for N=6,8.

  16. Orthogonal Multi-Carrier DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Tanaka, Ken; Tomeba, Hiromichi; Adachi, Fumiyuki

    Orthogonal multi-carrier direct sequence code division multiple access (orthogonal MC DS-CDMA) is a combination of orthogonal frequency division multiplexing (OFDM) and time-domain spreading, while multi-carrier code division multiple access (MC-CDMA) is a combination of OFDM and frequency-domain spreading. In MC-CDMA, a good bit error rate (BER) performance can be achieved by using frequency-domain equalization (FDE), since the frequency diversity gain is obtained. On the other hand, the conventional orthogonal MC DS-CDMA fails to achieve any frequency diversity gain. In this paper, we propose a new orthogonal MC DS-CDMA that can obtain the frequency diversity gain by applying FDE. The conditional BER analysis is presented. The theoretical average BER performance in a frequency-selective Rayleigh fading channel is evaluated by the Monte-Carlo numerical computation method using the derived conditional BER and is confirmed by computer simulation of the orthogonal MC DS-CDMA signal transmission.
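
    The gain from FDE comes from per-subcarrier one-tap equalization of a cyclic-prefixed block. The sketch below shows a single-user MMSE FDE over an assumed frequency-selective channel; the DS-CDMA spreading and despreading of the paper are omitted for brevity:

```python
# Minimal single-user sketch of MMSE frequency-domain equalization for a
# cyclic-prefixed QPSK block over an assumed frequency-selective channel
# (the DS-CDMA spreading of the paper is omitted).
import numpy as np

rng = np.random.default_rng(0)
N = 256                                      # block length (FFT size)
h = np.array([0.8, 0.5, 0.3j, 0.1])          # assumed channel impulse response
snr_db = 15
sigma2 = 10 ** (-snr_db / 10)

# Transmit one QPSK block with a cyclic prefix longer than the channel memory.
bits = rng.integers(0, 2, size=(N, 2))
s = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
cp = len(h)
tx = np.concatenate([s[-cp:], s])

rx = np.convolve(tx, h)[:len(tx)] + np.sqrt(sigma2 / 2) * (
    rng.standard_normal(len(tx)) + 1j * rng.standard_normal(len(tx)))
r = rx[cp:cp + N]                            # strip the cyclic prefix

# MMSE FDE: per-subcarrier one-tap equalizer.
H = np.fft.fft(h, N)
W = np.conj(H) / (np.abs(H) ** 2 + sigma2)   # MMSE weights
s_hat = np.fft.ifft(W * np.fft.fft(r))

bits_hat = np.stack([s_hat.real > 0, s_hat.imag > 0], axis=1).astype(int)
print("bit errors:", np.count_nonzero(bits_hat != bits))
```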

  17. Quantifying effectiveness of failure prediction and response in HPC systems : methodology and example.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayo, Jackson R.; Chen, Frank Xiaoxiao; Pebay, Philippe Pierre

    2010-06-01

    Effective failure prediction and mitigation strategies in high-performance computing systems could provide huge gains in resilience of tightly coupled large-scale scientific codes. These gains would come from prediction-directed process migration and resource servicing, intelligent resource allocation, and checkpointing driven by failure predictors rather than at regular intervals based on nominal mean time to failure. Given probabilistic associations of outlier behavior in hardware-related metrics with eventual failure in hardware, system software, and/or applications, this paper explores approaches for quantifying the effects of prediction and mitigation strategies and demonstrates these using actual production system data. We describe context-relevant methodologies for determining the accuracy and cost-benefit of predictors. While many research studies have quantified the expected impact of growing system size, and the associated shortened mean time to failure (MTTF), on application performance in large-scale high-performance computing (HPC) platforms, there has been little if any work to quantify the possible gains from predicting system resource failures with significant but imperfect accuracy. This possibly stems from HPC system complexity and the fact that, to date, no one has established any good predictors of failure in these systems. Our work in the OVIS project aims to discover these predictors via a variety of data collection techniques and statistical analysis methods that yield probabilistic predictions. The question then is, 'How good or useful are these predictions?' We investigate methods for answering this question in a general setting, and illustrate them using a specific failure predictor discovered on a production system at Sandia.
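
    A back-of-the-envelope sketch of the kind of cost-benefit question raised here (all numbers and the simple waste model are assumptions, not the OVIS methodology): compare the overhead of purely periodic checkpointing, using Young's optimal interval, with a scheme in which a predictor of a given recall triggers proactive checkpoints so that predicted failures lose almost no work:

```python
# Toy cost-benefit sketch (assumed numbers and a simplified waste model):
# periodic checkpointing vs. prediction-assisted checkpointing.
import math

def periodic_overhead(C, mtbf, R):
    """Approximate wasted-time fraction with Young's optimal interval."""
    tau = math.sqrt(2 * C * mtbf)             # optimal checkpoint interval
    return C / tau + (tau / 2 + R) / mtbf

def predictive_overhead(C, mtbf, R, recall, false_alarms_per_mtbf=0.5):
    """Same, but a fraction `recall` of failures are predicted in time."""
    tau = math.sqrt(2 * C * mtbf)
    rework = (1 - recall) * (tau / 2)          # only missed failures lose work
    proactive = (recall + false_alarms_per_mtbf) * C
    return C / tau + (rework + R + proactive) / mtbf

C, mtbf, R = 10 * 60, 24 * 3600, 5 * 60        # seconds: ckpt cost, MTBF, restart
for recall in (0.0, 0.5, 0.9):
    print(f"recall={recall:.1f}: periodic={periodic_overhead(C, mtbf, R):.3%}, "
          f"predictive={predictive_overhead(C, mtbf, R, recall):.3%}")
```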

  18. Large-eddy Simulation of Stratocumulus-topped Atmospheric Boundary Layers with Dynamic Subgrid-scale Models

    NASA Technical Reports Server (NTRS)

    Senocak, Inane

    2003-01-01

    The objective of the present study is to evaluate the dynamic procedure in LES of stratocumulus topped atmospheric boundary layer and assess the relative importance of subgrid-scale modeling, cloud microphysics and radiation modeling on the predictions. The simulations will also be used to gain insight into the processes leading to cloud top entrainment instability and cloud breakup. In this report we document the governing equations, numerical schemes and physical models that are employed in the Goddard Cumulus Ensemble model (GCEM3D). We also present the subgrid-scale dynamic procedures that have been implemented in the GCEM3D code for the purpose of the present study.

  19. Inline CBET Model Including SRS Backscatter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, David S.

    2015-06-26

    Cross-beam energy transfer (CBET) has been used as a tool on the National Ignition Facility (NIF) since the first energetics experiments in 2009 to control the energy deposition in ignition hohlraums and tune the implosion symmetry. As large amounts of power are transferred between laser beams at the entrance holes of NIF hohlraums, the presence of many overlapping beat waves can lead to stochastic ion heating in the regions where laser beams overlap [P. Michel et al., Phys. Rev. Lett. 109, 195004 (2012)]. Using the CBET gains derived in this paper, we show how to implement these equations in a ray-based laser source for a rad-hydro code.

  20. Maximum likelihood decoding analysis of Accumulate-Repeat-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    Repeat-Accumulate (RA) codes are the simplest turbo-like codes that achieve good performance. However, they cannot compete with turbo codes or low-density parity-check (LDPC) codes as far as performance is concerned. The Accumulate-Repeat-Accumulate (ARA) codes, as a subclass of LDPC codes, are obtained by adding a precoder in front of RA codes with puncturing, where an accumulator is chosen as the precoder. These codes not only are very simple, but also achieve excellent performance with iterative decoding. In this paper, the performance of these codes with maximum likelihood (ML) decoding is analyzed and compared to random codes by very tight bounds. The weight distribution of some simple ARA codes is obtained, and through the existing tightest bounds we show that the ML SNR threshold of ARA codes approaches the performance of random codes very closely. We show that the use of the precoder improves the SNR threshold, but the interleaving gain remains unchanged with respect to the RA code with puncturing.

  1. Debating Sex: Education Films and Sexual Morality for the Young in Post-War Germany, 1945-1955.

    PubMed

    Winkler, Anita

    2015-01-01

    After 1945 rapidly climbing figures of venereal disease infections menaced the health of the war-ridden German population. Physicians sought to gain control over this epidemic and initiated large-scale sex education campaigns to inform people about identification, causes and treatment of VD and advised them on appropriate moral sexual behaviour as a prophylactic measure. Film played a crucial role in these campaigns. As mass medium it was believed film could reach out to large parts of society and quickly disseminate sexual knowledge and moral codes of conduct amongst the population. This essay discusses the transition of the initial central role of sex education films in the fight against venereal disease in the immediate post-war years towards a more critical stance as to the effects of cinematographic education of the young in an East and West German context.

  2. Wave rotor-enhanced gas turbine engines

    NASA Technical Reports Server (NTRS)

    Welch, Gerard E.; Scott, Jones M.; Paxson, Daniel E.

    1995-01-01

    The benefits of wave rotor-topping in small (400 to 600 hp-class) and intermediate (3000 to 4000 hp-class) turboshaft engines, and large (80,000 to 100,000 lbf-class) high bypass ratio turbofan engines are evaluated. Wave rotor performance levels are calculated using a one-dimensional design/analysis code. Baseline and wave rotor-enhanced engine performance levels are obtained from a cycle deck in which the wave rotor is represented as a burner with pressure gain. Wave rotor-topping is shown to significantly enhance the specific fuel consumption and specific power of small and intermediate size turboshaft engines. The specific fuel consumption of the wave rotor-enhanced large turbofan engine can be reduced while operating at significantly reduced turbine inlet temperature. The wave rotor-enhanced engine is shown to behave off-design like a conventional engine. Discussion concerning the impact of the wave rotor/gas turbine engine integration identifies tenable technical challenges.

  3. Debating Sex: Education Films and Sexual Morality for the Young in post-War Germany, 1945-55

    PubMed Central

    Winkler, Anita

    2015-01-01

    Summary After 1945 rapidly climbing figures of venereal disease infections menaced the health of the war-ridden German population. Physicians sought to gain control over this epidemic and initiated large-scale sex education campaigns to inform people about identification, causes and treatment of VD and advised them on appropriate moral sexual behaviour as a prophylactic measure. Film played a crucial role in these campaigns. As mass medium it was believed film could reach out to large parts of society and quickly disseminate sexual knowledge and moral codes of conduct amongst the population. This essay discusses the transition of the initial central role of sex education films in the fight against venereal disease in the immediate post-war years towards a more critical stance as to the effects of cinematographic education of the young in an East and West German context. PMID:26403056

  4. Three-dimensional inviscid analysis of radial-turbine flow and a limited comparison with experimental data

    NASA Technical Reports Server (NTRS)

    Choo, Y. K.; Civinskas, K. C.

    1985-01-01

    The three-dimensional inviscid DENTON code is used to analyze flow through a radial-inflow turbine rotor. Experimental data from the rotor are compared with analytical results obtained by using the code. The experimental data available for comparison are the radial distributions of circumferentially averaged values of absolute flow angle and total pressure downstream of the rotor exit. The computed rotor-exit flow angles are generally underturned relative to the experimental values, which reflect the boundary-layer separation at the trailing edge and the development of wakes downstream of the rotor. The experimental rotor is designed for a higher-than-optimum work factor of 1.126 resulting in a nonoptimum positive incidence and causing a region of rapid flow adjustment and large velocity gradients. For this experimental rotor, the computed radial distribution of rotor-exit to turbine-inlet total pressure ratios are underpredicted due to the errors in the finite-difference approximations in the regions of rapid flow adjustment, and due to using the relatively coarser grids in the middle of the blade region where the flow passage is highly three-dimensional. Additional results obtained from the three-dimensional inviscid computation are also presented, but without comparison due to the lack of experimental data. These include quasi-secondary velocity vectors on cross-channel surfaces, velocity components on the meridional and blade-to-blade surfaces, and blade surface loading diagrams. Computed results show the evolution of a passage vortex and large streamline deviations from the computational streamwise grid lines. Experience gained from applying the code to a radial turbine geometry is also discussed.

  5. Three-dimensional inviscid analysis of radial turbine flow and a limited comparison with experimental data

    NASA Technical Reports Server (NTRS)

    Choo, Y. K.; Civinskas, K. C.

    1985-01-01

    The three-dimensional inviscid DENTON code is used to analyze flow through a radial-inflow turbine rotor. Experimental data from the rotor are compared with analytical results obtained by using the code. The experimental data available for comparison are the radial distributions of circumferentially averaged values of absolute flow angle and total pressure downstream of the rotor exit. The computed rotor-exit flow angles are generally underturned relative to the experimental values, which reflect the boundary-layer separation at the trailing edge and the development of wakes downstream of the rotor. The experimental rotor is designed for a higher-than-optimum work factor of 1.126 resulting in a nonoptimum positive incidence and causing a region of rapid flow adjustment and large velocity gradients. For this experimental rotor, the computed radial distribution of rotor-exit to turbine-inlet total pressure ratios are underpredicted due to the errors in the finite-difference approximations in the regions of rapid flow adjustment, and due to using the relatively coarser grids in the middle of the blade region where the flow passage is highly three-dimensional. Additional results obtained from the three-dimensional inviscid computation are also presented, but without comparison due to the lack of experimental data. These include quasi-secondary velocity vectors on cross-channel surfaces, velocity components on the meridional and blade-to-blade surfaces, and blade surface loading diagrams. Computed results show the evolution of a passage vortex and large streamline deviations from the computational streamwise grid lines. Experience gained from applying the code to a radial turbine geometry is also discussed.

  6. 75 FR 34388 - Employee Contribution Elections and Contribution Allocations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-17

    ...-sector employees under section 401(k) of the Internal Revenue Code (26 U.S.C. 401(k)). On June 22, 2009... made under the automatic enrollment program (adjusted for allocable gains and losses through the date... employee contributions (adjusted for allocable gains and losses). (3) Processing of refunds will be subject...

  7. 26 CFR 1.752-3 - Partner's share of nonrecourse liabilities.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...) INCOME TAX (CONTINUED) INCOME TAXES Provisions Common to Part II, Subchapter K, Chapter 1 of the Code § 1...) The partner's share of partnership minimum gain determined in accordance with the rules of section 704(b) and the regulations thereunder; (2) The amount of any taxable gain that would be allocated to the...

  8. 26 CFR 1.755-1 - Rules for allocation of basis.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... (CONTINUED) INCOME TAXES Provisions Common to Part II, Subchapter K, Chapter 1 of the Code § 1.755-1 Rules...) property (capital gain property), and any other property of the partnership (ordinary income property). For purposes of this section, properties and potential gain treated as unrealized receivables under section 751...

  9. Space coding for sensorimotor transformations can emerge through unsupervised learning.

    PubMed

    De Filippo De Grazia, Michele; Cutini, Simone; Lisi, Matteo; Zorzi, Marco

    2012-08-01

    The posterior parietal cortex (PPC) is fundamental for sensorimotor transformations because it combines multiple sensory inputs and posture signals into different spatial reference frames that drive motor programming. Here, we present a computational model mimicking the sensorimotor transformations occurring in the PPC. A recurrent neural network with one layer of hidden neurons (restricted Boltzmann machine) learned a stochastic generative model of the sensory data without supervision. After the unsupervised learning phase, the activity of the hidden neurons was used to compute a motor program (a population code on a bidimensional map) through a simple linear projection and delta rule learning. The average motor error, calculated as the difference between the expected and the computed output, was less than 3°. Importantly, analyses of the hidden neurons revealed gain-modulated visual receptive fields, thereby showing that space coding for sensorimotor transformations similar to that observed in the PPC can emerge through unsupervised learning. These results suggest that gain modulation is an efficient coding strategy to integrate visual and postural information toward the generation of motor commands.
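
    A loose sketch of this recipe (synthetic stand-in data and hyperparameters are assumptions, and the network is far smaller than the published model): an unsupervised restricted Boltzmann machine learns a hidden representation of coarse-coded retinal and eye-position inputs, and a delta-rule linear readout maps the hidden activity to a motor target:

```python
# Loose sketch of the modelling recipe described above, using sklearn's
# BernoulliRBM as the unsupervised stage and a delta-rule linear readout;
# the data are synthetic stand-ins for retinal + eye-position inputs.
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)

# Synthetic sensory input: retinal target position and eye position,
# each coarse-coded with Gaussian tuning curves.
n_samples = 2000
retinal = rng.random(n_samples)
eye = rng.random(n_samples)
centers = np.linspace(0, 1, 20)
X = np.hstack([np.exp(-((retinal[:, None] - centers) ** 2) / 0.02),
               np.exp(-((eye[:, None] - centers) ** 2) / 0.02)])

# Unsupervised stage: learn a hidden representation of the sensory data.
rbm = BernoulliRBM(n_components=30, learning_rate=0.05, n_iter=20, random_state=0)
H = rbm.fit_transform(X)

# Supervised readout: delta rule mapping hidden activity to a motor target
# (here the head-centred position = retinal + eye, a classic transformation).
target = retinal + eye
w, b, eta = np.zeros(H.shape[1]), 0.0, 0.01
for epoch in range(200):
    for h, y in zip(H, target):
        err = y - (h @ w + b)                 # delta rule update
        w += eta * err * h
        b += eta * err

pred = H @ w + b
print("mean absolute motor error:", np.mean(np.abs(pred - target)))
```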

  10. Unitals and ovals of symmetric block designs in LDPC and space-time coding

    NASA Astrophysics Data System (ADS)

    Andriamanalimanana, Bruno R.

    2004-08-01

    An approach to the design of LDPC (low density parity check) error-correction and space-time modulation codes involves starting with known mathematical and combinatorial structures, and deriving code properties from structure properties. This paper reports on an investigation of unital and oval configurations within generic symmetric combinatorial designs, not just classical projective planes, as the underlying structure for classes of space-time LDPC outer codes. Of particular interest are the encoding and iterative (sum-product) decoding gains that these codes may provide. Various small-length cases have been numerically implemented in Java and Matlab for a number of channel models.

  11. A low-noise wide-dynamic-range event-driven detector using SOI pixel technology for high-energy particle imaging

    NASA Astrophysics Data System (ADS)

    Shrestha, Sumeet; Kamehama, Hiroki; Kawahito, Shoji; Yasutomi, Keita; Kagawa, Keiichiro; Takeda, Ayaki; Tsuru, Takeshi Go; Arai, Yasuo

    2015-08-01

    This paper presents a low-noise wide-dynamic-range pixel design for a high-energy particle detector in astronomical applications. A silicon-on-insulator (SOI) based detector is used for the detection of a wide energy range of high-energy particles (mainly X-rays). The sensor has a thin layer of SOI CMOS readout circuitry and a thick layer of high-resistivity detector vertically stacked in a single chip. Pixel circuits are divided into two parts: a signal sensing circuit and an event detection circuit. The event detection circuit, consisting of a comparator and logic circuits that detect the incidence of a high-energy particle, categorizes the incident photon into two energy groups using an appropriate energy threshold and generates a two-bit code for the event and energy level. The code for the energy level is then used to select the gain of the in-pixel amplifier for the detected signal, providing a function of high-dynamic-range signal measurement. The two-bit code for the event and energy level is scanned in the event scanning block, and the signals from the hit pixels only are read out. The variable-gain in-pixel amplifier uses a continuous integrator and integration-time control for the variable gain. The proposed design allows small-signal detection and wide dynamic range due to the adaptive gain technique and the capability of correlated double sampling (CDS) for canceling the kTC noise of the charge detector.
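
    A minimal sketch of the two-bit event/energy coding and gain-selection logic described above; the threshold and gain values are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the two-bit event/energy coding described above.
# Threshold and gain values are illustrative assumptions, not the paper's.

EVENT_THRESHOLD = 50.0      # detect an event (arbitrary units)
ENERGY_THRESHOLD = 500.0    # split events into low/high energy groups
GAIN_HIGH, GAIN_LOW = 8.0, 1.0

def encode_pixel(signal):
    """Return (event_bit, energy_bit, amplifier_gain) for one pixel sample."""
    event_bit = 1 if signal > EVENT_THRESHOLD else 0
    energy_bit = 1 if signal > ENERGY_THRESHOLD else 0
    # Low-energy events use high gain; high-energy events use low gain,
    # which is what widens the dynamic range.
    gain = GAIN_LOW if energy_bit else GAIN_HIGH
    return event_bit, energy_bit, gain

if __name__ == "__main__":
    for s in (10.0, 120.0, 900.0):
        print(s, encode_pixel(s))
```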

  12. Performance comparison of leading image codecs: H.264/AVC Intra, JPEG2000, and Microsoft HD Photo

    NASA Astrophysics Data System (ADS)

    Tran, Trac D.; Liu, Lijie; Topiwala, Pankaj

    2007-09-01

    This paper provides a detailed rate-distortion performance comparison between JPEG2000, Microsoft HD Photo, and H.264/AVC High Profile 4:4:4 I-frame coding for high-resolution still images and high-definition (HD) 1080p video sequences. This work is an extension of our comparative studies published in previous SPIE conferences [1, 2]. Here we further optimize all three codecs for compression performance. Coding simulations are performed on a set of large-format color images captured from mainstream digital cameras and 1080p HD video sequences commonly used for H.264/AVC standardization work. Overall, our experimental results show that all three codecs offer very similar coding performances at the high-quality, high-resolution setting. Differences tend to be data-dependent: JPEG2000 with the wavelet technology tends to be the best performer with smooth spatial data; H.264/AVC High-Profile with advanced spatial prediction modes tends to cope best with more complex visual content; Microsoft HD Photo tends to be the most consistent across the board. For the still-image data sets, JPEG2000 offers the best R-D performance gains (around 0.2 to 1 dB in peak signal-to-noise ratio) over H.264/AVC High-Profile intra coding and Microsoft HD Photo. For the 1080p video data set, all three codecs offer very similar coding performance. As in [1, 2], we consider neither scalability nor complexity in this study (JPEG2000 is operating in non-scalable, but optimal performance mode).
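
    Quality in this comparison is reported in peak signal-to-noise ratio among other measures; for reference, a minimal PSNR computation (the example images below are synthetic and purely illustrative):

```python
import numpy as np

def psnr(reference, test, max_value=255.0):
    """Peak signal-to-noise ratio in dB between two images (NumPy arrays)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)

# Example: a reconstructed 8-bit image that differs slightly from the original.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
reconstructed = np.clip(original + rng.normal(0, 2.0, original.shape), 0, 255)
print(f"PSNR = {psnr(original, reconstructed):.2f} dB")
```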

  13. Polarized skylight navigation in insects: model and electrophysiology of e-vector coding by neurons in the central complex.

    PubMed

    Sakura, Midori; Lambrinos, Dimitrios; Labhart, Thomas

    2008-02-01

    Many insects exploit skylight polarization for visual compass orientation or course control. As found in crickets, the peripheral visual system (optic lobe) contains three types of polarization-sensitive neurons (POL neurons), which are tuned to different e-vector orientations (diverging by approximately 60 degrees). Thus each e-vector orientation elicits a specific combination of activities among the POL neurons, coding any e-vector orientation by just three neural signals. In this study, we hypothesize that in the presumed orientation center of the brain (central complex) e-vector orientation is population-coded by a set of "compass neurons." Using computer modeling, we present a neural network model transforming the signal triplet provided by the POL neurons to compass neuron activities coding e-vector orientation by a population code. Using intracellular electrophysiology and cell marking, we present evidence that neurons with the response profile of the presumed compass neurons do indeed exist in the insect brain: each of these compass neuron-like (CNL) cells is activated by a specific e-vector orientation only and otherwise remains silent. Morphologically, CNL cells are tangential neurons extending from the lateral accessory lobe to the lower division of the central body. Surpassing the modeled compass neurons in performance, CNL cells are insensitive to the degree of polarization of the stimulus from 99% down to at least 18% and thus largely disregard variations of skylight polarization due to changing solar elevations or atmospheric conditions. This suggests that the polarization vision system includes a gain control circuit keeping the output activity at a constant level.
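
    The three-signal population code described above can be made concrete with idealized cosine-tuned POL neurons (an assumption made purely for illustration; real tuning curves differ). With preferred orientations 60 degrees apart, a weighted complex sum of the three activities recovers the e-vector orientation:

```python
import numpy as np

# Preferred e-vector orientations of the three POL neuron types (~60 deg apart).
pref = np.deg2rad([0.0, 60.0, 120.0])

def pol_responses(evector_deg):
    """Idealized POL neuron activity: cosine tuning with 180-degree periodicity."""
    phi = np.deg2rad(evector_deg)
    return np.cos(2.0 * (phi - pref))

def decode(responses):
    """Recover e-vector orientation (deg) from the three-signal population code."""
    z = np.sum(responses * np.exp(2j * pref))   # equals (3/2) * exp(2j * phi)
    return np.rad2deg(np.angle(z) / 2.0) % 180.0

for true_angle in (10.0, 75.0, 160.0):
    print(true_angle, decode(pol_responses(true_angle)))
```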

  14. Coded continuous wave meteor radar

    NASA Astrophysics Data System (ADS)

    Chau, J. L.; Vierinen, J.; Pfeffer, N.; Clahsen, M.; Stober, G.

    2016-12-01

    The concept of a coded continuous wave specular meteor radar (SMR) is described. The radar uses a continuously transmitted pseudorandom phase-modulated waveform, which has several advantages compared to conventional pulsed SMRs. The coding avoids range and Doppler aliasing, which are in some cases problematic with pulsed radars. Continuous transmissions maximize pulse compression gain, allowing operation at lower peak power than a pulsed system. With continuous coding, the temporal and spectral resolution are not dependent on the transmit waveform and they can be fairly flexibly changed after performing a measurement. The low signal-to-noise ratio before pulse compression, combined with independent pseudorandom transmit waveforms, allows multiple geographically separated transmitters to be used in the same frequency band simultaneously without significantly interfering with each other. Because the same frequency band can be used by multiple transmitters, the same interferometric receiver antennas can be used to receive multiple transmitters at the same time. The principles of the signal processing are discussed, in addition to discussion of several practical ways to increase computation speed, and how to optimally detect meteor echoes. Measurements from a campaign performed with a coded continuous wave SMR are shown and compared with two standard pulsed SMR measurements. The type of meteor radar described in this paper would be suited for use in a large-scale multi-static network of meteor radar transmitters and receivers. Such a system would be useful for increasing the number of meteor detections to obtain improved meteor radar data products, such as wind fields. This type of a radar would also be useful for over-the-horizon radar, ionosondes, and observations of field-aligned-irregularities.
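
    Pulse compression with a continuously transmitted pseudorandom phase code can be illustrated in a few lines of NumPy; the code length, echo delay, amplitude, and noise level below are arbitrary illustrative values, and circular correlation stands in for the continuous-transmission case:

```python
import numpy as np

rng = np.random.default_rng(1)

# Pseudorandom binary phase code (+1/-1), continuously transmitted.
N = 2000
code = rng.choice([-1.0, 1.0], size=N)

# Simulated echo: the code delayed by 137 samples, attenuated, plus strong noise
# (per-sample SNR is well below 0 dB before compression).
true_delay = 137
echo = 0.2 * np.roll(code, true_delay) + rng.normal(0.0, 1.0, N)

# Pulse compression: cross-correlate the received signal with the known code.
# The correlation peak recovers the delay (range) despite the low per-sample SNR.
lags = np.arange(N)
xcorr = np.array([np.dot(echo, np.roll(code, k)) for k in lags])
print("estimated delay:", lags[np.argmax(xcorr)])   # expected: 137
```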

  15. Combined Wavelet Video Coding and Error Control for Internet Streaming and Multicast

    NASA Astrophysics Data System (ADS)

    Chu, Tianli; Xiong, Zixiang

    2003-12-01

    This paper proposes an integrated approach to Internet video streaming and multicast (e.g., receiver-driven layered multicast (RLM) by McCanne) based on combined wavelet video coding and error control. We design a packetized wavelet video (PWV) coder to facilitate its integration with error control. The PWV coder produces packetized layered bitstreams that are independent among layers while being embedded within each layer. Thus, a lost packet only renders the following packets in the same layer useless. Based on the PWV coder, we search for a multilayered error-control strategy that optimally trades off source and channel coding for each layer under a given transmission rate to mitigate the effects of packet loss. While both the PWV coder and the error-control strategy are new—the former incorporates embedded wavelet video coding and packetization and the latter extends the single-layered approach for RLM by Chou et al.—the main distinction of this paper lies in the seamless integration of the two parts. Theoretical analysis shows a gain of up to 1 dB on a channel with 20% packet loss using our combined approach over separate designs of the source coder and the error-control mechanism. This is also substantiated by our simulations with a gain of up to 0.6 dB. In addition, our simulations show a gain of up to 2.2 dB over previous results reported by Chou et al.

  16. Some User's Insights Into ADIFOR 2.0D

    NASA Technical Reports Server (NTRS)

    Giesy, Daniel P.

    2002-01-01

    Some insights are given which were gained by one user through experience with the use of the ADIFOR 2.0D software for automatic differentiation of Fortran code. These insights are generally in the area of the user interface with the generated derivative code - particularly the actual form of the interface and the use of derivative objects, including "seed" matrices. Some remarks are given as to how to iterate application of ADIFOR in order to generate second derivative code.

  17. Improve load balancing and coding efficiency of tiles in high efficiency video coding by adaptive tile boundary

    NASA Astrophysics Data System (ADS)

    Chan, Chia-Hsin; Tu, Chun-Chuan; Tsai, Wen-Jiin

    2017-01-01

    High efficiency video coding (HEVC) not only improves the coding efficiency drastically compared to the well-known H.264/AVC but also introduces coding tools for parallel processing, one of which is tiles. Tile partitioning is allowed to be arbitrary in HEVC, but how to decide tile boundaries remains an open issue. An adaptive tile boundary (ATB) method is proposed to select a better tile partitioning to improve load balancing (ATB-LoadB) and coding efficiency (ATB-Gain) with a unified scheme. Experimental results show that, compared to ordinary uniform-space partitioning, the proposed ATB can save up to 17.65% of encoding times in parallel encoding scenarios and can reduce up to 0.8% of total bit rates for coding efficiency.

  18. Coding and decoding in a point-to-point communication using the polarization of the light beam.

    PubMed

    Kavehvash, Z; Massoumian, F

    2008-05-10

    A new technique for coding and decoding of optical signals through the use of polarization is described. In this technique the concept of coding is translated to polarization. In other words, coding is done in such a way that each code represents a unique polarization. This is done by implementing a binary pattern on a spatial light modulator in such a way that the reflected light has the required polarization. Decoding is done by the detection of the received beam's polarization. By linking the concept of coding to polarization we can use each of these concepts in measuring the other one, attaining some gains. In this paper the construction of a simple point-to-point communication where coding and decoding is done through polarization will be discussed.

  19. Subspace-Aware Index Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kailkhura, Bhavya; Theagarajan, Lakshmi Narasimhan; Varshney, Pramod K.

    In this paper, we generalize the well-known index coding problem to exploit the structure in the source-data to improve system throughput. In many applications (e.g., multimedia), the data to be transmitted may lie (or can be well approximated) in a low-dimensional subspace. We exploit this low-dimensional structure of the data using an algebraic framework to solve the index coding problem (referred to as subspace-aware index coding) as opposed to the traditional index coding problem which is subspace-unaware. Also, we propose an efficient algorithm based on the alternating minimization approach to obtain near optimal index codes for both subspace-aware and -unaware cases. In conclusion, our simulations indicate that under certain conditions, a significant throughput gain (about 90%) can be achieved by subspace-aware index codes over conventional subspace-unaware index codes.

  20. Subspace-Aware Index Codes

    DOE PAGES

    Kailkhura, Bhavya; Theagarajan, Lakshmi Narasimhan; Varshney, Pramod K.

    2017-04-12

    In this paper, we generalize the well-known index coding problem to exploit the structure in the source-data to improve system throughput. In many applications (e.g., multimedia), the data to be transmitted may lie (or can be well approximated) in a low-dimensional subspace. We exploit this low-dimensional structure of the data using an algebraic framework to solve the index coding problem (referred to as subspace-aware index coding) as opposed to the traditional index coding problem which is subspace-unaware. Also, we propose an efficient algorithm based on the alternating minimization approach to obtain near optimal index codes for both subspace-aware and -unaware cases. In conclusion, our simulations indicate that under certain conditions, a significant throughput gain (about 90%) can be achieved by subspace-aware index codes over conventional subspace-unaware index codes.

  1. Complementary Reliability-Based Decodings of Binary Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1997-01-01

    This correspondence presents a hybrid reliability-based decoding algorithm which combines the reprocessing method based on the most reliable basis and a generalized Chase-type algebraic decoder based on the least reliable positions. It is shown that reprocessing with a simple additional algebraic decoding effort achieves significant coding gain. For long codes, the order of reprocessing required to achieve asymptotic optimum error performance is reduced by approximately 1/3. This significantly reduces the computational complexity, especially for long codes. Also, a more efficient criterion for stopping the decoding process is derived based on the knowledge of the algebraic decoding solution.

  2. Teacher Candidates Implementing Universal Design for Learning: Enhancing Picture Books with QR Codes

    ERIC Educational Resources Information Center

    Grande, Marya; Pontrello, Camille

    2016-01-01

    The purpose of this study was to investigate if teacher candidates could gain knowledge of the principles of Universal Design for Learning by enhancing traditional picture books with Quick Response (QR) codes and to determine if the process of making these enhancements would impact teacher candidates' comfort levels with using technology on both…

  3. A TDM link with channel coding and digital voice.

    NASA Technical Reports Server (NTRS)

    Jones, M. W.; Tu, K.; Harton, P. L.

    1972-01-01

    The features of a TDM (time-division multiplexed) link model are described. A PCM telemetry sequence was coded for error correction and multiplexed with a digitized voice channel. An all-digital implementation of a variable-slope delta modulation algorithm was used to digitize the voice channel. The results of extensive testing are reported. The measured coding gain and the system performance over a Gaussian channel are compared with theoretical predictions and computer simulations. Word intelligibility scores are reported as a measure of voice channel performance.
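
    The abstract mentions an all-digital variable-slope delta modulation algorithm for the voice channel; below is a minimal sketch of one common variable-slope (CVSD-style) scheme, offered only as an illustration. The adaptation constants are assumptions, not the values used in the paper.

```python
import math

def cvsd_encode(samples, min_step=1.0, max_step=64.0, growth=1.5, decay=0.9):
    """Variable-slope delta modulation: 1 bit per sample plus step adaptation."""
    bits, estimate, step, history = [], 0.0, min_step, []
    for x in samples:
        bit = 1 if x >= estimate else 0
        bits.append(bit)
        history = (history + [bit])[-3:]
        # Three identical bits in a row indicate slope overload: grow the step;
        # otherwise let the step decay toward its minimum.
        if len(history) == 3 and len(set(history)) == 1:
            step = min(step * growth, max_step)
        else:
            step = max(step * decay, min_step)
        estimate += step if bit else -step
    return bits

def cvsd_decode(bits, min_step=1.0, max_step=64.0, growth=1.5, decay=0.9):
    """Decoder mirrors the encoder's step adaptation, so it needs only the bits."""
    out, estimate, step, history = [], 0.0, min_step, []
    for bit in bits:
        history = (history + [bit])[-3:]
        if len(history) == 3 and len(set(history)) == 1:
            step = min(step * growth, max_step)
        else:
            step = max(step * decay, min_step)
        estimate += step if bit else -step
        out.append(estimate)
    return out

# Round-trip a test tone through the 1-bit-per-sample voice channel model.
signal = [40 * math.sin(2 * math.pi * n / 50) for n in range(200)]
reconstruction = cvsd_decode(cvsd_encode(signal))
```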

  4. Experimental study of non-binary LDPC coding for long-haul coherent optical QPSK transmissions.

    PubMed

    Zhang, Shaoliang; Arabaci, Murat; Yaman, Fatih; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Inada, Yoshihisa; Ogata, Takaaki; Aoki, Yasuhiro

    2011-09-26

    The performance of rate-0.8 4-ary LDPC code has been studied in a 50 GHz-spaced 40 Gb/s DWDM system with PDM-QPSK modulation. The net effective coding gain of 10 dB is obtained at BER of 10(-6). With the aid of time-interleaving polarization multiplexing and MAP detection, 10,560 km transmission over legacy dispersion managed fiber is achieved without any countable errors. The proposed nonbinary quasi-cyclic LDPC code achieves an uncoded BER threshold at 4×10(-2). Potential issues like phase ambiguity and coding length are also discussed when implementing LDPC in current coherent optical systems. © 2011 Optical Society of America
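
    Net effective coding gain figures such as the 10 dB quoted above are usually defined via equivalent Q-factors; one commonly used form is stated below as background (it is not necessarily the exact definition used by the authors):

```latex
% A commonly used definition of net coding gain (NCG) for optical FEC.
% Q(p) converts a bit-error ratio p into an equivalent Gaussian Q-factor.
\[
  Q(p) = \sqrt{2}\,\operatorname{erfc}^{-1}(2p), \qquad
  \mathrm{NCG\ [dB]} = 20\log_{10} Q(\mathrm{BER}_{\mathrm{ref}})
                     - 20\log_{10} Q(\mathrm{BER}_{\mathrm{in}})
                     + 10\log_{10} R ,
\]
% where BER_ref is the target output error rate (e.g. 1e-6 above), BER_in is the
% pre-FEC error rate the code can tolerate, and R is the code rate.
```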

  5. Combined coding and delay-throughput analysis for fading channels of mobile satellite communications

    NASA Technical Reports Server (NTRS)

    Wang, C. C.; Yan, Tsun-Yee

    1986-01-01

    This paper presents the analysis of using the punctured convolutional code with Viterbi decoding to improve communications reliability. The punctured code rate is optimized so that the average delay is minimized. The coding gain in terms of the message delay is also defined. Since using punctured convolutional code with interleaving is still inadequate to combat the severe fading for short packets, the use of multiple copies of assignment and acknowledgment packets is suggested. The performance on the average end-to-end delay of this protocol is analyzed. It is shown that a replication of three copies for both assignment packets and acknowledgment packets is optimum for the cases considered.

  6. CFD Modelling of Bore Erosion in Two-Stage Light Gas Guns

    NASA Technical Reports Server (NTRS)

    Bogdanoff, D. W.

    1998-01-01

    A well-validated quasi-one-dimensional computational fluid dynamics (CFD) code for the analysis of the internal ballistics of two-stage light gas guns is modified to explicitly calculate the ablation of steel from the gun bore and the incorporation of the ablated wall material into the hydrogen working gas. The modified code is used to model 45 shots made with the NASA Ames 0.5 inch light gas gun over an extremely wide variety of gun operating conditions. Good agreement is found between the experimental and theoretical piston velocities (maximum errors of +/-2% to +/-6%) and maximum powder pressures (maximum errors of +/-10% with good igniters). Overall, the agreement between the experimental and numerically calculated gun erosion values (within a factor of 2) was judged to be reasonably good, considering the complexity of the processes modelled. Experimental muzzle velocities agree very well (maximum errors of 0.5-0.7 km/sec) with theoretical muzzle velocities calculated with loading of the hydrogen gas with the ablated barrel wall material. Comparison of results for pump tube volumes of 100%, 60% and 40% of an initial benchmark value shows that, at the higher muzzle velocities, operation at 40% pump tube volume produces much lower hydrogen loading and gun erosion and substantially lower maximum pressures in the gun. Large muzzle velocity gains (2.4-5.4 km/sec) are predicted upon driving the gun harder (that is, upon using higher powder loads and/or lower hydrogen fill pressures) when hydrogen loading is neglected; much smaller muzzle velocity gains (1.1-2.2 km/sec) are predicted when hydrogen loading is taken into account. These smaller predicted velocity gains agree well with those achieved in practice. CFD snapshots of the hydrogen mass fraction, density and pressure of the in-bore medium are presented for a very erosive shot.

  7. Identification of BSAP (Pax-5) target genes in early B-cell development by loss- and gain-of-function experiments.

    PubMed Central

    Nutt, S L; Morrison, A M; Dörfler, P; Rolink, A; Busslinger, M

    1998-01-01

    The Pax-5 gene codes for the transcription factor BSAP which is essential for the progression of adult B lymphopoiesis beyond an early progenitor (pre-BI) cell stage. Although several genes have been proposed to be regulated by BSAP, CD19 is to date the only target gene which has been genetically confirmed to depend on this transcription factor for its expression. We have now taken advantage of cultured pre-BI cells of wild-type and Pax-5 mutant bone marrow to screen a large panel of B lymphoid genes for additional BSAP target genes. Four differentially expressed genes were shown to be under the direct control of BSAP, as their expression was rapidly regulated in Pax-5-deficient pre-BI cells by a hormone-inducible BSAP-estrogen receptor fusion protein. The genes coding for the B-cell receptor component Ig-alpha (mb-1) and the transcription factors N-myc and LEF-1 are positively regulated by BSAP, while the gene coding for the cell surface protein PD-1 is efficiently repressed. Distinct regulatory mechanisms of BSAP were revealed by reconstituting Pax-5-deficient pre-BI cells with full-length BSAP or a truncated form containing only the paired domain. IL-7 signalling was able to efficiently induce the N-myc gene only in the presence of full-length BSAP, while complete restoration of CD19 synthesis was critically dependent on the BSAP protein concentration. In contrast, the expression of the mb-1 and LEF-1 genes was already reconstituted by the paired domain polypeptide lacking any transactivation function, suggesting that the DNA-binding domain of BSAP is sufficient to recruit other transcription factors to the regulatory regions of these two genes. In conclusion, these loss- and gain-of-function experiments demonstrate that BSAP regulates four newly identified target genes as a transcriptional activator, repressor or docking protein depending on the specific regulatory sequence context. PMID:9545244

  8. BCH codes for large IC random-access memory systems

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.

    1983-01-01

    In this report some shortened BCH codes for possible applications to large IC random-access memory systems are presented. These codes are given by their parity-check matrices. Encoding and decoding of these codes are discussed.
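
    The report's codes are specified by their parity-check matrices; as a toy stand-in (not one of the report's codes), the sketch below shows syndrome-based single-error correction with the (7,4) Hamming parity-check matrix, the simplest member of the same family of codes used for memory protection:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: columns are the binary
# representations of 1..7, so a single-bit error's syndrome equals its position.
H = np.array([[int(b) for b in format(col, "03b")] for col in range(1, 8)]).T  # 3 x 7

def correct_single_error(word):
    """Correct at most one flipped bit in a received 7-bit word (NumPy array)."""
    syndrome = H.dot(word) % 2
    position = int("".join(map(str, syndrome)), 2)     # 0 means no error detected
    if position:
        word = word.copy()
        word[position - 1] ^= 1
    return word

# Example: flip one bit of a valid codeword and recover it.
codeword = np.zeros(7, dtype=int)                      # the all-zero word is always valid
received = codeword.copy(); received[4] ^= 1
print(correct_single_error(received))                  # -> all zeros again
```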

  9. Genetic algorithm based active vibration control for a moving flexible smart beam driven by a pneumatic rod cylinder

    NASA Astrophysics Data System (ADS)

    Qiu, Zhi-cheng; Shi, Ming-li; Wang, Bin; Xie, Zhuo-wei

    2012-05-01

    A rod cylinder based pneumatic driving scheme is proposed to suppress the vibration of a flexible smart beam. Pulse code modulation (PCM) method is employed to control the motion of the cylinder's piston rod for simultaneous positioning and vibration suppression. Firstly, the system dynamics model is derived using Hamilton principle. Its standard state-space representation is obtained for characteristic analysis, controller design, and simulation. Secondly, a genetic algorithm (GA) is applied to optimize and tune the control gain parameters adaptively based on the specific performance index. Numerical simulations are performed on the pneumatic driving elastic beam system, using the established model and controller with tuned gains by GA optimization process. Finally, an experimental setup for the flexible beam driven by a pneumatic rod cylinder is constructed. Experiments for suppressing vibrations of the flexible beam are conducted. Theoretical analysis, numerical simulation and experimental results demonstrate that the proposed pneumatic drive scheme and the adopted control algorithms are feasible. The large amplitude vibration of the first bending mode can be suppressed effectively.
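
    The abstract does not give the GA details; the following is a minimal, generic genetic-algorithm sketch in which the performance index is a stand-in quadratic cost (purely hypothetical), intended only to illustrate selection, crossover, and mutation over a pair of control gains. A real study would evaluate the closed-loop beam response instead.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(gains):
    """Stand-in performance index (hypothetical): distance from a 'best' gain
    pair plus a small control-effort penalty."""
    kp, kd = gains
    return (kp - 4.0) ** 2 + (kd - 1.5) ** 2 + 0.01 * (kp ** 2 + kd ** 2)

def genetic_tune(pop_size=30, generations=50, bounds=(0.0, 10.0)):
    pop = rng.uniform(*bounds, size=(pop_size, 2))
    for _ in range(generations):
        fitness = np.array([cost(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]   # truncation selection
        # Uniform crossover between randomly paired parents.
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        mask = rng.random((pop_size, 2)) < 0.5
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        # Gaussian mutation, clipped to the search bounds.
        children += rng.normal(0.0, 0.2, children.shape)
        pop = np.clip(children, *bounds)
    return pop[np.argmin([cost(ind) for ind in pop])]

print("tuned gains:", genetic_tune())
```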

  10. Eye Velocity Gain Fields in MSTd During Optokinetic Stimulation

    PubMed Central

    Brostek, Lukas; Büttner, Ulrich; Mustari, Michael J.; Glasauer, Stefan

    2015-01-01

    Lesion studies argue for an involvement of the cortical dorsal medial superior temporal area (MSTd) in the control of optokinetic response (OKR) eye movements to planar visual stimulation. Neural recordings during OKR suggested that MSTd neurons directly encode stimulus velocity. On the other hand, studies using radial visual flow together with voluntary smooth pursuit eye movements showed that visual motion responses were modulated by eye movement-related signals. Here, we investigated neural responses in MSTd during continuous optokinetic stimulation using an information-theoretic approach for characterizing neural tuning with high resolution. We show that the majority of MSTd neurons exhibit gain-field-like tuning functions rather than directly encoding one variable. Neural responses showed a large diversity of tuning to combinations of retinal and extraretinal input. Eye velocity-related activity was observed prior to the actual eye movements, reflecting an efference copy. The observed tuning functions resembled those emerging in a network model trained to perform summation of two population-coded signals. Together, our findings support the hypothesis that MSTd implements the visuomotor transformation from retinal to head-centered stimulus velocity signals for the control of OKR. PMID:24557636

  11. Fluid Flow Investigations within a 37 Element CANDU Fuel Bundle Supported by Magnetic Resonance Velocimetry and Computational Fluid Dynamics

    DOE PAGES

    Piro, M.H.A; Wassermann, F.; Grundmann, S.; ...

    2017-05-23

    The current work presents experimental and computational investigations of fluid flow through a 37 element CANDU nuclear fuel bundle. Experiments based on Magnetic Resonance Velocimetry (MRV) permit three-dimensional, three-component fluid velocity measurements to be made within the bundle with sub-millimeter resolution that are non-intrusive and do not require tracer particles or optical access to the flow field. Computational fluid dynamic (CFD) simulations of the foregoing experiments were performed with the hydra-th code using implicit large eddy simulation, which were in good agreement with experimental measurements of the fluid velocity. Greater understanding has been gained in the evolution of geometry-induced inter-subchannel mixing, the local effects of obstructed debris on the local flow field, and various turbulent effects, such as recirculation, swirl and separation. These capabilities are not available with conventional experimental techniques or thermal-hydraulic codes. Finally, the overall goal of this work is to continue developing experimental and computational capabilities for further investigations that reliably support nuclear reactor performance and safety.

  12. Fluid Flow Investigations within a 37 Element CANDU Fuel Bundle Supported by Magnetic Resonance Velocimetry and Computational Fluid Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piro, M.H.A; Wassermann, F.; Grundmann, S.

    The current work presents experimental and computational investigations of fluid flow through a 37 element CANDU nuclear fuel bundle. Experiments based on Magnetic Resonance Velocimetry (MRV) permit three-dimensional, three-component fluid velocity measurements to be made within the bundle with sub-millimeter resolution that are non-intrusive and do not require tracer particles or optical access to the flow field. Computational fluid dynamic (CFD) simulations of the foregoing experiments were performed with the hydra-th code using implicit large eddy simulation, which were in good agreement with experimental measurements of the fluid velocity. Greater understanding has been gained in the evolution of geometry-induced inter-subchannel mixing, the local effects of obstructed debris on the local flow field, and various turbulent effects, such as recirculation, swirl and separation. These capabilities are not available with conventional experimental techniques or thermal-hydraulic codes. Finally, the overall goal of this work is to continue developing experimental and computational capabilities for further investigations that reliably support nuclear reactor performance and safety.

  13. Development of a large area microstructure photomultiplier assembly (LAMPA)

    NASA Astrophysics Data System (ADS)

    Clifford, E. T. H.; Dick, M.; Facina, M.; Wakeford, D.; Andrews, H. R.; Ing, H.; Best, D.; Baginski, M. J.

    2017-05-01

    Large area (>1 m2) position-sensitive readout of scintillators is important for passive/active gamma and neutron imaging for counter-terrorism applications. The goal of the LAMPA project is to provide a novel, affordable, large-area photodetector (8" x 8") by replacing the conventional dynodes of photomultiplier tubes (PMTs) with electron multiplier microstructure boards (MSBs) that can be produced using industrial manufacturing techniques. The square, planar format of the LAMPA assemblies enables tiling of multiple units to support large area applications. The LAMPA performance objectives include comparable gain, noise, timing, and energy resolution relative to conventional PMTs, as well as spatial resolution in the few mm range. The current LAMPA prototype is a stack of 8" x 8" MSBs made commercially by chemical etching of a molybdenum substrate and coated with hydrogen-terminated boron-doped diamond for high secondary emission yield (SEY). The layers of MSBs are electrically isolated using ceramic standoffs. Field-shaping grids are located between adjacent boards to achieve good transmission of electrons from one board to the next. The spacing between layers and the design of the microstructure pattern and grids were guided by simulations performed using an electro-optics code. A position sensitive anode board at the back of the stack of MSBs provides 2-D readout. This presentation discusses the trade studies performed in the design of the MSBs, the measurements of SEY from various electro-emissive materials, the electro-optics simulations conducted, the design of the 2-D readout, and the mechanical aspects of the LAMPA design, in order to achieve a gain of >10^4 in an 8-stage stack of MSBs, suitable for use with various scintillators when coupled to an appropriate photocathode.
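
    A quick consistency check on the stated gain target, assuming the eight stages contribute equally, gives the average secondary emission yield each stage must supply:

```latex
% An 8-stage stack reaching an overall gain G >= 10^4 requires an average
% per-stage secondary emission yield delta of
\[
  \delta \;\geq\; G^{1/8} \;=\; \left(10^{4}\right)^{1/8} \;=\; 10^{0.5} \;\approx\; 3.2 .
\]
```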

  14. High performance and cost effective CO-OFDM system aided by polar code.

    PubMed

    Liu, Ling; Xiao, Shilin; Fang, Jiafei; Zhang, Lu; Zhang, Yunhao; Bi, Meihua; Hu, Weisheng

    2017-02-06

    A novel polar coded coherent optical orthogonal frequency division multiplexing (CO-OFDM) system is proposed and demonstrated through experiment for the first time. The principle of a polar coded CO-OFDM signal is illustrated theoretically and the suitable polar decoding method is discussed. Results show that the polar coded CO-OFDM signal achieves a net coding gain (NCG) of more than 10 dB at a bit error rate (BER) of 10(-3) over 25-Gb/s 480-km transmission in comparison with conventional CO-OFDM. Also, compared to the 25-Gb/s low-density parity-check (LDPC) coded CO-OFDM 160-km system, the polar code provides an NCG of 0.88 dB at BER = 10(-3). Moreover, the polar code can relieve the laser linewidth requirement massively to get a more cost-effective CO-OFDM system.

  15. Efficient preparation of large-block-code ancilla states for fault-tolerant quantum computation

    NASA Astrophysics Data System (ADS)

    Zheng, Yi-Cong; Lai, Ching-Yi; Brun, Todd A.

    2018-03-01

    Fault-tolerant quantum computation (FTQC) schemes that use multiqubit large block codes can potentially reduce the resource overhead to a great extent. A major obstacle is the requirement for a large number of clean ancilla states of different types without correlated errors inside each block. These ancilla states are usually logical stabilizer states of the data-code blocks, which are generally difficult to prepare if the code size is large. Previously, we have proposed an ancilla distillation protocol for Calderbank-Shor-Steane (CSS) codes by classical error-correcting codes. It was assumed that the quantum gates in the distillation circuit were perfect; however, in reality, noisy quantum gates may introduce correlated errors that are not treatable by the protocol. In this paper, we show that additional postselection by another classical error-detecting code can be applied to remove almost all correlated errors. Consequently, the revised protocol is fully fault tolerant and capable of preparing a large set of stabilizer states sufficient for FTQC using large block codes. At the same time, the yield rate can be boosted from O(t^-2) to O(1) in practice for an [[n, k, d = 2t+1]] code.

  16. Probabilistic Amplitude Shaping With Hard Decision Decoding and Staircase Codes

    NASA Astrophysics Data System (ADS)

    Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi; Steiner, Fabian

    2018-05-01

    We consider probabilistic amplitude shaping (PAS) as a means of increasing the spectral efficiency of fiber-optic communication systems. In contrast to previous works in the literature, we consider probabilistic shaping with hard decision decoding (HDD). In particular, we apply the PAS recently introduced by Böcherer et al. to a coded modulation (CM) scheme with bit-wise HDD that uses a staircase code as the forward error correction code. We show that the CM scheme with PAS and staircase codes yields significant gains in spectral efficiency with respect to the baseline scheme using a staircase code and a standard constellation with uniformly distributed signal points. Using a single staircase code, the proposed scheme achieves performance within 0.57 to 1.44 dB of the corresponding achievable information rate for a wide range of spectral efficiencies.
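
    PAS shapes the distribution of the transmitted amplitudes, typically toward a Maxwell-Boltzmann law over the PAM amplitude levels; a minimal sketch follows (the shaping parameter values are illustrative assumptions only):

```python
import numpy as np

def maxwell_boltzmann(amplitudes, nu):
    """Probabilities P(a) ~ exp(-nu * a^2) over the PAM amplitude levels."""
    w = np.exp(-nu * np.asarray(amplitudes, dtype=float) ** 2)
    return w / w.sum()

# 8-ASK-style amplitudes (one quadrature); nu trades shaped rate against energy.
amps = np.array([1, 3, 5, 7])
for nu in (0.0, 0.02, 0.08):
    p = maxwell_boltzmann(amps, nu)
    entropy = -np.sum(p * np.log2(p))          # shaped bits per amplitude symbol
    energy = np.sum(p * amps.astype(float) ** 2)
    print(f"nu={nu:.2f}  P={np.round(p, 3)}  H={entropy:.2f} bit  E={energy:.1f}")
```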

  17. Parallel workflow manager for non-parallel bioinformatic applications to solve large-scale biological problems on a supercomputer.

    PubMed

    Suplatov, Dmitry; Popova, Nina; Zhumatiy, Sergey; Voevodin, Vladimir; Švedas, Vytas

    2016-04-01

    Rapid expansion of online resources providing access to genomic, structural, and functional information associated with biological macromolecules opens an opportunity to gain a deeper understanding of the mechanisms of biological processes due to systematic analysis of large datasets. This, however, requires novel strategies to optimally utilize computer processing power. Some methods in bioinformatics and molecular modeling require extensive computational resources. Other algorithms have fast implementations which take at most several hours to analyze a common input on a modern desktop station; however, due to multiple invocations for a large number of subtasks, the full task requires significant computing power. Therefore, an efficient computational solution to large-scale biological problems requires both a wise parallel implementation of resource-hungry methods as well as a smart workflow to manage multiple invocations of relatively fast algorithms. In this work, a new computer software mpiWrapper has been developed to accommodate non-parallel implementations of scientific algorithms within the parallel supercomputing environment. The Message Passing Interface has been implemented to exchange information between nodes. Two specialized threads - one for task management and communication, and another for subtask execution - are invoked on each processing unit to avoid deadlock while using blocking calls to MPI. The mpiWrapper can be used to launch all conventional Linux applications without the need to modify their original source codes and supports resubmission of subtasks on node failure. We show that this approach can be used to process huge amounts of biological data efficiently by running non-parallel programs in parallel mode on a supercomputer. The C++ source code and documentation are available from http://biokinet.belozersky.msu.ru/mpiWrapper .
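
    mpiWrapper's internals are not shown in the abstract; the sketch below illustrates the general master/worker pattern it describes (handing serial subtasks to ranks over MPI), using mpi4py and a hypothetical list of shell commands. It uses rank-based roles rather than the two-threads-per-node design of the actual tool, and is only a simplified illustration.

```python
# Minimal master/worker sketch with mpi4py (not the actual mpiWrapper code).
# Run with e.g.: mpiexec -n 4 python worker_pool.py
from mpi4py import MPI
import subprocess

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
TASK, STOP = 1, 2

if rank == 0:
    # Master: hand out serial commands (hypothetical examples) one at a time.
    tasks = [f"echo processing chunk_{i}" for i in range(20)]
    results, active = [], 0
    status = MPI.Status()
    for dest in range(1, size):              # prime every worker with one task
        if tasks:
            comm.send(tasks.pop(), dest=dest, tag=TASK)
            active += 1
        else:
            comm.send(None, dest=dest, tag=STOP)
    while active:                            # refill workers as results come back
        results.append(comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status))
        src = status.Get_source()
        if tasks:
            comm.send(tasks.pop(), dest=src, tag=TASK)
        else:
            comm.send(None, dest=src, tag=STOP)
            active -= 1
else:
    # Worker: run each received command as an ordinary serial process.
    while True:
        status = MPI.Status()
        cmd = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == STOP:
            break
        out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        comm.send(out.returncode, dest=0, tag=TASK)
```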

  18. Automatic Parallelization of Numerical Python Applications using the Global Arrays Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daily, Jeffrey A.; Lewis, Robert R.

    2011-11-30

    Global Arrays is a software system from Pacific Northwest National Laboratory that enables an efficient, portable, and parallel shared-memory programming interface to manipulate distributed dense arrays. The NumPy module is the de facto standard for numerical calculation in the Python programming language, a language whose use is growing rapidly in the scientific and engineering communities. NumPy provides a powerful N-dimensional array class as well as other scientific computing capabilities. However, like the majority of the core Python modules, NumPy is inherently serial. Using a combination of Global Arrays and NumPy, we have reimplemented NumPy as a distributed drop-in replacement called Global Arrays in NumPy (GAiN). Serial NumPy applications can become parallel, scalable GAiN applications with only minor source code changes. Scalability studies of several different GAiN applications will be presented showing the utility of developing serial NumPy codes which can later run on more capable clusters or supercomputers.

  19. Channel coding/decoding alternatives for compressed TV data on advanced planetary missions.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1972-01-01

    The compatibility of channel coding/decoding schemes with a specific TV compressor developed for advanced planetary missions is considered. Under certain conditions, it is shown that compressed data can be transmitted at approximately the same rate as uncompressed data without any loss in quality. Thus, the full gains of data compression can be achieved in real-time transmission.

  20. Experimental study of an optimized PSP-OSTBC scheme with m-PPM in ultraviolet scattering channel for optical MIMO system.

    PubMed

    Han, Dahai; Gu, Yanjie; Zhang, Min

    2017-08-10

    An optimized scheme of pulse symmetrical position-orthogonal space-time block codes (PSP-OSTBC) is proposed and applied with m-pulse position modulation (m-PPM) without the use of a complex decoding algorithm in an optical multi-input multi-output (MIMO) ultraviolet (UV) communication system. The proposed scheme breaks through the limitation of the traditional Alamouti code and is suitable for high-order m-PPM in a UV scattering channel, verified by both simulation experiments and field tests with specific parameters. The performances of 1×1, 2×1, and 2×2 PSP-OSTBC systems with 4-PPM are compared experimentally as the optimal tradeoff between modulation and coding in practical applications. Meanwhile, the feasibility of the proposed scheme for 8-PPM is examined by a simulation experiment as well. The results suggest that the proposed scheme makes the system insensitive to the influence of path loss with a larger channel capacity, and a higher diversity gain and coding gain with a simple decoding algorithm will be achieved by employing the orthogonality of m-PPM in an optical-MIMO-based ultraviolet scattering channel.
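
    The proposed PSP-OSTBC extends beyond the classical Alamouti code; as background only, here is a minimal sketch of standard 2x1 Alamouti encoding and linear combining over a flat complex channel (the channel and symbol values are illustrative and do not model the paper's UV scattering channel):

```python
import numpy as np

rng = np.random.default_rng(2)

def alamouti_2x1(s1, s2, h1, h2, noise_std=0.0):
    """Classical Alamouti STBC over two symbol periods with one receive antenna."""
    # Period 1: antennas send (s1, s2); period 2: antennas send (-conj(s2), conj(s1)).
    n1, n2 = noise_std * (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
    r1 = h1 * s1 + h2 * s2 + n1
    r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + n2
    # Linear combining yields decoupled estimates scaled by the channel energy.
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    scale = abs(h1) ** 2 + abs(h2) ** 2
    return s1_hat / scale, s2_hat / scale

h1 = rng.normal() + 1j * rng.normal()
h2 = rng.normal() + 1j * rng.normal()
print(alamouti_2x1(1 + 1j, -1 + 1j, h1, h2))   # noise-free: recovers both symbols
```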

  1. ISBT 128 Standard for Coding Medical Products of Human Origin

    PubMed Central

    Ashford, Paul; Delgado, Matthew

    2017-01-01

    Background ISBT 128 is an international standard for the terminology, coding, labeling, and identification of medical products of human origin (MPHO). Full implementation of ISBT 128 improves traceability, transparency, vigilance and surveillance, and interoperability. Methods ICCBBA maintains the ISBT 128 standard through the activities of a network of expert volunteers, including representatives from professional scientific societies, governments and users, to standardize and maintain MPHO identification. These individuals are organized into Technical Advisory Groups and work within a structured framework as part of a quality-controlled standards development process. Results The extensive involvement of international scientific and professional societies in the development of the standard has ensured that ISBT 128 has gained widespread recognition. The user community has developed confidence in the ability of the standard to adapt to new developments in their fields of interest. The standard is fully compatible with Single European Code requirements for tissues and cells and is utilized by many European tissue establishments. ISBT 128's flexibility and robustness have allowed for expansions into subject areas such as cellular therapy, regenerative medicine, and tissue banking. Conclusion ISBT 128 is the internationally recognized standard for coding MPHO and has gained widespread use globally throughout the past two decades. PMID:29344013

  2. A large-scale video codec comparison of x264, x265 and libvpx for practical VOD applications

    NASA Astrophysics Data System (ADS)

    De Cock, Jan; Mavlankar, Aditya; Moorthy, Anush; Aaron, Anne

    2016-09-01

    Over the last years, we have seen exciting improvements in video compression technology, due to the introduction of HEVC and royalty-free coding specifications such as VP9. The potential compression gains of HEVC over H.264/AVC have been demonstrated in different studies, and are usually based on the HM reference software. For VP9, substantial gains over H.264/AVC have been reported in some publications, whereas others reported less optimistic results. Differences in configurations between these publications make it more difficult to assess the true potential of VP9. Practical open-source encoder implementations such as x265 and libvpx (VP9) have matured, and are now showing high compression gains over x264. In this paper, we demonstrate the potential of these encoder implementations, with settings optimized for non-real-time random access, as used in a video-on-demand encoding pipeline. We report results from a large-scale video codec comparison test, which includes x264, x265 and libvpx. A test set consisting of a variety of titles with varying spatio-temporal characteristics from our catalog is used, resulting in tens of millions of encoded frames, hence larger than test sets previously used in the literature. Results are reported in terms of PSNR, SSIM, MS-SSIM, VIF and the recently introduced VMAF quality metric. BD-rate calculations show that using x265 and libvpx vs. x264 can lead to significant bitrate savings for the same quality. x265 outperforms libvpx in most cases, but the performance gap narrows (or even reverses) at the higher resolutions.
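
    The BD-rate calculations cited above compare codecs at equal quality; a minimal sketch of the Bjontegaard delta-rate computation follows (cubic fit of log-rate versus PSNR over the overlapping quality range; the rate/PSNR points are illustrative only):

```python
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Bjontegaard delta rate: average bitrate change (%) of 'test' vs 'ref'
    at equal quality, from four (rate, PSNR) points per codec."""
    lr_ref, lr_test = np.log10(rates_ref), np.log10(rates_test)
    # Fit log-rate as a cubic function of PSNR for each codec.
    p_ref = np.polyfit(psnr_ref, lr_ref, 3)
    p_test = np.polyfit(psnr_test, lr_test, 3)
    lo, hi = max(min(psnr_ref), min(psnr_test)), min(max(psnr_ref), max(psnr_test))
    # Average the difference of the two fits over the overlapping PSNR range.
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_diff = (int_test - int_ref) / (hi - lo)
    return (10 ** avg_diff - 1) * 100.0

# Illustrative numbers only: the 'test' codec needs ~20% less rate at equal PSNR.
r_ref  = np.array([1000, 2000, 4000, 8000]); q_ref  = np.array([34.0, 37.0, 40.0, 43.0])
r_test = 0.8 * r_ref;                        q_test = q_ref
print(f"BD-rate = {bd_rate(r_ref, q_ref, r_test, q_test):+.1f} %")   # about -20 %
```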

  3. Non-binary LDPC-coded modulation for high-speed optical metro networks with backpropagation

    NASA Astrophysics Data System (ADS)

    Arabaci, Murat; Djordjevic, Ivan B.; Saunders, Ross; Marcoccia, Roberto M.

    2010-01-01

    To simultaneously mitigate the linear and nonlinear channel impairments in high-speed optical communications, we propose the use of non-binary low-density-parity-check-coded modulation in combination with a coarse backpropagation method. By employing backpropagation, we reduce the memory in the channel and in return obtain significant reductions in the complexity of the channel equalizer, which is exponentially proportional to the channel memory. We then compensate for the remaining channel distortions using forward error correction based on non-binary LDPC codes. We propose the non-binary-LDPC-coded modulation scheme because, compared to the bit-interleaved binary-LDPC-coded modulation scheme employing turbo equalization, it lowers the computational complexity and latency of the overall system while providing impressively larger coding gains.

  4. Construction of a new regular LDPC code for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Tong, Qing-zhen; Xu, Liang; Huang, Sheng

    2013-05-01

    A novel construction method of the check matrix for the regular low density parity check (LDPC) code is proposed. The novel regular systematically constructed Gallager (SCG)-LDPC(3969,3720) code with the code rate of 93.7% and the redundancy of 6.69% is constructed. The simulation results show that the net coding gain (NCG) and the distance from the Shannon limit of the novel SCG-LDPC(3969,3720) code can respectively be improved by about 1.93 dB and 0.98 dB at the bit error rate (BER) of 10^-8, compared with those of the classic RS(255,239) code in ITU-T G.975 recommendation and the LDPC(32640,30592) code in ITU-T G.975.1 recommendation with the same code rate of 93.7% and the same redundancy of 6.69%. Therefore, the proposed novel regular SCG-LDPC(3969,3720) code has excellent performance, and is more suitable for high-speed long-haul optical transmission systems.

  5. Optimizing the Galileo space communication link

    NASA Technical Reports Server (NTRS)

    Statman, J. I.

    1994-01-01

    The Galileo mission was originally designed to investigate Jupiter and its moons utilizing a high-rate, X-band (8415 MHz) communication downlink with a maximum rate of 134.4 kb/sec. However, following the failure of the high-gain antenna (HGA) to fully deploy, a completely new communication link design was established that is based on Galileo's S-band (2295 MHz), low-gain antenna (LGA). The new link relies on data compression, local and intercontinental arraying of antennas, a (14,1/4) convolutional code, a (255,M) variable-redundancy Reed-Solomon code, decoding feedback, and techniques to reprocess recorded data to greatly reduce data losses during signal acquisition. The combination of these techniques will enable return of significant science data from the mission.

  6. Patient-physician trust: an exploratory study.

    PubMed

    Thom, D H; Campbell, B

    1997-02-01

    Patients' trust in their physicians has recently become a focus of concern, largely owing to the rise of managed care, yet the subject remains largely unstudied. We undertook a qualitative research study of patients' self-reported experiences with trust in a physician to gain further understanding of the components of trust in the context of the patient-physician relationship. Twenty-nine patient participants, aged 26 to 72, were recruited from three diverse practice sites. Four focus groups, each lasting 1.5 to 2 hours, were conducted to explore patients' experiences with trust. Focus groups were audio-recorded, transcribed, and coded by four readers, using principles of grounded theory. The resulting consensus codes were grouped into seven categories of physician behavior, two of which related primarily to technical competence (thoroughness in evaluation and providing appropriate and effective treatment) and five of which were interpersonal (understanding the patient's individual experience, expressing caring, communicating clearly and completely, building partnership/sharing power, and honesty/respect for the patient). Two additional categories were predisposing factors and structural/staffing factors. Each major category had multiple subcategories. Specific examples from each major category are provided. These nine categories of physician behavior encompassed the trust experiences related by the 29 patients. These categories and the specific examples provided by patients provide insights into the process of trust formation and suggest ways in which physicians could be more effective in building and maintaining trust.

  7. Identification and Characterization of Long Non-Coding RNAs Related to Mouse Embryonic Brain Development from Available Transcriptomic Data

    PubMed Central

    He, Hongjuan; Xiu, Youcheng; Guo, Jing; Liu, Hui; Liu, Qi; Zeng, Tiebo; Chen, Yan; Zhang, Yan; Wu, Qiong

    2013-01-01

    Long non-coding RNAs (lncRNAs), as a key group of non-coding RNAs, have gained wide attention. Though lncRNAs have been functionally annotated and systematically explored in higher mammals, few have been systematically identified and annotated. Owing to expression specificity, known lncRNAs expressed in embryonic brain tissues remain limited. Considering that a large number of lncRNAs are only transcribed in brain tissues, studies of lncRNAs in the developing brain are therefore of special interest. Here, publicly available RNA-sequencing (RNA-seq) data from embryonic brain are integrated to identify thousands of embryonic brain lncRNAs by a customized pipeline. A significant proportion of novel transcripts have not been annotated by available genomic resources. The putative embryonic brain lncRNAs are shorter in length, less spliced and show less conservation than known genes. The expression of putative lncRNAs is on average one tenth that of known coding genes, while comparable with known lncRNAs. From chromatin data, putative embryonic brain lncRNAs are associated with active chromatin marks, comparable with known lncRNAs. Embryonic brain-expressed lncRNAs are also indicated to have expression, though not evident, in adult brain. Gene Ontology analysis of putative embryonic brain lncRNAs suggests that they are associated with brain development. The putative lncRNAs are shown to be related to possible cis-regulatory roles in imprinting even though they themselves are deemed to be imprinted lncRNAs. Re-analysis of one knockdown data set suggests that four regulators are associated with lncRNAs. Taken together, the identification and systematic analysis of putative lncRNAs would provide novel insights into uncharacterized mouse non-coding regions and the relationships with mammalian embryonic brain development. PMID:23967161

  8. Large-scale large eddy simulation of nuclear reactor flows: Issues and perspectives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merzari, Elia; Obabko, Aleks; Fischer, Paul

    Numerical simulation has been an intrinsic part of nuclear engineering research since its inception. In recent years a transition is occurring toward predictive, first-principle-based tools such as computational fluid dynamics. Even with the advent of petascale computing, however, such tools still have significant limitations. In the present work some of these issues, and in particular the presence of massive multiscale separation, are discussed, as well as some of the research conducted to mitigate them. Petascale simulations at high fidelity (large eddy simulation/direct numerical simulation) were conducted with the massively parallel spectral element code Nek5000 on a series of representative problems. These simulations shed light on the requirements of several types of simulation: (1) axial flow around fuel rods, with particular attention to wall effects; (2) natural convection in the primary vessel; and (3) flow in a rod bundle in the presence of spacing devices. Finally, the focus of the work presented here is on the lessons learned and the requirements to perform these simulations at exascale. Additional physical insight gained from these simulations is also emphasized.

  9. Large-scale large eddy simulation of nuclear reactor flows: Issues and perspectives

    DOE PAGES

    Merzari, Elia; Obabko, Aleks; Fischer, Paul; ...

    2016-11-03

    Numerical simulation has been an intrinsic part of nuclear engineering research since its inception. In recent years a transition is occurring toward predictive, first-principle-based tools such as computational fluid dynamics. Even with the advent of petascale computing, however, such tools still have significant limitations. In the present work some of these issues, and in particular the presence of massive multiscale separation, are discussed, as well as some of the research conducted to mitigate them. Petascale simulations at high fidelity (large eddy simulation/direct numerical simulation) were conducted with the massively parallel spectral element code Nek5000 on a series of representative problems. These simulations shed light on the requirements of several types of simulation: (1) axial flow around fuel rods, with particular attention to wall effects; (2) natural convection in the primary vessel; and (3) flow in a rod bundle in the presence of spacing devices. Finally, the focus of the work presented here is on the lessons learned and the requirements to perform these simulations at exascale. Additional physical insight gained from these simulations is also emphasized.

  10. Tri-Lab Co-Design Milestone: In-Depth Performance Portability Analysis of Improved Integrated Codes on Advanced Architecture.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoekstra, Robert J.; Hammond, Simon David; Richards, David

    2017-09-01

    This milestone is a tri-lab deliverable supporting ongoing Co-Design efforts impacting applications in the Integrated Codes (IC) program element and the Advanced Technology Development and Mitigation (ATDM) program element. In FY14, the tri-labs looked at porting proxy applications to technologies of interest for ATS procurements. In FY15, a milestone was completed evaluating proxy applications in multiple programming models, and in FY16, a milestone was completed focusing on the migration of lessons learned back into production code development. This year, the co-design milestone focuses on extracting the knowledge gained and/or code revisions back into production applications.

  11. DCT based interpolation filter for motion compensation in HEVC

    NASA Astrophysics Data System (ADS)

    Alshin, Alexander; Alshina, Elena; Park, Jeong Hoon; Han, Woo-Jin

    2012-10-01

    The High Efficiency Video Coding (HEVC) draft standard has the challenging goal of doubling coding efficiency compared to H.264/AVC. Many aspects of the traditional hybrid coding framework were improved during new standard development. Motion compensated prediction, in particular the interpolation filter, is one area that was improved significantly over H.264/AVC. This paper presents the details of the interpolation filter design of the draft HEVC standard. The coding efficiency improvement over the H.264/AVC interpolation filter is studied and experimental results are presented, which show a 4.0% average bitrate reduction for the Luma component and an 11.3% average bitrate reduction for the Chroma component. The coding efficiency gains are significant for some video sequences and can reach up to 21.7%.
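
    As a rough illustration of fractional-sample motion-compensated interpolation, the sketch below applies an 8-tap half-sample filter along one row of luma samples. The coefficients [-1, 4, -11, 40, 40, -11, 4, -1]/64 are the ones commonly quoted for the HEVC draft half-pel luma filter and are taken here as an assumption; the input samples are arbitrary.

```python
import numpy as np

# 8-tap half-sample luma filter commonly quoted for the HEVC draft
# (treated here as an assumption; dividing by 64 keeps unity DC gain).
HALF_PEL = np.array([-1, 4, -11, 40, 40, -11, 4, -1], dtype=float) / 64.0

def interpolate_half_pel(samples):
    """Return values midway between integer sample positions of a 1-D signal."""
    padded = np.pad(samples.astype(float), (3, 4), mode="edge")
    # Each half-pel value is a weighted sum of the 8 surrounding integer samples.
    return np.convolve(padded, HALF_PEL[::-1], mode="valid")

x = np.array([10, 10, 10, 50, 90, 90, 90, 90])
print(np.round(interpolate_half_pel(x), 2))
```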

  12. Student perception of travel service learning experience in Morocco.

    PubMed

    Puri, Aditi; Kaddoura, Mahmoud; Dominick, Christine

    2013-08-01

    This study explores the perceptions of health profession students participating in academic service learning in Morocco with respect to adapting health care practices to cultural diversity. The authors utilized semi-structured, open-ended interviews to explore the perceptions of health profession students. Nine dental hygiene and nursing students who traveled to Morocco to provide oral and general health services were interviewed. After interviews were recorded, they were transcribed verbatim to ascertain descriptive validity and to generate inductive and deductive codes that constitute the major themes of the data analysis. Thereafter, NVIVO 8 was used to rapidly determine the frequency of applied codes. The authors compared the codes and themes to establish interpretive validity. Codes and themes were initially determined independently by co-authors and applied to the data subsequently. The authors compared the applied codes to establish intra-rater reliability. International service learning experiences led to perceptions of growth as a health care provider among students. The application of knowledge and skills learned in academic programs and service learning settings was found to help in bridging the theory-practice gap. The specific experience enabled students to gain an understanding of diverse health care and cultural practices in Morocco. Students perceived that the experience gained in international service learning can heighten awareness of diverse cultural and health care practices to foster professional growth of health professionals.

  13. Combining image-processing and image compression schemes

    NASA Technical Reports Server (NTRS)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

    An investigation into the combining of image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented on the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection schemes, can gain from the added image resolution via the enhancement.

  14. Trellis coding with multidimensional QAM signal sets

    NASA Technical Reports Server (NTRS)

    Pietrobon, Steven S.; Costello, Daniel J.

    1993-01-01

    Trellis coding using multidimensional QAM signal sets is investigated. Finite-size 2D signal sets are presented that have minimum average energy, are 90-deg rotationally symmetric, and have from 16 to 1024 points. The best trellis codes using the finite 16-QAM signal set with two, four, six, and eight dimensions are found by computer search (the multidimensional signal set is constructed from the 2D signal set). The best moderate complexity trellis codes for infinite lattices with two, four, six, and eight dimensions are also found. The minimum free squared Euclidean distance and number of nearest neighbors for these codes were used as the selection criteria. Many of the multidimensional codes are fully rotationally invariant and give asymptotic coding gains up to 6.0 dB. From the infinite lattice codes, the best codes for transmitting J, J + 1/4, J + 1/3, J + 1/2, J + 2/3, and J + 3/4 bit/sym (J an integer) are presented.
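
    The asymptotic coding gains quoted above follow the usual definition for trellis-coded modulation; the expression below is the standard textbook form, restated for reference rather than taken from the paper's derivation, with E denoting the average signal energy per symbol.

      % Asymptotic coding gain of a TCM scheme over an uncoded reference
      % constellation (standard definition, restated for reference only):
      \gamma_{\infty} \;=\; 10\,\log_{10}
        \frac{d_{\mathrm{free}}^{2} / E_{\mathrm{coded}}}
             {d_{\mathrm{min}}^{2} / E_{\mathrm{uncoded}}}\ \ [\mathrm{dB}]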

  15. Telemetering and telecommunications research

    NASA Technical Reports Server (NTRS)

    Osborne, William P.

    1991-01-01

    The research center activities during the reporting period have been focused on three areas: (1) developing the necessary equipment and test procedures to support the testing of 8PSK-TCM through TDRSS from the WSGT; (2) extending the theoretical decoder work to higher speeds with a design goal of 600 Mbps at 2 bits/Hz; and (3) completing the initial phase of the CPFSK Multi-H research and determining what subsets (if any) of these coding schemes are useful in the TDRSS environment. The equipment for the WSGT TCM testing has been completed and is functioning in the lab at NMSU. Measured results to date indicate that the uncoded system with the modified HRD and NMSU symbol sync operates at 1 to 1.5 dB from theory when processing encoded 8PSK. The NMSU pragmatic decoder, when combined with these units, produces approximately 2.9 dB of coding gain at 10(exp -5) BER. Our study of CPFSK with Multi-H coding has reached a critical stage. The principal conclusions reached in this activity are: (1) no scheme using Multi-H alone investigated by us or found in the literature produces power/bandwidth trades that are as good as TCM with filtered 8PSK; (2) when Multi-H is combined with convolutional coding, one can obtain better coding gain than with Multi-H alone but still no better power/bandwidth performance than TCM, and these gains are available only with complex receivers; (3) the only advantage we can find for the CPFSK schemes over filtered MPSK with TCM is that they are constant envelope (however, constant envelope is of no benefit in a multiple access channel and of questionable benefit in a single access channel, since driving the TWT to saturation in this situation is generally acceptable); and (4) based upon these results, the center's research program will focus on concluding the existing CPFSK studies.

  16. Direct Sequence Spread Spectrum (DSSS) Receiver, User Manual

    DTIC Science & Technology

    2008-01-01

    sampled data is clocked into correlator data registers and a comparison is made between the code and data register contents, producing a correlation ...symbol (equal to the processing gain Gp) but need not be otherwise synchronised with the spreading codes. This allows a very long and noise-like PRBS ...and Q channels are independently but synchronously sampled. [Block-diagram labels: complex/real ADC, FIR filter, interpolator, acquisition correlators]

  17. 26 CFR 1.61-11 - Pensions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... employee's contributions may constitute long-term capital gain, rather than ordinary income. (b) Cross... the Code and the regulations thereunder. Amounts received as pensions or annuities under the Social...

  18. 26 CFR 1.61-11 - Pensions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... employee's contributions may constitute long-term capital gain, rather than ordinary income. (b) Cross... the Code and the regulations thereunder. Amounts received as pensions or annuities under the Social...

  19. A Synchronization Algorithm and Implementation for High-Speed Block Codes Applications. Part 4

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Zhang, Yu; Nakamura, Eric B.; Uehara, Gregory T.

    1998-01-01

    Block codes have trellis structures and decoders amenable to high speed CMOS VLSI implementation. For a given CMOS technology, these structures enable operating speeds higher than those achievable using convolutional codes for only modest reductions in coding gain. As a result, block codes have tremendous potential for satellite trunk and other future high-speed communication applications. This paper describes a new approach for implementation of the synchronization function for block codes. The approach utilizes the output of the Viterbi decoder and therefore employs the strength of the decoder. Its operation requires no knowledge of the signal-to-noise ratio of the received signal, has a simple implementation, adds no overhead to the transmitted data, and has been shown to be effective in simulation for received SNR greater than 2 dB.

  20. GAIN Technology Workshops Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braase, Lori Ann

    National and global demand for nuclear energy is increasing and United States (U.S.) global leadership is eroding. There is a sense of urgency with respect to the deployment of innovative nuclear energy technologies. The Gateway for Accelerated Innovation in Nuclear (GAIN) initiative is based on the simultaneous achievement of three strategic goals. The first is maintaining global technology leadership within the U.S. Department of Energy (DOE). The second is enabling global industrial leadership for nuclear vendors and suppliers. The third is focused on utility optimization of nuclear energy within the clean energy portfolio. An effective public-private partnership is required to achieve these goals. DOE recognizes the recent sense of urgency new developers and investors have in getting their concepts to market. They know that time to market for nuclear technology takes too long and that the facilities needed to conduct the necessary research, development and demonstration (RD&D) activities are very expensive to develop and maintain. Early technologies, at the lower technology readiness levels (TRL), need materials testing, analysis, modeling, code development, etc., most of which currently exist in the DOE national laboratory system. However, mature technologies typically need large component testing and demonstration facilities, which are expensive and long-lead efforts. By understanding the needs of advanced nuclear technology developers, GAIN will connect DOE national laboratory capabilities (e.g., facilities, expertise, materials, and data) with industry RD&D needs. In addition, GAIN is working with the Nuclear Regulatory Commission (NRC) to streamline processes and increase understanding of the licensing requirements for advanced reactors.

  1. Abundance and distribution of the highly iterated palindrome 1 (HIP1) among prokaryotes

    PubMed Central

    Moya, Andrés

    2011-01-01

    We have studied the abundance and phylogenetic distribution of the Highly Iterated Palindrome 1 (HIP1) among sequenced prokaryotic genomes. We show that an overrepresentation of HIP1 is exclusive to some lineages of cyanobacteria, and that this abundance was gained only once during evolution and was subsequently lost in the lineage leading to marine pico-cyanobacteria. We show that among cyanobacterial protein sequences with annotated Pfam domains, only OpcA (glucose 6-phosphate dehydrogenase assembly protein) has a phylogenetic distribution fully matching HIP1 abundance, suggesting a functional relationship; we also show that DAM methylase (an enzyme that has the four central nucleotides of HIP1 as its site of action) is present in all cyanobacterial genomes (independently of their HIP1 content) with the exception of marine pico-cyanobacteria, which may have lost this enzyme during the process of genome reduction. Our analyses also show that in some prokaryotic lineages (particularly in species with large genomes), HIP1 is unevenly distributed between coding and non-coding DNA (being more common in coding regions; with the exception of Cyanobacteria Yellowstone B' and Synechococcus elongatus, where the reverse pattern is true). Finally, we explore the hypothesis that HIP1 can be used as a molecular “water-mark” to identify horizontally transferred genes from cyanobacteria to other species. PMID:22312590
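
    A first-pass version of the coding vs. non-coding comparison above can be done by simply counting HIP1 occurrences inside and outside annotated coding regions. A minimal sketch follows, assuming HIP1 is the octamer GCGATCGC and using a toy sequence with toy coordinates rather than real genome annotation.

      import re

      # Count HIP1 (assumed here to be the octamer GCGATCGC) inside and outside
      # annotated coding regions. Sequence and coordinates are toy examples,
      # not a real cyanobacterial genome.
      HIP1 = "GCGATCGC"

      def count_hip1(seq):
          return len(re.findall(f"(?={HIP1})", seq))   # overlapping matches

      genome = ("ATGGCGATCGCTTTGCGATCGCAAAGCGATCGCTGA" * 3) + "CCCCGCGATCGCAAAA" * 2
      cds = [(0, 36), (36, 72), (72, 108)]             # toy coding intervals (0-based, half-open)

      coding = "".join(genome[a:b] for a, b in cds)
      noncoding = genome[108:]
      print("coding:", count_hip1(coding), "per kb:", 1000 * count_hip1(coding) / len(coding))
      print("non-coding:", count_hip1(noncoding), "per kb:", 1000 * count_hip1(noncoding) / len(noncoding))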

  2. Abundance and distribution of the highly iterated palindrome 1 (HIP1) among prokaryotes.

    PubMed

    Delaye, Luis; Moya, Andrés

    2011-09-01

    We have studied the abundance and phylogenetic distribution of the Highly Iterated Palindrome 1 (HIP1) among sequenced prokaryotic genomes. We show that an overrepresentation of HIP1 is exclusive to some lineages of cyanobacteria, and that this abundance was gained only once during evolution and was subsequently lost in the lineage leading to marine pico-cyanobacteria. We show that among cyanobacterial protein sequences with annotated Pfam domains, only OpcA (glucose 6-phosphate dehydrogenase assembly protein) has a phylogenetic distribution fully matching HIP1 abundance, suggesting a functional relationship; we also show that DAM methylase (an enzyme that has the four central nucleotides of HIP1 as its site of action) is present in all cyanobacterial genomes (independently of their HIP1 content) with the exception of marine pico-cyanobacteria, which may have lost this enzyme during the process of genome reduction. Our analyses also show that in some prokaryotic lineages (particularly in species with large genomes), HIP1 is unevenly distributed between coding and non-coding DNA (being more common in coding regions; with the exception of Cyanobacteria Yellowstone B' and Synechococcus elongatus, where the reverse pattern is true). Finally, we explore the hypothesis that HIP1 can be used as a molecular "water-mark" to identify horizontally transferred genes from cyanobacteria to other species.

  3. Problems with numerical techniques: Application to mid-loop operation transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bryce, W.M.; Lillington, J.N.

    1997-07-01

    There has been an increasing need to consider accidents at shutdown, which have been shown in some PSAs to provide a significant contribution to overall risk. In the UK, experience has been gained at three levels: (1) Assessment of codes against experiments; (2) Plant studies specifically for Sizewell B; and (3) Detailed review of modelling to support the plant studies for Sizewell B. The work has largely been carried out using various versions of RELAP5 and SCDAP/RELAP5. The paper details some of the problems that have needed to be addressed. It is believed by the authors that these kinds of problems are probably generic to most of the present generation of system thermal-hydraulic codes for the conditions present in mid-loop transients. Thus, as far as possible, these problems and solutions are proposed in generic terms. The areas addressed include: condensables at low pressure, poor time step calculation detection, water packing, inadequate physical modelling, numerical heat transfer, and mass errors. In general, single code modifications have been proposed to solve the problems. These have been very much concerned with means of improving existing models rather than by formulating a completely new approach. They have been produced after a particular problem has arisen. Thus, and this has been borne out in practice, the danger is that when new transients are attempted, new problems arise which then also require patching.

  4. Predicting Energy Consumption for Potential Effective Use in Hybrid Vehicle Powertrain Management Using Driver Prediction

    NASA Astrophysics Data System (ADS)

    Magnuson, Brian

    A proof-of-concept software-in-the-loop study is performed to assess the accuracy of predicted net and charge-gaining energy consumption for potential effective use in optimizing powertrain management of hybrid vehicles. With promising results of improving the fuel efficiency of a thermostatic control strategy for a series plug-in hybrid-electric vehicle by 8.24%, the route and speed prediction machine learning algorithms are redesigned and implemented for real-world testing in a stand-alone C++ code-base to ingest map data, learn and predict driver habits, and store driver data for fast startup and shutdown of the controller or computer used to execute the compiled algorithm. Speed prediction is performed using a multi-layer, multi-input, multi-output neural network with feed-forward prediction and gradient descent through back-propagation training. Route prediction utilizes a Hidden Markov Model with a recurrent forward algorithm for prediction and multi-dimensional hash maps to store state and state distribution constraining associations between atomic road segments and end destinations. Predicted energy is calculated using the predicted time-series speed and elevation profile over the predicted route and the road-load equation. Testing of the code-base is performed over a known road network spanning 24x35 blocks on the south hill of Spokane, Washington. A large set of training routes is traversed once to add randomness to the route prediction algorithm, and a subset of the training routes (the test routes) is traversed to assess the accuracy of the net and charge-gaining predicted energy consumption. Each test route is traveled a random number of times with varying speed conditions from traffic and pedestrians to add randomness to speed prediction. Prediction data is stored and analyzed in a post-process Matlab script. The aggregated results and analysis of all traversals of all test routes reflect the performance of the Driver Prediction algorithm. The error of average energy gained through charge-gaining events is 31.3%, and the error of average net energy consumed is 27.3%. The average delta and average standard deviation of the delta of predicted energy gained through charge-gaining events are 0.639 and 0.601 Wh, respectively, for individual time-series calculations. Similarly, the average delta and average standard deviation of the delta of the predicted net energy consumed are 0.567 and 0.580 Wh, respectively, for individual time-series calculations. The average delta and standard deviation of the delta of the predicted speed are 1.60 and 1.15, respectively, also for the individual time-series measurements. Route prediction accuracy is 91%. Overall, test routes are traversed 151 times for a total test distance of 276.4 km.
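
    The predicted energy figures come from integrating the road-load equation over the predicted speed and elevation profile. A minimal sketch of that calculation is shown below; the vehicle parameters (mass, drag area, rolling-resistance coefficient) are illustrative placeholders, not values from the study.

      import numpy as np

      # Road-load energy estimate over a predicted speed/elevation profile.
      # Parameter values are illustrative assumptions, not those used in the study.
      MASS = 1600.0    # vehicle mass [kg]
      CDA = 0.75       # drag coefficient * frontal area [m^2]
      CRR = 0.010      # rolling-resistance coefficient
      RHO = 1.2        # air density [kg/m^3]
      G = 9.81         # gravity [m/s^2]

      def road_load_energy(speed, elevation, dt=1.0):
          """Return (net_Wh, charge_gaining_Wh) for time-series speed [m/s] and elevation [m]."""
          v = np.asarray(speed, dtype=float)
          accel = np.gradient(v, dt)
          grade = np.gradient(np.asarray(elevation, dtype=float), dt) / np.maximum(v, 0.1)
          force = (MASS * accel                     # inertia
                   + 0.5 * RHO * CDA * v ** 2       # aerodynamic drag
                   + CRR * MASS * G                 # rolling resistance
                   + MASS * G * grade)              # grade load (small-angle approximation)
          power = force * v                         # instantaneous power [W]
          net_wh = power.sum() * dt / 3600.0
          charge_wh = -power[power < 0].sum() * dt / 3600.0   # regen / charge-gaining events
          return net_wh, charge_wh

      v = np.concatenate([np.linspace(0, 15, 30), np.full(60, 15), np.linspace(15, 0, 30)])
      z = np.linspace(0, 5, v.size)
      print(road_load_energy(v, z))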

  5. Framing Innovation: Does an Instructional Vision Help Superintendents Gain Acceptance for a Large-Scale Technology Initiative?

    ERIC Educational Resources Information Center

    Flanagan, Gina E.

    2014-01-01

    There is limited research that outlines how a superintendent's instructional vision can help to gain acceptance of a large-scale technology initiative. This study explored how superintendents gain acceptance for a large-scale technology initiative (specifically a 1:1 device program) through various leadership actions. The role of the instructional…

  6. Numerical and Experimental Investigations of the Flow in a Stationary Pelton Bucket

    NASA Astrophysics Data System (ADS)

    Nakanishi, Yuji; Fujii, Tsuneaki; Kawaguchi, Sho

    A numerical code based on one of mesh-free particle methods, a Moving-Particle Semi-implicit (MPS) Method has been used for the simulation of free surface flows in a bucket of Pelton turbines so far. In this study, the flow in a stationary bucket is investigated by MPS simulation and experiment to validate the numerical code. The free surface flow dependent on the angular position of the bucket and the corresponding pressure distribution on the bucket computed by the numerical code are compared with that obtained experimentally. The comparison shows that numerical code based on MPS method is useful as a tool to gain an insight into the free surface flows in Pelton turbines.

  7. A novel construction scheme of QC-LDPC codes based on the RU algorithm for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-03-01

    A novel lower-complexity construction scheme of quasi-cyclic low-density parity-check (QC-LDPC) codes for optical transmission systems is proposed based on the structure of the parity-check matrix for the Richardson-Urbanke (RU) algorithm. Furthermore, a novel irregular QC-LDPC(4 288, 4 020) code with a high code rate of 0.937 is constructed by this scheme. The simulation analyses show that the net coding gain (NCG) of the novel irregular QC-LDPC(4 288, 4 020) code is respectively 2.08 dB, 1.25 dB and 0.29 dB more than those of the classic RS(255, 239) code, the LDPC(32 640, 30 592) code and the irregular QC-LDPC(3 843, 3 603) code at a bit error rate (BER) of 10^-6. The irregular QC-LDPC(4 288, 4 020) code has lower encoding/decoding complexity compared with the LDPC(32 640, 30 592) code and the irregular QC-LDPC(3 843, 3 603) code. The proposed QC-LDPC(4 288, 4 020) code is therefore well suited to the increasing development requirements of high-speed optical transmission systems.
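
    Quasi-cyclic LDPC parity-check matrices are assembled from circulant permutation blocks, which is what makes RU-style encoding and hardware implementation tractable. The sketch below builds a small QC parity-check matrix from an exponent (shift) matrix; the shift values are arbitrary placeholders, not the exponents of the proposed QC-LDPC(4 288, 4 020) code.

      import numpy as np

      # Build a quasi-cyclic LDPC parity-check matrix H from an exponent matrix:
      # each entry s >= 0 becomes a b x b circulant permutation matrix (identity
      # cyclically shifted by s); -1 marks an all-zero block.
      def circulant(shift, b):
          return np.roll(np.eye(b, dtype=np.uint8), shift, axis=1)

      def qc_ldpc_H(exponents, b):
          rows = []
          for row in exponents:
              blocks = [np.zeros((b, b), np.uint8) if s < 0 else circulant(s, b) for s in row]
              rows.append(np.hstack(blocks))
          return np.vstack(rows)

      exponents = [[0, 1, 2, 4, -1],      # arbitrary illustrative shifts
                   [1, 3, -1, 0, 2],
                   [-1, 0, 4, 2, 3]]
      H = qc_ldpc_H(exponents, b=5)
      print(H.shape)            # (15, 25): 3x5 blocks of 5x5 circulants
      print(H.sum(axis=0))      # column weights reflect the -1 (zero) blocks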

  8. Multiple-access relaying with network coding: iterative network/channel decoding with imperfect CSI

    NASA Astrophysics Data System (ADS)

    Vu, Xuan-Thang; Renzo, Marco Di; Duhamel, Pierre

    2013-12-01

    In this paper, we study the performance of the four-node multiple-access relay channel with binary Network Coding (NC) in various Rayleigh fading scenarios. In particular, two relay protocols, decode-and-forward (DF) and demodulate-and-forward (DMF), are considered. In the first case, channel decoding is performed at the relay before NC and forwarding. In the second case, only demodulation is performed at the relay. The contributions of the paper are as follows: (1) two joint network/channel decoding (JNCD) algorithms, which take into account possible decoding errors at the relay, are developed for both the DF and DMF relay protocols; (2) both perfect channel state information (CSI) and imperfect CSI at the receivers are studied; in addition, we propose a practical method to forward the relay's error characterization to the destination (quantization of the BER), which results in a fully practical scheme; (3) we show by simulation that the number of pilot symbols only affects the coding gain but not the diversity order, and that quantization accuracy affects both coding gain and diversity order. Moreover, when compared with recent results using the DMF protocol, our proposed DF protocol algorithm shows an improvement of 4 dB in fully interleaved Rayleigh fading channels and 0.7 dB in block Rayleigh fading channels.
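
    In binary network coding at the relay, the two users' (decoded or demodulated) bit streams are combined by XOR before forwarding, so a destination that already has one stream can recover the other. A minimal sketch of that combining step, independent of the decoding algorithms in the paper:

      import numpy as np

      # Binary network coding at a two-source relay: the relay forwards the XOR
      # of the two users' bit streams; a destination that has decoded user A
      # recovers user B from the network-coded stream (and vice versa).
      rng = np.random.default_rng(0)
      bits_a = rng.integers(0, 2, 16, dtype=np.uint8)
      bits_b = rng.integers(0, 2, 16, dtype=np.uint8)

      relay_out = bits_a ^ bits_b          # network-coded packet sent by the relay
      recovered_b = bits_a ^ relay_out     # destination combines direct + relayed links

      assert np.array_equal(recovered_b, bits_b)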

  9. Computerized Dental Comparison: A Critical Review of Dental Coding and Ranking Algorithms Used in Victim Identification.

    PubMed

    Adams, Bradley J; Aschheim, Kenneth W

    2016-01-01

    Comparison of antemortem and postmortem dental records is a leading method of victim identification, especially for incidents involving a large number of decedents. This process may be expedited with computer software that provides a ranked list of best possible matches. This study provides a comparison of the most commonly used conventional coding and sorting algorithms used in the United States (WinID3) with a simplified coding format that utilizes an optimized sorting algorithm. The simplified system consists of seven basic codes and utilizes an optimized algorithm based largely on the percentage of matches. To perform this research, a large reference database of approximately 50,000 antemortem and postmortem records was created. For most disaster scenarios, the proposed simplified codes, paired with the optimized algorithm, performed better than WinID3 which uses more complex codes. The detailed coding system does show better performance with extremely large numbers of records and/or significant body fragmentation. © 2015 American Academy of Forensic Sciences.

  10. Fixed forced detection for fast SPECT Monte-Carlo simulation

    NASA Astrophysics Data System (ADS)

    Cajgfinger, T.; Rit, S.; Létang, J. M.; Halty, A.; Sarrut, D.

    2018-03-01

    Monte-Carlo simulations of SPECT images are notoriously slow to converge due to the large ratio between the number of photons emitted and detected in the collimator. This work proposes a method to accelerate the simulations based on fixed forced detection (FFD) combined with an analytical response of the detector. FFD is based on a Monte-Carlo simulation but forces the detection of a photon in each detector pixel weighted by the probability of emission (or scattering) and transmission to this pixel. The method was evaluated with numerical phantoms and on patient images. We obtained differences with analog Monte Carlo lower than the statistical uncertainty. The overall computing time gain can reach up to five orders of magnitude. Source code and examples are available in the Gate V8.0 release.
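
    Fixed forced detection scores every detector pixel with the product of the emission (or scattering) probability toward that pixel and the transmission along the path, instead of waiting for an analog photon to reach it. A schematic sketch of that weighting is given below; the geometry and attenuation values are simplified placeholders, not GATE's detector model.

      import numpy as np

      # Schematic fixed-forced-detection weighting: for one emission point, each
      # detector pixel receives weight = P(emit toward pixel) * P(transmission).
      # Attenuation and solid-angle values are illustrative assumptions.
      def ffd_weights(mu_along_path, solid_angle_fraction):
          transmission = np.exp(-mu_along_path.sum(axis=1))      # Beer-Lambert per pixel path
          return solid_angle_fraction * transmission

      n_pixels, n_steps = 8, 50
      step_mu = np.full((n_pixels, n_steps), 0.15 * 0.02)        # mu [1/cm] * step length [cm]
      solid_angle = np.full(n_pixels, 1.0 / (4 * np.pi) * 1e-3)  # tiny per-pixel emission probability
      print(ffd_weights(step_mu, solid_angle))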

  11. Fixed forced detection for fast SPECT Monte-Carlo simulation.

    PubMed

    Cajgfinger, T; Rit, S; Létang, J M; Halty, A; Sarrut, D

    2018-03-02

    Monte-Carlo simulations of SPECT images are notoriously slow to converge due to the large ratio between the number of photons emitted and detected in the collimator. This work proposes a method to accelerate the simulations based on fixed forced detection (FFD) combined with an analytical response of the detector. FFD is based on a Monte-Carlo simulation but forces the detection of a photon in each detector pixel weighted by the probability of emission (or scattering) and transmission to this pixel. The method was evaluated with numerical phantoms and on patient images. We obtained differences with analog Monte Carlo lower than the statistical uncertainty. The overall computing time gain can reach up to five orders of magnitude. Source code and examples are available in the Gate V8.0 release.

  12. Implementation of a partitioned algorithm for simulation of large CSI problems

    NASA Technical Reports Server (NTRS)

    Alvin, Kenneth F.; Park, K. C.

    1991-01-01

    The implementation of a partitioned numerical algorithm for determining the dynamic response of coupled structure/controller/estimator finite-dimensional systems is reviewed. The partitioned approach leads to a set of coupled first and second-order linear differential equations which are numerically integrated with extrapolation and implicit step methods. The present software implementation, ACSIS, utilizes parallel processing techniques at various levels to optimize performance on a shared-memory concurrent/vector processing system. A general procedure for the design of controller and filter gains is also implemented, which utilizes the vibration characteristics of the structure to be solved. Also presented are: example problems; a user's guide to the software; the procedures and algorithm scripts; a stability analysis for the algorithm; and the source code for the parallel implementation.

  13. Contrast Gain Control in Auditory Cortex

    PubMed Central

    Rabinowitz, Neil C.; Willmore, Ben D.B.; Schnupp, Jan W.H.; King, Andrew J.

    2011-01-01

    Summary The auditory system must represent sounds with a wide range of statistical properties. One important property is the spectrotemporal contrast in the acoustic environment: the variation in sound pressure in each frequency band, relative to the mean pressure. We show that neurons in ferret auditory cortex rescale their gain to partially compensate for the spectrotemporal contrast of recent stimulation. When contrast is low, neurons increase their gain, becoming more sensitive to small changes in the stimulus, although the effectiveness of contrast gain control is reduced at low mean levels. Gain is primarily determined by contrast near each neuron's preferred frequency, but there is also a contribution from contrast in more distant frequency bands. Neural responses are modulated by contrast over timescales of ∼100 ms. By using contrast gain control to expand or compress the representation of its inputs, the auditory system may be seeking an efficient coding of natural sounds. PMID:21689603
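
    Contrast gain control of this kind is commonly modelled by letting the semi-saturation point of a sigmoidal contrast-response function track recent stimulus contrast. A toy illustration follows (Naka-Rushton form; the parameters are arbitrary and this is not the fitting procedure used in the paper):

      import numpy as np

      # Toy contrast gain control: a Naka-Rushton contrast-response function whose
      # semi-saturation contrast c50 adapts toward recent stimulus contrast, so
      # gain rises in low-contrast epochs and falls in high-contrast epochs.
      def response(contrast, c50, r_max=30.0, n=2.0):
          c = np.asarray(contrast, dtype=float)
          return r_max * c ** n / (c ** n + c50 ** n)

      def adapt_c50(recent_contrast, c50_min=0.05, k=0.8):
          return c50_min + k * np.mean(recent_contrast)   # gain rescales with recent contrast

      low_env = np.full(50, 0.1)      # low-contrast environment
      high_env = np.full(50, 0.6)     # high-contrast environment
      probe = 0.3
      print(response(probe, adapt_c50(low_env)))   # larger response: gain is high
      print(response(probe, adapt_c50(high_env)))  # smaller response: gain is compressed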

  14. Preparing beginning reading teachers: An experimental comparison of initial early literacy field experiences.

    PubMed

    Al Otaiba, Stephanie; Lake, Vickie E; Greulich, Luana; Folsom, Jessica S; Guidry, Lisa

    2012-01-01

    This randomized-control trial examined the learning of preservice teachers taking an initial Early Literacy course in an early childhood education program and of the kindergarten or first grade students they tutored in their field experience. Preservice teachers were randomly assigned to one of two tutoring programs: Book Buddies and Tutor Assisted Intensive Learning Strategies (TAILS), which provided identical meaning-focused instruction (shared book reading) but differed in the presentation of code-focused skills. TAILS used explicit, scripted lessons, whereas Book Buddies required that code-focused instruction take place during shared book reading. Our research goal was to understand which tutoring program would be most effective in improving knowledge about reading, would lead to broad and deep language and preparedness of the novice preservice teachers, and would yield the most successful student reading outcomes. Findings indicate that all preservice teachers demonstrated similar gains in knowledge, but preservice teachers in the TAILS program demonstrated broader and deeper application of knowledge and higher self-ratings of preparedness to teach reading. Students in both conditions made similar comprehension gains, but students tutored with TAILS showed significantly stronger decoding gains.

  15. Real-time chirp-coded imaging with a programmable ultrasound biomicroscope.

    PubMed

    Bosisio, Mattéo R; Hasquenoph, Jean-Michel; Sandrin, Laurent; Laugier, Pascal; Bridal, S Lori; Yon, Sylvain

    2010-03-01

    Ultrasound biomicroscopy (UBM) of mice can provide a testing ground for new imaging strategies. The UBM system presented in this paper facilitates the development of imaging and measurement methods with programmable design, arbitrary waveform coding, broad bandwidth (2-80 MHz), digital filtering, programmable processing, RF data acquisition, multithread/multicore real-time display, and rapid mechanical scanning (
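
    Chirp-coded excitation trades peak pressure for signal-to-noise ratio: a long linear-FM pulse is transmitted and compressed on receive with a matched filter. A minimal sketch of the coding and pulse-compression step is below; the centre frequency, bandwidth, duration and sampling rate are illustrative, not this system's settings.

      import numpy as np

      # Linear-FM (chirp) coded excitation and matched-filter pulse compression.
      # Centre frequency, bandwidth and duration are illustrative values only.
      fs = 400e6                       # sampling rate [Hz]
      f0, bw, dur = 30e6, 20e6, 2e-6   # centre frequency, sweep bandwidth, pulse length
      t = np.arange(0, dur, 1 / fs)
      chirp = np.sin(2 * np.pi * ((f0 - bw / 2) * t + (bw / (2 * dur)) * t ** 2))

      # Echo from a point target with delay and additive noise.
      delay = int(1.5e-6 * fs)
      rx = np.zeros(4096)
      rx[delay:delay + chirp.size] += 0.2 * chirp
      rx += 0.05 * np.random.default_rng(1).standard_normal(rx.size)

      # Matched filter = correlation with the transmitted chirp (pulse compression).
      compressed = np.correlate(rx, chirp, mode="same")
      peak = int(np.argmax(np.abs(compressed)))
      print("estimated echo delay (samples):", peak - chirp.size // 2)   # ~delay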

  16. Self-recovery fragile watermarking algorithm based on SPIHT

    NASA Astrophysics Data System (ADS)

    Xin, Li Ping

    2015-12-01

    A fragile watermarking algorithm based on SPIHT coding is proposed, which can recover the primary image itself. The novelty of the algorithm is that it supports tamper localization and self-restoration, and the recovery achieves a very good effect. First, utilizing the zero-tree structure, the algorithm compresses and encodes the image itself to obtain self-correlated watermark data, so as to greatly reduce the quantity of embedded watermark data. Then the watermark data are encoded with an error-correcting code, and the check bits and watermark bits are scrambled and embedded to enhance the recovery ability. At the same time, by embedding the watermark into the two least significant bit planes of the gray-level image, the watermarked image retains good visual quality. The experimental results show that the proposed algorithm can not only detect various processing operations such as noise addition, cropping, and filtering, but can also recover the tampered image and realize blind detection. Peak signal-to-noise ratios of the watermarked image were higher than those of other similar algorithms, and the algorithm's resistance to attacks was enhanced.
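
    The embedding step described above amounts to writing the (ECC-protected, scrambled) watermark bits into the two least significant bit planes of the 8-bit image. A minimal sketch of that embedding and extraction, independent of the SPIHT compression and error-correction stages:

      import numpy as np

      # Embed a bit stream into the two least significant bit planes of an 8-bit
      # image and read it back. Only the embedding step is sketched; the SPIHT
      # self-description, scrambling and error-correcting code are omitted.
      def embed_2lsb(image, bits):
          flat = image.astype(np.uint8).ravel().copy()
          pairs = bits.reshape(-1, 2)                      # 2 bits per pixel
          payload = (pairs[:, 0] << 1) | pairs[:, 1]
          flat[: payload.size] = (flat[: payload.size] & 0xFC) | payload
          return flat.reshape(image.shape)

      def extract_2lsb(image, n_bits):
          flat = image.ravel()[: n_bits // 2]
          return np.stack([(flat >> 1) & 1, flat & 1], axis=1).ravel()

      rng = np.random.default_rng(2)
      img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
      wm = rng.integers(0, 2, 1024, dtype=np.uint8)
      marked = embed_2lsb(img, wm)
      assert np.array_equal(extract_2lsb(marked, wm.size), wm)
      print("max pixel change:", int(np.abs(marked.astype(int) - img.astype(int)).max()))  # <= 3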

  17. Information trade-offs for optical quantum communication.

    PubMed

    Wilde, Mark M; Hayden, Patrick; Guha, Saikat

    2012-04-06

    Recent work has precisely characterized the achievable trade-offs between three key information processing tasks-classical communication (generation or consumption), quantum communication (generation or consumption), and shared entanglement (distribution or consumption), measured in bits, qubits, and ebits per channel use, respectively. Slices and corner points of this three-dimensional region reduce to well-known protocols for quantum channels. A trade-off coding technique can attain any point in the region and can outperform time sharing between the best-known protocols for accomplishing each information processing task by itself. Previously, the benefits of trade-off coding that had been found were too small to be of practical value (viz., for the dephasing and the universal cloning machine channels). In this Letter, we demonstrate that the associated performance gains are in fact remarkably high for several physically relevant bosonic channels that model free-space or fiber-optic links, thermal-noise channels, and amplifiers. We show that significant performance gains from trade-off coding also apply when trading photon-number resources between transmitting public and private classical information simultaneously over secret-key-assisted bosonic channels. © 2012 American Physical Society

  18. Polarization-multiplexed rate-adaptive non-binary-quasi-cyclic-LDPC-coded multilevel modulation with coherent detection for optical transport networks.

    PubMed

    Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M

    2010-02-01

    In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose using a rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency due to symbol- instead of bit-level processing, but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper shows, compared to its prior-art binary counterpart, the proposed NB-LDPC-CM scheme better addresses the needs of future OTNs, namely achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN.

  19. PARC Navier-Stokes code upgrade and validation for high speed aeroheating predictions

    NASA Technical Reports Server (NTRS)

    Liver, Peter A.; Praharaj, Sarat C.; Seaford, C. Mark

    1990-01-01

    Applications of the PARC full Navier-Stokes code for hypersonic flowfield and aeroheating predictions around blunt bodies such as the Aeroassist Flight Experiment (AFE) and Aeroassisted Orbital Transfer Vehicle (AOTV) are evaluated. Two-dimensional/axisymmetric and three-dimensional perfect gas versions of the code were upgraded and tested against benchmark wind tunnel cases of hemisphere-cylinder, three-dimensional AFE forebody, and axisymmetric AFE and AOTV aerobrake/wake flowfields. PARC calculations are in good agreement with experimental data and results of similar computer codes. Difficulties encountered in flowfield and heat transfer predictions due to effects of grid density, boundary conditions such as singular stagnation line axis and artificial dissipation terms are presented together with subsequent improvements made to the code. The experience gained with the perfect gas code is being currently utilized in applications of an equilibrium air real gas PARC version developed at REMTECH.

  20. Development of an LSI maximum-likelihood convolutional decoder for advanced forward error correction capability on the NASA 30/20 GHz program

    NASA Technical Reports Server (NTRS)

    Clark, R. T.; Mccallister, R. D.

    1982-01-01

    The particular coding option identified as providing the best level of coding gain performance in an LSI-efficient implementation was the optimal constraint length five, rate one-half convolutional code. To determine the specific set of design parameters which optimally matches this decoder to the LSI constraints, a breadboard MCD (maximum-likelihood convolutional decoder) was fabricated and used to generate detailed performance trade-off data. The extensive performance testing data gathered during this design tradeoff study are summarized, and the functional and physical MCD chip characteristics are presented.

  1. Nanoscale Microelectronic Circuit Development

    DTIC Science & Technology

    2011-06-17

    structure to obtain a one-hot-encoded output instead of a thermometer code ... Figure 37: a folded PLINCO cell. The output of the PLINCO is 8-wide, but only the left half or right half is passed on. A carry... noise figure requirements are not stringent since the GPS signal is spread-spectrum coded, providing over 40 dB of processing gain and easing the

  2. A modified non-binary LDPC scheme based on watermark symbols in high speed optical transmission systems

    NASA Astrophysics Data System (ADS)

    Wang, Liming; Qiao, Yaojun; Yu, Qian; Zhang, Wenbo

    2016-04-01

    We introduce a watermark non-binary low-density parity-check (NB-LDPC) code scheme, which can estimate the time-varying noise variance by using prior information from watermark symbols, to improve the performance of NB-LDPC codes. Compared with the prior-art counterpart, the watermark scheme brings about a 0.25 dB improvement in net coding gain (NCG) at a bit error rate (BER) of 1e-6 and a 36.8-81% reduction in the number of decoding iterations. The proposed scheme therefore shows great potential in terms of error-correction performance and decoding efficiency.
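
    Because the watermark symbols are known to the receiver, the time-varying noise variance can be estimated directly from them and fed to the decoder's likelihood computation. A minimal sketch for a real-valued BPSK-like channel follows; the modulation, pilot spacing and sliding-window estimator are simplifying assumptions, not the paper's scheme.

      import numpy as np

      # Estimate time-varying noise variance from known (watermark/pilot) symbols:
      # the receiver knows what was sent at those positions, so the residuals give
      # a local variance estimate that can scale the decoder's likelihoods.
      rng = np.random.default_rng(3)
      n, pilot_every = 4000, 20
      tx = rng.integers(0, 2, n)
      sym = 1.0 - 2.0 * tx                                   # BPSK: 0 -> +1, 1 -> -1
      sigma = np.linspace(0.3, 0.8, n)                       # slowly varying noise level
      rx = sym + sigma * rng.standard_normal(n)

      pilot_idx = np.arange(0, n, pilot_every)               # known watermark positions
      residual = rx[pilot_idx] - sym[pilot_idx]
      window = 25                                            # pilots per estimate
      est = np.array([np.mean(residual[max(0, i - window):i + 1] ** 2)
                      for i in range(residual.size)])
      print("true variance at end:", sigma[-1] ** 2, "estimated:", est[-1])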

  3. Sensor Authentication: Embedded Processor Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Svoboda, John

    2012-09-25

    Described is the C code running on the embedded Microchip 32-bit PIC32MX575F256H located on the INL-developed noise analysis circuit board. The code performs the following functions: controls the noise analysis circuit board preamplifier voltage gains of 1, 10, 100, 000; initializes the analog-to-digital conversion hardware, input channel selection, Fast Fourier Transform (FFT) function, USB communications interface, and internal memory allocations; initiates high-resolution 4096-point, 200 kHz data acquisition; computes the complex 2048-point FFT and FFT magnitude; services the host command set; transfers raw data to the host; transfers FFT results to the host; and performs communication error checking.
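
    The acquisition-and-FFT step described above is easy to mirror on a host for validation: acquire a block of samples, compute the FFT, and take its magnitude. A small Python sketch is below (the firmware itself is C; the sample rate and block sizes follow the description, while the signal content is illustrative):

      import numpy as np

      # Host-side mirror of the described processing chain: a 4096-point block at
      # 200 kHz, then the magnitude of the first 2048 bins of its one-sided FFT.
      fs = 200_000.0
      n = 4096
      t = np.arange(n) / fs
      block = 0.5 * np.sin(2 * np.pi * 12_500 * t) \
              + 0.05 * np.random.default_rng(4).standard_normal(n)

      spectrum = np.fft.rfft(block)[:2048]          # 2048 complex bins
      magnitude = np.abs(spectrum)
      peak_bin = int(np.argmax(magnitude[1:]) + 1)  # skip DC
      print("peak frequency [Hz]:", peak_bin * fs / n)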

  4. Pickup ion acceleration in the successive appearance of corotating interaction regions

    NASA Astrophysics Data System (ADS)

    Tsubouchi, K.

    2017-04-01

    Acceleration of pickup ions (PUIs) in an environment surrounded by a pair of corotating interaction regions (CIRs) was investigated by numerical simulations using a hybrid code. Energetic particles associated with CIRs have been considered to be a result of the acceleration at their shock boundaries, but recent observations identified the ion flux peaks in the sub-MeV to MeV energy range in the rarefaction region, where two separate CIRs were likely connected by the magnetic field. Our simulation results confirmed these observational features. As the accelerated PUIs repeatedly bounce back and forth along the field lines between the reverse shock of the first CIR and the forward shock of the second one, the energetic population is accumulated in the rarefaction region. It was also verified that PUI acceleration in the dual CIR system had two different stages. First, because PUIs have large gyroradii, multiple shock crossing is possible for several tens of gyroperiods, and there is an energy gain in the component parallel to the magnetic field via shock drift acceleration. Second, as the field rarefaction evolves and the radial magnetic field becomes dominant, Fermi-type reflection takes place at the shock. The converging nature of two shocks results in a net energy gain. The PUI energy acquired through these processes is close to 0.5 MeV, which may be large enough for further acceleration, possibly resulting in the source of anomalous cosmic rays.

  5. Puzzles in modern biology. V. Why are genomes overwired?

    PubMed

    Frank, Steven A

    2017-01-01

    Many factors affect eukaryotic gene expression. Transcription factors, histone codes, DNA folding, and noncoding RNA modulate expression. Those factors interact in large, broadly connected regulatory control networks. An engineer following classical principles of control theory would design a simpler regulatory network. Why are genomes overwired? Neutrality or enhanced robustness may lead to the accumulation of additional factors that complicate network architecture. Dynamics progresses like a ratchet. New factors get added. Genomes adapt to the additional complexity. The newly added factors can no longer be removed without significant loss of fitness. Alternatively, highly wired genomes may be more malleable. In large networks, most genomic variants tend to have a relatively small effect on gene expression and trait values. Many small effects lead to a smooth gradient, in which traits may change steadily with respect to underlying regulatory changes. A smooth gradient may provide a continuous path from a starting point up to the highest peak of performance. A potential path of increasing performance promotes adaptability and learning. Genomes gain by the inductive process of natural selection, a trial and error learning algorithm that discovers general solutions for adapting to environmental challenge. Similarly, deeply and densely connected computational networks gain by various inductive trial and error learning procedures, in which the networks learn to reduce the errors in sequential trials. Overwiring alters the geometry of induction by smoothing the gradient along the inductive pathways of improving performance. Those overwiring benefits for induction apply to both natural biological networks and artificial deep learning networks.

  6. Advanced Wireless Integrated Navy Network (AWINN)

    DTIC Science & Technology

    2005-12-31

    handle high data rates using COTS FPGAs. The effort of the Cross-Layer Optimization group is focused on cross-layer design of UWB for position location... [Block-diagram residue: transmitter/receiver boards, switch, lowpass filter, FPGA, gain stages, variable attenuator] ... FPGA code; April - June 2006: demonstrate transceiver operation; integrate transceiver with other AWINN activities. Personnel: Chris R. Anderson

  7. On Performance of Linear Multiuser Detectors for Wireless Multimedia Applications

    NASA Astrophysics Data System (ADS)

    Agarwal, Rekha; Reddy, B. V. R.; Bindu, E.; Nayak, Pinki

    In this paper, the performance of different multirate schemes in a DS-CDMA system is evaluated. Multirate linear multiuser detectors with multiple processing gains are analyzed for synchronous Code Division Multiple Access (CDMA) systems. Variable data rates are achieved by varying the processing gain. Our conclusion is that the bit error rate for multirate and single-rate systems can be made the same, with a tradeoff against the number of users, in linear multiuser detectors.
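
    Variable data rates in such a system follow directly from the processing-gain relation: with a fixed chip rate, halving the processing gain doubles the bit rate. A tiny illustration (the chip rate is an arbitrary example value):

      # Variable data rate via processing gain in DS-CDMA: with a fixed chip rate,
      # the bit rate is chip_rate / processing_gain. Chip rate chosen arbitrarily.
      chip_rate = 3.84e6                      # chips per second
      for gain in (256, 128, 64, 32, 16):     # spreading factors
          print(f"Gp={gain:4d}  bit rate = {chip_rate / gain / 1e3:7.1f} kbit/s")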

  8. Performance Comparison between CDTD and STTD for DS-CDMA/MMSE-FDE with Frequency-Domain ICI Cancellation

    NASA Astrophysics Data System (ADS)

    Takeda, Kazuaki; Kojima, Yohei; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide a better bit error rate (BER) performance than rake combining. However, residual inter-chip interference (ICI) is produced after MMSE-FDE, and this degrades the BER performance. Recently, we showed that frequency-domain ICI cancellation can bring the BER performance close to the theoretical lower bound. To further improve the BER performance, transmit antenna diversity techniques are effective. Cyclic delay transmit diversity (CDTD) can increase the number of equivalent paths and hence achieve a large frequency diversity gain. Space-time transmit diversity (STTD) can obtain an antenna diversity gain due to the space-time coding and achieve a better BER performance than CDTD. The objective of this paper is to show that the BER performance degradation of CDTD is mainly due to the residual ICI and that the introduction of ICI cancellation gives almost the same BER performance as STTD. This provides an important result: CDTD has the advantage of providing higher throughput than STTD. This is confirmed by computer simulation. The computer simulation results show that CDTD can achieve higher throughput than STTD when ICI cancellation is introduced.

  9. A novel construction method of QC-LDPC codes based on CRT for optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-05-01

    A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the Chinese remainder theorem (CRT). The method can not only increase the code length without reducing the girth, but also greatly enhance the code rate, so it is easy to construct a high-rate code. The simulation results show that at a bit error rate (BER) of 10^-7, the net coding gain (NCG) of the regular QC-LDPC(4 851, 4 546) code is respectively 2.06 dB, 1.36 dB, 0.53 dB and 0.31 dB more than those of the classic RS(255, 239) code in ITU-T G.975, the LDPC(32 640, 30 592) code in ITU-T G.975.1, the QC-LDPC(3 664, 3 436) code constructed by the improved combining construction method based on CRT, and the irregular QC-LDPC(3 843, 3 603) code constructed by the construction method based on the Galois field (GF(q)) multiplicative group. Furthermore, all these five codes have the same code rate of 0.937. Therefore, the regular QC-LDPC(4 851, 4 546) code constructed by the proposed method has excellent error-correction performance and is well suited to optical transmission systems.

  10. Optical observables in stars with non-stationary atmospheres. [fireballs and cepheid models

    NASA Technical Reports Server (NTRS)

    Hillendahl, R. W.

    1980-01-01

    Experience gained by use of Cepheid modeling codes to predict the dimensional and photometric behavior of nuclear fireballs is used as a means of validating various computational techniques used in the Cepheid codes. Predicted results from Cepheid models are compared with observations of the continuum and lines in an effort to demonstrate that the atmospheric phenomena in Cepheids are quite complex but that they can be quantitatively modeled.

  11. Compressive Sensing for Radar and Radar Sensor Networks

    DTIC Science & Technology

    2013-12-02

    Zero Correlation Zone Sequence Pair Sets for MIMO Radar: Inspired by recent advances in MIMO radar, we apply orthogonal phase-coded waveforms to a MIMO ...radar system in order to gain better range resolution and target direction finding performance [2]. We provide and investigate a generalized MIMO radar ...ZCZ) sequence-pair set (ZCZPS). We also study the MIMO radar ambiguity function of the system using phase-coded waveforms, based on which we analyze

  12. Chloroplast DNA Structural Variation, Phylogeny, and Age of Divergence among Diploid Cotton Species.

    PubMed

    Chen, Zhiwen; Feng, Kun; Grover, Corrinne E; Li, Pengbo; Liu, Fang; Wang, Yumei; Xu, Qin; Shang, Mingzhao; Zhou, Zhongli; Cai, Xiaoyan; Wang, Xingxing; Wendel, Jonathan F; Wang, Kunbo; Hua, Jinping

    2016-01-01

    The cotton genus (Gossypium spp.) contains 8 monophyletic diploid genome groups (A, B, C, D, E, F, G, K) and a single allotetraploid clade (AD). To gain insight into the phylogeny of Gossypium and molecular evolution of the chloroplast genome in this group, we performed a comparative analysis of 19 Gossypium chloroplast genomes, six reported here for the first time. Nucleotide distance in non-coding regions was about three times that of coding regions. As expected, distances were smaller within than among genome groups. Phylogenetic topologies based on nucleotide and indel data support the resolution of the 8 genome groups into 6 clades. Phylogenetic analysis of indel distribution among the 19 genomes demonstrates contrasting evolutionary dynamics in different clades, with a parallel genome downsizing in two genome groups and a biased accumulation of insertions in the clade containing the cultivated cottons leading to large (for Gossypium) chloroplast genomes. Divergence time estimates derived from the cpDNA sequence suggest that the major diploid clades had diverged approximately 10 to 11 million years ago. The complete nucleotide sequences of 6 cpDNA genomes are provided, offering a resource for cytonuclear studies in Gossypium.

  13. Object-Location-Aware Hashing for Multi-Label Image Retrieval via Automatic Mask Learning.

    PubMed

    Huang, Chang-Qin; Yang, Shang-Ming; Pan, Yan; Lai, Han-Jiang

    2018-09-01

    Learning-based hashing is a leading approach of approximate nearest neighbor search for large-scale image retrieval. In this paper, we develop a deep supervised hashing method for multi-label image retrieval, in which we propose to learn a binary "mask" map that can identify the approximate locations of objects in an image, so that we use this binary "mask" map to obtain length-limited hash codes which mainly focus on an image's objects but ignore the background. The proposed deep architecture consists of four parts: 1) a convolutional sub-network to generate effective image features; 2) a binary "mask" sub-network to identify image objects' approximate locations; 3) a weighted average pooling operation based on the binary "mask" to obtain feature representations and hash codes that pay most attention to foreground objects but ignore the background; and 4) the combination of a triplet ranking loss designed to preserve relative similarities among images and a cross entropy loss defined on image labels. We conduct comprehensive evaluations on four multi-label image data sets. The results indicate that the proposed hashing method achieves superior performance gains over the state-of-the-art supervised or unsupervised hashing baselines.
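
    The triplet ranking loss in part 4) encourages the hash code of an anchor image to be closer to a similar image's code than to a dissimilar image's code by at least a margin. A minimal sketch on real-valued relaxed codes follows; the margin and dimensions are arbitrary, and this is not the paper's network or training pipeline.

      import numpy as np

      # Triplet ranking loss on relaxed (real-valued) hash codes: pull the anchor
      # toward the positive and push it from the negative by at least `margin`.
      def triplet_ranking_loss(anchor, positive, negative, margin=1.0):
          d_pos = np.sum((anchor - positive) ** 2, axis=1)
          d_neg = np.sum((anchor - negative) ** 2, axis=1)
          return np.maximum(0.0, margin + d_pos - d_neg).mean()

      rng = np.random.default_rng(5)
      a = rng.standard_normal((8, 48))              # 48-bit relaxed codes for 8 anchors
      p = a + 0.1 * rng.standard_normal(a.shape)    # similar images: nearby codes
      n = rng.standard_normal(a.shape)              # dissimilar images: unrelated codes
      print("loss:", triplet_ranking_loss(a, p, n))
      # Binarizing with sign() afterwards would give the final hash codes.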

  14. Chloroplast DNA Structural Variation, Phylogeny, and Age of Divergence among Diploid Cotton Species

    PubMed Central

    Li, Pengbo; Liu, Fang; Wang, Yumei; Xu, Qin; Shang, Mingzhao; Zhou, Zhongli; Cai, Xiaoyan; Wang, Xingxing; Wendel, Jonathan F.; Wang, Kunbo

    2016-01-01

    The cotton genus (Gossypium spp.) contains 8 monophyletic diploid genome groups (A, B, C, D, E, F, G, K) and a single allotetraploid clade (AD). To gain insight into the phylogeny of Gossypium and molecular evolution of the chloroplast genome in this group, we performed a comparative analysis of 19 Gossypium chloroplast genomes, six reported here for the first time. Nucleotide distance in non-coding regions was about three times that of coding regions. As expected, distances were smaller within than among genome groups. Phylogenetic topologies based on nucleotide and indel data support the resolution of the 8 genome groups into 6 clades. Phylogenetic analysis of indel distribution among the 19 genomes demonstrates contrasting evolutionary dynamics in different clades, with a parallel genome downsizing in two genome groups and a biased accumulation of insertions in the clade containing the cultivated cottons leading to large (for Gossypium) chloroplast genomes. Divergence time estimates derived from the cpDNA sequence suggest that the major diploid clades had diverged approximately 10 to 11 million years ago. The complete nucleotide sequences of 6 cpDNA genomes are provided, offering a resource for cytonuclear studies in Gossypium. PMID:27309527

  15. Uplink Downlink Rate Balancing and Throughput Scaling in FDD Massive MIMO Systems

    NASA Astrophysics Data System (ADS)

    Bergel, Itsik; Perets, Yona; Shamai, Shlomo

    2016-05-01

    In this work we extend the concept of uplink-downlink rate balancing to frequency division duplex (FDD) massive MIMO systems. We consider a base station with a large number of antennas serving many single-antenna users. We first show that any unused capacity in the uplink can be traded off for higher throughput in the downlink in a system that uses either dirty paper (DP) coding or linear zero-forcing (ZF) precoding. We then also study the scaling of the system throughput with the number of antennas in the cases of linear beamforming (BF) precoding, ZF precoding, and DP coding. We show that the downlink throughput is proportional to the logarithm of the number of antennas. While this logarithmic scaling is lower than the linear scaling of the rate in the uplink, it can still bring significant throughput gains. For example, we demonstrate through analysis and simulation that increasing the number of antennas from 4 to 128 will increase the throughput by more than a factor of 5. We also show that a logarithmic scaling of downlink throughput as a function of the number of receive antennas can be achieved even when the number of transmit antennas only increases logarithmically with the number of receive antennas.

  16. Thin-layer and full Navier-Stokes calculations for turbulent supersonic flow over a cone at an angle of attack

    NASA Technical Reports Server (NTRS)

    Smith, Crawford F.; Podleski, Steve D.

    1993-01-01

    The proper use of a computational fluid dynamics code requires a good understanding of the particular code being applied. In this report the application of CFL3D, a thin-layer Navier-Stokes code, is compared with the results obtained from PARC3D, a full Navier-Stokes code. In order to gain an understanding of the use of this code, a simple problem was chosen in which several key features of the code could be exercised. The problem chosen is a cone in supersonic flow at an angle of attack. The issues of grid resolution, grid blocking, and multigridding with CFL3D are explored. The use of multigridding resulted in a significant reduction in the computational time required to solve the problem. Solutions obtained are compared with the results using the full Navier-Stokes equations solver PARC3D. The results obtained with the CFL3D code compared well with the PARC3D solutions.

  17. Temporally evolving gain mechanisms of attention in macaque area V4.

    PubMed

    Sani, Ilaria; Santandrea, Elisa; Morrone, Maria Concetta; Chelazzi, Leonardo

    2017-08-01

    Cognitive attention and perceptual saliency jointly govern our interaction with the environment. Yet, we still lack a universally accepted account of the interplay between attention and luminance contrast, a fundamental dimension of saliency. We measured the attentional modulation of V4 neurons' contrast response functions (CRFs) in awake, behaving macaque monkeys and applied a new approach that emphasizes the temporal dynamics of cell responses. We found that attention modulates CRFs via different gain mechanisms during subsequent epochs of visually driven activity: an early contrast-gain, strongly dependent on prestimulus activity changes (baseline shift); a time-limited stimulus-dependent multiplicative modulation, reaching its maximal expression around 150 ms after stimulus onset; and a late resurgence of contrast-gain modulation. Attention produced comparable time-dependent attentional gain changes on cells heterogeneously coding contrast, supporting the notion that the same circuits mediate attention mechanisms in V4 regardless of the form of contrast selectivity expressed by the given neuron. Surprisingly, attention was also sometimes capable of inducing radical transformations in the shape of CRFs. These findings offer important insights into the mechanisms that underlie contrast coding and attention in primate visual cortex and a new perspective on their interplay, one in which time becomes a fundamental factor. NEW & NOTEWORTHY We offer an innovative perspective on the interplay between attention and luminance contrast in macaque area V4, one in which time becomes a fundamental factor. We place emphasis on the temporal dynamics of attentional effects, pioneering the notion that attention modulates contrast response functions of V4 neurons via the sequential engagement of distinct gain mechanisms. These findings advance understanding of attentional influences on visual processing and help reconcile divergent results in the literature. Copyright © 2017 the American Physiological Society.

  18. Development and analysis of educational technologies for a blended organic chemistry course

    NASA Astrophysics Data System (ADS)

    Evans, Michael James

    Blended courses incorporate elements of both face-to-face and online instruction. The extent to which blended courses are conducted online, and the proper role of the online components of blended courses, have been debated and may vary. What can be said in general, however, is that online tools for blended courses are typically culled together from a variety of sources, are often very large scale, and may present distractions for students that decrease their utility as teaching tools. Furthermore, large-scale educational technologies may not be amenable to rigorous, detailed study, limiting evaluation of their effectiveness. Small-scale educational technologies run from the instructor's own server have the potential to mitigate many of these issues. Such tools give the instructor or researcher direct access to all available data, facilitating detailed analysis of student use. Code modification is simple and rapid if errors arise, since code is stored where the instructor can easily access it. Finally, the design of a small-scale tool can target a very specific application. With these ideas in mind, this work describes several projects aimed at exploring the use of small-scale, web-based software in a blended organic chemistry course. A number of activities were developed and evaluated using the Student Assessment of Learning Gains survey, and data from the activities were analyzed using quantitative methods of statistics and social network analysis methods. Findings from this work suggest that small-scale educational technologies provide significant learning benefits for students of organic chemistry---with the important caveat that instructors must offer appropriate levels of technical and pedagogical support for students. Most notably, students reported significant learning gains from activities that included collaborative learning supported by novel online tools. For the particular context of organic chemistry, which has a unique semantic language (Lewis structures), the incorporation of shared video was a novel but important element of these activities. In fields for which mere text would not provide enough information in communications between students, video offers an appealing medium for student-student interaction.

  19. Learning Short Binary Codes for Large-scale Image Retrieval.

    PubMed

    Liu, Li; Yu, Mengyang; Shao, Ling

    2017-03-01

    Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms have proved to be effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., usually code lengths shorter than 100 bits) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of the data, MCR can generate a one-bit binary code for each dimension and simultaneously rank the discriminative separability of each bit according to the proposed cost function. Only the top-ranked bits with minimum cost values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve comparable performance to the state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.
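
    The idea of generating one bit per dimension and keeping only the most discriminative bits can be sketched with a simple threshold-and-rank procedure. A toy illustration follows; median thresholding and a variance-based separability score stand in for MCR's actual cost function, which is not reproduced here.

      import numpy as np

      # Toy short-binary-code construction: one bit per dimension by median
      # thresholding, then rank bits by a simple separability score and keep the
      # top few. The score is a stand-in for MCR's cost function, not the method.
      def short_binary_codes(X, n_bits=16):
          thresholds = np.median(X, axis=0)
          bits = (X > thresholds).astype(np.uint8)              # one bit per dimension
          # Balanced bits from high-variance dimensions rank higher.
          balance = 1.0 - np.abs(bits.mean(axis=0) - 0.5) * 2.0
          score = balance * X.var(axis=0)
          keep = np.argsort(score)[::-1][:n_bits]               # top-ranked bits only
          return bits[:, keep], keep

      rng = np.random.default_rng(6)
      X = rng.standard_normal((500, 128)) * rng.uniform(0.1, 2.0, 128)
      codes, selected_dims = short_binary_codes(X, n_bits=16)
      print(codes.shape, selected_dims[:5])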

  20. Analysis of view synthesis prediction architectures in modern coding standards

    NASA Astrophysics Data System (ADS)

    Tian, Dong; Zou, Feng; Lee, Chris; Vetro, Anthony; Sun, Huifang

    2013-09-01

    Depth-based 3D formats are currently being developed as extensions to both AVC and HEVC standards. The availability of depth information facilitates the generation of intermediate views for advanced 3D applications and displays, and also enables more efficient coding of the multiview input data through view synthesis prediction techniques. This paper outlines several approaches that have been explored to realize view synthesis prediction in modern video coding standards such as AVC and HEVC. The benefits and drawbacks of various architectures are analyzed in terms of performance, complexity, and other design considerations. It is hence concluded that block-based VSP prediction for multiview video signals provides attractive coding gains with complexity comparable to traditional motion/disparity compensation.

  1. A novel, implicit treatment for language comprehension processes in right hemisphere brain damage: Phase I data

    PubMed Central

    Tompkins, Connie A.; Blake, Margaret T.; Wambaugh, Julie; Meigh, Kimberly

    2012-01-01

    Background This manuscript reports the initial phase of testing for a novel, “Contextual constraint” treatment, designed to stimulate inefficient language comprehension processes in adults with right hemisphere brain damage (RHD). Two versions of treatment were developed to target two normal comprehension processes that have broad relevance for discourse comprehension and that are often disrupted by RHD: coarse semantic coding and suppression. The development of the treatment was informed by two well-documented strengths of the RHD population. The first is consistently better performance on assessments that are implicit, or nearly so, than on explicit, metalinguistic measures of language and cognitive processing. The second is improved performance when given linguistic context that moderately-to-strongly biases an intended meaning. Treatment consisted of providing brief context sentences to prestimulate, or constrain, intended interpretations. Participants made no explicit associations or judgments about the constraint sentences; rather, these contexts served only as implicit primes. Aims This Phase I treatment study aimed to determine the effects of a novel, implicit, Contextual Constraint treatment in adults with RHD whose coarse coding or suppression processes were inefficient. Treatment was hypothesized to speed coarse coding or suppression function in these individuals. Methods & Procedures Three adults with RHD participated in this study, one (P1) with a coarse coding deficit and two (P2, P3) with suppression deficits. Probe tasks were adapted from prior studies of coarse coding and suppression in RHD. The dependent measure was the percentage of responses that met predetermined response time criteria. When pre-treatment baseline performance was stable, treatment was initiated. There were two levels of contextual constraint, Strong and Moderate, and treatment for each item began with the provision of the Strong constraint context. Outcomes & Results Treatment-contingent gains were evident after brief periods of treatment, for P1 on two treatment lists, and for P2. P3 made slower but still substantial gains. Maintenance of gains was evident for P1, the only participant for whom it was measured. Conclusions This Phase I treatment study documents the potential for considerable gains from an implicit, Contextual constraint treatment. If replicated, this approach to treatment may hold promise for individuals who do poorly with effortful, metalinguistic treatment tasks, or for whom it is desirable to minimize errors during treatment. The real test of this treatment’s benefit will come from later phases of study, which will test broad-based generalization to various aspects of discourse comprehension. PMID:22368317

  2. Scalable clustering algorithms for continuous environmental flow cytometry.

    PubMed

    Hyrkas, Jeremy; Clayton, Sophie; Ribalet, Francois; Halperin, Daniel; Armbrust, E Virginia; Howe, Bill

    2016-02-01

    Recent technological innovations in flow cytometry now allow oceanographers to collect high-frequency flow cytometry data from particles in aquatic environments on a scale far surpassing conventional flow cytometers. The SeaFlow cytometer continuously profiles microbial phytoplankton populations across thousands of kilometers of the surface ocean. The data streams produced by instruments such as SeaFlow challenge the traditional sample-by-sample approach in cytometric analysis and highlight the need for scalable clustering algorithms to extract population information from these large-scale, high-frequency flow cytometers. We explore how available algorithms commonly used for medical applications perform at classifying such large-scale environmental flow cytometry data. We apply large-scale Gaussian mixture models to massive datasets using Hadoop. This approach outperforms current state-of-the-art cytometry classification algorithms in accuracy and can be coupled with manual or automatic partitioning of data into homogeneous sections for further classification gains. We propose the Gaussian mixture model with partitioning approach for classification of large-scale, high-frequency flow cytometry data. Source code available for download at https://github.com/jhyrkas/seaflow_cluster, implemented in Java for use with Hadoop. hyrkas@cs.washington.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
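
    The study scales Gaussian mixture models with Hadoop (in Java); the small Python sketch below only illustrates the underlying partition-then-cluster idea with scikit-learn, under the assumption that each partition of the event stream is roughly homogeneous. Component labels are local to each partition, and all parameter values are illustrative.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def cluster_partitions(events, n_partitions=4, n_populations=3):
            """Split a continuous event stream into sections and fit a Gaussian mixture
            in each one; returned labels are per-partition population assignments."""
            labels = np.empty(len(events), dtype=int)
            for part in np.array_split(np.arange(len(events)), n_partitions):
                gmm = GaussianMixture(n_components=n_populations,
                                      covariance_type="full", random_state=0)
                labels[part] = gmm.fit(events[part]).predict(events[part])
            return labels

        # usage: synthetic log-normal scatter/fluorescence events (n_events x n_channels)
        events = np.random.lognormal(mean=1.0, sigma=0.5, size=(20000, 3))
        labels = cluster_partitions(events)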

  3. Novel high-gain, improved-bandwidth, finned-ladder V-band Traveling-Wave Tube slow-wave circuit design

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.; Wilson, Jeffrey D.

    1994-01-01

    The V-band frequency range of 59-64 GHz is a region of the millimeter-wave spectrum that has been designated for inter-satellite communications. As a first effort to develop a high-efficiency V-band Traveling-Wave Tube (TWT), variations on a ring-plane slow-wave circuit were computationally investigated to develop an alternative to the more conventional ferruled coupled-cavity circuit. The ring-plane circuit was chosen because of its high interaction impedance, large beam aperture, and excellent thermal dissipation properties. Despite these advantages, however, low bandwidth and high voltage requirements have, until now, prevented its acceptance outside the laboratory. In this paper, the three-dimensional electrodynamic simulation code MAFIA (solution of MAxwell's Equation by the Finite-Integration-Algorithm) is used to investigate methods of increasing the bandwidth and lowering the operating voltage of the ring-plane circuit. Calculations of frequency-phase dispersion, beam on-axis interaction impedance, attenuation and small-signal gain per wavelength were performed for various geometric variations and loading distributions of the ring-plane TWT slow-wave circuit. Based on the results of the variations, a circuit termed the finned-ladder TWT slow-wave circuit was designed and is compared here to the scaled prototype ring-plane and a conventional ferruled coupled-cavity TWT circuit over the V-band frequency range. The simulation results indicate that this circuit has a much higher gain, significantly wider bandwidth, and a much lower voltage requirement than the scaled ring-plane prototype circuit, while retaining its excellent thermal dissipation properties. The finned-ladder circuit has a much larger small-signal gain per wavelength than the ferruled coupled-cavity circuit, but with a moderate sacrifice in bandwidth.

  4. A Catalogue of Putative cis-Regulatory Interactions Between Long Non-coding RNAs and Proximal Coding Genes Based on Correlative Analysis Across Diverse Human Tumors.

    PubMed

    Basu, Swaraj; Larsson, Erik

    2018-05-31

    Antisense transcripts and other long non-coding RNAs are pervasive in mammalian cells, and some of these molecules have been proposed to regulate proximal protein-coding genes in cis. For example, non-coding transcription can contribute to inactivation of tumor suppressor genes in cancer, and antisense transcripts have been implicated in the epigenetic inactivation of imprinted genes. However, our knowledge is still limited and more such regulatory interactions likely await discovery. Here, we make use of available gene expression data from a large compendium of human tumors to generate hypotheses regarding non-coding-to-coding cis-regulatory relationships with emphasis on negative associations, as these are less likely to arise for reasons other than cis-regulation. We document a large number of possible regulatory interactions, including 193 coding/non-coding pairs that show expression patterns compatible with negative cis-regulation. Importantly, by this approach we capture several known cases, and many of the involved coding genes have known roles in cancer. Our study provides a large catalog of putative non-coding/coding cis-regulatory pairs that may serve as a basis for further experimental validation and characterization. Copyright © 2018 Basu and Larsson.

  5. Post-licensure rapid immunization safety monitoring program (PRISM) data characterization.

    PubMed

    Baker, Meghan A; Nguyen, Michael; Cole, David V; Lee, Grace M; Lieu, Tracy A

    2013-12-30

    The Post-Licensure Rapid Immunization Safety Monitoring (PRISM) program is the immunization safety monitoring component of FDA's Mini-Sentinel project, a program to actively monitor the safety of medical products using electronic health information. FDA sought to assess the surveillance capabilities of this large claims-based distributed database for vaccine safety surveillance by characterizing the underlying data. We characterized data available on vaccine exposures in PRISM, estimated how much additional data was gained by matching with select state and local immunization registries, and compared vaccination coverage estimates based on PRISM data with other available data sources. We generated rates of computerized codes representing potential health outcomes relevant to vaccine safety monitoring. Standardized algorithms including ICD-9 codes, number of codes required, exclusion criteria and location of the encounter were used to obtain the background rates. The majority of the vaccines routinely administered to infants, children, adolescents and adults were well captured by claims data. Immunization registry data in up to seven states comprised between 5% and 9% of data for all vaccine categories with the exception of 10% for hepatitis B and 3% and 4% for rotavirus and zoster respectively. Vaccination coverage estimates based on PRISM's computerized data were similar to but lower than coverage estimates from the National Immunization Survey and Healthcare Effectiveness Data and Information Set. For the 25 health outcomes of interest studied, the rates of potential outcomes based on ICD-9 codes were generally higher than rates described in the literature, which are typically clinically confirmed cases. PRISM program's data on vaccine exposures and health outcomes appear complete enough to support robust safety monitoring. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Comprehensive analysis of coding-lncRNA gene co-expression network uncovers conserved functional lncRNAs in zebrafish.

    PubMed

    Chen, Wen; Zhang, Xuan; Li, Jing; Huang, Shulan; Xiang, Shuanglin; Hu, Xiang; Liu, Changning

    2018-05-09

    Zebrafish is a well-established model system for studying developmental processes and human disease. Recent deep-sequencing studies have discovered a large number of long non-coding RNAs (lncRNAs) in zebrafish. However, only a few of them have been functionally characterized. Therefore, how to take advantage of the mature zebrafish system to investigate lncRNA function and conservation in depth is an intriguing question. We systematically collected and analyzed a series of zebrafish RNA-seq data, then combined them with resources from known databases and the literature. As a result, we obtained by far the most complete dataset of zebrafish lncRNAs, containing 13,604 lncRNA genes (21,128 transcripts) in total. Based on that, a co-expression network of zebrafish coding and lncRNA genes was constructed and analyzed, and used to predict the Gene Ontology (GO) and KEGG annotations of lncRNAs. Meanwhile, we performed a conservation analysis of zebrafish lncRNAs, identifying 1828 conserved zebrafish lncRNA genes (1890 transcripts) that have putative mammalian orthologs. We also found that zebrafish lncRNAs play important roles in regulating the development and function of the nervous system; these conserved lncRNAs show significant sequence and functional conservation with their mammalian counterparts. By integrative data analysis and construction of a coding-lncRNA gene co-expression network, we obtained the most comprehensive dataset of zebrafish lncRNAs to date, as well as systematic annotations and comprehensive analyses of their function and conservation. Our study provides a reliable zebrafish-based platform to deeply explore lncRNA function and mechanism, as well as lncRNA commonality between zebrafish and human.

  7. Calculating the Effect of External Shading on the Solar Heat Gain Coefficient of Windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kohler, Christian; Shukla, Yash; Rawal, Rajan

    Current prescriptive building codes have limited ways to account for the effect of solar shading, such as overhangs and awnings, on window solar heat gains. We propose two new indicators, the adjusted Solar Heat Gain Coefficient (aSHGC) which accounts for external shading while calculating the SHGC of a window, and a weighted SHGC (SHGCw) which provides a seasonal SHGC weighted by solar intensity. We demonstrate a method to calculate these indices using existing tools combined with additional calculations. The method is demonstrated by calculating the effect of an awning on a clear double glazing in New Delhi.
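
    The seasonal, solar-intensity-weighted SHGC described above can be written as SHGCw = sum(I_t * SHGC_t) / sum(I_t). The sketch below implements that weighting; the hourly SHGC and irradiance values are made-up inputs, not results from the paper.

        def weighted_shgc(shgc_hourly, irradiance_hourly):
            """Solar-intensity-weighted SHGC: sum(I_t * SHGC_t) / sum(I_t).
            shgc_hourly: effective SHGC per hour (e.g., with an awning in place),
            irradiance_hourly: incident solar irradiance on the window per hour (W/m2)."""
            num = sum(i * s for i, s in zip(irradiance_hourly, shgc_hourly))
            den = sum(irradiance_hourly)
            return num / den if den else 0.0

        # usage with invented numbers: shading matters most in the high-sun hours
        shgc = [0.60, 0.45, 0.30, 0.45, 0.60]       # effective hourly SHGC under an awning
        irr = [150.0, 400.0, 800.0, 400.0, 150.0]   # W/m2 on the glazing
        print(round(weighted_shgc(shgc, irr), 3))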

  8. Dynamic Fracture Simulations of Explosively Loaded Cylinders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arthur, Carly W.; Goto, D. M.

    2015-11-30

    This report documents the modeling results of high explosive experiments investigating dynamic fracture of steel (AerMet® 100 alloy) cylinders. The experiments were conducted at Lawrence Livermore National Laboratory (LLNL) during 2007 to 2008 [10]. A principal objective of this study was to gain an understanding of dynamic material failure through the analysis of hydrodynamic computer code simulations. Two-dimensional and three-dimensional computational cylinder models were analyzed using the ALE3D multi-physics computer code.

  9. Energy-based aeroelastic analysis of a morphing wing

    NASA Astrophysics Data System (ADS)

    De Breuker, Roeland; Abdalla, Mostafa; Gürdal, Zafer; Lindner, Douglas

    2007-04-01

    Aircraft are often confronted with distinct circumstances during different parts of their mission. Ideally the aircraft should fly optimally in terms of aerodynamic performance and other criteria in each one of these mission requirements. This requires in principle as many different aircraft configurations as there are flight conditions, so a morphing aircraft would be the ideal solution. A morphing aircraft is a flying vehicle that i) changes its state substantially, ii) provides superior system capability and iii) uses a design that integrates innovative technologies. It is important for such aircraft that the gains due to the adaptability to the flight condition are not nullified by the energy consumed to carry out the morphing manoeuvre. Therefore an aeroelastic numerical tool that takes into account the morphing energy is needed to analyse the net gain of the morphing. The code couples a three-dimensional beam finite element model in a co-rotational framework to a lifting-line aerodynamic code. The morphing energy is calculated by summing actuation moments, applied at the beam nodes, multiplied by the required angular rotations of the beam elements. The code is validated against the NASTRAN Aeroelasticity Module and found to be in agreement. Finally the applicability of the code is tested for a sweep morphing manoeuvre, and it is demonstrated that sweep morphing can improve the aerodynamic performance of an aircraft and that the inclusion of aeroelastic effects is important.
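
    The morphing-energy bookkeeping described above reduces to a sum of nodal actuation moments times the required angular rotations. A minimal sketch of that sum follows; the numerical values are invented for illustration.

        def morphing_energy(moments, rotations):
            """Morphing actuation energy as described in the abstract: the sum over
            beam nodes of the actuation moment times the required angular rotation.
            moments: nodal actuation moments [N*m]; rotations: angular rotations [rad]."""
            assert len(moments) == len(rotations)
            return sum(m * dtheta for m, dtheta in zip(moments, rotations))

        # usage: three actuated nodes of a sweep-morphing wing (made-up values)
        print(morphing_energy([120.0, 95.0, 60.0], [0.10, 0.08, 0.05]))  # joules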

  10. Differential receptive field organizations give rise to nearly identical neural correlations across three parallel sensory maps in weakly electric fish

    PubMed Central

    2017-01-01

    Understanding how neural populations encode sensory information thereby leading to perception and behavior (i.e., the neural code) remains an important problem in neuroscience. When investigating the neural code, one must take into account the fact that neural activities are not independent but are actually correlated with one another. Such correlations are seen ubiquitously and have a strong impact on neural coding. Here we investigated how differences in the antagonistic center-surround receptive field (RF) organization across three parallel sensory maps influence correlations between the activities of electrosensory pyramidal neurons. Using a model based on known anatomical differences in receptive field center size and overlap, we initially predicted large differences in correlated activity across the maps. However, in vivo electrophysiological recordings showed that, contrary to modeling predictions, electrosensory pyramidal neurons across all three segments displayed nearly identical correlations. To explain this surprising result, we incorporated the effects of RF surround in our model. By systematically varying both the RF surround gain and size relative to that of the RF center, we found that multiple RF structures gave rise to similar levels of correlation. In particular, incorporating known physiological differences in RF structure between the three maps in our model gave rise to similar levels of correlation. Our results show that RF center overlap alone does not determine correlations which has important implications for understanding how RF structure influences correlated neural activity. PMID:28863136

  11. Many human accelerated regions are developmental enhancers

    PubMed Central

    Capra, John A.; Erwin, Genevieve D.; McKinsey, Gabriel; Rubenstein, John L. R.; Pollard, Katherine S.

    2013-01-01

    The genetic changes underlying the dramatic differences in form and function between humans and other primates are largely unknown, although it is clear that gene regulatory changes play an important role. To identify regulatory sequences with potentially human-specific functions, we and others used comparative genomics to find non-coding regions conserved across mammals that have acquired many sequence changes in humans since divergence from chimpanzees. These regions are good candidates for performing human-specific regulatory functions. Here, we analysed the DNA sequence, evolutionary history, histone modifications, chromatin state and transcription factor (TF) binding sites of a combined set of 2649 non-coding human accelerated regions (ncHARs) and predicted that at least 30% of them function as developmental enhancers. We prioritized the predicted ncHAR enhancers using analysis of TF binding site gain and loss, along with the functional annotations and expression patterns of nearby genes. We then tested both the human and chimpanzee sequence for 29 ncHARs in transgenic mice, and found 24 novel developmental enhancers active in both species, 17 of which had very consistent patterns of activity in specific embryonic tissues. Of these ncHAR enhancers, five drove expression patterns suggestive of different activity for the human and chimpanzee sequence at embryonic day 11.5. The changes to human non-coding DNA in these ncHAR enhancers may modify the complex patterns of gene expression necessary for proper development in a human-specific manner and are thus promising candidates for understanding the genetic basis of human-specific biology. PMID:24218637

  12. A new VLSI architecture for a single-chip-type Reed-Solomon decoder

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.; Truong, T. K.

    1989-01-01

    A new very large scale integration (VLSI) architecture for implementing Reed-Solomon (RS) decoders that can correct both errors and erasures is described. This new architecture implements a Reed-Solomon decoder by using replication of a single VLSI chip. It is anticipated that this single chip type RS decoder approach will save substantial development and production costs. It is estimated that reduction in cost by a factor of four is possible with this new architecture. Furthermore, this Reed-Solomon decoder is programmable between 8 bit and 10 bit symbol sizes. Therefore, both an 8 bit Consultative Committee for Space Data Systems (CCSDS) RS decoder and a 10 bit decoder are obtained at the same time, and when concatenated with a (15,1/6) Viterbi decoder, provide an additional 2.1-dB coding gain.

  13. Intrinsic dimensionality predicts the saliency of natural dynamic scenes.

    PubMed

    Vig, Eleonora; Dorr, Michael; Martinetz, Thomas; Barth, Erhardt

    2012-06-01

    Since visual attention-based computer vision applications have gained popularity, ever more complex, biologically inspired models seem to be needed to predict salient locations (or interest points) in naturalistic scenes. In this paper, we explore how far one can go in predicting eye movements by using only basic signal processing, such as image representations derived from efficient coding principles, and machine learning. To this end, we gradually increase the complexity of a model from simple single-scale saliency maps computed on grayscale videos to spatiotemporal multiscale and multispectral representations. Using a large collection of eye movements on high-resolution videos, supervised learning techniques fine-tune the free parameters whose addition is inevitable with increasing complexity. The proposed model, although very simple, demonstrates significant improvement in predicting salient locations in naturalistic videos over four selected baseline models and two distinct data labeling scenarios.

  14. Multifunction audio digitizer for communications systems

    NASA Technical Reports Server (NTRS)

    Monford, L. G., Jr.

    1971-01-01

    Digitizer accomplishes both N-bit pulse code modulation (PCM) and delta modulation, and provides modulation indicating variable signal gain and variable sidetone. Other features include low package count, variable clock rate to optimize bandwidth, and easily expanded PCM output.

  15. 77 FR 38142 - Proposed Collection; Comment Request for Form 6252

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-26

    ...: Internal Revenue Code section 453 provides that if real or personal property is disposed of at a gain and... information is necessary for the proper performance of the functions of the agency, including whether the...

  16. Quantifying consumption rates of dissolved oxygen along bed forms

    NASA Astrophysics Data System (ADS)

    Boano, Fulvio; De Falco, Natalie; Arnon, Shai

    2016-04-01

    Streambed interfaces represent hotspots for nutrient transformations because they host different microbial species, and the evaluation of these reaction rates is important to assess the fate of nutrients in riverine environments. In this work we analyze a series of flume experiments on oxygen demand in dune-shaped hyporheic sediments under losing and gaining flow conditions. We employ a new modeling code to quantify oxygen consumption rates from observed vertical profiles of oxygen concentration. The code accounts for transport by molecular diffusion and water advection, and automatically determines the reaction rates that provide the best fit between observed and modeled concentration values. The results show that reaction rates are not uniformly distributed across the streambed, in agreement with the expected behavior predicted by hyporheic exchange theory. Oxygen consumption was found to be highly influenced by the presence of gaining or losing flow conditions, which controlled the delivery of labile DOC to streambed microorganisms.
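
    As an illustration of the profile-fitting idea behind such a code, the sketch below solves a steady one-dimensional advection-diffusion equation with a uniform zero-order oxygen consumption rate and picks the rate that best matches an observed profile. The zero-order kinetics, parameter values, and boundary conditions are assumptions for illustration, not the authors' model.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def o2_profile(R, z, c_top, c_bot, D=1.5e-9, v=1.0e-6):
            """Steady 1D advection-diffusion with a uniform zero-order consumption rate R:
            D*c'' - v*c' = R, with fixed concentrations at the top and bottom of the profile."""
            n, dz = len(z), z[1] - z[0]
            A = np.zeros((n, n))
            b = np.full(n, R)
            A[0, 0] = A[-1, -1] = 1.0
            b[0], b[-1] = c_top, c_bot                      # Dirichlet boundary values
            for i in range(1, n - 1):                       # central differences in the interior
                A[i, i - 1] = D / dz**2 + v / (2 * dz)
                A[i, i] = -2 * D / dz**2
                A[i, i + 1] = D / dz**2 - v / (2 * dz)
            return np.linalg.solve(A, b)

        def fit_rate(z, c_obs, c_top, c_bot):
            """Consumption rate that best reproduces an observed vertical oxygen profile."""
            loss = lambda R: np.sum((o2_profile(R, z, c_top, c_bot) - c_obs) ** 2)
            return minimize_scalar(loss, bounds=(0.0, 1e-5), method="bounded").x

        # usage with a synthetic "observed" profile (units: m, mol m-3, mol m-3 s-1)
        z = np.linspace(0.0, 0.05, 51)                      # 5 cm vertical profile
        c_obs = o2_profile(3.0e-7, z, c_top=0.25, c_bot=0.0)
        print(fit_rate(z, c_obs, 0.25, 0.0))                # recovers approximately 3e-7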

  17. Identification of Trends into Dose Calculations for Astronauts through Performing Sensitivity Analysis on Calculational Models Used by the Radiation Health Office

    NASA Technical Reports Server (NTRS)

    Adams, Thomas; VanBaalen, Mary

    2009-01-01

    The Radiation Health Office (RHO) determines each astronaut's cancer risk by using models that associate risk with the radiation dose astronauts receive from spaceflight missions. The baryon transport codes (BRYNTRN), high charge (Z) and energy transport codes (HZETRN), and computer risk models are used to determine the effective dose received by astronauts in Low Earth orbit (LEO). This code uses an approximation of the Boltzmann transport equation. The purpose of the project is to run this code for various International Space Station (ISS) flight parameters in order to gain a better understanding of how this code responds to different scenarios. The project will determine how variations in one set of parameters, such as the point in the solar cycle and the altitude, can affect the radiation exposure of astronauts during ISS missions. This project will benefit NASA by improving mission dosimetry.

  18. [ENT and head and neck surgery in the German DRG system 2007].

    PubMed

    Franz, D; Roeder, N; Hörmann, K; Alberty, J

    2007-07-01

    The German DRG system has been further developed into version 2007. For ENT and head and neck surgery, significant changes in the coding of diagnoses and medical operations as well as in the DRG structure have been made. New ICD codes for sleep apnoea and acquired tracheal stenosis have been implemented. Surgery on the acoustic meatus, removal of auricle hyaline cartilage for transplantation (e.g., rhinosurgery) and tonsillotomy have been coded in the 2007 version. In addition, the DRG structure has been improved. Case allocation of more than one significant operation has been established. The G-DRG system has gained in complexity. High demands are made on the coding of complex cases, whereas standard cases mostly require only one specific diagnosis and one specific OPS code. The quality of case allocation for ENT patients within the G-DRG system has been improved. Nevertheless, further adjustments of the G-DRG system are necessary.

  19. Applications and error correction for adiabatic quantum optimization

    NASA Astrophysics Data System (ADS)

    Pudenz, Kristen

    Adiabatic quantum optimization (AQO) is a fast-developing subfield of quantum information processing which holds great promise in the relatively near future. Here we develop an application, quantum anomaly detection, and an error correction code, Quantum Annealing Correction (QAC), for use with AQO. The motivation for the anomaly detection algorithm is the problematic nature of classical software verification and validation (V&V). The number of lines of code written for safety-critical applications such as cars and aircraft increases each year, and with it the cost of finding errors grows exponentially (the cost of overlooking errors, which can be measured in human safety, is arguably even higher). We approach the V&V problem by using a quantum machine learning algorithm to identify characteristics of software operations that are implemented outside of specifications, then define an AQO problem that returns these anomalous operations as its result. Our error correction work is the first large-scale experimental demonstration of quantum error correcting codes. We develop QAC and apply it to USC's equipment, the first and second generation of commercially available D-Wave AQO processors. We first show comprehensive experimental results for the code's performance on antiferromagnetic chains, scaling the problem size up to 86 logical qubits (344 physical qubits) and recovering significant encoded success rates even when the unencoded success rates drop to almost nothing. A broader set of randomized benchmarking problems is then introduced, for which we observe similar behavior to the antiferromagnetic chain, specifically that the use of QAC is almost always advantageous for problems of sufficient size and difficulty. Along the way, we develop problem-specific optimizations for the code and gain insight into the various on-chip error mechanisms (most prominently thermal noise, since the hardware operates at finite temperature) and the ways QAC counteracts them. We finish by showing that the scheme is robust to qubit loss on-chip, a significant benefit when considering an implemented system.

  20. Large-Signal Code TESLA: Current Status and Recent Development

    DTIC Science & Technology

    2008-04-01

    This report describes recent advances in the development of the large-signal code TESLA, used mainly for the modeling of high-power single-beam and multiple-beam klystron amplifiers, including serial and parallel versions of the code. Keywords: large-signal code; multiple-beam klystrons; serial and parallel versions. [Cited work: K. Eppley and J. J. Petillo, "High-power four-cavity S-band multiple-beam klystron design," IEEE Trans. Plasma Sci., vol. 32, pp. 1119-1135, June 2004.]

  1. Resistor-logic demultiplexers for nanoelectronics based on constant-weight codes.

    PubMed

    Kuekes, Philip J; Robinett, Warren; Roth, Ron M; Seroussi, Gadiel; Snider, Gregory S; Stanley Williams, R

    2006-02-28

    The voltage margin of a resistor-logic demultiplexer can be improved significantly by basing its connection pattern on a constant-weight code. Each distinct code determines a unique demultiplexer, and therefore a large family of circuits is defined. We consider using these demultiplexers for building nanoscale crossbar memories, and determine the voltage margin of the memory system based on a particular code. We determine a purely code-theoretic criterion for selecting codes that will yield memories with large voltage margins, which is to minimize the ratio of the maximum to the minimum Hamming distance between distinct codewords. For the specific example of a 64 × 64 crossbar, we discuss what codes provide optimal performance for a memory.
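
    The code-selection criterion named above (minimize the ratio of the maximum to the minimum pairwise Hamming distance) is easy to evaluate for a candidate constant-weight code; the sketch below does so. The way the candidate code is built here (the first few weight-w words in lexicographic order) is only an example, not the construction used in the paper.

        from itertools import combinations

        def hamming(a, b):
            return sum(x != y for x, y in zip(a, b))

        def distance_ratio(code):
            """Selection criterion from the abstract: max/min pairwise Hamming distance
            over distinct codewords (smaller ratios should give larger voltage margins)."""
            d = [hamming(a, b) for a, b in combinations(code, 2)]
            return max(d) / min(d)

        def constant_weight_words(n, w):
            """All length-n binary words of Hamming weight w, in lexicographic order."""
            words = []
            for ones in combinations(range(n), w):
                word = [0] * n
                for i in ones:
                    word[i] = 1
                words.append(tuple(word))
            return words

        # usage: score a small illustrative code for an 8-wire, 16-address demultiplexer
        code = constant_weight_words(8, 4)[:16]
        print(distance_ratio(code))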

  2. LDPC decoder with a limited-precision FPGA-based floating-point multiplication coprocessor

    NASA Astrophysics Data System (ADS)

    Moberly, Raymond; O'Sullivan, Michael; Waheed, Khurram

    2007-09-01

    Implementing the sum-product algorithm, in an FPGA with an embedded processor, invites us to consider a tradeoff between computational precision and computational speed. The algorithm, known outside of the signal processing community as Pearl's belief propagation, is used for iterative soft-decision decoding of LDPC codes. We determined the feasibility of a coprocessor that will perform product computations. Our FPGA-based coprocessor design performs computer algebra with significantly less precision than the standard (e.g. integer, floating-point) operations of general purpose processors. Using synthesis, targeting a 3,168 LUT Xilinx FPGA, we show that key components of a decoder are feasible and that the full single-precision decoder could be constructed using a larger part. Soft-decision decoding by the iterative belief propagation algorithm is impacted both positively and negatively by a reduction in the precision of the computation. Reducing precision reduces the coding gain, but the limited-precision computation can operate faster. A proposed solution offers custom logic to perform computations with less precision, yet uses the floating-point format to interface with the software. Simulation results show the achievable coding gain. Synthesis results help project the full capacity and performance of an FPGA-based coprocessor.

  3. A user's manual for the Electromagnetic Surface Patch code: ESP version 3

    NASA Technical Reports Server (NTRS)

    Newman, E. H.; Dilsavor, R. L.

    1987-01-01

    This report serves as a user's manual for Version III of the Electromagnetic Surface Patch Code or ESP code. ESP is user-oriented, based on the method of moments (MM) for treating geometries consisting of an interconnection of thin wires and perfectly conducting polygonal plates. Wire/plate junctions must be about 0.1 lambda or more from any plate edge. Several plates may intersect along a common edge. Excitation may be by either a delta-gap voltage generator or by a plane wave. The thin wires may have finite conductivity and also may contain lumped loads. The code computes most of the usual quantities of interest such as current distribution, input impedance, radiation efficiency, mutual coupling, far zone gain patterns (both polarizations) and radar-cross-section (both/cross polarizations).

  4. Convolutional coding at 50 Mbps for the Shuttle Ku-band return link

    NASA Technical Reports Server (NTRS)

    Batson, B. H.; Huth, G. K.

    1976-01-01

    Error correcting coding is required for 50 Mbps data link from the Shuttle Orbiter through the Tracking and Data Relay Satellite System (TDRSS) to the ground because of severe power limitations. Convolutional coding has been chosen because the decoding algorithms (sequential and Viterbi) provide significant coding gains at the required bit error probability of one in 10 to the sixth power and can be implemented at 50 Mbps with moderate hardware. While a 50 Mbps sequential decoder has been built, the highest data rate achieved for a Viterbi decoder is 10 Mbps. Thus, five multiplexed 10 Mbps Viterbi decoders must be used to provide a 50 Mbps data rate. This paper discusses the tradeoffs which were considered when selecting the multiplexed Viterbi decoder approach for this application.

  5. Some issues and subtleties in numerical simulation of X-ray FEL's

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fawley, William M.

    Part of the overall design effort for x-ray FEL's such as the LCLS and TESLA projects has involved extensive use of particle simulation codes to predict their output performance and underlying sensitivity to various input parameters (e.g. electron beam emittance). This paper discusses some of the numerical issues that must be addressed by simulation codes in this regime. We first give a brief overview of the standard approximations and simulation methods adopted by time-dependent (i.e., polychromatic) codes such as GINGER, GENESIS, and FAST3D, including the effects of temporal discretization and the resultant limited spectral bandpass, and then discuss the accuracies and inaccuracies of these codes in predicting incoherent spontaneous emission (i.e., the extremely low gain regime).

  6. Organic field effect transistor with ultra high amplification

    NASA Astrophysics Data System (ADS)

    Torricelli, Fabrizio

    2016-09-01

    High-gain transistors are essential for the large-scale circuit integration, high-sensitivity sensors and signal amplification in sensing systems. Unfortunately, organic field-effect transistors show limited gain, usually of the order of tens, because of the large contact resistance and channel-length modulation. Here we show organic transistors fabricated on plastic foils enabling unipolar amplifiers with ultra-gain. The proposed approach is general and opens up new opportunities for ultra-large signal amplification in organic circuits and sensors.

  7. Robust Optimal Adaptive Control Method with Large Adaptive Gain

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2009-01-01

    In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in a stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time delay margin.

  8. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g., an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
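
    For the error-detection piece, a minimal bitwise CRC-16 sketch is shown below. It assumes the CRC-16-CCITT parameters commonly associated with the CCSDS recommendation (polynomial 0x1021, initial value 0xFFFF, no reflection); confirm against the CCSDS books before relying on it.

        def crc16_ccitt(data: bytes, init: int = 0xFFFF, poly: int = 0x1021) -> int:
            """Bitwise CRC-16 with the CCITT polynomial x^16 + x^12 + x^5 + 1
            (parameters assumed to match the CCSDS recommendation)."""
            crc = init
            for byte in data:
                crc ^= byte << 8                     # fold the next byte into the register
                for _ in range(8):
                    crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
            return crc

        print(hex(crc16_ccitt(b"123456789")))  # 0x29b1, the usual CRC-16/CCITT-FALSE check value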

  9. Signatures of criticality arise from random subsampling in simple population models.

    PubMed

    Nonnenmacher, Marcel; Behrens, Christian; Berens, Philipp; Bethge, Matthias; Macke, Jakob H

    2017-10-01

    The rise of large-scale recordings of neuronal activity has fueled the hope to gain new insights into the collective activity of neural ensembles. How can one link the statistics of neural population activity to underlying principles and theories? One attempt to interpret such data builds upon analogies to the behaviour of collective systems in statistical physics. Divergence of the specific heat-a measure of population statistics derived from thermodynamics-has been used to suggest that neural populations are optimized to operate at a "critical point". However, these findings have been challenged by theoretical studies which have shown that common inputs can lead to diverging specific heat. Here, we connect "signatures of criticality", and in particular the divergence of specific heat, back to statistics of neural population activity commonly studied in neural coding: firing rates and pairwise correlations. We show that the specific heat diverges whenever the average correlation strength does not depend on population size. This is necessarily true when data with correlations is randomly subsampled during the analysis process, irrespective of the detailed structure or origin of correlations. We also show how the characteristic shape of specific heat capacity curves depends on firing rates and correlations, using both analytically tractable models and numerical simulations of a canonical feed-forward population model. To analyze these simulations, we develop efficient methods for characterizing large-scale neural population activity with maximum entropy models. We find that, consistent with experimental findings, increases in firing rates and correlation directly lead to more pronounced signatures. Thus, previous reports of thermodynamical criticality in neural populations based on the analysis of specific heat can be explained by average firing rates and correlations, and are not indicative of an optimized coding strategy. We conclude that a reliable interpretation of statistical tests for theories of neural coding is possible only in reference to relevant ground-truth models.

  10. Demonstration of Vibrational Braille Code Display Using Large Displacement Micro-Electro-Mechanical Systems Actuators

    NASA Astrophysics Data System (ADS)

    Watanabe, Junpei; Ishikawa, Hiroaki; Arouette, Xavier; Matsumoto, Yasuaki; Miki, Norihisa

    2012-06-01

    In this paper, we present a vibrational Braille code display with large-displacement micro-electro-mechanical systems (MEMS) actuator arrays. Tactile receptors are more sensitive to vibrational stimuli than to static ones. Therefore, when each cell of the Braille code vibrates at optimal frequencies, subjects can recognize the codes more efficiently. We fabricated a vibrational Braille code display that used actuators consisting of piezoelectric actuators and a hydraulic displacement amplification mechanism (HDAM) as cells. The HDAM that encapsulated incompressible liquids in microchambers with two flexible polymer membranes could amplify the displacement of the MEMS actuator. We investigated the voltage required for subjects to recognize Braille codes when each cell, i.e., the large-displacement MEMS actuator, vibrated at various frequencies. Lower voltages were required at vibration frequencies higher than 50 Hz than at vibration frequencies lower than 50 Hz, which verified that the proposed vibrational Braille code display is efficient by successfully exploiting the characteristics of human tactile receptors.

  11. Study of a co-designed decision feedback equalizer, deinterleaver, and decoder

    NASA Technical Reports Server (NTRS)

    Peile, Robert E.; Welch, Loyd

    1990-01-01

    A technique that promises better quality data from band limited channels at lower received power in digital transmission systems is presented. Data transmission in such systems often suffers from intersymbol interference (ISI) and noise. Two separate techniques, channel coding and equalization, have caused considerable advances in the state of communication systems, and both concern themselves with removing the undesired effects of a communication channel. Equalizers mitigate the ISI, whereas coding schemes are used to incorporate error correction. In the past, most of the research in these two areas has been carried out separately. However, the individual techniques have strengths and weaknesses that are complementary in many applications: an integrated approach realizes gains in excess of those of a simple juxtaposition. Coding schemes have been successfully used in cascade with linear equalizers, which in the absence of ISI provide excellent performance. However, when both ISI and the noise level are relatively high, nonlinear receivers like the decision feedback equalizer (DFE) perform better. The DFE has its drawbacks: it suffers from error propagation. The technique presented here takes advantage of interleaving to integrate the two approaches so that the error propagation in the DFE can be reduced with the help of the error correction provided by the decoder. The results of simulations carried out for both binary and non-binary channels confirm that significant gain can be obtained by co-designing the equalizer and decoder. Although only systems with time-invariant channels and a simple DFE with linear filters were examined, the technique is fairly general and can easily be modified for more sophisticated equalizers to obtain even larger gains.

  12. Automated Shirt Collar Manufacturing. Volume 3. Sewing Head Control for High Speed Stitch Contour Tracking.

    DTIC Science & Technology

    1993-06-23

    The optimal control scheme sums the cost function for all data points from time zero to infinity; the preview case, however, sums only through the preview step. A shaft-speed signal is generated by the monitor port on the servo amplifiers, so the zero-frequency gain shown in the figure includes that amplifier gain.

  13. Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations

    NASA Technical Reports Server (NTRS)

    Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.

    2015-01-01

    Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of the flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and large solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly-coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is overviewed. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are presented to illustrate some of the novel features of the code.

  14. Hybrid spread spectrum radio system

    DOEpatents

    Smith, Stephen F.; Dress, William B.

    2010-02-02

    Systems and methods are described for hybrid spread spectrum radio systems. A method includes modulating a signal by utilizing a subset of bits from a pseudo-random code generator to control an amplification circuit that provides a gain to the signal. Another method includes: modulating a signal by utilizing a subset of bits from a pseudo-random code generator to control a fast hopping frequency synthesizer; and fast frequency hopping the signal with the fast hopping frequency synthesizer, wherein multiple frequency hops occur within a single data-bit time.
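
    A rough sketch of the idea in the claims follows: bits drawn from a pseudo-random generator (a 16-bit LFSR here) select both a gain step and a hop frequency for each data bit. The LFSR taps, table sizes, and baseband signal model are illustrative assumptions, not the patented implementation.

        import numpy as np

        def lfsr_bits(n, state=0b1010110011010001, taps=(16, 14, 13, 11)):
            """Simple 16-bit Fibonacci LFSR standing in for the pseudo-random code generator."""
            out = []
            for _ in range(n):
                fb = 0
                for t in taps:
                    fb ^= (state >> (t - 1)) & 1
                out.append(state & 1)
                state = (state >> 1) | (fb << 15)
            return out

        def hybrid_ss_symbol(data_bit, prn_bits, fs=1e6, sym_len=64,
                             gains=(0.5, 1.0, 1.5, 2.0), hops=(50e3, 80e3, 110e3, 140e3)):
            """One data bit of samples: two PRN bits select the gain step, two select the
            hop frequency, and the data bit BPSK-modulates the hopped carrier."""
            g = gains[prn_bits[0] * 2 + prn_bits[1]]
            f = hops[prn_bits[2] * 2 + prn_bits[3]]
            t = np.arange(sym_len) / fs
            phase = 0.0 if data_bit else np.pi
            return g * np.cos(2 * np.pi * f * t + phase)

        # usage: four data bits, drawing four PRN bits per bit interval
        prn = lfsr_bits(16)
        waveform = np.concatenate([hybrid_ss_symbol(b, prn[4 * i:4 * i + 4])
                                   for i, b in enumerate([1, 0, 1, 1])])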

  15. A review of lossless audio compression standards and algorithms

    NASA Astrophysics Data System (ADS)

    Muin, Fathiah Abdul; Gunawan, Teddy Surya; Kartiwi, Mira; Elsheikh, Elsheikh M. A.

    2017-09-01

    Over the years, lossless audio compression has gained popularity as researchers and businesses have become more aware of the need for better quality and of growing storage demands. This paper analyses various lossless audio coding algorithms and standards that are used and available in the market, focusing specifically on Linear Predictive Coding (LPC) due to its popularity and robustness in audio compression; nevertheless, other prediction methods are compared to verify this. Advanced representations of LPC, such as LSP decomposition techniques, are also discussed within this paper.
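
    A minimal sketch of the LPC step at the heart of many lossless coders is shown below: prediction coefficients are obtained from the signal's autocorrelation (via a Toeplitz solve rather than the usual Levinson-Durbin recursion) and the integer residual is what a real coder would then entropy-code. All parameter choices are illustrative.

        import numpy as np
        from scipy.linalg import solve_toeplitz

        def lpc_residual(x, order=8):
            """Fit order-p LPC coefficients from the autocorrelation and return
            (coefficients, integer prediction residual)."""
            x = np.asarray(x, dtype=float)
            r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]   # r[0]..r[order]
            a = solve_toeplitz(r[:order], r[1:order + 1])                    # Yule-Walker solve
            pred = np.zeros_like(x)
            for n in range(order, len(x)):
                pred[n] = np.dot(a, x[n - order:n][::-1])                    # most recent sample first
            residual = np.rint(x - pred).astype(int)                         # what a coder would entropy-code
            return a, residual

        # usage: the residual of a predictable signal is far smaller than the signal itself
        t = np.arange(2000)
        x = np.rint(1000 * np.sin(2 * np.pi * 440 * t / 44100) + 2 * np.random.randn(2000)).astype(int)
        coeffs, res = lpc_residual(x)
        print(np.abs(x).mean(), np.abs(res).mean())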

  16. IEEE International Symposium on Information Theory (ISIT): Abstracts of Papers, Held in Ann Arbor, Michigan on 6-9 October 1986.

    DTIC Science & Technology

    1986-10-01

    BUZO, and FEDERICO KUHLMANN, Universidad Nacional Autónoma de México, Facultad de Ingeniería, División Estudios de Posgrado, P.O. Box 70-256, 04510...unsuccessful in this area for a long time. It was felt, e.g., in the voiceband modem industry, that the coding gains achievable by error-correction coding...without bandwidth expansion or data rate reduction, when compared to uncoded modulation. The concept was quickly adopted by industry, and is now becoming

  17. Information Environments

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.; Naiman, Cynthia

    2003-01-01

    The objective of the GRC CNIS/IE work is to build a plug-and-play infrastructure that provides the Grand Challenge Applications with a suite of tools for coupling codes together, numerically zooming between codes of different fidelity, and deploying these simulations onto the Information Power Grid. The GRC CNIS/IE work will streamline and improve this process by providing tighter integration of various tools through the use of object-oriented design of component models and data objects and through the use of CORBA (Common Object Request Broker Architecture).

  18. Critical evaluation of reverse engineering tool Imagix 4D!

    PubMed

    Yadav, Rashmi; Patel, Ravindra; Kothari, Abhay

    2016-01-01

    Legacy code is difficult to comprehend. Various commercial reengineering tools are available; each has a unique working style and comes with its own capabilities and shortcomings. The focus of the available tools is on visualizing static behavior, not dynamic behavior. This makes the work of people engaged in software product maintenance, code comprehension, and reengineering/reverse engineering difficult. Consequently, the need for a comprehensive reengineering/reverse engineering tool arises. We found Imagix 4D useful, as it generates the most complete set of pictorial representations in the form of flow charts, flow graphs, class diagrams, metrics and, to a partial extent, dynamic visualizations. We evaluated Imagix 4D with the help of a case study involving a few samples of source code. The behavior of the tool was analyzed on multiple small codes and on a large code, the gcc C parser. The large-code evaluation was performed to uncover dead code, unstructured code, and the effect of not including required files at the preprocessing level. The decision-density and complexity metrics that Imagix 4D prepares for a large code were useful in judging how much reengineering is required. At the same time, Imagix 4D showed limitations in dynamic visualization, flow chart separation for large code, and parsing loops. The outcome of the evaluation will eventually help in upgrading Imagix 4D and highlights the need for fully featured tools in the area of software reengineering/reverse engineering. It will also help the research community, especially those interested in the realm of software reengineering tool building.

  19. Cross-cultural comparison of perspectives on healthy eating among Chinese and American undergraduate students.

    PubMed

    Banna, Jinan C; Gilliland, Betsy; Keefe, Margaret; Zheng, Dongping

    2016-09-26

    Understanding views about what constitutes a healthy diet in diverse populations may inform design of culturally tailored behavior change interventions. The objective of this study was to describe perspectives on healthy eating among Chinese and American young adults and identify similarities and differences between these groups. Chinese (n = 55) and American (n = 57) undergraduate students in Changsha, Hunan, China and Honolulu, Hawai'i, U.S.A. composed one- to two-paragraph responses to the following prompt: "What does the phrase 'a healthy diet' mean to you?" Researchers used content analysis to identify predominant themes using Dedoose (version 5.2.0, SocioCultural Research Consultants, LLC, Los Angeles, CA, 2015). Three researchers independently coded essays and grouped codes with similar content. The team then identified themes and sorted them in discussion. Two researchers then deductively coded the entire data set using eight codes developed from the initial coding and calculated total code counts for each group of participants. Chinese students mentioned physical outcomes, such as maintaining immunity and digestive health. Timing of eating, with regular meals and greater intake during day than night, was emphasized. American students described balancing among food groups and balancing consumption with exercise, with physical activity considered essential. Students also stated that food components such as sugar, salt and fat should be avoided in large quantities. Similarities included principles such as moderation and fruits and vegetables as nutritious, and differences included foods to be restricted and meal timing. While both groups emphasized specific foods and guiding dietary principles, several distinctions in viewpoints emerged. The diverse views may reflect food-related messages to which participants are exposed both through the media and educational systems in their respective countries. Future studies may further examine themes that may not typically be addressed in nutrition education programs in diverse populations of young adults. Gaining greater knowledge of the ways in which healthy eating is viewed will allow for development of interventions that are sensitive to the traditional values and predominant views of health in various groups.

  20. Numerical simulations of gas mixing effect in electron cyclotron resonance ion sources

    NASA Astrophysics Data System (ADS)

    Mironov, V.; Bogomolov, S.; Bondarchenko, A.; Efremov, A.; Loginov, V.

    2017-01-01

    The particle-in-cell Monte Carlo collisions code nam-ecris is used to simulate the electron cyclotron resonance ion source (ECRIS) plasma sustained in a mixture of Kr with O2 , N2 , Ar, Ne, and He. The model assumes that ions are electrostatically confined in the ECR zone by a dip in the plasma potential. A gain in the extracted krypton ion currents is seen for the highest charge states; the gain is maximized when oxygen is used as a mixing gas. The special feature of oxygen is that most of the singly charged oxygen ions are produced after the dissociative ionization of oxygen molecules with a large kinetic energy release of around 5 eV per ion. The increased loss rate of energetic lowly charged ions of the mixing element requires a building up of the retarding potential barrier close to the ECR surface to equilibrate electron and ion losses out of the plasma. In the mixed plasmas, the barrier value is large (˜1 V ) compared to pure Kr plasma (˜0.01 V ), with longer confinement times of krypton ions and with much higher ion temperatures. The temperature of the krypton ions is increased because of extra heating by the energetic oxygen ions and a longer time of ion confinement. In calculations, a drop of the highly charged ion currents of lighter elements is observed when adding small fluxes of krypton into the source. This drop is caused by the accumulation of the krypton ions inside plasma, which decreases the electron and ion confinement times.

  1. Exploratory investigation of obesity risk and prevention in Chinese Americans.

    PubMed

    Liou, Doreen; Bauer, Kathleen D

    2007-01-01

    To examine the beliefs and attitudes related to obesity risk and its prevention in Chinese Americans via in-depth, qualitative interviews using the guiding tenets of Health Belief Model, Theory of Planned Behavior, and social ecological models. A qualitative study using tenets of the Health Belief Model, the Theory of Planned Behavior, and social ecological models. The New York City metropolitan area. Forty young Chinese American adults (24 females; 16 males) were interviewed. Obesity risk and prevention. Common themes were identified, coded, and compared using NVivo computer software. Poor dietary habits and sedentary lifestyles were seen as major weight gain contributors. Obesity was seen predominantly as a non-Asian phenomenon, although 60% of the participants felt susceptible to obesity. Physical and social environmental factors were the overriding themes generated as to the causes of weight gain among young adult Chinese Americans. Physical factors included the powerful effect of media-generated advertisements and a plethora of inexpensive fast and convenience foods emphasizing large portion sizes of low nutrient density. The social environment encourages the consumption of large quantities of these foods. Traditional Chinese cuisine was seen as providing more healthful alternatives, but increasing acculturation to American lifestyle results in less traditional food consumption. Some traditional Chinese beliefs regarding the desirability of a slightly heavy physique can encourage overeating. Nutrition educators need to be public policy advocates for environments providing tasty, low cost, healthful foods. Young adult Chinese Americans seek knowledge and skills for making convenient healthful food selections in the midst of a culture that advocates and provides an abundance of unhealthy choices.

  2. The ethics of biosafety considerations in gain-of-function research resulting in the creation of potential pandemic pathogens.

    PubMed

    Evans, Nicholas Greig; Lipsitch, Marc; Levinson, Meira

    2015-11-01

    This paper proposes an ethical framework for evaluating biosafety risks of gain-of-function (GOF) experiments that create novel strains of influenza expected to be virulent and transmissible in humans, so-called potential pandemic pathogens (PPPs). Such research raises ethical concerns because of the risk that accidental release from a laboratory could lead to extensive or even global spread of a virulent pathogen. Biomedical research ethics has focused largely on human subjects research, while biosafety concerns about accidental infections, seen largely as a problem of occupational health, have been ignored. GOF/PPP research is an example of a small but important class of research where biosafety risks threaten public health, well beyond the small number of persons conducting the research. We argue that bioethical principles that ordinarily apply only to human subjects research should also apply to research that threatens public health, even if, as in GOF/PPP studies, the research involves no human subjects. Specifically we highlight the Nuremberg Code's requirements of 'fruitful results for the good of society, unprocurable by other methods', and proportionality of risk and humanitarian benefit, as broad ethical principles that recur in later documents on research ethics and should also apply to certain types of research not involving human subjects. We address several potential objections to this view, and conclude with recommendations for bringing these ethical considerations into policy development. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  3. On the design of turbo codes

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Pollara, F.

    1995-01-01

    In this article, we design new turbo codes that can achieve near-Shannon-limit performance. The design criterion for random interleavers is based on maximizing the effective free distance of the turbo code, i.e., the minimum output weight of codewords due to weight-2 input sequences. An upper bound on the effective free distance of a turbo code is derived. This upper bound can be achieved if the feedback connection of convolutional codes uses primitive polynomials. We review multiple turbo codes (parallel concatenation of q convolutional codes), which increase the so-called 'interleaving gain' as q and the interleaver size increase, and a suitable decoder structure derived from an approximation to the maximum a posteriori probability decision rule. We develop new rate 1/3, 2/3, 3/4, and 4/5 constituent codes to be used in the turbo encoder structure. These codes, with 2 to 32 states, are designed using primitive polynomials. The resulting turbo codes have rates b/n (b = 1, 2, 3, 4 and n = 2, 3, 4, 5, 6), and include random interleavers for better asymptotic performance. These codes are suitable for deep-space communications with low throughput and for near-Earth communications where high throughput is desirable. The performance of these codes is within 1 dB of the Shannon limit at a bit-error rate of 10(exp -6) for throughputs from 1/15 up to 4 bits/s/Hz.
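
    The design criterion above can be made concrete with a small brute-force check: for a single recursive systematic convolutional (RSC) constituent code, the quantity being maximized is the minimum codeword weight produced by weight-2 input sequences. The Python sketch below illustrates this for a generic 4-state RSC encoder; the feedback/feedforward taps and block length are illustrative choices, not the constituent codes designed in the article.

```python
from itertools import combinations

def rsc_parity(u, fb, ff):
    """Parity sequence of a rate-1/2 recursive systematic convolutional (RSC)
    encoder.  fb: feedback taps for D^1..D^m, ff: feedforward taps for D^0..D^m."""
    m = len(fb)
    state = [0] * m
    parity = []
    for bit in u:
        a = bit ^ (sum(f & s for f, s in zip(fb, state)) & 1)   # feedback bit
        p = (ff[0] & a) ^ (sum(f & s for f, s in zip(ff[1:], state)) & 1)
        parity.append(p)
        state = [a] + state[:-1]
    return parity

def min_weight2_codeword_weight(fb, ff, block_len=48):
    """Minimum codeword weight (systematic + parity) over all weight-2 inputs,
    i.e. the quantity the effective-free-distance criterion maximizes.
    Non-terminating input pairs simply accumulate parity weight to the end of
    the block, so the minimum is reached by a terminating pair as long as
    block_len is large enough."""
    best = None
    for i, j in combinations(range(block_len), 2):
        u = [0] * block_len
        u[i] = u[j] = 1
        w = 2 + sum(rsc_parity(u, fb, ff))   # systematic part contributes weight 2
        best = w if best is None else min(best, w)
    return best

# Illustrative 4-state RSC with primitive feedback 1 + D + D^2 and
# feedforward 1 + D^2 (octal (7,5)); not a constituent code from the article.
print(min_weight2_codeword_weight(fb=(1, 1), ff=(1, 0, 1)))
```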

  4. Facilitating Internet-Scale Code Retrieval

    ERIC Educational Resources Information Center

    Bajracharya, Sushil Krishna

    2010-01-01

    Internet-Scale code retrieval deals with the representation, storage, and access of relevant source code from a large amount of source code available on the Internet. Internet-Scale code retrieval systems support common emerging practices among software developers related to finding and reusing source code. In this dissertation we focus on some…

  5. Simultaneous Laser Ranging and Communication from an Earth-Based Satellite Laser Ranging Station to the Lunar Reconnaissance Orbiter in Lunar Orbit

    NASA Technical Reports Server (NTRS)

    Sun, Xiaoli; Skillman, David R.; Hoffman, Evan D.; Mao, Dandan; McGarry, Jan F.; Neumann, Gregory A.; McIntire, Leva; Zellar, Ronald S.; Davidson, Frederic M.; Fong, Wai H.; hide

    2013-01-01

    We report a free-space laser communication experiment from the satellite laser ranging (SLR) station at NASA Goddard Space Flight Center (GSFC) to the Lunar Reconnaissance Orbiter (LRO) in lunar orbit through the on-board one-way Laser Ranging (LR) receiver. Pseudo-random data and sample image files were transmitted to LRO using a 4096-ary pulse position modulation (PPM) signal format. Reed-Solomon forward error correction codes were used to achieve error-free data transmission at a moderate coding overhead rate. The signal fading due to atmospheric effects was measured, and the coding gain could be estimated.

  6. Computer access security code system

    NASA Technical Reports Server (NTRS)

    Collins, Earl R., Jr. (Inventor)

    1990-01-01

    A security code system for controlling access to computer and computer-controlled entry situations comprises a plurality of subsets of alpha-numeric characters disposed in random order in matrices of at least two dimensions forming theoretical rectangles, cubes, etc., such that when access is desired, at least one pair of previously unused character subsets not found in the same row or column of the matrix is chosen at random and transmitted by the computer. The proper response to gain access is transmittal of the subsets which complete the rectangle and/or parallelepiped whose opposite corners were defined by the first group of code. Once used, subsets are not used again, to absolutely defeat unauthorized access by eavesdropping and the like.
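
    As a rough illustration of the challenge-response scheme described in this abstract, the Python sketch below builds a small two-dimensional matrix of random character subsets, issues a challenge consisting of two previously unused subsets that share neither a row nor a column, and checks a response against the two subsets that complete the rectangle. The matrix dimensions, subset length and function names are hypothetical; the patented system generalizes this to higher-dimensional matrices.

```python
import random
import string

def make_matrix(rows=4, cols=4, subset_len=3, seed=0):
    """Matrix of random alpha-numeric character subsets (the shared secret)."""
    rng = random.Random(seed)
    chars = string.ascii_uppercase + string.digits
    return [[''.join(rng.sample(chars, subset_len)) for _ in range(cols)]
            for _ in range(rows)]

def issue_challenge(matrix, used, rng=random):
    """Pick a previously unused pair of cells sharing neither a row nor a column."""
    rows, cols = len(matrix), len(matrix[0])
    cells = [(r, c) for r in range(rows) for c in range(cols)]
    while True:
        (r1, c1), (r2, c2) = rng.sample(cells, 2)
        pair = frozenset([(r1, c1), (r2, c2)])
        if r1 != r2 and c1 != c2 and pair not in used:
            used.add(pair)
            return (r1, c1), (r2, c2)

def expected_response(matrix, cell_a, cell_b):
    """Subsets at the two opposite corners that complete the rectangle."""
    (r1, c1), (r2, c2) = cell_a, cell_b
    return {matrix[r1][c2], matrix[r2][c1]}

# Demo: the computer issues a challenge; a user holding the same matrix replies.
matrix = make_matrix()
used_pairs = set()
a, b = issue_challenge(matrix, used_pairs)
print("challenge:", matrix[a[0]][a[1]], matrix[b[0]][b[1]])
print("access granted if response ==", expected_response(matrix, a, b))
```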

  7. Improvements in the MGA Code Provide Flexibility and Better Error Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruhter, W D; Kerr, J

    2005-05-26

    The Multi-Group Analysis (MGA) code is widely used to determine nondestructively the relative isotopic abundances of plutonium by gamma-ray spectrometry. MGA users have expressed concern about the lack of flexibility and transparency in the code. Users often have to ask the code developers for modifications to the code to accommodate new measurement situations, such as additional peaks being present in the plutonium spectrum or expected peaks being absent. We are testing several new improvements to a prototype, general gamma-ray isotopic analysis tool with the intent of either revising or replacing the MGA code. These improvements will give the user the ability to modify, add, or delete the gamma- and x-ray energies and branching intensities used by the code in determining a more precise gain and in the determination of the relative detection efficiency. We have also fully integrated the determination of the relative isotopic abundances with the determination of the relative detection efficiency to provide a more accurate determination of the errors in the relative isotopic abundances. We provide details in this paper on these improvements and a comparison of results obtained with current versions of the MGA code.

  8. Amplifying modeling for broad bandwidth pulse in Nd:glass based on hybrid-broaden mechanism

    NASA Astrophysics Data System (ADS)

    Su, J.; Liu, L.; Luo, B.; Wang, W.; Jing, F.; Wei, X.; Zhang, X.

    2008-05-01

    In this paper, the cross-relaxation time is proposed to combine the homogeneous and inhomogeneous broadening mechanisms in a model of broad-bandwidth pulse amplification. The corresponding rate equation, which describes the response of the population inversion on the upper and lower energy levels of the gain medium to the different frequency components of the pulse, is also put forward. Gain saturation and energy relaxation effects are included in the rate equation. A code named CPAP has been developed to simulate the amplification of broad-bandwidth pulses in a multi-pass laser system. The amplification capability of the multi-pass laser system is evaluated, and gain narrowing and temporal shape distortion are investigated for different pulse bandwidths and cross-relaxation times of the gain medium. The results can benefit the design of high-energy PW laser systems at LFRC, CAEP.

  9. Study of information transfer optimization for communication satellites

    NASA Technical Reports Server (NTRS)

    Odenwalder, J. P.; Viterbi, A. J.; Jacobs, I. M.; Heller, J. A.

    1973-01-01

    The results are presented of a study of source coding, modulation/channel coding, and systems techniques for application to teleconferencing over high data rate digital communication satellite links. Simultaneous transmission of video, voice, data, and/or graphics is possible in various teleconferencing modes and one-way, two-way, and broadcast modes are considered. A satellite channel model including filters, limiter, a TWT, detectors, and an optimized equalizer is treated in detail. A complete analysis is presented for one set of system assumptions which exclude nonlinear gain and phase distortion in the TWT. Modulation, demodulation, and channel coding are considered, based on an additive white Gaussian noise channel model which is an idealization of an equalized channel. Source coding with emphasis on video data compression is reviewed, and the experimental facility utilized to test promising techniques is fully described.

  10. Trading Speed and Accuracy by Coding Time: A Coupled-circuit Cortical Model

    PubMed Central

    Standage, Dominic; You, Hongzhi; Wang, Da-Hui; Dorris, Michael C.

    2013-01-01

    Our actions take place in space and time, but despite the role of time in decision theory and the growing acknowledgement that the encoding of time is crucial to behaviour, few studies have considered the interactions between neural codes for objects in space and for elapsed time during perceptual decisions. The speed-accuracy trade-off (SAT) provides a window into spatiotemporal interactions. Our hypothesis is that temporal coding determines the rate at which spatial evidence is integrated, controlling the SAT by gain modulation. Here, we propose that local cortical circuits are inherently suited to the relevant spatial and temporal coding. In simulations of an interval estimation task, we use a generic local-circuit model to encode time by ‘climbing’ activity, seen in cortex during tasks with a timing requirement. The model is a network of simulated pyramidal cells and inhibitory interneurons, connected by conductance synapses. A simple learning rule enables the network to quickly produce new interval estimates, which show signature characteristics of estimates by experimental subjects. Analysis of network dynamics formally characterizes this generic, local-circuit timing mechanism. In simulations of a perceptual decision task, we couple two such networks. Network function is determined only by spatial selectivity and NMDA receptor conductance strength; all other parameters are identical. To trade speed and accuracy, the timing network simply learns longer or shorter intervals, driving the rate of downstream decision processing by spatially non-selective input, an established form of gain modulation. Like the timing network's interval estimates, decision times show signature characteristics of those by experimental subjects. Overall, we propose, demonstrate and analyse a generic mechanism for timing, a generic mechanism for modulation of decision processing by temporal codes, and we make predictions for experimental verification. PMID:23592967
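
    The claim that gain modulation of evidence integration trades speed against accuracy can be illustrated with a far simpler model than the spiking network used in the article: a one-dimensional drift-diffusion accumulator whose drift and noise are both scaled by a gain factor. The sketch below is an assumption-laden toy, not the authors' coupled-circuit model; it only shows that a higher gain shortens decision times while lowering accuracy.

```python
import numpy as np

def accumulate_to_bound(drift, gain, bound=1.0, noise=1.0, dt=1e-3, rng=None):
    """One trial of a gain-modulated drift-diffusion decision: both drift and
    noise are scaled by `gain`, so a larger gain reaches the bound sooner but
    leaves less time for noise to average out (speed-accuracy trade-off)."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += gain * drift * dt + gain * noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, x > 0          # (decision time, True if the correct bound was hit)

rng = np.random.default_rng(0)
for g in (0.5, 2.0):
    trials = [accumulate_to_bound(drift=0.5, gain=g, rng=rng) for _ in range(300)]
    rts = [t for t, _ in trials]
    acc = np.mean([ok for _, ok in trials])
    print(f"gain={g}: mean decision time={np.mean(rts):.2f} s, accuracy={acc:.2f}")
```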

  11. The Proteins API: accessing key integrated protein and genome information

    PubMed Central

    Antunes, Ricardo; Alpi, Emanuele; Gonzales, Leonardo; Liu, Wudong; Luo, Jie; Qi, Guoying; Turner, Edd

    2017-01-01

    The Proteins API provides searching and programmatic access to protein and associated genomics data, such as curated protein sequence positional annotations from UniProtKB, as well as mapped variation and proteomics data from large-scale data sources (LSS). Using the coordinates service, researchers are able to retrieve the genomic sequence coordinates for proteins in UniProtKB; furthermore, the LSS genomics and proteomics data for UniProt proteins are programmatically available only through this service. A Swagger UI has been implemented to provide documentation and an interface that allows users with little or no programming experience to ‘talk’ to the services, quickly and easily formulate queries, and obtain dynamically generated source code for popular programming languages such as Java, Perl, Python and Ruby. Search results are returned as standard JSON, XML or GFF data objects. The Proteins API is a scalable, reliable, fast, easy-to-use set of RESTful services that provides a broad protein information resource, allowing users to ask questions based upon their field of expertise and to gain an integrated overview of the protein annotations available, aiding their understanding of proteins in biological processes. The Proteins API is available at http://www.ebi.ac.uk/proteins/api/doc. PMID:28383659
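
    A minimal sketch of programmatic use of the coordinates service mentioned above is given below in Python. The endpoint path, the example accession and the response field names are assumptions inferred from the documentation URL quoted in the abstract; consult the Swagger UI for the authoritative interface.

```python
import requests

BASE = "https://www.ebi.ac.uk/proteins/api"   # base inferred from the docs URL above

def genomic_coordinates(accession):
    """Fetch genomic sequence coordinates for a UniProtKB accession as JSON.
    The '/coordinates/{accession}' path is an assumption; check the Swagger UI."""
    r = requests.get(f"{BASE}/coordinates/{accession}",
                     headers={"Accept": "application/json"}, timeout=30)
    r.raise_for_status()
    return r.json()

# Example UniProtKB accession (human amyloid-beta precursor protein).
data = genomic_coordinates("P05067")
# Field names below ('gnCoordinate', 'genomicLocation') are also assumptions
# about the JSON layout; inspect `data` interactively to confirm.
for rec in data.get("gnCoordinate", []):
    loc = rec.get("genomicLocation", {})
    print(loc.get("chromosome"), loc.get("start"), loc.get("end"))
```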

  12. Practices and Processes of Leading High Performance Home Builders in the Upper Midwest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Von Thoma, E.; Ojczyk, C.

    2012-12-01

    The NorthernSTAR Building America Partnership team proposed this study to gain insight into the business, sales, and construction processes of successful high performance builders. The knowledge gained by understanding the high performance strategies used by individual builders, as well as the process each followed to move from traditional builder to high performance builder, will be beneficial in proposing more in-depth research to yield specific action items to assist the industry at large transform to high performance new home construction. This investigation identified the best practices of three successful high performance builders in the upper Midwest. In-depth field analysis of the performance levels of their homes, their business models, and their strategies for market acceptance were explored. All three builders commonly seek ENERGY STAR certification on their homes and implement strategies that would allow them to meet the requirements for the Building America Builders Challenge program. Their desire for continuous improvement, willingness to seek outside assistance, and ambition to be leaders in their field are common themes. Problem solving to overcome challenges was accepted as part of doing business. It was concluded that crossing the gap from code-based building to high performance based building was a natural evolution for these leading builders.

  13. Latitudinally dependent Trimpi effects: Modeling and observations

    NASA Astrophysics Data System (ADS)

    Clilverd, Mark A.; Yeo, Richard F.; Nunn, David; Smith, Andy J.

    1999-09-01

    Modeling studies show that the exclusion of the propagating VLF wave from the ionospheric region results in the decline of Trimpi magnitude with patch altitude. In large models such as Long Wave Propagation Capability (LWPC) this exclusion does not occur inherently in the code, and high-altitude precipitation modeling can produce results that are not consistent with observations from ground-based experiments. The introduction to LWPC of realistic wave attenuation of the height gain functions in the ionosphere solves these computational problems. This work presents the first modeling of (Born) Trimpi scattering at long ranges, taking into account global inhomogeneities and continuous mode conversion along all paths, by employing the full conductivity perturbation matrix. The application of the more realistic height gain functions allows the prediction of decreasing Trimpi activity with increasing latitude, primarily through the mechanism of excluding the VLF wave from regions of high conductivity and scattering efficiency. Ground-based observations from Faraday and Rothera, Antarctica, in September and October 1995 of Trimpi occurring on the NPM (Hawaii) path provide data that are consistent with these predictions. Latitudinal variations in Trimpi occurrence near L=2.5, with a significant decrease of about 70% occurrence between L=2.4 and L=2.8, have been observed at higher L shell resolution than in previous studies (i.e., 2

  14. High School Science Teachers' Perceptions of Teaching Content-Related Reading Comprehension Instruction

    NASA Astrophysics Data System (ADS)

    Williams, Theresa

    In order to achieve academic success, students must be able to comprehend written material in content-area textbooks. However, a large number of high school students struggle to comprehend science content. Research findings have demonstrated that students make measurable gains in comprehending content-area textbooks when provided quality reading comprehension instruction. The purpose of this study was to gain an understanding of how high school science teachers perceived their responsibility to provide content-related comprehension instruction; 10 high school science teachers were interviewed for this study. Data analysis consisted of open, axial, and selective coding. The findings revealed that 8 out of the 10 participants believed that it is their responsibility to provide reading comprehension instruction. However, the findings also revealed that the participants provided varying levels of reading comprehension instruction as an integral part of their science instruction. The potential for positive social change could be achieved by teachers and administrators. Teachers may use the findings to reflect upon their own personal feelings and beliefs about providing explicit reading comprehension instruction. In addition to teachers' commitment to reading comprehension instruction, administrators could deliberate about professional development opportunities that might improve necessary skills, eventually leading to better comprehension skills for students and success in their education.

  15. The Proteins API: accessing key integrated protein and genome information.

    PubMed

    Nightingale, Andrew; Antunes, Ricardo; Alpi, Emanuele; Bursteinas, Borisas; Gonzales, Leonardo; Liu, Wudong; Luo, Jie; Qi, Guoying; Turner, Edd; Martin, Maria

    2017-07-03

    The Proteins API provides searching and programmatic access to protein and associated genomics data, such as curated protein sequence positional annotations from UniProtKB, as well as mapped variation and proteomics data from large-scale data sources (LSS). Using the coordinates service, researchers are able to retrieve the genomic sequence coordinates for proteins in UniProtKB; furthermore, the LSS genomics and proteomics data for UniProt proteins are programmatically available only through this service. A Swagger UI has been implemented to provide documentation and an interface that allows users with little or no programming experience to 'talk' to the services, quickly and easily formulate queries, and obtain dynamically generated source code for popular programming languages such as Java, Perl, Python and Ruby. Search results are returned as standard JSON, XML or GFF data objects. The Proteins API is a scalable, reliable, fast, easy-to-use set of RESTful services that provides a broad protein information resource, allowing users to ask questions based upon their field of expertise and to gain an integrated overview of the protein annotations available, aiding their understanding of proteins in biological processes. The Proteins API is available at http://www.ebi.ac.uk/proteins/api/doc. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  16. Increasing Inequality in Physical Activity Among Minnesota Secondary Schools, 2001-2010.

    PubMed

    Nelson, Toben F; MacLehose, Richard F; Davey, Cynthia; Rode, Peter; Nanney, Marilyn S

    2018-05-01

    Two Healthy People 2020 goals are to increase physical activity (PA) and to reduce disparities in PA. We explored whether PA at the school level changed over time in Minnesota schools and whether differences existed by demographic and socioeconomic factors. We examined self-reported PA (n = 276,089 students; N = 276 schools) for 2001-2010 from the Minnesota Student Survey linked to school demographic data from the National Center for Education Statistics and the Rural-Urban Commuting Area Codes. We conducted analyses at the school level using multivariable linear regression with cluster-robust standard errors. Overall, the proportion of students who met PA recommendations increased from 59.8% in 2001 to 66.3% in 2010 (P < .001). Large gains in PA occurred at schools with fewer racial/ethnic minority students (0%-60.1% in 2001 to 67.5% in 2010, P < .001), whereas gains in PA were comparatively small at schools with a high proportion of racial/ethnic minority students in 2001 (30%-59.2% in 2001 to 62.7% in 2010). We found increasing inequalities in school-level PA among secondary school students by the racial/ethnic characteristics of their schools and communities. Future research should monitor patterns of PA over time and explore mechanisms for these patterns of inequality.
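
    For readers unfamiliar with the analysis style, the sketch below shows a school-level multivariable linear regression with cluster-robust standard errors in Python using statsmodels. The data frame, variable names, effect sizes and clustering unit are synthetic illustrations, not the Minnesota Student Survey data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic school-level data; column names and effect sizes are illustrative.
rng = np.random.default_rng(0)
n = 276
df = pd.DataFrame({
    "district_id": np.arange(n) % 60,               # hypothetical clustering unit
    "year": rng.choice([2001, 2004, 2007, 2010], n),
    "pct_minority": rng.uniform(0, 80, n),
})
df["pct_meeting_pa"] = (55 + 0.7 * (df["year"] - 2001)
                        - 0.05 * df["pct_minority"]
                        + rng.normal(0, 5, n))

# Multivariable linear regression with cluster-robust standard errors.
fit = smf.ols("pct_meeting_pa ~ year * pct_minority", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["district_id"]})
print(fit.summary().tables[1])
```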

  17. Large Eddy Simulations and Turbulence Modeling for Film Cooling

    NASA Technical Reports Server (NTRS)

    Acharya, Sumanta

    1999-01-01

    The objective of the research is to perform Direct Numerical Simulations (DNS) and Large Eddy Simulations (LES) for film cooling process, and to evaluate and improve advanced forms of the two equation turbulence models for turbine blade surface flow analysis. The DNS/LES were used to resolve the large eddies within the flow field near the coolant jet location. The work involved code development and applications of the codes developed to the film cooling problems. Five different codes were developed and utilized to perform this research. This report presented a summary of the development of the codes and their applications to analyze the turbulence properties at locations near coolant injection holes.

  18. Operational Tsunami Modelling with TsunAWI for the German-Indonesian Tsunami Early Warning System: Recent Developments

    NASA Astrophysics Data System (ADS)

    Rakowsky, N.; Harig, S.; Androsov, A.; Fuchs, A.; Immerz, A.; Schröter, J.; Hiller, W.

    2012-04-01

    Starting in 2005, the GITEWS project (German-Indonesian Tsunami Early Warning System) established from scratch a fully operational tsunami warning system at BMKG in Jakarta. Numerical simulations of prototypic tsunami scenarios play a decisive role in a priori risk assessment for coastal regions and in the early warning process itself. Repositories with currently 3470 regional tsunami scenarios for GITEWS and 1780 Indian Ocean wide scenarios in support of Indonesia as a Regional Tsunami Service Provider (RTSP) were computed with the non-linear shallow water model TsunAWI. It is based on a finite element discretisation, employs unstructured grids with high resolution along the coast and includes inundation. This contribution gives an overview on the model itself, the enhancement of the model physics, and the experiences gained during the process of establishing an operational code suited for thousands of model runs. Technical aspects like computation time, disk space needed for each scenario in the repository, or post processing techniques have a much larger impact than they had in the beginning when TsunAWI started as a research code. Of course, careful testing on artificial benchmarks and real events remains essential, but furthermore, quality control for the large number of scenarios becomes an important issue.

  19. Link Correlation Based Transmit Sector Antenna Selection for Alamouti Coded OFDM

    NASA Astrophysics Data System (ADS)

    Ahn, Chang-Jun

    In MIMO systems, the deployment of a multiple antenna technique can enhance the system performance. However, since the cost of RF transmitters is much higher than that of antennas, there is growing interest in techniques that use a larger number of antennas than the number of RF transmitters. These methods rely on selecting the optimal transmitter antennas and connecting them to the respective RF transmitters. In this case, feedback information (FBI) is required to select the optimal transmitter antenna elements. Since FBI is control overhead, the rate of the feedback is limited. This motivates the study of limited feedback techniques where only partial or quantized information from the receiver is conveyed back to the transmitter. However, in MIMO/OFDM systems, it is difficult to develop an effective FBI quantization method for choosing the space-time, space-frequency, or space-time-frequency processing due to the numerous subchannels. Moreover, MIMO/OFDM systems require antenna separation of 5 ∼ 10 wavelengths to keep the correlation coefficient below 0.7 to achieve a diversity gain. In this case, the base station requires a large space to set up multiple antennas. To mitigate these problems, in this paper we propose link-correlation-based transmit sector antenna selection for Alamouti coded OFDM without FBI.
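
    The Alamouti code named in the title is the standard two-antenna space-time block code applied per OFDM subcarrier. The Python sketch below shows the encoding and the linear combining that recovers full transmit diversity over a flat per-subcarrier channel; it is a generic illustration and does not implement the paper's link-correlation-based sector selection.

```python
import numpy as np

def alamouti_encode(symbols):
    """Map symbol pairs (s1, s2) onto two antennas over two symbol periods:
    antenna 1 sends [s1, -conj(s2)], antenna 2 sends [s2, conj(s1)]."""
    s = np.asarray(symbols).reshape(-1, 2)
    ant1 = np.empty(s.shape[0] * 2, dtype=complex)
    ant2 = np.empty_like(ant1)
    ant1[0::2], ant1[1::2] = s[:, 0], -np.conj(s[:, 1])
    ant2[0::2], ant2[1::2] = s[:, 1],  np.conj(s[:, 0])
    return ant1, ant2

def alamouti_combine(r, h1, h2):
    """Combine two received samples (flat channel per subcarrier) into
    diversity estimates of (s1, s2)."""
    r1, r2 = r
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    return s1_hat, s2_hat

# Quick check on one subcarrier with QPSK symbols and a random flat channel.
rng = np.random.default_rng(1)
s = (rng.choice([-1, 1], 2) + 1j * rng.choice([-1, 1], 2)) / np.sqrt(2)
h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
a1, a2 = alamouti_encode(s)
r = h1 * a1 + h2 * a2                      # two noiseless receive samples
est = np.array(alamouti_combine(r, h1, h2)) / (abs(h1)**2 + abs(h2)**2)
print(np.round(est, 3), np.round(s, 3))    # estimates match the sent symbols
```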

  20. Geant4 Computing Performance Benchmarking and Monitoring

    DOE PAGES

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; ...

    2015-12-23

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

  1. Comparative genomic analysis of 26 Sphingomonas and Sphingobium strains: Dissemination of bioremediation capabilities, biodegradation potential and horizontal gene transfer.

    PubMed

    Zhao, Qiang; Yue, Shengjie; Bilal, Muhammad; Hu, Hongbo; Wang, Wei; Zhang, Xuehong

    2017-12-31

    Bacteria belonging to the genera Sphingomonas and Sphingobium are known for their ability to catabolize aromatic compounds. In this study, we analyzed the whole genome sequences of 26 strains in the genera Sphingomonas and Sphingobium to gain insight into dissemination of bioremediation capabilities, biodegradation potential, central pathways and genome plasticity. Phylogenetic analysis revealed that both Sphingomonas sp. strain BHC-A and Sphingomonas paucimobilis EPA505 should be placed in the genus Sphingobium. The bph and xyl gene cluster was found in 6 polycyclic aromatic hydrocarbons-degrading strains. Transposase and IS coding genes were found in the 6 gene clusters, suggesting the mobility of bph and xyl gene clusters. β-ketoadipate and homogentisate pathways were the main central pathways in Sphingomonas and Sphingobium strains. A large number of oxygenase coding genes were predicted in the 26 genomes, indicating a huge biodegradation potential of the Sphingomonas and Sphingobium strains. Horizontal gene transfer related genes and prophages were predicted in the analyzed strains, suggesting the ongoing evolution and shaping of the genomes. Analysis of the 26 genomes in this work contributes to the understanding of dispersion of bioremediation capabilities, bioremediation potential and genome plasticity in strains belonging to the genera Sphingomonas and Sphingobium. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Phonological coding during reading.

    PubMed

    Leinenger, Mallorie

    2014-11-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  3. Phonological coding during reading

    PubMed Central

    Leinenger, Mallorie

    2014-01-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early (pre-lexical) or that phonological codes come online late (post-lexical)) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model (Van Orden, 1987), dual-route model (e.g., Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001), parallel distributed processing model (Seidenberg & McClelland, 1989)) are discussed. PMID:25150679

  4. A Comparative Encyclopedia of DNA Elements in the Mouse Genome

    PubMed Central

    Yue, Feng; Cheng, Yong; Breschi, Alessandra; Vierstra, Jeff; Wu, Weisheng; Ryba, Tyrone; Sandstrom, Richard; Ma, Zhihai; Davis, Carrie; Pope, Benjamin D.; Shen, Yin; Pervouchine, Dmitri D.; Djebali, Sarah; Thurman, Bob; Kaul, Rajinder; Rynes, Eric; Kirilusha, Anthony; Marinov, Georgi K.; Williams, Brian A.; Trout, Diane; Amrhein, Henry; Fisher-Aylor, Katherine; Antoshechkin, Igor; DeSalvo, Gilberto; See, Lei-Hoon; Fastuca, Meagan; Drenkow, Jorg; Zaleski, Chris; Dobin, Alex; Prieto, Pablo; Lagarde, Julien; Bussotti, Giovanni; Tanzer, Andrea; Denas, Olgert; Li, Kanwei; Bender, M. A.; Zhang, Miaohua; Byron, Rachel; Groudine, Mark T.; McCleary, David; Pham, Long; Ye, Zhen; Kuan, Samantha; Edsall, Lee; Wu, Yi-Chieh; Rasmussen, Matthew D.; Bansal, Mukul S.; Keller, Cheryl A.; Morrissey, Christapher S.; Mishra, Tejaswini; Jain, Deepti; Dogan, Nergiz; Harris, Robert S.; Cayting, Philip; Kawli, Trupti; Boyle, Alan P.; Euskirchen, Ghia; Kundaje, Anshul; Lin, Shin; Lin, Yiing; Jansen, Camden; Malladi, Venkat S.; Cline, Melissa S.; Erickson, Drew T.; Kirkup, Vanessa M; Learned, Katrina; Sloan, Cricket A.; Rosenbloom, Kate R.; de Sousa, Beatriz Lacerda; Beal, Kathryn; Pignatelli, Miguel; Flicek, Paul; Lian, Jin; Kahveci, Tamer; Lee, Dongwon; Kent, W. James; Santos, Miguel Ramalho; Herrero, Javier; Notredame, Cedric; Johnson, Audra; Vong, Shinny; Lee, Kristen; Bates, Daniel; Neri, Fidencio; Diegel, Morgan; Canfield, Theresa; Sabo, Peter J.; Wilken, Matthew S.; Reh, Thomas A.; Giste, Erika; Shafer, Anthony; Kutyavin, Tanya; Haugen, Eric; Dunn, Douglas; Reynolds, Alex P.; Neph, Shane; Humbert, Richard; Hansen, R. Scott; De Bruijn, Marella; Selleri, Licia; Rudensky, Alexander; Josefowicz, Steven; Samstein, Robert; Eichler, Evan E.; Orkin, Stuart H.; Levasseur, Dana; Papayannopoulou, Thalia; Chang, Kai-Hsin; Skoultchi, Arthur; Gosh, Srikanta; Disteche, Christine; Treuting, Piper; Wang, Yanli; Weiss, Mitchell J.; Blobel, Gerd A.; Good, Peter J.; Lowdon, Rebecca F.; Adams, Leslie B.; Zhou, Xiao-Qiao; Pazin, Michael J.; Feingold, Elise A.; Wold, Barbara; Taylor, James; Kellis, Manolis; Mortazavi, Ali; Weissman, Sherman M.; Stamatoyannopoulos, John; Snyder, Michael P.; Guigo, Roderic; Gingeras, Thomas R.; Gilbert, David M.; Hardison, Ross C.; Beer, Michael A.; Ren, Bing

    2014-01-01

    Summary As the premier model organism in biomedical research, the laboratory mouse shares the majority of protein-coding genes with humans, yet the two mammals differ in significant ways. To gain greater insights into both shared and species-specific transcriptional and cellular regulatory programs in the mouse, the Mouse ENCODE Consortium has mapped transcription, DNase I hypersensitivity, transcription factor binding, chromatin modifications, and replication domains throughout the mouse genome in diverse cell and tissue types. By comparing with the human genome, we not only confirm substantial conservation in the newly annotated potential functional sequences, but also find a large degree of divergence of other sequences involved in transcriptional regulation, chromatin state and higher order chromatin organization. Our results illuminate the wide range of evolutionary forces acting on genes and their regulatory regions, and provide a general resource for research into mammalian biology and mechanisms of human diseases. PMID:25409824

  5. A comparative encyclopedia of DNA elements in the mouse genome.

    PubMed

    Yue, Feng; Cheng, Yong; Breschi, Alessandra; Vierstra, Jeff; Wu, Weisheng; Ryba, Tyrone; Sandstrom, Richard; Ma, Zhihai; Davis, Carrie; Pope, Benjamin D; Shen, Yin; Pervouchine, Dmitri D; Djebali, Sarah; Thurman, Robert E; Kaul, Rajinder; Rynes, Eric; Kirilusha, Anthony; Marinov, Georgi K; Williams, Brian A; Trout, Diane; Amrhein, Henry; Fisher-Aylor, Katherine; Antoshechkin, Igor; DeSalvo, Gilberto; See, Lei-Hoon; Fastuca, Meagan; Drenkow, Jorg; Zaleski, Chris; Dobin, Alex; Prieto, Pablo; Lagarde, Julien; Bussotti, Giovanni; Tanzer, Andrea; Denas, Olgert; Li, Kanwei; Bender, M A; Zhang, Miaohua; Byron, Rachel; Groudine, Mark T; McCleary, David; Pham, Long; Ye, Zhen; Kuan, Samantha; Edsall, Lee; Wu, Yi-Chieh; Rasmussen, Matthew D; Bansal, Mukul S; Kellis, Manolis; Keller, Cheryl A; Morrissey, Christapher S; Mishra, Tejaswini; Jain, Deepti; Dogan, Nergiz; Harris, Robert S; Cayting, Philip; Kawli, Trupti; Boyle, Alan P; Euskirchen, Ghia; Kundaje, Anshul; Lin, Shin; Lin, Yiing; Jansen, Camden; Malladi, Venkat S; Cline, Melissa S; Erickson, Drew T; Kirkup, Vanessa M; Learned, Katrina; Sloan, Cricket A; Rosenbloom, Kate R; Lacerda de Sousa, Beatriz; Beal, Kathryn; Pignatelli, Miguel; Flicek, Paul; Lian, Jin; Kahveci, Tamer; Lee, Dongwon; Kent, W James; Ramalho Santos, Miguel; Herrero, Javier; Notredame, Cedric; Johnson, Audra; Vong, Shinny; Lee, Kristen; Bates, Daniel; Neri, Fidencio; Diegel, Morgan; Canfield, Theresa; Sabo, Peter J; Wilken, Matthew S; Reh, Thomas A; Giste, Erika; Shafer, Anthony; Kutyavin, Tanya; Haugen, Eric; Dunn, Douglas; Reynolds, Alex P; Neph, Shane; Humbert, Richard; Hansen, R Scott; De Bruijn, Marella; Selleri, Licia; Rudensky, Alexander; Josefowicz, Steven; Samstein, Robert; Eichler, Evan E; Orkin, Stuart H; Levasseur, Dana; Papayannopoulou, Thalia; Chang, Kai-Hsin; Skoultchi, Arthur; Gosh, Srikanta; Disteche, Christine; Treuting, Piper; Wang, Yanli; Weiss, Mitchell J; Blobel, Gerd A; Cao, Xiaoyi; Zhong, Sheng; Wang, Ting; Good, Peter J; Lowdon, Rebecca F; Adams, Leslie B; Zhou, Xiao-Qiao; Pazin, Michael J; Feingold, Elise A; Wold, Barbara; Taylor, James; Mortazavi, Ali; Weissman, Sherman M; Stamatoyannopoulos, John A; Snyder, Michael P; Guigo, Roderic; Gingeras, Thomas R; Gilbert, David M; Hardison, Ross C; Beer, Michael A; Ren, Bing

    2014-11-20

    The laboratory mouse shares the majority of its protein-coding genes with humans, making it the premier model organism in biomedical research, yet the two mammals differ in significant ways. To gain greater insights into both shared and species-specific transcriptional and cellular regulatory programs in the mouse, the Mouse ENCODE Consortium has mapped transcription, DNase I hypersensitivity, transcription factor binding, chromatin modifications and replication domains throughout the mouse genome in diverse cell and tissue types. By comparing with the human genome, we not only confirm substantial conservation in the newly annotated potential functional sequences, but also find a large degree of divergence of sequences involved in transcriptional regulation, chromatin state and higher order chromatin organization. Our results illuminate the wide range of evolutionary forces acting on genes and their regulatory regions, and provide a general resource for research into mammalian biology and mechanisms of human diseases.

  6. Therapeutically induced changes in couple identity: the role of we-ness and interpersonal processing in relationship satisfaction.

    PubMed

    Reid, David W; Dalton, E Jane; Laderoute, Kristine; Doell, Faye K; Nguyen, Thao

    2006-08-01

    Changes in partners' sense of self-in-relationship, which a systemic-constructivist couple therapy (SCCT) induced, led to robust improvement in satisfaction in 2 studies and a follow-up study. In each study, 13 referred couples completed measures of satisfaction, mutuality, similarities, and other-in-self construal pre-post 12 hours of SCCT. The authors reliably coded transcripts of 1st and final sessions for each partner's we-ness, the identity that each partner establishes in relationship to the other. Having met the criteria for the rigorous study of change in single group process-outcome design, changes in we-ness accompanied large posttherapy dyadic increments on all variables in each study. Therapeutic gains appeared at follow-up and correlated with increased we-ness obtained at end of therapy 2 years earlier. The authors raise theoretical implications for all types of couple therapies and explain clinical techniques.

  7. Sequencing of Seven Haloarchaeal Genomes Reveals Patterns of Genomic Flux

    PubMed Central

    Lynch, Erin A.; Langille, Morgan G. I.; Darling, Aaron; Wilbanks, Elizabeth G.; Haltiner, Caitlin; Shao, Katie S. Y.; Starr, Michael O.; Teiling, Clotilde; Harkins, Timothy T.; Edwards, Robert A.; Eisen, Jonathan A.; Facciotti, Marc T.

    2012-01-01

    We report the sequencing of seven genomes from two haloarchaeal genera, Haloferax and Haloarcula. Ease of cultivation and the existence of well-developed genetic and biochemical tools for several diverse haloarchaeal species make haloarchaea a model group for the study of archaeal biology. The unique physiological properties of these organisms also make them good candidates for novel enzyme discovery for biotechnological applications. Seven genomes were sequenced to ∼20× coverage and assembled to an average of 50 contigs (range: 5 scaffolds to 168 contigs). Comparisons of protein-coding gene complements revealed large-scale differences in COG functional group enrichment between these genera. Analysis of genes encoding machinery for DNA metabolism reveals genera-specific expansions of the general transcription factor TATA binding protein as well as a history of extensive duplication and horizontal transfer of the proliferating cell nuclear antigen. Insights gained from this study emphasize the importance of haloarchaea for investigation of archaeal biology. PMID:22848480

  8. Adaptation to stimulus statistics in the perception and neural representation of auditory space.

    PubMed

    Dahmen, Johannes C; Keating, Peter; Nodal, Fernando R; Schulz, Andreas L; King, Andrew J

    2010-06-24

    Sensory systems are known to adapt their coding strategies to the statistics of their environment, but little is still known about the perceptual implications of such adjustments. We investigated how auditory spatial processing adapts to stimulus statistics by presenting human listeners and anesthetized ferrets with noise sequences in which interaural level differences (ILD) rapidly fluctuated according to a Gaussian distribution. The mean of the distribution biased the perceived laterality of a subsequent stimulus, whereas the distribution's variance changed the listeners' spatial sensitivity. The responses of neurons in the inferior colliculus changed in line with these perceptual phenomena. Their ILD preference adjusted to match the stimulus distribution mean, resulting in large shifts in rate-ILD functions, while their gain adapted to the stimulus variance, producing pronounced changes in neural sensitivity. Our findings suggest that processing of auditory space is geared toward emphasizing relative spatial differences rather than the accurate representation of absolute position.

  9. Large-scale whole-genome sequencing of the Icelandic population.

    PubMed

    Gudbjartsson, Daniel F; Helgason, Hannes; Gudjonsson, Sigurjon A; Zink, Florian; Oddson, Asmundur; Gylfason, Arnaldur; Besenbacher, Soren; Magnusson, Gisli; Halldorsson, Bjarni V; Hjartarson, Eirikur; Sigurdsson, Gunnar Th; Stacey, Simon N; Frigge, Michael L; Holm, Hilma; Saemundsdottir, Jona; Helgadottir, Hafdis Th; Johannsdottir, Hrefna; Sigfusson, Gunnlaugur; Thorgeirsson, Gudmundur; Sverrisson, Jon Th; Gretarsdottir, Solveig; Walters, G Bragi; Rafnar, Thorunn; Thjodleifsson, Bjarni; Bjornsson, Einar S; Olafsson, Sigurdur; Thorarinsdottir, Hildur; Steingrimsdottir, Thora; Gudmundsdottir, Thora S; Theodors, Asgeir; Jonasson, Jon G; Sigurdsson, Asgeir; Bjornsdottir, Gyda; Jonsson, Jon J; Thorarensen, Olafur; Ludvigsson, Petur; Gudbjartsson, Hakon; Eyjolfsson, Gudmundur I; Sigurdardottir, Olof; Olafsson, Isleifur; Arnar, David O; Magnusson, Olafur Th; Kong, Augustine; Masson, Gisli; Thorsteinsdottir, Unnur; Helgason, Agnar; Sulem, Patrick; Stefansson, Kari

    2015-05-01

    Here we describe the insights gained from sequencing the whole genomes of 2,636 Icelanders to a median depth of 20×. We found 20 million SNPs and 1.5 million insertions-deletions (indels). We describe the density and frequency spectra of sequence variants in relation to their functional annotation, gene position, pathway and conservation score. We demonstrate an excess of homozygosity and rare protein-coding variants in Iceland. We imputed these variants into 104,220 individuals down to a minor allele frequency of 0.1% and found a recessive frameshift mutation in MYL4 that causes early-onset atrial fibrillation, several mutations in ABCB4 that increase risk of liver diseases and an intronic variant in GNAS associating with increased thyroid-stimulating hormone levels when maternally inherited. These data provide a study design that can be used to determine how variation in the sequence of the human genome gives rise to human diversity.

  10. Multidisciplinary optimization of a controlled space structure using 150 design variables

    NASA Technical Reports Server (NTRS)

    James, Benjamin B.

    1993-01-01

    A controls-structures interaction design method is presented. The method coordinates standard finite-element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structure and control system of a spacecraft. Global sensitivity equations are used to account for coupling between the disciplines. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Design problems using 15, 63, and 150 design variables to optimize truss member sizes and feedback gain values are solved and the results are presented. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporation of the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables.
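
    The global sensitivity equations referred to above couple local discipline sensitivities into total derivatives through a single linear solve, so the coupled analysis does not need to be re-converged for every design variable. The sketch below shows the generic two-discipline form with illustrative partial-derivative values; it is not the geostationary-platform model used in the study.

```python
import numpy as np

# Global sensitivity equations (GSE) for two coupled disciplines
#   y1 = f1(x, y2),  y2 = f2(x, y1)
# Given local partial derivatives, the total derivatives dy/dx solve a
# linear system.  The numerical values below are purely illustrative.
df1_dy2 = np.array([[0.3]])        # d f1 / d y2
df2_dy1 = np.array([[0.5]])        # d f2 / d y1
df1_dx  = np.array([[1.0, 2.0]])   # d f1 / d x  (one output, two design variables)
df2_dx  = np.array([[0.0, 4.0]])   # d f2 / d x

A = np.block([[np.eye(1), -df1_dy2],
              [-df2_dy1,  np.eye(1)]])
b = np.vstack([df1_dx, df2_dx])
dy_dx = np.linalg.solve(A, b)      # row 0: dy1/dx, row 1: dy2/dx
print(dy_dx)
```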

  11. SeqCompress: an algorithm for biological sequence compression.

    PubMed

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically to design bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage cost has become a noticeable proportion of the total cost of data generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, so data volumes may eventually exceed available storage. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm achieves a better compression gain than other existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
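
    The abstract describes the general recipe of a statistical model driving an arithmetic coder. The Python sketch below illustrates only that idea: an adaptive order-2 Markov model over the DNA alphabet whose per-base probabilities give the ideal arithmetic-coding cost in bits. It is a toy estimate of compressibility, not the SeqCompress algorithm.

```python
from collections import defaultdict
from math import log2

def model_cost_bits(seq, order=2):
    """Estimate the size (in bits) an ideal arithmetic coder would achieve
    when driven by an adaptive order-`order` Markov model over A, C, G, T.
    Illustrates the statistical-modelling idea only; not SeqCompress itself."""
    alphabet = "ACGT"
    counts = defaultdict(lambda: {b: 1 for b in alphabet})  # add-one smoothing
    bits = 0.0
    for i, base in enumerate(seq):
        ctx = seq[max(0, i - order):i]
        dist = counts[ctx]
        total = sum(dist.values())
        bits += -log2(dist[base] / total)    # ideal arithmetic-coding cost
        dist[base] += 1                      # adaptive update after coding
    return bits

dna = "ACGT" * 1000 + "AAAAACCCCC" * 200
print(f"{model_cost_bits(dna) / len(dna):.3f} bits/base vs 2.0 for raw 2-bit packing")
```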

  12. New Method for Producing Significant Amounts of RNA Labeled at Specific Sites | Center for Cancer Research

    Cancer.gov

    Among biomacromolecules, RNA is the most versatile, and it plays indispensable roles in almost all aspects of biology. For example, in addition to serving as mRNAs coding for proteins, RNAs regulate gene expression, such as controlling where, when, and how efficiently a gene gets expressed, participate in RNA processing, encode the genetic information of some viruses, serve as scaffolds, and even possess enzymatic activity. To study these RNAs and their biological functions and to make use of those RNA activities for biomedical applications, researchers first need to make various types of RNA. For structural biologists, incorporating modified or labeled nucleotides at specific sites in RNA molecules of interest is critical to gaining structural insight into RNA functions. However, placing labeled or modified residue(s) in desired positions in a large RNA has not been possible until now.

  13. Evaluation of a bar-code system to detect unaccompanied baggage

    DOT National Transportation Integrated Search

    1988-02-01

    The objective of the Unaccompanied Baggage Detection System (UBDS) Project has been to gain field experience with a system designed to identify passengers who check baggage for a flight and subsequently fail to board that flight. In the first p...

  14. A comparison of five benchmarks

    NASA Technical Reports Server (NTRS)

    Huss, Janice E.; Pennline, James A.

    1987-01-01

    Five benchmark programs were obtained and run on the NASA Lewis CRAY X-MP/24. A comparison was made between the program codes and between the methods for calculating performance figures. Several multitasking jobs were run to gain experience in how parallel performance is measured.

  15. Paternity testing in an autotetraploid alfalfa breeding polycross

    USDA-ARS?s Scientific Manuscript database

    Determining unknown parentage in autotetraploid alfalfa (Medicago sativa L.) (2n = 4x = 32) can improve breeding gains. Exclusion analysis based paternity testing SAS code is presented, amenable to genotyping errors, for autotetraploid species utilizing co-dominant molecular markers with ambiguous d...

  16. Ultra-high gain diffusion-driven organic transistor.

    PubMed

    Torricelli, Fabrizio; Colalongo, Luigi; Raiteri, Daniele; Kovács-Vajna, Zsolt Miklós; Cantatore, Eugenio

    2016-02-01

    Emerging large-area technologies based on organic transistors are enabling the fabrication of low-cost flexible circuits, smart sensors and biomedical devices. High-gain transistors are essential for the development of large-scale circuit integration, high-sensitivity sensors and signal amplification in sensing systems. Unfortunately, organic field-effect transistors show limited gain, usually of the order of tens, because of the large contact resistance and channel-length modulation. Here we show a new organic field-effect transistor architecture with a gain larger than 700. This is the highest gain ever reported for organic field-effect transistors. In the proposed organic field-effect transistor, the charge injection and extraction at the metal-semiconductor contacts are driven by the charge diffusion. The ideal conditions of ohmic contacts with negligible contact resistance and flat current saturation are demonstrated. The approach is general and can be extended to any thin-film technology opening unprecedented opportunities for the development of high-performance flexible electronics.

  17. Ultra-high gain diffusion-driven organic transistor

    NASA Astrophysics Data System (ADS)

    Torricelli, Fabrizio; Colalongo, Luigi; Raiteri, Daniele; Kovács-Vajna, Zsolt Miklós; Cantatore, Eugenio

    2016-02-01

    Emerging large-area technologies based on organic transistors are enabling the fabrication of low-cost flexible circuits, smart sensors and biomedical devices. High-gain transistors are essential for the development of large-scale circuit integration, high-sensitivity sensors and signal amplification in sensing systems. Unfortunately, organic field-effect transistors show limited gain, usually of the order of tens, because of the large contact resistance and channel-length modulation. Here we show a new organic field-effect transistor architecture with a gain larger than 700. This is the highest gain ever reported for organic field-effect transistors. In the proposed organic field-effect transistor, the charge injection and extraction at the metal-semiconductor contacts are driven by the charge diffusion. The ideal conditions of ohmic contacts with negligible contact resistance and flat current saturation are demonstrated. The approach is general and can be extended to any thin-film technology opening unprecedented opportunities for the development of high-performance flexible electronics.

  18. Construction method of QC-LDPC codes based on multiplicative group of finite field in optical communication

    NASA Astrophysics Data System (ADS)

    Huang, Sheng; Ao, Xiang; Li, Yuan-yuan; Zhang, Rui

    2016-09-01

    In order to meet the needs of the high-speed development of optical communication systems, a construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes based on the multiplicative group of a finite field is proposed. The Tanner graph of the parity check matrix of a code constructed by this method has no cycles of length 4, which ensures that the obtained code has a good distance property. Simulation results show that when the bit error rate (BER) is 10^-6, in the same simulation environment, the net coding gain (NCG) of the proposed QC-LDPC(3780, 3540) code with a code rate of 93.7% is improved by 2.18 dB and 1.6 dB respectively compared with those of the RS(255, 239) code in ITU-T G.975 and the LDPC(32640, 30592) code in ITU-T G.975.1. In addition, the NCG of the proposed QC-LDPC(3780, 3540) code is respectively 0.2 dB and 0.4 dB higher than those of the SG-QC-LDPC(3780, 3540) code based on two different subgroups of a finite field and the AS-QC-LDPC(3780, 3540) code based on two arbitrary sets of a finite field. Thus, the proposed QC-LDPC(3780, 3540) code can be well applied in optical communication systems.
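
    One common way to obtain a 4-cycle-free quasi-cyclic parity-check matrix from the multiplicative group of a finite field is to take an exponent (circulant-shift) matrix of the form E(i, j) = a_i·b_j mod p, with p prime and distinct nonzero a_i and b_j; the difference condition that rules out length-4 cycles then holds automatically. The Python sketch below demonstrates this generic construction on toy parameters; it is not necessarily the specific construction proposed in the paper.

```python
from itertools import combinations

def exponent_matrix(row_elems, col_elems, p):
    """Exponent (shift) matrix E with E[i][j] = a_i * b_j mod p, where a_i and
    b_j are distinct elements of the multiplicative group of GF(p).  Each entry
    later expands to an L x L circulant permutation matrix with L = p."""
    return [[(a * b) % p for b in col_elems] for a in row_elems]

def is_four_cycle_free(E, L):
    """Girth > 4 iff E[i1][j1] - E[i1][j2] + E[i2][j2] - E[i2][j1] != 0 (mod L)
    for every row pair (i1, i2) and column pair (j1, j2)."""
    rows, cols = len(E), len(E[0])
    for i1, i2 in combinations(range(rows), 2):
        for j1, j2 in combinations(range(cols), 2):
            if (E[i1][j1] - E[i1][j2] + E[i2][j2] - E[i2][j1]) % L == 0:
                return False
    return True

# Toy parameters (p = 31, 3 check-rows, 12 code-columns); purely illustrative,
# far smaller than the QC-LDPC(3780, 3540) code discussed above.
p = 31
E = exponent_matrix(row_elems=[1, 2, 3], col_elems=list(range(1, 13)), p=p)
print(is_four_cycle_free(E, L=p))   # True: the Tanner graph has no 4-cycles
```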

  19. paraGSEA: a scalable approach for large-scale gene expression profiling

    PubMed Central

    Peng, Shaoliang; Yang, Shunyun

    2017-01-01

    A growing number of studies use gene expression similarity to identify functional connections among genes, diseases and drugs. Gene Set Enrichment Analysis (GSEA) is a powerful analytical method for interpreting gene expression data. However, due to the enormous computational overhead of its significance-level estimation and multiple-hypothesis-testing steps, its scalability and efficiency are poor on large-scale datasets. We propose paraGSEA for efficient large-scale transcriptome data analysis. By optimization, the overall time complexity of paraGSEA is reduced from O(mn) to O(m+n), where m is the length of the gene sets and n is the length of the gene expression profiles, which contributes a more than 100-fold increase in performance compared with other popular GSEA implementations such as GSEA-P, SAM-GS and GSEA2. By further parallelization, a near-linear speed-up is gained on both workstations and clusters, with high scalability and performance on large-scale datasets. The analysis time for the whole LINCS phase I dataset (GSE92742) was reduced to nearly half an hour on a 1000-node cluster on Tianhe-2, or to within 120 hours on a 96-core workstation. The source code of paraGSEA is licensed under the GPLv3 and available at http://github.com/ysycloud/paraGSEA. PMID:28973463
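
    The kernel that the significance-estimation and multiple-hypothesis-testing steps evaluate repeatedly is the GSEA running-sum enrichment score, a weighted Kolmogorov-Smirnov-like statistic. The Python sketch below computes that score for one gene set on a ranked profile; it is the textbook statistic, not paraGSEA's optimized or parallel implementation, and the example data are synthetic.

```python
import numpy as np

def enrichment_score(ranked_genes, correlations, gene_set, p=1.0):
    """Weighted Kolmogorov-Smirnov running-sum enrichment score (the GSEA core
    statistic).  `ranked_genes` is ordered by correlation with the phenotype;
    `correlations` holds the matching ranking-metric values."""
    in_set = np.isin(ranked_genes, list(gene_set))
    weights = np.abs(np.asarray(correlations, dtype=float)) ** p
    hit = np.where(in_set, weights, 0.0)
    hit = hit / hit.sum()                                   # P_hit increments
    miss = np.where(~in_set, 1.0 / (~in_set).sum(), 0.0)    # P_miss increments
    running = np.cumsum(hit - miss)
    return running[np.argmax(np.abs(running))]              # signed max deviation

# Tiny illustrative profile: 20 genes ranked by correlation, one 3-gene set.
genes = np.array(["g%d" % i for i in range(20)])
corr = np.linspace(2.0, -2.0, 20)
print(enrichment_score(genes, corr, gene_set={"g0", "g1", "g5"}))
```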

  20. All fiber passively Q-switched laser

    DOEpatents

    Soh, Daniel B. S.; Bisson, Scott E

    2015-05-12

    Embodiments relate to an all fiber passively Q-switched laser. The laser includes a large core doped gain fiber having a first end. The large core doped gain fiber has a first core diameter. The laser includes a doped single mode fiber (saturable absorber) having a second core diameter that is smaller than the first core diameter. The laser includes a mode transformer positioned between a second end of the large core doped gain fiber and a first end of the single mode fiber. The mode transformer has a core diameter that transitions from the first core diameter to the second core diameter and filters out light modes not supported by the doped single mode fiber. The laser includes a laser cavity formed between a first reflector positioned adjacent the large core doped gain fiber and a second reflector positioned adjacent the doped single mode fiber.

  1. How are learning physics and student beliefs about learning physics connected? Measuring epistemological self-reflection in an introductory course and investigating its relationship to conceptual learning

    NASA Astrophysics Data System (ADS)

    May, David B.

    2002-11-01

    To explore students' epistemological beliefs in a variety of conceptual domains in physics, and in a specific and novel context of measurement, this Dissertation makes use of Weekly Reports, a class assignment in which students reflect in writing on what they learn each week and how they learn it. Reports were assigned to students in the introductory physics course for honors engineering majors at The Ohio State University in two successive years. The Weekly Reports of several students from the first year were analyzed for the kinds of epistemological beliefs exhibited therein, called epistemological self-reflection, and a coding scheme was developed for categorizing and quantifying this reflection. The connection between epistemological self-reflection and conceptual learning in physics seen in a pilot study was replicated in a larger study, in which the coded reflections from the Weekly Reports of thirty students were correlated with their conceptual learning gains. Although the total amount of epistemological self-reflection was not found to be related to conceptual gain, different kinds of epistemological self-reflection were. Describing learning physics concepts in terms of logical reasoning and making personal connections were positively correlated with gains; describing learning from authority figures or by observing phenomena without making inferences were negatively correlated. Linear regression equations were determined in order to quantify the effects on conceptual gain of specific ways of describing learning. In an experimental test of this model, the regression equations and the Weekly Report coding scheme developed from the first year's data were used to predict the conceptual gains of thirty students from the second year. The prediction was unsuccessful, possibly because these students were not given as much feedback on their reflections as were the first-year students. These results show that epistemological beliefs are important factors affecting the conceptual learning of physics students. Also, getting students to reflect meaningfully on their knowledge and learning is difficult and requires consistent feedback. Research into the epistemological beliefs of physics students in different contexts and from different populations can help us develop more complete models of epistemological beliefs, and ultimately improve the conceptual and epistemological knowledge of all students.
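
    A hedged illustration of the kind of linear model described above, fitting conceptual gain to counts of coded reflection categories; the category labels and numbers are invented placeholders, not data from the study.

    ```python
    # Illustrative least-squares fit of conceptual gain on reflection-code counts.
    import numpy as np

    # rows = students; columns = counts of reflection types
    # [logical reasoning, personal connection, authority, observation-only]
    X = np.array([[5, 3, 1, 0],
                  [2, 1, 4, 3],
                  [4, 4, 0, 1],
                  [1, 0, 5, 4],
                  [3, 2, 2, 2]], dtype=float)
    gain = np.array([0.62, 0.21, 0.70, 0.15, 0.40])  # normalized gains (made up)

    # Fit gain ~ intercept + coefficients * category counts via least squares.
    A = np.hstack([np.ones((X.shape[0], 1)), X])
    coef, *_ = np.linalg.lstsq(A, gain, rcond=None)
    print("intercept and per-category effects:", np.round(coef, 3))
    print("predicted gain for the first student:", A[0] @ coef)
    ```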

  2. Tuning time-frequency methods for the detection of metered HF speech

    NASA Astrophysics Data System (ADS)

    Nelson, Douglas J.; Smith, Lawrence H.

    2002-12-01

    Speech is metered if the stresses occur at a nearly regular rate. Metered speech is common in poetry, and it can occur naturally in speech, if the speaker is spelling a word or reciting words or numbers from a list. In radio communications, the CQ request, call sign and other codes are frequently metered. In tactical communications and air traffic control, location, heading and identification codes may be metered. Moreover metering may be expected to survive even in HF communications, which are corrupted by noise, interference and mistuning. For this environment, speech recognition and conventional machine-based methods are not effective. We describe Time-Frequency methods which have been adapted successfully to the problem of mitigation of HF signal conditions and detection of metered speech. These methods are based on modeled time and frequency correlation properties of nearly harmonic functions. We derive these properties and demonstrate a performance gain over conventional correlation and spectral methods. Finally, in addressing the problem of HF single sideband (SSB) communications, the problems of carrier mistuning, interfering signals, such as manual Morse, and fast automatic gain control (AGC) must be addressed. We demonstrate simple methods which may be used to blindly mitigate mistuning and narrowband interference, and effectively invert the fast automatic gain function.

  3. Inclusion of pressure and flow in a new 3D MHD equilibrium code

    NASA Astrophysics Data System (ADS)

    Raburn, Daniel; Fukuyama, Atsushi

    2012-10-01

    Flow and nonsymmetric effects can play a large role in plasma equilibria and energy confinement. A concept for such a 3D equilibrium code was developed and presented in 2011. The code is called the Kyoto ITerative Equilibrium Solver (KITES) [1], and the concept is based largely on the PIES code [2]. More recently, the work-in-progress KITES code was used to calculate force-free equilibria. Here, progress and results on the inclusion of pressure and flow in the code are presented. [1] Daniel Raburn and Atsushi Fukuyama, Plasma and Fusion Research: Regular Articles, 7:240381 (2012). [2] H. S. Greenside, A. H. Reiman, and A. Salas, J. Comput. Phys, 81(1):102-136 (1989).

  4. Dynamic Divisive Normalization Predicts Time-Varying Value Coding in Decision-Related Circuits

    PubMed Central

    LoFaro, Thomas; Webb, Ryan; Glimcher, Paul W.

    2014-01-01

    Normalization is a widespread neural computation, mediating divisive gain control in sensory processing and implementing a context-dependent value code in decision-related frontal and parietal cortices. Although decision-making is a dynamic process with complex temporal characteristics, most models of normalization are time-independent and little is known about the dynamic interaction of normalization and choice. Here, we show that a simple differential equation model of normalization explains the characteristic phasic-sustained pattern of cortical decision activity and predicts specific normalization dynamics: value coding during initial transients, time-varying value modulation, and delayed onset of contextual information. Empirically, we observe these predicted dynamics in saccade-related neurons in monkey lateral intraparietal cortex. Furthermore, such models naturally incorporate a time-weighted average of past activity, implementing an intrinsic reference-dependence in value coding. These results suggest that a single network mechanism can explain both transient and sustained decision activity, emphasizing the importance of a dynamic view of normalization in neural coding. PMID:25429145
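
    A minimal sketch, under assumed parameter values, of a dynamic divisive-normalization model in the spirit of the differential-equation account above: a fast value-coding variable divided by a slower normalization pool yields the phasic-then-sustained response pattern. The exact equations and constants are illustrative, not the authors' fitted model.

    ```python
    # Hedged sketch of dynamic divisive normalization via Euler integration.
    import numpy as np

    def simulate(values, tau_r=0.05, tau_g=0.1, sigma=1.0, dt=0.001, t_end=1.0):
        """Firing rate R_i is driven by value V_i divided by a slower gain
        signal G that integrates the summed input, producing a phasic
        transient followed by a sustained, normalized response."""
        values = np.asarray(values, dtype=float)
        steps = int(t_end / dt)
        R = np.zeros_like(values)
        G = 0.0
        trace = np.zeros((steps, values.size))
        for t in range(steps):
            G += dt / tau_g * (-G + values.sum())          # slow normalization pool
            R += dt / tau_r * (-R + values / (sigma + G))  # fast value-coding units
            trace[t] = R
        return trace

    trace = simulate([10.0, 5.0, 1.0])
    print("peak vs. steady-state of unit 0:", trace[:, 0].max(), trace[-1, 0])
    ```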

  5. Improving soft FEC performance for higher-order modulations via optimized bit channel mappings.

    PubMed

    Häger, Christian; Amat, Alexandre Graell I; Brännström, Fredrik; Alvarado, Alex; Agrell, Erik

    2014-06-16

    Soft forward error correction with higher-order modulations is often implemented in practice via the pragmatic bit-interleaved coded modulation paradigm, where a single binary code is mapped to a nonbinary modulation. In this paper, we study the optimization of the mapping of the coded bits to the modulation bits for a polarization-multiplexed fiber-optical system without optical inline dispersion compensation. Our focus is on protograph-based low-density parity-check (LDPC) codes which allow for an efficient hardware implementation, suitable for high-speed optical communications. The optimization is applied to the AR4JA protograph family, and further extended to protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full field simulations via the split-step Fourier method are used to verify the analysis. The results show performance gains of up to 0.25 dB, which translate into a possible extension of the transmission reach by roughly up to 8%, without significantly increasing the system complexity.

  6. Iterative decoding of SOVA and LDPC product code for bit-patterned media recording

    NASA Astrophysics Data System (ADS)

    Jeong, Seongkwon; Lee, Jaejin

    2018-05-01

    The demand for high-density storage systems has increased due to the exponential growth of data. Bit-patterned media recording (BPMR) is one of the promising technologies for achieving densities of 1 Tbit/in2 and higher. To increase the areal density in BPMR, the spacing between islands needs to be reduced, yet this aggravates inter-symbol interference and inter-track interference and degrades the bit error rate performance. In this paper, we propose a decision feedback scheme using a low-density parity check (LDPC) product code for BPMR. This scheme improves the decoding performance using an iterative approach that exchanges extrinsic information and log-likelihood ratio values between the iterative soft output Viterbi algorithm and the LDPC product code. Simulation results show that the proposed LDPC product code can offer 1.8 dB and 2.3 dB gains over a single LDPC code at densities of 2.5 and 3 Tb/in2, respectively, when the bit error rate is 10-6.

  7. Perceptions of low-income African-American mothers about excessive gestational weight gain.

    PubMed

    Herring, Sharon J; Henry, Tasmia Q; Klotz, Alicia A; Foster, Gary D; Whitaker, Robert C

    2012-12-01

    A rising number of low-income African-American mothers gain more weight in pregnancy than is recommended, placing them at risk for poor maternal and fetal health outcomes. Little is known about the perceptions of mothers in this population that may influence excessive gestational weight gain. In 2010-2011, we conducted 4 focus groups with 31 low-income, pregnant African-Americans in Philadelphia. Two readers independently coded the focus group transcripts to identify recurrent themes. We identified 9 themes around perceptions that encouraged or discouraged high gestational weight gain. Mothers attributed high weight gain to eating more in pregnancy, which was the result of being hungrier and the belief that consuming more calories while pregnant was essential for babies' health. Family members, especially participants' own mothers, strongly reinforced the need to "eat for two" to make a healthy baby. Mothers and their families recognized the link between poor fetal outcomes and low weight gains but not higher gains, and thus, most had a greater pre-occupation with too little food intake and weight gain rather than too much. Having physical symptoms from overeating and weight retention after previous pregnancies were factors that discouraged higher gains. Overall, low-income African-American mothers had more perceptions encouraging high gestational weight gain than discouraging it. Interventions to prevent excessive weight gain need to be sensitive to these perceptions. Messages that link guideline recommended weight gain to optimal infant outcomes and mothers' physical symptoms may be most effective for weight control.

  8. Ultra Safe And Secure Blasting System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, M M

    2009-07-27

    The Ultra is a blasting system that is designed for special applications where the risk and consequences of unauthorized demolition or blasting are so great that the use of an extraordinarily safe and secure blasting system is justified. Such a blasting system would be connected and logically welded together through digital code-linking as part of the blasting system set-up and initialization process. The Ultra's security is so robust that it will defeat the people who designed and built the components in any attempt at unauthorized detonation. Anyone attempting to gain unauthorized control of the system by substituting components or tapping into communications lines will be thwarted by their inability to provide encrypted authentication. Authentication occurs through the use of codes that are generated by the system during initialization code-linking and the codes remain unknown to anyone, including the authorized operator. Once code-linked, a closed system has been created. The system requires all components connected as they were during initialization as well as a unique code entered by the operator for function and blasting.

  9. Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction

    PubMed Central

    Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin

    2016-01-01

    High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to implement the otherness encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems. PMID:27814367

  10. Comparisons of 'Identical' Simulations by the Eulerian Gyrokinetic Codes GS2 and GYRO

    NASA Astrophysics Data System (ADS)

    Bravenec, R. V.; Ross, D. W.; Candy, J.; Dorland, W.; McKee, G. R.

    2003-10-01

    A major goal of the fusion program is to be able to predict tokamak transport from first-principles theory. To this end, the Eulerian gyrokinetic code GS2 was developed years ago and continues to be improved [1]. Recently, the Eulerian code GYRO was developed [2]. These codes are not subject to the statistical noise inherent to particle-in-cell (PIC) codes, and have been very successful in treating electromagnetic fluctuations. GS2 is fully spectral in the radial coordinate while GYRO uses finite-differences and "banded" spectral schemes. To gain confidence in nonlinear simulations of experiment with these codes, "apples-to-apples" comparisons (identical profile inputs, flux-tube geometry, two species, etc.) are first performed. We report on a series of linear and nonlinear comparisons (with overall agreement) including kinetic electrons, collisions, and shaped flux surfaces. We also compare nonlinear simulations of a DIII-D discharge to measurements of not only the fluxes but also the turbulence parameters. [1] F. Jenko, et al., Phys. Plasmas 7, 1904 (2000) and refs. therein. [2] J. Candy, J. Comput. Phys. 186, 545 (2003).

  11. Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction.

    PubMed

    Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin

    2016-01-01

    High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to implement the otherness encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems.

  12. What to do with a Dead Research Code

    NASA Astrophysics Data System (ADS)

    Nemiroff, Robert J.

    2016-01-01

    The project has ended -- should all of the computer codes that enabled the project be deleted? No. Like research papers, research codes typically carry valuable information past project end dates. Several possible end states to the life of research codes are reviewed. Historically, codes are typically left dormant on an increasingly obscure local disk directory until forgotten. These codes will likely become any or all of: lost, impossible to compile and run, difficult to decipher, and likely deleted when the code's proprietor moves on or dies. It is argued here, though, that it would be better for both code authors and astronomy generally if project codes were archived after use in some way. Archiving is advantageous for code authors because archived codes might increase the author's ADS citable publications, while astronomy as a science gains transparency and reproducibility. Paper-specific codes should be included in the publication of the journal papers they support, just like figures and tables. General codes that support multiple papers, possibly written by multiple authors, including their supporting websites, should be registered with a code registry such as the Astrophysics Source Code Library (ASCL). Codes developed on GitHub can be archived with a third party service such as, currently, BackHub. An important code version might be uploaded to a web archiving service like, currently, Zenodo or Figshare, so that this version receives a Digital Object Identifier (DOI), enabling it to be found at a stable address into the future. Similar archiving services that are not DOI-dependent include perma.cc and the Internet Archive Wayback Machine at archive.org. Perhaps most simply, copies of important codes with lasting value might be kept on a cloud service like, for example, Google Drive, while activating Google's Inactive Account Manager.

  13. Stable Short-Term Frequency Support Using Adaptive Gains for a DFIG-Based Wind Power Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Jinsik; Jang, Gilsoo; Muljadi, Eduard

    For the fixed-gain inertial control of wind power plants (WPPs), a large gain setting provides a large contribution to supporting system frequency control, but it may cause over-deceleration for a wind turbine generator that has a small amount of kinetic energy (KE). Further, if the wind speed decreases during inertial control, even a small gain may cause over-deceleration. This paper proposes a stable inertial control scheme using adaptive gains for a doubly fed induction generator (DFIG)-based WPP. The scheme aims to improve the frequency nadir (FN) while ensuring stable operation of all DFIGs, particularly when the wind speed decreases during inertial control. In this scheme, adaptive gains are set to be proportional to the KE stored in DFIGs, which is spatially and temporally dependent. To improve the FN, upon detecting an event, large gains are set to be proportional to the KE of DFIGs; to ensure stable operation, the gains decrease with the declining KE. The simulation results demonstrate that the scheme improves the FN while ensuring stable operation of all DFIGs in various wind and system conditions. Further, it prevents over-deceleration even when the wind speed decreases during inertial control.
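
    A toy sketch of the adaptive-gain idea described above: each DFIG's inertial-control gain is made proportional to the kinetic energy it can still release above a minimum rotor speed, so the contribution shrinks as the rotor decelerates. All constants and function names are illustrative assumptions.

    ```python
    # Hedged sketch: inertial-control gain proportional to releasable kinetic energy.
    def kinetic_energy(J, omega):
        """Rotor kinetic energy: 0.5 * J * omega^2."""
        return 0.5 * J * omega ** 2

    def adaptive_gain(J, omega, omega_min=0.7, k=1e-6):
        """Gain proportional to kinetic energy above the minimum speed;
        returns zero once the rotor reaches its lower speed limit."""
        releasable = max(kinetic_energy(J, omega) - kinetic_energy(J, omega_min), 0.0)
        return k * releasable

    # A turbine near rated speed contributes strongly, then backs off as it slows.
    J = 4.0e6  # illustrative inertia
    for omega in (1.0, 0.9, 0.8, 0.7):
        print(omega, round(adaptive_gain(J, omega), 2))
    ```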

  14. Physical-layer network coding for passive optical interconnect in datacenter networks.

    PubMed

    Lin, Rui; Cheng, Yuxin; Guan, Xun; Tang, Ming; Liu, Deming; Chan, Chun-Kit; Chen, Jiajia

    2017-07-24

    We introduce the physical-layer network coding (PLNC) technique in a passive optical interconnect (POI) architecture for datacenter networks. The implementation of PLNC in the POI at 2.5 Gb/s and 10 Gb/s has been experimentally validated, while the gains in terms of network-layer performance have been investigated by simulation. The results reveal that, while still achieving negligible packet drop, the wavelength usage can be reduced by half, and a significant improvement in packet delay, especially under high traffic load, can be achieved by employing PLNC over the POI.
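
    As a simplified digital analogue of the gain PLNC exploits, the sketch below shows a relay XOR-combining two packets and broadcasting the combination once, letting each end node recover the other's packet from its own copy; the real scheme operates on superimposed optical signals rather than bytes, so this only illustrates why the wavelength usage can halve.

    ```python
    # Hedged packet-level analogue of network coding at a relay.
    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    pkt_a = b"from rack A"
    pkt_b = b"from rack B"

    relay_broadcast = xor_bytes(pkt_a, pkt_b)            # one transmission instead of two

    recovered_at_a = xor_bytes(relay_broadcast, pkt_a)   # node A already knows pkt_a
    recovered_at_b = xor_bytes(relay_broadcast, pkt_b)
    print(recovered_at_a, recovered_at_b)                # b'from rack B' b'from rack A'
    ```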

  15. The adaption and use of research codes for performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebetrau, A.M.

    1987-05-01

    Models of real-world phenomena are developed for many reasons. The models are usually, if not always, implemented in the form of a computer code. The characteristics of a code are determined largely by its intended use. Realizations or implementations of detailed mathematical models of complex physical and/or chemical processes are often referred to as research or scientific (RS) codes. Research codes typically require large amounts of computing time. One example of an RS code is a finite-element code for solving complex systems of differential equations that describe mass transfer through some geologic medium. Considerable computing time is required because computations are done at many points in time and/or space. Codes used to evaluate the overall performance of real-world physical systems are called performance assessment (PA) codes. Performance assessment codes are used to conduct simulated experiments involving systems that cannot be directly observed. Thus, PA codes usually involve repeated simulations of system performance in situations that preclude the use of conventional experimental and statistical methods. 3 figs.

  16. Cross-indexing of binary SIFT codes for large-scale image search.

    PubMed

    Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi

    2014-05-01

    In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost for storage. Besides, it benefits computational efficiency, since similarity can be measured efficiently by Hamming distance. In this paper, we propose a novel flexible scale invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of the SIFT descriptor. It is unsupervised and the generated binary codes are demonstrated to be distance-preserving. Besides, we propose a new searching strategy to find target features based on the cross-indexing in the binary SIFT space and original SIFT space. We evaluate our approach on two publicly released data sets. Experiments on a large-scale partial-duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.
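
    A small sketch of the Hamming-matching step such binary-code retrieval relies on; the binarization rule used here (thresholding each dimension at the descriptor median) is an illustrative stand-in, not the FSB algorithm itself.

    ```python
    # Hedged sketch: binarize descriptors and compare them by Hamming distance.
    import numpy as np

    def binarize(descriptor):
        """Map a 128-d descriptor to a binary code by thresholding at its median."""
        d = np.asarray(descriptor, dtype=float)
        return (d > np.median(d)).astype(np.uint8)

    def hamming(a, b):
        """Number of differing bits between two binary codes."""
        return int(np.count_nonzero(a != b))

    rng = np.random.default_rng(0)
    d1 = rng.integers(0, 256, 128)
    d2 = d1 + rng.normal(0, 5, 128)          # a slightly perturbed copy
    d3 = rng.integers(0, 256, 128)           # an unrelated descriptor
    print(hamming(binarize(d1), binarize(d2)), hamming(binarize(d1), binarize(d3)))
    ```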

  17. A novel construction method of QC-LDPC codes based on the subgroup of the finite field multiplicative group for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Zhou, Guang-xiang; Gao, Wen-chun; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-01-01

    To meet the requirements of rapidly developing optical transmission systems, a novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes based on a subgroup of the finite field multiplicative group is proposed. This construction method effectively avoids the girth-4 phenomenon and has advantages such as simpler construction, easier implementation, lower encoding/decoding complexity, better girth properties and more flexible adjustment of the code length and code rate. The simulation results show that the error correction performance of the QC-LDPC(3780, 3540) code with a code rate of 93.7% constructed by the proposed method is excellent: its net coding gain is 0.3 dB, 0.55 dB, 1.4 dB and 1.98 dB higher, respectively, than those of the QC-LDPC(5334, 4962) code constructed by the method based on the inverse element characteristics of the finite field multiplicative group, the SCG-LDPC(3969, 3720) code constructed by the systematically constructed Gallager (SCG) random construction method, the LDPC(32640, 30592) code in ITU-T G.975.1 and the classic RS(255, 239) code widely used in optical transmission systems in ITU-T G.975, at a bit error rate (BER) of 10-7. Therefore, the constructed QC-LDPC(3780, 3540) code is well suited to optical transmission systems.

  18. Regulation of Cortical Dynamic Range by Background Synaptic Noise and Feedforward Inhibition

    PubMed Central

    Khubieh, Ayah; Ratté, Stéphanie; Lankarany, Milad; Prescott, Steven A.

    2016-01-01

    The cortex encodes a broad range of inputs. This breadth of operation requires sensitivity to weak inputs yet non-saturating responses to strong inputs. If individual pyramidal neurons were to have a narrow dynamic range, as previously claimed, then staggered all-or-none recruitment of those neurons would be necessary for the population to achieve a broad dynamic range. Contrary to this explanation, we show here through dynamic clamp experiments in vitro and computer simulations that pyramidal neurons have a broad dynamic range under the noisy conditions that exist in the intact brain due to background synaptic input. Feedforward inhibition capitalizes on those noise effects to control neuronal gain and thereby regulates the population dynamic range. Importantly, noise allows neurons to be recruited gradually and occludes the staggered recruitment previously attributed to heterogeneous excitation. Feedforward inhibition protects spike timing against the disruptive effects of noise, meaning noise can enable the gain control required for rate coding without compromising the precise spike timing required for temporal coding. PMID:26209846

  19. The effect of multiple internal representations on context-rich instruction

    NASA Astrophysics Data System (ADS)

    Lasry, Nathaniel; Aulls, Mark W.

    2007-11-01

    We discuss n-coding, a theoretical model of multiple internal mental representations. The n-coding construct is developed from a review of cognitive and imaging data that demonstrates the independence of information processed along different modalities such as verbal, visual, kinesthetic, logico-mathematic, and social modalities. A study testing the effectiveness of the n-coding construct in classrooms is presented. Four sections differing in the level of n-coding opportunities were compared. Besides a traditional-instruction section used as a control group, each of the remaining three sections were given context-rich problems, which differed by the level of n-coding opportunities designed into their laboratory environment. To measure the effectiveness of the construct, problem-solving skills were assessed as conceptual learning using the force concept inventory. We also developed several new measures that take students' confidence in concepts into account. Our results show that the n-coding construct is useful in designing context-rich environments and can be used to increase learning gains in problem solving, conceptual knowledge, and concept confidence. Specifically, when using props in designing context-rich problems, we find n-coding to be a useful construct in guiding which additional dimensions need to be attended to.

  20. Neighboring block based disparity vector derivation for multiview compatible 3D-AVC

    NASA Astrophysics Data System (ADS)

    Kang, Jewon; Chen, Ying; Zhang, Li; Zhao, Xin; Karczewicz, Marta

    2013-09-01

    3D-AVC, being developed under the Joint Collaborative Team on 3D Video Coding (JCT-3V), significantly outperforms Multiview Video Coding plus Depth (MVC+D), which simultaneously encodes texture views and depth views with the multiview extension of H.264/AVC (MVC). However, when 3D-AVC is configured to support multiview compatibility, in which texture views are decoded without depth information, the coding performance degrades significantly. The reason is that the advanced coding tools incorporated into 3D-AVC do not perform well due to the lack of a disparity vector converted from the depth information. In this paper, we propose a disparity vector derivation method utilizing only the information of texture views. Motion information of neighboring blocks is used to determine a disparity vector for a macroblock, so that the derived disparity vector is efficiently used by the coding tools in 3D-AVC. The proposed method significantly improves the coding gain of 3D-AVC in the multiview compatible mode, providing about 20% BD-rate saving in the coded views and 26% BD-rate saving in the synthesized views on average.

  1. The obligations of Member Countries of the OIE (World Organisation for Animal Health) in the Organisation of Veterinary Services.

    PubMed

    Vallat, B; Wilson, D W

    2003-08-01

    The authors discuss the mission, organisation and resources of Veterinary Services in the new international trading environment and examine how the standards for Veterinary Services, contained in the OIE (World Organisation for Animal Health) International Animal Health Code (the Code), help provide the necessary support for Veterinary Services to meet their rights and obligations under the provisions of the Sanitary and Phytosanitary (SPS) Agreement of the World Trade Organization (WTO). The authors describe the challenges of gaining access to international trading markets through surveillance and control of OIE listed diseases. Finally, the approach in the Code to the principles underpinning the quality of Veterinary Services and to guidelines for evaluating Veterinary Services, is discussed.

  2. Comparison of Stopping Power and Range Databases for Radiation Transport Study

    NASA Technical Reports Server (NTRS)

    Tai, H.; Bichsel, Hans; Wilson, John W.; Shinn, Judy L.; Cucinotta, Francis A.; Badavi, Francis F.

    1997-01-01

    The codes used to calculate stopping power and range for the space radiation shielding program at the Langley Research Center are based on the work of Ziegler but with modifications. As more experience is gained from experiments at heavy ion accelerators, prudence dictates a reevaluation of the current databases. Numerical values of stopping power and range calculated from four different codes currently in use are presented for selected ions and materials in the energy domain suitable for space radiation transport. This study has found that, for most collision systems and for intermediate particle energies, the codes generally agree to within 1 percent. However, greater discrepancies are seen for heavy systems, especially at low particle energies.

  3. Practical guide to bar coding for patient medication safety.

    PubMed

    Neuenschwander, Mark; Cohen, Michael R; Vaida, Allen J; Patchett, Jeffrey A; Kelly, Jamie; Trohimovich, Barbara

    2003-04-15

    Bar coding for the medication administration step of the drug-use process is discussed. FDA will propose a rule in 2003 that would require bar-code labels on all human drugs and biologicals. Even with an FDA mandate, manufacturer procrastination and possible shifts in product availability are likely to slow progress. Such delays should not preclude health systems from adopting bar-code-enabled point-of-care (BPOC) systems to achieve gains in patient safety. Bar-code technology is a replacement for traditional keyboard data entry. The elements of bar coding are content, which determines the meaning; data format, which refers to the embedded data; and symbology, which describes the "font" in which the machine-readable code is written. For a BPOC system to deliver an acceptable level of patient protection, the hospital must first establish reliable processes for a patient identification band, caregiver badge, and medication bar coding. Medications can have either drug-specific or patient-specific bar codes. Both varieties result in the desired code that supports the patient's five rights of drug administration. When medications are not available from the manufacturer in immediate-container bar-coded packaging, other means of applying the bar code must be devised, including the use of repackaging equipment, overwrapping, manual bar coding, and outsourcing. Virtually all medications should be bar coded, the bar code on the label should be easily readable, and appropriate policies, procedures, and checks should be in place. Bar coding has the potential not only to be cost-effective but also to produce a return on investment. By bar coding patient identification tags, caregiver badges, and immediate-container medications, health systems can substantially increase patient safety during medication administration.

  4. Adaptive bit plane quadtree-based block truncation coding for image compression

    NASA Astrophysics Data System (ADS)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

    Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low bit rate compression, at the cost of lower quality of decoded images, especially for images with rich texture. To solve this problem, a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission is proposed in this paper. First, the direction of the edge in each block is detected using the Sobel operator. For blocks of minimal size, an adaptive bit plane is used to optimize the BTC, depending on the MSE loss when the block is encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB in PSNR on average compared with other state-of-the-art BTC variants, making it well suited to real-time image compression applications.
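
    For context, a minimal sketch of the AMBTC step the proposed scheme builds on: each block is reduced to a bitmap plus low/high reconstruction levels. Block size and variable names are illustrative.

    ```python
    # Hedged sketch of absolute moment block truncation coding (AMBTC) on one block.
    import numpy as np

    def ambtc_encode(block):
        mean = block.mean()
        bitmap = block >= mean
        high = block[bitmap].mean() if bitmap.any() else mean
        low = block[~bitmap].mean() if (~bitmap).any() else mean
        return bitmap, low, high

    def ambtc_decode(bitmap, low, high):
        return np.where(bitmap, high, low)

    rng = np.random.default_rng(1)
    block = rng.integers(0, 256, (4, 4)).astype(float)
    bitmap, low, high = ambtc_encode(block)
    recon = ambtc_decode(bitmap, low, high)
    mse = ((block - recon) ** 2).mean()
    print("low/high levels:", round(low, 1), round(high, 1), "block MSE:", round(mse, 1))
    ```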

  5. Fusion PIC code performance analysis on the Cori KNL system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koskela, Tuomas S.; Deslippe, Jack; Friesen, Brian

    We study the attainable performance of Particle-In-Cell codes on the Cori KNL system by analyzing a miniature particle push application based on the fusion PIC code XGC1. We start from the most basic building blocks of a PIC code and build up the complexity to identify the kernels that cost the most in performance and focus optimization efforts there. Particle push kernels operate at high arithmetic intensity (AI) and are not likely to be memory bandwidth or even cache bandwidth bound on KNL. Therefore, we see only minor benefits from the high bandwidth memory available on KNL, and achieving good vectorization is shown to be the most beneficial optimization path, with a theoretical yield of up to an 8x speedup on KNL. In practice we are able to obtain up to a 4x gain from vectorization due to limitations set by the data layout and memory latency.

  6. Simulation studies of hydrodynamic aspects of magneto-inertial fusion and high order adaptive algorithms for Maxwell equations

    NASA Astrophysics Data System (ADS)

    Wu, Lingling

    Three-dimensional simulations of the formation and implosion of plasma liners for the Plasma Jet Induced Magneto Inertial Fusion (PJMIF) have been performed using a multiscale simulation technique based on the FronTier code. In the PJMIF concept, a plasma liner, formed by merging of a large number of radial, highly supersonic plasma jets, implodes on the target in the form of two compact plasma toroids, and compresses it to conditions of the nuclear fusion ignition. The propagation of a single jet with Mach number 60 from the plasma gun to the merging point was studied using the FronTier code. The simulation result was used as input to the 3D jet merger problem. The merger of 144, 125, and 625 jets and the formation and heating of plasma liner by compression waves have been studied and compared with recent theoretical predictions. The main result of the study is the prediction of the average Mach number reduction and the description of the liner structure and properties. We have also compared the effect of different merging radii. Spherically symmetric simulations of the implosion of plasma liners and compression of plasma targets have also been performed using the method of front tracking. The cases of single deuterium and xenon liners and double layer deuterium - xenon liners compressing various deuterium-tritium targets have been investigated, optimized for maximum fusion energy gains, and compared with theoretical predictions and scaling laws of [P. Parks, On the efficacy of imploding plasma liners for magnetized fusion target compression, Phys. Plasmas 15, 062506 (2008)]. In agreement with the theory, the fusion gain was significantly below unity for deuterium - tritium targets compressed by Mach 60 deuterium liners. In the most optimal setup for a given chamber size that contained a target with the initial radius of 20 cm compressed by 10 cm thick, Mach 60 xenon liner, the target ignition and fusion energy gain of 10 was achieved. Simulations also showed that composite deuterium - xenon liners reduce the energy gain due to lower target compression rates. The effect of heating of targets by alpha particles on the fusion energy gain has also been investigated. The study of the dependence of the ram pressure amplification on radial compressibility showed a good agreement with the theory. The study concludes that a liner with higher Mach number and lower adiabatic index gamma (the ratio of specific heats) will generate higher ram pressure amplification and higher fusion energy gain. We implemented a second order embedded boundary method for the Maxwell equations in geometrically complex domains. The numerical scheme is second order in both space and time. Compared to the first order stair-step approximation of complex geometries within the FDTD method, this method can avoid spurious solutions introduced by the stair step approximation. Unlike the finite element method and the FE-FD hybrid method, no triangulation is needed for this scheme. This method preserves the simplicity of the embedded boundary method and it is easy to implement. We will also propose a conservative (symplectic) fourth order scheme for uniform geometry boundary.

  7. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    USGS Publications Warehouse

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.

  8. Updated clusters of orthologous genes for Archaea: a complex ancestor of the Archaea and the byways of horizontal gene transfer.

    PubMed

    Wolf, Yuri I; Makarova, Kira S; Yutin, Natalya; Koonin, Eugene V

    2012-12-14

    Collections of Clusters of Orthologous Genes (COGs) provide indispensable tools for comparative genomic analysis, evolutionary reconstruction and functional annotation of new genomes. Initially, COGs were made for all complete genomes of cellular life forms that were available at the time. However, with the accumulation of thousands of complete genomes, construction of a comprehensive COG set has become extremely computationally demanding and prone to error propagation, necessitating the switch to taxon-specific COG collections. Previously, we reported the collection of COGs for 41 genomes of Archaea (arCOGs). Here we present a major update of the arCOGs and describe evolutionary reconstructions to reveal general trends in the evolution of Archaea. The updated version of the arCOG database incorporates 91% of the pangenome of 120 archaea (251,032 protein-coding genes altogether) into 10,335 arCOGs. Using this new set of arCOGs, we performed maximum likelihood reconstruction of the genome content of archaeal ancestral forms and gene gain and loss events in archaeal evolution. This reconstruction shows that the last Common Ancestor of the extant Archaea was an organism of greater complexity than most of the extant archaea, probably with over 2,500 protein-coding genes. The subsequent evolution of almost all archaeal lineages was apparently dominated by gene loss resulting in genome streamlining. Overall, in the evolution of Archaea as well as a representative set of bacteria that was similarly analyzed for comparison, gene losses are estimated to outnumber gene gains at least 4 to 1. Analysis of specific patterns of gene gain in Archaea shows that, although some groups, in particular Halobacteria, acquire substantially more genes than others, on the whole, gene exchange between major groups of Archaea appears to be largely random, with no major 'highways' of horizontal gene transfer. The updated collection of arCOGs is expected to become a key resource for comparative genomics, evolutionary reconstruction and functional annotation of new archaeal genomes. Given that, in spite of the major increase in the number of genomes, the conserved core of archaeal genes appears to be stabilizing, the major evolutionary trends revealed here have a chance to stand the test of time. This article was reviewed by (for complete reviews see the Reviewers' Reports section): Dr. PLG, Prof. PF, Dr. PL (nominated by Prof. JPG).

  9. PARAVT: Parallel Voronoi tessellation code

    NASA Astrophysics Data System (ADS)

    González, R. E.

    2016-10-01

    In this study, we present a new open source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is aimed at astrophysical applications, where VT densities and neighbors are widely used. Several serial Voronoi tessellation codes exist; however, no open source, parallel implementation is available to handle the large number of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI, and the VT is computed using the Qhull library. Domain decomposition takes into account consistent boundary computation between tasks, and includes periodic conditions. In addition, the code computes the neighbor list, Voronoi density, Voronoi cell volume, and density gradient for each particle, as well as densities on a regular grid. The code implementation and user guide are publicly available at https://github.com/regonzar/paravt.
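
    A serial sketch of the building block PARAVT parallelizes, assuming SciPy's Qhull-backed Voronoi routines: tessellate particle positions and estimate a per-particle density as the inverse cell volume. Boundary handling, periodic conditions and the MPI domain decomposition are omitted here.

    ```python
    # Hedged serial sketch of Voronoi densities; not PARAVT's implementation.
    import numpy as np
    from scipy.spatial import Voronoi, ConvexHull

    def voronoi_densities(points):
        vor = Voronoi(points)
        densities = np.full(len(points), np.nan)
        for i, region_index in enumerate(vor.point_region):
            region = vor.regions[region_index]
            if -1 in region or len(region) == 0:
                continue                   # open cell on the boundary: skip
            volume = ConvexHull(vor.vertices[region]).volume
            densities[i] = 1.0 / volume
        return densities

    rng = np.random.default_rng(42)
    pts = rng.random((500, 3))
    rho = voronoi_densities(pts)
    print("finite-cell fraction:", np.mean(np.isfinite(rho)),
          "median density:", np.nanmedian(rho))
    ```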

  10. Genome fluctuations in cyanobacteria reflect evolutionary, developmental and adaptive traits.

    PubMed

    Larsson, John; Nylander, Johan Aa; Bergman, Birgitta

    2011-06-30

    Cyanobacteria belong to an ancient group of photosynthetic prokaryotes with pronounced variations in their cellular differentiation strategies, physiological capacities and choice of habitat. Sequencing efforts have shown that genomes within this phylum are equally diverse in terms of size and protein-coding capacity. To increase our understanding of genomic changes in the lineage, the genomes of 58 contemporary cyanobacteria were analysed for shared and unique orthologs. A total of 404 protein families, present in all cyanobacterial genomes, were identified. Two of these are unique to the phylum, corresponding to an AbrB family transcriptional regulator and a gene that escapes functional annotation although its genomic neighbourhood is conserved among the organisms examined. The evolution of cyanobacterial genome sizes involves a mix of gains and losses in the clade encompassing complex cyanobacteria, while a single event of reduction is evident in a clade dominated by unicellular cyanobacteria. Genome sizes and gene family copy numbers evolve at a higher rate in the former clade, and multi-copy genes were predominant in large genomes. Orthologs unique to cyanobacteria exhibiting specific characteristics, such as filament formation, heterocyst differentiation, diazotrophy and symbiotic competence, were also identified. An ancestral character reconstruction suggests that the most recent common ancestor of cyanobacteria had a genome size of approx. 4.5 Mbp and 1678 to 3291 protein-coding genes, 4%-6% of which are unique to cyanobacteria today. The different rates of genome-size evolution and multi-copy gene abundance suggest two routes of genome development in the history of cyanobacteria. The expansion strategy is driven by gene-family enlargement and generates a broad adaptive potential, while the genome streamlining strategy imposes adaptations to highly specific niches, also reflected in their different functional capacities. A few genomes display extreme proliferation of non-coding nucleotides which is likely to be the result of initial expansion of genomes/gene copy number to gain adaptive potential, followed by a shift to a life-style in a highly specific niche (e.g. symbiosis). This transition results in redundancy of genes and gene families, leading to an increase in junk DNA and eventually to gene loss. A few orthologs can be correlated with specific phenotypes in cyanobacteria, such as filament formation and symbiotic competence; these constitute exciting exploratory targets.

  11. Genome fluctuations in cyanobacteria reflect evolutionary, developmental and adaptive traits

    PubMed Central

    2011-01-01

    Background: Cyanobacteria belong to an ancient group of photosynthetic prokaryotes with pronounced variations in their cellular differentiation strategies, physiological capacities and choice of habitat. Sequencing efforts have shown that genomes within this phylum are equally diverse in terms of size and protein-coding capacity. To increase our understanding of genomic changes in the lineage, the genomes of 58 contemporary cyanobacteria were analysed for shared and unique orthologs. Results: A total of 404 protein families, present in all cyanobacterial genomes, were identified. Two of these are unique to the phylum, corresponding to an AbrB family transcriptional regulator and a gene that escapes functional annotation although its genomic neighbourhood is conserved among the organisms examined. The evolution of cyanobacterial genome sizes involves a mix of gains and losses in the clade encompassing complex cyanobacteria, while a single event of reduction is evident in a clade dominated by unicellular cyanobacteria. Genome sizes and gene family copy numbers evolve at a higher rate in the former clade, and multi-copy genes were predominant in large genomes. Orthologs unique to cyanobacteria exhibiting specific characteristics, such as filament formation, heterocyst differentiation, diazotrophy and symbiotic competence, were also identified. An ancestral character reconstruction suggests that the most recent common ancestor of cyanobacteria had a genome size of approx. 4.5 Mbp and 1678 to 3291 protein-coding genes, 4%-6% of which are unique to cyanobacteria today. Conclusions: The different rates of genome-size evolution and multi-copy gene abundance suggest two routes of genome development in the history of cyanobacteria. The expansion strategy is driven by gene-family enlargement and generates a broad adaptive potential, while the genome streamlining strategy imposes adaptations to highly specific niches, also reflected in their different functional capacities. A few genomes display extreme proliferation of non-coding nucleotides which is likely to be the result of initial expansion of genomes/gene copy number to gain adaptive potential, followed by a shift to a life-style in a highly specific niche (e.g. symbiosis). This transition results in redundancy of genes and gene families, leading to an increase in junk DNA and eventually to gene loss. A few orthologs can be correlated with specific phenotypes in cyanobacteria, such as filament formation and symbiotic competence; these constitute exciting exploratory targets. PMID:21718514

  12. Protograph LDPC Codes with Node Degrees at Least 3

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher

    2006-01-01

    In this paper we present protograph codes with a small number of degree-3 nodes and one high degree node. The iterative decoding thresholds of the proposed rate-1/2 codes are lower, by about 0.2 dB, than those of the best known irregular LDPC codes with degree at least 3. The main motivation is to gain linear minimum distance and thus achieve a low error floor, and to construct rate-compatible protograph-based LDPC codes of fixed block length that simultaneously achieve a low iterative decoding threshold and linear minimum distance. We start with a rate 1/2 protograph LDPC code with degree-3 nodes and one high degree node. Higher rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes. This is equivalent to constraint combining in the protograph. The condition where all constraints are combined corresponds to the highest rate code. This constraint must be connected to nodes of degree at least three for the graph to have linear minimum distance. Thus, having node degrees of at least 3 at rate 1/2 guarantees that the linear minimum distance property is preserved at higher rates. Through examples we show that an iterative decoding threshold as low as 0.544 dB can be achieved for small protographs with node degrees of at least three. A family of low- to high-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
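
    A toy sketch of the copy-and-permute (lifting) step used to expand a protograph into a full parity-check matrix; the small base matrix, lift size and use of circulant permutations are illustrative assumptions, not the AR4JA construction from the paper.

    ```python
    # Hedged sketch of protograph lifting (copy-and-permute) with circulant shifts.
    import numpy as np

    def lift_protograph(base, Z, rng):
        """Replace each base-matrix entry b with a sum of b circulant permutation
        matrices with distinct shifts, so b parallel edges stay b edges per node."""
        I = np.eye(Z, dtype=np.uint8)
        rows = []
        for base_row in base:
            blocks = []
            for b in base_row:
                shifts = rng.choice(Z, size=b, replace=False) if b else []
                block = np.zeros((Z, Z), dtype=np.uint8)
                for s in shifts:
                    block |= np.roll(I, s, axis=1)
                blocks.append(block)
            rows.append(np.hstack(blocks))
        return np.vstack(rows)

    # Illustrative base matrix: variable-node degrees of at least 3, one high-degree node.
    base = np.array([[1, 1, 1, 3],
                     [1, 1, 1, 3],
                     [1, 1, 1, 2]])
    rng = np.random.default_rng(7)
    H = lift_protograph(base, Z=16, rng=rng)
    print(H.shape, "column weights:", np.unique(H.sum(axis=0)))
    ```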

  13. Integrated Graphics Operations and Analysis Lab Development of Advanced Computer Graphics Algorithms

    NASA Technical Reports Server (NTRS)

    Wheaton, Ira M.

    2011-01-01

    The focus of this project is to aid the IGOAL in researching and implementing algorithms for advanced computer graphics. First, this project focused on porting the current International Space Station (ISS) Xbox experience to the web. Previously, the ISS interior fly-around education and outreach experience only ran on an Xbox 360. One of the desires was to take this experience and make it into something that can be put on NASA's educational site for anyone to be able to access. The current code works in the Unity game engine, which does have cross-platform capability but is not 100% compatible. The tasks for an intern to complete this portion consisted of gaining familiarity with Unity and the current ISS Xbox code, porting the Xbox code to the web as is, and modifying the code to work well as a web application. In addition, a procedurally generated cloud algorithm will be developed. Currently, the clouds used in AGEA animations and the Xbox experiences are a texture map. The desire is to create a procedurally generated cloud algorithm to provide dynamically generated clouds for both AGEA animations and the Xbox experiences. This task consists of gaining familiarity with AGEA and the plug-in interface, developing the algorithm, creating an AGEA plug-in to implement the algorithm inside AGEA, and creating a Unity script to implement the algorithm for the Xbox. This portion of the project was unable to be completed in the time frame of the internship; however, the IGOAL will continue to work on it in the future.

  14. WE-E-18A-01: Large Area Avalanche Amorphous Selenium Sensors for Low Dose X-Ray Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheuermann, J; Goldan, A; Zhao, W

    2014-06-15

    Purpose: A large area indirect flat panel imager (FPI) with avalanche gain is being developed to achieve x-ray quantum noise limited low dose imaging. It uses a thin optical sensing layer of amorphous selenium (a-Se), known as High-Gain Avalanche Rushing Photoconductor (HARP), to detect optical photons generated from a high resolution x-ray scintillator. We will report initial results in the fabrication of a solid-state HARP structure suitable for a large area FPI. Our objective is to establish the blocking layer structures and defect suppression mechanisms that provide stable and uniform avalanche gain. Methods: Samples were fabricated as follows: (1) ITO signal electrode. (2) Electron blocking layer. (3) A 15 micron layer of intrinsic a-Se. (4) Transparent hole blocking layer. (5) Multiple semitransparent bias electrodes to investigate avalanche gain uniformity over a large area. The sample was exposed to 50ps optical excitation pulses through the bias electrode. Transient time of flight (TOF) and integrated charge was measured. A charge transport simulation was developed to investigate the effects of varying blocking layer charge carrier mobility on defect suppression, avalanche gain and temporal performance. Results: Avalanche gain of ∼200 was achieved experimentally with our multi-layer HARP samples. Simulations using the experimental sensor structure produced the same magnitude of gain as a function of electric field. The simulation predicted that the high dark current at a point defect can be reduced by two orders of magnitude by blocking layer optimization which can prevent irreversible damage while normal operation remained unaffected. Conclusion: We presented the first solid state HARP structure directly scalable to a large area FPI. We have shown reproducible and uniform avalanche gain of 200. By reducing mobility of the blocking layers we can suppress defects and maintain stable avalanche. Future work will optimize the blocking layers to prevent lag and ghosting.

  15. Ultra-high gain diffusion-driven organic transistor

    PubMed Central

    Torricelli, Fabrizio; Colalongo, Luigi; Raiteri, Daniele; Kovács-Vajna, Zsolt Miklós; Cantatore, Eugenio

    2016-01-01

    Emerging large-area technologies based on organic transistors are enabling the fabrication of low-cost flexible circuits, smart sensors and biomedical devices. High-gain transistors are essential for the development of large-scale circuit integration, high-sensitivity sensors and signal amplification in sensing systems. Unfortunately, organic field-effect transistors show limited gain, usually of the order of tens, because of the large contact resistance and channel-length modulation. Here we show a new organic field-effect transistor architecture with a gain larger than 700. This is the highest gain ever reported for organic field-effect transistors. In the proposed organic field-effect transistor, the charge injection and extraction at the metal–semiconductor contacts are driven by the charge diffusion. The ideal conditions of ohmic contacts with negligible contact resistance and flat current saturation are demonstrated. The approach is general and can be extended to any thin-film technology opening unprecedented opportunities for the development of high-performance flexible electronics. PMID:26829567

  16. HUMAN DECISIONS AND MACHINE PREDICTIONS.

    PubMed

    Kleinberg, Jon; Lakkaraju, Himabindu; Leskovec, Jure; Ludwig, Jens; Mullainathan, Sendhil

    2018-02-01

    Can machine learning improve human decision making? Bail decisions provide a good test case. Millions of times each year, judges make jail-or-release decisions that hinge on a prediction of what a defendant would do if released. The concreteness of the prediction task combined with the volume of data available makes this a promising machine-learning application. Yet comparing the algorithm to judges proves complicated. First, the available data are generated by prior judge decisions. We only observe crime outcomes for released defendants, not for those judges detained. This makes it hard to evaluate counterfactual decision rules based on algorithmic predictions. Second, judges may have a broader set of preferences than the variable the algorithm predicts; for instance, judges may care specifically about violent crimes or about racial inequities. We deal with these problems using different econometric strategies, such as quasi-random assignment of cases to judges. Even accounting for these concerns, our results suggest potentially large welfare gains: one policy simulation shows crime reductions up to 24.7% with no change in jailing rates, or jailing rate reductions up to 41.9% with no increase in crime rates. Moreover, all categories of crime, including violent crimes, show reductions; and these gains can be achieved while simultaneously reducing racial disparities. These results suggest that while machine learning can be valuable, realizing this value requires integrating these tools into an economic framework: being clear about the link between predictions and decisions; specifying the scope of payoff functions; and constructing unbiased decision counterfactuals. JEL Codes: C10 (Econometric and statistical methods and methodology), C55 (Large datasets: Modeling and analysis), K40 (Legal procedure, the legal system, and illegal behavior).

  17. HUMAN DECISIONS AND MACHINE PREDICTIONS*

    PubMed Central

    Kleinberg, Jon; Lakkaraju, Himabindu; Leskovec, Jure; Ludwig, Jens; Mullainathan, Sendhil

    2018-01-01

    Can machine learning improve human decision making? Bail decisions provide a good test case. Millions of times each year, judges make jail-or-release decisions that hinge on a prediction of what a defendant would do if released. The concreteness of the prediction task combined with the volume of data available makes this a promising machine-learning application. Yet comparing the algorithm to judges proves complicated. First, the available data are generated by prior judge decisions. We only observe crime outcomes for released defendants, not for those judges detained. This makes it hard to evaluate counterfactual decision rules based on algorithmic predictions. Second, judges may have a broader set of preferences than the variable the algorithm predicts; for instance, judges may care specifically about violent crimes or about racial inequities. We deal with these problems using different econometric strategies, such as quasi-random assignment of cases to judges. Even accounting for these concerns, our results suggest potentially large welfare gains: one policy simulation shows crime reductions up to 24.7% with no change in jailing rates, or jailing rate reductions up to 41.9% with no increase in crime rates. Moreover, all categories of crime, including violent crimes, show reductions; and these gains can be achieved while simultaneously reducing racial disparities. These results suggest that while machine learning can be valuable, realizing this value requires integrating these tools into an economic framework: being clear about the link between predictions and decisions; specifying the scope of payoff functions; and constructing unbiased decision counterfactuals. JEL Codes: C10 (Econometric and statistical methods and methodology), C55 (Large datasets: Modeling and analysis), K40 (Legal procedure, the legal system, and illegal behavior) PMID:29755141

  18. Factors affecting commercial application of embryo technologies in dairy cattle in Europe--a modelling approach.

    PubMed

    van Arendonk, Johan A M; Bijma, Piter

    2003-01-15

    Reproductive techniques have a major impact on the structure of breeding programmes, the rate of genetic gain and dissemination of genetic gain in populations. This manuscript reviews the impact of reproductive technologies on the underlying components of genetic gain and inbreeding with special reference to the role of female reproductive technology. Evaluation of alternative breeding schemes should be based on genetic gain while constraining inbreeding. Optimum breeding schemes can be characterised by: decreased importance of sib information; increased accuracy at the expense of intensity; and a factorial mating strategy. If large-scale embryo cloning becomes feasible, this will have a small impact on the rate of genetic gain but will have a large impact on the structure of breeding programmes.

  19. Perceptions of low-income African-American mothers about excessive gestational weight gain

    PubMed Central

    Herring, Sharon J.; Henry, Tasmia Q.; Klotz, Alicia; Foster, Gary D.; Whitaker, Robert C.

    2013-01-01

    Objective A rising number of low-income African-American mothers gain more weight in pregnancy than is recommended, placing them at risk for poor maternal and fetal health outcomes. Little is known about the perceptions of mothers in this population that may influence excessive gestational weight gain. Methods In 2010–2011, we conducted 4 focus groups with 31 low-income, pregnant African-Americans in Philadelphia. Two readers independently coded the focus group transcripts to identify recurrent themes. Results We identified 9 themes around perceptions that encouraged or discouraged high gestational weight gain. Mothers attributed high weight gain to eating more in pregnancy, which was the result of being hungrier and the belief that consuming more calories while pregnant was essential for babies’ health. Family members, especially participants’ own mothers, strongly reinforced the need to “eat for two” to make a healthy baby. Mothers and their families recognized the link between poor fetal outcomes and low weight gains but not higher gains, and thus, most had a greater pre-occupation with too little food intake and weight gain rather than too much. Having physical symptoms from overeating and weight retention after previous pregnancies were factors that discouraged higher gains. Conclusions Low-income African American mothers had more perceptions encouraging high gestational weight gain than discouraging it. Interventions to prevent excessive weight gain need to be sensitive to these perceptions. Messages that link guideline-recommended weight gain to optimal infant outcomes and mothers’ physical symptoms may be most effective for weight control. PMID:22160656

  20. Summary and recent results from the NASA advanced High Speed Propeller Research Program

    NASA Technical Reports Server (NTRS)

    Mitchell, G. A.; Mikkelson, D. C.

    1982-01-01

    Advanced high-speed propellers offer large performance improvements for aircraft that cruise in the Mach 0.7 to 0.8 speed regime. The current status of the NASA research program on high-speed propeller aerodynamics, acoustics, and aeroelastics is described. Recent wind tunnel results for five 8- to 10-blade advanced models are compared with analytical predictions. Test results show that blade sweep was important in achieving net efficiencies near 80 percent at Mach 0.8 and reducing near-field cruise noise by dB. Lifting line and lifting surface aerodynamic analysis codes are under development and some initial lifting line results are compared with propeller force and probe data. Some initial laser velocimeter measurements of the flow field velocities of an 8-bladed 45 deg swept propeller are shown. Experimental aeroelastic results indicate that cascade effects and blade sweep strongly affect propeller aeroelastic characteristics. Comparisons of propeller near-field noise data with linear acoustic theory indicate that the theory adequately predicts near-field noise for subsonic tip speeds but overpredicts the noise for supersonic tip speeds. Potential large gains in propeller efficiency of 7 to 11 percent at Mach 0.8 may be possible with advanced counter-rotation propellers.

  1. The Cell Collective: Toward an open and collaborative approach to systems biology

    PubMed Central

    2012-01-01

    Background Despite decades of new discoveries in biomedical research, the overwhelming complexity of cells has been a significant barrier to a fundamental understanding of how cells work as a whole. As such, the holistic study of biochemical pathways requires computer modeling. Due to the complexity of cells, it is not feasible for one person or group to model the cell in its entirety. Results The Cell Collective is a platform that allows the world-wide scientific community to create these models collectively. Its interface enables users to build and use models without specifying any mathematical equations or computer code - addressing one of the major hurdles with computational research. In addition, this platform allows scientists to simulate and analyze the models in real-time on the web, including the ability to simulate loss/gain of function and test what-if scenarios in real time. Conclusions The Cell Collective is a web-based platform that enables laboratory scientists from across the globe to collaboratively build large-scale models of various biological processes, and simulate/analyze them in real time. In this manuscript, we show examples of its application to a large-scale model of signal transduction. PMID:22871178

  2. Peer Interaction in Three Collaborative Learning Environments

    ERIC Educational Resources Information Center

    Staarman, Judith Kleine; Krol, Karen; Meijden, Henny van der

    2005-01-01

    The aim of the study was to gain insight into the occurrence of different types of peer interaction and particularly the types of interaction beneficial for learning in different collaborative learning environments. Based on theoretical notions related to collaborative learning and peer interaction, a coding scheme was developed to analyze the…

  3. Professional Characteristics Communicated by Formal versus Casual Workplace Attire

    ERIC Educational Resources Information Center

    Cardon, Peter W.; Okoro, Ephraim A.

    2009-01-01

    Employees are frequently advised to dress for success to build their careers. From the corporate perspective, employees who are well dressed are believed to form better impressions with colleagues, clients, and customers. Many companies create dress codes in order to gain the benefits of a professionally appearing workforce. Developing effective…

  4. 75 FR 49026 - Proposed Collection; Comment Request for Regulation Project

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-12

    ... of Certain Natural Resource Recapture Property (Sections 1.1254-1(c)(3) and 1.1254-5(d)(2)). DATES... Certain Natural Resource Recapture Property. OMB Number: 1545-1352. Regulation Project Number: PS-276-76... of natural resource recapture property in accordance with Internal Revenue Code section 1254. Gain is...

  5. Understanding and Evaluating English Learners' Oral Reading with Miscue Analysis

    ERIC Educational Resources Information Center

    Latham Keh, Melissa

    2017-01-01

    Miscue analysis provides a unique opportunity to explore English learners' (ELs') oral reading from an asset-based perspective. This article focuses on insights about eight adolescent ELs' oral reading patterns that were gained through miscue analysis. The participants' miscues were coded with the Reading Miscue Inventory, and participants were…

  6. Back to the Source, or It's A You-Bet-Your-Business Game!

    ERIC Educational Resources Information Center

    Galvin, Wayne W.

    1987-01-01

    Many administrators are signing contracts for software products that leave their institutions completely unprotected in the event of a default by the vendor. It is proper for a customer to include contractual provisions whereby they may gain legal access to the program source code. (MLW)

  7. A universal preconditioner for simulating condensed phase materials.

    PubMed

    Packwood, David; Kermode, James; Mones, Letif; Bernstein, Noam; Woolley, John; Gould, Nicholas; Ortner, Christoph; Csányi, Gábor

    2016-04-28

    We introduce a universal sparse preconditioner that accelerates geometry optimisation and saddle point search tasks that are common in the atomic scale simulation of materials. Our preconditioner is based on the neighbourhood structure and we demonstrate the gain in computational efficiency in a wide range of materials that include metals, insulators, and molecular solids. The simple structure of the preconditioner means that the gains can be realised in practice not only when using expensive electronic structure models but also for fast empirical potentials. Even for relatively small systems of a few hundred atoms, we observe speedups of a factor of two or more, and the gain grows with system size. An open source Python implementation within the Atomic Simulation Environment is available, offering interfaces to a wide range of atomistic codes.

  8. A universal preconditioner for simulating condensed phase materials

    NASA Astrophysics Data System (ADS)

    Packwood, David; Kermode, James; Mones, Letif; Bernstein, Noam; Woolley, John; Gould, Nicholas; Ortner, Christoph; Csányi, Gábor

    2016-04-01

    We introduce a universal sparse preconditioner that accelerates geometry optimisation and saddle point search tasks that are common in the atomic scale simulation of materials. Our preconditioner is based on the neighbourhood structure and we demonstrate the gain in computational efficiency in a wide range of materials that include metals, insulators, and molecular solids. The simple structure of the preconditioner means that the gains can be realised in practice not only when using expensive electronic structure models but also for fast empirical potentials. Even for relatively small systems of a few hundred atoms, we observe speedups of a factor of two or more, and the gain grows with system size. An open source Python implementation within the Atomic Simulation Environment is available, offering interfaces to a wide range of atomistic codes.
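
    As a usage illustration of the open-source Python implementation mentioned above, the sketch below runs a geometry optimisation with and without a preconditioner in the Atomic Simulation Environment (ASE). It assumes ASE's ase.optimize.precon module is available and uses the built-in EMT potential, a rattled copper supercell and an exponential preconditioner with illustrative settings; it is a minimal sketch, not the authors' benchmark setup.

      # Minimal sketch: compare optimisation step counts with and without a
      # preconditioner in ASE. The EMT potential, the rattled Cu supercell and the
      # Exp(A=3.0) settings are illustrative assumptions, not recommendations.
      from ase.build import bulk
      from ase.calculators.emt import EMT
      from ase.optimize import LBFGS
      from ase.optimize.precon import Exp, PreconLBFGS

      def relax(use_precon: bool) -> int:
          atoms = bulk("Cu", cubic=True) * (3, 3, 3)   # 108-atom test system
          atoms.rattle(stdev=0.05, seed=0)             # perturb so there is work to do
          atoms.calc = EMT()
          if use_precon:
              opt = PreconLBFGS(atoms, precon=Exp(A=3.0))
          else:
              opt = LBFGS(atoms)
          opt.run(fmax=0.01)
          return opt.get_number_of_steps()

      if __name__ == "__main__":
          print("steps without preconditioner:", relax(False))
          print("steps with preconditioner:   ", relax(True))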

  9. New Class of Quantum Error-Correcting Codes for a Bosonic Mode

    NASA Astrophysics Data System (ADS)

    Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.

    2016-07-01

    We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
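
    To make the construction above concrete, the following sketch builds the two logical code words of a binomial code as Fock-space amplitude vectors, assuming the standard form in which Fock states spaced S+1 apart carry square-root-binomial amplitudes; the parameters N, S and the truncation dimension are illustrative choices, not values taken from the paper.

      # Minimal numerical sketch of binomial bosonic code words, assuming the form
      #   |W_up>   ~ sum over even p of sqrt(C(N+1, p)) |p*(S+1)>
      #   |W_down> ~ sum over odd  p of sqrt(C(N+1, p)) |p*(S+1)>
      # i.e. Fock states spaced S+1 apart weighted by square-root binomial coefficients.
      from math import comb

      import numpy as np

      def binomial_code_words(N: int, S: int, dim: int):
          """Return the two logical code words as normalised Fock-amplitude vectors."""
          up, down = np.zeros(dim), np.zeros(dim)
          for p in range(N + 2):                      # p = 0 .. N+1
              target = up if p % 2 == 0 else down
              target[p * (S + 1)] = np.sqrt(comb(N + 1, p))
          return up / np.linalg.norm(up), down / np.linalg.norm(down)

      if __name__ == "__main__":
          w_up, w_down = binomial_code_words(N=2, S=1, dim=12)
          n = np.arange(12)
          print("overlap of code words:", float(w_up @ w_down))        # 0: exactly orthogonal
          print("mean photon numbers:  ", n @ w_up**2, n @ w_down**2)  # equal for both words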

  10. Evaluation of new techniques for the calculation of internal recirculating flows

    NASA Technical Reports Server (NTRS)

    Van Doormaal, J. P.; Turan, A.; Raithby, G. D.

    1987-01-01

    The performance of discrete methods for the prediction of fluid flows can be enhanced by improving the convergence rate of solvers and by increasing the accuracy of the discrete representation of the equations of motion. This paper evaluates the gains in solver performance that are available when various acceleration methods are applied. Various discretizations are also examined and two are recommended because of their accuracy and robustness. Insertion of the improved discretization and solver accelerator into a TEACH code, which has been widely applied to combustor flows, illustrates the substantial gains that can be achieved.

  11. An Infrastructure for UML-Based Code Generation Tools

    NASA Astrophysics Data System (ADS)

    Wehrmeister, Marco A.; Freitas, Edison P.; Pereira, Carlos E.

    The use of Model-Driven Engineering (MDE) techniques in the domain of distributed embedded real-time systems is gaining importance in order to cope with the increasing design complexity of such systems. This paper discusses an infrastructure created to build GenERTiCA, a flexible tool that supports an MDE approach, which uses aspect-oriented concepts to handle non-functional requirements from the embedded and real-time systems domain. GenERTiCA generates source code from UML models, and also performs weaving of aspects, which have been specified within the UML model. Additionally, this paper discusses the Distributed Embedded Real-Time Compact Specification (DERCS), a PIM created to support UML-based code generation tools. Some heuristics to transform UML models into DERCS, which have been implemented in GenERTiCA, are also discussed.

  12. Advanced Chemical Propulsion Study

    NASA Technical Reports Server (NTRS)

    Woodcock, Gordon; Byers, Dave; Alexander, Leslie A.; Krebsbach, Al

    2004-01-01

    A study was performed of advanced chemical propulsion technology application to space science (Code S) missions. The purpose was to begin the process of selecting chemical propulsion technology advancement activities that would provide the greatest benefits to Code S missions. Several missions were selected from Code S planning data, and a range of advanced chemical propulsion options was analyzed to assess capabilities and benefits for these missions. Selected beneficial applications were found for higher-performing bipropellants, gelled propellants, and cryogenic propellants. Technology advancement recommendations included cryocoolers and small turbopump engines for cryogenic propellants; space storable propellants such as LOX-hydrazine; and advanced monopropellants. It was noted that fluorine-bearing oxidizers offer performance gains over more benign oxidizers. Potential benefits were observed for gelled propellants that could be allowed to freeze, then thawed for use.

  13. Evaluation of coded aperture radiation detectors using a Bayesian approach

    NASA Astrophysics Data System (ADS)

    Miller, Kyle; Huggins, Peter; Labov, Simon; Nelson, Karl; Dubrawski, Artur

    2016-12-01

    We investigate tradeoffs arising from the use of coded aperture gamma-ray spectrometry to detect and localize sources of harmful radiation in the presence of noisy background. Using an example application scenario of area monitoring and search, we empirically evaluate weakly supervised spectral, spatial, and hybrid spatio-spectral algorithms for scoring individual observations, and two alternative methods of fusing evidence obtained from multiple observations. Results of our experiments confirm the intuition that directional information provided by spectrometers masked with coded aperture enables gains in source localization accuracy, but at the expense of reduced probability of detection. Losses in detection performance can, however, be reclaimed to a substantial extent by using our new spatial and spatio-spectral scoring methods, which rely on realistic assumptions regarding masking and its impact on measured photon distributions.

  14. A Large Scale Code Resolution Service Network in the Internet of Things

    PubMed Central

    Yu, Haining; Zhang, Hongli; Fang, Binxing; Yu, Xiangzhan

    2012-01-01

    In the Internet of Things a code resolution service provides a discovery mechanism for a requester to obtain the information resources associated with a particular product code immediately. In large scale application scenarios a code resolution service faces some serious issues involving heterogeneity, big data and data ownership. A code resolution service network is required to address these issues. Firstly, a list of requirements for the network architecture and code resolution services is proposed. Secondly, in order to eliminate code resolution conflicts and code resolution overloads, a code structure is presented to create a uniform namespace for code resolution records. Thirdly, we propose a loosely coupled distributed network consisting of heterogeneous, independent, collaborating code resolution services and a SkipNet based code resolution service named SkipNet-OCRS, which not only inherits DHT's advantages, but also supports administrative control and autonomy. For the external behaviors of SkipNet-OCRS, a novel external behavior mode named QRRA mode is proposed to enhance security and reduce requester complexity. For the internal behaviors of SkipNet-OCRS, an improved query algorithm is proposed to increase query efficiency. Our analysis shows that integrating SkipNet-OCRS into our resolution service network can meet the proposed requirements. Finally, simulation experiments verify the excellent performance of SkipNet-OCRS. PMID:23202207

  15. A large scale code resolution service network in the Internet of Things.

    PubMed

    Yu, Haining; Zhang, Hongli; Fang, Binxing; Yu, Xiangzhan

    2012-11-07

    In the Internet of Things a code resolution service provides a discovery mechanism for a requester to obtain the information resources associated with a particular product code immediately. In large scale application scenarios a code resolution service faces some serious issues involving heterogeneity, big data and data ownership. A code resolution service network is required to address these issues. Firstly, a list of requirements for the network architecture and code resolution services is proposed. Secondly, in order to eliminate code resolution conflicts and code resolution overloads, a code structure is presented to create a uniform namespace for code resolution records. Thirdly, we propose a loosely coupled distributed network consisting of heterogeneous, independent, collaborating code resolution services and a SkipNet based code resolution service named SkipNet-OCRS, which not only inherits DHT’s advantages, but also supports administrative control and autonomy. For the external behaviors of SkipNet-OCRS, a novel external behavior mode named QRRA mode is proposed to enhance security and reduce requester complexity. For the internal behaviors of SkipNet-OCRS, an improved query algorithm is proposed to increase query efficiency. Our analysis shows that integrating SkipNet-OCRS into our resolution service network can meet the proposed requirements. Finally, simulation experiments verify the excellent performance of SkipNet-OCRS.

  16. Coding visual features extracted from video sequences.

    PubMed

    Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2014-05-01

    Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics.

  17. Extrusion Process by Finite Volume Method Using OpenFoam Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matos Martins, Marcelo; Tonini Button, Sergio; Divo Bressan, Jose

    Computational codes are very important tools for solving engineering problems. In the analysis of metal forming processes, such as extrusion, this is no different, because computational codes allow analyzing the process at reduced cost. Traditionally, the Finite Element Method is used to solve solid mechanics problems; however, the Finite Volume Method (FVM) has been gaining ground in this field of applications. This paper presents the velocity field and friction coefficient variation results obtained by numerical simulation using the OpenFoam Software and the FVM to solve an aluminum direct cold extrusion process.

  18. A Consistent System for Coding Laboratory Samples

    NASA Astrophysics Data System (ADS)

    Sih, John C.

    1996-07-01

    A formal laboratory coding system is presented to keep track of laboratory samples. Preliminary useful information regarding the sample (origin and history) is gained without consulting a research notebook. Since this system uses and retains the same research notebook page number for each new experiment (reaction), finding and distinguishing products (samples) of the same or different reactions becomes an easy task. Using this system multiple products generated from a single reaction can be identified and classified in a uniform fashion. Samples can be stored and filed according to stage and degree of purification, e.g. crude reaction mixtures, recrystallized samples, chromatographed or distilled products.

  19. Inter-view prediction of intra mode decision for high-efficiency video coding-based multiview video coding

    NASA Astrophysics Data System (ADS)

    da Silva, Thaísa Leal; Agostini, Luciano Volcan; da Silva Cruz, Luis A.

    2014-05-01

    Intra prediction is a very important tool in current video coding standards. High-efficiency video coding (HEVC) intra prediction presents relevant gains in encoding efficiency when compared to previous standards, but with a very important increase in the computational complexity since 33 directional angular modes must be evaluated. Motivated by this high complexity, this article presents a complexity reduction algorithm developed to reduce the HEVC intra mode decision complexity targeting multiview videos. The proposed algorithm presents an efficient fast intra prediction compliant with singleview and multiview video encoding. This fast solution defines a reduced subset of intra directions according to the video texture and it exploits the relationship between prediction units (PUs) of neighbor depth levels of the coding tree. This fast intra coding procedure is used to develop an inter-view prediction method, which exploits the relationship between the intra mode directions of adjacent views to further accelerate the intra prediction process in multiview video encoding applications. When compared to HEVC simulcast, our method achieves a complexity reduction of up to 47.77%, at the cost of an average BD-PSNR loss of 0.08 dB.

  20. The diagnosis related groups enhanced electronic medical record.

    PubMed

    Müller, Marcel Lucas; Bürkle, Thomas; Irps, Sebastian; Roeder, Norbert; Prokosch, Hans-Ulrich

    2003-07-01

    The introduction of Diagnosis Related Groups as a basis for hospital payment in Germany announced essential changes in the hospital reimbursement practice. A hospital's economical survival will depend vitally on the accuracy and completeness of the documentation of DRG relevant data like diagnosis and procedure codes. In order to enhance physicians' coding compliance, an easy-to-use interface integrating coding tasks seamlessly into clinical routine had to be developed. A generic approach should access coding and clinical guidelines from different information sources. Within the Electronic Medical Record (EMR) a user interface ('DRG Control Center') for all DRG relevant clinical and administrative data has been built. A comprehensive DRG-related web site gives online access to DRG grouping software and an electronic coding expert. Both components are linked together using an application supporting bi-directional communication. Other web based services like a guideline search engine can be integrated as well. With the proposed method, the clinician gains quick access to context sensitive clinical guidelines for appropriate treatment of his/her patient and administrative guidelines for the adequate coding of the diagnoses and procedures. This paper describes the design and current implementation and discusses our experiences.

  1. The impact of medical tourism and the code of medical ethics on advertisement in Nigeria

    PubMed Central

    Makinde, Olusesan Ayodeji; Brown, Brandon; Olaleye, Olalekan

    2014-01-01

    Advances in management of clinical conditions are being made in several resource poor countries including Nigeria. Yet, the code of medical ethics which bars physician and health practices from advertising the kind of services they render deters these practices. This is worsened by the incursion of medical tourism facilitators (MTF) who continue to market healthcare services across countries over the internet and social media thereby raising ethical questions. A significant review of the advertisement ban in the code of ethics is long overdue. Limited knowledge about advances in medical practice among physicians and the populace, the growing medical tourism industry and its attendant effects, and the possibility of driving brain gain provide evidence to repeal the code. Ethical issues, resistance to change and elitist ideas are mitigating factors working in the opposite direction. The repeal of the code of medical ethics against advertising will undoubtedly favor health facilities in the country that currently cannot advertise the kind of services they render. A repeal or review of this code of medical ethics is necessary with properly laid down guidelines on how advertisements can be and cannot be done. PMID:25722776

  2. The impact of medical tourism and the code of medical ethics on advertisement in Nigeria.

    PubMed

    Makinde, Olusesan Ayodeji; Brown, Brandon; Olaleye, Olalekan

    2014-01-01

    Advances in management of clinical conditions are being made in several resource poor countries including Nigeria. Yet, the code of medical ethics which bars physician and health practices from advertising the kind of services they render deters these practices. This is worsened by the incursion of medical tourism facilitators (MTF) who continue to market healthcare services across countries over the internet and social media thereby raising ethical questions. A significant review of the advertisement ban in the code of ethics is long overdue. Limited knowledge about advances in medical practice among physicians and the populace, the growing medical tourism industry and its attendant effects, and the possibility of driving brain gain provide evidence to repeal the code. Ethical issues, resistance to change and elitist ideas are mitigating factors working in the opposite direction. The repeal of the code of medical ethics against advertising will undoubtedly favor health facilities in the country that currently cannot advertise the kind of services they render. A repeal or review of this code of medical ethics is necessary with properly laid down guidelines on how advertisements can be and cannot be done.

  3. Recent advances in coding theory for near error-free communications

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.

    1991-01-01

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.

  4. An in vitro ES cell imprinting model shows that imprinted expression of the Igf2r gene arises from an allele-specific expression bias

    PubMed Central

    Latos, Paulina A.; Stricker, Stefan H.; Steenpass, Laura; Pauler, Florian M.; Huang, Ru; Senergin, Basak H.; Regha, Kakkad; Koerner, Martha V.; Warczok, Katarzyna E.; Unger, Christine; Barlow, Denise P.

    2010-01-01

    Genomic imprinting is an epigenetic process that results in parental-specific gene expression. Advances in understanding the mechanism that regulates imprinted gene expression in mammals have largely depended on generating targeted manipulations in embryonic stem (ES) cells that are analysed in vivo in mice. However, genomic imprinting consists of distinct developmental steps, some of which occur in post-implantation embryos, indicating that they could be studied in vitro in ES cells. The mouse Igf2r gene shows imprinted expression only in post-implantation stages, when repression of the paternal allele has been shown to require cis-expression of the Airn non-coding (nc) RNA and to correlate with gain of DNA methylation and repressive histone modifications. Here we follow the gain of imprinted expression of Igf2r during in vitro ES cell differentiation and show that it coincides with the onset of paternal-specific expression of the Airn ncRNA. Notably, although Airn ncRNA expression leads, as predicted, to gain of repressive epigenetic marks on the paternal Igf2r promoter, we unexpectedly find that the paternal Igf2r promoter is expressed at similar low levels throughout ES cell differentiation. Our results further show that the maternal and paternal Igf2r promoters are expressed equally in undifferentiated ES cells, but during differentiation expression of the maternal Igf2r promoter increases up to 10-fold, while expression from the paternal Igf2r promoter remains constant. This indicates, contrary to expectation, that the Airn ncRNA induces imprinted Igf2r expression not by silencing the paternal Igf2r promoter, but by generating an expression bias between the two parental alleles. PMID:19141673

  5. Optimal auxiliary-covariate-based two-phase sampling design for semiparametric efficient estimation of a mean or mean difference, with application to clinical trials.

    PubMed

    Gilbert, Peter B; Yu, Xuesong; Rotnitzky, Andrea

    2014-03-15

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. We provide proofs and R code. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean 'importance-weighted' breadth (Y) of the T-cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24 % in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase two sample and by using a robust fitting method. Copyright © 2013 John Wiley & Sons, Ltd.

  6. Optimal Auxiliary-Covariate Based Two-Phase Sampling Design for Semiparametric Efficient Estimation of a Mean or Mean Difference, with Application to Clinical Trials

    PubMed Central

    Gilbert, Peter B.; Yu, Xuesong; Rotnitzky, Andrea

    2014-01-01

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semi-parametric efficient estimator is applied. This approach is made efficient by specifying the phase-two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. Simulations are performed to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. Proofs and R code are provided. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean “importance-weighted” breadth (Y) of the T cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y, and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y∣W] is important for realizing the efficiency gain, which is aided by an ample phase-two sample and by using a robust fitting method. PMID:24123289
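
    As a numerical illustration of the design principle only (not of the authors' semiparametric efficient estimator), the sketch below allocates phase-two measurements of Y with probabilities proportional to the cost-standardised conditional standard deviation of Y given W, then compares a W-stratified mean estimate against simple random sampling at the same expected budget. All variable names, strata, effect sizes and cost values are simulated assumptions.

      # Minimal sketch of optimal-style two-phase sampling: measure the expensive
      # endpoint Y with probability proportional to sd(Y | W) / sqrt(cost), scaled
      # to an expected budget, and estimate the mean with a W-stratified estimator.
      # Illustrative toy only; not the paper's estimator or exact optimality formula.
      import numpy as np

      rng = np.random.default_rng(0)
      n, budget = 5000, 1500.0
      w = rng.integers(0, 3, size=n)                       # cheap auxiliary covariate (3 strata)
      sd_y = np.array([1.0, 2.0, 4.0])[w]                  # assumed sd(Y | W) per stratum
      y = 10.0 + 2.0 * w + sd_y * rng.standard_normal(n)   # expensive endpoint (simulated)
      cost = np.full(n, 1.0)                               # assumed per-measurement cost of Y

      raw = sd_y / np.sqrt(cost)                           # Neyman-style allocation weights
      p_opt = np.minimum(1.0, budget / np.sum(raw * cost) * raw)
      p_srs = np.full(n, budget / np.sum(cost))            # simple random sampling, same budget

      def stratified_mean(probs):
          """Draw a phase-two sample, then combine stratum means by stratum proportions."""
          sel = rng.random(n) < probs
          return sum((w == h).mean() * y[sel & (w == h)].mean() for h in range(3))

      reps = 1000
      var_opt = np.var([stratified_mean(p_opt) for _ in range(reps)])
      var_srs = np.var([stratified_mean(p_srs) for _ in range(reps)])
      print(f"estimator variance, optimal allocation:     {var_opt:.5f}")
      print(f"estimator variance, simple random sampling: {var_srs:.5f}")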

  7. Economic incentives and diagnostic coding in a public health care system.

    PubMed

    Anthun, Kjartan Sarheim; Bjørngaard, Johan Håkon; Magnussen, Jon

    2017-03-01

    We analysed the association between economic incentives and diagnostic coding practice in the Norwegian public health care system. Data included 3,180,578 hospital discharges in Norway covering the period 1999-2008. For reimbursement purposes, all discharges are grouped in diagnosis-related groups (DRGs). We examined pairs of DRGs where the addition of one or more specific diagnoses places the patient in a complicated rather than an uncomplicated group, yielding higher reimbursement. The economic incentive was measured as the potential gain in income by coding a patient as complicated, and we analysed the association between this gain and the share of complicated discharges within the DRG pairs. Using multilevel linear regression modelling, we estimated both differences between hospitals for each DRG pair and changes within hospitals for each DRG pair over time. Over the whole period, a one-DRG-point difference in price was associated with an increased share of complicated discharges of 14.2 (95 % confidence interval [CI] 11.2-17.2) percentage points. However, a one-DRG-point change in prices between years was only associated with a 0.4 (95 % CI [Formula: see text] to 1.8) percentage point change of discharges into the most complicated diagnostic category. Although there was a strong increase in complicated discharges over time, this was not as closely related to price changes as expected.

  8. Automated target recognition using passive radar and coordinated flight models

    NASA Astrophysics Data System (ADS)

    Ehrman, Lisa M.; Lanterman, Aaron D.

    2003-09-01

    Rather than emitting pulses, passive radar systems rely on illuminators of opportunity, such as TV and FM radio, to illuminate potential targets. These systems are particularly attractive since they allow receivers to operate without emitting energy, rendering them covert. Many existing passive radar systems estimate the locations and velocities of targets. This paper focuses on adding an automatic target recognition (ATR) component to such systems. Our approach to ATR compares the Radar Cross Section (RCS) of targets detected by a passive radar system to the simulated RCS of known targets. To make the comparison as accurate as possible, the received signal model accounts for aircraft position and orientation, propagation losses, and antenna gain patterns. The estimated positions become inputs for an algorithm that uses a coordinated flight model to compute probable aircraft orientation angles. The Fast Illinois Solver Code (FISC) simulates the RCS of several potential target classes as they execute the estimated maneuvers. The RCS is then scaled by the Advanced Refractive Effects Prediction System (AREPS) code to account for propagation losses that occur as functions of altitude and range. The Numerical Electromagnetic Code (NEC2) computes the antenna gain pattern, so that the RCS can be further scaled. The Rician model compares the RCS of the illuminated aircraft with those of the potential targets. This comparison results in target identification.

  9. Stepwise Distributed Open Innovation Contests for Software Development: Acceleration of Genome-Wide Association Analysis

    PubMed Central

    Hill, Andrew; Loh, Po-Ru; Bharadwaj, Ragu B.; Pons, Pascal; Shang, Jingbo; Guinan, Eva; Lakhani, Karim; Kilty, Iain

    2017-01-01

    Background: The association of differing genotypes with disease-related phenotypic traits offers great potential to both help identify new therapeutic targets and support stratification of patients who would gain the greatest benefit from specific drug classes. Development of low-cost genotyping and sequencing has made collecting large-scale genotyping data routine in population and therapeutic intervention studies. In addition, a range of new technologies is being used to capture numerous new and complex phenotypic descriptors. As a result, genotype and phenotype datasets have grown exponentially. Genome-wide association studies associate genotypes and phenotypes using methods such as logistic regression. As existing tools for association analysis limit the efficiency by which value can be extracted from increasing volumes of data, there is a pressing need for new software tools that can accelerate association analyses on large genotype-phenotype datasets. Results: Using open innovation (OI) and contest-based crowdsourcing, the logistic regression analysis in a leading, community-standard genetics software package (PLINK 1.07) was substantially accelerated. OI allowed us to do this in <6 months by providing rapid access to highly skilled programmers with specialized, difficult-to-find skill sets. Through a crowd-based contest a combination of computational, numeric, and algorithmic approaches was identified that accelerated the logistic regression in PLINK 1.07 by 18- to 45-fold. Combining contest-derived logistic regression code with coarse-grained parallelization, multithreading, and associated changes to data initialization code further developed through distributed innovation, we achieved an end-to-end speedup of 591-fold for a data set size of 6678 subjects by 645 863 variants, compared to PLINK 1.07's logistic regression. This represents a reduction in run time from 4.8 hours to 29 seconds. Accelerated logistic regression code developed in this project has been incorporated into the PLINK2 project. Conclusions: Using iterative competition-based OI, we have developed a new, faster implementation of logistic regression for genome-wide association studies analysis. We present lessons learned and recommendations on running a successful OI process for bioinformatics. PMID:28327993

  10. Stepwise Distributed Open Innovation Contests for Software Development: Acceleration of Genome-Wide Association Analysis.

    PubMed

    Hill, Andrew; Loh, Po-Ru; Bharadwaj, Ragu B; Pons, Pascal; Shang, Jingbo; Guinan, Eva; Lakhani, Karim; Kilty, Iain; Jelinsky, Scott A

    2017-05-01

    The association of differing genotypes with disease-related phenotypic traits offers great potential to both help identify new therapeutic targets and support stratification of patients who would gain the greatest benefit from specific drug classes. Development of low-cost genotyping and sequencing has made collecting large-scale genotyping data routine in population and therapeutic intervention studies. In addition, a range of new technologies is being used to capture numerous new and complex phenotypic descriptors. As a result, genotype and phenotype datasets have grown exponentially. Genome-wide association studies associate genotypes and phenotypes using methods such as logistic regression. As existing tools for association analysis limit the efficiency by which value can be extracted from increasing volumes of data, there is a pressing need for new software tools that can accelerate association analyses on large genotype-phenotype datasets. Using open innovation (OI) and contest-based crowdsourcing, the logistic regression analysis in a leading, community-standard genetics software package (PLINK 1.07) was substantially accelerated. OI allowed us to do this in <6 months by providing rapid access to highly skilled programmers with specialized, difficult-to-find skill sets. Through a crowd-based contest a combination of computational, numeric, and algorithmic approaches was identified that accelerated the logistic regression in PLINK 1.07 by 18- to 45-fold. Combining contest-derived logistic regression code with coarse-grained parallelization, multithreading, and associated changes to data initialization code further developed through distributed innovation, we achieved an end-to-end speedup of 591-fold for a data set size of 6678 subjects by 645 863 variants, compared to PLINK 1.07's logistic regression. This represents a reduction in run time from 4.8 hours to 29 seconds. Accelerated logistic regression code developed in this project has been incorporated into the PLINK2 project. Using iterative competition-based OI, we have developed a new, faster implementation of logistic regression for genome-wide association studies analysis. We present lessons learned and recommendations on running a successful OI process for bioinformatics. © The Author 2017. Published by Oxford University Press.
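
    For context on what is being accelerated (and without implying this is PLINK's implementation), the sketch below is a toy per-variant logistic-regression association scan in NumPy: a Newton-Raphson fit of case/control status on each variant's genotype dosage, the kind of per-variant inner loop that dominates run time on large genotype-phenotype datasets. The cohort sizes and effect size in the example are arbitrary.

      # Toy per-variant logistic-regression association scan (phenotype ~ 1 + dosage),
      # fit by Newton-Raphson. Illustrates the per-variant inner loop that GWAS tools
      # such as PLINK accelerate; this is not PLINK's implementation.
      import numpy as np

      def logistic_scan(genotypes, phenotype, n_iter=8):
          """Return per-variant (beta, z) for a logistic fit of phenotype on dosage."""
          n_subjects, n_variants = genotypes.shape
          results = np.zeros((n_variants, 2))
          for j in range(n_variants):                       # loop that dominates run time
              X = np.column_stack([np.ones(n_subjects), genotypes[:, j]])
              beta = np.zeros(2)
              for _ in range(n_iter):
                  mu = 1.0 / (1.0 + np.exp(-(X @ beta)))    # fitted probabilities
                  hessian = X.T @ (X * (mu * (1.0 - mu))[:, None])
                  beta += np.linalg.solve(hessian, X.T @ (phenotype - mu))
              se = np.sqrt(np.linalg.inv(hessian)[1, 1])
              results[j] = beta[1], beta[1] / se
          return results

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          n, m = 2000, 500                                   # subjects x variants (toy sizes)
          g = rng.integers(0, 3, size=(n, m)).astype(float)  # genotype dosages 0/1/2
          logit = -1.0 + 0.4 * g[:, 0]                       # only variant 0 is associated
          pheno = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)
          out = logistic_scan(g, pheno)
          print("variant 0 (beta, z):", out[0])
          print("max |z| among null variants:", np.abs(out[1:, 1]).max())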

  11. Thermal neutron filter design for the neutron radiography facility at the LVR-15 reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soltes, Jaroslav; Faculty of Nuclear Sciences and Physical Engineering, CTU in Prague,; Viererbl, Ladislav

    2015-07-01

    In 2011 a decision was made to build a neutron radiography facility at one of the unused horizontal channels of the LVR-15 research reactor in Rez, Czech Republic. One of the key conditions for operating an effective radiography facility is the delivery of a high-intensity, homogeneous and collimated thermal neutron beam at the sample location. Additionally, the intensity of fast neutrons has to be kept as low as possible, as fast neutrons may damage the detectors used for neutron imaging. As the spectrum in the empty horizontal channel roughly copies the spectrum in the reactor core, which has a high ratio of fast neutrons, neutron filter components have to be installed inside the channel in order to achieve the desired beam parameters. As the channel design does not allow the installation of complex filters and collimators, an optimal solution is a neutron filter made of a large single-crystal ingot of suitable material composition. Single-crystal silicon was chosen as a favorable filter material for its wide availability in sufficient dimensions. Besides its ability to reasonably lower the ratio of fast neutrons while still keeping high intensities of thermal neutrons, its large dimensions also make it suitable as shielding against gamma radiation from the reactor core. For designing the necessary filter dimensions the Monte Carlo MCNP transport code was used. As the code does not provide neutron cross-section libraries for thermal neutron transport through single-crystalline silicon, these had to be created by approximating the theory of thermal neutron scattering and modifying the original cross-section data provided with the code. A series of calculations showed that a filter thickness of 1 m yields a beam with the desired parameters and a low gamma background. After mounting the filter inside the channel, several measurements of the neutron field were made at the beam exit. The results confirmed the calculated values. After the successful filter installation and a series of measurements, the first test neutron radiography runs with test samples could be carried out. (authors)

  12. Porting plasma physics simulation codes to modern computing architectures using the libmrc framework

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Abbott, Stephen

    2015-11-01

    Available computing power has continued to grow exponentially even after single-core performance saturated in the last decade. The increase has since been driven by more parallelism, both using more cores and having more parallelism in each core, e.g. in GPUs and Intel Xeon Phi. Adapting existing plasma physics codes is challenging, in particular as there is no single programming model that covers current and future architectures. We will introduce the open-source libmrc framework that has been used to modularize and port three plasma physics codes: the extended MHD code MRCv3 with implicit time integration and curvilinear grids; the OpenGGCM global magnetosphere model; and the particle-in-cell code PSC. libmrc consolidates basic functionality needed for simulations based on structured grids (I/O, load balancing, time integrators), and also introduces a parallel object model that makes it possible to maintain multiple implementations of computational kernels, on e.g. conventional processors and GPUs. It handles data layout conversions and enables us to port performance-critical parts of a code to a new architecture step-by-step, while the rest of the code can remain unchanged. We will show examples of the performance gains and some physics applications.

  13. Design of a Double Anode Magnetron Injection Gun for Q-band Gyro-TWT Using Boundary Element Method

    NASA Astrophysics Data System (ADS)

    Li, Zhiliang; Feng, Jinjun; Liu, Bentian

    2018-04-01

    This paper presents a novel design code for double anode magnetron injection guns (MIGs) in gyro-devices based on the boundary element method (BEM). The physical and mathematical models were constructed, and then the code using the BEM for MIG calculations was developed. Using the code, a double anode MIG for a Q-band gyrotron traveling-wave tube (gyro-TWT) amplifier operating in the circular TE01 mode at the fundamental cyclotron harmonic was designed. In order to verify the reliability of this code, the velocity spread and guiding center radius of the MIG simulated by the BEM code were compared with those from the commonly used EGUN code, showing a reasonable agreement. Then, a Q-band gyro-TWT was fabricated and tested. The testing results show that the device has achieved an average power of 5 kW and peak power ≥ 150 kW at a 3% duty cycle within a bandwidth of 2 GHz, and a maximum output peak power of 220 kW, with a corresponding saturated gain of 50.9 dB and efficiency of 39.8%. This paper demonstrates that the BEM code can be used as an effective approach for analysis of the electron optics system in gyro-devices.

  14. Status of BOUT fluid turbulence code: improvements and verification

    NASA Astrophysics Data System (ADS)

    Umansky, M. V.; Lodestro, L. L.; Xu, X. Q.

    2006-10-01

    BOUT is an electromagnetic fluid turbulence code for tokamak edge plasma [1]. BOUT performs time integration of reduced Braginskii plasma fluid equations, using spatial discretization in realistic geometry and employing a standard ODE integration package PVODE. BOUT has been applied to several tokamak experiments and in some cases calculated spectra of turbulent fluctuations compared favorably to experimental data. On the other hand, the desire to understand better the code results and to gain more confidence in it motivated investing effort in rigorous verification of BOUT. Parallel to the testing the code underwent substantial modification, mainly to improve its readability and tractability of physical terms, with some algorithmic improvements as well. In the verification process, a series of linear and nonlinear test problems was applied to BOUT, targeting different subgroups of physical terms. The tests include reproducing basic electrostatic and electromagnetic plasma modes in simplified geometry, axisymmetric benchmarks against the 2D edge code UEDGE in real divertor geometry, and neutral fluid benchmarks against the hydrodynamic code LCPFCT. After completion of the testing, the new version of the code is being applied to actual tokamak edge turbulence problems, and the results will be presented. [1] X. Q. Xu et al., Contr. Plas. Phys., 36,158 (1998). *Work performed for USDOE by Univ. Calif. LLNL under contract W-7405-ENG-48.

  15. Increasing Flexibility in Energy Code Compliance: Performance Packages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Rosenberg, Michael I.

    Energy codes and standards have provided significant increases in building efficiency over the last 38 years, since the first national energy code was published in late 1975. The most commonly used path in energy codes, the prescriptive path, appears to be reaching a point of diminishing returns. As the code matures, the prescriptive path becomes more complicated, and also more restrictive. It is likely that an approach that considers the building as an integrated system will be necessary to achieve the next real gains in building efficiency. Performance code paths are increasing in popularity; however, there remains a significant design team overhead in following the performance path, especially for smaller buildings. This paper focuses on development of one alternative format, prescriptive packages. A method to develop building-specific prescriptive packages is reviewed, based on multiple runs of prototypical building models used in a parametric decision analysis to determine a set of packages with equivalent energy performance. The approach is designed to be cost-effective and flexible for the design team while achieving a desired level of energy efficiency performance. A demonstration of the approach based on mid-sized office buildings with two HVAC system types is shown along with a discussion of potential applicability in the energy code process.

  16. Multiyear, Multi-Instructor Evaluation of a Large-Class Interactive-Engagement Curriculum

    ERIC Educational Resources Information Center

    Cahill, Michael J.; Hynes, K. Mairin; Trousil, Rebecca; Brooks, Lisa A.; McDaniel, Mark A.; Repice, Michelle; Zhao, Jiuqing; Frey, Regina F.

    2014-01-01

    Interactive-engagement (IE) techniques consistently enhance conceptual learning gains relative to traditional-lecture courses, but attitudinal gains typically emerge only in small, inquiry-based curricula. The current study evaluated whether a "scalable IE" curriculum--a curriculum used in a large course (~130 students per section) and…

  17. Dynamic divisive normalization predicts time-varying value coding in decision-related circuits.

    PubMed

    Louie, Kenway; LoFaro, Thomas; Webb, Ryan; Glimcher, Paul W

    2014-11-26

    Normalization is a widespread neural computation, mediating divisive gain control in sensory processing and implementing a context-dependent value code in decision-related frontal and parietal cortices. Although decision-making is a dynamic process with complex temporal characteristics, most models of normalization are time-independent and little is known about the dynamic interaction of normalization and choice. Here, we show that a simple differential equation model of normalization explains the characteristic phasic-sustained pattern of cortical decision activity and predicts specific normalization dynamics: value coding during initial transients, time-varying value modulation, and delayed onset of contextual information. Empirically, we observe these predicted dynamics in saccade-related neurons in monkey lateral intraparietal cortex. Furthermore, such models naturally incorporate a time-weighted average of past activity, implementing an intrinsic reference-dependence in value coding. These results suggest that a single network mechanism can explain both transient and sustained decision activity, emphasizing the importance of a dynamic view of normalization in neural coding. Copyright © 2014 the authors 0270-6474/14/3416046-12$15.00/0.
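
    As a generic illustration of the idea (not the authors' published equations), the sketch below integrates a simple two-variable normalization circuit: each output unit relaxes quickly toward its input value divided by a pooled gain signal, while the gain signal integrates the summed output more slowly. The form of the equations, the time constants and the input values are all assumptions chosen for illustration; with these choices the circuit reproduces the phasic-then-sustained pattern described above.

      # Minimal sketch of a dynamic divisive-normalization circuit. The equations,
      # time constants and input values are illustrative assumptions only.
      #   tau_r * dr_i/dt = -r_i + v_i / (sigma + g)     (fast value-coding units)
      #   tau_g * dg/dt   = -g + sum_j r_j               (slower pooled gain signal)
      import numpy as np

      def simulate(values, t_max=2.0, dt=0.001, tau_r=0.02, tau_g=0.2, sigma=0.1):
          steps = int(t_max / dt)
          r = np.zeros(len(values))          # output (normalized value) units
          g = 0.0                            # pooled gain-control variable
          trace = np.zeros((steps, len(values)))
          for t in range(steps):             # forward-Euler integration
              dr = (-r + values / (sigma + g)) / tau_r
              dg = (-g + r.sum()) / tau_g
              r, g = r + dt * dr, g + dt * dg
              trace[t] = r
          return trace

      if __name__ == "__main__":
          trace = simulate(values=np.array([1.0, 0.5, 0.25]))
          # Early transient peak, then a lower sustained level once the pooled gain
          # has built up; the ordering of responses by input value is preserved.
          print("peak responses:     ", trace.max(axis=0).round(3))
          print("sustained responses:", trace[-1].round(3))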

  18. A novel approach of an absolute coding pattern based on Hamiltonian graph

    NASA Astrophysics Data System (ADS)

    Wang, Ya'nan; Wang, Huawei; Hao, Fusheng; Liu, Liqiang

    2017-02-01

    In this paper, a novel coding pattern for an optical absolute rotary encoder is presented. The concept is based on the principle of the absolute encoder: finding a unique sequence that ensures an unambiguous shaft position at any angle. We design a single-ring and an n-by-2 matrix absolute encoder coding pattern using variations of the Hamiltonian graph principle. Twelve encoding bits are used in the single ring, read by a linear-array CCD, to achieve a 1080-position cycle encoding. In addition, a 2-by-2 matrix is used as the unit cell of a 2-track disk to achieve a 16-bit encoding pattern read by an area-array CCD sensor (as an example). Finally, higher resolution can be gained by electronic subdivision of the signals. Compared with a conventional Gray or binary code pattern (2^n resolution), the new pattern offers higher resolution (2^n*n) with fewer coding tracks, which allows a smaller encoder, an important consideration in industrial production.
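
    The core idea, a cyclic sequence in which every fixed-length window is unique so that a single sensor read fixes the absolute shaft position, can be illustrated with a standard de Bruijn sequence. The sketch below is a generic stand-in for that idea, not the authors' Hamiltonian-graph construction or their 12-bit, 1080-position layout.

```python
def de_bruijn(k, n):
    """Cyclic sequence over k symbols in which every length-n window is
    unique (standard Lyndon-word construction)."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

n = 4
code = de_bruijn(2, n)                    # 2**n bits, every n-bit window unique
ring = code + code[:n - 1]                # close the ring for wraparound reads
position = {tuple(ring[i:i + n]): i for i in range(len(code))}
reading = tuple(ring[7:7 + n])            # what a CCD might read at offset 7
print(position[reading])                  # -> 7: one read fixes the position
```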

  19. Coding of DNA samples and data in the pharmaceutical industry: current practices and future directions--perspective of the I-PWG.

    PubMed

    Franc, M A; Cohen, N; Warner, A W; Shaw, P M; Groenen, P; Snapir, A

    2011-04-01

    DNA samples collected in clinical trials and stored for future research are valuable to pharmaceutical drug development. Given the perceived higher risk associated with genetic research, industry has implemented complex coding methods for DNA. Following years of experience with these methods and with addressing questions from institutional review boards (IRBs), ethics committees (ECs) and health authorities, the industry has started reexamining the extent of the added value offered by these methods. With the goal of harmonization, the Industry Pharmacogenomics Working Group (I-PWG) conducted a survey to gain an understanding of company practices for DNA coding and to solicit opinions on their effectiveness at protecting privacy. The results of the survey and the limitations of the coding methods are described. The I-PWG recommends dialogue with key stakeholders regarding coding practices such that equal standards are applied to DNA and non-DNA samples. The I-PWG believes that industry standards for privacy protection should provide adequate safeguards for DNA and non-DNA samples/data and suggests a need for more universal standards for samples stored for future research.

  20. On the Study of Pre-Pregnancy Body Mass Index (BMI) and Weight Gain as Indicators of Nutritional Status of Pregnant Women Belonging to Low Socio-Economic Category: A Study from Assam.

    PubMed

    Mahanta, Lipi B; Choudhury, Manisha; Devi, Arundhuti; Bhattacharya, Arunima

    2015-01-01

    Women, particularly pregnant women, are among the most vulnerable members of society, and their health status is one of the major indicators of development. There have been ample studies on inadequate pre-pregnancy body mass index (IPBMI) and inadequate weight gain during pregnancy (IWGP) in other parts of the world and in India, but none in Assam. In Assam a large share of the population falls into the low socio-economic group, which is the most vulnerable to undernutrition. This study was therefore framed around these indicators to throw light on the factors affecting the health status of pregnant women and to help address the situation. A cross-sectional study using a multistage sampling design with probability proportional to size was conducted, comprising 461 pregnant women of low socio-economic status. Responses regarding their socio-economic, socio-cultural, health, diet and environmental background were collected and coded. The study revealed that although the IPBMI figure (34.06%) was slightly lower than the reported state, national and global percentages, the observed IWGP (82%) was strikingly high. The blood samples analyzed showed a high degree of inadequacy in almost all micronutrients studied in the survey (iron 63.1%, calcium 49.5% and copper 39.9%).

  1. A model of self-directed learning in internal medicine residency: a qualitative study using grounded theory.

    PubMed

    Sawatsky, Adam P; Ratelle, John T; Bonnes, Sara L; Egginton, Jason S; Beckman, Thomas J

    2017-02-02

    Existing theories of self-directed learning (SDL) have emphasized the importance of process, personal, and contextual factors. Previous medical education research has largely focused on the process of SDL. We explored the experience with and perception of SDL among internal medicine residents to gain understanding of the personal and contextual factors of SDL in graduate medical education. Using a constructivist grounded theory approach, we conducted 7 focus group interviews with 46 internal medicine residents at an academic medical center. We processed the data by using open coding and writing analytic memos. Team members organized open codes to create axial codes, which were applied to all transcripts. Guided by a previous model of SDL, we developed a theoretical model that was revised through constant comparison with new data as they were collected, and we refined the theory until it had adequate explanatory power and was appropriately grounded in the experiences of residents. We developed a theoretical model of SDL to explain the process, personal, and contextual factors affecting SDL during residency training. The process of SDL began with a trigger that uncovered a knowledge gap. Residents progressed to formulating learning objectives, using resources, applying knowledge, and evaluating learning. Personal factors included motivations, individual characteristics, and the change in approach to SDL over time. Contextual factors included the need for external guidance, the influence of residency program structure and culture, and the presence of contextual barriers. We developed a theoretical model of SDL in medical education that can be used to promote and assess resident SDL through understanding the process, person, and context of SDL.

  2. Replacing the CCSDS Telecommand Protocol with the Next Generation Uplink (NGU)

    NASA Technical Reports Server (NTRS)

    Kazz, Greg J.; Greenberg, Ed; Burleigh, Scott C.

    2012-01-01

    The current CCSDS Telecommand (TC) Recommendations 1-3 have essentially been in use since the early 1960s. The purpose of this paper is to propose a successor protocol to TC. The current CCSDS recommendations can only accommodate telecommand rates up to approximately 1 Mbit/s. However, today's spacecraft are storehouses for software, including software for Field Programmable Gate Arrays (FPGA), which are rapidly replacing unique hardware systems. Changes to flight software occasionally require uplinks to deliver very large volumes of data. In the opposite direction, high rate downlink missions that use the acknowledged CCSDS File Delivery Protocol (CFDP) will increase the uplink data rate requirements. It is calculated that a 5 Mbit/s downlink could saturate a 4 kbit/s uplink with CFDP downlink responses: negative acknowledgements (NAKs), FINISHs, End-of-File (EOF), Acknowledgements (ACKs). Moreover, it is anticipated that uplink rates of 10 to 20 Mbit/s will be required to support manned missions. The current TC recommendations cannot meet these new demands. Specifically, they are very tightly coupled to the Bose-Chaudhuri-Hocquenghem (BCH) code in Ref. 2. This protocol requires that an uncorrectable BCH codeword delimit the TC frame and terminate the randomization process. This method greatly limits telecom performance since only the BCH code can support the protocol. More modern techniques such as the CCSDS Low Density Parity Check (LDPC) codes can provide a minimum performance gain of up to 6 times higher command data rates as long as sufficient power is available in the data. This paper will describe the proposed protocol format, trade-offs, and advantages offered, along with a discussion of how reliable communication takes place at higher nominal rates.

  3. PROFIT: Bayesian profile fitting of galaxy images

    NASA Astrophysics Data System (ADS)

    Robotham, A. S. G.; Taranu, D. S.; Tobar, R.; Moffett, A.; Driver, S. P.

    2017-04-01

    We present PROFIT, a new code for Bayesian two-dimensional photometric galaxy profile modelling. PROFIT consists of a low-level C++ library (libprofit), accessible via a command-line interface and documented API, along with high-level R (PROFIT) and PYTHON (PyProFit) interfaces (available at github.com/ICRAR/libprofit, github.com/ICRAR/ProFit, and github.com/ICRAR/pyprofit, respectively). R PROFIT is also available pre-built from CRAN; however, this version will be slightly behind the latest GitHub version. libprofit offers fast and accurate two-dimensional integration for a useful number of profiles, including Sérsic, Core-Sérsic, broken-exponential, Ferrer, Moffat, empirical King, point-source, and sky, with a simple mechanism for adding new profiles. We show detailed comparisons between libprofit and GALFIT. libprofit is both faster and more accurate than GALFIT at integrating the ubiquitous Sérsic profile for the most common values of the Sérsic index n (0.5 < n < 8). The high-level fitting code PROFIT is tested on a sample of galaxies with both SDSS and deeper KiDS imaging. We find good agreement in the fit parameters, with larger scatter in best-fitting parameters from fitting images from different sources (SDSS versus KiDS) than from using different codes (PROFIT versus GALFIT). A large suite of Monte Carlo-simulated images are used to assess prospects for automated bulge-disc decomposition with PROFIT on SDSS, KiDS, and future LSST imaging. We find that the biggest increases in fit quality come from moving from SDSS- to KiDS-quality data, with less significant gains moving from KiDS to LSST.
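
    For readers unfamiliar with the profile at the centre of the timing comparison, the sketch below evaluates a one-dimensional Sérsic surface-brightness law using the common b_n ≈ 2n − 1/3 approximation. It illustrates the functional form only and does not use the libprofit or ProFit APIs; parameter values are invented for illustration.

```python
import numpy as np

def sersic_profile(r, I_e, r_e, n):
    """Sersic surface-brightness profile I(r).

    I_e : intensity at the effective (half-light) radius r_e
    n   : Sersic index; b_n uses the common approximation 2n - 1/3,
          adequate for n >~ 0.5 (an assumption for this sketch).
    """
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

r = np.linspace(0.1, 10.0, 5)
print(sersic_profile(r, I_e=1.0, r_e=3.0, n=4))   # de Vaucouleurs-like profile
```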

  4. [Benefits of large healthcare databases for drug risk research].

    PubMed

    Garbe, Edeltraut; Pigeot, Iris

    2015-08-01

    Large electronic healthcare databases have become an important worldwide data resource for drug safety research after approval. Signal generation methods and drug safety studies based on these data facilitate the prospective monitoring of drug safety after approval, as has been recently required by EU law and the German Medicines Act. Despite its large size, a single healthcare database may include insufficient patients for the study of a very small number of drug-exposed patients or the investigation of very rare drug risks. For that reason, in the United States, efforts have been made to work on models that provide the linkage of data from different electronic healthcare databases for monitoring the safety of medicines after authorization in (i) the Sentinel Initiative and (ii) the Observational Medical Outcomes Partnership (OMOP). In July 2014, the pilot project Mini-Sentinel included a total of 178 million people from 18 different US databases. The merging of the data is based on a distributed data network with a common data model. In the European Network of Centres for Pharmacoepidemiology and Pharmacovigilance (ENCEPP) there has been no comparable merging of data from different databases; however, first experiences have been gained in various EU drug safety projects. In Germany, the data of the statutory health insurance providers constitute the most important resource for establishing a large healthcare database. Their use for this purpose has so far been severely restricted by the Code of Social Law (Section 75, Book 10). Therefore, a reform of this section is absolutely necessary.

  5. Cross-Modal Correspondences Enhance Performance on a Colour-to-Sound Sensory Substitution Device.

    PubMed

    Hamilton-Fletcher, Giles; Wright, Thomas D; Ward, Jamie

    Visual sensory substitution devices (SSDs) can represent visual characteristics through distinct patterns of sound, allowing a visually impaired user access to visual information. Previous SSDs have avoided colour and when they do encode colour, have assigned sounds to colour in a largely unprincipled way. This study introduces a new tablet-based SSD termed the ‘Creole’ (so called because it combines tactile scanning with image sonification) and a new algorithm for converting colour to sound that is based on established cross-modal correspondences (intuitive mappings between different sensory dimensions). To test the utility of correspondences, we examined the colour–sound associative memory and object recognition abilities of sighted users who had their device either coded in line with or opposite to sound–colour correspondences. Improved colour memory and reduced colour-errors were made by users who had the correspondence-based mappings. Interestingly, the colour–sound mappings that provided the highest improvements during the associative memory task also saw the greatest gains for recognising realistic objects that also featured these colours, indicating a transfer of abilities from memory to recognition. These users were also marginally better at matching sounds to images varying in luminance, even though luminance was coded identically across the different versions of the device. These findings are discussed with relevance for both colour and correspondences for sensory substitution use.

  6. A smooth particle hydrodynamics code to model collisions between solid, self-gravitating objects

    NASA Astrophysics Data System (ADS)

    Schäfer, C.; Riecker, S.; Maindl, T. I.; Speith, R.; Scherrer, S.; Kley, W.

    2016-05-01

    Context. Modern graphics processing units (GPUs) lead to a major increase in the performance of the computation of astrophysical simulations. Owing to the different nature of GPU architecture compared to traditional central processing units (CPUs) such as x86 architecture, existing numerical codes cannot be easily migrated to run on GPU. Here, we present a new implementation of the numerical method smooth particle hydrodynamics (SPH) using CUDA and the first astrophysical application of the new code: the collision between Ceres-sized objects. Aims: The new code allows for a tremendous increase in speed of astrophysical simulations with SPH and self-gravity at low costs for new hardware. Methods: We have implemented the SPH equations to model gas, liquids and elastic, and plastic solid bodies and added a fragmentation model for brittle materials. Self-gravity may be optionally included in the simulations and is treated by the use of a Barnes-Hut tree. Results: We find an impressive performance gain using NVIDIA consumer devices compared to our existing OpenMP code. The new code is freely available to the community upon request. If you are interested in our CUDA SPH code miluphCUDA, please write an email to Christoph Schäfer. miluphCUDA is the CUDA port of miluph. miluph is pronounced [maßl2v]. We do not support the use of the code for military purposes.

  7. Large-scale transmission-type multifunctional anisotropic coding metasurfaces in millimeter-wave frequencies

    NASA Astrophysics Data System (ADS)

    Cui, Tie Jun; Wu, Rui Yuan; Wu, Wei; Shi, Chuan Bo; Li, Yun Bo

    2017-10-01

    We propose fast and accurate designs to large-scale and low-profile transmission-type anisotropic coding metasurfaces with multiple functions in the millimeter-wave frequencies based on the antenna-array method. The numerical simulation of an anisotropic coding metasurface with the size of 30λ × 30λ by the proposed method takes only 20 min, which however cannot be realized by commercial software due to huge memory usage in personal computers. To inspect the performance of coding metasurfaces in the millimeter-wave band, the working frequency is chosen as 60 GHz. Based on the convolution operations and holographic theory, the proposed multifunctional anisotropic coding metasurface exhibits different effects excited by y-polarized and x-polarized incidences. This study extends the frequency range of coding metasurfaces, filling the gap between microwave and terahertz bands, and implying promising applications in millimeter-wave communication and imaging.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adamek, Julian; Daverio, David; Durrer, Ruth

    We present a new N-body code, gevolution, for the evolution of large scale structure in the Universe. Our code is based on a weak field expansion of General Relativity and calculates all six metric degrees of freedom in Poisson gauge. N-body particles are evolved by solving the geodesic equation which we write in terms of a canonical momentum such that it remains valid also for relativistic particles. We validate the code by considering the Schwarzschild solution and, in the Newtonian limit, by comparing with the Newtonian N-body codes Gadget-2 and RAMSES. We then proceed with a simulation of large scale structure in a Universe with massive neutrinos where we study the gravitational slip induced by the neutrino shear stress. The code can be extended to include different kinds of dark energy or modified gravity models and to go beyond the usually adopted quasi-static approximation. Our code is publicly available.

  9. Application of a Two-dimensional Unsteady Viscous Analysis Code to a Supersonic Throughflow Fan Stage

    NASA Technical Reports Server (NTRS)

    Steinke, Ronald J.

    1989-01-01

    The Rai ROTOR1 code for two-dimensional, unsteady viscous flow analysis was applied to a supersonic throughflow fan stage design. The axial Mach number for this fan design increases from 2.0 at the inlet to 2.9 at the outlet. The Rai code uses overlapped O- and H-grids that are appropriately packed. The Rai code was run on a Cray XMP computer; then data postprocessing and graphics were performed to obtain detailed insight into the stage flow. The large rotor wakes uniformly traversed the rotor-stator interface and dispersed as they passed through the stator passage. Only weak blade shock losses were computed, which supports the design goals. Strong viscous effects caused large blade wakes and a low fan efficiency. Rai code flow predictions were essentially steady for the rotor, and they compared well with Chima rotor viscous code predictions based on a C-grid of similar density.

  10. Analyzing Prosocial Content on T.V.

    ERIC Educational Resources Information Center

    Davidson, Emily S.; Neale, John M.

    To enhance knowledge of television content, a prosocial code was developed by watching a large number of potentially prosocial television programs and making notes on all the positive acts. The behaviors were classified into a workable number of categories. The prosocial code is largely verbal and contains seven categories which fall into two…

  11. Supporting Source Code Comprehension during Software Evolution and Maintenance

    ERIC Educational Resources Information Center

    Alhindawi, Nouh

    2013-01-01

    This dissertation addresses the problems of program comprehension to support the evolution of large-scale software systems. The research concerns how software engineers locate features and concepts along with categorizing changes within very large bodies of source code along with their versioned histories. More specifically, advanced Information…

  12. Increasing signal processing sophistication in the calculation of the respiratory modulation of the photoplethysmogram (DPOP).

    PubMed

    Addison, Paul S; Wang, Rui; Uribe, Alberto A; Bergese, Sergio D

    2015-06-01

    DPOP (∆POP or Delta-POP) is a non-invasive parameter which measures the strength of respiratory modulations present in the pulse oximetry photoplethysmogram (pleth) waveform. It has been proposed as a non-invasive surrogate parameter for pulse pressure variation (PPV) used in the prediction of the response to volume expansion in hypovolemic patients. Many groups have reported on the DPOP parameter and its correlation with PPV using various semi-automated algorithmic implementations. The study reported here demonstrates the performance gains made by adding increasingly sophisticated signal processing components to a fully automated DPOP algorithm. A DPOP algorithm was coded and its performance systematically enhanced through a series of code module alterations and additions. Each algorithm iteration was tested on data from 20 mechanically ventilated OR patients. Correlation coefficients and ROC curve statistics were computed at each stage. For the purposes of the analysis we split the data into a manually selected 'stable' region subset of the data containing relatively noise free segments and a 'global' set incorporating the whole data record. Performance gains were measured in terms of correlation against PPV measurements in OR patients undergoing controlled mechanical ventilation. Through increasingly advanced pre-processing and post-processing enhancements to the algorithm, the correlation coefficient between DPOP and PPV improved from a baseline value of R = 0.347 to R = 0.852 for the stable data set, and, correspondingly, R = 0.225 to R = 0.728 for the more challenging global data set. Marked gains in algorithm performance are achievable for manually selected stable regions of the signals using relatively simple algorithm enhancements. Significant additional algorithm enhancements, including a correction for low perfusion values, were required before similar gains were realised for the more challenging global data set.
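
    The quantity itself is simple even though the automated algorithm is not. The sketch below computes ΔPOP from per-beat pulse amplitudes using the commonly published definition, without any of the pre- or post-processing stages the study evaluates; the amplitude values are invented for illustration.

```python
import numpy as np

def delta_pop(beat_amplitudes):
    """Basic DPOP from per-beat pleth pulse amplitudes over one respiratory
    cycle: (POPmax - POPmin) divided by the mean of the two. This is the
    commonly published definition, not the authors' full automated algorithm."""
    amps = np.asarray(beat_amplitudes, dtype=float)
    pop_max, pop_min = amps.max(), amps.min()
    return (pop_max - pop_min) / ((pop_max + pop_min) / 2.0)

# Hypothetical beat amplitudes across one respiratory cycle
print(delta_pop([1.00, 0.92, 0.80, 0.85, 0.97]))  # ~0.22, i.e. 22%
```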

  13. Using a source-to-source transformation to introduce multi-threading into the AliRoot framework for a parallel event reconstruction

    NASA Astrophysics Data System (ADS)

    Lohn, Stefan B.; Dong, Xin; Carminati, Federico

    2012-12-01

    Chip multiprocessors will support massive parallelism through many additional physical and logical cores. Performance gains can no longer be obtained by increasing clock frequency because the technical limits have almost been reached; instead, parallel execution must be used. Resources such as main memory, the cache hierarchy, memory-bus bandwidth, and the links between cores and sockets will not improve as fast. Hence, parallelism can only yield performance gains if memory usage is optimized and communication between threads is minimized. Moreover, concurrent programming has become a domain for experts: implementing multi-threading is error-prone and labor-intensive, and a full reimplementation of the whole AliRoot source code is unaffordable. This paper describes the effort to evaluate the adaptation of AliRoot to the needs of multi-threading and to provide parallel processing capability by using a semi-automatic source-to-source transformation, addressing the problems described above and providing a straightforward way of parallelization with almost no interference between threads. This keeps the approach simple and reduces the manual changes required in the code. In a first step, unconditional thread safety is introduced to bring the original sequential, thread-unaware source code into a position to utilize multi-threading. Further investigation then identifies candidate classes that are useful to share among threads. In a second step, the transformation changes the code to share these classes and finally verifies that no invalid interference between threads remains.

  14. Cracking Her Codes: Understanding Shared Technology Resources as Positioning Artifacts for Power and Status in CSCL Environments

    ERIC Educational Resources Information Center

    Simpson, Amber; Bannister, Nicole; Matthews, Gretchen

    2017-01-01

    There is a positive relationship between student participation in computer-supported collaborative learning (CSCL) environments and improved complex problem-solving strategies, increased learning gains, higher engagement in the thinking of their peers, and an enthusiastic disposition toward groupwork. However, student participation varies from…

  15. Discovering Genres of Online Discussion Threads via Text Mining

    ERIC Educational Resources Information Center

    Lin, Fu-Ren; Hsieh, Lu-Shih; Chuang, Fu-Tai

    2009-01-01

    As course management systems (CMS) gain popularity in facilitating teaching, a forum is a key component for facilitating interactions among students and teachers. Content analysis is the most popular way to study a discussion forum, but it is a labor-intensive process; for example, the coding process relies heavily on manual…

  16. Development of exploratory behavior in late preterm infants.

    PubMed

    Soares, Daniele de Almeida; von Hofsten, Claes; Tudella, Eloisa

    2012-12-01

    Exploratory behaviors of 9 late preterm infants and 10 full-term infants were evaluated longitudinally at 5, 6 and 7 months of age. Eight exploratory behaviors were coded. The preterm infants mouthed the object less and had delayed gains in Waving compared to the full-term infants. Copyright © 2012 Elsevier Inc. All rights reserved.

  17. Dealing with Conflicts on Knowledge in Tutorial Groups

    ERIC Educational Resources Information Center

    Aarnio, Matti; Lindblom-Ylanne, Sari; Nieminen, Juha; Pyorala, Eeva

    2013-01-01

    The aim of our study was to gain understanding of different types of conflicts on knowledge in the discussions of problem-based learning tutorial groups, and how such conflicts are dealt with. We examined first-year medical and dental students' (N = 33) conflicts on knowledge in four videotaped reporting phase tutorials. A coding scheme was…

  18. 26 CFR 1.168(i)-1 - General asset accounts.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... under paragraph (k) of this section. (b) Definitions. For purposes of this section, the following.... If a taxpayer makes the election under paragraph (k) of this section, assets that are subject to the... the Code that treat gain on a disposition as subject to section 1245 or 1250). (iii) Effect of...

  19. 26 CFR 1.168(i)-1 - General asset accounts.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... under paragraph (k) of this section. (b) Definitions. For purposes of this section, the following.... If a taxpayer makes the election under paragraph (k) of this section, assets that are subject to the... the Code that treat gain on a disposition as subject to section 1245 or 1250). (iii) Effect of...

  20. 78 FR 65042 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-30

    ... obligations properly report income and gain on such obligations. The people reporting will be institutions... of Subchapter K for Producers of Natural Gas. Abstract: This regulation contains certain requirements... elect out of subchapter K of chapter 1 of the Internal Revenue Code. Under section 1.761-2(d)(5)(i), gas...

  1. The ICF and Postsurgery Occupational Therapy after Traumatic Hand Injury

    ERIC Educational Resources Information Center

    Fitinghoff, Helene; Lindqvist, Birgitta; Nygard, Louise; Ekholm, Jan; Schult, Marie-Louise

    2011-01-01

    Recent studies have examined the effectiveness of hand rehabilitation programmes and have linked the outcomes to the concept of ICF but not to specific ICF category codes. The objective of this study was to gain experience using ICF concepts to describe occupational therapy interventions during postsurgery hand rehabilitation, and to describe…

  2. Realized gains from block-plot coastal Douglas-fir trials in the northern Oregon Cascades

    Treesearch

    Terrence Z. Ye; Keith J.S. Jayawickrama; J. Bradley St. Clair

    2010-01-01

    Realized gains for coastal Douglas-fir (Pseudotsuga menziesii var. menziesii) were evaluated using data collected from 15-year-old trees from five field trials planted in large block plots in the northern Oregon Cascades. Three populations with different genetic levels (elite--high predicted gain; intermediate--moderate predicted gain; and an...

  3. Regulation of Cortical Dynamic Range by Background Synaptic Noise and Feedforward Inhibition.

    PubMed

    Khubieh, Ayah; Ratté, Stéphanie; Lankarany, Milad; Prescott, Steven A

    2016-08-01

    The cortex encodes a broad range of inputs. This breadth of operation requires sensitivity to weak inputs yet non-saturating responses to strong inputs. If individual pyramidal neurons were to have a narrow dynamic range, as previously claimed, then staggered all-or-none recruitment of those neurons would be necessary for the population to achieve a broad dynamic range. Contrary to this explanation, we show here through dynamic clamp experiments in vitro and computer simulations that pyramidal neurons have a broad dynamic range under the noisy conditions that exist in the intact brain due to background synaptic input. Feedforward inhibition capitalizes on those noise effects to control neuronal gain and thereby regulates the population dynamic range. Importantly, noise allows neurons to be recruited gradually and occludes the staggered recruitment previously attributed to heterogeneous excitation. Feedforward inhibition protects spike timing against the disruptive effects of noise, meaning noise can enable the gain control required for rate coding without compromising the precise spike timing required for temporal coding. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  4. Multimodal Discriminative Binary Embedding for Large-Scale Cross-Modal Retrieval.

    PubMed

    Wang, Di; Gao, Xinbo; Wang, Xiumei; He, Lihuo; Yuan, Bo

    2016-10-01

    Multimodal hashing, which conducts effective and efficient nearest neighbor search across heterogeneous data on large-scale multimedia databases, has been attracting increasing interest, given the explosive growth of multimedia content on the Internet. Recent multimodal hashing research mainly aims at learning compact binary codes that preserve semantic information given by labels. The overwhelming majority of these methods are similarity-preserving approaches which approximate the pairwise similarity matrix with Hamming distances between the to-be-learnt binary hash codes. However, these methods ignore the discriminative property in the hash learning process, which leaves hash codes from different classes indistinguishable and therefore reduces the accuracy and robustness of nearest neighbor search. To this end, we present a novel multimodal hashing method, named multimodal discriminative binary embedding (MDBE), which focuses on learning discriminative hash codes. First, the proposed method formulates hash function learning in terms of classification, so that the binary codes generated by the learned hash functions are expected to be discriminative. It then exploits the label information to discover shared structures inside the heterogeneous data. Finally, the learned structures are preserved in the hash codes so that items in the same class produce similar binary codes. Hence, the proposed MDBE preserves both discriminability and similarity for hash codes, enhancing retrieval accuracy. Thorough experiments on benchmark data sets demonstrate that the proposed method achieves excellent accuracy and competitive computational efficiency compared with state-of-the-art methods for the large-scale cross-modal retrieval task.

  5. Electron Thermalization in the Solar Wind and Planetary Plasma Boundaries

    NASA Technical Reports Server (NTRS)

    Krauss-Varban, Dietmar

    1998-01-01

    The work carried out under this contract attempts a better understanding of whistler wave generation and associated scattering of electrons in the solar wind. This task is accomplished through simulations using a particle-in-cell code and a Vlasov code. In addition, the work is supported by the utilization of a linear kinetic dispersion solver. Previously, we have concentrated on gaining a better understanding of the linear mode properties, and have tested the simulation codes within a known parameter regime. We are now in a new phase in which we implement, execute, and analyze production simulations. This phase is projected to last over several reporting periods, with this being the second cycle. In addition, we have started to research to what extent the evolution of the pertinent instabilities is two-dimensional. We are also continuing our work on the visualization aspects of the simulation results, and on a code version that runs on single-user Alpha-processor based workstations.

  6. BBC users manual. [In LRLTRAN for CDC 7600 and STAR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ltterst, R. F.; Sutcliffe, W. G.; Warshaw, S. I.

    1977-11-01

    BBC is a two-dimensional, multifluid Eulerian hydro-radiation code based on KRAKEN and some subsequent ideas. It was developed in the explosion group in T-Division as a basic two-dimensional code to which various types of physics can be added. For this reason BBC is a FORTRAN (LRLTRAN) code. In order to gain the 2-to-1 to 4-to-1 speed advantage of the STACKLIB software on the 7600's and to be able to execute at high speed on the STAR, the vector extensions of LRLTRAN (STARTRAN) are used throughout the code. Either cylindrical- or slab-type problems can be run on BBC. The grid is bounded by a rectangular band of boundary zones. The interfaces between the regular and boundary zones can be selected to be either rigid or nonrigid. The setup for BBC problems is described in the KEG Manual and LEG Manual. The difference equations are described in BBC Hydrodynamics. Basic input and output for BBC are described.

  7. Occupational self-coding and automatic recording (OSCAR): a novel web-based tool to collect and code lifetime job histories in large population-based studies.

    PubMed

    De Matteis, Sara; Jarvis, Deborah; Young, Heather; Young, Alan; Allen, Naomi; Potts, James; Darnton, Andrew; Rushton, Lesley; Cullinan, Paul

    2017-03-01

    Objectives The standard approach to the assessment of occupational exposures is through the manual collection and coding of job histories. This method is time-consuming and costly and makes it potentially unfeasible to perform high quality analyses on occupational exposures in large population-based studies. Our aim was to develop a novel, efficient web-based tool to collect and code lifetime job histories in the UK Biobank, a population-based cohort of over 500 000 participants. Methods We developed OSCAR (occupations self-coding automatic recording) based on the hierarchical structure of the UK Standard Occupational Classification (SOC) 2000, which allows individuals to collect and automatically code their lifetime job histories via a simple decision-tree model. Participants were asked to find each of their jobs by selecting appropriate job categories until they identified their job title, which was linked to a hidden 4-digit SOC code. For each occupation a job title in free text was also collected to estimate Cohen's kappa (κ) inter-rater agreement between SOC codes assigned by OSCAR and an expert manual coder. Results OSCAR was administered to 324 653 UK Biobank participants with an existing email address between June and September 2015. Complete 4-digit SOC-coded lifetime job histories were collected for 108 784 participants (response rate: 34%). Agreement between the 4-digit SOC codes assigned by OSCAR and the manual coder for a random sample of 400 job titles was moderately good [κ=0.45, 95% confidence interval (95% CI) 0.42-0.49], and improved when broader job categories were considered (κ=0.64, 95% CI 0.61-0.69 at a 1-digit SOC-code level). Conclusions OSCAR is a novel, efficient, and reasonably reliable web-based tool for collecting and automatically coding lifetime job histories in large population-based studies. Further application in other research projects for external validation purposes is warranted.
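
    The agreement statistic reported above can be reproduced with a few lines of code. The sketch below computes Cohen's kappa for two label sequences, with hypothetical 1-digit SOC codes standing in for the OSCAR and manual assignments; the example data are invented.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two label sequences (e.g. OSCAR vs. manual SOC codes)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n     # observed agreement
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[l] * freq_b[l] for l in set(freq_a) | set(freq_b)) / n ** 2
    return (p_o - p_e) / (1.0 - p_e)

# Toy example with hypothetical 1-digit SOC codes
oscar  = ["2", "2", "3", "5", "9", "2", "3", "5"]
manual = ["2", "3", "3", "5", "9", "2", "2", "5"]
print(round(cohen_kappa(oscar, manual), 2))
```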

  8. CELFE/NASTRAN Code for the Analysis of Structures Subjected to High Velocity Impact

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1978-01-01

    The CELFE (Coupled Eulerian Lagrangian Finite Element)/NASTRAN code is a three-dimensional finite element code with the capability of analyzing structures subjected to high velocity impact. The local response is predicted by CELFE and, for large problems, the far-field impact response is predicted by NASTRAN. The coupling of the CELFE code with NASTRAN (the CELFE/NASTRAN code) and the application of the code to selected three-dimensional high velocity impact problems are described.

  9. Applications of multiple-constraint matrix updates to the optimal control of large structures

    NASA Technical Reports Server (NTRS)

    Smith, S. W.; Walcott, B. L.

    1992-01-01

    Low-authority control or vibration suppression in large, flexible space structures can be formulated as a linear feedback control problem requiring computation of displacement and velocity feedback gain matrices. To ensure stability in the uncontrolled modes, these gain matrices must be symmetric and positive definite. In this paper, efficient computation of symmetric, positive-definite feedback gain matrices is accomplished through the use of multiple-constraint matrix update techniques originally developed for structural identification applications. Two systems were used to illustrate the application: a simple spring-mass system and a planar truss. From these demonstrations, use of this multiple-constraint technique is seen to provide a straightforward approach for computing the low-authority gains.

  10. Framing Innovation: The Role of Distributed Leadership in Gaining Acceptance of Large-Scale Technology Initiatives

    ERIC Educational Resources Information Center

    Turner, Henry J.

    2014-01-01

    This dissertation of practice utilized a multiple case-study approach to examine distributed leadership within five school districts that were attempting to gain acceptance of a large-scale 1:1 technology initiative. Using frame theory and distributed leadership theory as theoretical frameworks, this study interviewed each district's…

  11. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, (7, 1/2) convolutional code as an inner code and CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
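
    As a small, self-contained companion to the innermost layer of the concatenation described above, the sketch below computes a bitwise 16-bit CRC over a frame (in Python rather than C, for consistency with the other sketches in this collection). The 0x1021 polynomial and all-ones preset are the common CCITT choices and are assumptions here, not a verified CCSDS parameter set.

```python
def crc16(data: bytes, poly=0x1021, init=0xFFFF):
    """Bitwise CRC-16 of the kind used as an inner error-detecting code.
    The polynomial and preset are assumed CCITT-style parameters."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

frame = b"telemetry frame payload"
print(hex(crc16(frame)))      # 16 parity bits appended to the n-16 information bits
```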

  12. Association between gestational weight gain and perinatal outcomes in women with chronic hypertension.

    PubMed

    Yee, Lynn M; Caughey, Aaron B; Cheng, Yvonne W

    2017-09-01

    Gestational weight gain above or below the 2009 National Academy of Medicine guidelines has been associated with adverse maternal and neonatal outcomes. Although it has been well established that excess gestational weight gain is associated with the development of gestational hypertension and preeclampsia, the relationship between gestational weight gain and adverse perinatal outcomes among women with pregestational (chronic) hypertension is less clear. The objective of this study was to examine the relationship between gestational weight gain above and below National Academy of Medicine guidelines and perinatal outcomes in a large, population-based cohort of women with chronic hypertension. This is a population-based retrospective cohort study of women with chronic hypertension who had term, singleton, vertex births in the United States from 2012 through 2014. Prepregnancy body mass index was calculated using self-reported prepregnancy weight and height. Women were categorized into 4 groups based on gestational weight gain and prepregnancy body mass index: (1) weight gain less than, (2) weight gain within, (3) weight gain 1-19 lb in excess of, and (4) weight gain ≥20 lb in excess of the National Academy of Medicine guidelines. The χ 2 tests and multivariable logistic regression analysis were used for statistical comparisons. Stratified analyses by body mass index category were additionally performed. In this large birth cohort, 101,259 women met criteria for inclusion. Compared to hypertensive women who had gestational weight gain within guidelines, hypertensive women with weight gain ≥20 lb over National Academy of Medicine guidelines were more likely to have eclampsia (adjusted odds ratio, 1.93; 95% confidence interval, 1.54-2.42) and cesarean delivery (adjusted odds ratio, 1.60; 95% confidence interval, 1.50-1.70). Excess weight gain ≥20 lb over National Academy of Medicine guidelines was also associated with increased odds of 5-minute Apgar <7 (adjusted odds ratio, 1.29; 95% confidence interval, 1.13-1.47), neonatal intensive care unit admission (adjusted odds ratio, 1.23; 95% confidence interval, 1.14-1.33), and large-for-gestational-age neonates (adjusted odds ratio, 2.41; 95% confidence interval, 2.27-2.56) as well as decreased odds of small-for-gestational-age status (adjusted odds ratio, 0.52; 95% confidence interval, 0.46-0.58). Weight gain 1-19 lb over guidelines was associated with similar fetal growth outcomes although with a smaller effect size. In contrast, weight gain less than National Academy of Medicine guidelines was not associated with adverse maternal outcomes but was associated with increased odds of small for gestational age (adjusted odds ratio, 1.31; 95% confidence interval, 1.21-1.52) and decreased odds of large-for-gestational-age status (adjusted odds ratio, 0.86; 95% confidence interval, 0.81-0.92). Analysis of maternal and neonatal outcomes stratified by body mass index demonstrated similar findings. Women with chronic hypertension who gain less weight than National Academy of Medicine guidelines experience increased odds of small-for-gestational-age neonates, whereas excess weight gain ≥20 lb over National Academy of Medicine guidelines is associated with cesarean delivery, eclampsia, 5-minute Apgar <7, neonatal intensive care unit admission, and large-for-gestational-age neonates. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. A universal preconditioner for simulating condensed phase materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Packwood, David; Ortner, Christoph, E-mail: c.ortner@warwick.ac.uk; Kermode, James, E-mail: j.r.kermode@warwick.ac.uk

    2016-04-28

    We introduce a universal sparse preconditioner that accelerates geometry optimisation and saddle point search tasks that are common in the atomic scale simulation of materials. Our preconditioner is based on the neighbourhood structure and we demonstrate the gain in computational efficiency in a wide range of materials that include metals, insulators, and molecular solids. The simple structure of the preconditioner means that the gains can be realised in practice not only when using expensive electronic structure models but also for fast empirical potentials. Even for relatively small systems of a few hundred atoms, we observe speedups of a factor of two or more, and the gain grows with system size. An open source Python implementation within the Atomic Simulation Environment is available, offering interfaces to a wide range of atomistic codes.

  14. Pilot-Assisted Channel Estimation for Orthogonal Multi-Carrier DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Shima, Tomoyuki; Tomeba, Hiromichi; Adachi, Fumiyuki

    Orthogonal multi-carrier direct sequence code division multiple access (orthogonal MC DS-CDMA) is a combination of time-domain spreading and orthogonal frequency division multiplexing (OFDM). In orthogonal MC DS-CDMA, the frequency diversity gain can be obtained by applying frequency-domain equalization (FDE) based on minimum mean square error (MMSE) criterion to a block of OFDM symbols and can improve the bit error rate (BER) performance in a severe frequency-selective fading channel. FDE requires an accurate estimate of the channel gain. The channel gain can be estimated by removing the pilot modulation in the frequency domain. In this paper, we propose a pilot-assisted channel estimation suitable for orthogonal MC DS-CDMA with FDE and evaluate, by computer simulation, the BER performance in a frequency-selective Rayleigh fading channel.
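
    A minimal numerical sketch of the two steps described above, estimating the per-subcarrier channel gain by removing a known pilot's modulation and then forming one-tap MMSE equalizer weights, is shown below. The block length, SNR, and pilot construction are arbitrary assumptions, not the parameters used in the paper's simulations.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                      # subcarriers (illustrative)
snr_lin = 10.0 ** (10.0 / 10.0)             # 10 dB symbol SNR (assumed)

# Frequency-selective channel and a known unit-magnitude pilot block
H = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
pilot = np.exp(1j * np.pi / 2 * rng.integers(0, 4, N))
noise = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2 * snr_lin)
R_pilot = H * pilot + noise

# Pilot-assisted estimate: remove the pilot modulation in the frequency domain
H_hat = R_pilot * np.conj(pilot)            # |pilot| = 1, so this divides it out

# One-tap MMSE FDE weights built from the estimate
W = np.conj(H_hat) / (np.abs(H_hat) ** 2 + 1.0 / snr_lin)
print(np.mean(np.abs(W * H - 1.0)))         # residual distortion after equalization
```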

  15. Low dose digital X-ray imaging with avalanche amorphous selenium

    NASA Astrophysics Data System (ADS)

    Scheuermann, James R.; Goldan, Amir H.; Tousignant, Olivier; Léveillé, Sébastien; Zhao, Wei

    2015-03-01

    Active Matrix Flat Panel Imagers (AMFPI) based on an array of thin film transistors (TFT) have become the dominant technology for digital x-ray imaging. In low dose applications, the performance of both direct and indirect conversion detectors are limited by the electronic noise associated with the TFT array. New concepts of direct and indirect detectors have been proposed using avalanche amorphous selenium (a-Se), referred to as high gain avalanche rushing photoconductor (HARP). The indirect detector utilizes a planar layer of HARP to detect light from an x-ray scintillator and amplify the photogenerated charge. The direct detector utilizes separate interaction (non-avalanche) and amplification (avalanche) regions within the a-Se to achieve depth-independent signal gain. Both detectors require the development of large area, solid state HARP. We have previously reported the first avalanche gain in a-Se with deposition techniques scalable to large area detectors. The goal of the present work is to demonstrate the feasibility of large area HARP fabrication in an a-Se deposition facility established for commercial large area AMFPI. We also examine the effect of alternative pixel electrode materials on avalanche gain. The results show that avalanche gain > 50 is achievable in the HARP layers developed in large area coaters, which is sufficient to achieve x-ray quantum noise limited performance down to a single x-ray photon per pixel. Both chromium (Cr) and indium tin oxide (ITO) have been successfully tested as pixel electrodes.

  16. Progress towards large gain-length products on the Li-like recombination scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeitoun, P.; Jamelot, G.; Carillon, A.

    1995-05-01

    Investigating possibilities of attaining large gain-length products in the recombination scheme using lithium-like ions, we have examined two approaches aimed at overcoming the problem of plasma non-uniformity, which can destroy gain through a number of processes. In the first approach we studied amplification on the 5f-3d and 4f-3d transitions in a Li-like Al^10+ plasma column produced with smoothing optics using lens arrays. Employing this device resulted in the gain holding up significantly longer than when no smoothing optics were used. Second, we have investigated numerically and experimentally the 5g-4f transition in Li-like S^13+, as the gain should be barely affected by the plasma non-uniformities. Encouraging results were obtained and their various aspects are discussed.

  17. Array coding for large data memories

    NASA Technical Reports Server (NTRS)

    Tranter, W. H.

    1982-01-01

    It is pointed out that an array code is a convenient method for storing large quantities of data. In a typical application, the array consists of N data words with M symbols in each word. The probability of undetected error is considered, taking into account three symbol error probabilities of interest, and a formula for determining the probability of undetected error is given. Attention is given to the possibility of reading data into the array using a digital communication system with symbol error probability p. Two different schemes are found to be of interest. The analysis of array coding shows that the probability of undetected error is very small even for relatively large arrays.
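
    A generic way to see how small the undetected-error probability stays at realistic symbol error rates is to bound it by the probability of drawing at least d symbol errors in a word, where d is the smallest error weight the code can fail to detect. The sketch below computes that bound; it is an illustrative calculation under that assumption, not the formula derived in the paper.

```python
from math import comb

def undetected_error_bound(n, d, p):
    """Upper bound on the undetected-error probability for an n-symbol word
    when all patterns of fewer than d symbol errors are detected:
    sum over k >= d of C(n, k) * p**k * (1 - p)**(n - k)."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(d, n + 1))

# Word of M = 32 symbols, all single-symbol errors detected, symbol error rate 1e-4
print(undetected_error_bound(32, 2, 1e-4))   # roughly 5e-6 per word
```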

  18. The Five 'R's' for Developing Trusted Software Frameworks to increase confidence in, and maximise reuse of, Open Source Software.

    NASA Astrophysics Data System (ADS)

    Fraser, Ryan; Gross, Lutz; Wyborn, Lesley; Evans, Ben; Klump, Jens

    2015-04-01

    Recent investments in HPC, cloud and Petascale data stores, have dramatically increased the scale and resolution that earth science challenges can now be tackled. These new infrastructures are highly parallelised and to fully utilise them and access the large volumes of earth science data now available, a new approach to software stack engineering needs to be developed. The size, complexity and cost of the new infrastructures mean any software deployed has to be reliable, trusted and reusable. Increasingly software is available via open source repositories, but these usually only enable code to be discovered and downloaded. As a user it is hard for a scientist to judge the suitability and quality of individual codes: rarely is there information on how and where codes can be run, what the critical dependencies are, and in particular, on the version requirements and licensing of the underlying software stack. A trusted software framework is proposed to enable reliable software to be discovered, accessed and then deployed on multiple hardware environments. More specifically, this framework will enable those who generate the software, and those who fund the development of software, to gain credit for the effort, IP, time and dollars spent, and facilitate quantification of the impact of individual codes. For scientific users, the framework delivers reviewed and benchmarked scientific software with mechanisms to reproduce results. The trusted framework will have five separate, but connected components: Register, Review, Reference, Run, and Repeat. 1) The Register component will facilitate discovery of relevant software from multiple open source code repositories. The registration process of the code should include information about licensing, hardware environments it can be run on, define appropriate validation (testing) procedures and list the critical dependencies. 2) The Review component is targeting on the verification of the software typically against a set of benchmark cases. This will be achieved by linking the code in the software framework to peer review forums such as Mozilla Science or appropriate Journals (e.g. Geoscientific Model Development Journal) to assist users to know which codes to trust. 3) Referencing will be accomplished by linking the Software Framework to groups such as Figshare or ImpactStory that help disseminate and measure the impact of scientific research, including program code. 4) The Run component will draw on information supplied in the registration process, benchmark cases described in the review and relevant information to instantiate the scientific code on the selected environment. 5) The Repeat component will tap into existing Provenance Workflow engines that will automatically capture information that relate to a particular run of that software, including identification of all input and output artefacts, and all elements and transactions within that workflow. The proposed trusted software framework will enable users to rapidly discover and access reliable code, reduce the time to deploy it and greatly facilitate sharing, reuse and reinstallation of code. Properly designed it could enable an ability to scale out to massively parallel systems and be accessed nationally/ internationally for multiple use cases, including Supercomputer centres, cloud facilities, and local computers.

  19. Information theoretical assessment of digital imaging systems

    NASA Technical Reports Server (NTRS)

    John, Sarah; Rahman, Zia-Ur; Huck, Friedrich O.; Reichenbach, Stephen E.

    1990-01-01

    The end-to-end performance of image gathering, coding, and restoration as a whole is considered. This approach is based on the pivotal relationship that exists between the spectral information density of the transmitted signal and the restorability of images from this signal. The information-theoretical assessment accounts for (1) the information density and efficiency of the acquired signal as a function of the image-gathering system design and the radiance-field statistics, and (2) the improvement in information efficiency and data compression that can be gained by combining image gathering with coding to reduce the signal redundancy and irrelevancy. It is concluded that images can be restored with better quality and from fewer data as the information efficiency of the data is increased. The restoration correctly explains the image gathering and coding processes and effectively suppresses the image-display degradations.

  20. Information theoretical assessment of digital imaging systems

    NASA Astrophysics Data System (ADS)

    John, Sarah; Rahman, Zia-Ur; Huck, Friedrich O.; Reichenbach, Stephen E.

    1990-10-01

    The end-to-end performance of image gathering, coding, and restoration as a whole is considered. This approach is based on the pivotal relationship that exists between the spectral information density of the transmitted signal and the restorability of images from this signal. The information-theoretical assessment accounts for (1) the information density and efficiency of the acquired signal as a function of the image-gathering system design and the radiance-field statistics, and (2) the improvement in information efficiency and data compression that can be gained by combining image gathering with coding to reduce the signal redundancy and irrelevancy. It is concluded that images can be restored with better quality and from fewer data as the information efficiency of the data is increased. The restoration correctly explains the image gathering and coding processes and effectively suppresses the image-display degradations.

  1. Coherent state coding approaches the capacity of non-Gaussian bosonic channels

    NASA Astrophysics Data System (ADS)

    Huber, Stefan; König, Robert

    2018-05-01

    The additivity problem asks if the use of entanglement can boost the information-carrying capacity of a given channel beyond what is achievable by coding with simple product states only. This has recently been shown not to be the case for phase-insensitive one-mode Gaussian channels, but remains unresolved in general. Here we consider two general classes of bosonic noise channels, which include phase-insensitive Gaussian channels as special cases: these are attenuators with general, potentially non-Gaussian environment states and classical noise channels with general probabilistic noise. We show that additivity violations, if existent, are rather minor for all these channels: the maximal gain in classical capacity is bounded by a constant independent of the input energy. Our proof shows that coding by simple classical modulation of coherent states is close to optimal.

  2. Computing Legacy Software Behavior to Understand Functionality and Security Properties: An IBM/370 Demonstration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linger, Richard C; Pleszkoch, Mark G; Prowell, Stacy J

    Organizations maintaining mainframe legacy software can benefit from code modernization and incorporation of security capabilities to address the current threat environment. Oak Ridge National Laboratory is developing the Hyperion system to compute the behavior of software as a means to gain understanding of software functionality and security properties. Computation of functionality is critical to revealing security attributes, which are in fact specialized functional behaviors of software. Oak Ridge is collaborating with MITRE Corporation to conduct a demonstration project to compute behavior of legacy IBM Assembly Language code for a federal agency. The ultimate goal is to understand functionality and security vulnerabilities as a basis for code modernization. This paper reports on the first phase, to define functional semantics for IBM Assembly instructions and conduct behavior computation experiments.

  3. Revisiting the operational RNA code for amino acids: Ensemble attributes and their implications.

    PubMed

    Shaul, Shaul; Berel, Dror; Benjamini, Yoav; Graur, Dan

    2010-01-01

    It has been suggested that tRNA acceptor stems specify an operational RNA code for amino acids. In the last 20 years several attributes of the putative code have been elucidated for a small number of model organisms. To gain insight about the ensemble attributes of the code, we analyzed 4925 tRNA sequences from 102 bacterial and 21 archaeal species. Here, we used a classification and regression tree (CART) methodology, and we found that the degrees of degeneracy or specificity of the RNA codes in both Archaea and Bacteria differ from those of the genetic code. We found instances of taxon-specific alternative codes, i.e., identical acceptor stem determinants encrypting different amino acids in different species, as well as instances of ambiguity, i.e., identical acceptor stem determinants encrypting two or more amino acids in the same species. When partitioning the data by class of synthetase, the degree of code ambiguity was significantly reduced. In cryptographic terms, a plausible interpretation of this result is that the class distinction in synthetases is an essential part of the decryption rules for resolving the subset of RNA code ambiguities enciphered by identical acceptor stem determinants of tRNAs acylated by enzymes belonging to the two classes. In evolutionary terms, our findings lend support to the notion that in the pre-DNA world, interactions between tRNA acceptor stems and synthetases formed the basis for the distinction between the two classes; hence, ambiguities in the ancient RNA code were pivotal for the fixation of these enzymes in the genomes of ancestral prokaryotes.
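
    As an illustration only (the feature encoding, samples and labels below are toy stand-ins, not the 4925-sequence dataset analysed in the paper), a CART analysis of this kind maps acceptor-stem determinants to amino-acid identities with a decision tree:

        # Minimal CART sketch, assuming scikit-learn is available; rows are
        # hypothetical tRNAs, columns are acceptor-stem positions encoded as
        # integers (A=0, C=1, G=2, U=3), labels are toy amino-acid identities.
        from sklearn.tree import DecisionTreeClassifier

        X = [[2, 1, 0, 3],
             [2, 1, 0, 0],
             [0, 3, 2, 1],
             [0, 3, 2, 2]]
        y = ["Ala", "Ala", "Gly", "Ser"]

        tree = DecisionTreeClassifier(criterion="gini", max_depth=3)  # CART-style tree
        tree.fit(X, y)
        print(tree.predict([[2, 1, 0, 3]]))  # expected: ['Ala']

    Degeneracy and ambiguity in the ensemble then show up as, respectively, distinct determinants reaching the same leaf label and identical determinants that cannot be split into pure leaves.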

  4. Revisiting the operational RNA code for amino acids: Ensemble attributes and their implications

    PubMed Central

    Shaul, Shaul; Berel, Dror; Benjamini, Yoav; Graur, Dan

    2010-01-01

    It has been suggested that tRNA acceptor stems specify an operational RNA code for amino acids. In the last 20 years several attributes of the putative code have been elucidated for a small number of model organisms. To gain insight about the ensemble attributes of the code, we analyzed 4925 tRNA sequences from 102 bacterial and 21 archaeal species. Here, we used a classification and regression tree (CART) methodology, and we found that the degrees of degeneracy or specificity of the RNA codes in both Archaea and Bacteria differ from those of the genetic code. We found instances of taxon-specific alternative codes, i.e., identical acceptor stem determinants encrypting different amino acids in different species, as well as instances of ambiguity, i.e., identical acceptor stem determinants encrypting two or more amino acids in the same species. When partitioning the data by class of synthetase, the degree of code ambiguity was significantly reduced. In cryptographic terms, a plausible interpretation of this result is that the class distinction in synthetases is an essential part of the decryption rules for resolving the subset of RNA code ambiguities enciphered by identical acceptor stem determinants of tRNAs acylated by enzymes belonging to the two classes. In evolutionary terms, our findings lend support to the notion that in the pre-DNA world, interactions between tRNA acceptor stems and synthetases formed the basis for the distinction between the two classes; hence, ambiguities in the ancient RNA code were pivotal for the fixation of these enzymes in the genomes of ancestral prokaryotes. PMID:19952117

  5. Performance Study of Monte Carlo Codes on Xeon Phi Coprocessors — Testing MCNP 6.1 and Profiling ARCHER Geometry Module on the FS7ONNi Problem

    NASA Astrophysics Data System (ADS)

    Liu, Tianyu; Wolfe, Noah; Lin, Hui; Zieb, Kris; Ji, Wei; Caracappa, Peter; Carothers, Christopher; Xu, X. George

    2017-09-01

    This paper contains two parts revolving around Monte Carlo transport simulation on Intel Many Integrated Core coprocessors (MIC, also known as Xeon Phi). (1) MCNP 6.1 was recompiled into multithreading (OpenMP) and multiprocessing (MPI) forms respectively without modification to the source code. The new codes were tested on a 60-core 5110P MIC. The test case was FS7ONNi, a radiation shielding problem used in MCNP's verification and validation suite. It was observed that both codes became slower on the MIC than on a 6-core X5650 CPU, by a factor of 4 for the MPI code and, abnormally, 20 for the OpenMP code, and both exhibited limited capability of strong scaling. (2) We have recently added a Constructive Solid Geometry (CSG) module to our ARCHER code to provide better support for geometry modelling in radiation shielding simulation. The functions of this module are frequently called in the particle random walk process. To identify the performance bottleneck we developed a CSG proxy application and profiled the code using the geometry data from FS7ONNi. The profiling data showed that the code was primarily memory latency bound on the MIC. This study suggests that despite low initial porting effort, Monte Carlo codes do not naturally lend themselves to the MIC platform, just as with GPUs, and that the memory latency problem needs to be addressed in order to achieve decent performance gain.
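
    For context (a generic strong-scaling estimate, with an assumed serial fraction and per-core speed ratio rather than measured values), Amdahl's law illustrates why many slow, latency-sensitive MIC cores can trail a handful of fast CPU cores:

        # Minimal Amdahl's-law sketch; the serial (or latency-bound) fraction and the
        # per-core speed ratio are illustrative assumptions, not values from the paper.
        def amdahl_speedup(serial_fraction: float, cores: int) -> float:
            """Parallel speedup relative to one core of the same machine."""
            return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

        serial_fraction = 0.05     # assumed non-parallelisable share of the work
        mic_core_vs_xeon = 0.25    # assumed speed of one MIC core relative to one Xeon core

        xeon = amdahl_speedup(serial_fraction, 6)                     # 6-core CPU
        mic = amdahl_speedup(serial_fraction, 60) * mic_core_vs_xeon  # 60-core MIC, slower cores
        print(f"CPU throughput (Xeon-core units): {xeon:.1f}")
        print(f"MIC throughput (Xeon-core units): {mic:.1f}")
        # Under these assumptions the 60-core MIC lands below the 6-core CPU, the same
        # direction as the slowdown reported for the latency-bound Monte Carlo kernels.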

  6. Gain-Compensating Circuit For NDE and Ultrasonics

    NASA Technical Reports Server (NTRS)

    Kushnick, Peter W.

    1987-01-01

    High-frequency gain-compensating circuit designed for general use in nondestructive evaluation and ultrasonic measurements. Controls gain of ultrasonic receiver as function of time to aid in measuring attenuation of samples with high losses; for example, human skin and graphite/epoxy composites. Features high signal-to-noise ratio, large signal bandwidth and large dynamic range. Control bandwidth of 5 MHz ensures accuracy of control signal. Currently being used for retrieval of more information from ultrasonic signals sent through composite materials that have high losses, and to measure skin-burn depth in humans.

  7. One-way quantum repeaters with quantum Reed-Solomon codes

    NASA Astrophysics Data System (ADS)

    Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang

    2018-05-01

    We show that quantum Reed-Solomon codes constructed from classical Reed-Solomon codes can approach the capacity of the quantum erasure channel of d-level systems for large dimension d. We study the performance of one-way quantum repeaters with these codes and obtain a significant improvement in key generation rate compared to previously investigated encoding schemes with quantum parity codes and quantum polynomial codes. We also compare the three generations of quantum repeaters using quantum Reed-Solomon codes and identify parameter regimes where each generation performs the best.
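
    For reference (a standard capacity result; the d-level form is stated here as the natural generalisation of the qubit erasure channel, not quoted from the paper), the quantum capacity of the erasure channel with erasure probability p on d-level systems is

        Q(p) = (1 - 2p)\,\log_2 d \quad \text{qubits per channel use}, \qquad 0 \le p \le 1/2,

    so code families whose rate approaches this bound for large d, as the quantum Reed-Solomon constructions above are reported to do, leave little room for further improvement on that channel.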

  8. Multiple Codes, Multiple Impressions: An Analysis of Doctor-Client Encounters in Nigeria

    ERIC Educational Resources Information Center

    Odebunmi, Akin

    2013-01-01

    Existing studies on doctor-client interactions have largely focused on monolingual encounters and the interactional effects and functions of the languages used in the communication between doctors and their clients. They have neither, to a large extent, examined the several codes employed in single encounters and their pragmatic roles nor given…

  9. Progress Towards Highly Efficient Windows for Zero-Energy Buildings

    NASA Astrophysics Data System (ADS)

    Selkowitz, Stephen

    2008-09-01

    Energy efficient windows could save 4 quads/year, with an additional 1 quad/year gain from daylighting in commercial buildings. This corresponds to 13% of energy used by US buildings and 5% of all energy used by the US. The technical potential is thus very large and the economic potential is slowly becoming a reality. This paper describes the progress in energy efficient windows that employ low-emissivity glazing, electrochromic switchable coatings and other novel materials. Dynamic systems are being developed that use sensors and controls to modulate daylighting and shading contributions in response to occupancy, comfort and energy needs. Improving the energy performance of windows involves physics in a variety of applications: optics, heat transfer, materials science and applied engineering. Technical solutions must also be compatible with national policy, codes and standards, economics, business practice and investment, real and perceived risks, comfort, health, safety, productivity, amenities, and occupant preference and values. The challenge is to optimize energy performance by understanding and reinforcing the synergetic coupling between these many issues.

  10. Time-dependent multi-dimensional simulation studies of the electron output scheme for high power FELs

    NASA Astrophysics Data System (ADS)

    Hahn, S. J.; Fawley, W. M.; Kim, K. J.; Edighoffer, J. A.

    1994-12-01

    The authors examine the performance of the so-called electron output scheme recently proposed by the Novosibirsk group. In this scheme, the key role of the FEL oscillator is to induce bunching, while an external undulator, called the radiator, then outcouples the bunched electron beam to optical energy via coherent emission. The level of the intracavity power in the oscillator is kept low by employing a transverse optical klystron (TOK) configuration, thus avoiding excessive thermal loading on the cavity mirrors. Time-dependent effects are important in the operation of the electron output scheme because high gain in the TOK oscillator leads to sideband instabilities and chaotic behavior. The authors have carried out an extensive simulation study by using 1D and 2D time-dependent codes and find that proper control of the oscillator cavity detuning and cavity loss results in high output bunching with a narrow spectral bandwidth. Large cavity detuning in the oscillator and tapering of the radiator undulator are necessary for the optimum output power.

  11. Numerical study of phase conjugation in stimulated Brillouin scattering from an optical waveguide

    NASA Astrophysics Data System (ADS)

    Lehmberg, R. H.

    1983-05-01

    Stimulated Brillouin scattering (SBS) in a multimode optical waveguide is examined, and the parameters that affect the wavefront conjugation fidelity are studied. The nonlinear propagation code is briefly described and the calculated quantities are defined. The parameter study in the low reflectivity limit is described, and the effects of pump depletion are considered. The waveguide produced significantly higher fidelities than the focused configuration, in agreement with several experimental studies. The light scattered back through the phase aberrator exhibited a farfield intensity profile closely matching that of the incident beam; however, the nearfield intensity exhibited large and rapid spatial inhomogeneities across the entire aberrator, even for conjugation fidelities as high as 98 percent. In the absence of pump depletion, the fidelity increased with average pump intensity for amplitude gains up to around e to the 10th and then decreased slowly and monotonically with higher intensity. For all cases, pump depletion significantly enhanced the fidelity of the wavefront conjugation by inhibiting the small-scale pulling effect.

  12. Endogenous Bioelectric Signaling Networks: Exploiting Voltage Gradients for Control of Growth and Form.

    PubMed

    Levin, Michael; Pezzulo, Giovanni; Finkelstein, Joshua M

    2017-06-21

    Living systems exhibit remarkable abilities to self-assemble, regenerate, and remodel complex shapes. How cellular networks construct and repair specific anatomical outcomes is an open question at the heart of the next-generation science of bioengineering. Developmental bioelectricity is an exciting emerging discipline that exploits endogenous bioelectric signaling among many cell types to regulate pattern formation. We provide a brief overview of this field, review recent data in which bioelectricity is used to control patterning in a range of model systems, and describe the molecular tools being used to probe the role of bioelectrics in the dynamic control of complex anatomy. We suggest that quantitative strategies recently developed to infer semantic content and information processing from ionic activity in the brain might provide important clues to cracking the bioelectric code. Gaining control of the mechanisms by which large-scale shape is regulated in vivo will drive transformative advances in bioengineering, regenerative medicine, and synthetic morphology, and could be used to therapeutically address birth defects, traumatic injury, and cancer.

  13. Multidisciplinary optimization of controlled space structures with global sensitivity equations

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; James, Benjamin B.; Graves, Philip C.; Woodard, Stanley E.

    1991-01-01

    A new method for the preliminary design of controlled space structures is presented. The method coordinates standard finite element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structures and control systems of a spacecraft. Global sensitivity equations are a key feature of this method. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Fifteen design variables are used to optimize truss member sizes and feedback gain values. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporating the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables. The solution of the demonstration problem is an important step toward a comprehensive preliminary design capability for structures and control systems. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines.
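
    As a sketch of the idea (the generic two-discipline form of the global sensitivity equations, not the specific structures-and-controls system assembled in the paper), total derivatives of coupled discipline outputs Y_1(X, Y_2) and Y_2(X, Y_1) with respect to the design variables X are obtained from purely local partial derivatives by solving

        \begin{bmatrix} I & -\partial Y_1 / \partial Y_2 \\ -\partial Y_2 / \partial Y_1 & I \end{bmatrix}
        \begin{bmatrix} dY_1 / dX \\ dY_2 / dX \end{bmatrix}
        =
        \begin{bmatrix} \partial Y_1 / \partial X \\ \partial Y_2 / \partial X \end{bmatrix}

    Each discipline supplies only its own partials, and the solved coupled sensitivities drive the nonlinear programming step, which is what helps with problems that have many design variables and a high degree of coupling between disciplines.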

  14. An FPGA-based DS-CDMA multiuser demodulator employing adaptive multistage parallel interference cancellation

    NASA Astrophysics Data System (ADS)

    Li, Xinhua; Song, Zhenyu; Zhan, Yongjie; Wu, Qiongzhi

    2009-12-01

    Since the system capacity is severely limited, reducing the multiple access interference (MAI) is necessary in the multiuser direct-sequence code division multiple access (DS-CDMA) system used in the data link between telecommunication terminals. In this paper, after an overview of various multiuser detection schemes, we adopt an adaptive multistage parallel interference cancellation (PIC) structure in the demodulator, based on the least mean square (LMS) algorithm, to eliminate the MAI. Neither a training sequence nor a pilot signal is needed in the proposed scheme, and its implementation complexity can be greatly reduced by an approximate LMS algorithm. The algorithm and its FPGA implementation are then described. Simulation results show that the proposed adaptive PIC can outperform some existing interference cancellation methods in AWGN channels. The hardware setup of the multiuser demodulator is described, and the experimental results confirm the simulated large performance gains over the conventional single-user demodulator.
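
    As an illustration only (a heavily simplified synchronous baseband model; the user count, spreading length, noise level, step size and scalar partial-cancellation weights are assumptions, not the paper's FPGA design), an LMS-adapted multistage PIC can be sketched as follows:

        # Toy sketch of LMS-adapted multistage parallel interference cancellation (PIC)
        # for synchronous BPSK DS-CDMA; all parameters are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(1)
        K, N, n_bits = 4, 16, 2000                      # users, chips per bit, bits per user
        codes = rng.choice([-1.0, 1.0], size=(K, N))    # random spreading sequences
        rho = codes @ codes.T / N                       # normalised code cross-correlations
        b = rng.choice([-1.0, 1.0], size=(K, n_bits))   # transmitted BPSK symbols
        noise = rng.normal(scale=0.3, size=(K, n_bits))

        z_mf = rho @ b + noise                          # matched-filter outputs per user/bit
        b_hat = np.sign(z_mf)                           # stage-0 (conventional) decisions
        print("matched-filter BER:", np.mean(b_hat != b))

        w = np.full(K, 0.5)                             # partial-cancellation weights
        mu = 0.5                                        # LMS step size (assumed)
        for stage in range(3):                          # multistage PIC
            mai = rho @ b_hat - b_hat                   # regenerated MAI (self term removed; diag(rho) == 1)
            z = z_mf - w[:, None] * mai                 # subtract a weighted MAI estimate
            e = b_hat - z                               # decision-directed LMS error
            w -= mu * np.mean(e * mai, axis=1)          # LMS update of each user's weight
            b_hat = np.sign(z)

        print("adaptive PIC BER:  ", np.mean(b_hat != b))

    Each stage regenerates the interference from the previous stage's decisions and subtracts a weighted estimate, with the weights adapted in decision-directed fashion, which is consistent with the training-free operation described in the abstract.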

  15. WIC in Your Neighborhood: New Evidence on the Impacts of Geographic Access to Clinics

    PubMed Central

    Rossin-Slater, Maya

    2013-01-01

    A large body of evidence indicates that conditions in-utero and health at birth matter for individuals’ long-run outcomes, suggesting potential value in programs aimed at pregnant women and young children. This paper uses a novel identification strategy and data from birth and administrative records over 2005–2009 to provide causal estimates of the effects of geographic access to the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC). My empirical approach uses within-ZIP-code variation in WIC clinic presence together with maternal fixed effects, and accounts for the potential endogeneity of mobility, gestational-age bias, and measurement error in gestation. I find that access to WIC increases food benefit take-up, pregnancy weight gain, birth weight, and the probability of breastfeeding initiation at the time of hospital discharge. The estimated effects are strongest for mothers with a high school education or less, who are most likely eligible for WIC services. PMID:24043906

  16. Mothers of Obese Children Use More Direct Imperatives to Restrict Eating.

    PubMed

    Pesch, Megan H; Miller, Alison L; Appugliese, Danielle P; Rosenblum, Katherine L; Lumeng, Julie C

    2018-04-01

    To examine the association of mother and child characteristics with use of direct imperatives to restrict eating. A total of 237 mother-child dyads (mean child age, 70.9 months) participated in a video-recorded, laboratory-standardized eating protocol with 2 large portions of cupcakes. Videos were reliably coded for counts of maternal direct imperatives to restrict children's eating. Anthropometrics were measured. Regression models tested the association of participant characteristics with counts of direct imperatives. Child obese weight status and maternal white non-Hispanic race/ethnicity were associated with greater levels of direct imperatives to restrict eating (p = .0001 and .0004, respectively). Mothers of obese children may be using more direct imperatives to restrict eating so as to achieve behavioral compliance to decrease their child's food intake. Future work should consider the effects direct imperatives have on children's short- and long-term eating behaviors and weight gain trajectories. Copyright © 2017 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  17. Experiments, constitutive modeling and FE simulations of the impact behavior of Molybdenum

    NASA Astrophysics Data System (ADS)

    Kleiser, Geremy; Revil-Baudard, Benoit

    For polycrystalline high-purity molybdenum the feasibility of a Taylor test is questionable because the very large tensile stresses generated at impact would result in disintegration of the specimen. We report an experimental investigation and a new model to account simultaneously for the experimentally observed anisotropy, tension-compression asymmetry and strain-rate sensitivity of this material. To ensure high-fidelity predictions, a fully-implicit algorithm was used for implementing the new model in the FE code ABAQUS. Based on model predictions, the impact velocity range was established for which specimens may be recovered. Taylor impact tests in this range (140-165 m/s) were successfully conducted for specimens taken along the rolling direction (RD), the transverse direction and 45° to the RD. Comparisons between the measured profiles of impact specimens and FE model predictions show excellent agreement. Furthermore, simulations were performed to gain understanding of the dynamic event: the time evolution of the pressure, the extent of plastic deformation, the distribution of plastic strain rates, and the transition to quasi-stable deformation.

  18. 2009 IOM guidelines for gestational weight gain: how well do they predict outcomes across ethnic groups?

    PubMed

    Khanolkar, Amal R; Hanley, Gillian E; Koupil, Ilona; Janssen, Patricia A

    2017-11-13

    To determine whether the Institute of Medicine's (IOM) 2009 guidelines for weight gain during pregnancy are predictive of maternal and infant outcomes in ethnic minority populations. We designed a population-based study using administrative data on 181,948 women who delivered live singleton births in Washington State between 2006 and 2008. We examined risks of gestational hypertension, preeclampsia/eclampsia, cesarean delivery, and extended hospital stay in White, Black, Native-American, East-Asian, Hispanic, South-Asian and Hawaiian/Pacific islander women according to whether they gained more or less weight during pregnancy than recommended by IOM guidelines. We also examined risks of neonatal outcomes including Apgar score <7 at 5 min, admission to NICU, requirement for ventilation, and a diagnosis of small or large for gestational age at birth. Gaining too much weight was associated with increased odds for gestational hypertension (adjusted ORs (aOR) ranging from 1.53 to 2.22), preeclampsia/eclampsia (aOR 1.44-1.81), cesarean delivery (aOR 1.07-1.38) and extended hospital stay (aOR 1.06-1.28) in all ethnic groups. Gaining too little weight was associated with decreased odds for gestational hypertension and delivery by cesarean section in Whites, Blacks and Hispanics. Gaining less weight or more weight than recommended was associated with increased odds for small for gestational age and large for gestational age infants, respectively, in all ethnic groups. Adherence to the 2009 IOM guidelines for weight gain during pregnancy reduces risk for various adverse maternal outcomes in all ethnic groups studied. However, the guidelines were less predictive of infant outcomes with the exception of small and large for gestational age. Abbreviations: GWG: Gestational weight gain; IOM/NRC: Institute of Medicine and National Research Council; NICU: Neonatal intensive care unit; SGA: Small for gestational age; LGA: Large for gestational age; BERD: Birth Events Records Database; CHARS: Comprehensive Hospital Discharge Abstract Reporting System; ICD: International Classification of Disease; LMP: Last menstrual period; OR: Odds ratio.

  19. Some partial-unit-memory convolutional codes

    NASA Technical Reports Server (NTRS)

    Abdel-Ghaffar, K.; Mceliece, R. J.; Solomon, G.

    1991-01-01

    The results of a study on a class of error correcting codes called partial unit memory (PUM) codes are presented. This class of codes, though not entirely new, has until now remained relatively unexplored. The possibility of using the well developed theory of block codes to construct a large family of promising PUM codes is shown. The performance of several specific PUM codes is compared with that of the Voyager standard (2, 1, 6) convolutional code. It was found that these codes can outperform the Voyager code with little or no increase in decoder complexity. This suggests that there may very well be PUM codes that can be used for deep space telemetry that offer both increased performance and decreased implementational complexity over current coding systems.

  20. Technology Infusion of CodeSonar into the Space Network Ground Segment

    NASA Technical Reports Server (NTRS)

    Benson, Markland J.

    2009-01-01

    This slide presentation reviews the applicability of CodeSonar to the Space Network software. CodeSonar is a commercial off the shelf system that analyzes programs written in C, C++ or Ada for defects in the code. Software engineers use CodeSonar results as an input to the existing source code inspection process. The study is focused on large scale software developed using formal processes. The systems studied are mission critical in nature but some use commodity computer systems.
