Sample records for block truncation coding

  1. Adaptive bit plane quadtree-based block truncation coding for image compression

    NASA Astrophysics Data System (ADS)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

    Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low bit rate compression, at the cost of lower decoded image quality, especially for images with rich texture. To solve this problem, this paper proposes a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission. First, the direction of the edge in each block is detected using the Sobel operator. For blocks of minimal size, an adaptive bit plane is used to optimize the BTC, depending on the MSE loss incurred when the block is encoded with absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared with other state-of-the-art BTC variants, making it desirable for real-time image compression applications.
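
    For reference, a minimal Python sketch of the plain AMBTC block codec this record builds on (block size and data layout are illustrative; the paper's quadtree and adaptive bit plane logic are not reproduced):

      import numpy as np

      def ambtc_encode_block(block):
          """AMBTC: keep a 1-bit-per-pixel bitmap plus the means of the
          pixels above and below the block mean."""
          mean = block.mean()
          bitmap = block >= mean
          hi = block[bitmap].mean() if bitmap.any() else mean
          lo = block[~bitmap].mean() if (~bitmap).any() else mean
          return bitmap, lo, hi

      def ambtc_decode_block(bitmap, lo, hi):
          """Rebuild the block from the bitmap and the two levels."""
          return np.where(bitmap, hi, lo)

      block = np.random.randint(0, 256, (4, 4)).astype(float)
      recon = ambtc_decode_block(*ambtc_encode_block(block))
      mse = ((block - recon) ** 2).mean()  # the loss the paper's quadtree test uses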

  2. Codestream-Based Identification of JPEG 2000 Images with Different Coding Parameters

    NASA Astrophysics Data System (ADS)

    Watanabe, Osamu; Fukuhara, Takahiro; Kiya, Hitoshi

    A method of identifying JPEG 2000 images with different coding parameters, such as code-block sizes, quantization-step sizes, and resolution levels, is presented. It does not produce false-negative matches regardless of differences in coding parameters (compression rate, code-block size, and discrete wavelet transform (DWT) resolution levels) or quantization step sizes. This feature is not provided by conventional methods. Moreover, the proposed approach is fast because it uses the number of zero bit-planes, which can be extracted from the JPEG 2000 codestream by parsing the header information alone, without embedded block coding with optimized truncation (EBCOT) decoding. The experimental results confirmed the effectiveness of image identification based on the new method.

  3. FBCOT: a fast block coding option for JPEG 2000

    NASA Astrophysics Data System (ADS)

    Taubman, David; Naman, Aous; Mathew, Reji

    2017-09-01

    Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs, with only modest loss in coding efficiency (typically < 0.5 dB). The algorithm provides only limited quality/SNR scalability, but offers truly reversible transcoding to/from any standard JPEG 2000 block bit-stream. The proposed FAST block coder can be used with EBCOT's post-compression RD-optimization methodology, allowing a target compressed bit-rate to be achieved even at low latencies, leading to the name FBCOT (Fast Block Coding with Optimized Truncation).

  4. A novel data hiding scheme for block truncation coding compressed images using dynamic programming strategy

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Chun; Liu, Yanjun; Nguyen, Son T.

    2015-03-01

    Data hiding is a technique that embeds information into digital cover data. Work on this technique has concentrated on the uncompressed spatial domain; it is considered more challenging to perform data hiding in compressed domains such as vector quantization, JPEG, and block truncation coding (BTC). In this paper, we propose a new data hiding scheme for BTC-compressed images. In the proposed scheme, a dynamic programming strategy is used to search for the optimal bijective mapping function for LSB substitution. Then, according to the optimal solution, each mean value embeds three secret bits, yielding high hiding capacity with low distortion. The experimental results indicate that the proposed scheme obtains both higher hiding capacity and higher hiding efficiency than four existing schemes, while ensuring good visual quality of the stego-image. In addition, the proposed scheme achieves as low a bit rate as the original BTC algorithm.
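
    The record does not reproduce the dynamic-programming mapping itself; the sketch below only shows the underlying operation that the mapping optimizes, plain 3-bit LSB substitution into an 8-bit BTC quantizer value (function names are illustrative):

      def embed_3bits_lsb(value, bits):
          """Replace the 3 LSBs of an 8-bit BTC quantizer value with secret
          bits; the paper optimizes the bit-to-value mapping, not shown here."""
          assert 0 <= value <= 255 and len(bits) == 3
          payload = (bits[0] << 2) | (bits[1] << 1) | bits[2]
          return (value & ~0b111) | payload

      def extract_3bits(stego_value):
          v = stego_value & 0b111
          return [(v >> 2) & 1, (v >> 1) & 1, v & 1]

      stego = embed_3bits_lsb(157, [1, 0, 1])
      assert extract_3bits(stego) == [1, 0, 1]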

  5. Image coding of SAR imagery

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Kwok, R.; Curlander, J. C.

    1987-01-01

    Five coding techniques in the spatial and transform domains have been evaluated for SAR image compression: linear three-point predictor (LTPP), block truncation coding (BTC), microadaptive picture sequencing (MAPS), adaptive discrete cosine transform (ADCT), and adaptive Hadamard transform (AHT). These techniques have been tested with Seasat data. Both LTPP and BTC spatial domain coding techniques provide very good performance at rates of 1-2 bits/pixel. The two transform techniques, ADCT and AHT, demonstrate the capability to compress the SAR imagery to less than 0.5 bits/pixel without visible artifacts. Tradeoffs such as the rate distortion performance, the computational complexity, the algorithm flexibility, and the controllability of compression ratios are also discussed.

  6. Truncation Depth Rule-of-Thumb for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Moision, Bruce

    2009-01-01

    In this innovation, it is shown that a commonly used rule of thumb (that the truncation depth of a convolutional code should be five times the memory length, m, of the code) is accurate only for rate 1/2 codes. In fact, the truncation depth should be 2.5 m/(1 - r), where r is the code rate. The accuracy of this new rule is demonstrated by tabulating the distance properties of a large set of known codes. This new rule was derived by bounding the losses due to truncation as a function of the code rate. With regard to particular codes, a good indicator of the required truncation depth is the path length at which all paths that diverge from a particular path have accumulated the minimum distance of the code. It is shown that the new rule of thumb provides an accurate prediction of this depth for codes of varying rates.
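
    The new rule is simple enough to state as one line of code; a sketch (variable names are ours):

      def truncation_depth(m, r):
          """Rule of thumb from this record: depth = 2.5 * m / (1 - r),
          with m the code memory length and r the code rate."""
          if not 0.0 < r < 1.0:
              raise ValueError("code rate must lie in (0, 1)")
          return 2.5 * m / (1.0 - r)

      # For a rate-1/2 code this reduces to the classical 5*m rule:
      assert truncation_depth(6, 0.5) == 30.0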

  7. The three-dimensional Multi-Block Advanced Grid Generation System (3DMAGGS)

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.; Weilmuenster, Kenneth J.

    1993-01-01

    As the size and complexity of three-dimensional volume grids increase, there is a growing need for fast and efficient 3D volumetric elliptic grid solvers. Present-day solvers are limited by computational speed and do not combine, in one code, capabilities such as interior volume grid clustering control, viscous grid clustering at the wall of a configuration, truncation error limiters, and convergence optimization. A new volume grid generator, 3DMAGGS (Three-Dimensional Multi-Block Advanced Grid Generation System), which is based on the 3DGRAPE code, has evolved to meet these needs. This is a manual for the usage of 3DMAGGS and contains five sections, covering the motivations and usage, a GRIDGEN interface, a grid quality analysis tool, a sample case for verifying correct operation of the code, and a comparison to both 3DGRAPE and GRIDGEN3D. Since it was derived from 3DGRAPE, this technical memorandum should be used in conjunction with the 3DGRAPE manual (NASA TM-102224).

  8. A data compression technique for synthetic aperture radar images

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Minden, G. J.

    1986-01-01

    A data compression technique is developed for synthetic aperture radar (SAR) imagery. The technique is based on an SAR image model and is designed to preserve the local statistics in the image by an adaptive variable rate modification of block truncation coding (BTC). A data rate of approximately 1.6 bits/pixel is achieved with the technique while maintaining image quality and preserving cultural (pointlike) targets. The algorithm requires no large data storage and is computationally simple.

  9. Fixed-Rate Compressed Floating-Point Arrays.

    PubMed

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
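
    A sketch of why the fixed rate buys random access: every block's bits live at a closed-form offset (the 3D indexing below is illustrative, not the library's actual layout):

      def block_bit_offset(index, bits_per_block):
          """With a fixed, user-specified bit budget per block, the location
          of any block in the compressed stream is a closed-form address."""
          return index * bits_per_block

      def block_index_3d(i, j, k, nx, ny, n=4):
          """Linear index of the n*n*n block containing value (i, j, k) in an
          nx-by-ny-by-nz array (nx, ny assumed divisible by n for brevity)."""
          return (i // n) + (nx // n) * ((j // n) + (ny // n) * (k // n))

      offset = block_bit_offset(block_index_3d(10, 20, 30, 64, 64), 512)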

  10. Parallel efficient rate control methods for JPEG 2000

    NASA Astrophysics Data System (ADS)

    Martínez-del-Amor, Miguel Á.; Bruns, Volker; Sparenberg, Heiko

    2017-09-01

    Since the introduction of JPEG 2000, several rate control methods have been proposed. Among them, post-compression rate-distortion optimization (PCRD-Opt) is the most widely used, and the one recommended by the standard. The approach followed by this method is to first compress the entire image, split into code blocks, and subsequently truncate the set of generated bit streams optimally according to the maximum target bit rate constraint. The literature proposes various strategies for estimating ahead of time where a block will get truncated, in order to stop the execution prematurely and save time. However, none of them was defined with a parallel implementation in mind. Today, multi-core and many-core architectures are becoming popular for JPEG 2000 codec implementations. Therefore, in this paper, we analyze how some techniques for efficient rate control can be deployed on GPUs. To this end, the design of our GPU-based codec is extended to allow the process to be stopped at a given point. This extension also harnesses a higher level of parallelism on the GPU, yielding a speedup of up to 40% with 4K test material on a Titan X. In a second step, three selected rate control methods are adapted and implemented in our parallel encoder. A comparison is then carried out and used to select the best candidate for deployment in a GPU encoder, which gave an extra 40% speedup in the situations where it was actually invoked.
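
    A simplified sketch of the PCRD-Opt idea the record starts from: each code block contributes candidate truncation points (rate, distortion), and a Lagrange multiplier found by bisection selects one point per block so the total meets the budget (real encoders also restrict candidates to each block's convex hull):

      def pick_truncation_points(blocks, budget, iters=50):
          """blocks: per-block lists of (rate_bytes, distortion) candidates,
          each assumed to include (0, d_max). Bisect the multiplier lam so
          that minimizing D + lam * R per block meets the total budget."""
          def choose(lam):
              pick = [min(pts, key=lambda p: p[1] + lam * p[0]) for pts in blocks]
              return pick, sum(p[0] for p in pick)
          lo, hi = 0.0, 1e12
          for _ in range(iters):
              lam = 0.5 * (lo + hi)
              if choose(lam)[1] > budget:
                  lo = lam    # over budget: penalize rate harder
              else:
                  hi = lam
          return choose(hi)[0]

      blocks = [[(0, 100.0), (10, 40.0), (25, 10.0)],
                [(0, 80.0), (8, 30.0), (30, 5.0)]]
      print(pick_truncation_points(blocks, budget=40))  # [(25, 10.0), (8, 30.0)]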

  11. Identification of a functionally distinct truncated BDNF mRNA splice variant and protein in Trachemys scripta elegans.

    PubMed

    Ambigapathy, Ganesh; Zheng, Zhaoqing; Li, Wei; Keifer, Joyce

    2013-01-01

    Brain-derived neurotrophic factor (BDNF) has a diverse functional role and complex pattern of gene expression. Alternative splicing of mRNA transcripts leads to further diversity of mRNAs and protein isoforms. Here, we describe the regulation of BDNF mRNA transcripts in an in vitro model of eyeblink classical conditioning and a unique transcript that forms a functionally distinct truncated BDNF protein isoform. Nine different mRNA transcripts from the BDNF gene of the pond turtle Trachemys scripta elegans (tBDNF) are selectively regulated during classical conditioning: exon I mRNA transcripts show no change, exon II transcripts are downregulated, while exon III transcripts are upregulated. One unique transcript that codes from exon II, tBDNF2a, contains a 40 base pair deletion in the protein coding exon that generates a truncated tBDNF protein. The truncated transcript and protein are expressed in the naïve untrained state and are fully repressed during conditioning when full-length mature tBDNF is expressed, thereby having an alternate pattern of expression in conditioning. Truncated BDNF is not restricted to turtles as a truncated mRNA splice variant has been described for the human BDNF gene. Further studies are required to determine the ubiquity of truncated BDNF alternative splice variants across species and the mechanisms of regulation and function of this newly recognized BDNF protein.

  12. Identification of a Functionally Distinct Truncated BDNF mRNA Splice Variant and Protein in Trachemys scripta elegans

    PubMed Central

    Ambigapathy, Ganesh; Zheng, Zhaoqing; Li, Wei; Keifer, Joyce

    2013-01-01

    Brain-derived neurotrophic factor (BDNF) has a diverse functional role and complex pattern of gene expression. Alternative splicing of mRNA transcripts leads to further diversity of mRNAs and protein isoforms. Here, we describe the regulation of BDNF mRNA transcripts in an in vitro model of eyeblink classical conditioning and a unique transcript that forms a functionally distinct truncated BDNF protein isoform. Nine different mRNA transcripts from the BDNF gene of the pond turtle Trachemys scripta elegans (tBDNF) are selectively regulated during classical conditioning: exon I mRNA transcripts show no change, exon II transcripts are downregulated, while exon III transcripts are upregulated. One unique transcript that codes from exon II, tBDNF2a, contains a 40 base pair deletion in the protein coding exon that generates a truncated tBDNF protein. The truncated transcript and protein are expressed in the naïve untrained state and are fully repressed during conditioning when full-length mature tBDNF is expressed, thereby having an alternate pattern of expression in conditioning. Truncated BDNF is not restricted to turtles as a truncated mRNA splice variant has been described for the human BDNF gene. Further studies are required to determine the ubiquity of truncated BDNF alternative splice variants across species and the mechanisms of regulation and function of this newly recognized BDNF protein. PMID:23825634

  13. Distinguishing attack and second-preimage attack on encrypted message authentication codes (EMAC)

    NASA Astrophysics Data System (ADS)

    Ariwibowo, Sigit; Windarta, Susila

    2016-02-01

    In this paper we show that a distinguisher on CBC-MAC can be applied to the Encrypted Message Authentication Code (EMAC) scheme. The EMAC scheme in general is vulnerable to distinguishing and second-preimage attacks. In a distinguishing attack simulation on AES-EMAC using 2^25 message modifications, no collisions were found. In a second-preimage attack simulation on AES-EMAC, no collision was found between the EMAC values of S1 and S2, i.e., no second preimage was found for the messages tested. In a distinguishing attack simulation on truncated AES-EMAC, we found a collision for every message; therefore truncated AES-EMAC cannot be distinguished from a random function. A second-preimage attack was performed successfully on truncated AES-EMAC.

  14. A new DWT/MC/DPCM video compression framework based on EBCOT

    NASA Astrophysics Data System (ADS)

    Mei, L. M.; Wu, H. R.; Tan, D. M.

    2005-07-01

    A novel Discrete Wavelet Transform (DWT)/Motion Compensation (MC)/Differential Pulse Code Modulation (DPCM) video compression framework is proposed in this paper. Although the Discrete Cosine Transform (DCT)/MC/DPCM is the mainstream framework for video coders in industry and international standards, the idea of DWT/MC/DPCM has existed in the literature for more than a decade and its investigation is still ongoing. The contribution of this work is twofold. First, Embedded Block Coding with Optimized Truncation (EBCOT) is used here as the compression engine for both intra- and inter-frame coding, providing a good compression ratio and an embedded rate-distortion (R-D) optimization mechanism. This extends the application of EBCOT from still images to video. Second, the framework offers a good interface for a Perceptual Distortion Measure (PDM) based on the Human Visual System (HVS), in which the Mean Squared Error (MSE) can easily be replaced with the PDM in the R-D optimization. Some preliminary results are reported here and compared with benchmarks such as MPEG-2 and MPEG-4 version 2. The results demonstrate that under the specified conditions the proposed coder outperforms the benchmarks in terms of rate versus distortion.

  15. Avoidance of truncated proteins from unintended ribosome binding sites within heterologous protein coding sequences.

    PubMed

    Whitaker, Weston R; Lee, Hanson; Arkin, Adam P; Dueber, John E

    2015-03-20

    Genetic sequences ported into non-native hosts for synthetic biology applications can gain unexpected properties. In this study, we explored sequences functioning as ribosome binding sites (RBSs) within protein coding DNA sequences (CDSs) that cause internal translation, resulting in truncated proteins. Genome-wide prediction of bacterial RBSs, based on biophysical calculations employed by the RBS calculator, suggests a selection against internal RBSs within CDSs in Escherichia coli, but not those in Saccharomyces cerevisiae. Based on these calculations, silent mutations aimed at removing internal RBSs can effectively reduce truncation products from internal translation. However, a solution for complete elimination of internal translation initiation is not always feasible due to constraints of available coding sequences. Fluorescence assays and Western blot analysis showed that in genes with internal RBSs, increasing the strength of the intended upstream RBS had little influence on the internal translation strength. Another strategy to minimize truncated products from an internal RBS is to increase the relative strength of the upstream RBS with a concomitant reduction in promoter strength to achieve the same protein expression level. Unfortunately, lower transcription levels result in increased noise at the single cell level due to stochasticity in gene expression. At the low expression regimes desired for many synthetic biology applications, this problem becomes particularly pronounced. We found that balancing promoter strengths and upstream RBS strengths to intermediate levels can achieve the target protein concentration while avoiding both excessive noise and truncated protein.

  16. 78 FR 15337 - IRS Truncated Taxpayer Identification Numbers; Hearing Cancellation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-11

    ... Service (IRS), Treasury. ACTION: Cancellation of a notice of public hearing on proposed rulemaking. SUMMARY: This document cancels a public hearing on proposed regulations under the Internal Revenue Code... IRS truncated taxpayer identification number, a TTIN. DATES: The public hearing, originally scheduled...

  17. Multilevel Concatenated Block Modulation Codes for the Frequency Non-selective Rayleigh Fading Channel

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Rhee, Dojun

    1996-01-01

    This paper is concerned with the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel. In the construction of multilevel concatenated modulation codes, block modulation codes are used as the inner codes. Various types of codes (block or convolutional, binary or nonbinary) may serve as the outer codes. In particular, we focus on the special case in which Reed-Solomon (RS) codes are used as the outer codes. For this special case, a systematic algebraic technique for constructing q-level concatenated block modulation codes is proposed. Codes have been constructed for certain specific values of q and compared with single-level concatenated block modulation codes using the same inner codes. A multilevel closest coset decoding scheme for these codes is proposed.

  18. Expression Templates for Truncated Power Series

    NASA Astrophysics Data System (ADS)

    Cary, John R.; Shasharina, Svetlana G.

    1997-05-01

    Truncated power series are used extensively in accelerator transport modeling for rapid tracking and analysis of nonlinearity. Such mathematical objects are naturally represented computationally as objects in C++. This is more intuitive and produces more transparent code through operator overloading. However, the use of C++ objects often comes with a loss of computational speed due, e.g., to the creation of temporaries. We have developed expression templates for a subset of truncated power series operations (http://monet.uwaterloo.ca/blitz/). Such expression templates use the powerful template-processing facility of C++ to combine complicated expressions into series operations that execute more rapidly. We compare computational speeds with existing truncated power series libraries.
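
    A hedged Python analog of the underlying object (operator overloading on a series truncated at a fixed order); it illustrates the arithmetic only, not the C++ expression-template optimization the paper is about:

      class TruncatedSeries:
          """Power series truncated at a fixed order; c[i] multiplies x**i."""
          def __init__(self, coeffs, order):
              self.order = order
              self.c = (list(coeffs) + [0.0] * (order + 1))[:order + 1]

          def __add__(self, other):
              return TruncatedSeries(
                  [a + b for a, b in zip(self.c, other.c)], self.order)

          def __mul__(self, other):
              out = [0.0] * (self.order + 1)
              for i, a in enumerate(self.c):
                  for j, b in enumerate(other.c):
                      if i + j <= self.order:  # drop terms beyond truncation
                          out[i + j] += a * b
              return TruncatedSeries(out, self.order)

      x = TruncatedSeries([0.0, 1.0], order=3)      # the series "x"
      p = (x + TruncatedSeries([1.0], 3)) * x       # (1 + x) * x
      assert p.c == [0.0, 1.0, 1.0, 0.0]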

  19. Systematic sparse matrix error control for linear scaling electronic structure calculations.

    PubMed

    Rubensson, Emanuel H; Sałek, Paweł

    2005-11-30

    Efficient truncation criteria for multiatom blocked sparse matrix operations in ab initio calculations are proposed. As system size increases, so does the need to stay on top of errors while still achieving high performance. A variant of a blocked sparse matrix algebra that achieves strict error control with good performance is proposed. The idea is that the condition for dropping a certain submatrix should depend not only on the magnitude of that particular submatrix, but also on which other submatrices are dropped. The decision to remove a certain submatrix is based on the contribution the removal would make to the error in the chosen norm. We study the effect of an accumulated truncation error in iterative algorithms such as trace-correcting density matrix purification, and present one way to reduce the initial exponential growth of this error. The presented error control for a sparse blocked matrix toolbox achieves optimal performance by performing only the operations needed to maintain the requested level of accuracy.
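
    A minimal sketch of that accumulated-error criterion in the Frobenius norm (our variable names; the paper's algebra is block-sparse and more general): drop the smallest submatrices while the running sum of their squared norms stays below the error budget.

      import numpy as np

      def truncate_blocks(blocks, tau):
          """Drop submatrices so the accumulated Frobenius-norm error stays
          below tau; whether a block is dropped depends on which other
          blocks are dropped, not on a per-block threshold alone."""
          by_norm = sorted((np.linalg.norm(b), key) for key, b in blocks.items())
          dropped, acc = set(), 0.0
          for n, key in by_norm:
              if acc + n * n >= tau ** 2:
                  break
              acc += n * n
              dropped.add(key)
          return {k: b for k, b in blocks.items() if k not in dropped}

      blocks = {(0, 0): np.eye(2), (0, 1): 1e-8 * np.ones((2, 2)),
                (1, 1): 2.0 * np.eye(2)}
      kept = truncate_blocks(blocks, tau=1e-6)  # only the tiny block is dropped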

  20. Targeted mass spectrometric analysis of N-terminally truncated isoforms generated via alternative translation initiation.

    PubMed

    Kobayashi, Ryuji; Patenia, Rebecca; Ashizawa, Satoshi; Vykoukal, Jody

    2009-07-21

    Alternative translation initiation is a mechanism whereby functionally altered proteins are produced from a single mRNA. Internal initiation of translation generates N-terminally truncated protein isoforms, but such isoforms observed in immunoblot analysis are often overlooked or dismissed as degradation products. We identified an N-terminally truncated isoform of human Dok-1 with N-terminal acetylation as seen in the wild-type. This Dok-1 isoform exhibited distinct perinuclear localization whereas the wild-type protein was distributed throughout the cytoplasm. Targeted analysis of blocked N-terminal peptides provides rapid identification of protein isoforms and could be widely applied for the general evaluation of perplexing immunoblot bands.

  1. On-chip frame memory reduction using a high-compression-ratio codec in the overdrives of liquid-crystal displays

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Min, Kyeong-Yuk; Chong, Jong-Wha

    2010-11-01

    Overdrive is commonly used to reduce the liquid-crystal response time and motion blur in liquid-crystal displays (LCDs). However, overdrive requires a large frame memory in order to store the previous frame for reference. In this paper, a high-compression-ratio codec is presented to compress the image data stored in the on-chip frame memory so that only 1 Mbit of on-chip memory is required in the LCD overdrives of mobile devices. The proposed algorithm further compresses the color bitmaps and representative values (RVs) resulting from the block truncation coding (BTC). The color bitmaps are represented by a luminance bitmap, which is further reduced and reconstructed using median filter interpolation in the decoder, while the RVs are compressed using adaptive quantization coding (AQC). Interpolation and AQC can provide three-level compression, which leads to 16 combinations. Using a rate-distortion analysis, we select the three optimal schemes to compress the image data for video graphics array (VGA), wide-VGA LCD, and standard-definition TV applications. Our simulation results demonstrate that the proposed schemes outperform interpolation BTC both in PSNR (by 1.479 to 2.205 dB) and in subjective visual quality.

  2. Developing Chemistry and Kinetic Modeling Tools for Low-Temperature Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas; Beckwith, Kris; Davidson, Bradley; Kruger, Scott; Pankin, Alexei; Roark, Christine; Stoltz, Peter

    2015-09-01

    We discuss the use of proper orthogonal decomposition (POD) methods in VSim, a FDTD plasma simulation code capable of both PIC/MCC and fluid modeling. POD methods efficiently generate smooth representations of noisy self-consistent or test-particle PIC data, and are thus advantageous in computing macroscopic fluid quantities from large PIC datasets (e.g. for particle-based closure computations) and in constructing optimal visual representations of the underlying physics. They may also confer performance advantages for massively parallel simulations, due to the significant reduction in dataset sizes conferred by truncated singular-value decompositions of the PIC data. We also demonstrate how complex LTP chemistry scenarios can be modeled in VSim via an interface with MUNCHKIN, a developing standalone python/C++/SQL code that identifies reaction paths for given input species, solves 1D rate equations for the time-dependent chemical evolution of the system, and generates corresponding VSim input blocks with appropriate cross-sections/reaction rates. MUNCHKIN also computes reaction rates from user-specified distribution functions, and conducts principal path analyses to reduce the number of simulated chemical reactions. Supported by U.S. Department of Energy SBIR program, Award DE-SC0009501.

  3. Rare, protein-truncating variants in ATM, CHEK2 and PALB2, but not XRCC2, are associated with increased breast cancer risks

    PubMed Central

    Decker, Brennan; Allen, Jamie; Luccarini, Craig; Pooley, Karen A; Shah, Mitul; Bolla, Manjeet K; Wang, Qin; Ahmed, Shahana; Baynes, Caroline; Conroy, Don M; Brown, Judith; Luben, Robert; Ostrander, Elaine A; Pharoah, Paul DP; Dunning, Alison M; Easton, Douglas F

    2017-01-01

    Background: Breast cancer (BC) is the most common malignancy in women and has a major heritable component. The risks associated with most rare susceptibility variants are not well estimated. To better characterise the contribution of variants in ATM, CHEK2, PALB2 and XRCC2, we sequenced their coding regions in 13 087 BC cases and 5488 controls from East Anglia, UK. Methods: Gene coding regions were enriched via PCR, sequenced, variant called and filtered for quality. ORs for BC risk were estimated separately for carriers of truncating variants and of rare missense variants, which were further subdivided by functional domain and pathogenicity as predicted by four in silico algorithms. Results: Truncating variants in PALB2 (OR=4.69, 95% CI 2.27 to 9.68), ATM (OR=3.26; 95% CI 1.82 to 6.46) and CHEK2 (OR=3.11; 95% CI 2.15 to 4.69), but not XRCC2 (OR=0.94; 95% CI 0.26 to 4.19), were associated with increased BC risk. Truncating variants in ATM and CHEK2 were more strongly associated with risk of oestrogen receptor (ER)-positive than ER-negative disease, while those in PALB2 were associated with similar risks for both subtypes. There was also some evidence that missense variants in ATM, CHEK2 and PALB2 may contribute to BC risk, but larger studies are necessary to quantify the magnitude of this effect. Conclusions: Truncating variants in PALB2 are associated with a higher risk of BC than those in ATM or CHEK2. A substantial risk of BC due to truncating XRCC2 variants can be excluded. PMID:28779002

  4. Multi-level bandwidth efficient block modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1989-01-01

    The multilevel technique is investigated for combining block coding and modulation. There are four parts. In the first part, a formulation is presented for signal sets on which modulation codes are to be constructed. Distance measures on a signal set are defined and their properties are developed. In the second part, a general formulation is presented for multilevel modulation codes in terms of component codes with appropriate Euclidean distances. The distance properties, Euclidean weight distribution and linear structure of multilevel modulation codes are investigated. In the third part, several specific methods for constructing multilevel block modulation codes with interdependency among component codes are proposed. Given a multilevel block modulation code C with no interdependency among the binary component codes, the proposed methods give a multilevel block modulation code C' that has the same rate as C, a minimum squared Euclidean distance not less than that of C, a trellis diagram with the same number of states as that of C, and a smaller number of nearest neighbor codewords than C. In the last part, the error performance of block modulation codes is analyzed for an AWGN channel based on soft-decision maximum likelihood decoding. Error probabilities of some specific codes are evaluated based on their Euclidean weight distributions and simulation results.

  5. Exact first order scattering correction for vector radiative transfer in coupled atmosphere and ocean systems

    NASA Astrophysics Data System (ADS)

    Zhai, Peng-Wang; Hu, Yongxiang; Josset, Damien B.; Trepte, Charles R.; Lucker, Patricia L.; Lin, Bing

    2012-06-01

    We have developed a Vector Radiative Transfer (VRT) code for coupled atmosphere and ocean systems based on the successive order of scattering (SOS) method. In order to achieve efficiency and maintain accuracy, the scattering matrix is expanded in terms of the Wigner d functions, and the delta-fit or delta-M technique is used to truncate the commonly present large forward scattering peak. To further improve the accuracy of the SOS code, we have implemented an analytical first order scattering treatment using the exact scattering matrix of the medium. The expansion and truncation techniques are kept for higher order scattering. The exact first order scattering correction was originally published by Nakajima and Tanaka [1]. A new contribution of this work is to account for the exact secondary light scattering caused by light reflected by and transmitted through the rough air-sea interface.

  6. Soft-decision decoding techniques for linear block codes and their error performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1996-01-01

    The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC); the bound reveals differences in performance between different decompositions of some codes. The third paper investigates the bit error probability of maximum likelihood decoding of binary linear codes. The fourth and final paper in this report concerns the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.

  7. Tailor-made dimensions of diblock copolymer truncated micelles on a solid by UV irradiation.

    PubMed

    Liou, Jiun-You; Sun, Ya-Sen

    2015-09-28

    We investigated the structural evolution of truncated micelles in ultrathin films of polystyrene-block-poly(2-vinylpyridine), PS-b-P2VP, of monolayer thickness on bare silicon substrates (SiOx/Si) upon UV irradiation in air- (UVIA) and nitrogen-rich (UVIN) environments. The structural evolution of micelles upon UV irradiation was monitored using GISAXS measurements in situ, while the surface morphology was probed using atomic force microscopy ex situ and the chemical composition using X-ray photoelectron spectroscopy (XPS). This work provides clear evidence for the interpretation of the relationship between the structural evolution and photochemical reactions in PS-b-P2VP truncated micelles upon UVIA and UVIN. Under UVIA treatment, photolysis and cross-linking reactions coexisted within the micelles; photolysis occurred mainly at the top of the micelles, whereas cross-linking occurred preferentially at the bottom. The shape and size of UVIA-treated truncated micelles were controlled predominantly by oxidative photolysis reactions, which depended on the concentration gradient of free radicals and oxygen along the micelle height. Because of an interplay between photolysis and photo-crosslinking, the scattering length densities (SLD) of PS and P2VP remained constant. In contrast, UVIN treatments enhanced the contrast in SLD between the PS shell and the P2VP core as cross-linking dominated over photolysis in the presence of nitrogen. The enhancement of the SLD contrast was due to the various degrees of cross-linking under UVIN for the PS and P2VP blocks.

  8. Timing of the Cenozoic basins of Southern Mexico and its relationship with the Pacific truncation process: Subduction erosion or detachment of the Chortís block

    NASA Astrophysics Data System (ADS)

    Silva-Romo, Gilberto; Mendoza-Rosales, Claudia Cristina; Campos-Madrigal, Emiliano; Hernández-Marmolejo, Yoalli Bianii; de la Rosa-Mora, Orestes Antonio; de la Torre-González, Alam Israel; Bonifacio-Serralde, Carlos; López-García, Nallely; Nápoles-Valenzuela, Juan Ivan

    2018-04-01

    In the central sector of the Sierra Madre del Sur in Southern Mexico, between approximately 36 and 16 Ma ago, a diachronic, west-to-east process of formation of ~north-south-trending fault-bounded basins occurred. No tectono-sedimentary event is recognized in the study region in the period between 25 and 20 Ma, a period during which subduction erosion has been proposed to have truncated the continental crust of Southern Mexico. The chronology, geometry and style of formation of the Eocene-Miocene fault-bounded basins are more congruent with crustal truncation by detachment of the Chortís block, thus calling the subduction-erosion truncation hypothesis for the Southern Mexico margin into question. Between Taxco and Tehuacán, using seven new laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) U-Pb ages of magmatic zircons, we refine the stratigraphy of the Tepenene, Tehuitzingo, Atzumba and Tepelmeme basins. The analyzed basins show similar tectono-sedimentary evolutions: Stage 1, depocenter formation and filling by clastic rocks accumulated as alluvial fans; and Stage 2, lacustrine sedimentation characterized by calcareous and/or evaporite beds. Based on our results, we propose the following hypothesis: in Southern Mexico, during Eocene-Miocene times, fault-bounded basins with a general north-south trend formed diachronically within the framework of the convergence between the North and South America plates, and were left behind in the continental crust once the Chortís block had slipped towards the east. The onset of basin formation along left strike-slip faults during Eocene-Oligocene times can be associated with the thermomechanical maturation of the crust, which caused the brittle/ductile transition level in the continental crust to shallow.

  9. Modeling and Simulation of a Non-Coherent Frequency Shift Keying Transceiver Using a Field Programmable Gate Array (FPGA)

    DTIC Science & Technology

    2008-09-01

    Convolutional codes are most commonly used along with block codes. They were introduced in 1955 by Elias [7]. Convolutional codes are characterized by the code rate r = k/n and the constraint length. In this work, a convolutional code with rate r = 1/2 and constraint length κ = 3, namely [7 5], is used.
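
    A sketch of the encoder that excerpt describes, under the stated parameters (rate 1/2, constraint length 3, octal generators 7 and 5):

      def conv_encode(bits, gens=(0b111, 0b101), k=3):
          """Rate-1/2 convolutional encoder, constraint length k = 3,
          generators 7 and 5 (octal): one output bit per generator."""
          state, out = 0, []
          for b in bits:
              state = ((state << 1) | b) & ((1 << k) - 1)   # shift register
              for g in gens:
                  out.append(bin(state & g).count("1") % 2)  # parity of taps
          return out

      assert conv_encode([1, 0, 1, 1]) == [1, 1, 1, 0, 0, 0, 0, 1]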

  10. A simplification of the fractional Hartley transform applied to image security system in phase

    NASA Astrophysics Data System (ADS)

    Jimenez, Carlos J.; Vilardy, Juan M.; Perez, Ronal

    2017-01-01

    In this work we develop a new encryption system for images encoded in phase, using the fractional Hartley transform (FrHT), truncation operations and random phase masks (RPMs). We introduce a simplification of the FrHT in order to compute this transform efficiently and quickly. The security of the encryption system is increased by the use of nonlinear operations, namely phase encoding and the truncation operations. The image to encrypt (the original image) is encoded in phase, and the truncation operations applied in the encryption-decryption system are amplitude and phase truncation. The encrypted image is protected by six keys: the two fractional orders of the FrHTs, the two RPMs and the two pseudorandom code images generated by the amplitude and phase truncation operations. All of these keys have to be correct for proper recovery of the original image in the decryption system. We present digital results that confirm our approach.
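
    The amplitude- and phase-truncation operators named here have standard definitions in this literature; a minimal numpy sketch (not the authors' full FrHT pipeline):

      import numpy as np

      def phase_truncation(u):
          """Discard the phase, keep the amplitude: PT(u) = |u|."""
          return np.abs(u)

      def amplitude_truncation(u):
          """Discard the amplitude, keep the unit-modulus phase: AT(u) = u/|u|."""
          return np.exp(1j * np.angle(u))

      u = 3.0 * np.exp(1j * 0.7)
      assert np.isclose(phase_truncation(u), 3.0)
      assert np.isclose(amplitude_truncation(u), np.exp(1j * 0.7))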

  11. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; An Iterative Decoding Algorithm for Linear Block Codes Based on a Low-Weight Trellis Search

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    For long linear block codes, maximum likelihood decoding based on full code trellises would be very hard to implement, if not impossible. In this case, we may wish to trade error performance for a reduction in decoding complexity. Sub-optimum soft-decision decoding of a linear block code based on a low-weight sub-trellis can be devised to provide an effective trade-off between error performance and decoding complexity. This chapter presents such a suboptimal decoding algorithm for linear block codes. The decoding algorithm is iterative in nature and based on an optimality test. It has the following important features: (1) a simple method to generate a sequence of candidate codewords, one at a time, for testing; (2) a sufficient condition for testing a candidate codeword for optimality; and (3) a low-weight sub-trellis search for finding the most likely (ML) codeword.

  12. Selective object encryption for privacy protection

    NASA Astrophysics Data System (ADS)

    Zhou, Yicong; Panetta, Karen; Cherukuri, Ravindranath; Agaian, Sos

    2009-05-01

    This paper introduces a new recursive sequence called the truncated P-Fibonacci sequence, its corresponding binary code called the truncated Fibonacci p-code, and a new bit-plane decomposition method using the truncated Fibonacci p-code. In addition, a new lossless image encryption algorithm is presented that can encrypt a selected object using this new decomposition method for privacy protection. The user has the flexibility (1) to define the object to be protected as an object in an image or in a specific part of the image, a selected region of an image, or an entire image, (2) to utilize any new or existing method for edge detection or segmentation to extract the selected object from an image or a specific part/region of the image, and (3) to select any new or existing method for the shuffling process. The algorithm can be used in many different areas such as wireless networking, mobile phone services and applications in homeland security and medical imaging. Simulation results and analysis verify that the algorithm shows good performance in object/image encryption and can withstand plaintext attacks.
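
    The abstract does not define the truncated sequence itself; as background, a sketch of ordinary Fibonacci p-numbers and the greedy decomposition that underlies Fibonacci p-code bit planes (assumptions: p = 1 and greedy digits, per the general p-code literature, not this paper's truncated variant):

      def fib_p(p, count):
          """Fibonacci p-numbers: F(n) = F(n-1) + F(n-p-1), seeded with ones;
          p = 1 gives 1, 1, 2, 3, 5, 8, ..."""
          seq = [1] * (p + 1)
          while len(seq) < count:
              seq.append(seq[-1] + seq[-p - 1])
          return seq

      def p_code(value, basis):
          """Greedy 0/1 decomposition of value over basis, largest term first;
          digit i feeds bit plane i of the decomposition."""
          digits = []
          for term in reversed(basis):
              digits.append(1 if term <= value else 0)
              value -= term if digits[-1] else 0
          return digits

      basis = fib_p(1, 12)
      assert sum(d * t for d, t in zip(p_code(200, basis), reversed(basis))) == 200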

  13. Block-based scalable wavelet image codec

    NASA Astrophysics Data System (ADS)

    Bao, Yiliang; Kuo, C.-C. Jay

    1999-10-01

    This paper presents a high performance block-based wavelet image coder which is designed to be of very low implementational complexity yet rich in features. In this image coder, the Dual-Sliding Wavelet Transform (DSWT) is first applied to the image data to generate wavelet coefficients in fixed-size blocks. Here, a block consists only of wavelet coefficients from a single subband. The coefficient blocks are directly coded with the Low Complexity Binary Description (LCBiD) coefficient coding algorithm. Each block is encoded using binary context-based bitplane coding. No parent-child correlation is exploited in the coding process. There is also no intermediate buffering needed between DSWT and LCBiD. The compressed bit stream generated by the proposed coder is both SNR and resolution scalable, as well as highly resilient to transmission errors. Both DSWT and LCBiD process the data in blocks whose size is independent of the size of the original image, which gives more flexibility in the implementation. The codec has very good coding performance even when the block size is 16x16.

  14. A truncated, activin-induced Smad3 isoform acts as a transcriptional repressor of FSHβ expression in mouse pituitary

    PubMed Central

    Kim, So-Youn; Zhu, Jie; Woodruff, Teresa K.

    2011-01-01

    The receptor-regulated protein Smad3 is a key player in the signaling cascade stimulated by the binding of activin to its cell surface receptor. Upon phosphorylation, Smad3 forms a heterocomplex with Smad2 and Smad4, translocates to the nucleus and acts as a transcriptional co-activator. We have identified a unique isoform of Smad3 that is expressed in mature pituitary gonadotropes. 5' RACE revealed that this truncated Smad3 isoform is transcribed from an ATG site within exon 4 and consists of 7 exons encoding half of the linker region and the MH2 region. In pituitary cells, the truncated Smad3 isoform was phosphorylated upon activin treatment, in a manner that was temporally distinct from the phosphorylation of full-length Smad3. Activin-induced phosphorylation of Smad3 and the truncated Smad3 isoform was blocked by both follistatin and siRNA-mediated knockdown of Smad3. The truncated Smad3 isoform antagonized Smad3-mediated, activin-responsive promoter activity. We propose that the pituitary gonadotrope contains an ultra-short, activin-responsive feedback loop utilizing two different isoforms of Smad3, one which acts as an agonist (Smad3) and another that acts as an intracrine antagonist (the truncated Smad3 isoform) to regulate FSHβ production. PMID:21664424

  15. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1998-01-01

    A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. Research on trellis representations of block codes, by contrast, long remained inactive, for two major reasons. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes, and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible except for very short block codes. Second, since almost all linear block codes are constructed algebraically or based on finite geometries, many coding theorists believed that algebraic decoding was the only way to decode these codes. These two views seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their application to error control in digital communications, and led to a general belief that block codes are inferior to convolutional codes and hence not useful. Chapter 2 gives a brief review of linear block codes, providing the essential background for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well known methods for constructing long powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises, including the Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination and tail-biting. Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computational complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder; it then presents a new decoding algorithm for convolutional codes, named the Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword. This decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder; these are the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA). Finally, the minimization of bit error probability in trellis-based MLD is discussed.

  16. Using Wavelet Bases to Separate Scales in Quantum Field Theory

    NASA Astrophysics Data System (ADS)

    Michlin, Tracie L.

    This thesis investigates the use of Daubechies wavelets to separate scales in local quantum field theory. Field theories have an infinite number of degrees of freedom on all distance scales. Quantum field theories are believed to describe the physics of subatomic particles, yet these theories have no known mathematically convergent approximation methods. Daubechies wavelet bases can be used to separate degrees of freedom on different distance scales. Volume and resolution truncations lead to mathematically well-defined truncated theories that can be treated using established methods. This work demonstrates that flow equation methods can be used to block-diagonalize truncated field-theoretic Hamiltonians by scale, eliminating the fine-scale degrees of freedom. This may lead to approximation methods and provide an understanding of how to formulate well-defined fine resolution limits.

  17. Loss of Topoisomerase I leads to R-loop-mediated transcriptional blocks during ribosomal RNA synthesis

    PubMed Central

    El Hage, Aziz; French, Sarah L.; Beyer, Ann L.; Tollervey, David

    2010-01-01

    Pre-rRNA transcription by RNA Polymerase I (Pol I) is very robust on active rDNA repeats. Loss of yeast Topoisomerase I (Top1) generated truncated pre-rRNA fragments, which were stabilized in strains lacking TRAMP (Trf4/Trf5–Air1/Air2–Mtr4 polyadenylation complexes) or exosome degradation activities. Loss of both Top1 and Top2 blocked pre-rRNA synthesis, with pre-rRNAs truncated predominately in the 18S 5′ region. Positive supercoils in front of Pol I are predicted to slow elongation, while rDNA opening in its wake might cause R-loop formation. Chromatin immunoprecipitation analysis showed substantial levels of RNA/DNA hybrids in the wild type, particularly over the 18S 5′ region. The absence of RNase H1 and H2 in cells depleted of Top1 increased the accumulation of RNA/DNA hybrids and reduced pre-rRNA truncation and pre-rRNA synthesis. Hybrid accumulation over the rDNA was greatly exacerbated when Top1, Top2, and RNase H were all absent. Electron microscopy (EM) analysis revealed Pol I pileups in the wild type, particularly over the 18S. Pileups were longer and more frequent in the absence of Top1, and their frequency was exacerbated when RNase H activity was also lacking. We conclude that the loss of Top1 enhances inherent R-loop formation, particularly over the 5′ region of the rDNA, imposing persistent transcription blocks when RNase H is limiting. PMID:20634320

  18. Dynamic code block size for JPEG 2000

    NASA Astrophysics Data System (ADS)

    Tsai, Ping-Sing; LeCornec, Yann

    2008-02-01

    Since the standardization of the JPEG 2000, it has found its way into many different applications such as DICOM (digital imaging and communication in medicine), satellite photography, military surveillance, digital cinema initiative, professional video cameras, and so on. The unified framework of the JPEG 2000 architecture makes practical high quality real-time compression possible even in video mode, i.e. motion JPEG 2000. In this paper, we present a study of the compression impact using dynamic code block size instead of fixed code block size as specified in the JPEG 2000 standard. The simulation results show that there is no significant impact on compression if dynamic code block sizes are used. In this study, we also unveil the advantages of using dynamic code block sizes.

  19. Rate-Compatible LDPC Codes with Linear Minimum Distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel

    2009-01-01

    A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either fixed input block size or fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first submethod is useful for applications that require rate-compatible codes with fixed input block sizes; these are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-block-size submethod is useful for applications in which framing constraints are imposed on the physical layers of the affected communication systems; an example is a system conforming to one of the many new wireless-communication standards that involve orthogonal frequency-division modulation.

  20. Optical asymmetric cryptography based on elliptical polarized light linear truncation and a numerical reconstruction technique.

    PubMed

    Lin, Chao; Shen, Xueju; Wang, Zhisong; Zhao, Cheng

    2014-06-20

    We demonstrate a novel optical asymmetric cryptosystem based on the principle of elliptical polarized light linear truncation and a numerical reconstruction technique. The device of an array of linear polarizers is introduced to achieve linear truncation on the spatially resolved elliptical polarization distribution during image encryption. This encoding process can be characterized as confusion-based optical cryptography that involves no Fourier lens and diffusion operation. Based on the Jones matrix formalism, the intensity transmittance for this truncation is deduced to perform elliptical polarized light reconstruction based on two intensity measurements. Use of a quick response code makes the proposed cryptosystem practical, with versatile key sensitivity and fault tolerance. Both simulation and preliminary experimental results that support theoretical analysis are presented. An analysis of the resistance of the proposed method on a known public key attack is also provided.

  1. Fusion of Deep Learning and Compressed Domain features for Content Based Image Retrieval.

    PubMed

    Liu, Peizhong; Guo, Jing-Ming; Wu, Chi-Yi; Cai, Danlin

    2017-08-29

    This paper presents an effective image retrieval method that combines high-level features from a Convolutional Neural Network (CNN) model and low-level features from Dot-Diffused Block Truncation Coding (DDBTC). The low-level features, e.g., texture and color, are constructed as VQ-indexed histograms from the DDBTC bitmap and the maximum and minimum quantizers. In contrast, the high-level CNN features effectively capture human perception. By fusing the DDBTC and CNN features, the extended deep-learning two-layer codebook features (DL-TLCF) are generated using the proposed two-layer codebook, dimension reduction, and similarity reweighting to improve the overall retrieval rate. Two metrics, average precision rate (APR) and average recall rate (ARR), are employed to examine various datasets. As documented in the experimental results, the proposed schemes achieve performance superior to state-of-the-art methods with either low- or high-level features in terms of retrieval rate, making the approach a strong candidate for various image retrieval applications.

  2. Rare, protein-truncating variants in ATM, CHEK2 and PALB2, but not XRCC2, are associated with increased breast cancer risks.

    PubMed

    Decker, Brennan; Allen, Jamie; Luccarini, Craig; Pooley, Karen A; Shah, Mitul; Bolla, Manjeet K; Wang, Qin; Ahmed, Shahana; Baynes, Caroline; Conroy, Don M; Brown, Judith; Luben, Robert; Ostrander, Elaine A; Pharoah, Paul DP; Dunning, Alison M; Easton, Douglas F

    2017-11-01

    Breast cancer (BC) is the most common malignancy in women and has a major heritable component. The risks associated with most rare susceptibility variants are not well estimated. To better characterise the contribution of variants in ATM, CHEK2, PALB2 and XRCC2, we sequenced their coding regions in 13 087 BC cases and 5488 controls from East Anglia, UK. Gene coding regions were enriched via PCR, sequenced, variant called and filtered for quality. ORs for BC risk were estimated separately for carriers of truncating variants and of rare missense variants, which were further subdivided by functional domain and pathogenicity as predicted by four in silico algorithms. Truncating variants in PALB2 (OR=4.69, 95% CI 2.27 to 9.68), ATM (OR=3.26; 95% CI 1.82 to 6.46) and CHEK2 (OR=3.11; 95% CI 2.15 to 4.69), but not XRCC2 (OR=0.94; 95% CI 0.26 to 4.19), were associated with increased BC risk. Truncating variants in ATM and CHEK2 were more strongly associated with risk of oestrogen receptor (ER)-positive than ER-negative disease, while those in PALB2 were associated with similar risks for both subtypes. There was also some evidence that missense variants in ATM, CHEK2 and PALB2 may contribute to BC risk, but larger studies are necessary to quantify the magnitude of this effect. Truncating variants in PALB2 are associated with a higher risk of BC than those in ATM or CHEK2. A substantial risk of BC due to truncating XRCC2 variants can be excluded.

  3. Neutropenia-associated ELANE mutations disrupting translation initiation produce novel neutrophil elastase isoforms

    PubMed Central

    Tidwell, Timothy; Wechsler, Jeremy; Nayak, Ramesh C.; Trump, Lisa; Salipante, Stephen J.; Cheng, Jerry C.; Donadieu, Jean; Glaubach, Taly; Corey, Seth J.; Grimes, H. Leighton; Lutzko, Carolyn; Cancelas, Jose A.

    2014-01-01

    Hereditary neutropenia is usually caused by heterozygous germline mutations in the ELANE gene encoding neutrophil elastase (NE). How mutations cause disease remains uncertain, but two hypotheses have been proposed. In one, ELANE mutations lead to mislocalization of NE. In the other, ELANE mutations disturb protein folding, inducing an unfolded protein response in the endoplasmic reticulum (ER). In this study, we describe new types of mutations that disrupt the translational start site. At first glance, they should block translation and are thus incompatible with either the mislocalization or misfolding hypothesis, both of which require mutant protein for pathogenicity. We find that start-site mutations, instead, force translation from downstream in-frame initiation codons, yielding amino-terminally truncated isoforms that lack the ER-localizing (pre) and zymogen-maintaining (pro) sequences yet retain essential catalytic residues. Patient-derived induced pluripotent stem cells recapitulate the hematopoietic and molecular phenotypes. Expression of the amino-terminally deleted isoforms in vitro reduces myeloid cell clonogenic capacity. We define an internal ribosome entry site (IRES) within ELANE and demonstrate that adjacent mutations modulate IRES activity, independently of protein-coding sequence alterations. Some ELANE mutations, therefore, appear to cause neutropenia via the production of amino-terminally deleted NE isoforms rather than by altering the coding sequence of the full-length protein. PMID:24184683

  4. Gene inactivation in the plant pathogen Glomerella cingulata: three strategies for the disruption of the pectin lyase gene pnlA.

    PubMed

    Bowen, J K; Templeton, M D; Sharrock, K R; Crowhurst, R N; Rikkerink, E H

    1995-01-20

    The feasibility of performing routine transformation-mediated mutagenesis in Glomerella cingulata was analysed by adopting three one-step gene disruption strategies targeted at the pectin lyase gene pnlA. The efficiencies of disruption following transformation with gene replacement- or gene truncation-disruption vectors were compared. To effect replacement-disruption, G. cingulata was transformed with a vector carrying DNA from the pnlA locus in which the majority of the coding sequence had been replaced by the gene for hygromycin B resistance. Two of the five transformants investigated contained an inactivated pnlA gene (pnlA-); both also contained ectopically integrated vector sequences. The efficacy of gene disruption by transformation with two gene truncation-disruption vectors was also assessed. Both vectors carried a 5'- and 3'-truncated copy of the pnlA coding sequence, adjacent to the gene for hygromycin B resistance. The promoter sequences controlling the selectable marker differed in the two vectors. In one vector the homologous G. cingulata gpdA promoter controlled hygromycin B phosphotransferase expression (homologous truncation vector), whereas in the second vector the promoter elements were from the Aspergillus nidulans gpdA gene (heterologous truncation vector). Following transformation with the homologous truncation vector, nine transformants were analysed by Southern hybridisation; no transformants contained a disrupted pnlA gene. Of nineteen heterologous truncation vector transformants, three contained a disrupted pnlA gene; Southern analysis revealed single integrations of vector sequence at pnlA in two of these transformants. pnlA mRNA was not detected by Northern hybridisation in pnlA- transformants. pnlA- transformants failed to produce a PNLA protein with a pI identical to one normally detected in wild-type isolates by silver and activity staining of isoelectric focussing gels. Pathogenesis on Capsicum and apple was unaffected by disruption of the pnlA gene, indicating that the corresponding gene product, PNLA, is not essential for pathogenicity. Gene disruption is a feasible method for selectively mutating defined loci in G. cingulata for functional analysis of the corresponding gene products.

  5. Discrete Cosine Transform Image Coding With Sliding Block Codes

    NASA Astrophysics Data System (ADS)

    Divakaran, Ajay; Pearlman, William A.

    1989-11-01

    A transform trellis coding scheme for images is presented. A two-dimensional discrete cosine transform is applied to the image, followed by a search on a trellis-structured code. This code is a sliding block code that utilizes a constrained-size reproduction alphabet. The transform stage divides the image into blocks. The non-stationarity of the image is counteracted by grouping these blocks into clusters through a clustering algorithm and then encoding the clusters separately. Mandela-ordered sequences are formed from each cluster, i.e., identically indexed coefficients from each block are grouped together to form one-dimensional sequences. A separate search ensues on each of these Mandela-ordered sequences. Padding sequences are used to improve the trellis search fidelity; they absorb the error caused by the building up of the trellis to full size. The simulations were carried out on a 256x256 image ('LENA'). The results are comparable to those of existing schemes, and the visual quality of the image is enhanced considerably by the padding and clustering.
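    The block transform and Mandela-ordering steps described above can be sketched as follows; the clustering step and the trellis search itself are omitted, and the 8x8 block size is an assumption for illustration.

```python
import numpy as np
from scipy.fft import dctn

def mandela_sequences(image, block=8):
    """2-D DCT per block, then Mandela ordering: identically indexed
    coefficients from every block are grouped into one 1-D sequence."""
    h, w = image.shape
    bh, bw = h // block, w // block
    coeffs = np.empty((bh * bw, block * block))
    n = 0
    for i in range(bh):
        for j in range(bw):
            blk = image[i*block:(i+1)*block, j*block:(j+1)*block]
            coeffs[n] = dctn(blk.astype(float), norm='ortho').ravel()
            n += 1
    # Column k collects coefficient k from every block: one Mandela sequence.
    return [coeffs[:, k] for k in range(block * block)]
```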

  6. 16 CFR 602.1 - Effective dates.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... identification of possible instances of identity theft; (iii) Section 115, concerning truncation of the social... theft victims; (v) Section 152, concerning blocking of information resulting from identity theft; (vi) Section 153, concerning the coordination of identity theft complaint investigations; (vii) Section 154...

  7. A truncated, activin-induced Smad3 isoform acts as a transcriptional repressor of FSHβ expression in mouse pituitary.

    PubMed

    Kim, So-Youn; Zhu, Jie; Woodruff, Teresa K

    2011-08-06

    The receptor-regulated protein Smad3 is a key player in the signaling cascade stimulated by the binding of activin to its cell surface receptor. Upon phosphorylation, Smad3 forms a heterocomplex with Smad2 and Smad4, translocates to the nucleus and acts as a transcriptional co-activator. We have identified a unique isoform of Smad3 that is expressed in mature pituitary gonadotropes. 5' RACE revealed that this truncated Smad3 isoform is transcribed from an ATG site within exon 4 and consists of 7 exons encoding half of the linker region and the MH2 region. In pituitary cells, the truncated Smad3 isoform was phosphorylated upon activin treatment, in a manner that was temporally distinct from the phosphorylation of full-length Smad3. Activin-induced phosphorylation of Smad3 and the truncated Smad3 isoform was blocked by both follistatin and siRNA-mediated knockdown of Smad3. The truncated Smad3 isoform antagonized Smad3-mediated, activin-responsive promoter activity. We propose that the pituitary gonadotrope contains an ultra-short, activin-responsive feedback loop utilizing two different isoforms of Smad3: one that acts as an agonist (Smad3) and another that acts as an intracrine antagonist (the truncated Smad3 isoform) to regulate FSHβ production. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Surface code implementation of block code state distillation.

    PubMed

    Fowler, Austin G; Devitt, Simon J; Jones, Cody

    2013-01-01

    State distillation is the process of taking a number of imperfect copies of a particular quantum state and producing fewer better copies. Until recently, the lowest overhead method of distilling states produced a single improved |A〉 state given 15 input copies. New block code state distillation methods can produce k improved |A〉 states given 3k + 8 input copies, potentially significantly reducing the overhead associated with state distillation. We construct an explicit surface code implementation of block code state distillation and quantitatively compare the overhead of this approach to the old. We find that, using the best available techniques, for parameters of practical interest, block code state distillation does not always lead to lower overhead, and, when it does, the overhead reduction is typically less than a factor of three.
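    The raw copy-count comparison in the abstract reduces to simple arithmetic: the old protocol consumes 15 input copies per improved state, while the block-code protocol consumes (3k + 8)/k. A sketch:

```python
def copies_per_output(k):
    """Input copies consumed per improved output state."""
    old_15_to_1 = 15.0              # single-output protocol
    block_code = (3 * k + 8) / k    # k outputs from 3k + 8 inputs
    return old_15_to_1, block_code

for k in (1, 2, 4, 8, 16):
    old, new = copies_per_output(k)
    print(f"k={k:2d}: 15-to-1 -> {old:.1f} copies/output, block code -> {new:.2f}")
# As k grows, the block-code cost approaches 3 copies per output state,
# though, as the paper shows, raw copy counts do not tell the whole overhead
# story once the surface-code implementation is included.
```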

  9. Surface code implementation of block code state distillation

    PubMed Central

    Fowler, Austin G.; Devitt, Simon J.; Jones, Cody

    2013-01-01

    State distillation is the process of taking a number of imperfect copies of a particular quantum state and producing fewer better copies. Until recently, the lowest overhead method of distilling states produced a single improved |A〉 state given 15 input copies. New block code state distillation methods can produce k improved |A〉 states given 3k + 8 input copies, potentially significantly reducing the overhead associated with state distillation. We construct an explicit surface code implementation of block code state distillation and quantitatively compare the overhead of this approach to the old. We find that, using the best available techniques, for parameters of practical interest, block code state distillation does not always lead to lower overhead, and, when it does, the overhead reduction is typically less than a factor of three. PMID:23736868

  10. LDPC Codes with Minimum Distance Proportional to Block Size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy

    2009-01-01

    Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code-block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. Any size LDPC code can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides Field Programmable Gate Array (FPGA) hardware performance simulations for this code family. In addition, the illustration provides minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) to achieve zero error rates as the code block size goes to infinity for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.
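    The copy-and-permute construction mentioned above ("copying the protograph structure N times, then permuting the edges") can be sketched with circulant permutations. The base matrix below is a hypothetical toy, not the article's protograph, and real designs choose the shifts deliberately rather than at random.

```python
import numpy as np

def lift_protograph(base, N, rng=None):
    """Expand a protograph base matrix into an N-fold lifted parity-check
    matrix: each 1 becomes a random NxN circulant permutation and each 0
    an NxN zero block."""
    rng = rng or np.random.default_rng(0)
    rows, cols = base.shape
    H = np.zeros((rows * N, cols * N), dtype=np.uint8)
    for r in range(rows):
        for c in range(cols):
            if base[r, c]:
                shift = rng.integers(N)
                P = np.roll(np.eye(N, dtype=np.uint8), shift, axis=1)
                H[r*N:(r+1)*N, c*N:(c+1)*N] = P
    return H

# Tiny illustrative protograph (hypothetical, not the article's design).
base = np.array([[1, 1, 1, 0],
                 [0, 1, 1, 1]], dtype=np.uint8)
H = lift_protograph(base, N=8)   # 16 x 32 parity-check matrix
```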

  11. On the Application of Time-Reversed Space-Time Block Code to Aeronautical Telemetry

    DTIC Science & Technology

    2014-06-01

    Keying (SOQPSK), bit error rate (BER), Orthogonal Frequency Division Multiplexing (OFDM), generalized time-reversed space-time block codes (GTR-STBC) ... Alamouti code [4]) is optimum [2]. Although OFDM is generally applied on a per-subcarrier basis in frequency-selective fading, it is not a viable ... Calderbank, "Finite-length MIMO decision feedback equalization for space-time block-coded signals over multipath-fading channels," IEEE Transactions on ...

  12. Expression and characterization of truncated human heme oxygenase (hHO-1) and a fusion protein of hHO-1 with human cytochrome P450 reductase.

    PubMed

    Wilks, A; Black, S M; Miller, W L; Ortiz de Montellano, P R

    1995-04-04

    A human heme oxygenase (hHO-1) gene without the sequence coding for the last 23 amino acids has been expressed in Escherichia coli behind the phoA promoter. The truncated enzyme is obtained in high yields as a soluble, catalytically active protein, making it available for the first time for detailed mechanistic studies. The purified, truncated hHO-1/heme complex is spectroscopically indistinguishable from that of the rat enzyme and converts heme to biliverdin when reconstituted with rat liver cytochrome P450 reductase. A self-sufficient heme oxygenase system has been obtained by fusing the truncated hHO-1 gene to the gene for human cytochrome P450 reductase without the sequence coding for the 20-amino-acid membrane-binding domain. Expression of the fusion protein in pCWori+ yields a protein that requires only NADPH for catalytic turnover. The failure of exogenous cytochrome P450 reductase to stimulate turnover and the insensitivity of the catalytic rate toward changes in ionic strength establish that electrons are transferred intramolecularly between the reductase and heme oxygenase domains of the fusion protein. The Vmax for the fusion protein is 2.5 times higher than that for the reconstituted system. Therefore, either the covalent tether does not interfere with normal docking and electron transfer between the flavin and heme domains, or alternative but equally efficient electron transfer pathways are available that do not require specific docking.

  13. Knowledge and Processes in Design

    DTIC Science & Technology

    1992-09-03

    ... statement codings were then organized into larger control-flow structures centered around design components called modules. The general assumption was ...

  14. Efficient preparation of large-block-code ancilla states for fault-tolerant quantum computation

    NASA Astrophysics Data System (ADS)

    Zheng, Yi-Cong; Lai, Ching-Yi; Brun, Todd A.

    2018-03-01

    Fault-tolerant quantum computation (FTQC) schemes that use multiqubit large block codes can potentially reduce the resource overhead to a great extent. A major obstacle is the requirement for a large number of clean ancilla states of different types without correlated errors inside each block. These ancilla states are usually logical stabilizer states of the data-code blocks, which are generally difficult to prepare if the code size is large. Previously, we have proposed an ancilla distillation protocol for Calderbank-Shor-Steane (CSS) codes by classical error-correcting codes. It was assumed that the quantum gates in the distillation circuit were perfect; however, in reality, noisy quantum gates may introduce correlated errors that are not treatable by the protocol. In this paper, we show that additional postselection by another classical error-detecting code can be applied to remove almost all correlated errors. Consequently, the revised protocol is fully fault tolerant and capable of preparing a large set of stabilizer states sufficient for FTQC using large block codes. At the same time, the yield rate can be boosted from O(t^-2) to O(1) in practice for an [[n, k, d = 2t + 1]] ...

  15. Prevalence of E/A wave fusion and A wave truncation in DDD pacemaker patients with complete AV block under nominal AV intervals.

    PubMed

    Poller, Wolfram C; Dreger, Henryk; Schwerg, Marius; Melzer, Christoph

    2015-01-01

    Optimization of the AV-interval (AVI) in DDD pacemakers improves cardiac hemodynamics and reduces pacemaker syndromes. Manual optimization is typically not performed in clinical routine. In the present study we analyze the prevalence of E/A wave fusion and A wave truncation under resting conditions in 160 patients with complete AV block (AVB) under the pre-programmed AVI. We manually optimized sub-optimal AVI. We analyzed 160 pacemaker patients with complete AVB, both in sinus rhythm (AV-sense; n = 129) and under atrial pacing (AV-pace; n = 31). Using Doppler analyses of the transmitral inflow we classified the nominal AVI as: a) normal, b) too long (E/A wave fusion) or c) too short (A wave truncation). In patients with a sub-optimal AVI, we performed manual optimization according to the recommendations of the American Society of Echocardiography. All AVB patients with atrial pacing exhibited a normal transmitral inflow under the nominal AV-pace intervals (100%). In contrast, 25 AVB patients in sinus rhythm showed E/A wave fusion under the pre-programmed AV-sense intervals (19.4%; 95% confidence interval (CI): 12.6-26.2%). A wave truncations were not observed in any patient. All patients with a complete E/A wave fusion achieved a normal transmitral inflow after AV-sense interval reduction (mean optimized AVI: 79.4 ± 13.6 ms). Given the rate of 19.4% (CI 12.6-26.2%) of patients with a too long nominal AV-sense interval, automatic algorithms may prove useful in improving cardiac hemodynamics, especially in the subgroup of atrially triggered pacemaker patients with AV node diseases.

  16. Constructions for finite-state codes

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Mceliece, R. J.; Abdel-Ghaffar, K.

    1987-01-01

    A class of codes called finite-state (FS) codes is defined and investigated. These codes, which generalize both block and convolutional codes, are defined by their encoders, which are finite-state machines with parallel inputs and outputs. A family of upper bounds on the free distance of a given FS code is derived from known upper bounds on the minimum distance of block codes. A general construction for FS codes is then given, based on the idea of partitioning a given linear block code into cosets of one of its subcodes, and it is shown that in many cases the FS codes constructed in this way have a d_free which is as large as possible. These codes are found without the need for lengthy computer searches and have potential applications for future deep-space coding systems. The issue of catastrophic error propagation (CEP) for FS codes is also investigated.
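    The coset-partition construction can be illustrated on a toy code; the [4,2] code and its subcode below are hypothetical examples for illustration, not codes from the article.

```python
import itertools
import numpy as np

def codewords(G):
    """All codewords of the binary linear code with generator matrix G."""
    k = G.shape[0]
    return {tuple(np.mod(np.array(m) @ G, 2))
            for m in itertools.product([0, 1], repeat=k)}

# Hypothetical example: partition the [4,2] code generated by G into
# cosets of the subcode generated by its first row.
G = np.array([[1, 0, 1, 1],
              [0, 1, 0, 1]])
code = codewords(G)
sub = codewords(G[:1])
cosets, seen = [], set()
for cw in sorted(code):
    if cw in seen:
        continue
    coset = {tuple((np.array(cw) + np.array(s)) % 2) for s in sub}
    cosets.append(sorted(coset))
    seen |= coset
print(cosets)   # two cosets of size two partition the four codewords
```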

  17. A Synchronization Algorithm and Implementation for High-Speed Block Codes Applications. Part 4

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Zhang, Yu; Nakamura, Eric B.; Uehara, Gregory T.

    1998-01-01

    Block codes have trellis structures and decoders amenable to high-speed CMOS VLSI implementation. For a given CMOS technology, these structures enable operating speeds higher than those achievable using convolutional codes, with only modest reductions in coding gain. As a result, block codes have tremendous potential for satellite trunk and other future high-speed communication applications. This paper describes a new approach to implementation of the synchronization function for block codes. The approach utilizes the output of the Viterbi decoder and therefore employs the strength of the decoder. Its operation requires no knowledge of the signal-to-noise ratio of the received signal, has a simple implementation, adds no overhead to the transmitted data, and has been shown in simulation to be effective for received SNR greater than 2 dB.

  18. Bounds on Block Error Probability for Multilevel Concatenated Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Moorthy, Hari T.; Stojanovic, Diana

    1996-01-01

    Maximum-likelihood decoding of long block codes is not feasible due to their large complexity. Some classes of codes are shown to be decomposable into multilevel concatenated codes (MLCC). For these codes, multistage decoding provides a good trade-off between performance and complexity. In this paper, we derive an upper bound on the probability of block error for MLCC. We use this bound to evaluate the difference in performance for different decompositions of some codes. The examples given show that a significant reduction in complexity can be achieved by increasing the number of decoding stages. The resulting performance degradation varies for different decompositions. A guideline is given for finding good m-level decompositions.
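    The paper's bounds are specific to multistage decoding of MLCC; for orientation, the sketch below computes the standard union upper bound on block error probability from a weight enumerator, using the [7,4] Hamming code as a hypothetical example rather than any code from the paper.

```python
import math

def union_bound_block_error(weight_enum, rate, ebno_db):
    """Union upper bound on block error probability for soft-decision ML
    decoding on the AWGN channel, from a weight enumerator {d: A_d}:
    P_e <= sum_d A_d * Q(sqrt(2 * d * R * Eb/N0))."""
    ebno = 10 ** (ebno_db / 10)
    q = lambda x: 0.5 * math.erfc(x / math.sqrt(2))   # Gaussian tail Q(x)
    return sum(A * q(math.sqrt(2 * d * rate * ebno))
               for d, A in weight_enum.items())

# Hypothetical example: the [7,4] Hamming code (A_3 = 7, A_4 = 7, A_7 = 1).
print(union_bound_block_error({3: 7, 4: 7, 7: 1}, rate=4/7, ebno_db=5.0))
```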

  19. Protograph LDPC Codes Over Burst Erasure Channels

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    In this paper we design high-rate protograph-based LDPC codes suitable for binary erasure channels. To simplify the encoder and decoder implementation for high-data-rate transmission, the structure of the codes is based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes, with an iterative decoding threshold that approaches the capacity of the binary erasure channel. The other class is designed for short block sizes based on maximizing the minimum stopping set size, since stopping sets are what cause iterative erasure decoding to fail (see the sketch below). For high code rates and short blocks, the second class outperforms the first class.
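    The role of stopping sets can be seen in a generic peeling (iterative erasure) decoder, sketched below under the assumption of a binary parity-check matrix H given as a dense 0/1 NumPy array; this is a textbook decoder, not code from the article.

```python
import numpy as np

def peel_erasures(H, word, erased):
    """Iterative (peeling) erasure decoder: any check with exactly one
    erased bit determines that bit as the XOR of its known neighbours.
    Returns the word and the set of still-unresolved erasures; a nonempty
    remainder means the erasures cover a stopping set."""
    word, erased = word.copy(), set(erased)
    progress = True
    while progress and erased:
        progress = False
        for row in H:
            idx = [j for j in np.flatnonzero(row) if j in erased]
            if len(idx) == 1:                       # solvable check node
                j = idx[0]
                known = [k for k in np.flatnonzero(row) if k != j]
                word[j] = np.bitwise_xor.reduce(word[known]) if known else 0
                erased.discard(j)
                progress = True
    return word, erased
```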

  20. Encoders for block-circulant LDPC codes

    NASA Technical Reports Server (NTRS)

    Andrews, Kenneth; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    In this paper, we present two encoding methods for block-circulant LDPC codes. The first is an iterative encoding method based on the erasure decoding algorithm, and the computations required are well organized due to the block-circulant structure of the parity check matrix. The second method uses block-circulant generator matrices, and the encoders are very similar to those for recursive convolutional codes. Some encoders of the second type have been implemented in a small Field Programmable Gate Array (FPGA) and operate at 100 Msymbols/second.
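    The first method is the erasure-decoding-based approach illustrated after the previous record. For the second, the key fact is that multiplying a length-N block by an NxN circulant is a binary cyclic convolution, which maps directly onto shift-register hardware. The sketch below assumes, purely for illustration, a systematic generator composed of circulant blocks given by their first rows; the layout is hypothetical, not the paper's construction.

```python
import numpy as np

def cyc_conv(a, b):
    """a times the circulant whose first row is b, i.e., a binary cyclic
    convolution (what a shift-register encoder computes)."""
    out = np.zeros(len(a), dtype=np.uint8)
    for i in range(len(a)):
        if a[i]:
            out ^= np.roll(b, i)
    return out

def encode_block_circulant(msg_blocks, P_first_rows):
    """Systematic encoding with G = [I | P], where P is a grid of NxN
    circulants: P_first_rows[i][j] is the first row of the circulant
    linking message block i to parity block j."""
    n_par = len(P_first_rows[0])
    N = len(msg_blocks[0])
    parity = [np.zeros(N, dtype=np.uint8) for _ in range(n_par)]
    for i, m in enumerate(msg_blocks):
        for j in range(n_par):
            parity[j] ^= cyc_conv(m, P_first_rows[i][j])
    return np.concatenate(list(msg_blocks) + parity)

# Hypothetical toy example: two message blocks, one parity block, N = 4.
m = [np.array([1, 0, 1, 0], dtype=np.uint8),
     np.array([0, 1, 1, 0], dtype=np.uint8)]
P = [[np.array([1, 1, 0, 0], dtype=np.uint8)],
     [np.array([1, 0, 1, 0], dtype=np.uint8)]]
print(encode_block_circulant(m, P))
```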

  1. Phase behavior of a family of truncated hard cubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gantapara, Anjan P., E-mail: A.P.Gantapara@uu.nl; Dijkstra, Marjolein, E-mail: M.Dijkstra1@uu.nl; Graaf, Joost de

    2015-02-07

    In continuation of our work in Gantapara et al. [Phys. Rev. Lett. 111, 015501 (2013)], we investigate here the thermodynamic phase behavior of a family of truncated hard cubes, for which the shape evolves smoothly from a cube via a cuboctahedron to an octahedron. We used Monte Carlo simulations and free-energy calculations to establish the full phase diagram. This phase diagram exhibits a remarkable richness in crystal and mesophase structures, depending sensitively on the precise particle shape. In addition, we examined in detail the nature of the plastic crystal (rotator) phases that appear for intermediate densities and levels of truncation. Our results allow us to probe the relation between phase behavior and building-block shape and to further the understanding of rotator phases. Furthermore, the phase diagram presented here should prove instrumental for guiding future experimental studies on similarly shaped nanoparticles and the creation of new materials.

  2. Antigen-capture blocking enzyme-linked immunosorbent assay based on a baculovirus recombinant antigen to differentiate Transmissible gastroenteritis virus from Porcine respiratory coronavirus antibodies.

    PubMed

    López, Lissett; Venteo, Angel; García, Marga; Camuñas, Ana; Ranz, Ana; García, Julia; Sarraseca, Javier; Anaya, Carmen; Rueda, Paloma

    2009-09-01

    A new commercially available antigen-capture, blocking enzyme-linked immunosorbent assay (antigen-capture b-ELISA), based on baculovirus truncated-S recombinant protein of Transmissible gastroenteritis virus (TGEV) and 3 specific monoclonal antibodies, was developed and evaluated by examining a panel of 453 positive Porcine respiratory coronavirus (PRCoV), 31 positive TGEV, and 126 negative field sera by using another commercially available differential coronavirus b-ELISA as the reference technique to differentiate TGEV- from PRCoV-induced antibodies. The recombinant S protein-based ELISA appeared to be 100% sensitive for TGEV and PRCoV detection and highly specific for TGEV and PRCoV detection (100% and 92.06%, respectively), when qualitative results (positive or negative) were compared with those of the reference technique. In variability experiments, the ELISA gave consistent results when the same serum was evaluated on different wells and different plates. These results indicated that truncated recombinant S protein is a suitable alternative to the complete virus as antigen in ELISA assays. The use of recombinant S protein as antigen offers great advantages because it is an easy-to-produce, easy-to-standardize, noninfectious antigen that does not require further purification or concentration. Those advantages represent an important improvement for antigen preparation, in comparison with other assays in which an inactivated virus from mammalian cell cultures is used.

  3. Protograph based LDPC codes with minimum distance linearly growing with block size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    We propose several LDPC code constructions that simultaneously achieve good threshold and error-floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble-average weight enumerators. Our constructions are based on projected-graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance increasing linearly in block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.

  4. Encoders for block-circulant LDPC codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)

    2009-01-01

    Methods and apparatus to encode message input symbols in accordance with an accumulate-repeat-accumulate code with repetition three or four are disclosed. Block circulant matrices are used. A first method and apparatus make use of the block-circulant structure of the parity check matrix. A second method and apparatus use block-circulant generator matrices.

  5. Maximum-likelihood soft-decision decoding of block codes using the A* algorithm

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.

    1994-01-01

    The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.
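    A minimal best-first (A*) sketch of this idea is given below for a code with a systematic generator matrix: crediting every still-undecided position with |y_i| never underestimates the best completion, so the heuristic is admissible and the first complete codeword popped is maximum likelihood. The published algorithm additionally orders positions by reliability and prunes more aggressively; those refinements are omitted here, and the Hamming-code example is hypothetical.

```python
import heapq
import numpy as np

def a_star_ml_decode(G, y):
    """A* ML soft-decision decoding of a binary linear block code with a
    systematic generator matrix G (first k columns identity, n > k).
    BPSK mapping 0 -> +1, 1 -> -1; we maximize sum_i (1 - 2*c_i) * y_i."""
    k, n = G.shape
    heap = [(-float(np.abs(y).sum()), ())]   # (negated optimistic score, prefix)
    while heap:
        neg_score, entry = heapq.heappop(heap)
        if len(entry) == n:                  # complete codeword, exact score
            return np.array(entry)
        for b in (0, 1):
            new = entry + (b,)
            if len(new) < k:
                # Exact credit for decided systematic positions, optimistic
                # |y_i| credit for every remaining position (admissible).
                exact = sum((1 - 2*new[i]) * float(y[i]) for i in range(len(new)))
                optimistic = exact + float(np.abs(y[len(new):]).sum())
                heapq.heappush(heap, (-optimistic, new))
            else:
                cw = tuple(int(v) for v in np.mod(np.array(new) @ G, 2))
                exact = float(((1 - 2 * np.array(cw)) * y).sum())
                heapq.heappush(heap, (-exact, cw))

G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])       # systematic [7,4] Hamming code
y = np.array([0.9, -1.1, 0.8, 1.2, -0.2, 0.1, -0.9])  # received soft values
print(a_star_ml_decode(G, y))
```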

  6. Fast ITTBC using pattern code on subband segmentation

    NASA Astrophysics Data System (ADS)

    Koh, Sung S.; Kim, Hanchil; Lee, Kooyoung; Kim, Hongbin; Jeong, Hun; Cho, Gangseok; Kim, Chunghwa

    2000-06-01

    Iterated Transformation Theory-based coding (ITTBC) suffers from very high computational complexity in the encoding phase, due to its exhaustive search. In this paper, our proposed image coding algorithm preprocesses the original image into a subband segmentation image by wavelet transform before coding, in order to reduce encoding complexity. A similar block is searched for using 24 block pattern codes, which encode the edge information of an image block, over the domain pool of the subband segmentation. As a result, numerical data show that the encoding time of the proposed method is reduced by 98.82% compared with Jacquin's method, while the loss in quality relative to Jacquin's is about 0.28 dB in PSNR, which is visually negligible.

  7. Overview of the Space Launch System Ascent Aeroacoustic Environment Test Program

    NASA Technical Reports Server (NTRS)

    Herron, Andrew J.; Crosby, William A.; Reed, Darren K.

    2016-01-01

    Characterization of accurate flight vehicle unsteady aerodynamics is critical for component and secondary structure vibroacoustic design. The Aerosciences Branch at the National Aeronautics and Space Administration (NASA) Marshall Space Flight Center has conducted a test at the NASA Ames Research Center (ARC) Unitary Plan Wind Tunnels (UPWT) to determine such ascent aeroacoustic environments for the Space Launch System (SLS). Surface static pressure measurements were also collected to aid in determination of local environments for venting, CFD substantiation, and calibration of the flush air data system located on the launch abort system. Additionally, this test supported a NASA Engineering and Safety Center study of alternate booster nose caps. Testing occurred during two test campaigns: August - September 2013 and December 2013 - January 2014. Four primary model configurations were tested for ascent aeroacoustic environment definition. The SLS Block 1 vehicle was represented by a 2.5% full stack model and a 4% truncated model. Preliminary Block 1B payload and manned configurations were also tested, using 2.5% full stack and 4% truncated models respectively. This test utilized the 11 x 11 foot transonic and 9 x 7 foot supersonic tunnel sections at the ARC UPWT to collect data from Mach 0.7 through 2.5 at various total angles of attack. SLS Block 1 design environments were developed primarily using these data. SLS Block 1B preliminary environments have also been prepared using these data. This paper discusses the test and analysis methodology utilized, with a focus on the unsteady data collection and processing.

  8. Analytic reconstruction algorithms for triple-source CT with horizontal data truncation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Ming; Yu, Hengyong, E-mail: hengyong-yu@ieee.org

    2015-10-15

    Purpose: This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. Methods: The study is conducted by using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. Results: A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended for horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using their method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. Conclusions: The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphic processing units.

  9. Analytic reconstruction algorithms for triple-source CT with horizontal data truncation.

    PubMed

    Chen, Ming; Yu, Hengyong

    2015-10-01

    This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. The study is conducted by using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended for horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using their method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphic processing units.

  10. High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin

    2016-01-01

    Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycle. In this paper, by combining the PEG algorithm and the quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding, with an efficiency of 93.7%.

  11. Program structure-based blocking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertolli, Carlo; Eichenberger, Alexandre E.; O'Brien, John K.

    2017-09-26

    Embodiments relate to program structure-based blocking. An aspect includes receiving source code corresponding to a computer program by a compiler of a computer system. Another aspect includes determining a prefetching section in the source code by a marking module of the compiler. Yet another aspect includes performing, by a blocking module of the compiler, blocking of instructions located in the prefetching section into instruction blocks, such that the instruction blocks of the prefetching section only contain instructions that are located in the prefetching section.

  12. Wavelet-based scalable L-infinity-oriented compression.

    PubMed

    Alecu, Alin; Munteanu, Adrian; Cornelis, Jan P H; Schelkens, Peter

    2006-09-01

    Among the different classes of coding techniques proposed in the literature, predictive schemes have proven their outstanding performance in near-lossless compression. However, these schemes are incapable of providing embedded L∞-oriented compression, or, at most, provide a very limited number of potential L∞ bit-stream truncation points. We propose a new multidimensional wavelet-based L∞-constrained scalable coding framework that generates a fully embedded L∞-oriented bit stream and that retains the coding performance and all the scalability options of state-of-the-art L2-oriented wavelet codecs. Moreover, our codec instantiation of the proposed framework clearly outperforms JPEG2000 in the L∞ coding sense.

  13. Perceptually-Based Adaptive JPEG Coding

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
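    A toy rendering of the idea, with hypothetical constants and with AC energy standing in for the paper's full contrast-sensitivity, light-adaptation, and masking model, might look like the following: blocks with more masking activity receive larger multipliers (coarser quantization), aiming for roughly flat perceptual error.

```python
import numpy as np
from scipy.fft import dctn

def block_multipliers(image, block=8):
    """Toy adaptive-JPEG multiplier selection: estimate each block's
    masking activity from its AC energy and quantize more coarsely where
    activity hides the error. Constants are illustrative only."""
    h, w = image.shape
    mults = np.ones((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            blk = image[i*block:(i+1)*block, j*block:(j+1)*block].astype(float)
            c = dctn(blk, norm='ortho')
            activity = np.sqrt(max((c**2).sum() - c[0, 0]**2, 0.0))  # AC energy
            mults[i, j] = np.clip((activity / 100.0) ** 0.5, 0.5, 4.0)
    return mults
```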

  14. Investigation of Near Shannon Limit Coding Schemes

    NASA Technical Reports Server (NTRS)

    Kwatra, S. C.; Kim, J.; Mo, Fan

    1999-01-01

    Turbo codes can deliver performance that is very close to the Shannon limit. This report investigates algorithms for convolutional turbo codes and block turbo codes; both coding schemes can achieve performance near the Shannon limit. The performance of the schemes is obtained using computer simulations. There are three sections in this report. The first section is the introduction, which discusses the fundamentals of coding, block coding and convolutional coding. In the second section, the basic concepts of convolutional turbo codes are introduced and the performance of turbo codes, especially high-rate turbo codes, is provided from the simulation results. After introducing all the parameters that help turbo codes achieve such good performance, it is concluded that the output weight distribution should be the main consideration in designing turbo codes. Based on the output weight distribution, performance bounds for turbo codes are given. Then, the relationships between the output weight distribution and factors such as the generator polynomial, the interleaver and the puncturing pattern are examined. A criterion for the best selection of system components is provided. The puncturing pattern algorithm is discussed in detail, and different puncturing patterns are compared for each high rate. For most of the high-rate codes, the puncturing pattern does not show any significant effect on the code performance if a pseudo-random interleaver is used in the system. For some special-rate codes with poor performance, an alternative puncturing algorithm is designed which restores their performance close to the Shannon limit. Finally, in section three, for iterative decoding of block codes, the method of building a trellis for block codes, the structure of the iterative decoding system and the calculation of extrinsic values are discussed.
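    To make the puncturing discussion concrete, the sketch below applies a periodic puncturing pattern to the two parity streams of a rate-1/3 turbo encoder to reach rate 1/2. The streams and pattern are hypothetical, not taken from the report.

```python
def puncture(parity_streams, pattern):
    """Apply a periodic puncturing pattern to parity streams.
    pattern[s][t] == 1 keeps bit t (mod pattern period) of stream s."""
    out = []
    for s, stream in enumerate(parity_streams):
        period = len(pattern[s])
        out.append([b for t, b in enumerate(stream) if pattern[s][t % period]])
    return out

# Hypothetical example: rate 1/3 -> 1/2 by alternately puncturing parities.
systematic = [1, 0, 1, 1]
p1, p2 = [0, 1, 1, 0], [1, 1, 0, 1]
kept = puncture([p1, p2], pattern=[[1, 0], [0, 1]])
# Transmit 4 systematic + 2 + 2 kept parity bits = 8 bits for 4 info bits.
print(kept)
```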

  15. Least reliable bits coding (LRBC) for high data rate satellite communications

    NASA Technical Reports Server (NTRS)

    Vanderaar, Mark; Budinger, James; Wagner, Paul

    1992-01-01

    LRBC, a bandwidth-efficient multilevel/multistage block-coded modulation technique, is analyzed. LRBC uses simple multilevel component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Soft-decision multistage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Analytical expressions and tight performance bounds are used to show that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of BPSK. The relative simplicity of Galois field algebra versus the Viterbi algorithm and the availability of high-speed commercial VLSI for block codes indicate that LRBC using block codes is a desirable method for high-data-rate implementations.

  16. An Improved Neutron Transport Algorithm for HZETRN2006

    NASA Astrophysics Data System (ADS)

    Slaba, Tony

    NASA's new space exploration initiative includes plans for a long-term human presence in space, thereby placing new emphasis on space radiation analyses. In particular, a systematic effort of verification, validation and uncertainty quantification of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. In this paper, the numerical error associated with energy discretization in HZETRN2006 is addressed; large errors in the low-energy portion of the neutron fluence spectrum are produced due to a numerical truncation error in the transport algorithm. It is shown that the truncation error results from the narrow energy domain of the neutron elastic spectral distributions, and that an extremely fine energy grid is required in order to adequately resolve the problem under the current formulation. Since adding a sufficient number of energy points would render the code computationally inefficient, we revisit the light-ion transport theory developed for HZETRN2006 and focus on neutron elastic interactions. The new approach numerically integrates with adequate resolution in the energy domain without affecting the run-time of the code and is easily incorporated into the current code. Efforts were also made to optimize the computational efficiency of the light-ion propagator; a brief discussion of these efforts is given along with run-time comparisons between the original and updated codes. Convergence testing is then completed by running the code for various environments and shielding materials with many different energy grids to ensure stability of the proposed method.

  17. Multi-stage decoding for multi-level block modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao

    1991-01-01

    Various types of multistage decoding for multilevel block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum-likelihood or bounded-distance, are discussed. The error performance of the codes is analyzed for a memoryless additive channel based on various types of multistage decoding, and upper bounds on the probability of an incorrect decoding are derived. It was found that, if the component codes of a multilevel modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. The difference in performance between suboptimum multistage soft-decision maximum-likelihood decoding of a modulation code and single-stage optimum decoding of the overall code was found to be very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multistage decoding of multilevel modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.

  18. A truncated spherical shell model for nuclear collective excitations: Applications to the odd-mass systems, neutron-proton systems, and other topics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Hua.

    1989-01-01

    One of the most elusive quantum systems in nature is the nucleus, a strongly interacting many-body system. In the hadronic (i.e., neutron and proton) phase, the primary concern of this thesis, the nucleus' single-particle excitations are intertwined with its various collective excitations. Although the underpinning of the nucleus is the spherical shell model, the model is rendered powerless without a severe, but intelligent, truncation of the infinite Hilbert space. The recently proposed Fermion Dynamical Symmetry Model (FDSM) is precisely such a truncation scheme, and in it a symmetry-dictated truncation is introduced in nuclear physics for the first time. In this thesis, extensions and explorations of the FDSM are made specifically to study the odd-mass systems (where the most intricate mixing of single-particle and collective excitations is observed) and the neutron-proton systems. In particular, the author finds that the previously successful phenomenological particle-rotor model of the Copenhagen school can now be well understood microscopically via the FDSM. Furthermore, the well-known Coriolis attenuation and variable-moment-of-inertia effects are naturally understood from the model as well. A computer code, FDUO, was written by one of us to study, for the first time, the numerical implications of the FDSM. Several collective modes were found even when the system does not admit a group-chain description. In addition, the code is well suited to study the connection between level-statistical behavior (à la Gaussian Orthogonal Ensemble) and dynamical symmetry. It is found that there exist critical regions of the interaction parameter space where the system behaves chaotically. This information is certainly crucial to understanding quantum chaotic behavior.

  19. Selective encryption for H.264/AVC video coding

    NASA Astrophysics Data System (ADS)

    Shi, Tuo; King, Brian; Salama, Paul

    2006-02-01

    Due to the ease with which digital data can be manipulated and due to the ongoing advancements that have brought us closer to pervasive computing, the secure delivery of video and images has become a challenging problem. Despite the advantages and opportunities that digital video provides, illegal copying and distribution as well as plagiarism of digital audio, images, and video is still ongoing. In this paper we describe two techniques for securing H.264 coded video streams. The first technique, SEH264Algorithm1, groups the data into the following blocks: (1) a block that contains the sequence parameter set and the picture parameter set; (2) a block containing a compressed intra-coded frame; (3) a block containing the slice header of a P slice, all the headers of the macroblocks within the same P slice, and all the luma and chroma DC coefficients belonging to all the macroblocks within the same slice; (4) a block containing all the AC coefficients; and (5) a block containing all the motion vectors. The first three are encrypted, whereas the last two are not. The second method, SEH264Algorithm2, relies on the use of multiple slices per coded frame. The algorithm searches the compressed video sequence for start codes (0x000001) and then encrypts the next N bits of data.
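    A sketch of the start-code-driven selective encryption in SEH264Algorithm2 follows, operating on bytes rather than bits for simplicity, with a toy SHA-256 counter-mode keystream standing in for a real cipher, and without the care a real scheme needs to avoid accidentally emulating new start codes. The parameter n_bytes is an assumption.

```python
import hashlib

def keystream(key, nonce, length):
    """Toy keystream from SHA-256 in counter mode (illustration only;
    a real system would use a vetted cipher such as AES-CTR)."""
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:length]

def selectively_encrypt(stream, key, n_bytes=16):
    """After each 0x000001 start code, XOR the next n_bytes with keystream,
    in the spirit of SEH264Algorithm2 (which encrypts the N bits following
    each start code)."""
    data = bytearray(stream)
    i, seq = 0, 0
    while i < len(data) - 2:
        if data[i:i+3] == b"\x00\x00\x01":
            start = i + 3
            ks = keystream(key, seq.to_bytes(4, "big"), n_bytes)
            for j, k in enumerate(ks):
                if start + j < len(data):
                    data[start + j] ^= k
            seq += 1
            i = start + n_bytes
        else:
            i += 1
    return bytes(data)

enc = selectively_encrypt(b"\x00\x00\x01\x65" + bytes(range(32)), key=b"demo-key")
```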

  20. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1998-01-01

    Decoding algorithms based on the trellis representation of a code (block or convolutional) drastically reduce decoding complexity. The best known and most commonly used trellis-based decoding algorithm is the Viterbi algorithm. It is a maximum likelihood decoding algorithm. Convolutional codes with the Viterbi decoding have been widely used for error control in digital communications over the last two decades. This chapter is concerned with the application of the Viterbi decoding algorithm to linear block codes. First, the Viterbi algorithm is presented. Then, optimum sectionalization of a trellis to minimize the computational complexity of a Viterbi decoder is discussed and an algorithm is presented. Some design issues for IC (integrated circuit) implementation of a Viterbi decoder are considered and discussed. Finally, a new decoding algorithm based on the principle of compare-select-add is presented. This new algorithm can be applied to both block and convolutional codes and is more efficient than the conventional Viterbi algorithm based on the add-compare-select principle. This algorithm is particularly efficient for rate 1/n antipodal convolutional codes and their high-rate punctured codes. It reduces computational complexity by one-third compared with the Viterbi algorithm.
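    For reference, a minimal hard-decision Viterbi decoder for the classic rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 octal) is sketched below, written as plain add-compare-select; the chapter's compare-select-add variant reorganizes this same recursion. The code and example message are textbook material, not taken from the chapter.

```python
import numpy as np

def conv_encode(msg):
    """Rate-1/2, K=3 convolutional encoder, generators (7, 5) octal."""
    m1 = m2 = 0
    out = []
    for b in msg:
        out += [b ^ m1 ^ m2, b ^ m2]
        m1, m2 = b, m1
    return out

def viterbi_decode(rx, n_msg):
    """Hard-decision Viterbi decoding for the (7, 5) code above."""
    INF = 10**9
    metric = [0, INF, INF, INF]           # start in the all-zero state
    paths = [[], [], [], []]
    for t in range(n_msg):
        r = rx[2*t:2*t+2]
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for state in range(4):
            if metric[state] >= INF:
                continue
            m1, m2 = state >> 1, state & 1
            for b in (0, 1):
                expect = [b ^ m1 ^ m2, b ^ m2]
                dist = (expect[0] != r[0]) + (expect[1] != r[1])
                nxt = (b << 1) | m1
                cand = metric[state] + dist       # add
                if cand < new_metric[nxt]:        # compare-select
                    new_metric[nxt] = cand
                    new_paths[nxt] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    return paths[int(np.argmin(metric))]

msg = [1, 0, 1, 1, 0, 0]                  # includes two tail zeros
rx = conv_encode(msg)
rx[3] ^= 1                                 # inject one channel error
assert viterbi_decode(rx, len(msg)) == msg
```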

  1. Short-Block Protograph-Based LDPC Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher

    2010-01-01

    Short-block low-density parity-check (LDPC) codes of a special type are intended to be especially well suited for potential applications that include transmission of command and control data, cellular telephony, data communications in wireless local area networks, and satellite data communications. [In general, LDPC codes belong to a class of error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels.] The codes of the present special type exhibit low error floors, low bit and frame error rates, and low latency (in comparison with related prior codes). These codes also achieve low maximum rate of undetected errors over all signal-to-noise ratios, without requiring the use of cyclic redundancy checks, which would significantly increase the overhead for short blocks. These codes have protograph representations; this is advantageous in that, for reasons that exceed the scope of this article, the applicability of protograph representations makes it possible to design high-speed iterative decoders that utilize belief-propagation algorithms.

  2. Large scale exact quantum dynamics calculations: Ten thousand quantum states of acetonitrile

    NASA Astrophysics Data System (ADS)

    Halverson, Thomas; Poirier, Bill

    2015-03-01

    'Exact' quantum dynamics (EQD) calculations of the vibrational spectrum of acetonitrile (CH3CN) are performed using two different methods: (1) a phase-space-truncated, momentum-symmetrized Gaussian basis and (2) a correlated, truncated harmonic oscillator basis. In both cases, a simple classical phase-space picture is used to optimize the selection of individual basis functions, leading to drastic reductions in basis size in comparison with existing methods. Massive parallelization is also employed. Together, these tools, implemented in a single, easy-to-use computer code, enable a calculation of tens of thousands of vibrational states of CH3CN to an accuracy of 0.001-10 cm^-1.

  3. The trellis complexity of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Lin, W.

    1995-01-01

    It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.

  4. An electrostatic Particle-In-Cell code on multi-block structured meshes

    NASA Astrophysics Data System (ADS)

    Meierbachtol, Collin S.; Svyatskiy, Daniil; Delzanno, Gian Luca; Vernon, Louis J.; Moulton, J. David

    2017-12-01

    We present an electrostatic Particle-In-Cell (PIC) code on multi-block, locally structured, curvilinear meshes called Curvilinear PIC (CPIC). Multi-block meshes are essential to capture complex geometries accurately and with good mesh quality, something that would not be possible with single-block structured meshes that are often used in PIC and for which CPIC was initially developed. Despite the structured nature of the individual blocks, multi-block meshes resemble unstructured meshes in a global sense and introduce several new challenges, such as the presence of discontinuities in the mesh properties and coordinate orientation changes across adjacent blocks, and polyjunction points where an arbitrary number of blocks meet. In CPIC, these challenges have been met by an approach that features: (1) a curvilinear formulation of the PIC method: each mesh block is mapped from the physical space, where the mesh is curvilinear and arbitrarily distorted, to the logical space, where the mesh is uniform and Cartesian on the unit cube; (2) a mimetic discretization of Poisson's equation suitable for multi-block meshes; and (3) a hybrid (logical-space position/physical-space velocity), asynchronous particle mover that mitigates the performance degradation created by the necessity to track particles as they move across blocks. The numerical accuracy of CPIC was verified using two standard plasma-material interaction tests, which demonstrate good agreement with the corresponding analytic solutions. Compared to PIC codes on unstructured meshes, which have also been used for their flexibility in handling complex geometries but whose performance suffers from issues associated with data locality and indirect data access patterns, PIC codes on multi-block structured meshes may offer the best compromise for capturing complex geometries while also maintaining solution accuracy and computational efficiency.

  5. An electrostatic Particle-In-Cell code on multi-block structured meshes

    DOE PAGES

    Meierbachtol, Collin S.; Svyatskiy, Daniil; Delzanno, Gian Luca; ...

    2017-09-14

    We present an electrostatic Particle-In-Cell (PIC) code on multi-block, locally structured, curvilinear meshes called Curvilinear PIC (CPIC). Multi-block meshes are essential to capture complex geometries accurately and with good mesh quality, something that would not be possible with single-block structured meshes that are often used in PIC and for which CPIC was initially developed. In spite of the structured nature of the individual blocks, multi-block meshes resemble unstructured meshes in a global sense and introduce several new challenges, such as the presence of discontinuities in the mesh properties and coordinate orientation changes across adjacent blocks, and polyjunction points where an arbitrary number of blocks meet. In CPIC, these challenges have been met by an approach that features: (1) a curvilinear formulation of the PIC method: each mesh block is mapped from the physical space, where the mesh is curvilinear and arbitrarily distorted, to the logical space, where the mesh is uniform and Cartesian on the unit cube; (2) a mimetic discretization of Poisson's equation suitable for multi-block meshes; and (3) a hybrid (logical-space position/physical-space velocity), asynchronous particle mover that mitigates the performance degradation created by the necessity to track particles as they move across blocks. The numerical accuracy of CPIC was verified using two standard plasma–material interaction tests, which demonstrate good agreement with the corresponding analytic solutions. Compared to PIC codes on unstructured meshes, which have also been used for their flexibility in handling complex geometries but whose performance suffers from issues associated with data locality and indirect data access patterns, PIC codes on multi-block structured meshes may offer the best compromise for capturing complex geometries while also maintaining solution accuracy and computational efficiency.

  6. An electrostatic Particle-In-Cell code on multi-block structured meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meierbachtol, Collin S.; Svyatskiy, Daniil; Delzanno, Gian Luca

    We present an electrostatic Particle-In-Cell (PIC) code on multi-block, locally structured, curvilinear meshes called Curvilinear PIC (CPIC). Multi-block meshes are essential to capture complex geometries accurately and with good mesh quality, something that would not be possible with single-block structured meshes that are often used in PIC and for which CPIC was initially developed. In spite of the structured nature of the individual blocks, multi-block meshes resemble unstructured meshes in a global sense and introduce several new challenges, such as the presence of discontinuities in the mesh properties and coordinate orientation changes across adjacent blocks, and polyjunction points where an arbitrary number of blocks meet. In CPIC, these challenges have been met by an approach that features: (1) a curvilinear formulation of the PIC method: each mesh block is mapped from the physical space, where the mesh is curvilinear and arbitrarily distorted, to the logical space, where the mesh is uniform and Cartesian on the unit cube; (2) a mimetic discretization of Poisson's equation suitable for multi-block meshes; and (3) a hybrid (logical-space position/physical-space velocity), asynchronous particle mover that mitigates the performance degradation created by the necessity to track particles as they move across blocks. The numerical accuracy of CPIC was verified using two standard plasma–material interaction tests, which demonstrate good agreement with the corresponding analytic solutions. Compared to PIC codes on unstructured meshes, which have also been used for their flexibility in handling complex geometries but whose performance suffers from issues associated with data locality and indirect data access patterns, PIC codes on multi-block structured meshes may offer the best compromise for capturing complex geometries while also maintaining solution accuracy and computational efficiency.

  7. Multi-stage decoding for multi-level block modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1991-01-01

    In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance. The error performance of the codes is analyzed for a memoryless additive channel under various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computation results, we find that, if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multi-stage decoding of multi-level modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.

  8. Geometrical-optics code for computing the optical properties of large dielectric spheres.

    PubMed

    Zhou, Xiaobing; Li, Shusun; Stamnes, Knut

    2003-07-20

    Absorption of electromagnetic radiation by absorptive dielectric spheres such as snow grains in the near-infrared part of the solar spectrum cannot be neglected when the radiative properties of snow are computed. Thus a new, to our knowledge, geometrical-optics code is developed to compute the scattering and absorption cross sections of large dielectric particles of arbitrary complex refractive index. The number of internal reflections and transmissions is truncated on the basis of the ratio of the irradiance incident at the nth interface to the irradiance incident at the first interface for a specific optical ray; the truncation number is thus a function of the angle of incidence. Phase functions for both near- and far-field absorption and scattering of electromagnetic radiation are calculated directly at any desired scattering angle by using a hybrid algorithm based on the bisection and Newton-Raphson methods. With these methods, the light absorption and scattering properties of a large sphere can be calculated for any wavelength from the ultraviolet to the microwave regions. Assuming that large snow meltclusters (of 1-cm order), observed ubiquitously in the snow cover during summer, can be characterized as spheres, one may compute absorption and scattering efficiencies and the scattering phase function on the basis of this geometrical-optics method. A geometrical-optics method for sphere (GOMsphere) code is developed and tested against Wiscombe's Mie scattering code (MIE0) and a Monte Carlo code for a range of size parameters. GOMsphere can be combined with MIE0 to calculate the single-scattering properties of dielectric spheres of any size.
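
    The truncation criterion can be sketched as follows; this is our hedged reading, with an assumed per-bounce attenuation model (internal reflectance R times absorption exp(-alpha*L) along a chord), not the paper's exact ray bookkeeping.

        import math

        def truncation_number(R, alpha, L, eps=1e-6):
            """Interfaces kept for one ray: stop when the irradiance reaching
            the nth interface falls below eps times that at the first."""
            ratio, n = 1.0, 0
            while ratio >= eps:
                n += 1
                ratio *= R * math.exp(-alpha * L)  # one internal bounce + absorption
            return n

        # Stronger absorption or weaker internal reflection => earlier truncation.
        print(truncation_number(R=0.05, alpha=0.1, L=2.0))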

  9. Investigating the structure preserving encryption of high efficiency video coding (HEVC)

    NASA Astrophysics Data System (ADS)

    Shahid, Zafar; Puech, William

    2013-02-01

    This paper presents a novel method for the real-time protection of the new emerging High Efficiency Video Coding (HEVC) standard. Structure-preserving selective encryption is performed in the CABAC entropy coding module of HEVC, which is significantly different from CABAC entropy coding in H.264/AVC. In CABAC of HEVC, exponential Golomb coding is replaced by truncated Rice (TR) coding up to a specific value for the binarization of transform coefficients. Selective encryption is performed using the AES cipher in cipher feedback mode on a plaintext of binstrings in a context-aware manner. The encrypted bitstream has exactly the same bit rate and is format compliant. Experimental evaluation and security analysis of the proposed algorithm are performed on several benchmark video sequences containing different combinations of motion, texture and objects.
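
    For readers unfamiliar with TR binarization, the sketch below implements the generic truncated Rice scheme (unary prefix plus fixed-length suffix); the cutoff handling follows the textbook definition rather than the exact HEVC tables, and all names are ours.

        def tr_binarize(value, rice_param, c_max):
            """Truncated Rice binstring for 0 <= value <= c_max."""
            prefix_len = value >> rice_param
            max_prefix = c_max >> rice_param
            if prefix_len < max_prefix:
                bins = "1" * prefix_len + "0"        # unary prefix with stop bit
            else:
                bins = "1" * max_prefix              # truncated: stop bit omitted
            if rice_param:                           # fixed-length suffix
                bins += format(value & ((1 << rice_param) - 1),
                               "0{}b".format(rice_param))
            return bins

        for v in range(8):
            print(v, tr_binarize(v, rice_param=1, c_max=7))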

  10. De novo truncating variants in the AHDC1 gene encoding the AT-hook DNA-binding motif-containing protein 1 are associated with intellectual disability and developmental delay.

    PubMed

    Yang, Hui; Douglas, Ganka; Monaghan, Kristin G; Retterer, Kyle; Cho, Megan T; Escobar, Luis F; Tucker, Megan E; Stoler, Joan; Rodan, Lance H; Stein, Diane; Marks, Warren; Enns, Gregory M; Platt, Julia; Cox, Rachel; Wheeler, Patricia G; Crain, Carrie; Calhoun, Amy; Tryon, Rebecca; Richard, Gabriele; Vitazka, Patrik; Chung, Wendy K

    2015-10-01

    Whole-exome sequencing (WES) represents a significant breakthrough in clinical genetics, and identifies a genetic etiology in up to 30% of cases of intellectual disability (ID). Using WES, we identified seven unrelated patients with a similar clinical phenotype of severe intellectual disability or neurodevelopmental delay who were all heterozygous for de novo truncating variants in the AT-hook DNA-binding motif-containing protein 1 (AHDC1). The patients were all minimally verbal or nonverbal and had variable neurological problems including spastic quadriplegia, ataxia, nystagmus, seizures, autism, and self-injurious behaviors. Additional common clinical features include dysmorphic facial features and feeding difficulties associated with failure to thrive and short stature. The AHDC1 gene has only one coding exon, and the protein contains conserved regions including AT-hook motifs and a PDZ binding domain. We postulate that all seven variants detected in these patients result in a truncated protein missing critical functional domains, disrupting interactions with other proteins important for brain development. Our study demonstrates that truncating variants in AHDC1 are associated with ID and are primarily associated with a neurodevelopmental phenotype.

  11. Experimental Investigations on Axially and Eccentrically Loaded Masonry Walls

    NASA Astrophysics Data System (ADS)

    Keshava, Mangala; Raghunath, Seshagiri Rao

    2017-12-01

    In India, un-reinforced masonry walls are often used as the main structural components in load-bearing structures. The Indian code on masonry accounts for the reduction in strength of walls by using stress reduction factors in its design philosophy. This code was introduced in 1987 and reaffirmed in 1995. The present study investigates the use of these factors for south Indian masonry. Also, with the growing popularity of block work construction, the aim of this study was to find out the suitability of the factors given in the Indian code for block work masonry. Normally, the load carrying capacity of masonry walls can be assessed in three ways, namely, (1) tests on masonry constituents, (2) tests on masonry prisms and (3) tests on full-scale wall specimens. Tests on bricks/blocks, cement-sand mortar, brick/block masonry prisms and 14 full-scale brick/block masonry walls formed the experimental investigation. The behavior of the walls was investigated under varying slenderness and eccentricity ratios. Hollow concrete blocks, normally used as in-fill masonry, can be considered as load-bearing elements, as their load carrying capacity was found to be high when compared to conventional brick masonry. Higher slenderness and eccentricity ratios drastically reduced the strength capacity of south Indian brick masonry walls. The reduction in strength due to slenderness and eccentricity is presented in the form of stress reduction factors in the Indian code. The factors obtained through experiments on eccentrically loaded brick masonry walls were lower, while those for brick/block masonry under axial loads were higher, than the values indicated in the Indian code. The reduction in strength also differs between brick and block work masonry, indicating the need for separate stress reduction factors for these two masonry materials.

  12. Coding tools investigation for next generation video coding based on HEVC

    NASA Astrophysics Data System (ADS)

    Chen, Jianle; Chen, Ying; Karczewicz, Marta; Li, Xiang; Liu, Hongbin; Zhang, Li; Zhao, Xin

    2015-09-01

    The new state-of-the-art video coding standard, H.265/HEVC, was finalized in 2013 and achieves roughly 50% bit rate saving compared to its predecessor, H.264/MPEG-4 AVC. This paper provides evidence that there is still potential for further coding efficiency improvements. A brief overview of HEVC is first given in the paper. Then, our improvements to each main module of HEVC are presented. For instance, the recursive quadtree block structure is extended to support larger coding units and transform units. The motion information prediction scheme is improved by advanced temporal motion vector prediction, which inherits the motion information of each small block within a large block from a temporal reference picture. Cross-component prediction with a linear prediction model improves intra prediction, and overlapped block motion compensation improves the efficiency of inter prediction. Furthermore, coding of both intra and inter prediction residuals is improved by an adaptive multiple transform technique. Finally, in addition to the deblocking filter and SAO, an adaptive loop filter is applied to further enhance the reconstructed picture quality. This paper describes the above-mentioned techniques in detail and evaluates their coding performance benefits based on the common test conditions used during HEVC development. The simulation results show that significant performance improvement over the HEVC standard can be achieved, especially for high resolution video materials.
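
    A recursive quadtree split of the kind the coding tree generalizes can be sketched in a few lines; the split criterion below is a stand-in predicate, not the encoder's rate-distortion check.

        def split_block(x, y, size, min_size, needs_split):
            """Yield (x, y, size) leaves of a quadtree over a square block."""
            if size > min_size and needs_split(x, y, size):
                half = size // 2
                for dx in (0, half):
                    for dy in (0, half):
                        yield from split_block(x + dx, y + dy, half,
                                               min_size, needs_split)
            else:
                yield (x, y, size)

        # Toy criterion: split any block that straddles the image centre line.
        crosses = lambda x, y, s: (x < 32 < x + s) or (y < 32 < y + s)
        print(list(split_block(0, 0, 64, 8, crosses)))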

  13. Variable Coded Modulation software simulation

    NASA Astrophysics Data System (ADS)

    Sielicki, Thomas A.; Hamkins, Jon; Thorsen, Denise

    This paper reports on the design and performance of a new Variable Coded Modulation (VCM) system. This VCM system comprises eight of NASA's recommended codes from the Consultative Committee for Space Data Systems (CCSDS) standards, including four turbo and four AR4JA/C2 low-density parity-check codes, together with six modulation types (BPSK, QPSK, 8-PSK, 16-APSK, 32-APSK, 64-APSK). The signaling protocol for the transmission mode is based on a CCSDS recommendation. The coded modulation may be dynamically chosen, block to block, to optimize throughput.
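
    Block-to-block mode selection of this kind reduces, in the simplest reading, to picking the highest-throughput (modulation, code) pair whose SNR threshold is met; the thresholds in this sketch are invented placeholders, not CCSDS figures.

        MODES = [  # (name, info bits/symbol, required SNR in dB -- placeholders)
            ("BPSK turbo r=1/2",   0.5,  1.0),
            ("QPSK LDPC r=1/2",    1.0,  4.0),
            ("8-PSK LDPC r=2/3",   2.0,  9.0),
            ("16-APSK LDPC r=3/4", 3.0, 13.0),
        ]

        def pick_mode(snr_db):
            """Highest-throughput mode whose (assumed) SNR threshold is met."""
            feasible = [m for m in MODES if m[2] <= snr_db]
            return max(feasible, key=lambda m: m[1]) if feasible else MODES[0]

        for snr in (2.0, 8.0, 14.0):
            print(snr, "dB ->", pick_mode(snr)[0])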

  14. High Frequency Scattering Code in a Distributed Processing Environment

    DTIC Science & Technology

    1991-06-01

    ...use of automated analysis tools is indicated. One tool developed by Pacific-Sierra Research Corporation and marketed by Intel Corporation for... XQ: EXECUTE CODE; EN: END CODE. This input deck differs from that in the manual because the "PP" option is disabled in the modified code.

  15. Comparison of heavy-ion transport simulations: Collision integral in a box

    NASA Astrophysics Data System (ADS)

    Zhang, Ying-Xun; Wang, Yong-Jia; Colonna, Maria; Danielewicz, Pawel; Ono, Akira; Tsang, Manyee Betty; Wolter, Hermann; Xu, Jun; Chen, Lie-Wen; Cozma, Dan; Feng, Zhao-Qing; Das Gupta, Subal; Ikeno, Natsumi; Ko, Che-Ming; Li, Bao-An; Li, Qing-Feng; Li, Zhu-Xia; Mallik, Swagata; Nara, Yasushi; Ogawa, Tatsuhiko; Ohnishi, Akira; Oliinychenko, Dmytro; Papa, Massimo; Petersen, Hannah; Su, Jun; Song, Taesoo; Weil, Janus; Wang, Ning; Zhang, Feng-Shou; Zhang, Zhen

    2018-03-01

    Simulations by transport codes are indispensable to extract valuable physical information from heavy-ion collisions. In order to understand the origins of discrepancies among different widely used transport codes, we compare 15 such codes under controlled conditions of a system confined to a box with periodic boundary, initialized with Fermi-Dirac distributions at saturation density and temperatures of either 0 or 5 MeV. In such calculations, one is able to check separately the different ingredients of a transport code. In this second publication of the code evaluation project, we only consider the two-body collision term; i.e., we perform cascade calculations. When the Pauli blocking is artificially suppressed, the collision rates are found to be consistent for most codes (to within 1% or better) with analytical results, or with the completely controlled results of a basic cascade code. In order to reach that goal, it was necessary to eliminate correlations within the same pair of colliding particles that can be present depending on the adopted collision prescription. In calculations with active Pauli blocking, the blocking probability was found to deviate from the expected reference values. The reason is found in substantial phase-space fluctuations and smearing tied to numerical algorithms and model assumptions in the representation of phase space. This results in a reduction of the blocking probability in most transport codes, so that the simulated system gradually evolves away from the Fermi-Dirac toward a Boltzmann distribution. Since the numerical fluctuations are weaker in the Boltzmann-Uehling-Uhlenbeck codes, the Fermi-Dirac statistics is maintained there for a longer time than in the quantum molecular dynamics codes. As a result of this investigation, we are able to make judgements about the most effective strategies in transport simulations for determining the collision probabilities and the Pauli blocking. Investigations in a similar vein of other ingredients of transport calculations, like the mean-field propagation or the production of nucleon resonances and mesons, will be discussed in future publications.
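
    The Pauli-blocking step being compared can be sketched schematically: an otherwise allowed collision is accepted with probability (1 - f1')(1 - f2'), where the f' are final-state phase-space occupancies. The occupancy model below is a crude stand-in, not any participating code's smearing prescription.

        import random

        def occupancy(p):
            """Stand-in occupancy f(r, p): 0.8 inside a crude 270 MeV/c
            Fermi sphere, 0 outside (no smearing)."""
            return 0.8 if p < 270.0 else 0.0

        def collision_accepted(p1_final, p2_final):
            p_pass = (1.0 - occupancy(p1_final)) * (1.0 - occupancy(p2_final))
            return random.random() < p_pass

        # One final state inside the Fermi sphere => ~20% acceptance here.
        trials = 10000
        print(sum(collision_accepted(200.0, 320.0) for _ in range(trials)) / trials)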

  16. Simulation of patch and slot antennas using FEM with prismatic elements and investigations of artificial absorber mesh termination schemes

    NASA Technical Reports Server (NTRS)

    Gong, J.; Ozdemir, T.; Volakis, J.; Nurnberger, M.

    1995-01-01

    Year 1 progress can be characterized by four major achievements, which are crucial toward the development of a robust, easy-to-use antenna analysis code for doubly conformal platforms. (1) A new FEM code was developed using prismatic meshes. This code is based on a new edge-based distorted prism and is particularly attractive for growing meshes associated with printed slot and patch antennas on doubly conformal platforms. It is anticipated that this technology will lead to interactive, simple-to-use codes for a large class of antenna geometries. Moreover, the codes can be expanded to include modeling of the circuit characteristics. An attached report describes the theory and validation of the new prismatic code using reference calculations and measured data collected at the NASA Langley facilities. The agreement between the measured and calculated data is impressive, even for the coated patch configuration. (2) A scheme was developed for improved feed modeling in the context of FEM. A new approach based on the voltage continuity condition was devised and successfully tested in modeling coax cables and aperture-fed antennas. An important aspect of this new feed modeling approach is the ability to completely separate the feed and antenna mesh regions. In this manner, different elements can be used in each of the regions, leading to substantially improved accuracy and meshing simplicity. (3) A most important development this year has been the introduction of the perfectly matched interface (PMI) layer for truncating finite element meshes. So far the robust boundary integral method has been used for truncating the finite element meshes; however, this approach is not suitable for antennas on nonplanar platforms. The PMI layer is a lossy anisotropic absorber with zero reflection at its interface. (4) We were able to interface our antenna code FEMA_CYL (for antennas on cylindrical platforms) with a standard high frequency code. This interface was achieved by first generating equivalent magnetic currents across the antenna aperture using the FEM code. These currents were then employed as the sources in the high frequency code.

  17. Increased prevalence of third-degree atrioventricular block in patients with type II diabetes mellitus.

    PubMed

    Movahed, Mohammad-Reza; Hashemzadeh, Mehrtash; Jamal, M Mazen

    2005-10-01

    Diabetes mellitus (DM) is a major risk factor for cardiovascular disease and mortality. There is some evidence that third-degree atrioventricular (AV) block occurs more commonly in patients with DM. In this study, we evaluated any possible association between DM and third-degree AV block using International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes in a very large inpatient database. We used patient treatment files containing discharge diagnoses coded with ICD-9 codes for inpatient treatment at all Veterans Health Administration hospitals. The cohort was stratified using the ICD-9-CM code for DM (n = 293,124), a control group with hypertension but no DM (n = 552,623), and the ICD-9 codes for third-degree AV block (426.0) and smoking (305.1, V15.82). We performed multivariate analysis adjusting for coronary artery disease, congestive heart failure, smoking, and hyperlipidemia. Continuous and binary variables were analyzed using chi-square and Fisher exact tests. A third-degree AV block diagnosis was present in 3,240 DM patients (1.1%) vs 3,367 patients (0.6%) in the control group. In multivariate analysis, DM remained strongly associated with third-degree AV block (odds ratio, 3.1; 95% confidence interval, 3.0 to 3.3; p < 0.0001). Third-degree AV block occurs significantly more often in patients with DM. This finding may, in part, explain the high cardiovascular mortality in DM patients.

  18. FANS-3D Users Guide (ESTEP Project ER 201031)

    DTIC Science & Technology

    2016-08-01

    ...governing laminar and turbulent flows in body-fitted curvilinear grids. The code employs multi-block overset (chimera) grids, including fully matched... governing incompressible flow in body-fitted grids. The code allows for multi-block overset (chimera) grids, which can be fully matched, arbitrarily... The interested reader may consult the Chimera Overset Structured Mesh-Interpolation Code (COSMIC) Users’ Manual (Chen, 2009). The input file used for...

  19. Computation of unsteady transonic aerodynamics with steady state fixed by truncation error injection

    NASA Technical Reports Server (NTRS)

    Fung, K.-Y.; Fu, J.-K.

    1985-01-01

    A novel technique is introduced for efficient computation of unsteady transonic aerodynamics. The steady flow corresponding to the body shape is maintained by truncation error injection while the perturbed unsteady flows corresponding to unsteady body motions are being computed. This allows the use of different grids, each matched to the characteristic length scale of the steady or the unsteady flow, and hence allows efficient computation of the unsteady perturbations. An example of a typical unsteady computation of flow over a supercritical airfoil shows that substantial savings in computation time and storage can easily be achieved without loss of solution accuracy. This technique is easy to apply and requires very few changes to existing codes.
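
    The injection idea can be illustrated on a toy 1D diffusion problem: the residual of a known steady solution on the working grid is computed once and added as a fixed source, so time marching leaves that steady state untouched while perturbations may evolve on top of it. The operator, solution and step sizes are our assumptions, not the paper's transonic setup.

        import numpy as np

        n, dt = 101, 2.0e-5
        dx = 1.0 / (n - 1)
        x = np.linspace(0.0, 1.0, n)
        u_steady = np.sin(np.pi * x)              # stand-in steady solution

        def L(u):                                 # simple diffusion operator
            r = np.zeros_like(u)
            r[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
            return r

        source = -L(u_steady)                     # injected truncation-error term
        u = u_steady.copy()
        for _ in range(1000):
            u += dt * (L(u) + source)             # steady state is preserved

        print(np.max(np.abs(u - u_steady)))       # stays at round-off level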

  20. Overexpression of the truncated version of ILV2 enhances glycerol production in Saccharomyces cerevisiae.

    PubMed

    Murashchenko, Lidiia; Abbas, Charles; Dmytruk, Kostyantyn; Sibirny, Andriy

    2016-08-01

    Acetolactate synthase is a mitochondrial enzyme that catalyses the conversion of two pyruvate molecules to an acetolactate molecule with release of carbon dioxide. The overexpression of a truncated version of the corresponding gene, ILV2, which codes for a presumably cytosolic acetolactate synthase, in the yeast Saccharomyces cerevisiae led to a decrease in the intracellular pyruvate concentration. This recombinant strain was also characterized by a four-fold increase in glycerol production, with a concomitant 1.8-fold reduction in ethanol production, when compared to that of the wild-type strain under anaerobic conditions in a glucose alcoholic fermentation. Copyright © 2016 John Wiley & Sons, Ltd.

  1. 2-Step scalar deadzone quantization for bitplane image coding.

    PubMed

    Auli-Llinas, Francesc

    2013-12-01

    Modern lossy image coding systems generate a quality-progressive codestream that, truncated at increasing rates, produces an image with decreasing distortion. Quality progressivity is commonly provided by an embedded quantizer that employs uniform scalar deadzone quantization (USDQ) together with a bitplane coding strategy. This paper introduces a 2-step scalar deadzone quantization (2SDQ) scheme that achieves the same coding performance as USDQ while reducing the coding passes and the emitted symbols of the bitplane coding engine. This serves to reduce the computational costs of the codec and/or to code high dynamic range images. The main insights behind 2SDQ are the use of two quantization step sizes that approximate wavelet coefficients with more or less precision depending on their density, and a rate-distortion optimization technique that adjusts the distortion decreases produced when coding 2SDQ indexes. The integration of 2SDQ into current codecs is straightforward. The applicability and efficiency of 2SDQ are demonstrated within the framework of JPEG2000.
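
    A hedged sketch of the two-step idea follows: small-magnitude coefficients (the dense part of the wavelet distribution) are quantized with step d and large ones with 2d. The threshold, steps and index offsets are illustrative; the paper's rate-distortion adjustment is not reproduced.

        import numpy as np

        def sdq_2step(c, d, thresh):
            """Deadzone quantization with step d below thresh, 2*d above."""
            mag, sign = np.abs(c), np.sign(c)
            small = mag < thresh
            q = np.where(small, np.floor(mag / d),
                         np.floor((mag - thresh) / (2.0 * d)))
            return sign * q, small

        coeffs = np.array([0.3, -1.7, 6.2, -14.9])
        q, small = sdq_2step(coeffs, d=1.0, thresh=4.0)
        print(q, small)   # fewer distinct indexes for the large coefficients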

  2. Zero-block mode decision algorithm for H.264/AVC.

    PubMed

    Lee, Yu-Ming; Lin, Yinyi

    2009-03-01

    In a previous paper, we proposed a zero-block inter-mode decision algorithm for H.264 video coding based upon the number of zero-blocks of 4 x 4 DCT coefficients between the current macroblock and the co-located macroblock. That algorithm achieves a significant reduction in computation, but the gain is limited for high bit-rate coding. To improve computational efficiency, in this paper we suggest an enhanced zero-block decision algorithm, which uses an early zero-block detection method to compute the number of zero-blocks instead of direct DCT and quantization (DCT/Q) calculation and incorporates two adequate decision methods for the semi-stationary and nonstationary regions of a video sequence. In addition, the zero-block decision algorithm is also applied to intra-mode prediction in P frames. The enhanced zero-block decision algorithm yields an average 27% reduction in total encoding time compared to the original zero-block decision algorithm.
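
    Early zero-block detection can be caricatured as a SAD test against a QP-dependent threshold, skipping DCT/Q entirely when it passes; the threshold shape below is an assumption for illustration, not the paper's derivation.

        def sad4x4(cur, ref):
            return sum(abs(c - r) for rc, rr in zip(cur, ref)
                                   for c, r in zip(rc, rr))

        def is_zero_block(cur, ref, qp, k=0.75):
            """Declare a zero-block when SAD is below an assumed QP-dependent
            threshold, so DCT/Q for the block can be skipped."""
            return sad4x4(cur, ref) < k * 2.0 ** (qp / 6.0)

        cur = [[10, 11, 10, 9]] * 4
        ref = [[10, 10, 10, 10]] * 4
        print(is_zero_block(cur, ref, qp=28))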

  3. Improved lossless intra coding for H.264/MPEG-4 AVC.

    PubMed

    Lee, Yung-Lyul; Han, Ki-Hun; Sullivan, Gary J

    2006-09-01

    A new lossless intra coding method based on sample-by-sample differential pulse code modulation (DPCM) is presented as an enhancement of the H.264/MPEG-4 AVC standard. The H.264/AVC design includes a multidirectional spatial prediction method to reduce spatial redundancy by using neighboring samples as a prediction for the samples in a block of data to be encoded. In the new lossless intra coding method, the spatial prediction is performed based on samplewise DPCM instead of in the block-based manner used in the current H.264/AVC standard, while the block structure is retained for the residual difference entropy coding process. We show that the new method, based on samplewise DPCM, does not have a major complexity penalty, despite its apparent pipeline dependencies. Experiments show that the new lossless intra coding method reduces the bit rate by approximately 12% in comparison with the lossless intra coding method previously included in the H.264/AVC standard. As a result, the new method is currently being adopted into the H.264/AVC standard in a new enhancement project.
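
    A minimal sketch of samplewise DPCM (horizontal prediction only; the standard supports more directions) shows why the scheme is lossless: each sample is predicted from its reconstructed left neighbour, and the integer residual is what would be entropy coded. The boundary predictor value is an assumption.

        import numpy as np

        def dpcm_encode_row(row, boundary=128):
            pred = np.empty_like(row)
            pred[0], pred[1:] = boundary, row[:-1]   # left-neighbour prediction
            return row - pred                        # residual to entropy code

        def dpcm_decode_row(res, boundary=128):
            rec, prev = np.empty_like(res), boundary
            for i, r in enumerate(res):
                rec[i] = prev + r
                prev = rec[i]
            return rec

        row = np.array([120, 121, 119, 119, 125], dtype=np.int32)
        assert np.array_equal(dpcm_decode_row(dpcm_encode_row(row)), row)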

  4. Performance Bounds on Two Concatenated, Interleaved Codes

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Dolinar, Samuel

    2010-01-01

    A method has been developed of computing bounds on the performance of a code comprised of two linear binary codes generated by two encoders serially concatenated through an interleaver. Originally intended for use in evaluating the performances of some codes proposed for deep-space communication links, the method can also be used in evaluating the performances of short-block-length codes in other applications. The method applies, more specifically, to a communication system in which the following processes take place: At the transmitter, the original binary information that one seeks to transmit is first processed by an encoder into an outer code (Co) characterized by, among other things, a pair of numbers (n, k), where n (n > k) is the total number of code bits associated with k information bits and n - k bits are used for correcting or at least detecting errors. Next, the outer code is processed through either a block or a convolutional interleaver. In the block interleaver, the words of the outer code are processed in blocks of I words. In the convolutional interleaver, the interleaving operation is performed bit-wise in N rows with delays that are multiples of B bits. The output of the interleaver is processed through a second encoder to obtain an inner code (Ci) characterized by (ni, ki). The output of the inner code is transmitted over an additive-white-Gaussian-noise channel characterized by a symbol signal-to-noise ratio (SNR) Es/No and a bit SNR Eb/No. At the receiver, an inner decoder generates estimates of bits. Depending on whether a block or a convolutional interleaver is used at the transmitter, the sequence of estimated bits is processed through a block or a convolutional de-interleaver, respectively, to obtain estimates of code words. Then the estimates of the code words are processed through an outer decoder, which generates estimates of the original information along with flags indicating which estimates are presumed to be correct and which are found to be erroneous. From the perspective of the present method, the topic of major interest is the performance of the communication system as quantified in the word-error rate and the undetected-error rate as functions of the SNRs and the total latency of the interleaver and inner code. The method is embodied in equations that describe bounds on these functions. Throughout the derivation of the equations that embody the method, it is assumed that the decoder for the outer code corrects any error pattern of t or fewer errors, detects any error pattern of s or fewer errors, may detect some error patterns of more than s errors, and does not correct any patterns of more than t errors. Because a mathematically complete description of the equations that embody the method and of their derivation would greatly exceed the space available for this article, it must suffice to summarize by reporting that the derivation includes consideration of several complex issues, including relationships between latency and memory requirements for block and convolutional codes, burst error statistics, enumeration of error-event intersections, and effects of different interleaving depths. In a demonstration, the method was used to calculate bounds on the performances of several communication systems, each based on serial concatenation of a (63,56) expurgated Hamming code with a convolutional inner code through a convolutional interleaver. The bounds calculated by use of the method were compared with results of numerical simulations of the performances of the systems to show the regions where the bounds are tight.
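
    The block interleaver mentioned above is easy to pin down with a sketch: words are written row by row and read column by column, so a channel burst of length b is spread as at most ceil(b/I) errors per word. Sizes and names here are illustrative.

        def interleave(words):
            """Write I equal-length words row by row, read column by column."""
            return [w[j] for j in range(len(words[0])) for w in words]

        def deinterleave(stream, depth, n):
            words = [[0] * n for _ in range(depth)]
            for k, bit in enumerate(stream):
                words[k % depth][k // depth] = bit
            return words

        words = [[1, 0, 0], [0, 1, 0], [1, 1, 1], [0, 0, 1]]
        stream = interleave(words)
        assert deinterleave(stream, depth=4, n=3) == words
        print(stream)   # a 3-bit channel burst now hits 3 different words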

  5. Extracellular Matrix Induced Integrin Signal Transduction and Breast Cancer Invasion.

    DTIC Science & Technology

    1995-10-01

    Keywords: metalloproteinase, breast, mammary, integrin, collagen, RGDS, matrilysin, breast cancer. ... areas of necrosis in the center of the tumor; a portion of the mammary gland can be seen in the lower right. The matrilysin in situ showed...

  6. Purification and spectroscopic characterization of Ctb, a group III truncated hemoglobin implicated in oxygen metabolism in the food-borne pathogen Campylobacter jejuni†

    PubMed Central

    Wainwright, Laura M.; Wang, Yinghua; Park, Simon F.; Yeh, Syun-Ru; Poole, Robert K.

    2008-01-01

    Campylobacter jejuni is a foodborne bacterial pathogen that possesses two distinct hemoglobins, encoded by the ctb and cgb genes. The former codes for a truncated hemoglobin (Ctb) in group III, an assemblage of uncharacterized globins in diverse clinically and technologically significant bacteria. Here, we show that Ctb purifies as a monomeric, predominantly oxygenated species. Optical spectra of the ferric, ferrous, O2- and CO-bound forms resemble those of other hemoglobins. However, resonance Raman analysis shows Ctb to have an atypical νFe-CO stretching mode at 514 cm-1, compared to the other truncated hemoglobins that have been characterized so far. This implies unique roles in ligand stabilisation for TyrB10, HisE7 and TrpG8, residues highly conserved within group III truncated hemoglobins. Since C. jejuni is a microaerophile, and a ctb mutant exhibits O2-dependent growth defects, one of the hypothesised roles of Ctb is in the detoxification, sequestration or transfer of O2. The midpoint potential (Eh) of Ctb was found to be −33 mV, but no evidence was obtained in vitro to support the hypothesis that Ctb is reducible by NADH or NADPH. This truncated hemoglobin may function in the facilitation of O2 transfer to one of the terminal oxidases of C. jejuni, or instead facilitate O2 transfer to Cgb for NO detoxification. PMID:16681372

  7. No evidence that protein truncating variants in BRIP1 are associated with breast cancer risk: implications for gene panel testing

    PubMed Central

    Easton, Douglas F; Lesueur, Fabienne; Decker, Brennan; Michailidou, Kyriaki; Li, Jun; Allen, Jamie; Luccarini, Craig; Pooley, Karen A; Shah, Mitul; Bolla, Manjeet K; Wang, Qin; Dennis, Joe; Ahmad, Jamil; Thompson, Ella R; Damiola, Francesca; Pertesi, Maroulio; Voegele, Catherine; Mebirouk, Noura; Robinot, Nivonirina; Durand, Geoffroy; Forey, Nathalie; Luben, Robert N; Ahmed, Shahana; Aittomäki, Kristiina; Anton-Culver, Hoda; Arndt, Volker; Baynes, Caroline; Beckman, Matthias W; Benitez, Javier; Van Den Berg, David; Blot, William J; Bogdanova, Natalia V; Bojesen, Stig E; Brenner, Hermann; Chang-Claude, Jenny; Chia, Kee Seng; Choi, Ji-Yeob; Conroy, Don M; Cox, Angela; Cross, Simon S; Czene, Kamila; Darabi, Hatef; Devilee, Peter; Eriksson, Mikael; Fasching, Peter A; Figueroa, Jonine; Flyger, Henrik; Fostira, Florentia; García-Closas, Montserrat; Giles, Graham G; Glendon, Gord; González-Neira, Anna; Guénel, Pascal; Haiman, Christopher A; Hall, Per; Hart, Steven N; Hartman, Mikael; Hooning, Maartje J; Hsiung, Chia-Ni; Ito, Hidemi; Jakubowska, Anna; James, Paul A; John, Esther M; Johnson, Nichola; Jones, Michael; Kabisch, Maria; Kang, Daehee; Kosma, Veli-Matti; Kristensen, Vessela; Lambrechts, Diether; Li, Na; Lindblom, Annika; Long, Jirong; Lophatananon, Artitaya; Lubinski, Jan; Mannermaa, Arto; Manoukian, Siranoush; Margolin, Sara; Matsuo, Keitaro; Meindl, Alfons; Mitchell, Gillian; Muir, Kenneth; Nevelsteen, Ines; van den Ouweland, Ans; Peterlongo, Paolo; Phuah, Sze Yee; Pylkäs, Katri; Rowley, Simone M; Sangrajrang, Suleeporn; Schmutzler, Rita K; Shen, Chen-Yang; Shu, Xiao-Ou; Southey, Melissa C; Surowy, Harald; Swerdlow, Anthony; Teo, Soo H; Tollenaar, Rob A E M; Tomlinson, Ian; Torres, Diana; Truong, Thérèse; Vachon, Celine; Verhoef, Senno; Wong-Brown, Michelle; Zheng, Wei; Zheng, Ying; Nevanlinna, Heli; Scott, Rodney J; Andrulis, Irene L; Wu, Anna H; Hopper, John L; Couch, Fergus J; Winqvist, Robert; Burwinkel, Barbara; Sawyer, Elinor J; Schmidt, Marjanka K; Rudolph, Anja; Dörk, Thilo; Brauch, Hiltrud; Hamann, Ute; Neuhausen, Susan L; Milne, Roger L; Fletcher, Olivia; Pharoah, Paul D P; Campbell, Ian G; Dunning, Alison M; Le Calvez-Kelm, Florence; Goldgar, David E; Tavtigian, Sean V; Chenevix-Trench, Georgia

    2016-01-01

    Background BRCA1 interacting protein C-terminal helicase 1 (BRIP1) is one of the Fanconi Anaemia Complementation (FANC) group family of DNA repair proteins. Biallelic mutations in BRIP1 are responsible for FANC group J, and previous studies have also suggested that rare protein truncating variants in BRIP1 are associated with an increased risk of breast cancer. These studies have led to inclusion of BRIP1 on targeted sequencing panels for breast cancer risk prediction. Methods We evaluated a truncating variant, p.Arg798Ter (rs137852986), and 10 missense variants of BRIP1, in 48,144 cases and 43,607 controls of European origin, drawn from 41 studies participating in the Breast Cancer Association Consortium (BCAC). Additionally, we sequenced the coding regions of BRIP1 in 13,213 cases and 5,242 controls from the UK, 1,313 cases and 1,123 controls from three population-based studies as part of the Breast Cancer Family Registry, and 1,853 familial cases and 2,001 controls from Australia. Results The rare truncating allele of rs137852986 was observed in 23 cases and 18 controls in Europeans in BCAC (OR 1.09, 95% CI 0.58 to 2.03, p=0.79). Truncating variants were found in the sequencing studies in 34 cases (0.21%) and 19 controls (0.23%) (combined OR 0.90, 95% CI 0.48 to 1.70, p=0.75). Conclusions These results suggest that truncating variants in BRIP1, and in particular p.Arg798Ter, are not associated with a substantial increase in breast cancer risk. Such observations have important implications for the reporting of results from breast cancer screening panels. PMID:26921362

  8. Discrete Sparse Coding.

    PubMed

    Exarchakis, Georgios; Lücke, Jörg

    2017-11-01

    Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization; (2) we use image patches of natural images and discuss the role of the prior for the extraction of image components; (3) we use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy; and (4) we apply the algorithm to the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to realistic data sets and provide novel statistical quantities to describe the structure of the data.

  9. A new conformal absorbing boundary condition for finite element meshes and parallelization of FEMATS

    NASA Technical Reports Server (NTRS)

    Chatterjee, A.; Volakis, J. L.; Nguyen, J.; Nurnberger, M.; Ross, D.

    1993-01-01

    Some of the progress toward the development and parallelization of an improved version of the finite element code FEMATS is described. This is a finite element code for computing the scattering by arbitrarily shaped, three-dimensional composite scatterers. The following tasks were worked on during the report period: (1) new absorbing boundary conditions (ABCs) for truncating the finite element mesh; (2) mixed mesh termination schemes; (3) hierarchical elements and multigridding; (4) parallelization; and (5) various modeling enhancements (antenna feeds, anisotropy, and higher order GIBC).

  10. Coded diffraction system in X-ray crystallography using a boolean phase coded aperture approximation

    NASA Astrophysics Data System (ADS)

    Pinilla, Samuel; Poveda, Juan; Arguello, Henry

    2018-03-01

    Phase retrieval is a problem present in many applications such as optics, astronomical imaging, computational biology and X-ray crystallography. Recent work has shown that the phase can be better recovered when the acquisition architecture includes a coded aperture, which modulates the signal before diffraction, such that the underlying signal is recovered from coded diffraction patterns. Moreover, this type of modulation effect, before the diffraction operation, can be obtained using a phase coded aperture placed just after the sample under study. However, a practical implementation of a phase coded aperture in an X-ray application is not feasible, because it is computationally modeled as a matrix with complex entries, which requires changing the phase of the diffracted beams. In fact, changing the phase implies finding a material that allows the direction of an X-ray beam to be deviated, which can considerably increase the implementation costs. Hence, this paper describes a low-cost coded X-ray diffraction system based on block-unblock coded apertures that enables phase reconstruction. The proposed system approximates the phase coded aperture with a block-unblock coded aperture by using the detour-phase method. Moreover, the SAXS/WAXS X-ray crystallography software was used to simulate the diffraction patterns of a real crystal structure called Rhombic Dodecahedron. Additionally, several simulations were carried out to analyze the performance of the block-unblock approximations in recovering the phase, using the simulated diffraction patterns. Furthermore, the quality of the reconstructions was measured in terms of the Peak Signal to Noise Ratio (PSNR). Results show that the performance of the block-unblock phase coded aperture approximation decreases by at most 12.5% compared with the phase coded aperture. Moreover, the quality of the reconstructions using the Boolean approximations is up to 2.5 dB of PSNR below that of the phase coded aperture reconstructions.
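
    The measurement model can be sketched in a few lines: the object is multiplied by the coded aperture and only the squared magnitude of its Fourier transform is kept. The mask statistics and sizes below are assumptions; the SAXS/WAXS simulation pipeline is not reproduced.

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.standard_normal((64, 64))                    # stand-in object

        phase_mask = np.exp(2j * np.pi * rng.random((64, 64)))     # unit modulus
        binary_mask = (rng.random((64, 64)) > 0.5).astype(float)   # block-unblock

        y_phase = np.abs(np.fft.fft2(phase_mask * x)) ** 2    # phase-coded pattern
        y_binary = np.abs(np.fft.fft2(binary_mask * x)) ** 2  # approximation

        print(y_phase.shape, y_binary.shape)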

  11. Ensemble Weight Enumerators for Protograph LDPC Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush

    2006-01-01

    Recently, LDPC codes with projected-graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. The derived results on ensemble weight enumerators show that the linear-minimum-distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.

  12. Gene expression of galectin-9/ecalectin, a potent eosinophil chemoattractant, and/or the insertional isoform in human colorectal carcinoma cell lines and detection of frame-shift mutations for protein sequence truncations in the second functional lectin domain.

    PubMed

    Lahm, H; Hoeflich, A; Andre, S; Sordat, B; Kaltner, H; Wolf, E; Gabius, H J

    2000-09-01

    The family of Ca2+-independent galactoside-binding lectins with the beta-strand topology of the jelly-roll, referred to as galectins, is known to mediate and modulate a variety of cellular activities. Their functional versatility explains the current interest in monitoring their expression in cancer research, so far primarily focused on galectin-1 and -3. Tandem-repeat-type galectin-9 and its (most probably) allelic variant ecalectin, a potent eosinophil chemoattractant, are known to be human leukocyte products. We show by RT-PCR with primers specific for both that their mRNA is expressed in 17 of 21 human colorectal cancer lines. As also indicated by restriction analysis, in addition to the expected transcript of 571 bp, an otherwise identical isoform coding for a 32-amino acid extension of the link peptide was detected. Positive cell lines differentially expressed either one (7 lines) or both transcripts (10 lines). Sequence analysis of RT-PCR products, performed in four cases, allowed us to assign the standard transcript to ecalectin in the case of SW480 cells and detected two point mutations in the insert of the link peptide-coding sequence in WiDr and Colo205. Furthermore, this analysis identified the insertion of a single nucleotide into the coding sequence, generating a frame-shift mutation, an event which has so far not been reported for any galectin. This alteration, encountered in both transcripts of the WiDr line and the isoform transcript of Colo205 cells, will most likely truncate the protein within the second (C-terminal) carbohydrate recognition domain. Our results thus reveal the presence of mRNA for a galectin-9 isoform, a potent eosinophil chemoattractant (ecalectin), or a truncated version thereof with preserved N-terminal carbohydrate recognition domain in established human colon cancer cell lines.

  13. Novel germline PALB2 truncating mutations in African-American breast cancer patients

    PubMed Central

    Zheng, Yonglan; Zhang, Jing; Niu, Qun; Huo, Dezheng; Olopade, Olufunmilayo I.

    2011-01-01

    Background It has been demonstrated that PALB2 acts as a bridging molecule between the BRCA1 and BRCA2 proteins and is responsible for facilitating BRCA2-mediated DNA repair. Truncating mutations in the PALB2 gene have been reported to be enriched in Fanconi anemia and breast cancer patients in various populations. Methods We evaluated the contribution of PALB2 germline mutations in 279 African-American breast cancer patients including 29 patients with a strong family history, 29 patients with a moderate family history, 75 patients with a weak family history, and 146 non-familial or sporadic breast cancer cases. Results After direct sequencing of all the coding exons, exon/intron boundaries, 5′UTR and 3′UTR of PALB2, three (1.08%; 3 in 279) novel monoallelic truncating mutations were identified: c.758dupT (exon4), c.1479delC (exon4) and c.3048delT (exon 10); together with 50 sequence variants, 27 of which are novel. None of the truncating mutations were found in 262 controls from the same population. Conclusions PALB2 mutations are present in both familial and non-familial breast cancer among African-Americans. Rare PALB2 mutations account for a small but substantial proportion of breast cancer patients. PMID:21932393

  14. Energy and Quality-Aware Multimedia Signal Processing

    NASA Astrophysics Data System (ADS)

    Emre, Yunus

    Today's mobile devices have to support computation-intensive multimedia applications with a limited energy budget. In this dissertation, we present architecture-level and algorithm-level techniques that reduce the energy consumption of these devices with minimal impact on system quality. First, we present novel techniques to mitigate the effects of SRAM memory failures in JPEG2000 implementations operating at scaled voltages. We investigate error control coding schemes and propose an unequal error protection scheme tailored for JPEG2000 that reduces overhead without affecting the performance. Furthermore, we propose algorithm-specific techniques for error compensation that exploit the fact that in JPEG2000 the discrete wavelet transform outputs have larger values for low frequency subband coefficients and smaller values for high frequency subband coefficients. Next, we present the use of voltage overscaling to reduce the data-path power consumption of JPEG codecs. We propose an algorithm-specific technique which exploits the characteristics of the quantized coefficients after zig-zag scan to mitigate errors introduced by aggressive voltage scaling. Third, we investigate the effect of reducing the dynamic range for datapath energy reduction. We analyze the effect of truncation error and propose a scheme that estimates the mean value of the truncation error during the pre-computation stage and compensates for this error. Such a scheme is very effective for reducing the noise power in applications that are dominated by additions and multiplications, such as FIR filtering and transform computation. We also present a novel sum of absolute difference (SAD) scheme that is based on most significant bit truncation. The proposed scheme exploits the fact that most of the absolute difference (AD) calculations result in small values, and that most of the large AD values do not contribute to the SAD values of the blocks that are selected. Such a scheme is highly effective in reducing the energy consumption of the motion estimation and intra-prediction kernels in video codecs. Finally, we present several hybrid energy-saving techniques based on combinations of voltage scaling, computation reduction and dynamic range reduction that further reduce the energy consumption while keeping the performance degradation very low. For instance, a combination of computation reduction and dynamic range reduction for the Discrete Cosine Transform shows, on average, a 33% to 46% reduction in energy consumption while incurring only a 0.5 dB to 1.5 dB loss in PSNR.
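
    One behavioural reading of the truncated SAD (a software model only; the dissertation's scheme is a hardware datapath) saturates each absolute difference to its low-order bits, which leaves well-matched blocks almost unaffected while deliberately under-weighting the rare large differences.

        import numpy as np

        def sad_exact(a, b):
            return int(np.abs(a - b).sum())

        def sad_truncated(a, b, keep_bits=4):
            cap = (1 << keep_bits) - 1          # drop high-order AD bits by saturating
            return int(np.minimum(np.abs(a - b), cap).sum())

        rng = np.random.default_rng(0)
        a = rng.integers(0, 256, (16, 16))
        b = a + rng.integers(-8, 9, (16, 16))   # a well-matched candidate block
        print(sad_exact(a, b), sad_truncated(a, b))   # nearly identical here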

  15. An efficient HZETRN (a galactic cosmic ray transport code)

    NASA Technical Reports Server (NTRS)

    Shinn, Judy L.; Wilson, John W.

    1992-01-01

    An accurate and efficient engineering code for analyzing the shielding requirements against high-energy galactic heavy ions is needed. HZETRN is a deterministic code developed at Langley Research Center that is constantly under improvement in both physics and numerical computation and is targeted for such use. One problem area connected with the space-marching technique used in this code is the propagation of the local truncation error. By improving the numerical algorithms for interpolation, integration, and the grid distribution formula, the efficiency of the code is increased by a factor of eight as the number of energy grid points is reduced. A numerical accuracy of better than 2 percent for a shield thickness of 150 g/cm(exp 2) is found when a 45-point energy grid is used. The propagating step size, which is related to the perturbation theory, is also reevaluated.

  16. Error Control Coding Techniques for Space and Satellite Communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Takeshita, Oscar Y.; Cabral, Hermano A.

    1998-01-01

    It is well known that the BER performance of a parallel concatenated turbo-code improves roughly as 1/N, where N is the information block length. However, it has been observed by Benedetto and Montorsi that for most parallel concatenated turbo-codes, the FER performance does not improve monotonically with N. In this report, we study the FER of turbo-codes, and the effects of their concatenation with an outer code. Two methods of concatenation are investigated: across several frames and within each frame. Some asymmetric codes are shown to have excellent FER performance with an information block length of 16384. We also show that the proposed outer coding schemes can improve the BER performance as well by eliminating pathological frames generated by the iterative MAP decoding process.

  17. The random coding bound is tight for the average code.

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.

    1973-01-01

    The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.

  18. Guide on the Effective Block Approach for the Fatigue Life Assessment of Metallic Structures

    DTIC Science & Technology

    2013-01-01

    Keywords and abbreviations from the report: load interpretation; truncation; validation coupon test program; NDI, Non-Destructive Inspection; QF, Quantitative Fractography; RAAF, Royal Australian... even more so with the advent of quantitative fractography. LEFM forms the basis of most state-of-the-art CG models. ... The preferred method for obtaining the CGR data is by quantitative fractography (QF). This method is well suited to small cracks where other measurement...

  19. Gibbs sampling on large lattice with GMRF

    NASA Astrophysics Data System (ADS)

    Marcotte, Denis; Allard, Denis

    2018-02-01

    Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields to category fields obtained by discrete simulation methods like multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfactory, as it can diverge and does not reproduce the desired covariance exactly. A better approach is to use Gaussian Markov Random Fields (GMRFs), which make it possible to compute the conditional distributions at any point without having to compute and invert the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence and the effects of the choice of boundary conditions, the correlation range and the GMRF smoothness. We show that convergence is slower in the Gaussian case on the torus than in the finite case studied in the literature. However, in the truncated Gaussian case, we show that short-scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long-scale correlation. Hence our approach makes it practical to apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
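
    The coding-set idea can be sketched on a first-order GMRF, where the black and white squares of a checkerboard share no neighbours and can each be updated simultaneously; beta, the unit conditional variance and the torus boundary are illustrative choices, and the truncation step is only indicated.

        import numpy as np

        rng = np.random.default_rng(0)
        n, beta, sweeps = 128, 0.24, 100         # beta < 0.25 keeps the model valid
        z = rng.standard_normal((n, n))
        ii, jj = np.indices((n, n))
        color = (ii + jj) % 2                    # the two coding sets

        def neighbour_sum(z):                    # torus boundary for simplicity
            return (np.roll(z, 1, 0) + np.roll(z, -1, 0)
                    + np.roll(z, 1, 1) + np.roll(z, -1, 1))

        for _ in range(sweeps):
            for c in (0, 1):
                mask = color == c
                mu = beta * neighbour_sum(z)     # conditional means, all at once
                z[mask] = mu[mask] + rng.standard_normal(int(mask.sum()))
                # truncation (e.g. keep z > 0 where a category is observed)
                # would be imposed here by accept/reject

        print(z.std())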

  20. Dynamic Detection of Malicious Code in COTS Software

    DTIC Science & Technology

    2000-04-01

    ...run the following documented hostile applets or ActiveX controls. Most of these tools work only on mobile code (Java, ActiveX). [Table fragments: eSafe Protect Desktop blocked 9/9 hostile applets, 13/17...; Surfinshield Online blocked 9/9, 13/17...] Exploder is an ActiveX control that performs a clean shutdown of your computer. The interface is attractive, although rather complex, as McLain's...

  1. Power optimization of wireless media systems with space-time block codes.

    PubMed

    Yousefi'zadeh, Homayoun; Jafarkhani, Hamid; Moshfeghi, Mehran

    2004-07-01

    We present analytical and numerical solutions to the problem of power control in wireless media systems with multiple antennas. We formulate a set of optimization problems aimed at minimizing the total power consumption of wireless media systems subject to a given level of QoS and an available bit rate. Our formulation takes into consideration the power consumption related to source coding, channel coding, and the transmission of multiple transmit antennas. In our study, we consider Gauss-Markov and video source models, Rayleigh fading channels along with the Bernoulli/Gilbert-Elliott loss models, and space-time block codes.
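
    For concreteness, the simplest space-time block code of the family used here is Alamouti's rate-1 two-antenna scheme; the sketch below (noiseless, single receive antenna, invented channel gains) shows the encoding matrix and the linear combining that recovers the symbols scaled by the channel energy.

        import numpy as np

        def alamouti_encode(s1, s2):
            """Rows are time slots, columns are the two transmit antennas."""
            return np.array([[s1, s2],
                             [-np.conj(s2), np.conj(s1)]])

        def alamouti_combine(r1, r2, h1, h2):
            s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
            s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
            return s1_hat, s2_hat                # = (|h1|^2 + |h2|^2) * (s1, s2)

        h1, h2 = 0.8 + 0.3j, -0.2 + 0.9j         # invented channel gains
        s1, s2 = 1 + 1j, -1 + 1j
        X = alamouti_encode(s1, s2)
        r1 = h1 * X[0, 0] + h2 * X[0, 1]         # slot 1, noiseless
        r2 = h1 * X[1, 0] + h2 * X[1, 1]         # slot 2
        print(alamouti_combine(r1, r2, h1, h2))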

  2. A Shifted Block Lanczos Algorithm 1: The Block Recurrence

    NASA Technical Reports Server (NTRS)

    Grimes, Roger G.; Lewis, John G.; Simon, Horst D.

    1990-01-01

    In this paper we describe a block Lanczos algorithm that is used as the key building block of a software package for the extraction of eigenvalues and eigenvectors of large sparse symmetric generalized eigenproblems. The software package comprises: a version of the block Lanczos algorithm specialized for spectrally transformed eigenproblems; an adaptive strategy for choosing shifts, and efficient codes for factoring large sparse symmetric indefinite matrices. This paper describes the algorithmic details of our block Lanczos recurrence. This uses a novel combination of block generalizations of several features that have only been investigated independently in the past. In particular new forms of partial reorthogonalization, selective reorthogonalization and local reorthogonalization are used, as is a new algorithm for obtaining the M-orthogonal factorization of a matrix. The heuristic shifting strategy, the integration with sparse linear equation solvers and numerical experience with the code are described in a companion paper.
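
    As an illustration of the basic recurrence only (without the paper's spectral transformation, shifting strategy, or partial/selective reorthogonalization, which are essential in practice), a bare-bones block Lanczos sketch for a dense symmetric matrix might look as follows; all names are illustrative.

    ```python
    # Minimal block Lanczos recurrence for a symmetric matrix A (standard
    # eigenproblem; no reorthogonalization, so use few steps).
    import numpy as np

    def block_lanczos_eigs(A, p=4, steps=10, rng=None):
        """Return Ritz values from `steps` block Lanczos steps with block size p."""
        rng = np.random.default_rng() if rng is None else rng
        n = A.shape[0]
        Q, _ = np.linalg.qr(rng.standard_normal((n, p)))  # random orthonormal start
        Qprev = np.zeros((n, p))
        B = np.zeros((p, p))
        alphas, betas = [], []
        for _ in range(steps):
            W = A @ Q - Qprev @ B.T             # three-term block recurrence
            Alpha = Q.T @ W                     # diagonal block coefficient
            W -= Q @ Alpha
            Qprev, (Q, B) = Q, np.linalg.qr(W)  # QR yields next block and beta
            alphas.append(Alpha)
            betas.append(B)
        m = p * steps                           # assemble block tridiagonal T
        T = np.zeros((m, m))
        for j in range(steps):
            T[j*p:(j+1)*p, j*p:(j+1)*p] = alphas[j]
            if j + 1 < steps:
                T[(j+1)*p:(j+2)*p, j*p:(j+1)*p] = betas[j]
                T[j*p:(j+1)*p, (j+1)*p:(j+2)*p] = betas[j].T
        return np.linalg.eigvalsh(T)            # extremes approximate A's spectrum
    ```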

  3. WHIM syndrome caused by a single amino acid substitution in the carboxy-tail of chemokine receptor CXCR4

    PubMed Central

    Liu, Qian; Chen, Haoqian; Ojode, Teresa; Gao, Xiangxi; Anaya-O'Brien, Sandra; Turner, Nicholas A.; Ulrick, Jean; DeCastro, Rosamma; Kelly, Corin; Cardones, Adela R.; Gold, Stuart H.; Hwang, Eugene I.; Wechsler, Daniel S.; Malech, Harry L.; Murphy, Philip M.

    2012-01-01

    WHIM syndrome is a rare, autosomal dominant, immunodeficiency disorder so-named because it is characterized by warts, hypogammaglobulinemia, infections, and myelokathexis (defective neutrophil egress from the BM). Gain-of-function mutations that truncate the C-terminus of the chemokine receptor CXCR4 by 10-19 amino acids cause WHIM syndrome. We have identified a family with autosomal dominant inheritance of WHIM syndrome that is caused by a missense mutation in CXCR4, E343K (1027G → A). This mutation is also located in the C-terminal domain, a region responsible for negative regulation of the receptor. Accordingly, like CXCR4R334X, the most common truncation mutation in WHIM syndrome, CXCR4E343K mediated approximately 2-fold increased signaling in calcium flux and chemotaxis assays relative to wild-type CXCR4; however, CXCR4E343K had a reduced effect on blocking normal receptor down-regulation from the cell surface. Therefore, in addition to truncating mutations in the C-terminal domain of CXCR4, WHIM syndrome may be caused by a single charge-changing amino acid substitution in this domain, E343K, that results in increased receptor signaling. PMID:22596258

  4. Least Reliable Bits Coding (LRBC) for high data rate satellite communications

    NASA Technical Reports Server (NTRS)

    Vanderaar, Mark; Wagner, Paul; Budinger, James

    1992-01-01

    An analysis and discussion of a bandwidth efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) vs. channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 = (log2(8))(8/9) information bps/Hz. Bit-by-bit coded and uncoded error probabilities with soft-decision information are determined. These are traded off with code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high-speed commercial Very Large Scale Integration (VLSI) for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.
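
    The quoted spectral efficiency is just the modulation's coded bits per symbol scaled by the ensemble code rate; a two-line check:

    ```python
    # Quick check of the quoted spectral efficiency: rate-8/9-coded 8PSK carries
    # log2(8) = 3 coded bits per symbol, scaled by the ensemble code rate.
    from math import log2

    bits_per_symbol = log2(8)               # 3 coded bits per 8PSK symbol
    ensemble_rate = 8 / 9                   # overall LRBC code rate
    print(bits_per_symbol * ensemble_rate)  # -> 2.666..., the 2.67 bps/Hz quoted
    ```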

  5. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-block-sized transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coder codes any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
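
    A toy rendering of the threshold-driven mixture idea, with a crude keep-the-largest-coefficients DCT quantizer standing in for the paper's vector-quantized coders; block sizes and the threshold are assumptions:

    ```python
    # Hedged sketch: try a coarse DCT coder on a block; only if its distortion
    # exceeds a threshold, split the block and recurse (square, power-of-2 sizes).
    import numpy as np
    from scipy.fft import dctn, idctn

    def code_block(block, thresh, min_size=4, keep_frac=0.1):
        block = np.asarray(block, dtype=float)
        coeffs = dctn(block, norm='ortho')
        k = max(1, int(keep_frac * block.size))
        cutoff = np.partition(np.abs(coeffs).ravel(), -k)[-k]
        approx = idctn(np.where(np.abs(coeffs) >= cutoff, coeffs, 0), norm='ortho')
        mse = np.mean((block - approx) ** 2)
        if mse <= thresh or block.shape[0] <= min_size:
            return approx                       # this coder is good enough
        h = block.shape[0] // 2                 # otherwise split into 4 sub-blocks
        out = np.empty_like(block)
        for i in (0, h):
            for j in (0, h):
                out[i:i+h, j:j+h] = code_block(block[i:i+h, j:j+h],
                                               thresh, min_size, keep_frac)
        return out
    ```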

  6. Dynamic Modeling of GAIT System Reveals Transcriptome Expansion and Translational Trickle Control Device

    PubMed Central

    Yao, Peng; Potdar, Alka A.; Arif, Abul; Ray, Partho Sarothi; Mukhopadhyay, Rupak; Willard, Belinda; Xu, Yichi; Yan, Jun; Saidel, Gerald M.; Fox, Paul L.

    2012-01-01

    SUMMARY Post-transcriptional regulatory mechanisms superimpose “fine-tuning” control upon “on-off” switches characteristic of gene transcription. We have exploited computational modeling with experimental validation to resolve an anomalous relationship between mRNA expression and protein synthesis. Differential GAIT (Gamma-interferon Activated Inhibitor of Translation) complex activation repressed VEGF-A synthesis to a low, constant rate despite high, variable VEGFA mRNA expression. Dynamic model simulations indicated the presence of an unidentified, inhibitory GAIT element-interacting factor. We discovered a truncated form of glutamyl-prolyl tRNA synthetase (EPRS), the GAIT constituent that binds the 3’-UTR GAIT element in target transcripts. The truncated protein, EPRSN1, prevents binding of functional GAIT complex. EPRSN1 mRNA is generated by a remarkable polyadenylation-directed conversion of a Tyr codon in the EPRS coding sequence to a stop codon (PAY*). By low-level protection of GAIT element-bearing transcripts, EPRSN1 imposes a robust “translational trickle” of target protein expression. Genome-wide analysis shows PAY* generates multiple truncated transcripts thereby contributing to transcriptome expansion. PMID:22386318

  7. Symplectic Propagation of the Map, Tangent Map and Tangent Map Derivative through Quadrupole and Combined-Function Dipole Magnets without Truncation

    NASA Astrophysics Data System (ADS)

    Bruhwiler, D. L.; Cary, J. R.; Shasharina, S.

    1998-04-01

    The MAPA accelerator modeling code symplectically advances the full nonlinear map, tangent map and tangent map derivative through all accelerator elements. The tangent map and its derivative are nonlinear generalizations of Brown's first- and second-order matrices (K. Brown, SLAC-75, Rev. 4 (1982), pp. 107-118), and they are valid even near the edges of the dynamic aperture, which may be beyond the radius of convergence for a truncated Taylor series. In order to avoid truncation of the map and its derivatives, the Hamiltonian is split into pieces for which the map can be obtained analytically. Yoshida's method (H. Yoshida, Phys. Lett. A 150 (1990), pp. 262-268) is then used to obtain a symplectic approximation to the map, while the tangent map and its derivative are appropriately composed at each step to obtain them with equal accuracy. We discuss our splitting of the quadrupole and combined-function dipole Hamiltonians and show that typically only a few steps are required for a high-energy accelerator.

  8. Adaptive EAGLE dynamic solution adaptation and grid quality enhancement

    NASA Technical Reports Server (NTRS)

    Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.

    1992-01-01

    In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called, either by a computational fluid dynamics (CFD) code to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.

  9. Posteriori error determination and grid adaptation for AMR and ALE computational fluid dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapenta, G. M.

    2002-01-01

    We discuss grid adaptation for application to AMR and ALE codes. Two new contributions are presented. First, a new method to locate the regions where truncation error is being created due to insufficient accuracy: the operator recovery error origin (OREO) detector. The OREO detector is automatic, reliable, easy to implement and extremely inexpensive. Second, a new grid motion technique is presented for application to ALE codes. The method is based on the Brackbill-Saltzman approach, but it is directly linked to the OREO detector and moves the grid automatically to minimize the error.

  10. A POT1 mutation implicates defective telomere end fill-in and telomere truncations in Coats plus

    PubMed Central

    Takai, Hiroyuki; Jenkinson, Emma; Kabir, Shaheen; Babul-Hirji, Riyana; Najm-Tehrani, Nasrin; Chitayat, David A.; Crow, Yanick J.; de Lange, Titia

    2016-01-01

    Coats plus (CP) can be caused by mutations in the CTC1 component of CST, which promotes polymerase α (polα)/primase-dependent fill-in throughout the genome and at telomeres. The cellular pathology relating to CP has not been established. We identified a homozygous POT1 S322L substitution (POT1CP) in two siblings with CP. POT1CP induced a proliferative arrest that could be bypassed by telomerase. POT1CP was expressed at normal levels, bound TPP1 and telomeres, and blocked ATR signaling. POT1CP was defective in regulating telomerase, leading to telomere elongation rather than the telomere shortening observed in other telomeropathies. POT1CP was also defective in the maintenance of the telomeric C strand, causing extended 3′ overhangs and stochastic telomere truncations that could be healed by telomerase. Consistent with shortening of the telomeric C strand, metaphase chromosomes showed loss of telomeres synthesized by leading strand DNA synthesis. We propose that CP is caused by a defect in POT1/CST-dependent telomere fill-in. We further propose that deficiency in the fill-in step generates truncated telomeres that halt proliferation in cells lacking telomerase, whereas, in tissues expressing telomerase (e.g., bone marrow), the truncations are healed. The proposed etiology can explain why CP presents with features distinct from those associated with telomerase defects (e.g., dyskeratosis congenita). PMID:27013236

  11. Performance analysis of a cascaded coding scheme with interleaved outer code

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

    A cascaded coding scheme for a random-error channel with a given bit-error rate is analyzed. In this scheme, the inner code C1 is an (n1, m1·l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved with a degree m1. A procedure for computing the probability of a correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.
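
    A quick numerical companion to the inner-code part of such an analysis (the parameters are illustrative, not the paper's): the probability that a t-error-correcting inner block decodes correctly on a binary symmetric channel is a binomial tail.

    ```python
    # P(inner block decoded correctly) for an n1-bit inner code correcting up to
    # t errors on a BSC with bit-error rate eps; 1-P bounds the outer-symbol
    # error rate fed to the interleaved outer code.
    from math import comb

    def p_inner_correct(n1, t, eps):
        return sum(comb(n1, i) * eps**i * (1 - eps)**(n1 - i) for i in range(t + 1))

    for eps in (1e-1, 1e-2):
        p = p_inner_correct(n1=63, t=3, eps=eps)   # illustrative parameters
        print(f"eps={eps:.0e}: P(block correct)={p:.6f}, symbol err <= {1-p:.2e}")
    ```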

  12. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    PubMed

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relatively static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference, and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher-quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with only slightly higher encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
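
    A hedged sketch of the first step, classifying blocks against the modeled background; the change measure and thresholds are assumptions for illustration, not the paper's:

    ```python
    # Label each block background / hybrid / foreground by the fraction of
    # pixels that differ from the modeled background; the label then selects
    # BRP, BDP, or conventional prediction.
    import numpy as np

    def classify_blocks(frame, background, bs=16, tau=12, lo=0.05, hi=0.6):
        H, W = frame.shape
        labels = {}
        for y in range(0, H - bs + 1, bs):
            for x in range(0, W - bs + 1, bs):
                diff = np.abs(frame[y:y+bs, x:x+bs].astype(int)
                              - background[y:y+bs, x:x+bs].astype(int))
                changed = np.mean(diff > tau)      # fraction of moving pixels
                if changed < lo:
                    labels[(y, x)] = 'background'  # -> background reference pred.
                elif changed > hi:
                    labels[(y, x)] = 'foreground'  # -> conventional prediction
                else:
                    labels[(y, x)] = 'hybrid'      # -> background difference pred.
        return labels
    ```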

  13. Binary moving-blocker-based scatter correction in cone-beam computed tomography with width-truncated projections: proof of concept.

    PubMed

    Lee, Ho; Fahimian, Benjamin P; Xing, Lei

    2017-03-21

    This paper proposes a binary moving-blocker (BMB)-based technique for scatter correction in cone-beam computed tomography (CBCT). In concept, a beam blocker consisting of lead strips, mounted in front of the x-ray tube, moves rapidly in and out of the beam during a single gantry rotation. The projections are acquired in alternating phases of blocked and unblocked cone beams, where the blocked phase results in a stripe pattern in the width direction. To derive the scatter map from the blocked projections, 1D B-Spline interpolation/extrapolation is applied by using the detected information in the shaded regions. The scatter map of the unblocked projections is corrected by averaging two scatter maps that correspond to their adjacent blocked projections. The scatter-corrected projections are obtained by subtracting the corresponding scatter maps from the projection data and are utilized to generate the CBCT image by a compressed-sensing (CS)-based iterative reconstruction algorithm. Catphan504 and pelvis phantoms were used to evaluate the method's performance. The proposed BMB-based technique provided an effective method to enhance the image quality by suppressing scatter-induced artifacts, such as ring artifacts around the bowtie area. Compared to CBCT without a blocker, the spatial nonuniformity was reduced from 9.1% to 3.1%. The root-mean-square error of the CT numbers in the regions of interest (ROIs) was reduced from 30.2 HU to 3.8 HU. In addition to high resolution, comparable to that of the benchmark image, the CS-based reconstruction also led to a better contrast-to-noise ratio in seven ROIs. The proposed technique enables complete scatter-corrected CBCT imaging with width-truncated projections and allows reducing the acquisition time to approximately half. This work may have significant implications for image-guided or adaptive radiation therapy, where CBCT is often used.

  14. Binary moving-blocker-based scatter correction in cone-beam computed tomography with width-truncated projections: proof of concept

    NASA Astrophysics Data System (ADS)

    Lee, Ho; Fahimian, Benjamin P.; Xing, Lei

    2017-03-01

    This paper proposes a binary moving-blocker (BMB)-based technique for scatter correction in cone-beam computed tomography (CBCT). In concept, a beam blocker consisting of lead strips, mounted in front of the x-ray tube, moves rapidly in and out of the beam during a single gantry rotation. The projections are acquired in alternating phases of blocked and unblocked cone beams, where the blocked phase results in a stripe pattern in the width direction. To derive the scatter map from the blocked projections, 1D B-Spline interpolation/extrapolation is applied by using the detected information in the shaded regions. The scatter map of the unblocked projections is corrected by averaging two scatter maps that correspond to their adjacent blocked projections. The scatter-corrected projections are obtained by subtracting the corresponding scatter maps from the projection data and are utilized to generate the CBCT image by a compressed-sensing (CS)-based iterative reconstruction algorithm. Catphan504 and pelvis phantoms were used to evaluate the method’s performance. The proposed BMB-based technique provided an effective method to enhance the image quality by suppressing scatter-induced artifacts, such as ring artifacts around the bowtie area. Compared to CBCT without a blocker, the spatial nonuniformity was reduced from 9.1% to 3.1%. The root-mean-square error of the CT numbers in the regions of interest (ROIs) was reduced from 30.2 HU to 3.8 HU. In addition to high resolution, comparable to that of the benchmark image, the CS-based reconstruction also led to a better contrast-to-noise ratio in seven ROIs. The proposed technique enables complete scatter-corrected CBCT imaging with width-truncated projections and allows reducing the acquisition time to approximately half. This work may have significant implications for image-guided or adaptive radiation therapy, where CBCT is often used.
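
    To illustrate the scatter-map step under stated assumptions (strips running along detector rows, a cubic smoothing spline; this is not the authors' implementation):

    ```python
    # In a blocked projection, signal under the lead strips is (mostly) scatter;
    # interpolate/extrapolate it across the open regions with a 1D B-spline
    # along each detector row.
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    def scatter_map(blocked_proj, shaded_mask, smooth=None):
        """blocked_proj: 2D projection; shaded_mask: True under the strips.
        Each row needs at least 4 shaded samples for a cubic spline."""
        S = np.empty_like(blocked_proj, dtype=float)
        cols = np.arange(blocked_proj.shape[1])
        for r in range(blocked_proj.shape[0]):
            xs = cols[shaded_mask[r]]
            ys = blocked_proj[r, shaded_mask[r]]
            spl = UnivariateSpline(xs, ys, k=3, s=smooth, ext='const')
            S[r] = spl(cols)        # fill the whole row, open regions included
        return S
    # An unblocked projection's scatter map would then be the average of the
    # maps from its two adjacent blocked projections, as described above.
    ```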

  15. Model systems: how chemical biologists study RNA

    PubMed Central

    Rios, Andro C.; Tor, Yitzhak

    2009-01-01

    Ribonucleic acids are structurally and functionally sophisticated biomolecules and the use of models, frequently truncated or modified sequences representing functional domains of the natural systems, is essential to their exploration. Functional non-coding RNAs such as miRNAs, riboswitches, and, in particular, ribozymes, have changed the view of RNA’s role in biology and its catalytic potential. The well-known truncated hammerhead model has recently been refined and new data provide a clearer molecular picture of the elements responsible for its catalytic power. A model for the spliceosome, a massive and highly intricate ribonucleoprotein, is also emerging, although its true utility is yet to be cemented. Such catalytic model systems could also serve as “chemo-paleontological” tools, further refining the RNA world hypothesis and its relevance to the origin and evolution of life. PMID:19879179

  16. Chlorine-induced assembly of a cationic coordination cage with a μ5-carbonato-bridged Mn(II)24 core.

    PubMed

    Xiong, Ke-Cai; Jiang, Fei-Long; Gai, Yan-Li; Yuan, Da-Qiang; Han, Dong; Ma, Jie; Zhang, Shu-Quan; Hong, Mao-Chun

    2012-04-27

    Chlorine caged in! The chlorine-induced assembly of six shuttlecock-like tetranuclear Mn(II) building blocks generated in situ based on p-tert-butylthiacalix[4]arene and facial anions gave rise to a novel truncated distorted octahedral cationic coordination cage with a μ(5)-carbonato-bridged Mn(II)(24) core. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. A motion compensation technique using sliced blocks and its application to hybrid video coding

    NASA Astrophysics Data System (ADS)

    Kondo, Satoshi; Sasai, Hisao

    2005-07-01

    This paper proposes a new motion compensation method using "sliced blocks" in DCT-based hybrid video coding. In H.264/MPEG-4 Advanced Video Coding, a brand-new international video coding standard, motion compensation can be performed by splitting macroblocks into multiple square or rectangular regions. In the proposed method, on the other hand, macroblocks or sub-macroblocks are divided into two regions (sliced blocks) by an arbitrary line segment. The result is that the shapes of the segmented regions are not limited to squares or rectangles, allowing the shapes of the segmented regions to better match the boundaries between moving objects. Thus, the proposed method can improve the performance of the motion compensation. In addition, adaptive prediction of the shape according to the region shapes of the surrounding macroblocks can reduce the overhead needed to describe shape information in the bitstream. The proposed method also has the advantage that conventional coding techniques such as mode decision using rate-distortion optimization can be utilized, since coding processes such as the frequency transform and quantization are performed on a macroblock basis, as in conventional coding methods. The proposed method is implemented in an H.264-based P-picture codec, and a bit-rate improvement of 5% is confirmed in comparison with H.264.
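
    The geometric core of a "sliced block" is just a half-plane test; a minimal sketch (the names and the two-motion-vector usage in the trailing comment are illustrative):

    ```python
    # An arbitrary line segment through a macroblock splits it into two pixel
    # sets, each of which can carry its own motion vector.
    import numpy as np

    def slice_mask(size, p0, p1):
        """Boolean mask: True on one side of the line through p0=(x0,y0), p1=(x1,y1)."""
        ys, xs = np.mgrid[0:size, 0:size]
        (x0, y0), (x1, y1) = p0, p1
        # sign of the cross product decides which side of the line a pixel is on
        return (xs - x0) * (y1 - y0) - (ys - y0) * (x1 - x0) > 0

    mask = slice_mask(16, (3, 0), (12, 15))   # a diagonal split of a 16x16 block
    # prediction = np.where(mask, mc(ref, mv_region0), mc(ref, mv_region1))
    # (mc / mv_region* are hypothetical motion-compensation helpers)
    ```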

  18. Optimal block cosine transform image coding for noisy channels

    NASA Technical Reports Server (NTRS)

    Vaishampayan, V.; Farvardin, N.

    1986-01-01

    The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a memoryless noisy channel; the optimization centers on the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm based on the steepest-descent method was used, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons against a reference system designed for no channel errors were made.
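
    For intuition about the bit-assignment step, here is a simplified greedy allocator under the classic high-rate model D_i(b) ≈ σ_i² 2^(-2b); the paper's actual algorithm is a steepest-descent search over channel-optimized quantizer performance, which this sketch does not reproduce.

    ```python
    # Greedy marginal-gain bit allocation: each extra bit goes to the transform
    # coefficient whose modeled distortion drops the most.
    import heapq

    def allocate_bits(variances, total_bits, max_bits=8):
        bits = [0] * len(variances)
        def gain(i):  # distortion reduction from one more bit on coefficient i
            b = bits[i]
            return variances[i] * (2.0**(-2*b) - 2.0**(-2*(b+1)))
        heap = [(-gain(i), i) for i in range(len(variances))]
        heapq.heapify(heap)
        for _ in range(total_bits):
            _, i = heapq.heappop(heap)
            bits[i] += 1
            if bits[i] < max_bits:
                heapq.heappush(heap, (-gain(i), i))  # updated marginal gain
        return bits

    print(allocate_bits([16.0, 4.0, 1.0, 0.25], total_bits=8))
    ```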

  19. There is no MacWilliams identity for convolutional codes. [transmission gain comparison

    NASA Technical Reports Server (NTRS)

    Shearer, J. B.; Mceliece, R. J.

    1977-01-01

    An example is provided of two convolutional codes that have the same transmission gain but whose dual codes do not. This shows that no analog of the MacWilliams identity for block codes can exist relating the transmission gains of a convolutional code and its dual.

  20. Multispectral data compression through transform coding and block quantization

    NASA Technical Reports Server (NTRS)

    Ready, P. J.; Wintz, P. A.

    1972-01-01

    Transform coding and block quantization techniques are applied to multispectral aircraft scanner data, and digitized satellite imagery. The multispectral source is defined and an appropriate mathematical model proposed. The Karhunen-Loeve, Fourier, and Hadamard encoders are considered and are compared to the rate distortion function for the equivalent Gaussian source and to the performance of the single sample PCM encoder.
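
    Of the three encoders, the Karhunen-Loeve transform is the signal-dependent one: its basis is the eigenvectors of the block covariance. A minimal sketch with toy data and illustrative names:

    ```python
    # Estimate the covariance of vectorized image blocks, take its eigenvectors
    # as the KLT basis, and keep the top components as transform coefficients.
    import numpy as np

    def klt_basis(blocks):
        """blocks: (N, d) matrix of vectorized image blocks."""
        X = blocks - blocks.mean(axis=0)
        C = X.T @ X / len(X)                 # sample covariance
        w, V = np.linalg.eigh(C)             # eigenvalues in ascending order
        return V[:, ::-1], w[::-1]           # basis sorted by decreasing variance

    rng = np.random.default_rng(0)
    blocks = rng.standard_normal((1000, 16)) @ rng.standard_normal((16, 16))
    V, w = klt_basis(blocks)
    coeffs = (blocks - blocks.mean(0)) @ V[:, :4]   # 4 KLT coefficients per block
    ```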

  1. Prioritized LT Codes

    NASA Technical Reports Server (NTRS)

    Woo, Simon S.; Cheng, Michael K.

    2011-01-01

    The original Luby Transform (LT) coding scheme is extended to account for data transmissions where some information symbols in a message block are more important than others. Prioritized LT codes provide unequal error protection (UEP) of data on an erasure channel by modifying the original LT encoder. The prioritized algorithm improves high-priority data protection without penalizing low-priority data recovery. Moreover, low-latency decoding is also obtained for high-priority data due to fast encoding. Prioritized LT codes only require a slight change in the original encoding algorithm, and no changes at all at the decoder. Hence, with a small complexity increase in the LT encoder, an improved UEP and low-decoding latency performance for high-priority data can be achieved. LT encoding partitions a data stream into fixed-sized message blocks each with a constant number of information symbols. To generate a code symbol from the information symbols in a message, the Robust-Soliton probability distribution is first applied in order to determine the number of information symbols to be used to compute the code symbol. Then, the specific information symbols are chosen uniform randomly from the message block. Finally, the selected information symbols are XORed to form the code symbol. The Prioritized LT code construction includes an additional restriction that code symbols formed by a relatively small number of XORed information symbols select some of these information symbols from the pool of high-priority data. Once high-priority data are fully covered, encoding continues with the conventional LT approach where code symbols are generated by selecting information symbols from the entire message block including all different priorities. Therefore, if code symbols derived from high-priority data experience an unusual high number of erasures, Prioritized LT codes can still reliably recover both high- and low-priority data. This hybrid approach decides not only "how to encode" but also "what to encode" to achieve UEP. Another advantage of the priority encoding process is that the majority of high-priority data can be decoded sooner since only a small number of code symbols are required to reconstruct high-priority data. This approach increases the likelihood that high-priority data is decoded first over low-priority data. The Prioritized LT code scheme achieves an improvement in high-priority data decoding performance as well as overall information recovery without penalizing the decoding of low-priority data, assuming high-priority data is no more than half of a message block. The cost is in the additional complexity required in the encoder. If extra computation resource is available at the transmitter, image, voice, and video transmission quality in terrestrial and space communications can benefit from accurate use of redundancy in protecting data with varying priorities.
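
    A hedged sketch of the encoder-side restriction described above, using the ideal (not Robust) soliton distribution for brevity; the degree cutoff low_deg and all names are assumptions:

    ```python
    # Low-degree code symbols must include uncovered high-priority symbols until
    # the high-priority pool is covered; afterwards encoding is plain LT.
    import math, random

    def soliton_degree(k, rng):
        # ideal soliton: P(1)=1/k, P(d)=1/(d(d-1)) for 2<=d<=k (simplification)
        if rng.random() < 1.0 / k:
            return 1
        u = max(rng.random(), 1e-12)
        return min(k, math.ceil(1.0 / u))

    def prioritized_lt_symbol(message, n_high, covered, rng, low_deg=3):
        """message: list of XOR-able ints; the first n_high are high priority."""
        k = len(message)
        d = soliton_degree(k, rng)
        if d <= low_deg and len(covered) < n_high:
            first = rng.choice([i for i in range(n_high) if i not in covered])
            covered.add(first)
            rest = rng.sample(range(k), d - 1) if d > 1 else []
            idx = {first, *rest}      # set dedups; effective degree may shrink
        else:
            idx = set(rng.sample(range(k), d))
        value = 0
        for i in idx:
            value ^= message[i]
        return sorted(idx), value

    rng = random.Random(1)
    msg = [rng.getrandbits(8) for _ in range(32)]
    covered = set()
    code_symbols = [prioritized_lt_symbol(msg, n_high=8, covered=covered, rng=rng)
                    for _ in range(48)]
    ```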

  2. A human haploid gene trap collection to study lncRNAs with unusual RNA biology.

    PubMed

    Kornienko, Aleksandra E; Vlatkovic, Irena; Neesen, Jürgen; Barlow, Denise P; Pauler, Florian M

    2016-01-01

    Many thousand long non-coding (lnc) RNAs are mapped in the human genome. Time-consuming studies using reverse genetic approaches by post-transcriptional knock-down or genetic modification of the locus have demonstrated diverse biological functions for a few of these transcripts. The Human Gene Trap Mutant Collection in haploid KBM7 cells is a ready-to-use tool for studying protein-coding gene function. As lncRNAs show remarkable differences in RNA biology compared to protein-coding genes, it is unclear if this gene trap collection is useful for functional analysis of lncRNAs. Here we use the uncharacterized LOC100288798 lncRNA as a model to answer this question. Using public RNA-seq data we show that LOC100288798 is ubiquitously expressed, but inefficiently spliced. The minor spliced LOC100288798 isoforms are exported to the cytoplasm, whereas the major unspliced isoform is nuclear localized. This shows that LOC100288798 RNA biology differs markedly from typical mRNAs. De novo assembly from RNA-seq data suggests that LOC100288798 extends 289 kb beyond its annotated 3' end and overlaps the downstream SLC38A4 gene. Three cell lines with independent gene trap insertions in LOC100288798 were available from the KBM7 gene trap collection. RT-qPCR and RNA-seq confirmed successful lncRNA truncation and its extended length. Expression analysis from RNA-seq data shows significant deregulation of 41 protein-coding genes upon LOC100288798 truncation. Our data show that gene trap collections in human haploid cell lines are useful tools to study lncRNAs, and identify the previously uncharacterized LOC100288798 as a potential gene regulator.

  3. 47 CFR 52.20 - Thousands-block number pooling.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    Title 47 (Telecommunication), Part 52 (Numbering), Number Portability, § 52.20 Thousands-block number pooling (2013-10-01). (a) Definition. Thousands-block number pooling is a process by which the 10,000 numbers in a central office code (NXX) are...

  4. On decoding of multi-level MPSK modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Gupta, Alok Kumar

    1990-01-01

    The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short-length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder, by reducing the branch metric and path metric using a non-uniform floating-point to integer mapping scheme, is proposed and discussed. The simulation results of the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that the soft-decision MSD reduces the decoding complexity drastically and is suboptimum. The hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard-decision multistage decoding.
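
    To make the multi-stage idea concrete, here is a toy hard-decision, symbol-wise version for a 3-level 8-PSK set partitioning; a real MSD would run each level's component decoder across the whole block before moving on, and the plain natural labeling here is an assumption:

    ```python
    # Decide one label bit per stage, halving the candidate subset each time.
    import numpy as np

    pts = np.exp(2j * np.pi * np.arange(8) / 8)   # 8-PSK constellation, labels 0..7

    def msd_hard_symbol(r):
        """Decide the 3 label bits of received point r one level at a time."""
        cand = np.arange(8)
        bits = []
        for level in range(3):                    # LSB first
            d0 = np.min(np.abs(r - pts[cand[((cand >> level) & 1) == 0]]))
            d1 = np.min(np.abs(r - pts[cand[((cand >> level) & 1) == 1]]))
            bit = int(d1 < d0)
            bits.append(bit)
            cand = cand[((cand >> level) & 1) == bit]
        return bits

    print(msd_hard_symbol(pts[5] + 0.1 * (1 + 1j)))   # -> [1, 0, 1] (label 5, LSB first)
    ```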

  5. No evidence that protein truncating variants in BRIP1 are associated with breast cancer risk: implications for gene panel testing.

    PubMed

    Easton, Douglas F; Lesueur, Fabienne; Decker, Brennan; Michailidou, Kyriaki; Li, Jun; Allen, Jamie; Luccarini, Craig; Pooley, Karen A; Shah, Mitul; Bolla, Manjeet K; Wang, Qin; Dennis, Joe; Ahmad, Jamil; Thompson, Ella R; Damiola, Francesca; Pertesi, Maroulio; Voegele, Catherine; Mebirouk, Noura; Robinot, Nivonirina; Durand, Geoffroy; Forey, Nathalie; Luben, Robert N; Ahmed, Shahana; Aittomäki, Kristiina; Anton-Culver, Hoda; Arndt, Volker; Baynes, Caroline; Beckman, Matthias W; Benitez, Javier; Van Den Berg, David; Blot, William J; Bogdanova, Natalia V; Bojesen, Stig E; Brenner, Hermann; Chang-Claude, Jenny; Chia, Kee Seng; Choi, Ji-Yeob; Conroy, Don M; Cox, Angela; Cross, Simon S; Czene, Kamila; Darabi, Hatef; Devilee, Peter; Eriksson, Mikael; Fasching, Peter A; Figueroa, Jonine; Flyger, Henrik; Fostira, Florentia; García-Closas, Montserrat; Giles, Graham G; Glendon, Gord; González-Neira, Anna; Guénel, Pascal; Haiman, Christopher A; Hall, Per; Hart, Steven N; Hartman, Mikael; Hooning, Maartje J; Hsiung, Chia-Ni; Ito, Hidemi; Jakubowska, Anna; James, Paul A; John, Esther M; Johnson, Nichola; Jones, Michael; Kabisch, Maria; Kang, Daehee; Kosma, Veli-Matti; Kristensen, Vessela; Lambrechts, Diether; Li, Na; Lindblom, Annika; Long, Jirong; Lophatananon, Artitaya; Lubinski, Jan; Mannermaa, Arto; Manoukian, Siranoush; Margolin, Sara; Matsuo, Keitaro; Meindl, Alfons; Mitchell, Gillian; Muir, Kenneth; Nevelsteen, Ines; van den Ouweland, Ans; Peterlongo, Paolo; Phuah, Sze Yee; Pylkäs, Katri; Rowley, Simone M; Sangrajrang, Suleeporn; Schmutzler, Rita K; Shen, Chen-Yang; Shu, Xiao-Ou; Southey, Melissa C; Surowy, Harald; Swerdlow, Anthony; Teo, Soo H; Tollenaar, Rob A E M; Tomlinson, Ian; Torres, Diana; Truong, Thérèse; Vachon, Celine; Verhoef, Senno; Wong-Brown, Michelle; Zheng, Wei; Zheng, Ying; Nevanlinna, Heli; Scott, Rodney J; Andrulis, Irene L; Wu, Anna H; Hopper, John L; Couch, Fergus J; Winqvist, Robert; Burwinkel, Barbara; Sawyer, Elinor J; Schmidt, Marjanka K; Rudolph, Anja; Dörk, Thilo; Brauch, Hiltrud; Hamann, Ute; Neuhausen, Susan L; Milne, Roger L; Fletcher, Olivia; Pharoah, Paul D P; Campbell, Ian G; Dunning, Alison M; Le Calvez-Kelm, Florence; Goldgar, David E; Tavtigian, Sean V; Chenevix-Trench, Georgia

    2016-05-01

    BRCA1 interacting protein C-terminal helicase 1 (BRIP1) is one of the Fanconi Anaemia Complementation (FANC) group family of DNA repair proteins. Biallelic mutations in BRIP1 are responsible for FANC group J, and previous studies have also suggested that rare protein truncating variants in BRIP1 are associated with an increased risk of breast cancer. These studies have led to inclusion of BRIP1 on targeted sequencing panels for breast cancer risk prediction. We evaluated a truncating variant, p.Arg798Ter (rs137852986), and 10 missense variants of BRIP1, in 48 144 cases and 43 607 controls of European origin, drawn from 41 studies participating in the Breast Cancer Association Consortium (BCAC). Additionally, we sequenced the coding regions of BRIP1 in 13 213 cases and 5242 controls from the UK, 1313 cases and 1123 controls from three population-based studies as part of the Breast Cancer Family Registry, and 1853 familial cases and 2001 controls from Australia. The rare truncating allele of rs137852986 was observed in 23 cases and 18 controls in Europeans in BCAC (OR 1.09, 95% CI 0.58 to 2.03, p=0.79). Truncating variants were found in the sequencing studies in 34 cases (0.21%) and 19 controls (0.23%) (combined OR 0.90, 95% CI 0.48 to 1.70, p=0.75). These results suggest that truncating variants in BRIP1, and in particular p.Arg798Ter, are not associated with a substantial increase in breast cancer risk. Such observations have important implications for the reporting of results from breast cancer screening panels. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  6. Cloning, characterisation, and comparative quantitative expression analyses of receptor for advanced glycation end products (RAGE) transcript forms.

    PubMed

    Sterenczak, Katharina A; Willenbrock, Saskia; Barann, Matthias; Klemke, Markus; Soller, Jan T; Eberle, Nina; Nolte, Ingo; Bullerdiek, Jörn; Murua Escobar, Hugo

    2009-04-01

    RAGE is a member of the immunoglobulin superfamily of cell surface molecules playing key roles in pathophysiological processes, e.g. immune/inflammatory disorders, Alzheimer's disease, diabetic arteriosclerosis and tumourigenesis. In humans, 19 naturally occurring RAGE splicing variants resulting in either N-terminally or C-terminally truncated proteins have been identified and have lately been discussed as mechanisms of receptor regulation. Accordingly, deregulation of sRAGE levels has been associated with several diseases, e.g. Alzheimer's disease, Type 1 diabetes, and rheumatoid arthritis. Administration of recombinant sRAGE to animal models of cancer blocked tumour growth successfully. In spite of its obvious relationship to cancer and metastasis, data focusing on sRAGE deregulation in tumours are rare. In this study we screened a set of tumours, healthy tissues and various cancer cell lines for RAGE splicing variants and analysed their structure. Additionally, we analysed the ratio of the mainly found transcript variants using quantitative Real-Time PCR. In total we characterised 24 previously undescribed canine and 4 human RAGE splicing variants, analysed their structure, classified their characteristics, and derived their respective protein forms. Interestingly, the healthy and the neoplastic tissue samples predominantly showed RAGE transcripts coding for the complete receptor and transcripts showing insertions of intron 1.

  7. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).

    PubMed

    Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling

    2018-04-17

    Aimed at the low energy consumption required by the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so that they consume only a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for the Green IoT.
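
    A compact sketch of the two halves described above, under stated assumptions (8×8 blocks, Gaussian measurement matrices, toy training data; the gradient-based rate rule is a stand-in for the paper's block-gradient-field model):

    ```python
    # (1) Give each block a measurement count proportional to its gradient
    # energy; (2) reconstruct linearly with a decoder P learned by MMSE.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 64                                    # 8x8 block, vectorized

    def measurement_rates(blocks, m_min=8, m_max=48):
        g = np.array([np.abs(np.diff(b.reshape(8, 8), axis=0)).sum()
                      + np.abs(np.diff(b.reshape(8, 8), axis=1)).sum()
                      for b in blocks])
        g = (g - g.min()) / (np.ptp(g) + 1e-12)
        return (m_min + g * (m_max - m_min)).astype(int)  # busier blocks get more

    def learn_mmse_decoder(X_train, Phi, lam=1e-3):
        """Linear MMSE decoder for y = Phi x: P = Rxx Phi^T (Phi Rxx Phi^T + lam I)^-1."""
        Rxx = X_train.T @ X_train / len(X_train)
        A = Phi @ Rxx @ Phi.T + lam * np.eye(Phi.shape[0])
        return Rxx @ Phi.T @ np.linalg.inv(A)

    X = rng.standard_normal((2000, d)) @ rng.standard_normal((d, d)) * 0.1  # toy set
    print(measurement_rates(X[:5]))           # per-block measurement counts
    Phi = rng.standard_normal((24, d)) / np.sqrt(24)   # one block's matrix
    P = learn_mmse_decoder(X, Phi)
    x = X[0]; x_hat = P @ (Phi @ x)           # encode then decode one block linearly
    ```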

  8. Verification of combined thermal-hydraulic and heat conduction analysis code FLOWNET/TRUMP

    NASA Astrophysics Data System (ADS)

    Maruyama, Soh; Fujimoto, Nozomu; Kiso, Yoshihiro; Murakami, Tomoyuki; Sudo, Yukio

    1988-09-01

    This report presents the verification results of the combined thermal-hydraulic and heat conduction analysis code FLOWNET/TRUMP, which has been utilized for the core thermal-hydraulic design of the High Temperature Engineering Test Reactor (HTTR), especially for the analysis of flow distribution among fuel block coolant channels, the determination of thermal boundary conditions for fuel block stress analysis, and the estimation of fuel temperature in the case of a fuel block coolant channel blockage accident. The Japan Atomic Energy Research Institute has been planning to construct the HTTR in order to establish basic technologies for future advanced very-high-temperature gas-cooled reactors and to serve as an irradiation test reactor for the promotion of innovative high-temperature new frontier technologies. The verification of the code was done through comparison between the analytical results and experimental results from the Helium Engineering Demonstration Loop Multi-channel Test Section (HENDEL T1-M) with simulated fuel rods and fuel blocks.

  9. Truncating variants in the majority of the cytoplasmic domain of PCDH15 are unlikely to cause Usher syndrome 1F.

    PubMed

    Perreault-Micale, Cynthia; Frieden, Alexander; Kennedy, Caleb J; Neitzel, Dana; Sullivan, Jessica; Faulkner, Nicole; Hallam, Stephanie; Greger, Valerie

    2014-11-01

    Loss of function variants in the PCDH15 gene can cause Usher syndrome type 1F, an autosomal recessive disease associated with profound congenital hearing loss, vestibular dysfunction, and retinitis pigmentosa. The Ashkenazi Jewish population has an increased incidence of Usher syndrome type 1F (founder variant p.Arg245X accounts for 75% of alleles), yet the variant spectrum in a panethnic population remains undetermined. We sequenced the coding region and intron-exon borders of PCDH15 using next-generation DNA sequencing technology in approximately 14,000 patients from fertility clinics. More than 600 unique PCDH15 variants (single nucleotide changes and small indels) were identified, including previously described pathogenic variants p.Arg3X, p.Arg245X (five patients), p.Arg643X, p.Arg929X, and p.Arg1106X. Novel truncating variants were also found, including one in the N-terminal extracellular domain (p.Leu877X), but all other novel truncating variants clustered in the exon 33 encoded C-terminal cytoplasmic domain (52 patients, 14 variants). One variant was observed predominantly in African Americans (carrier frequency of 2.3%). The high incidence of truncating exon 33 variants indicates that they are unlikely to cause Usher syndrome type 1F even though many remove a large portion of the gene. They may be tolerated because PCDH15 has several alternate cytoplasmic domain exons and differentially spliced isoforms may function redundantly. Effects of some PCDH15 truncating variants were addressed by deep sequencing of a panethnic population. Copyright © 2014 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.

  10. Increasing the Yield in Targeted Next-Generation Sequencing by Implicating CNV Analysis, Non-Coding Exons and the Overall Variant Load: The Example of Retinal Dystrophies

    PubMed Central

    Eisenberger, Tobias; Neuhaus, Christine; Khan, Arif O.; Decker, Christian; Preising, Markus N.; Friedburg, Christoph; Bieg, Anika; Gliem, Martin; Issa, Peter Charbel; Holz, Frank G.; Baig, Shahid M.; Hellenbroich, Yorck; Galvez, Alberto; Platzer, Konrad; Wollnik, Bernd; Laddach, Nadja; Ghaffari, Saeed Reza; Rafati, Maryam; Botzenhart, Elke; Tinschert, Sigrid; Börger, Doris; Bohring, Axel; Schreml, Julia; Körtge-Jung, Stefani; Schell-Apacik, Chayim; Bakur, Khadijah; Al-Aama, Jumana Y.; Neuhann, Teresa; Herkenrath, Peter; Nürnberg, Gudrun; Nürnberg, Peter; Davis, John S.; Gal, Andreas; Bergmann, Carsten; Lorenz, Birgit; Bolz, Hanno J.

    2013-01-01

    Retinitis pigmentosa (RP) and Leber congenital amaurosis (LCA) are major causes of blindness. They result from mutations in many genes which has long hampered comprehensive genetic analysis. Recently, targeted next-generation sequencing (NGS) has proven useful to overcome this limitation. To uncover “hidden mutations” such as copy number variations (CNVs) and mutations in non-coding regions, we extended the use of NGS data by quantitative readout for the exons of 55 RP and LCA genes in 126 patients, and by including non-coding 5′ exons. We detected several causative CNVs which were key to the diagnosis in hitherto unsolved constellations, e.g. hemizygous point mutations in consanguineous families, and CNVs complemented apparently monoallelic recessive alleles. Mutations of non-coding exon 1 of EYS revealed its contribution to disease. In view of the high carrier frequency for retinal disease gene mutations in the general population, we considered the overall variant load in each patient to assess if a mutation was causative or reflected accidental carriership in patients with mutations in several genes or with single recessive alleles. For example, truncating mutations in RP1, a gene implicated in both recessive and dominant RP, were causative in biallelic constellations, unrelated to disease when heterozygous on a biallelic mutation background of another gene, or even non-pathogenic if close to the C-terminus. Patients with mutations in several loci were common, but without evidence for di- or oligogenic inheritance. Although the number of targeted genes was low compared to previous studies, the mutation detection rate was highest (70%) which likely results from completeness and depth of coverage, and quantitative data analysis. CNV analysis should routinely be applied in targeted NGS, and mutations in non-coding exons give reason to systematically include 5′-UTRs in disease gene or exome panels. Consideration of all variants is indispensable because even truncating mutations may be misleading. PMID:24265693

  11. Increasing the yield in targeted next-generation sequencing by implicating CNV analysis, non-coding exons and the overall variant load: the example of retinal dystrophies.

    PubMed

    Eisenberger, Tobias; Neuhaus, Christine; Khan, Arif O; Decker, Christian; Preising, Markus N; Friedburg, Christoph; Bieg, Anika; Gliem, Martin; Charbel Issa, Peter; Holz, Frank G; Baig, Shahid M; Hellenbroich, Yorck; Galvez, Alberto; Platzer, Konrad; Wollnik, Bernd; Laddach, Nadja; Ghaffari, Saeed Reza; Rafati, Maryam; Botzenhart, Elke; Tinschert, Sigrid; Börger, Doris; Bohring, Axel; Schreml, Julia; Körtge-Jung, Stefani; Schell-Apacik, Chayim; Bakur, Khadijah; Al-Aama, Jumana Y; Neuhann, Teresa; Herkenrath, Peter; Nürnberg, Gudrun; Nürnberg, Peter; Davis, John S; Gal, Andreas; Bergmann, Carsten; Lorenz, Birgit; Bolz, Hanno J

    2013-01-01

    Retinitis pigmentosa (RP) and Leber congenital amaurosis (LCA) are major causes of blindness. They result from mutations in many genes which has long hampered comprehensive genetic analysis. Recently, targeted next-generation sequencing (NGS) has proven useful to overcome this limitation. To uncover "hidden mutations" such as copy number variations (CNVs) and mutations in non-coding regions, we extended the use of NGS data by quantitative readout for the exons of 55 RP and LCA genes in 126 patients, and by including non-coding 5' exons. We detected several causative CNVs which were key to the diagnosis in hitherto unsolved constellations, e.g. hemizygous point mutations in consanguineous families, and CNVs complemented apparently monoallelic recessive alleles. Mutations of non-coding exon 1 of EYS revealed its contribution to disease. In view of the high carrier frequency for retinal disease gene mutations in the general population, we considered the overall variant load in each patient to assess if a mutation was causative or reflected accidental carriership in patients with mutations in several genes or with single recessive alleles. For example, truncating mutations in RP1, a gene implicated in both recessive and dominant RP, were causative in biallelic constellations, unrelated to disease when heterozygous on a biallelic mutation background of another gene, or even non-pathogenic if close to the C-terminus. Patients with mutations in several loci were common, but without evidence for di- or oligogenic inheritance. Although the number of targeted genes was low compared to previous studies, the mutation detection rate was highest (70%) which likely results from completeness and depth of coverage, and quantitative data analysis. CNV analysis should routinely be applied in targeted NGS, and mutations in non-coding exons give reason to systematically include 5'-UTRs in disease gene or exome panels. Consideration of all variants is indispensable because even truncating mutations may be misleading.

  12. Identification and Classification of Orthogonal Frequency Division Multiple Access (OFDMA) Signals Used in Next Generation Wireless Systems

    DTIC Science & Technology

    2012-03-01

    ...advanced antenna systems; AMC: adaptive modulation and coding; AWGN: additive white Gaussian noise; BPSK: binary phase shift keying; BS: base station; BTC: ... QAM-16, and QAM-64, and coding types include convolutional coding (CC), convolutional turbo coding (CTC), block turbo coding (BTC), zero-terminating...

  13. QX MAN: Q and X file manipulation

    NASA Technical Reports Server (NTRS)

    Krein, Mark A.

    1992-01-01

    QX MAN is a grid and solution file manipulation program written primarily for the PARC code and the GRIDGEN family of grid generation codes. QX MAN combines many of the features frequently encountered in grid generation, grid refinement, the setting-up of initial conditions, and post processing. QX MAN allows the user to manipulate single block and multi-block grids (and their accompanying solution files) by splitting, concatenating, rotating, translating, re-scaling, and stripping or adding points. In addition, QX MAN can be used to generate an initial solution file for the PARC code. The code was written to provide several formats for input and output in order for it to be useful in a broad spectrum of applications.

  14. A new reconstruction of the Paleozoic continental margin of southwestern North America: Implications for the nature and timing of continental truncation and the possible role of the Mojave-Sonora megashear

    USGS Publications Warehouse

    Stevens, C.H.; Stone, P.; Miller, J.S.

    2005-01-01

    Data bearing on interpretations of the Paleozoic and Mesozoic paleogeography of southwestern North America are important for testing the hypothesis that the Paleozoic miogeocline in this region has been tectonically truncated, and if so, for ascertaining the time of the event and the possible role of the Mojave-Sonora megashear. Here, we present an analysis of existing and new data permitting reconstruction of the Paleozoic continental margin of southwestern North America. Significant new and recent information incorporated into this reconstruction includes (1) spatial distribution of Middle to Upper Devonian continental-margin facies belts, (2) positions of other paleogeographically significant sedimentary boundaries on the Paleozoic continental shelf, (3) distribution of Upper Permian through Upper Triassic plutonic rocks, and (4) evidence that the southern Sierra Nevada and western Mojave Desert are underlain by continental crust. After restoring the geology of western Nevada and California along known and inferred strike-slip faults, we find that the Devonian facies belts and pre-Pennsylvanian sedimentary boundaries define an arcuate, generally south-trending continental margin that appears to be truncated on the southwest. A Pennsylvanian basin, a Permian coral belt, and a belt of Upper Permian to Upper Triassic plutons stretching from Sonora, Mexico, into westernmost central Nevada, cut across the older facies belts, suggesting that truncation of the continental margin occurred in the Pennsylvanian. We postulate that the main truncating structure was a left-lateral transform fault zone that extended from the Mojave-Sonora megashear in northwestern Mexico to the Foothills Suture in California. The Caborca block of northwestern Mexico, where Devonian facies belts and pre-Pennsylvanian sedimentary boundaries like those in California have been identified, is interpreted to represent a missing fragment of the continental margin that underwent ~400 km of left-lateral displacement along this fault zone. If this model is correct, the Mojave-Sonora megashear played a direct role in the Pennsylvanian truncation of the continental margin, and any younger displacement on this fault has been relatively small. © 2005 Geological Society of America.

  15. Error-correction coding

    NASA Technical Reports Server (NTRS)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  16. An In vitro evaluation of the reliability of QR code denture labeling technique.

    PubMed

    Poovannan, Sindhu; Jain, Ashish R; Krishnan, Cakku Jalliah Venkata; Chandran, Chitraa R

    2016-01-01

    Positive identification of the dead after accidents and disasters through labeled dentures plays a key role in forensic scenarios. A number of denture labeling methods are available, and studies evaluating their reliability under drastic conditions are vital. This in vitro study was conducted to evaluate the reliability of QR (Quick Response) Codes labeled at various depths in heat-cured acrylic blocks after acid treatment, heat treatment (burns), and fracture in forensics. The study included 160 specimens of heat-cured acrylic blocks (1.8 cm × 1.8 cm), divided into 4 groups (40 samples per group). QR Codes were incorporated in the samples using clear acrylic sheet, and the samples were assessed for reliability under various depths, acid, heat, and fracture. Data were analyzed using the Chi-square test and the test of proportion. The QR Code inclusion technique was reliable under various depths of acrylic sheet, acid (sulfuric acid 99%, hydrochloric acid 40%), and heat (up to 370°C). Results were variable with fracture of QR Code labeled acrylic blocks. Within the limitations of the study, the results clearly indicated that the QR Code technique was reliable under various depths of acrylic sheet, acid, and heat (370°C). Effectiveness varied with fracture and depended on the level of distortion. This study thus suggests that QR Code is an effective and simple denture labeling method.

  17. A conserved truncated isoform of the ATR-X syndrome protein lacking the SWI/SNF-homology domain.

    PubMed

    Garrick, David; Samara, Vassiliki; McDowell, Tarra L; Smith, Andrew J H; Dobbie, Lorraine; Higgs, Douglas R; Gibbons, Richard J

    2004-02-04

    Mutations in the ATRX gene cause a severe X-linked mental retardation syndrome that is frequently associated with alpha thalassemia (ATR-X syndrome). The previously characterized ATRX protein (approximately 280 kDa) contains both a Plant homeodomain (PHD)-like zinc finger motif as well as an ATPase domain of the SNF2 family. These motifs suggest that ATRX may function as a regulator of gene expression, probably by exerting an effect on chromatin structure, although the exact cellular role of ATRX has not yet been fully elucidated. Here we characterize a truncated (approximately 200 kDa) isoform of ATRX (called here ATRXt) that has been highly conserved between mouse and human. In both species, ATRXt arises due to the failure to splice intron 11 from the primary transcript, and the use of a proximal intronic poly(A) signal. We show that the relative expression of the full length and ATRXt isoforms is subject to tissue-specific regulation. The ATRXt isoform contains the PHD-like domain but not the SWI/SNF-like motifs and is therefore unlikely to be functionally equivalent to the full length protein. We used indirect immunofluorescence to demonstrate that the full length and ATRXt isoforms are colocalized at blocks of pericentromeric heterochromatin but unlike full length ATRX, the truncated isoform does not associate with promyelocytic leukemia (PML) nuclear bodies. The high degree of conservation of ATRXt and the tight regulation of its expression relative to the full length protein suggest that this truncated isoform fulfills an important biological function.

  18. Constructing LDPC Codes from Loop-Free Encoding Modules

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher; Thorpe, Jeremy; Andrews, Kenneth

    2009-01-01

    A method of constructing certain low-density parity-check (LDPC) codes by use of relatively simple loop-free coding modules has been developed. The subclasses of LDPC codes to which the method applies includes accumulate-repeat-accumulate (ARA) codes, accumulate-repeat-check-accumulate codes, and the codes described in Accumulate-Repeat-Accumulate-Accumulate Codes (NPO-41305), NASA Tech Briefs, Vol. 31, No. 9 (September 2007), page 90. All of the affected codes can be characterized as serial/parallel (hybrid) concatenations of such relatively simple modules as accumulators, repetition codes, differentiators, and punctured single-parity check codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. These codes can also be characterized as hybrid turbolike codes that have projected graph or protograph representations (for example see figure); these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The present method comprises two related submethods for constructing LDPC codes from simple loop-free modules with circulant permutations. The first submethod is an iterative encoding method based on the erasure-decoding algorithm. The computations required by this method are well organized because they involve a parity-check matrix having a block-circulant structure. The second submethod involves the use of block-circulant generator matrices. The encoders of this method are very similar to those of recursive convolutional codes. Some encoders according to this second submethod have been implemented in a small field-programmable gate array that operates at a speed of 100 megasymbols per second. By use of density evolution (a computational- simulation technique for analyzing performances of LDPC codes), it has been shown through some examples that as the block size goes to infinity, low iterative decoding thresholds close to channel capacity limits can be achieved for the codes of the type in question having low maximum variable node degrees. The decoding thresholds in these examples are lower than those of the best-known unstructured irregular LDPC codes constrained to have the same maximum node degrees. Furthermore, the present method enables the construction of codes of any desired rate with thresholds that stay uniformly close to their respective channel capacity thresholds.
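
    As a toy of the loop-free-module idea (far simpler than ARA codes with circulant permutations), a systematic repeat-accumulate encoder chains three such modules: repetition, an interleaver standing in for the circulant permutations, and an accumulator.

    ```python
    # Repeat-accumulate encoding from simple modules: repeat, permute, then
    # take a running XOR (the accumulator); output is systematic.
    import numpy as np

    def ra_encode(info_bits, q=3, seed=0):
        rep = np.repeat(info_bits, q)                    # repetition module
        perm = np.random.default_rng(seed).permutation(rep.size)
        interleaved = rep[perm]                          # permutation module
        parity = np.bitwise_xor.accumulate(interleaved)  # accumulator module
        return np.concatenate([info_bits, parity])       # systematic codeword

    u = np.array([1, 0, 1, 1, 0], dtype=np.uint8)
    print(ra_encode(u))
    ```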

  19. Fingerprints of Modified RNA Bases from Deep Sequencing Profiles.

    PubMed

    Kietrys, Anna M; Velema, Willem A; Kool, Eric T

    2017-11-29

    Posttranscriptional modifications of RNA bases are found not only in many noncoding RNAs but also, as recently identified, in coding (messenger) RNAs. They require complex and laborious methods to locate, and many still lack methods for localized detection. Here we test the ability of next-generation sequencing (NGS) to detect and distinguish between ten modified bases in synthetic RNAs. We compare ultradeep sequencing patterns of modified bases, including miscoding, insertions and deletions (indels), and truncations, to unmodified bases in the same contexts. The data show widely varied responses to modification, ranging from no response to high levels of mutations, insertions, deletions, and truncations. The patterns are distinct for several of the modifications and suggest the future use of ultradeep sequencing as a fingerprinting strategy for locating and identifying modifications in cellular RNAs.

  20. Approximate solutions for diffusive fracture-matrix transfer: Application to storage of dissolved CO2 in fractured rocks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Quanlin; Oldenburg, Curtis M.; Spangler, Lee H.

    Analytical solutions with infinite exponential series are available to calculate the rate of diffusive transfer between low-permeability blocks and high-permeability zones in the subsurface. Truncation of these series is often employed by neglecting the early-time regime. Here we present unified-form approximate solutions in which the early-time and the late-time solutions are continuous at a switchover time. The early-time solutions are based on three-term polynomial functions in terms of the square root of dimensionless time, with the first coefficient dependent only on the dimensionless area-to-volume ratio. The last two coefficients are either determined analytically for isotropic blocks (e.g., spheres and slabs) or obtained by fitting the exact solutions, and they depend solely on the aspect ratios for rectangular columns and parallelepipeds. For the late-time solutions, only the leading exponential term is needed for isotropic blocks, while a few additional exponential terms are needed for highly anisotropic rectangular blocks. The optimal switchover time is between 0.157 and 0.229, with highest relative approximation error less than 0.2%. The solutions are used to demonstrate the storage of dissolved CO2 in fractured reservoirs with low-permeability matrix blocks of single and multiple shapes and sizes. These approximate solutions are building blocks for development of analytical and numerical tools for hydraulic, solute, and thermal diffusion processes in low-permeability matrix blocks.

  1. Prevalence of PALB2 mutations in breast cancer patients in multi-ethnic Asian population in Malaysia and Singapore.

    PubMed

    Phuah, Sze Yee; Lee, Sheau Yee; Kang, Peter; Kang, In Nee; Yoon, Sook-Yee; Thong, Meow Keong; Hartman, Mikael; Sng, Jen-Hwei; Yip, Cheng Har; Taib, Nur Aishah Mohd; Teo, Soo-Hwang

    2013-01-01

    The partner and localizer of breast cancer 2 (PALB2) is responsible for facilitating BRCA2-mediated DNA repair by serving as a bridging molecule, acting as the physical and functional link between the breast cancer 1 (BRCA1) and breast cancer 2 (BRCA2) proteins. Truncating mutations in the PALB2 gene are rare but are thought to be associated with increased risks of developing breast cancer in various populations. We evaluated the contribution of PALB2 germline mutations in 122 Asian women with breast cancer, all of whom had a significant family history of breast and other cancers. Further screening for nine PALB2 mutations was conducted in 874 Malaysian and 532 Singaporean breast cancer patients, and in 1342 unaffected Malaysian and 541 unaffected Singaporean women. By analyzing the entire coding region of PALB2, we found two novel truncating mutations and ten missense mutations in families that tested negative for BRCA1/2 mutations. One additional novel truncating PALB2 mutation was identified in one patient through genotyping analysis. Our results indicate a low prevalence of deleterious PALB2 mutations and a specific mutation profile within the Malaysian and Singaporean populations.

  2. Prevalence of PALB2 Mutations in Breast Cancer Patients in Multi-Ethnic Asian Population in Malaysia and Singapore

    PubMed Central

    Phuah, Sze Yee; Lee, Sheau Yee; Kang, Peter; Kang, In Nee; Yoon, Sook-Yee; Thong, Meow Keong; Hartman, Mikael; Sng, Jen-Hwei; Yip, Cheng Har; Taib, Nur Aishah Mohd; Teo, Soo-Hwang

    2013-01-01

    Background: The partner and localizer of breast cancer 2 (PALB2) is responsible for facilitating BRCA2-mediated DNA repair by serving as a bridging molecule, acting as the physical and functional link between the breast cancer 1 (BRCA1) and breast cancer 2 (BRCA2) proteins. Truncating mutations in the PALB2 gene are rare but are thought to be associated with increased risks of developing breast cancer in various populations. Methods: We evaluated the contribution of PALB2 germline mutations in 122 Asian women with breast cancer, all of whom had a significant family history of breast and other cancers. Further screening for nine PALB2 mutations was conducted in 874 Malaysian and 532 Singaporean breast cancer patients, and in 1342 unaffected Malaysian and 541 unaffected Singaporean women. Results: By analyzing the entire coding region of PALB2, we found two novel truncating mutations and ten missense mutations in families that tested negative for BRCA1/2 mutations. One additional novel truncating PALB2 mutation was identified in one patient through genotyping analysis. Our results indicate a low prevalence of deleterious PALB2 mutations and a specific mutation profile within the Malaysian and Singaporean populations. PMID:23977390

  3. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes

    NASA Astrophysics Data System (ADS)

    Jing, Lin; Brun, Todd; Quantum Research Team

    Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Hagiwara et al. (2007) presented a method to calculate parity-check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
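
    The key structural requirement here, that the two classical matrices be mutually orthogonal over GF(2) so X and Z errors can be handled separately, can be checked directly. A minimal sketch with toy matrices (not Hagiwara's high-girth construction):

      import numpy as np

      Hc = np.array([[1, 1, 1, 1, 0, 0],
                     [0, 0, 1, 1, 1, 1]], dtype=np.uint8)  # toy X-check matrix
      Hd = np.array([[1, 0, 1, 0, 1, 0],
                     [0, 1, 0, 1, 0, 1]], dtype=np.uint8)  # toy Z-check matrix

      # CSS condition: every row of Hc is orthogonal to every row of Hd (mod 2).
      assert not ((Hc @ Hd.T) % 2).any()

      # A single Pauli X error on qubit 2 shows up in the Z-check syndrome:
      x_error = np.array([0, 0, 1, 0, 0, 0], dtype=np.uint8)
      print((Hd @ x_error) % 2)  # nonzero syndrome -> error detected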

  4. Self-recovery reversible image watermarking algorithm

    PubMed Central

    Sun, He; Gao, Shangbing; Jin, Shenghua

    2018-01-01

    The integrity of image content is essential, yet most watermarking algorithms can authenticate an image but cannot automatically repair damaged areas or restore the original image. In this paper, a self-recovery reversible image watermarking algorithm is proposed to recover tampered areas effectively. First of all, the original image is divided into homogeneous blocks and non-homogeneous blocks through multi-scale decomposition, and the feature information of each block is calculated as the recovery watermark. Then, the original image is divided into 4×4 non-overlapping blocks classified into smooth blocks and texture blocks according to image texture. Finally, the recovery watermark generated from homogeneous blocks and error-correcting codes is embedded into the corresponding smooth block by mapping; watermark information generated from non-homogeneous blocks and error-correcting codes is embedded into the corresponding non-embedded smooth block and the texture block via mapping. Correlation attacks are detected by invariant moments when the watermarked image is attacked. To determine whether a sub-block has been tampered with, its feature is calculated and the recovery watermark is extracted from the corresponding block. If the image has been tampered with, it can be recovered. The experimental results show that the proposed algorithm can effectively recover tampered areas with high accuracy and high quality. The algorithm is characterized by sound visual quality and excellent image restoration. PMID:29920528
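
    As a rough illustration of the smooth/texture split described above, the sketch below classifies 4×4 blocks by local variance; the threshold is a hypothetical stand-in for the paper's texture criterion, which the abstract does not spell out.

      import numpy as np

      def classify_blocks(image, block=4, threshold=25.0):
          # Label each non-overlapping block as smooth or texture by variance.
          h, w = image.shape
          labels = {}
          for y in range(0, h - h % block, block):
              for x in range(0, w - w % block, block):
                  patch = image[y:y+block, x:x+block].astype(float)
                  labels[(y, x)] = "smooth" if patch.var() < threshold else "texture"
          return labels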

  5. DMD-based implementation of patterned optical filter arrays for compressive spectral imaging.

    PubMed

    Rueda, Hoover; Arguello, Henry; Arce, Gonzalo R

    2015-01-01

    Compressive spectral imaging (CSI) captures multispectral imagery using fewer measurements than those required by traditional Shannon-Nyquist sensing procedures. CSI systems acquire coded and dispersed random projections of the scene rather than direct measurements of the voxels. To date, the coding procedure in CSI has been realized through block-unblock coded apertures (CAs), commonly implemented as chrome-on-quartz photomasks. These apertures block or pass the entire spectrum of the scene at given spatial locations, thus modulating the spatial characteristics of the scene. This paper extends the framework of CSI by replacing the traditional block-unblock photomasks with patterned optical filter arrays, referred to as colored coded apertures (CCAs). These, in turn, allow the source to be modulated not only spatially but also spectrally, entailing more powerful coding strategies. The proposed CCAs are synthesized through linear combinations of low-pass, high-pass, and bandpass filters, paired with binary pattern ensembles realized by a digital micromirror device. The optical forward model of the proposed CSI architecture is presented along with a proof-of-concept implementation, which achieves noticeable improvements in the quality of the reconstruction.

  6. Code Optimization Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MAGEE,GLEN I.

    Computers transfer data in a number of different ways. Whether through a serial port, a parallel port, a modem, an Ethernet cable, or internally from a hard disk to memory, some data can be lost or corrupted. To compensate for that loss, numerous error detection and correction algorithms have been developed. One of the most common error correction codes is the Reed-Solomon code, which is a special subset of BCH (Bose-Chaudhuri-Hocquenghem) linear cyclic block codes. In the AURA project, an unmanned aircraft sends the data it collects back to earth so it can be analyzed during flight and possible flight modifications made. To counter possible data corruption during transmission, the data is encoded using a multi-block Reed-Solomon implementation with a possibly shortened final block. In order to maximize the amount of data transmitted, it was necessary to reduce the computation time of a Reed-Solomon encoding to three percent of the processor's time. To achieve such a reduction, many code optimization techniques were employed. This paper outlines the steps taken to reduce the processing time of a Reed-Solomon encoding and the insight into modern optimization techniques gained from the experience.
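
    One staple optimization in fast Reed-Solomon encoders, offered here as an illustrative sketch rather than as the AURA project's actual technique, is replacing each Galois-field multiplication with two table lookups and an integer addition:

      # Log/antilog tables over GF(256) with primitive polynomial 0x11D.
      EXP = [0] * 512
      LOG = [0] * 256
      x = 1
      for i in range(255):
          EXP[i] = x
          LOG[x] = i
          x <<= 1
          if x & 0x100:
              x ^= 0x11D          # reduce modulo the primitive polynomial
      for i in range(255, 512):
          EXP[i] = EXP[i - 255]   # duplicate so index sums need no mod 255

      def gf_mul(a, b):
          if a == 0 or b == 0:
              return 0
          return EXP[LOG[a] + LOG[b]]

      assert gf_mul(3, 7) == 9    # (x+1)(x^2+x+1) = x^3+1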

  7. Frequent Truncating Mutation of TFAM Induces Mitochondrial DNA Depletion and Apoptotic Resistance in Microsatellite-Unstable Colorectal Cancer

    PubMed Central

    Guo, Jianhui; Zheng, Li; Liu, Wenyong; Wang, Xianshu; Wang, Zemin; Wang, Zehua; French, Amy J.; Kang, Dongchon; Chen, Lin; Thibodeau, Stephen N.; Liu, Wanguo

    2013-01-01

    The mitochondrial transcription factor A (TFAM) is required for mitochondrial DNA (mtDNA) replication and transcription. Disruption of TFAM results in heart failure and premature aging in mice, but very little is known about the role of TFAM in cancer development. Here, we report the identification of frequent frameshift mutations in the coding mononucleotide repeat of TFAM in sporadic colorectal cancer (CRC) cell lines and in primary tumors with microsatellite instability (MSI), but not in microsatellite stable (MSS) CRC cell lines and tumors. The presence of the TFAM truncating mutation in CRC cells with MSI reduced the TFAM protein level in vivo and in vitro and correlated with mtDNA depletion. Furthermore, forced overexpression of wild-type TFAM in RKO cells carrying a TFAM truncating mutation suppressed cell proliferation and inhibited RKO cell-induced xenograft tumor growth. Moreover, these cells showed more susceptibility to cisplatin-induced apoptosis due to an increase of cytochrome b (Cyt b) expression and its release from mitochondria. An interaction assay between TFAM and the heavy-strand promoter (HSP) of mitochondria revealed that mutant TFAM exhibited reduced binding to HSP, leading to a reduction in Cyt b transcription. Collectively, these data provide evidence that a high incidence of TFAM truncating mutations leads to mitochondrial copy number reduction and mitochondrial instability, distinguishing most CRC with MSI from MSS CRC. These mutations may play an important role in tumorigenesis and cisplatin-induced apoptotic resistance of most microsatellite-unstable CRCs. PMID:21467167

  8. Truncating SLC5A7 mutations underlie a spectrum of dominant hereditary motor neuropathies.

    PubMed

    Salter, Claire G; Beijer, Danique; Hardy, Holly; Barwick, Katy E S; Bower, Matthew; Mademan, Ines; De Jonghe, Peter; Deconinck, Tine; Russell, Mark A; McEntagart, Meriel M; Chioza, Barry A; Blakely, Randy D; Chilton, John K; De Bleecker, Jan; Baets, Jonathan; Baple, Emma L; Walk, David; Crosby, Andrew H

    2018-04-01

    To identify the genetic cause of disease in 2 previously unreported families with forms of distal hereditary motor neuropathies (dHMNs). The first family comprises individuals affected by dHMN type V, which lacks the cardinal clinical feature of vocal cord paralysis characteristic of dHMN-VII observed in the second family. Next-generation sequencing was performed on the proband of each family. Variants were annotated and filtered, initially focusing on genes associated with neuropathy. Candidate variants were further investigated and confirmed by dideoxy sequence analysis and cosegregation studies. Thorough patient phenotyping was completed, comprising clinical history, examination, and neurologic investigation. dHMNs are a heterogeneous group of peripheral motor neuron disorders characterized by length-dependent neuropathy and progressive distal limb muscle weakness and wasting. We previously reported a dominant-negative frameshift mutation located in the concluding exon of the SLC5A7 gene encoding the choline transporter (CHT), leading to protein truncation, as the likely cause of dominantly inherited dHMN-VII in an extended UK family. In this study, our genetic studies identified distinct heterozygous frameshift mutations located in the last coding exon of SLC5A7, predicted to result in the truncation of the CHT C-terminus, as the likely cause of the condition in each family. This study corroborates C-terminal CHT truncation as a cause of autosomal dominant dHMN, confirming upper limb predominating over lower limb involvement, and broadening the clinical spectrum arising from CHT malfunction.

  9. Performance of data-compression codes in channels with errors. Final report, October 1986-January 1987

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-10-01

    Huffman codes, comma-free codes, and block codes with shift indicators are important candidate message-compression codes for improving the efficiency of communications systems. This study was undertaken to determine whether these codes could be used to increase the throughput of the fixed very-low-frequency (FVLF) communication system. This application involves the use of compression codes in a channel with errors.
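
    The fragility that motivates such a study is easy to demonstrate: in a variable-length code, one flipped bit can desynchronize everything that follows. A toy Huffman example (illustrative only, not the FVLF system):

      code = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
      decode = {v: k for k, v in code.items()}

      def huffman_decode(bits):
          out, buf = [], ''
          for b in bits:
              buf += b
              if buf in decode:
                  out.append(decode[buf])
                  buf = ''
          return ''.join(out)

      clean = ''.join(code[s] for s in 'abacad')
      corrupt = '1' + clean[1:]         # flip only the first bit
      print(huffman_decode(clean))      # abacad
      print(huffman_decode(corrupt))    # cacad -- the whole message shifts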

  10. Reducing C-Terminal-Truncated Alpha-Synuclein by Immunotherapy Attenuates Neurodegeneration and Propagation in Parkinson's Disease-Like Models

    PubMed Central

    Games, Dora; Valera, Elvira; Spencer, Brian; Rockenstein, Edward; Mante, Michael; Adame, Anthony; Patrick, Christina; Ubhi, Kiren; Nuber, Silke; Sacayon, Patricia; Zago, Wagner; Seubert, Peter; Barbour, Robin; Schenk, Dale

    2014-01-01

    Parkinson's disease (PD) and dementia with Lewy bodies (DLB) are common neurodegenerative disorders of the aging population, characterized by progressive and abnormal accumulation of α-synuclein (α-syn). Recent studies have shown that C-terminus (CT) truncation and propagation of α-syn play a role in the pathogenesis of PD/DLB. Therefore, we explored the effect of passive immunization against the CT of α-syn in the mThy1-α-syn transgenic (tg) mouse model, which resembles the striato-nigral and motor deficits of PD. Mice were immunized with the new monoclonal antibodies 1H7, 5C1, or 5D12, all directed against the CT of α-syn. CT α-syn antibodies attenuated synaptic and axonal pathology, reduced the accumulation of CT-truncated α-syn (CT-α-syn) in axons, rescued the loss of tyrosine hydroxylase fibers in striatum, and improved motor and memory deficits. Among them, 1H7 and 5C1 were most effective at decreasing levels of CT-α-syn and higher-molecular-weight aggregates. Furthermore, in vitro studies showed that preincubation of recombinant α-syn with 1H7 and 5C1 prevented CT cleavage of α-syn. In a cell-based system, CT antibodies reduced cell-to-cell propagation of full-length α-syn, but not of the CT-α-syn that lacked the 118–126 aa recognition site needed for antibody binding. Furthermore, the results obtained after lentiviral expression of α-syn suggest that antibodies might be blocking the extracellular truncation of α-syn by calpain-1. Together, these results demonstrate that antibodies against the CT of α-syn reduce levels of CT-truncated fragments of the protein and its propagation, thus ameliorating PD-like pathology and improving behavioral and motor functions in a mouse model of this disease. PMID:25009275

  11. Reducing C-terminal-truncated alpha-synuclein by immunotherapy attenuates neurodegeneration and propagation in Parkinson's disease-like models.

    PubMed

    Games, Dora; Valera, Elvira; Spencer, Brian; Rockenstein, Edward; Mante, Michael; Adame, Anthony; Patrick, Christina; Ubhi, Kiren; Nuber, Silke; Sacayon, Patricia; Zago, Wagner; Seubert, Peter; Barbour, Robin; Schenk, Dale; Masliah, Eliezer

    2014-07-09

    Parkinson's disease (PD) and dementia with Lewy bodies (DLB) are common neurodegenerative disorders of the aging population, characterized by progressive and abnormal accumulation of α-synuclein (α-syn). Recent studies have shown that C-terminus (CT) truncation and propagation of α-syn play a role in the pathogenesis of PD/DLB. Therefore, we explored the effect of passive immunization against the CT of α-syn in the mThy1-α-syn transgenic (tg) mouse model, which resembles the striato-nigral and motor deficits of PD. Mice were immunized with the new monoclonal antibodies 1H7, 5C1, or 5D12, all directed against the CT of α-syn. CT α-syn antibodies attenuated synaptic and axonal pathology, reduced the accumulation of CT-truncated α-syn (CT-α-syn) in axons, rescued the loss of tyrosine hydroxylase fibers in striatum, and improved motor and memory deficits. Among them, 1H7 and 5C1 were most effective at decreasing levels of CT-α-syn and higher-molecular-weight aggregates. Furthermore, in vitro studies showed that preincubation of recombinant α-syn with 1H7 and 5C1 prevented CT cleavage of α-syn. In a cell-based system, CT antibodies reduced cell-to-cell propagation of full-length α-syn, but not of the CT-α-syn that lacked the 118-126 aa recognition site needed for antibody binding. Furthermore, the results obtained after lentiviral expression of α-syn suggest that antibodies might be blocking the extracellular truncation of α-syn by calpain-1. Together, these results demonstrate that antibodies against the CT of α-syn reduce levels of CT-truncated fragments of the protein and its propagation, thus ameliorating PD-like pathology and improving behavioral and motor functions in a mouse model of this disease.

  12. Development of RWHet to Simulate Contaminant Transport in Fractured Porous Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yong; LaBolle, Eric; Reeves, Donald M

    2012-07-01

    Accurate simulation of matrix diffusion in regional-scale dual-porosity and dual-permeability media is a critical issue for the DOE Underground Test Area (UGTA) program, given the prevalence of fractured geologic media on the Nevada National Security Site (NNSS). Contaminant transport through regional-scale fractured media is typically quantified by particle-tracking based Lagrangian solvers through the inclusion of dual-domain mass transfer algorithms that probabilistically determine particle transfer between fractures and unfractured matrix blocks. UGTA applications include a wide variety of fracture apertures and spacings, effective diffusion coefficients ranging over four orders of magnitude, and extreme end-member retardation values. This report incorporates the current dual-domain mass transfer algorithms into the well-known particle tracking code RWHet [LaBolle, 2006], and then tests and evaluates the updated code. We also develop and test a direct numerical simulation (DNS) approach to replace the classical transfer probability method in characterizing particle dynamics across the fracture/matrix interface. The final goal of this work is to implement the algorithm identified as most efficient and effective into RWHet, so that an accurate and computationally efficient software suite can be built for dual-porosity/dual-permeability applications. RWHet is a mature Lagrangian transport simulator with a substantial user base that has undergone significant development and model validation. In this report, we also substantially tested the capability of RWHet in simulating passive and reactive tracer transport through regional-scale, heterogeneous media. Four dual-domain mass transfer methodologies were considered in this work. We first developed the empirical transfer probability approach proposed by Liu et al. [2000], and coded it into RWHet. The particle transfer probability from one continuum to the other is proportional to the ratio of the mass entering the other continuum to the mass in the current continuum. Numerical examples show that this method is limited to certain ranges of parameters, due to an intrinsic assumption of an equilibrium concentration profile in the matrix blocks in building the transfer probability. Consequently, this method fails to describe mass transfer for parameter combinations that violate this assumption, including small diffusion coefficients (i.e., a free-water molecular diffusion coefficient of 1×10⁻¹¹ m²/s), relatively large fracture spacings (on the order of a meter), and/or relatively large matrix retardation coefficients. These "outliers" in parameter range are common in UGTA applications. To address the above limitations, we then developed a Direct Numerical Simulation (DNS)-Reflective method. The novel DNS-Reflective method can directly track particle dynamics across the fracture/matrix interface using a random walk, without any empirical assumptions. This advantage should make the DNS-Reflective method feasible for a wide range of parameters. Numerical tests of the DNS-Reflective method, however, show that it is computationally very demanding, since the time step must be very small to resolve particle transfer between fractures and matrix blocks. To improve the computational efficiency of the DNS approach, we then adopted Roubinet et al.'s method [2009], which uses first passage time distributions to simulate dual-domain mass transfer.
The DNS-Roubinet method was found to be computationally more efficient than the DNS-Reflective method. It matches the analytical solution for the whole range of major parameters (including diffusion coefficient and fracture aperture values that are considered "outliers" for Liu et al.'s transfer probability method [2000]) for a single fracture system. The DNS-Roubinet method, however, has its own disadvantage: for a parallel fracture system, truncation of the first passage time distribution creates apparent errors when the fracture spacing is small, and thus it tends to erroneously predict breakthrough curves (BTCs) for the parallel fracture system. Finally, we adopted the transient range approach proposed by Pan and Bodvarsson [2002] in RWHet. In this method, particle transfer between fractures and matrix blocks can be resolved without using very small time steps. It does not use any truncation of the first passage time distribution for particles. Hence it does not have the limitations identified above for the DNS-Reflective and DNS-Roubinet methods. Numerical results were checked against analytical solutions, and also compared to DCPTV2.0 [Pan, 2002]. This version of RWHet (called RWHet-Pan&Bodvarsson in this report) can accurately capture contaminant transport in fractured porous media for a full range of parameters without any practical or theoretical limitations.

  13. Numerical study of supersonic combustors by multi-block grids with mismatched interfaces

    NASA Technical Reports Server (NTRS)

    Moon, Young J.

    1990-01-01

    A three-dimensional, finite-rate chemistry, Navier-Stokes code was extended to a multi-block code with mismatched interfaces for practical calculations of supersonic combustors. To ensure global conservation, a conservative algorithm was used for the treatment of mismatched interfaces. The extended code was checked against one test case, a generic supersonic combustor with transverse fuel injection, examining solution accuracy, convergence, and local mass flux error. After testing, the code was used to simulate the chemically reacting flow fields in a scramjet combustor with parallel fuel injectors (unswept and swept ramps). Computational results were compared with experimental shadowgraph and pressure measurements. Fuel-air mixing characteristics of the unswept and swept ramps were compared and investigated.

  14. Soft-Input Soft-Output Modules for the Construction and Distributed Iterative Decoding of Code Networks

    NASA Technical Reports Server (NTRS)

    Benedetto, S.; Divsalar, D.; Montorsi, G.; Pollara, F.

    1998-01-01

    Soft-input soft-output building blocks (modules) are presented to construct and iteratively decode in a distributed fashion code networks, a new concept that includes, and generalizes, various forms of concatenated coding schemes.

  15. Neural Coding Mechanisms in Gustation.

    DTIC Science & Technology

    1980-09-15

    world is composed of four primary tastes (sweet, sour, salty, and bitter), and that each of these is carried by a separate and private neural line, thus ...ted sweet-sour-salty-bitter types. The mathematical method of analysis was hierarchical cluster analysis based on the responses of many neurons (20 to... Keywords: taste, neural coding, neural organization, stimulus organization, olfaction.

  16. Investigation of upwind, multigrid, multiblock numerical schemes for three dimensional flows. Volume 1: Runge-Kutta methods for a thin layer Navier-Stokes solver

    NASA Technical Reports Server (NTRS)

    Cannizzaro, Frank E.; Ash, Robert L.

    1992-01-01

    A state-of-the-art computer code has been developed that incorporates a modified Runge-Kutta time integration scheme, upwind numerical techniques, multigrid acceleration, and multi-block capabilities (RUMM). A three-dimensional thin-layer formulation of the Navier-Stokes equations is employed. For turbulent flow cases, the Baldwin-Lomax algebraic turbulence model is used. Two different upwind techniques are available: van Leer's flux-vector splitting and Roe's flux-difference splitting. Full-approximation multigrid plus implicit residual and corrector smoothing were implemented to enhance the rate of convergence. Multi-block capabilities were developed to provide geometric flexibility. This feature allows the developed computer code to accommodate any grid topology or grid configuration with multiple topologies. The results shown in this dissertation were chosen to validate the computer code and display its geometric flexibility, which is provided by the multi-block structure.

  17. Growth of surface and corner cracks in beta-processed and mill-annealed Ti-6Al-4V

    NASA Technical Reports Server (NTRS)

    Bell, P. D.

    1975-01-01

    Empirical stress-intensity expressions were developed to relate the growth of cracks from corner flaws to the growth of cracks from surface flaws. An experimental program using beta-processed Ti-6Al-4V verified these expressions for stress ratios, R greater than or equal to 0. An empirical crack growth-rate expression which included stress-ratio and stress-level effects was also developed. Cracks grew approximately 10 percent faster in transverse-grain material than in longitudinal-grain material and at approximately the same rate in longitudinal-grain mill-annealed Ti-6Al-4V. Specimens having surface and corner cracks and made of longitudinal-grain, beta-processed material were tested with block loads, and increasing the stresses in a block did not significantly change the crack growth rates. Truncation of the basic ascending stress sequence within a block caused more rapid crack growth, whereas both the descending and low-to-high stress sequences slowed crack growth.

  18. Independent Assessment Plan: LAV-25

    DTIC Science & Technology

    1989-06-27


  19. An In vitro evaluation of the reliability of QR code denture labeling technique

    PubMed Central

    Poovannan, Sindhu; Jain, Ashish R.; Krishnan, Cakku Jalliah Venkata; Chandran, Chitraa R.

    2016-01-01

    Statement of Problem: Positive identification of the dead after accidents and disasters through labeled dentures plays a key role in forensic scenarios. A number of denture labeling methods are available, and studies evaluating their reliability under drastic conditions are vital. Aim: This in vitro study was conducted to evaluate the reliability of QR (Quick Response) Codes labeled at various depths in heat-cured acrylic blocks after acid treatment, heat treatment (burns), and fracture in forensics. Materials and Methods: This study included 160 specimens of heat-cured acrylic blocks (1.8 cm × 1.8 cm), divided into 4 groups (40 samples per group). QR Codes were incorporated into the samples using clear acrylic sheet, and the samples were assessed for reliability under various depths, acid, heat, and fracture. Data were analyzed using the Chi-square test and test of proportion. Results: The QR Code inclusion technique was reliable under various depths of acrylic sheet, acid (sulfuric acid 99%, hydrochloric acid 40%), and heat (up to 370°C). Results were variable with fracture of QR Code labeled acrylic blocks. Conclusion: Within the limitations of the study, the results clearly indicated that the QR Code technique was reliable under various depths of acrylic sheet, acid, and heat (370°C). Effectiveness varied with fracture and depended on the level of distortion. This study thus suggests that the QR Code is an effective and simpler denture labeling method. PMID:28123284

  20. Area, speed and power measurements of FPGA-based complex orthogonal space-time block code channel encoders

    NASA Astrophysics Data System (ADS)

    Passas, Georgios; Freear, Steven; Fawcett, Darren

    2010-01-01

    Space-time coding (STC) is an important milestone in modern wireless communications. In this technique, multiple copies of the same signal are transmitted through different antennas (space) and different symbol periods (time) to improve the robustness of a wireless system by increasing its diversity gain. STCs are channel coding algorithms that can be readily implemented on a field programmable gate array (FPGA) device. This work provides figures for the amount of required FPGA hardware resources, the speed at which the algorithms can operate, and the power consumption requirements of a space-time block code (STBC) encoder. Seven encoder very high-speed integrated circuit hardware description language (VHDL) designs have been coded, synthesised and tested. Each design realises a complex orthogonal space-time block code with a different transmission matrix. All VHDL designs are parameterisable in terms of sample precision. Precisions ranging from 4 bits to 32 bits have been synthesised. Alamouti's STBC encoder design [Alamouti, S.M. (1998), 'A Simple Transmit Diversity Technique for Wireless Communications', IEEE Journal on Selected Areas in Communications, 16(8):1451-1458.] proved to be the best trade-off, since it is on average 3.2 times smaller, 1.5 times faster and requires slightly less power than the next best trade-off in the comparison, which is a 3/4-rate full-diversity 3Tx-antenna STBC.
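
    For reference, Alamouti's transmission matrix, the scheme realised by the smallest encoder in the comparison, maps each pair of complex symbols onto two antennas over two symbol periods. A floating-point Python sketch (the FPGA designs operate on fixed-point samples):

      import numpy as np

      def alamouti_encode(symbols):
          # Each 2x2 block: rows are symbol periods, columns are antennas.
          # Period 1: antenna 1 sends s1,    antenna 2 sends s2.
          # Period 2: antenna 1 sends -s2*,  antenna 2 sends s1*.
          return [np.array([[s1, s2],
                            [-np.conj(s2), np.conj(s1)]])
                  for s1, s2 in zip(symbols[0::2], symbols[1::2])]

      print(alamouti_encode(np.array([1 + 1j, 1 - 1j]))[0])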

  1. Accumulate-Repeat-Accumulate-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Thorpe, Jeremy

    2004-01-01

    Inspired by recently proposed Accumulate-Repeat-Accumulate (ARA) codes [15], in this paper we propose a channel coding scheme called Accumulate-Repeat-Accumulate-Accumulate (ARAA) codes. These codes can be seen as serial turbo-like codes or as a subclass of Low Density Parity Check (LDPC) codes, and they have a projected graph or protograph representation; this allows for a high-speed iterative decoder implementation using belief propagation. An ARAA code can be viewed as a precoded Repeat-and-Accumulate (RA) code with puncturing in concatenation with another accumulator, where simply an accumulator is chosen as the precoder; thus ARAA codes have a very fast encoder structure. Using density evolution on their associated protographs, we find examples of rate-1/2 ARAA codes with maximum variable node degree 4 for which a minimum bit-SNR as low as 0.21 dB from the channel capacity limit can be achieved as the block size goes to infinity. Such a low threshold cannot be achieved by RA or Irregular RA (IRA) or unstructured irregular LDPC codes with the same constraint on the maximum variable node degree. Furthermore, by puncturing the accumulators we can construct families of higher-rate ARAA codes with thresholds that stay uniformly close to their respective channel capacity thresholds. Iterative decoding simulation results show comparable performance with the best-known LDPC codes but with very low error floor even at moderate block sizes.
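
    The "very fast encoder structure" follows from the fact that every constituent module is a simple stream operation. A toy sketch of an ARAA-style chain, with arbitrary repeat and puncture parameters rather than the paper's optimized protograph:

      import numpy as np

      def accumulate(bits):
          # Running mod-2 sum: y[i] = x[0] ^ x[1] ^ ... ^ x[i].
          return np.bitwise_xor.accumulate(bits)

      def repeat(bits, q=3):
          return np.repeat(bits, q)

      def puncture(bits, keep_every=2):
          return bits[::keep_every]

      u = np.array([1, 0, 1, 1], dtype=np.uint8)
      # Precode (accumulate), repeat, puncture, then accumulate again.
      v = accumulate(puncture(repeat(accumulate(u))))
      print(v)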

  2. Biodegradable microfabricated plug-filters for glaucoma drainage devices.

    PubMed

    Maleki, Teimour; Chitnis, Girish; Park, Jun Hyeong; Cantor, Louis B; Ziaie, Babak

    2012-06-01

    We report on the development of a batch-fabricated biodegradable truncated-cone-shaped plug filter to overcome postoperative hypotony in nonvalved glaucoma drainage devices. Plug filters are composed of biodegradable polymers that disappear once wound healing and bleb formation have progressed past the stage where hypotony from overfiltration may cause complications in the human eye. The biodegradable nature of the device eliminates the risks associated with permanent valves that may become blocked or influence the aqueous fluid flow rate in the long term. The plug-filter geometry simplifies its integration with commercial shunts. Aqueous humor outflow regulation is achieved by controlling the diameter of a laser-drilled through-hole. The batch-compatible fabrication involves a modified SU-8 molding to achieve truncated-cone-shaped pillars, polydimethylsiloxane micromolding, and hot embossing of biodegradable polymers. The developed plug filter is 500 μm long with base and apex plane diameters of 500 and 300 μm, respectively, and incorporates a laser-drilled through-hole with a 44-μm effective diameter in the center.

  3. Role of protein synthesis and DNA methylation in the consolidation and maintenance of long-term memory in Aplysia

    PubMed Central

    Pearce, Kaycey; Cai, Diancai; Roberts, Adam C; Glanzman, David L

    2017-01-01

    Previously, we reported that long-term memory (LTM) in Aplysia can be reinstated by truncated (partial) training following its disruption by reconsolidation blockade and inhibition of PKM (Chen et al., 2014). Here, we report that LTM can be induced by partial training after disruption of original consolidation by protein synthesis inhibition (PSI) begun shortly after training. But when PSI occurs during training, partial training cannot subsequently establish LTM. Furthermore, we find that inhibition of DNA methyltransferase (DNMT), whether during training or shortly afterwards, blocks consolidation of LTM and prevents its subsequent induction by truncated training; moreover, later inhibition of DNMT eliminates consolidated LTM. Thus, the consolidation of LTM depends on two functionally distinct phases of protein synthesis: an early phase that appears to prime LTM; and a later phase whose successful completion is necessary for the normal expression of LTM. Both the consolidation and maintenance of LTM depend on DNA methylation. DOI: http://dx.doi.org/10.7554/eLife.18299.001 PMID:28067617

  4. Development of a Simulink Library for the Design, Testing and Simulation of Software Defined GPS Radios. With Application to the Development of Parallel Correlator Structures

    DTIC Science & Technology

    2014-05-01

    function Value = Select_Element(Index,Signal) %#eml
    Value = Signal(Index);
    (Code Listing 1: Code for Selector Block)

    ...code for the Simulink function:

    function shiftedSignal = fcn(signal,Shift) %#eml
    shiftedSignal = circshift(signal,Shift);
    (Code Listing 2: Code for CircShift)

  5. Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun

    1996-01-01

    In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional approximation at high SNR, P_b ≈ (d_H/N)·P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.
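
    The approximation P_b ≈ (d_H/N)·P_s for systematic encoding can be checked numerically. Below is a small Monte Carlo sketch using a systematic Hamming (7,4) code (d_H = 3, N = 7) with exhaustive ML decoding over BPSK/AWGN; the SNR and sample size are illustrative choices.

      import numpy as np

      rng = np.random.default_rng(1)
      P = np.array([[1,1,0],[1,0,1],[0,1,1],[1,1,1]], dtype=np.uint8)
      G = np.hstack([np.eye(4, dtype=np.uint8), P])        # systematic generator
      msgs = np.array([[(i >> k) & 1 for k in range(4)]
                       for i in range(16)], dtype=np.uint8)
      book = (msgs @ G) % 2                                # all 16 codewords
      tx = 1.0 - 2.0 * book                                # BPSK: 0 -> +1, 1 -> -1

      def trial(snr_db, n=20000):
          sigma = np.sqrt(0.5 / 10 ** (snr_db / 10))
          idx = rng.integers(16, size=n)
          r = tx[idx] + sigma * rng.standard_normal((n, 7))
          ml = np.argmax(r @ tx.T, axis=1)   # max correlation = ML on AWGN
          p_b = (msgs[ml] != msgs[idx]).mean()
          p_s = (ml != idx).mean()
          return p_b, p_s

      p_b, p_s = trial(6.0)
      print(p_b, (3 / 7) * p_s)              # the two should be comparable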

  6. Mixture block coding with progressive transmission in packet video. Appendix 1: Item 2. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Chen, Yun-Chung

    1989-01-01

    Video transmission will become an important part of future multimedia communication because of dramatically increasing user demand for video and the rapid evolution of coding algorithms and VLSI technology. Video transmission will be part of the broadband integrated services digital network (B-ISDN). Asynchronous transfer mode (ATM) is a viable candidate for implementation of B-ISDN due to its inherent flexibility, service independence, and high performance. According to the characteristics of ATM, the information has to be coded into discrete cells which travel independently in the packet-switching network. A practical realization of an ATM video codec called Mixture Block Coding with Progressive Transmission (MBCPT) is presented. This variable bit rate coding algorithm shows how a constant-quality performance can be obtained according to user demand. Interactions between codec and network are emphasized, including packetization, service synchronization, flow control, and error recovery. Finally, some simulation results based on MBCPT coding with error recovery are presented.

  7. Comparison of Measured and Block Structured Simulations for the F-16XL Aircraft

    NASA Technical Reports Server (NTRS)

    Boelens, O. J.; Badcock, K. J.; Elmilgui, A.; Abdol-Hamid, K. S.; Massey, S. J.

    2008-01-01

    This article presents a comparison of the predictions of three RANS codes for flight conditions of the F-16XL aircraft which feature vortical flow. The three codes, ENSOLV, PMB and PAB3D, solve on structured multi-block grids. Flight data for comparison were available in the form of surface pressures, skin friction, boundary layer data and photographs of tufts. The three codes provided predictions which were consistent with expectations based on the turbulence modelling used, namely k-ω, k-ω with vortex corrections, and an Algebraic Stress Model. The agreement with flight data was good, with the exception of the outer-wing primary vortex strength. The confidence in the application of the CFD codes to complex fighter configurations increased significantly through this study.

  8. Efficient random access high resolution region-of-interest (ROI) image retrieval using backward coding of wavelet trees (BCWT)

    NASA Astrophysics Data System (ADS)

    Corona, Enrique; Nutter, Brian; Mitra, Sunanda; Guo, Jiangling; Karp, Tanja

    2008-03-01

    Efficient retrieval of high-quality Regions-Of-Interest (ROI) from high-resolution medical images is essential for reliable interpretation and accurate diagnosis. Random access to high-quality ROI from codestreams is becoming an essential feature in many still-image compression applications, particularly in viewing diseased areas in large medical images. This feature is easier to implement in block-based codecs because of the inherent spatial independence of the code blocks. This independence implies that the decoding order of the blocks is unimportant as long as the position of each is properly identified. In contrast, wavelet-tree based codecs naturally use some interdependency that exploits the decaying spectrum model of the wavelet coefficients. Thus one must keep track of the decoding order from level to level with such codecs. We have developed an innovative multi-rate image subband coding scheme using "Backward Coding of Wavelet Trees (BCWT)" which is fast, memory efficient, and resolution scalable. It offers far less complexity than many other existing codecs, including both wavelet-tree and block-based algorithms. The ROI feature in BCWT is implemented through a transcoder stage that generates a new BCWT codestream containing only the information associated with the user-defined ROI. This paper presents an efficient technique that locates a particular ROI within the BCWT coded domain and decodes it back to the spatial domain. This technique allows better access and proper identification of pathologies in high-resolution images, since only a small fraction of the codestream is required to be transmitted and analyzed.

  9. Optimum Cyclic Redundancy Codes for Noisy Channels

    NASA Technical Reports Server (NTRS)

    Posner, E. C.; Merkey, P.

    1986-01-01

    Capabilities and limitations of cyclic redundancy codes (CRCs) for detecting transmission errors in data sent over relatively noisy channels (e.g., voice-grade telephone lines or very-high-density storage media) are discussed in this 16-page report. Because data transmission predominantly uses bytes in multiples of 8 bits, the report is primarily concerned with cases in which both the block length and the number of redundant bits (check bits for use in error detection) included in each block are multiples of 8 bits.
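
    For concreteness, a bitwise CRC in the spirit of the codes the report studies; the CRC-16/CCITT-FALSE parameters below are a common choice, not necessarily the report's:

      def crc16(data: bytes, poly=0x1021, crc=0xFFFF):
          # MSB-first bitwise CRC over a 16-bit register.
          for byte in data:
              crc ^= byte << 8
              for _ in range(8):
                  if crc & 0x8000:
                      crc = ((crc << 1) ^ poly) & 0xFFFF
                  else:
                      crc = (crc << 1) & 0xFFFF
          return crc

      print(hex(crc16(b"123456789")))  # 0x29b1, the standard check value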

  10. Survey of adaptive image coding techniques

    NASA Technical Reports Server (NTRS)

    Habibi, A.

    1977-01-01

    The general problem of image data compression is discussed briefly with attention given to the use of Karhunen-Loeve transforms, suboptimal systems, and block quantization. A survey is then conducted encompassing the four categories of adaptive systems: (1) adaptive transform coding (adaptive sampling, adaptive quantization, etc.), (2) adaptive predictive coding (adaptive delta modulation, adaptive DPCM encoding, etc.), (3) adaptive cluster coding (blob algorithms and the multispectral cluster coding technique), and (4) adaptive entropy coding.

  11. Chimeric classical swine fever (CSF)-Japanese encephalitis (JE) viral particles as a non-transmissible bivalent marker vaccine candidate against CSF and JE infections

    USDA-ARS?s Scientific Manuscript database

    A trans-complemented CSF-JE chimeric viral replicon was constructed using an infectious cDNA clone of the CSF virus (CSFV) Alfort/187 strain. The E2 gene of the CSFV Alfort/187 strain was deleted, and the resultant plasmid pA187delE2 was inserted with a fragment containing the region coding for a truncate...

  12. JPEG 2000 Encoding with Perceptual Distortion Control

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Liu, Zhen; Karam, Lina J.

    2008-01-01

    An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is a prerequisite to a meaningful summary of JPEG 2000 encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion-optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes (except for the MSB plane, which is coded using only one "clean-up" coding pass). For M bit planes, this subprocess involves a total of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in the bit rate associated with each coding pass is collected. This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.
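
    The rate-control step can be caricatured as a greedy selection over coding passes ranked by distortion reduction per bit. A deliberately simplified sketch (real post-compression rate-distortion optimization additionally enforces per-block truncation-point ordering and convex-hull slopes):

      def allocate(passes, budget_bits):
          # passes: list of (block_id, pass_no, bits, distortion_drop).
          chosen, spent = [], 0
          for p in sorted(passes, key=lambda p: p[3] / p[2], reverse=True):
              if spent + p[2] <= budget_bits:
                  chosen.append(p)
                  spent += p[2]
          return chosen, spent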

  13. Approximate solutions for diffusive fracture-matrix transfer: Application to storage of dissolved CO2 in fractured rocks

    DOE PAGES

    Zhou, Quanlin; Oldenburg, Curtis M.; Spangler, Lee H.; ...

    2017-01-05

    Analytical solutions with infinite exponential series are available to calculate the rate of diffusive transfer between low-permeability blocks and high-permeability zones in the subsurface. Truncation of these series is often employed by neglecting the early-time regime. Here we present unified-form approximate solutions in which the early-time and the late-time solutions are continuous at a switchover time. The early-time solutions are based on three-term polynomial functions in terms of the square root of dimensionless time, with the first coefficient dependent only on the dimensionless area-to-volume ratio. The last two coefficients are either determined analytically for isotropic blocks (e.g., spheres and slabs) or obtained by fitting the exact solutions, and they depend solely on the aspect ratios for rectangular columns and parallelepipeds. For the late-time solutions, only the leading exponential term is needed for isotropic blocks, while a few additional exponential terms are needed for highly anisotropic rectangular blocks. The optimal switchover time is between 0.157 and 0.229, with highest relative approximation error less than 0.2%. The solutions are used to demonstrate the storage of dissolved CO2 in fractured reservoirs with low-permeability matrix blocks of single and multiple shapes and sizes. These approximate solutions are building blocks for development of analytical and numerical tools for hydraulic, solute, and thermal diffusion processes in low-permeability matrix blocks.
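
    To illustrate the switchover idea for an isotropic block, the sketch below joins the classical early-time expansion and the leading late-time exponential term for a sphere, with tau = D·t/R² the dimensionless time; the paper's fitted three-term polynomials and its anisotropic cases differ in detail.

      import numpy as np

      def sphere_uptake(tau, tau_switch=0.18):
          # Fractional diffusive uptake of a sphere, early/late forms joined
          # at a switchover time inside the paper's 0.157-0.229 window.
          tau = np.asarray(tau, dtype=float)
          early = 6.0 * np.sqrt(tau / np.pi) - 3.0 * tau
          late = 1.0 - (6.0 / np.pi ** 2) * np.exp(-np.pi ** 2 * tau)
          return np.where(tau < tau_switch, early, late)

      print(sphere_uptake([0.01, 0.18, 1.0]))  # nearly continuous at 0.18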

  14. Structure and biochemical functions of four simian virus 40 truncated large-T antigens.

    PubMed Central

    Chaudry, F; Harvey, R; Smith, A E

    1982-01-01

    The structure of four abnormal T antigens which are present in different simian virus 40 (SV40)-transformed mouse cell lines was studied by tryptic peptide mapping, partial proteolysis fingerprinting, immunoprecipitation with monoclonal antibodies, and in vitro translation. The results obtained allowed us to deduce that these proteins, which have apparent molecular weights of 15,000, 22,000, 33,000 and 45,000, are truncated forms of large-T antigen extending by different amounts into the amino acid sequences unique to large-T. The proteins are all phosphorylated, probably at a site between amino acids 106 and 123. The mRNAs coding for the proteins probably contain the normal large-T splice but are shorter than the normal transcripts of the SV40 early region. The truncated large-Ts were tested for the ability to bind to double-stranded DNA-cellulose. This showed that the 33,000- and 45,000-molecular-weight polypeptides contained sequences sufficient for binding under the conditions used, whereas the 15,000- and 22,000-molecular-weight forms did not. Together with published data, this allows the tentative mapping of a region of SV40 large-T between amino acids 109 and 272 that is necessary and may be sufficient for binding to double-stranded DNA-cellulose in vitro. None of the truncated large-T species formed a stable complex with the host cell protein referred to as nonviral T-antigen or p53, suggesting that the carboxy-terminal sequences of large-T are necessary for complex formation. PMID:6292504

  15. [Effect of N-terminal truncation of Bacillus acidopullulyticus pullulanase on enzyme properties and functions].

    PubMed

    Chen, A'na; Liu, Xiuxia; Dai, Xiaofeng; Zhan, Jinling; Peng, Feng; Li, Lu; Wang, Fen; Li, Song; Yang, Yankun; Bai, Zhonghu

    2016-03-01

    We constructed different N-terminal truncated variants based on the Bacillus acidopullulyticus pullulanase 3D structure (PDB code 2WAN), and studied the effects of truncation on soluble expression, enzymatic properties, and application in saccharification. Upon expression, the variants with the X45 domain deleted existed as inclusion bodies, whereas deletion of the CBM41 domain had a positive effect on the soluble expression level. The variants lacking CBM41 (M1), lacking X25 (M3), and lacking both CBM41 and X25 (M5) had the same optimal pH (5.0) and optimal temperature (60 degrees C) as the wild-type pullulanase (WT). The Km of M1 and M5 were 1.42 mg/mL and 1.85 mg/mL, respectively, 2.4- and 3.1-fold higher than that of the WT. The kcat/Km value of M5 was 40% lower than that of the WT. Substrate specificity results show that the enzymes exhibited greater activity with low-molecular-weight dextrin than with high-molecular-weight soluble starch. When pullulanases were added to the saccharification reaction system, the dextrose equivalents of the WT, M1, M3, and M5 were 93.6%, 94.7%, 94.5%, and 93.1%, respectively. These results indicate that deletion of the CBM41 domain and/or X25 domain did not affect the practical application in the starch saccharification process. Furthermore, low-molecular-weight variants facilitate heterologous expression. Truncated variants may be more suitable for industrial production than the WT.

  16. Truncating SLC5A7 mutations underlie a spectrum of dominant hereditary motor neuropathies

    PubMed Central

    Salter, Claire G.; Beijer, Danique; Hardy, Holly; Barwick, Katy E.S.; Bower, Matthew; Mademan, Ines; De Jonghe, Peter; Deconinck, Tine; Russell, Mark A.; McEntagart, Meriel M.; Chioza, Barry A.; Blakely, Randy D.; Chilton, John K.; De Bleecker, Jan; Baets, Jonathan; Baple, Emma L.

    2018-01-01

    Objective: To identify the genetic cause of disease in 2 previously unreported families with forms of distal hereditary motor neuropathies (dHMNs). Methods: The first family comprises individuals affected by dHMN type V, which lacks the cardinal clinical feature of vocal cord paralysis characteristic of dHMN-VII observed in the second family. Next-generation sequencing was performed on the proband of each family. Variants were annotated and filtered, initially focusing on genes associated with neuropathy. Candidate variants were further investigated and confirmed by dideoxy sequence analysis and cosegregation studies. Thorough patient phenotyping was completed, comprising clinical history, examination, and neurologic investigation. Results: dHMNs are a heterogeneous group of peripheral motor neuron disorders characterized by length-dependent neuropathy and progressive distal limb muscle weakness and wasting. We previously reported a dominant-negative frameshift mutation located in the concluding exon of the SLC5A7 gene encoding the choline transporter (CHT), leading to protein truncation, as the likely cause of dominantly inherited dHMN-VII in an extended UK family. In this study, our genetic studies identified distinct heterozygous frameshift mutations located in the last coding exon of SLC5A7, predicted to result in the truncation of the CHT C-terminus, as the likely cause of the condition in each family. Conclusions: This study corroborates C-terminal CHT truncation as a cause of autosomal dominant dHMN, confirming upper limb predominating over lower limb involvement, and broadening the clinical spectrum arising from CHT malfunction. PMID:29582019

  17. Protecting quantum memories using coherent parity check codes

    NASA Astrophysics Data System (ADS)

    Roffe, Joschka; Headley, David; Chancellor, Nicholas; Horsman, Dominic; Kendon, Viv

    2018-07-01

    Coherent parity check (CPC) codes are a new framework for the construction of quantum error correction codes that encode multiple qubits per logical block. CPC codes have a canonical structure involving successive rounds of bit and phase parity checks, supplemented by cross-checks to fix the code distance. In this paper, we provide a detailed introduction to CPC codes using conventional quantum circuit notation. We demonstrate the implementation of a CPC code on real hardware, by designing a [[4, 2, 2]] code.
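
    As a classical caricature of what one such parity check buys, the sketch below shows how a ZZZZ-type check over four qubits flags, but cannot locate, any single bit flip; this is an illustration of the detection principle only, not the paper's circuit.

      import numpy as np

      check = np.ones(4, dtype=np.uint8)              # ZZZZ acts as overall parity
      state = np.array([0, 1, 1, 0], dtype=np.uint8)  # an even-parity codeword
      for i in range(4):
          err = state.copy()
          err[i] ^= 1                                 # single bit (Pauli-X) flip
          assert (check @ err) % 2 == 1               # every single flip is flagged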

  18. UNIPIC code for simulations of high power microwave devices

    NASA Astrophysics Data System (ADS)

    Wang, Jianguo; Zhang, Dianhui; Liu, Chunliang; Li, Yongdong; Wang, Yue; Wang, Hongguang; Qiao, Hailiang; Li, Xiaoze

    2009-03-01

    In this paper, UNIPIC code, a new member in the family of fully electromagnetic particle-in-cell (PIC) codes for simulations of high power microwave (HPM) generation, is introduced. In the UNIPIC code, the electromagnetic fields are updated using the second-order finite-difference time-domain (FDTD) method, and the particles are moved using the relativistic Newton-Lorentz force equation. The convolutional perfectly matched layer method is used to truncate the open boundaries of HPM devices. To model curved surfaces and avoid the time-step reduction of the conformal-path FDTD method, the CP weakly conditionally stable FDTD (CP WCS FDTD) method, which combines the WCS FDTD and CP-FDTD methods, is implemented. UNIPIC is two-and-a-half dimensional, is written in the object-oriented C++ language, and can be run on a variety of platforms including WINDOWS, LINUX, and UNIX. Users can use the graphical user interface to create the geometric structures of the simulated HPM devices, or load previously created structures. Numerical experiments on some typical HPM devices using the UNIPIC code are presented; the results agree well with those obtained from some well-known PIC codes.
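
    As a sketch of the field-update core that such PIC codes are built around (particle pushing and the convolutional PML boundaries are omitted, and all names and units are illustrative), the following one-dimensional Yee/FDTD loop shows the second-order leapfrog update described above.

    ```python
    # Illustrative 1D Yee/FDTD leapfrog update in normalized units; the
    # grid ends act as reflecting walls since UNIPIC's CPML absorbing
    # boundaries are not modelled in this sketch.
    import numpy as np

    nx, nt = 200, 150
    c, dx = 1.0, 1.0
    dt = 0.5 * dx / c            # Courant-stable time step
    ez = np.zeros(nx)            # E-field at integer grid points
    hy = np.zeros(nx - 1)        # H-field at half-integer points

    for n in range(nt):
        # Faraday's law: advance H a half step from the curl of E.
        hy += (dt / dx) * (ez[1:] - ez[:-1])
        # Ampere's law: advance E from the curl of H.
        ez[1:-1] += (dt / dx) * (hy[1:] - hy[:-1])
        # Soft source: a Gaussian pulse injected at the grid centre.
        ez[nx // 2] += np.exp(-0.5 * ((n - 30) / 10.0) ** 2)

    print("peak |Ez| after propagation:", np.abs(ez).max())
    ```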

  19. Method for rapid high-frequency seismogram calculation

    NASA Astrophysics Data System (ADS)

    Stabile, Tony Alfredo; De Matteis, Raffaella; Zollo, Aldo

    2009-02-01

    We present a method for rapid, high-frequency seismogram calculation that makes use of an algorithm to automatically generate an exhaustive set of seismic phases with an appreciable amplitude on the seismogram. The method uses a hierarchical order of ray and seismic-phase generation, taking into account existing constraints on ray paths as well as physical constraints. To compute synthetic seismograms, the COMRAD code (from the Italian "COdice Multifase per il RAy-tracing Dinamico") uses a dynamic ray-tracing code as its core. To validate the code, we computed synthetic seismograms in a layered medium using both COMRAD and a code that computes the complete wave field by the discrete wavenumber method. The seismograms are compared according to a time-frequency misfit criterion based on the continuous wavelet transform of the signals. Although the number of phases is considerably reduced by the selection criteria, the results show that the loss in amplitude over the whole seismogram is negligible. Moreover, the computation time for the synthetics using the COMRAD code (truncating the ray series at the 10th generation) is three- to four-fold less than that needed by the AXITRA code (up to a frequency of 25 Hz).
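
    The hierarchical phase generation with amplitude-based pruning can be pictured as a recursive enumeration. In the toy sketch below the event types, amplitude coefficients, and threshold are invented for illustration; the actual COMRAD code applies ray-path and physical constraints rather than a single scalar amplitude per event.

    ```python
    # Hypothetical sketch of hierarchical seismic-phase generation: each
    # generation appends one reflection/transmission/conversion event, and
    # branches with negligible amplitude are pruned. Coefficients invented.
    AMPL = {"reflect": 0.4, "transmit": 0.8, "convert": 0.3}
    THRESHOLD = 0.05   # discard phases with negligible amplitude
    MAX_GEN = 10       # truncate the ray series at the 10th generation

    def generate(phase=("direct",), amplitude=1.0, generation=0):
        """Depth-first enumeration of phases with appreciable amplitude."""
        yield phase, amplitude
        if generation == MAX_GEN:
            return
        for event, coeff in AMPL.items():
            if amplitude * coeff >= THRESHOLD:   # pruning constraint
                yield from generate(phase + (event,), amplitude * coeff,
                                    generation + 1)

    phases = list(generate())
    print(f"{len(phases)} phases retained up to generation {MAX_GEN}")
    ```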

  20. tRNA acceptor-stem and anticodon bases embed separate features of amino acid chemistry

    PubMed Central

    Carter, Charles W.; Wolfenden, Richard

    2016-01-01

    The universal genetic code is a translation table by which nucleic acid sequences can be interpreted as polypeptides with a wide range of biological functions. That information is used by aminoacyl-tRNA synthetases to translate the code. Moreover, amino acid properties dictate protein folding. We recently reported that digital correlation techniques could identify patterns in tRNA identity elements that govern recognition by synthetases. Our analysis, and the functionality of truncated synthetases that cannot recognize the tRNA anticodon, support the conclusion that the tRNA acceptor stem houses an independent code for the same 20 amino acids that likely functioned earlier in the emergence of genetics. The acceptor-stem code, related to amino acid size, is distinct from a code in the anticodon that is related to amino acid polarity. Details of the acceptor-stem code suggest that it was useful in preserving key properties of stereochemically-encoded peptides that had developed the capacity to interact catalytically with RNA. The quantitative embedding of the chemical properties of amino acids into tRNA bases has implications for the origins of molecular biology. PMID:26595350

  1. Doughnut strikes sandwich: the geometry of hot medium in accreting black hole X-ray binaries

    NASA Astrophysics Data System (ADS)

    Poutanen, Juri; Veledina, Alexandra; Zdziarski, Andrzej A.

    2018-06-01

    We study the effects of the mutual interaction of hot plasma and cold medium in black hole binaries in their hard spectral state. We consider a number of different geometries. In contrast to previous theoretical studies, we use a modern energy-conserving code for reflection and reprocessing from cold media. We show that a static corona above an accretion disc extending to the innermost stable circular orbit produces spectra not compatible with those observed: they are either too soft or require a much higher disc ionization than that observed. This conclusion confirms a number of previous findings, but disproves a recent study claiming agreement of that model with observations. We show that the cold disc has to be truncated in order to agree with the observed spectral hardness. However, a cold disc truncated at a large radius and replaced by a hot flow produces spectra which are too hard if the only source of seed photons for Comptonization is the accretion disc. Our favoured geometry is a truncated disc coexisting with a hot plasma either overlapping with the disc or containing some cold matter within it, with seed photons also arising from the cyclo-synchrotron emission of hybrid electrons, i.e. electrons with both thermal and non-thermal components.

  2. Parallel design of JPEG-LS encoder on graphics processing units

    NASA Astrophysics Data System (ADS)

    Duan, Hao; Fang, Yong; Huang, Bormin

    2012-01-01

    With recent technical advances in graphics processing units (GPUs), GPUs have outperformed CPUs in terms of compute capability and memory bandwidth, and many successful GPU applications to high performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression which utilizes adaptive context modeling and run-length coding to improve the compression ratio. However, adaptive context modeling causes data dependency among adjacent pixels, and the run-length coding has to be performed in a sequential way. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on an NVIDIA GPU using the compute unified device architecture (CUDA) programming technology. We use the block-parallel strategy, as well as such CUDA techniques as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks, each with 64×64 pixels, we obtain the best GPU performance, with a 26.3x speedup over the original CPU code.
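
    The block-parallel strategy is easy to sketch at a high level: the image is cut into independent tiles whose bitstreams can be produced concurrently. The snippet below is a CPU illustration only, with zlib standing in for a JPEG-LS tile encoder; on the GPU each tile would map to a CUDA thread block instead.

    ```python
    # Sketch of block-parallel compression: 64x64 tiles are encoded
    # independently, which removes the cross-tile context dependency that
    # makes JPEG-LS inherently sequential. zlib is a stand-in codec.
    import zlib
    import numpy as np

    def encode_tiles(image, tile=64):
        h, w = image.shape
        streams = []
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                block = np.ascontiguousarray(image[y:y + tile, x:x + tile])
                streams.append(zlib.compress(block.tobytes()))
        return streams

    img = np.tile(np.arange(256, dtype=np.uint8), (256, 1))  # toy gradient
    streams = encode_tiles(img)
    ratio = img.nbytes / sum(len(s) for s in streams)
    print(f"{len(streams)} tiles, compression ratio {ratio:.2f}")
    ```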

  3. Minimal Increase Network Coding for Dynamic Networks.

    PubMed

    Zhang, Guoyin; Fan, Xu; Wu, Yanxia

    2016-01-01

    Because of the mobility, computing power, and changeable topology of dynamic networks, it is difficult for random linear network coding (RLNC) designed for static networks to satisfy the requirements of dynamic networks. To alleviate this problem, a minimal increase network coding (MINC) algorithm is proposed. By identifying the nonzero elements of an encoding vector, it selects the blocks to be encoded on the basis of the relationships between the nonzero elements, which controls the changes in the degrees of the blocks; the encoding time in a dynamic network is thereby shortened. The results of simulations show that, compared with existing encoding algorithms, the MINC algorithm provides reduced computational complexity of encoding and an increased probability of delivery.
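
    The role of the encoding vector is easy to see in a toy example over GF(2), where encoding is plain XOR and the degree of a coded packet is the number of nonzero coefficients; MINC's contribution, per the abstract, is choosing which blocks to combine so that those degrees grow minimally. Block sizes and coefficients below are arbitrary (random.randbytes needs Python 3.9+).

    ```python
    # Toy GF(2) network coding: a coded packet is the XOR of the source
    # blocks selected by its encoding vector; degree = nonzero coefficients.
    import random

    def encode(blocks, coeffs):
        """XOR together the blocks selected by a GF(2) coefficient vector."""
        out = bytes(len(blocks[0]))
        for c, b in zip(coeffs, blocks):
            if c:
                out = bytes(x ^ y for x, y in zip(out, b))
        return out

    blocks = [random.randbytes(8) for _ in range(4)]  # four source blocks
    coeffs = [1, 0, 1, 0]                             # a degree-2 vector
    packet = encode(blocks, coeffs)
    print("degree:", sum(coeffs), "payload:", packet.hex())
    ```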

  4. Minimal Increase Network Coding for Dynamic Networks

    PubMed Central

    Wu, Yanxia

    2016-01-01

    Because of the mobility, computing power, and changeable topology of dynamic networks, it is difficult for random linear network coding (RLNC) designed for static networks to satisfy the requirements of dynamic networks. To alleviate this problem, a minimal increase network coding (MINC) algorithm is proposed. By identifying the nonzero elements of an encoding vector, it selects the blocks to be encoded on the basis of the relationships between the nonzero elements, which controls the changes in the degrees of the blocks; the encoding time in a dynamic network is thereby shortened. The results of simulations show that, compared with existing encoding algorithms, the MINC algorithm provides reduced computational complexity of encoding and an increased probability of delivery. PMID:26867211

  5. Weighted bi-prediction for light field image coding

    NASA Astrophysics Data System (ADS)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2017-09-01

    Light field imaging based on a single-tier camera equipped with a microlens array - also known as integral, holoscopic, and plenoptic imaging - has recently emerged as a practical and promising approach for future visual applications and services. However, successfully deploying actual light field imaging applications and services will require adequate coding solutions to efficiently handle the massive amount of data involved in these systems. In this context, self-similarity compensated prediction is a non-local spatial prediction scheme based on block matching that has been shown to achieve high efficiency for light field image coding based on the High Efficiency Video Coding (HEVC) standard. As previously shown by the authors, this is possible by simply averaging two predictor blocks that are jointly estimated from a causal search window in the current frame itself, an approach referred to as self-similarity bi-prediction. However, theoretical analyses of motion-compensated bi-prediction have suggested that further rate-distortion performance improvements are possible by adaptively estimating the weighting coefficients of the two predictor blocks. Therefore, this paper presents a comprehensive study of the rate-distortion performance of HEVC-based light field image coding when using different sets of weighting coefficients for self-similarity bi-prediction. Experimental results demonstrate that the previous theoretical conclusions extend to light field image coding and show that the proposed adaptive weighting coefficient selection leads to up to 5% bit savings compared to the previous self-similarity bi-prediction scheme.
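
    The idea of adaptive weighting can be shown in a few lines: rather than plain averaging, the encoder tries a small candidate set of weight pairs and keeps the one minimizing the prediction error. The candidate set and block data below are invented; an HEVC-style codec would signal the chosen index in the bitstream.

    ```python
    # Sketch of weighted bi-prediction: pick (w0, w1) from a candidate set
    # to minimise the SSE between the original block and w0*p0 + w1*p1.
    import numpy as np

    rng = np.random.default_rng(0)
    orig = rng.integers(0, 256, (8, 8)).astype(float)
    p0 = orig + rng.normal(0, 6, (8, 8))    # two noisy predictors, e.g.
    p1 = orig + rng.normal(0, 12, (8, 8))   # from a self-similarity search

    candidates = [(0.5, 0.5), (0.75, 0.25), (0.25, 0.75), (1.0, 0.0)]
    best = min(candidates,
               key=lambda w: np.sum((orig - (w[0] * p0 + w[1] * p1)) ** 2))
    print("selected weights:", best)
    ```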

  6. DMFS: A Data Migration File System for NetBSD

    NASA Technical Reports Server (NTRS)

    Studenmund, William

    1999-01-01

    I have recently developed dmfs, a Data Migration File System, for NetBSD. This file system is based on the overlay file system, which is discussed in a separate paper, and provides kernel support for the data migration system being developed by my research group here at NASA/Ames. The file system utilizes an underlying file store to provide the file backing, and coordinates user and system access to the files. It stores its internal metadata in a flat file, which resides on a separate file system. Our data migration system provides archiving and file migration services. System utilities scan the dmfs file system for recently modified files, and archive them to two separate tape stores. Once files have been doubly archived, those larger than a specified size are truncated to that size, potentially freeing up large amounts of the underlying file store. Some sites will choose to retain none of the file (deleting its contents entirely from the file system) while others may choose to retain a portion, for instance a preamble describing the remainder of the file. The dmfs layer coordinates access to the file, preserving user-perceived access and modification times and file size, and restricting access to partially migrated files to the portion actually resident. When a user process attempts to read from the non-resident portion of a file, it is blocked and the dmfs layer sends a request to a system daemon to restore the file. As more of the file becomes resident, the user process is permitted to begin accessing the now-resident portions of the file. For simplicity, our data migration system divides a file into two portions, a resident portion followed by an optional non-resident portion. Also, a file is in one of three states: fully resident, fully resident and archived, and (partially) non-resident and archived. For a file which is only partially resident, any attempt to write or truncate the file, or to read a non-resident portion, will trigger a file restoration. Truncations and writes are blocked until the file is fully restored so that a restoration which only partially succeeds does not leave the file in an indeterminate state with portions existing only on tape and other portions only in the disk file system. We chose layered file system technology as it permits us to focus on the data migration functionality, and permits end system administrators to choose the underlying file store technology. We chose the overlay layered file system instead of the null layer for two reasons: first to permit our layer to better preserve metadata integrity and second to prevent even root processes from accessing migrated files. This is achieved as the underlying file store becomes inaccessible once the dmfs layer is mounted. We are quite pleased with how the layered file system has turned out. Of the 45 vnode operations in NetBSD, 20 (forty-four percent) required no intervention by our file layer - they are passed directly to the underlying file store. Of the twenty-five we do intercept, nine (such as vop_create()) are intercepted only to ensure metadata integrity. Most of the functionality was concentrated in five operations: vop_read, vop_write, vop_getattr, vop_setattr, and vop_fcntl. The first four are the core operations for controlling access to migrated files and preserving the user experience. vop_fcntl, a call generated for a certain class of fcntl codes, provides the command channel used by privileged user programs to communicate with the dmfs layer.

  7. Divided multimodal attention sensory trace and context coding strategies in spatially congruent auditory and visual presentation.

    PubMed

    Kristjánsson, Tómas; Thorvaldsson, Tómas Páll; Kristjánsson, Arni

    2014-01-01

    Previous research involving both unimodal and multimodal studies suggests that single-response change detection is a capacity-free process while a discriminatory up or down identification is capacity-limited. The trace/context model assumes that this reflects different memory strategies rather than inherent differences between identification and detection. To perform such tasks, one of two strategies is used, a sensory trace or a context coding strategy, and if one is blocked, people will automatically use the other. A drawback to most preceding studies is that stimuli are presented at separate locations, creating the possibility of a spatial confound, which invites alternative interpretations of the results. We describe a series of experiments, investigating divided multimodal attention, without the spatial confound. The results challenge the trace/context model. Our critical experiment involved a gap before a change in volume and brightness, which according to the trace/context model blocks the sensory trace strategy, simultaneously with a roaming pedestal, which should block the context coding strategy. The results clearly show that people can use strategies other than sensory trace and context coding in the tasks and conditions of these experiments, necessitating changes to the trace/context model.

  8. Characterization of the Pathological and Biochemical Markers that Correlate to the Clinical Features of Autism

    DTIC Science & Technology

    2011-10-01

    …Iversen et al. 1995, Selkoe 2001); and AβpE3 as a product of N-terminal truncation of full-length Aβ peptide by aminopeptidase A and pyroglutamate… formic acid for 20 min (Kitamoto et al. 1987). The endogenous peroxidase in the sections was blocked with 0.2% hydrogen peroxide in methanol. The sections… 70% formic acid for 20 minutes, washed in PBS 2× 10 min, and double immunostained using mAb 4G8 and the lysosomal marker cathepsin D (Calbiochem) or a…

  9. Multiplex N-terminome analysis of MMP-2 and MMP-9 substrate degradomes by iTRAQ-TAILS quantitative proteomics.

    PubMed

    Prudova, Anna; auf dem Keller, Ulrich; Butler, Georgina S; Overall, Christopher M

    2010-05-01

    Proteolysis is a major protein posttranslational modification that, by altering protein structure, affects protein function and, by truncating the protein sequence, alters peptide signatures of proteins analyzed by proteomics. To identify such modified and shortened protease-generated neo-N-termini on a proteome-wide basis, we developed a whole protein isobaric tag for relative and absolute quantitation (iTRAQ) labeling method that simultaneously labels and blocks all primary amines including protein N-termini and lysine side chains. Blocking lysines limits trypsin cleavage to arginine, which effectively elongates the proteolytically truncated peptides for improved MS/MS analysis and peptide identification. Incorporating iTRAQ whole protein labeling with terminal amine isotopic labeling of substrates (iTRAQ-TAILS) to enrich the N-terminome by negative selection of the blocked mature original N-termini and neo-N-termini has many advantages. It enables simultaneous characterization of the natural N-termini of proteins, their N-terminal modifications, and proteolysis product and cleavage site identification. Furthermore, iTRAQ-TAILS also enables multiplex N-terminomics analysis of up to eight samples and allows for quantification in MS2 mode, thus preventing an increase in spectral complexity and extending proteome coverage by signal amplification of low abundance proteins. We compared the substrate degradomes of two closely related matrix metalloproteinases, MMP-2 (gelatinase A) and MMP-9 (gelatinase B), in fibroblast secreted proteins. Among 3,152 unique N-terminal peptides identified corresponding to 1,054 proteins, we detected 201 cleavage products for MMP-2 and unexpectedly only 19 for the homologous MMP-9 under identical conditions. Novel substrates identified and biochemically validated include insulin-like growth factor binding protein-4, complement C1r component A, galectin-1, dickkopf-related protein-3, and thrombospondin-2. Hence, N-terminomics analyses using iTRAQ-TAILS links gelatinases with new mechanisms of action in angiogenesis and reveals unpredicted restrictions in substrate repertoires for these two very similar proteases.

  10. Multiplex N-terminome Analysis of MMP-2 and MMP-9 Substrate Degradomes by iTRAQ-TAILS Quantitative Proteomics*

    PubMed Central

    Prudova, Anna; auf dem Keller, Ulrich; Butler, Georgina S.; Overall, Christopher M.

    2010-01-01

    Proteolysis is a major protein posttranslational modification that, by altering protein structure, affects protein function and, by truncating the protein sequence, alters peptide signatures of proteins analyzed by proteomics. To identify such modified and shortened protease-generated neo-N-termini on a proteome-wide basis, we developed a whole protein isobaric tag for relative and absolute quantitation (iTRAQ) labeling method that simultaneously labels and blocks all primary amines including protein N-termini and lysine side chains. Blocking lysines limits trypsin cleavage to arginine, which effectively elongates the proteolytically truncated peptides for improved MS/MS analysis and peptide identification. Incorporating iTRAQ whole protein labeling with terminal amine isotopic labeling of substrates (iTRAQ-TAILS) to enrich the N-terminome by negative selection of the blocked mature original N-termini and neo-N-termini has many advantages. It enables simultaneous characterization of the natural N-termini of proteins, their N-terminal modifications, and proteolysis product and cleavage site identification. Furthermore, iTRAQ-TAILS also enables multiplex N-terminomics analysis of up to eight samples and allows for quantification in MS2 mode, thus preventing an increase in spectral complexity and extending proteome coverage by signal amplification of low abundance proteins. We compared the substrate degradomes of two closely related matrix metalloproteinases, MMP-2 (gelatinase A) and MMP-9 (gelatinase B), in fibroblast secreted proteins. Among 3,152 unique N-terminal peptides identified corresponding to 1,054 proteins, we detected 201 cleavage products for MMP-2 and unexpectedly only 19 for the homologous MMP-9 under identical conditions. Novel substrates identified and biochemically validated include insulin-like growth factor binding protein-4, complement C1r component A, galectin-1, dickkopf-related protein-3, and thrombospondin-2. Hence, N-terminomics analyses using iTRAQ-TAILS links gelatinases with new mechanisms of action in angiogenesis and reveals unpredicted restrictions in substrate repertoires for these two very similar proteases. PMID:20305284

  11. PHISICS/RELAP5-3D RESULTS FOR EXERCISES II-1 AND II-2 OF THE OECD/NEA MHTGR-350 BENCHMARK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strydom, Gerhard

    2016-03-01

    The Idaho National Laboratory (INL) Advanced Reactor Technologies (ART) High-Temperature Gas-Cooled Reactor (HTGR) Methods group currently leads the Modular High-Temperature Gas-Cooled Reactor (MHTGR) 350 benchmark. The benchmark consists of a set of lattice-depletion, steady-state, and transient problems that can be used by HTGR simulation groups to assess the performance of their code suites. The paper summarizes the results obtained for the first two transient exercises defined for Phase II of the benchmark. The Parallel and Highly Innovative Simulation for INL Code System (PHISICS), coupled with the INL system code RELAP5-3D, was used to generate the results for the Depressurized Conduction Cooldown (DCC) (exercise II-1a) and Pressurized Conduction Cooldown (PCC) (exercise II-2) transients. These exercises require the time-dependent simulation of coupled neutronics and thermal-hydraulics phenomena, and utilize the steady-state solution previously obtained for exercise I-3 of Phase I. This paper also includes a comparison of the benchmark results obtained with a traditional system code “ring” model against a more detailed “block” model that includes kinetics feedback on an individual block level and thermal feedback on a triangular sub-mesh. The higher spatial fidelity that can be obtained by the block model is illustrated with comparisons of the maximum fuel temperatures, especially in the case of the natural convection conditions that dominate the DCC and PCC events. Differences up to 125 K (or 10%) were observed between the ring and block model predictions of the DCC transient, mostly due to the block model’s capability of tracking individual block decay powers and more detailed helium flow distributions. In general, the block model only required DCC and PCC calculation times twice as long as the ring models, so the additional development and calculation time required for the block model seems worth the gain in spatial resolution.

  12. Comparative Genomics of a Parthenogenesis-Inducing Wolbachia Symbiont

    PubMed Central

    Lindsey, Amelia R. I.; Werren, John H.; Richards, Stephen; Stouthamer, Richard

    2016-01-01

    Wolbachia is an intracellular symbiont of invertebrates responsible for inducing a wide variety of phenotypes in its host. These host-Wolbachia relationships span the continuum from reproductive parasitism to obligate mutualism, and provide a unique system to study genomic changes associated with the evolution of symbiosis. We present the genome sequence from a parthenogenesis-inducing Wolbachia strain (wTpre) infecting the minute parasitoid wasp Trichogramma pretiosum. The wTpre genome is the most complete parthenogenesis-inducing Wolbachia genome available to date. We used comparative genomics across 16 Wolbachia strains, representing five supergroups, to identify a core Wolbachia genome of 496 sets of orthologous genes. Only 14 of these sets are unique to Wolbachia when compared to other bacteria from the Rickettsiales. We show that the B supergroup of Wolbachia, of which wTpre is a member, contains a significantly higher number of ankyrin repeat-containing genes than other supergroups. In the wTpre genome, there is evidence for truncation of the protein coding sequences in 20% of ORFs, mostly as a result of frameshift mutations. The wTpre strain represents a conversion from cytoplasmic incompatibility to a parthenogenesis-inducing lifestyle, and is required for reproduction in the Trichogramma host it infects. We hypothesize that the large number of coding frame truncations has accompanied the change in reproductive mode of the wTpre strain. PMID:27194801

  13. Comparative Genomics of a Parthenogenesis-Inducing Wolbachia Symbiont.

    PubMed

    Lindsey, Amelia R I; Werren, John H; Richards, Stephen; Stouthamer, Richard

    2016-07-07

    Wolbachia is an intracellular symbiont of invertebrates responsible for inducing a wide variety of phenotypes in its host. These host-Wolbachia relationships span the continuum from reproductive parasitism to obligate mutualism, and provide a unique system to study genomic changes associated with the evolution of symbiosis. We present the genome sequence from a parthenogenesis-inducing Wolbachia strain (wTpre) infecting the minute parasitoid wasp Trichogramma pretiosum. The wTpre genome is the most complete parthenogenesis-inducing Wolbachia genome available to date. We used comparative genomics across 16 Wolbachia strains, representing five supergroups, to identify a core Wolbachia genome of 496 sets of orthologous genes. Only 14 of these sets are unique to Wolbachia when compared to other bacteria from the Rickettsiales. We show that the B supergroup of Wolbachia, of which wTpre is a member, contains a significantly higher number of ankyrin repeat-containing genes than other supergroups. In the wTpre genome, there is evidence for truncation of the protein coding sequences in 20% of ORFs, mostly as a result of frameshift mutations. The wTpre strain represents a conversion from cytoplasmic incompatibility to a parthenogenesis-inducing lifestyle, and is required for reproduction in the Trichogramma host it infects. We hypothesize that the large number of coding frame truncations has accompanied the change in reproductive mode of the wTpre strain. Copyright © 2016 Lindsey et al.

  14. Overexpression of c-jun, junB, or junD affects cell growth differently.

    PubMed

    Castellazzi, M; Spyrou, G; La Vista, N; Dangy, J P; Piu, F; Yaniv, M; Brun, G

    1991-10-15

    The coding sequences of murine c-jun, junB, or junD, which code for proteins with practically identical dimerization and DNA binding properties, were introduced into a nondefective retroviral vector, and the phenotype of primary avian fibroblasts chronically infected with each of these viruses was studied. Cells expressing c-jun grew in low-serum medium and developed into colonies in agar, two properties characteristic of in vitro transformation. Cells expressing junB grew in agar, with a reduced efficiency as compared to c-jun, but did not grow in low-serum medium. Finally, no effect of junD expression on cell growth was observed. These different phenotypes suggest that these three closely related transcription factors play distinct roles during normal cell growth. Analysis of c-jun deletion mutants and of c-jun/junB and c-jun/junD chimeric genes showed that the N-terminal portion (amino acids 2-168) of the c-Jun protein that is involved in transcriptional activation is required for efficient transformation. In contrast, cells expressing a truncated mouse c-Jun lacking this N-terminal domain grew more slowly than normal embryo fibroblasts. The reduced growth rate may be related to the finding that expression of the intact or the truncated mouse c-jun repressed the endogenous avian c-Jun homologue, suggesting that a functional c-Jun product is required for normal cell growth.

  15. Overexpression of c-jun, junB, or junD affects cell growth differently.

    PubMed Central

    Castellazzi, M; Spyrou, G; La Vista, N; Dangy, J P; Piu, F; Yaniv, M; Brun, G

    1991-01-01

    The coding sequences of murine c-jun, junB, or junD, which code for proteins with practically identical dimerization and DNA binding properties, were introduced into a nondefective retroviral vector, and the phenotype of primary avian fibroblasts chronically infected with each of these viruses was studied. Cells expressing c-jun grew in low-serum medium and developed into colonies in agar, two properties characteristic of in vitro transformation. Cells expressing junB grew in agar, with a reduced efficiency as compared to c-jun, but did not grow in low-serum medium. Finally, no effect of junD expression on cell growth was observed. These different phenotypes suggest that these three closely related transcription factors play distinct roles during normal cell growth. Analysis of c-jun deletion mutants and of c-jun/junB and c-jun/junD chimeric genes showed that the N-terminal portion (amino acids 2-168) of the c-Jun protein that is involved in transcriptional activation is required for efficient transformation. In contrast, cells expressing a truncated mouse c-Jun lacking this N-terminal domain grew more slowly than normal embryo fibroblasts. The reduced growth rate may be related to the finding that expression of the intact or the truncated mouse c-jun repressed the endogenous avian c-Jun homologue, suggesting that a functional c-Jun product is required for normal cell growth. PMID:1924349

  16. Narrative-compression coding for a channel with errors. Professional paper for period ending June 1987

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bond, J.W.

    1988-01-01

    Data-compression codes offer the possibility of improving the throughput of existing communication systems in the near term. This study was undertaken to determine whether data-compression codes could be utilized to provide message compression in a channel with up to a 0.10 bit error rate. The data-compression capabilities of the codes were investigated by estimating the average number of bits per character required to transmit narrative files. The performance of the codes in a channel with errors (a noisy channel) was investigated in terms of the average numbers of characters decoded in error and of characters printed in error per bit error. Results were obtained by encoding four narrative files, which were resident on an IBM-PC and use a 58-character set. The study focused on Huffman codes and suffix/prefix comma-free codes. Other data-compression codes, in particular block codes and some simple variants of block codes, are briefly discussed to place the study results in context. Comma-free codes were found to have the most promising data compression because error propagation due to bit errors is limited to a few characters for these codes. A technique was found to identify a suffix/prefix comma-free code giving nearly the same data compression as a Huffman code with much less error propagation than the Huffman codes. Greater data compression can be achieved through the use of such comma-free codes with word assignments based on conditional probabilities of character occurrence.
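
    The error-propagation behaviour being measured is easy to reproduce: in a variable-length code, one flipped bit can desynchronize the decoder for several characters. The sketch below builds a Huffman code for an invented frequency table, flips one bit, and decodes; it illustrates the failure mode, not the report's actual character set or codes.

    ```python
    # Build a Huffman code, corrupt one bit of the stream, and observe how
    # the error propagates through subsequent decoded characters.
    import heapq

    def huffman(freqs):
        """Return a {symbol: bitstring} prefix code for a frequency table."""
        heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
        heapq.heapify(heap)
        count = len(heap)
        while len(heap) > 1:
            fa, _, a = heapq.heappop(heap)
            fb, _, b = heapq.heappop(heap)
            merged = {s: "0" + c for s, c in a.items()}
            merged.update({s: "1" + c for s, c in b.items()})
            heapq.heappush(heap, (fa + fb, count, merged))
            count += 1
        return heap[0][2]

    def decode(bits, code):
        inv, out, cur = {v: k for k, v in code.items()}, [], ""
        for b in bits:
            cur += b
            if cur in inv:
                out.append(inv[cur])
                cur = ""
        return "".join(out)

    code = huffman({"e": 12, "t": 9, "a": 8, "o": 7, " ": 15, "n": 6})
    msg = "a tea at noon "
    bits = "".join(code[c] for c in msg)
    corrupt = bits[:5] + ("1" if bits[5] == "0" else "0") + bits[6:]
    print("sent:   ", msg)
    print("decoded:", decode(corrupt, code))  # errors propagate downstream
    ```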

  17. CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van der Holst, B.; Toth, G.; Sokolov, I. V.

    We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.

  18. Dependency graph for code analysis on emerging architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shashkov, Mikhail Jurievich; Lipnikov, Konstantin

    The directed acyclic dependency graph (DAG) is becoming the standard for modern multi-physics codes. The ideal DAG is the true block scheme of a multi-physics code. Therefore, it is a convenient object for in situ analysis of the cost of computations and of algorithmic bottlenecks related to statistically frequent data motion and the dynamical machine state.

  19. The Gift Code User Manual. Volume I. Introduction and Input Requirements

    DTIC Science & Technology

    1975-07-01

    The GIFT code is a FORTRAN computer program. The basic input to the GIFT code is data called…

  20. A Measurement and Simulation Based Methodology for Cache Performance Modeling and Tuning

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    We present a cache performance modeling methodology that facilitates the tuning of uniprocessor cache performance for applications executing on shared memory multiprocessors by accurately predicting the effects of source code level modifications. Measurements on a single processor are initially used to identify parts of the code where cache utilization improvements may significantly impact the overall performance. Cache simulation based on trace-driven techniques can be carried out without gathering detailed address traces. Minimal runtime information for modeling the cache performance of a selected code block includes: base virtual addresses of arrays, virtual addresses of variables, and loop bounds for that code block. The rest of the information is obtained from the source code. We show that the cache performance predictions are as reliable as those obtained through trace-driven simulations. This technique is particularly helpful for exploring various "what-if" scenarios regarding the cache performance impact of alternative code structures. We explain and validate this methodology using a simple matrix-matrix multiplication program. We then apply this methodology to predict and tune the cache performance of two realistic scientific applications taken from the Computational Fluid Dynamics (CFD) domain.
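
    The kind of "what-if" analysis the methodology supports can be mimicked with a tiny trace-driven model. The direct-mapped geometry below (64-byte lines, 256 sets, 8-byte elements) is invented; the point is that a source-level change in traversal order alone changes the miss count dramatically.

    ```python
    # Toy trace-driven, direct-mapped cache model: compare miss counts for
    # row-major versus column-major traversal of an N x N matrix.
    LINE = 64    # cache line size in bytes (assumed)
    SETS = 256   # number of direct-mapped sets (assumed)
    ELEM = 8     # bytes per matrix element

    def misses(addresses):
        tags = [None] * SETS
        miss = 0
        for a in addresses:
            line = a // LINE
            s, tag = line % SETS, line // SETS
            if tags[s] != tag:    # cold or conflict miss
                tags[s] = tag
                miss += 1
        return miss

    N = 512
    row_major = (ELEM * (i * N + j) for i in range(N) for j in range(N))
    col_major = (ELEM * (i * N + j) for j in range(N) for i in range(N))
    print("row-major misses:   ", misses(row_major))
    print("column-major misses:", misses(col_major))
    ```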

  1. Design of convolutional tornado code

    NASA Astrophysics Data System (ADS)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environments, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which improves the burst-erasure protection capability by applying the convolution property to the tTN code, and reduces computational complexity by abrogating the multi-level structure. The simulation results show that the cTN code can provide better packet-loss protection performance with lower computational complexity than the tTN code.
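
    The XOR machinery that tornado-type erasure codes are built from reduces, in the single-check case, to a parity block that can regenerate any one erased data block. The sketch below shows only this degenerate case; actual tTN/cTN codes use many overlapping, layered check equations (convolved across time in the cTN construction).

    ```python
    # Minimal XOR-parity erasure example: recover one erased data block by
    # XORing the surviving blocks of its check equation.
    import random

    k = 4
    data = [random.randbytes(16) for _ in range(k)]
    parity = bytes(a ^ b ^ c ^ d for a, b, c, d in zip(*data))

    lost = 2                                   # erase one data block
    survivors = [b for i, b in enumerate(data) if i != lost] + [parity]

    recovered = bytes(16)                      # all-zero accumulator
    for block in survivors:
        recovered = bytes(x ^ y for x, y in zip(recovered, block))

    assert recovered == data[lost]
    print("recovered erased block", lost)
    ```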

  2. Neighboring block based disparity vector derivation for multiview compatible 3D-AVC

    NASA Astrophysics Data System (ADS)

    Kang, Jewon; Chen, Ying; Zhang, Li; Zhao, Xin; Karczewicz, Marta

    2013-09-01

    3D-AVC, being developed under the Joint Collaborative Team on 3D Video Coding (JCT-3V), significantly outperforms Multiview Video Coding plus Depth (MVC+D), which simultaneously encodes texture views and depth views with the multiview extension of H.264/AVC (MVC). However, when 3D-AVC is configured to support multiview compatibility, in which texture views are decoded without depth information, the coding performance degrades significantly. The reason is that the advanced coding tools incorporated into 3D-AVC do not perform well without a disparity vector converted from the depth information. In this paper, we propose a disparity vector derivation method utilizing only the information of texture views. Motion information of neighboring blocks is used to determine a disparity vector for a macroblock, so that the derived disparity vector can be used efficiently by the coding tools in 3D-AVC. The proposed method significantly improves the coding gain of 3D-AVC in the multiview-compatible mode, yielding about 20% BD-rate savings in the coded views and 26% BD-rate savings in the synthesized views on average.
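
    The neighbour-based derivation can be sketched as a simple priority scan: the first spatial neighbour that carries inter-view motion supplies the disparity vector. The field names, scan order, and fallback below are invented for illustration and are not the normative 3D-AVC process.

    ```python
    # Sketch of neighbouring-block disparity vector (DV) derivation: scan
    # neighbours in a fixed priority order and reuse the first inter-view
    # motion vector found; otherwise fall back to a zero DV.
    def derive_dv(neighbors, default=(0, 0)):
        """neighbors: dicts with invented 'is_interview' and 'mv' fields."""
        for blk in neighbors:               # fixed priority order A, B, C
            if blk and blk["is_interview"]:
                return blk["mv"]            # reuse its disparity vector
        return default

    A = {"is_interview": False, "mv": (3, 1)}    # temporal neighbour
    B = {"is_interview": True,  "mv": (-7, 0)}   # points into the base view
    C = None                                     # neighbour unavailable
    print("derived DV:", derive_dv([A, B, C]))   # -> (-7, 0)
    ```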

  3. Maternal positioning affects fetal heart rate changes after epidural analgesia for labour.

    PubMed

    Preston, R; Crosby, E T; Kotarba, D; Dudas, H; Elliott, R D

    1993-12-01

    Adverse fetal heart rate (FHR) changes suggestive of fetal hypoxia are seen in patients with normal term pregnancies after initiation of epidural block for labour analgesia. It was our hypothesis that, in some parturients, these changes were a consequence of concealed aortocaval compression resulting in decreased uterine blood flow. We expected that the full lateral position compared with the wedged supine position would provide more effective prophylaxis against aortocaval compression. To test our hypothesis we studied the role of maternal positioning on FHR changes during onset of epidural analgesia for labour. Eighty-eight ASA Class I or II term parturients were randomized into two groups: those to be nursed in the wedged supine position and those to be nursed in the full lateral position during induction of an epidural block. External FHR monitoring was employed to assess the fetal response to initiation of labour epidural analgesia. Epidural catheters were sited with the parturients in the sitting position and the patients then assumed the study position. After a negative test dose, a standardized regimen of bupivacaine 0.25% was employed to provide labour analgesia. The quality and efficacy of the block were assessed using VAS pain scores, motor block scores and sensory levels. The results demonstrated that there was no difference in the quality of analgesia provided nor in the incidence of asymmetric blocks. There was no difference in the observed incidence of FHR changes occurring during the initiation of the epidural block.(ABSTRACT TRUNCATED AT 250 WORDS)

  4. Cocaethylene, a metabolite of cocaine and ethanol, is a potent blocker of cardiac sodium channels.

    PubMed

    Xu, Y Q; Crumb, W J; Clarkson, C W

    1994-10-01

    Cocaethylene is an active metabolite of cocaine believed to play a causative role in the increased incidence of sudden death in individuals who coadminister ethanol with cocaine. However, the direct effects of cocaethylene on the heart have not been well defined. In this study, we defined the effects of cocaethylene on the cardiac Na current (INa) in guinea pig ventricular myocytes at 16 degrees C using the whole-cell patch-clamp method. Cocaethylene (10-50 microM) produced both a significant tonic block and a rate-dependent block of INa at cycle lengths between 2 and 0.2 sec. Cocaethylene produced a significantly greater tonic block than cocaine at a concentration of 50 microM and produced a significantly greater use-dependent block over a 5-fold range of drug concentrations (10-50 microM) and cycle lengths (0.2-1.0 sec). Analysis of channel-blocking characteristics revealed that cocaethylene had a significantly higher affinity for inactivated channels (Kdi = 5.1 +/- 0.6 microM, n = 15) compared with cocaine (Kdi = 7.9 +/- 0.5 microM, n = 10) (P < .01) and that cocaethylene produced a significantly greater hyperpolarizing shift of the steady-state INa inactivation curve (P < .05). Cocaethylene also had a significantly longer time constant for recovery from channel block at -140 mV (12.24 +/- 0.88 sec, n = 16) compared with cocaine (8.33 +/- 0.56 sec, n = 14) (P < .01).(ABSTRACT TRUNCATED AT 250 WORDS)

  5. Hyperbolic/parabolic development for the GIM-STAR code. [flow fields in supersonic inlets

    NASA Technical Reports Server (NTRS)

    Spradley, L. W.; Stalnaker, J. F.; Ratliff, A. W.

    1980-01-01

    Flow fields in supersonic inlet configurations were computed using the elliptic GIM code on the STAR computer. Spillage flow under the lower cowl was calculated to be 33% of the incoming stream. The shock/boundary-layer interaction on the upper propulsive surface was computed, including separation. All shocks produced by the flow system were captured. Linearized block implicit (LBI) schemes were examined to determine their applicability to the GIM code. Pure explicit methods have stability limitations and fully implicit schemes are inherently inefficient; however, LBI schemes show promise as an effective compromise. A quasiparabolic version of the GIM code was developed using classical parabolized Navier-Stokes methods combined with quasi-time relaxation. This scheme is referred to as quasiparabolic although it applies equally well to hyperbolic supersonic inviscid flows. Second-order windward differences are used in the marching coordinate, and either explicit or linear block implicit time relaxation can be incorporated.

  6. A combinatorial code for pattern formation in Drosophila oogenesis.

    PubMed

    Yakoby, Nir; Bristow, Christopher A; Gong, Danielle; Schafer, Xenia; Lembong, Jessica; Zartman, Jeremiah J; Halfon, Marc S; Schüpbach, Trudi; Shvartsman, Stanislav Y

    2008-11-01

    Two-dimensional patterning of the follicular epithelium in Drosophila oogenesis is required for the formation of three-dimensional eggshell structures. Our analysis of a large number of published gene expression patterns in the follicle cells suggests that they follow a simple combinatorial code based on six spatial building blocks and the operations of union, difference, intersection, and addition. The building blocks are related to the distribution of inductive signals, provided by the highly conserved epidermal growth factor receptor and bone morphogenetic protein signaling pathways. We demonstrate the validity of the code by testing it against a set of patterns obtained in a large-scale transcriptional profiling experiment. Using the proposed code, we distinguish 36 distinct patterns for 81 genes expressed in the follicular epithelium and characterize their joint dynamics over four stages of oogenesis. The proposed combinatorial framework allows systematic analysis of the diversity and dynamics of two-dimensional transcriptional patterns and guides future studies of gene regulation.
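
    The proposed pattern algebra has a direct computational reading: building blocks are spatial domains (boolean masks) and the four operations act pointwise. The toy domains below are invented rectangles and stripes, not the paper's signalling-derived blocks.

    ```python
    # Toy version of the combinatorial code: building blocks as boolean
    # masks over the epithelium, combined by union, difference,
    # intersection, and (graded) addition.
    import numpy as np

    y, x = np.mgrid[0:50, 0:100]
    dorsal = y < 20                     # invented building-block domains
    anterior = x < 35
    midline = np.abs(y - 25) < 4

    patterns = {
        "union": dorsal | anterior,
        "difference": dorsal & ~anterior,
        "intersection": dorsal & anterior,
    }
    addition = dorsal.astype(int) + midline.astype(int)  # graded overlap

    for name, dom in patterns.items():
        print(f"{name}: {int(dom.sum())} cells expressing")
    print("addition levels present:", np.unique(addition))
    ```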

  7. An installed nacelle design code using a multiblock Euler solver. Volume 2: User guide

    NASA Technical Reports Server (NTRS)

    Chen, H. C.

    1992-01-01

    This is a user manual for the general multiblock Euler design (GMBEDS) code. The code is for the design of a nacelle installed on a geometrically complex configuration such as a complete airplane with wing/body/nacelle/pylon. It consists of two major building blocks: a design module developed by LaRC using directive iterative surface curvature (DISC); and a general multiblock Euler (GMBE) flow solver. The flow field surrounding a complex configuration is divided into a number of topologically simple blocks to facilitate surface-fitted grid generation and improve flow solution efficiency. This user guide provides input data formats along with examples of input files and a Unix script for program execution in the UNICOS environment.

  8. Fluorescence Lifetime Study of Cyclodextrin Complexes of Substituted Naphthalenes.

    DTIC Science & Technology

    1987-08-15

    Subject terms: fluorescence lifetime measurements; cyclodextrins; spectroscopic techniques.

  9. Sequential Prediction of Literacy Achievement for Specific Learning Disabilities Contrasting in Impaired Levels of Language in Grades 4 to 9

    PubMed Central

    Sanders, Elizabeth A.; Berninger, Virginia W.; Abbott, Robert D.

    2017-01-01

    Sequential regression was used to evaluate whether language-related working memory components uniquely predict reading and writing achievement beyond cognitive-linguistic translation for students in grades 4–9 (N=103) with specific learning disabilities (SLDs) in subword handwriting (dysgraphia, n=25), word reading and spelling (dyslexia, n=60), or oral and written language (OWL LD, n=18). That is, SLDs are defined on the basis of cascading levels of language impairment (subword, word, and syntax/text). A 5-block regression model sequentially predicted literacy achievement from cognitive-linguistic translation (Block 1); working memory components for word form coding (Block 2), phonological and orthographic loops (Block 3), and supervisory focused or switching attention (Block 4); and SLD groups (Block 5). Results showed that cognitive-linguistic translation explained an average of 27% and 15% of the variance in reading and writing achievement, respectively, but working memory components explained an additional 39% and 27% of the variance. Orthographic word form coding uniquely predicted nearly every measure, whereas attention switching only uniquely predicted reading. Finally, differences in reading and writing persisted between dyslexia and dysgraphia, with dysgraphia higher, even after controlling for Block 1 to 4 predictors. Differences in literacy achievement between students with dyslexia and OWL LD were largely explained by the Block 1 predictors. Applications to identifying and teaching students with these SLDs are discussed. PMID:28199175

  10. Successive increases in the resistance of Drosophila to viral infection through a transposon insertion followed by a Duplication.

    PubMed

    Magwire, Michael M; Bayer, Florian; Webster, Claire L; Cao, Chuan; Jiggins, Francis M

    2011-10-01

    To understand the molecular basis of how hosts evolve resistance to their parasites, we have investigated the genes that cause variation in the susceptibility of Drosophila melanogaster to viral infection. Using a host-specific pathogen of D. melanogaster called the sigma virus (Rhabdoviridae), we mapped a major-effect polymorphism to a region containing two paralogous genes called CHKov1 and CHKov2. In a panel of inbred fly lines, we found that a transposable element insertion in the protein coding sequence of CHKov1 is associated with increased resistance to infection. Previous research has shown that this insertion results in a truncated messenger RNA that encodes a far shorter protein than the susceptible allele. This resistant allele has rapidly increased in frequency under directional selection and is now the commonest form of the gene in natural populations. Using genetic mapping and site-specific recombination, we identified a third genotype with considerably greater resistance that is currently rare in the wild. In these flies there have been two duplications, resulting in three copies of both the truncated allele of CHKov1 and CHKov2 (one of which is also truncated). Remarkably, the truncated allele of CHKov1 has previously been found to confer resistance to organophosphate insecticides. As estimates of the age of this allele predate the use of insecticides, it is likely that this allele initially functioned as a defence against viruses and fortuitously "pre-adapted" flies to insecticides. These results demonstrate that strong selection by parasites for increased host resistance can result in major genetic changes and rapid shifts in allele frequencies; and, contrary to the prevailing view that resistance to pathogens can be a costly trait to evolve, the pleiotropic effects of these changes can have unexpected benefits.

  11. Histone deacetylase-related protein inhibits AES-mediated neuronal cell death by direct interaction.

    PubMed

    Zhang, Xiaoguang; Chen, Hsin-Mei; Jaramillo, Eduardo; Wang, Lulu; D'Mello, Santosh R

    2008-08-15

    Histone deacetylase-related protein (HDRP), an alternatively spliced and truncated form of histone deacetylase-9 that lacks a C-terminal catalytic domain, protects neurons from death. In an effort to understand the mechanism by which HDRP mediates its neuroprotective effect, we screened for proteins in the brain that interact with HDRP by using a yeast two-hybrid assay. One of the HDRP-interacting proteins identified in this screen was amino enhancer of split (AES), a 197-amino acid protein belonging to the Groucho family. Interaction between HDRP and AES was verified by in vitro binding assays, coimmunoprecipitation, and colocalization studies. To investigate the significance of the HDRP-AES association to the regulation of neuronal survival, we used cultured cerebellar granule neurons, which undergo apoptosis when treated with low potassium (LK) medium. We found that in contrast to HDRP, whose expression is markedly reduced by LK treatment, AES expression was not appreciably altered. Forced expression of AES in healthy neurons results in cell death, an action that is blocked by the coexpression of HDRP. AES is a truncated version of larger Groucho-related proteins, one of which is transducin-like enhancer of split (TLE)-1. We found that the expression of TLE1 is reduced in LK-treated neurons and the forced expression of TLE1 blocks LK-induced neuronal death as well as death induced by AES. Our results show that AES has apoptotic activity in neurons and suggest that neuroprotection by HDRP is mediated by the inhibition of this activity through direct interaction.

  12. Truncation of the human immunodeficiency virus type 1 transmembrane glycoprotein cytoplasmic domain blocks virus infectivity.

    PubMed Central

    Dubay, J W; Roberts, S J; Hahn, B H; Hunter, E

    1992-01-01

    Human immunodeficiency virus type 1 contains a transmembrane glycoprotein with an unusually long cytoplasmic domain. To determine the role of this domain in virus replication, a series of single nucleotide changes that result in the insertion of premature termination codons throughout the cytoplasmic domain has been constructed. These mutations delete from 6 to 192 amino acids from the carboxy terminus of gp41 and do not affect the amino acid sequence of the regulatory proteins encoded by rev and tat. The effects of these mutations on glycoprotein biosynthesis and function as well as on virus infectivity have been examined in the context of a glycoprotein expression vector and the viral genome. All of the mutant glycoproteins were synthesized, processed, and transported to the cell surface in a manner similar to that of the wild-type glycoprotein. With the exception of mutants that remove the membrane anchor domain, all of the mutant glycoproteins retained the ability to cause fusion of CD4-bearing cells. However, deletion of more than 19 amino acids from the C terminus of gp41 blocked the ability of mutant virions to infect cells. This defect in virus infectivity appeared to be due at least in part to a failure of the virus to efficiently incorporate the truncated glycoprotein. Similar data were obtained for mutations in two different env genes and two different target cell lines. These results indicate that the cytoplasmic domain of gp41 plays a critical role during virus assembly and entry in the life cycle of human immunodeficiency virus type 1. PMID:1357190

  13. [Indications and possibilities of blockade of the sympathetic nerve].

    PubMed

    Meyer, J

    1987-04-01

    Treatment of chronic pain through permanent or temporary interruption of sympathetic activity is marked by great clinical success, but nevertheless there are rather skeptical reports about long-term results of these blocks as therapeutic measures. There are many symptoms and signs of chronic pain, while diagnosis is expensive, the pathogenesis is complex, and the etiology is generally due to multiple factors. Indications for sympathetic blockade depend upon the possible means of access, as in the cervicothoracic, thoracic, lumbar, or sacral regions. General indications are: symptoms not limited segmentally within peripheral body areas; pain resulting from microtraumata and lesions of peripheral nerve branches; and pain caused by intensified sympathetic tone with consequent circulatory disturbances. Peripheral circulatory disturbances are the most common indication for sympathetic blockade, as the block produces a vasomotor reaction that leads to increased capillary circulation. Pain caused by herpes zoster, sudden hearing loss, hyperhidrosis, and pseudesthesia can also be influenced by sympathetic blockade. There are several possibilities for reducing or interrupting sympathetic activity; for us, however, blocking of the sympathetic trunk is the most important. During the last 16 years we performed 15,726 sympathetic blockades on 2385 patients, which included: 3735 stellate ganglion blocks, 6121 blocks of the lumbar sympathetic trunk, 5037 continuous peridural anesthesias, 29 blocks of the thoracic sympathetic trunk, and 12 celiac blocks. In 792 cases sympathetic blocks were performed using neurolytic drugs, in most cases 96% ethyl alcohol and less often 10% ammonium sulphate. Other possibilities, such as enteral administration or infusion of sympatholytic drugs, were not taken into consideration; regional intravascular injection of guanethidine can be recommended, however.(ABSTRACT TRUNCATED AT 250 WORDS)

  14. On scalable lossless video coding based on sub-pixel accurate MCTF

    NASA Astrophysics Data System (ADS)

    Yea, Sehoon; Pearlman, William A.

    2006-01-01

    We propose two approaches to scalable lossless coding of motion video. Both achieve an SNR-scalable bitstream, up to lossless reconstruction, based upon sub-pixel-accurate MCTF-based wavelet video coding. The first approach is based upon a two-stage encoding strategy in which a lossy reconstruction layer is augmented by a following residual layer in order to obtain (nearly) lossless reconstruction. The key advantages of our approach include an 'on-the-fly' determination of the bit budget distribution between the lossy and residual layers, freedom to use almost any progressive lossy video coding scheme as the first layer, and an added feature of near-lossless compression. The second approach capitalizes on the fact that the invertibility of MCTF can be maintained with arbitrary sub-pixel accuracy, even in the presence of an extra truncation step for lossless reconstruction, thanks to the lifting implementation. Experimental results show that the proposed schemes achieve compression ratios not obtainable by intra-frame coders such as Motion JPEG-2000, thanks to their inter-frame coding nature. They are also shown to outperform the state-of-the-art non-scalable inter-frame coder H.264 (JM) in lossless mode, with the added benefit of bitstream embeddedness.
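
    The invertibility-under-truncation argument rests on the lifting structure: each lifting step is undone exactly by replaying the same (truncated) quantity with the opposite sign. The integer Haar-like pair below is a minimal stand-in for the motion-compensated temporal filter, which applies the same idea along motion trajectories.

    ```python
    # Lifting with truncating integer arithmetic is still invertible: the
    # inverse recomputes the identical truncated terms and subtracts them.
    import numpy as np

    x = np.array([7, 3, 10, 2, 5, 11, 4, 8])
    even, odd = x[0::2].copy(), x[1::2].copy()

    # Forward lifting (floor division is the deliberate truncation step).
    detail = odd - even            # predict step
    approx = even + detail // 2    # update step, truncated

    # Inverse lifting: same truncated quantity, opposite sign.
    even_rec = approx - detail // 2
    odd_rec = detail + even_rec

    rec = np.empty_like(x)
    rec[0::2], rec[1::2] = even_rec, odd_rec
    assert np.array_equal(rec, x)  # perfect, i.e. lossless, reconstruction
    print("reconstructed:", rec)
    ```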

  15. The finite scaling for S = 1 XXZ chains with uniaxial single-ion-type anisotropy

    NASA Astrophysics Data System (ADS)

    Wang, Honglei; Xiong, Xingliang

    2014-03-01

    The scaling behavior of criticality for spin-1 XXZ chains with uniaxial single-ion-type anisotropy is investigated by employing the infinite matrix product state representation with the infinite time-evolving block decimation method. At criticality, the accuracy of the ground state of a system is limited by the truncation dimension χ of the local Hilbert space. We present four pieces of evidence for the scaling of the entanglement entropy, the largest eigenvalue of the Schmidt decomposition, the correlation length, and the connection between the actual correlation length ξ and the energy. The results show that the finite-χ scalings are governed by the central charge of the critical system. They also demonstrate that the infinite time-evolving block decimation algorithm with the infinite matrix product state representation can be a highly accurate method for simulating critical properties at criticality.
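
    For reference, the generic finite-entanglement scaling relations that such χ-scaling analyses rest on (as given in the finite-entanglement scaling literature, e.g. Tagliacozzo et al. and Pollmann et al., rather than quoted from this paper) take the form:

    ```latex
    % Finite-entanglement scaling at criticality: the truncation dimension
    % \chi sets an effective correlation length \xi_\chi, and the central
    % charge c governs both exponents.
    \begin{align}
      \xi_\chi &\propto \chi^{\kappa}, &
      \kappa &= \frac{6}{c\left(\sqrt{12/c} + 1\right)}, \\
      S_\chi &\simeq \frac{c}{6} \ln \xi_\chi
              = \frac{c\,\kappa}{6} \ln \chi .
    \end{align}
    ```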

  16. Molecular evolution of pentatricopeptide repeat genes reveals truncation in species lacking an editing target and structural domains under distinct selective pressures.

    PubMed

    Hayes, Michael L; Giang, Karolyn; Mulligan, R Michael

    2012-05-14

    Pentatricopeptide repeat (PPR) proteins are required for numerous RNA processing events in plant organelles including C-to-U editing, splicing, stabilization, and cleavage. Fifteen PPR proteins are known to be required for RNA editing at 21 sites in Arabidopsis chloroplasts, and belong to the PLS class of PPR proteins. In this study, we investigate the co-evolution of four PPR genes (CRR4, CRR21, CLB19, and OTP82) and their six editing targets in Brassicaceae species. PPR genes are composed of approximately 10 to 20 tandem repeats and each repeat has two α-helical regions, helix A and helix B, that are separated by short coil regions. Each repeat and structural feature was examined to determine the selective pressures on these regions. All of the PPR genes examined are under strong negative selection. Multiple independent losses of editing site targets are observed for both CRR21 and OTP82. In several species lacking the known editing target for CRR21, PPR genes are truncated near the 17th PPR repeat. The coding sequences of the truncated CRR21 genes are maintained under strong negative selection; however, the 3' UTR sequences beyond the truncation site have substantially diverged. Phylogenetic analyses of the four PPR genes show that substitution rates in sequences corresponding to helix A are high compared to those of helix B sequences. Differential evolutionary selection of helix A versus helix B is observed in both plant and mammalian PPR genes. PPR genes and their cognate editing sites are mutually constrained in evolution. Editing sites are frequently lost by replacement of an edited C with a genomic T. After the loss of an editing site, the PPR genes are observed with three outcomes: first, few changes are detected in some cases; second, the PPR gene is present as a pseudogene; and third, the PPR gene is present but truncated in the C-terminal region. The retention of truncated forms of CRR21 that are maintained under strong negative selection even in the absence of an editing site target suggests that unrecognized function(s) might exist for this PPR protein. PPR gene sequences that encode helix A are under strong selection, and could be involved in RNA substrate recognition.

  17. Protograph LDPC Codes with Node Degrees at Least 3

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher

    2006-01-01

    In this paper we present protograph codes with a small number of degree-3 nodes and one high-degree node. The iterative decoding thresholds of the proposed rate-1/2 codes are lower, by about 0.2 dB, than those of the best known irregular LDPC codes with degree at least 3. The main motivation is to gain linear minimum distance, in order to achieve a low error floor, and to construct rate-compatible protograph-based LDPC codes for fixed block length that simultaneously achieve a low iterative decoding threshold and linear minimum distance. We start with a rate-1/2 protograph LDPC code with degree-3 nodes and one high-degree node. Higher-rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes. This is equivalent to constraint combining in the protograph. The case where all constraints are combined corresponds to the highest-rate code. This constraint must be connected to nodes of degree at least three for the graph to have linear minimum distance. Thus, having node degree at least 3 at rate 1/2 guarantees that the linear-minimum-distance property is preserved at higher rates. Through examples we show that an iterative decoding threshold as low as 0.544 dB can be achieved for small protographs with node degrees at least three. A family of low- to high-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
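
    For context, a protograph is a small Tanner graph whose base matrix is "lifted" into a full parity-check matrix by replacing each edge with a Z-fold permutation (parallel edges become sums of distinct permutations). A minimal sketch; the base matrix below is a hypothetical example with degree-3 variable nodes and one high-degree node, not the paper's protograph:

        import numpy as np

        rng = np.random.default_rng(1)

        def lift_protograph(base, Z):
            """Replace each base-matrix entry b with the mod-2 sum of b distinct
            Z-by-Z circulant permutation matrices (zeros become zero blocks)."""
            rows = []
            for r in base:
                blocks = []
                for b in r:
                    blk = np.zeros((Z, Z), dtype=int)
                    for shift in rng.choice(Z, size=b, replace=False):
                        blk = (blk + np.roll(np.eye(Z, dtype=int), shift, axis=1)) % 2
                    blocks.append(blk)
                rows.append(np.hstack(blocks))
            return np.vstack(rows)

        base = np.array([[1, 1, 1, 3],      # hypothetical base matrix:
                         [1, 1, 1, 3],      # three degree-3 variable nodes
                         [1, 1, 1, 2]])     # plus one high-degree node
        H = lift_protograph(base, Z=16)
        degrees = H.sum(axis=0)
        print(H.shape, "variable-node degrees:", degrees.min(), "to", degrees.max())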

  18. Utilization of Patch/Triangular Target Description Data in BRL Parallel Ray Vulnerability Assessment Codes

    DTIC Science & Technology

    1979-09-01

    KEY WORDS: Target Descriptions; GIFT Code; COMGEOM Descriptions; FASTGEN Code. [Only fragments of the abstract survive OCR: ... the code which accepts the COMGEOM target description and produces the shotline data is the GIFT code. The GIFT code evolved from and has ... the COMGEOM/GIFT methodology, while the Navy and Air Force use the PATCH/SHOTGEN-FASTGEN methodology. Authors: Lawrence W. Bain, Mathew J. Heisinger.]

  19. Evaluation of three coding schemes designed for improved data communication

    NASA Technical Reports Server (NTRS)

    Snelsire, R. W.

    1974-01-01

    Three coding schemes designed for improved data communication are evaluated. Four block codes are evaluated relative to a quality function that depends on both the amount of data rejected and the error rate. The Viterbi maximum-likelihood decoding algorithm is reviewed as a decoding procedure. The evaluation is obtained by simulating the system on a digital computer. Short-constraint-length, rate-1/2 quick-look codes are studied, and their performance is compared to that of general nonsystematic codes.
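
    As background for the decoding procedure reviewed in the report, the sketch below implements hard-decision Viterbi decoding for a short-constraint-length, rate-1/2 convolutional code. The (7, 5) octal generators are a standard textbook choice, not necessarily one of the codes evaluated:

        import numpy as np

        G = [0b111, 0b101]    # rate-1/2, constraint length 3, generators (7, 5)

        def encode(bits):
            state, out = 0, []
            for b in bits:
                reg = (b << 2) | state
                out += [bin(reg & g).count("1") % 2 for g in G]
                state = reg >> 1
            return out

        def viterbi_decode(received):
            metric = [0] + [np.inf] * 3          # path metrics for the 4 states
            paths = [[] for _ in range(4)]
            for i in range(0, len(received), 2):
                r = received[i:i + 2]
                new_metric, new_paths = [np.inf] * 4, [None] * 4
                for s in range(4):
                    for b in (0, 1):
                        reg = (b << 2) | s
                        ns = reg >> 1            # next state
                        branch = sum(bin(reg & g).count("1") % 2 != x
                                     for g, x in zip(G, r))   # Hamming distance
                        if metric[s] + branch < new_metric[ns]:
                            new_metric[ns] = metric[s] + branch
                            new_paths[ns] = paths[s] + [b]
                metric, paths = new_metric, new_paths
            return paths[int(np.argmin(metric))]  # survivor of the best end state

        msg = [1, 0, 1, 1, 0, 0, 1, 0]
        rx = encode(msg)
        rx[3] ^= 1                                # inject one channel error
        assert viterbi_decode(rx) == msg          # error corrected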

  20. Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient.

    PubMed

    Bian, Liheng; Suo, Jinli; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai

    2016-06-10

    Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low-resolution intensity images corresponding to the sub-bands of the sample's high-resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degradations such as Gaussian noise, Poisson noise, speckle noise, and pupil location error, which can largely degrade the reconstruction. To efficiently address these degradations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and a truncated Wirtinger gradient for effective error removal. Results on both simulated data and real data captured using our laser-illuminated FPM setup show that the proposed method outperforms other state-of-the-art algorithms. We have also released our source code for non-commercial use.
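
    The core update of such a scheme is compact. For intensity measurements y ~ |Az|^2, the Wirtinger gradient of the Poisson negative log-likelihood is A^H((1 - y/|Az|^2) * Az), and truncation discards outlier terms before each descent step. The truncation rule below is a simplified stand-in for the paper's criterion, and the toy recovery problem is hypothetical:

        import numpy as np

        rng = np.random.default_rng(0)

        def poisson_twf_step(z, A, y, step=0.2, tau=3.0):
            """One truncated Wirtinger-gradient step on the Poisson likelihood.
            Terms whose intensity residual is far from the mean residual are
            treated as outliers and dropped (simplified truncation rule)."""
            u = A @ z
            resid = np.abs(u) ** 2 - y
            keep = np.abs(resid) <= tau * np.mean(np.abs(resid))
            grad = A.conj().T @ (keep * (1.0 - y / np.abs(u) ** 2) * u) / len(y)
            return z - step * grad

        n, m = 32, 256
        A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
        x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        y = rng.poisson(np.abs(A @ x) ** 2)       # Poisson-corrupted intensities

        z = x + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))  # warm start
        for _ in range(200):
            z = poisson_twf_step(z, A, y)
        phase = (x.conj() @ z) / abs(x.conj() @ z)  # remove the global phase
        print("relative error:", np.linalg.norm(z / phase - x) / np.linalg.norm(x))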

  1. Epigenetic remodelling and dysregulation of DLGAP4 is linked with early-onset cerebellar ataxia

    PubMed Central

    Minocherhomji, Sheroy; Hansen, Claus; Kim, Hyung-Goo; Mang, Yuan; Bak, Mads; Guldberg, Per; Papadopoulos, Nickolas; Eiberg, Hans; Doh, Gerald Dayebga; Møllgård, Kjeld; Hertz, Jens Michael; Nielsen, Jørgen E.; Ropers, Hans-Hilger; Tümer, Zeynep; Tommerup, Niels; Kalscheuer, Vera M.; Silahtaroglu, Asli

    2014-01-01

    Genome instability, epigenetic remodelling and structural chromosomal rearrangements are hallmarks of cancer. However, the coordinated epigenetic effects of constitutional chromosomal rearrangements that disrupt genes associated with congenital neurodevelopmental diseases are poorly understood. To understand the genetic–epigenetic interplay at breakpoints of chromosomal translocations disrupting CG-rich loci, we quantified epigenetic modifications at DLGAP4 (SAPAP4), a key post-synaptic density 95 (PSD95) associated gene, truncated by the chromosome translocation t(8;20)(p12;q11.23), co-segregating with cerebellar ataxia in a five-generation family. We report significant epigenetic remodelling of the DLGAP4 locus triggered by the t(8;20)(p12;q11.23) translocation and leading to dysregulation of DLGAP4 expression in affected carriers. Disruption of DLGAP4 results in monoallelic hypermethylation of the truncated DLGAP4 promoter CpG island. This induced hypermethylation is maintained in somatic cells of carriers across several generations in a t(8;20)-dependent manner; however, it is erased in the germ cells of the translocation carriers. Subsequently, chromatin remodelling of the locus perturbed monoallelic expression of DLGAP4 mRNAs and non-coding RNAs in haploid cells carrying the translocation. Our results provide new mechanistic insight into the way a balanced chromosomal rearrangement associated with a neurodevelopmental disorder perturbs allele-specific epigenetic mechanisms at breakpoints, leading to the deregulation of the truncated locus. PMID:24986922

  2. Murine c-mpl: a member of the hematopoietic growth factor receptor superfamily that transduces a proliferative signal.

    PubMed Central

    Skoda, R C; Seldin, D C; Chiang, M K; Peichel, C L; Vogt, T F; Leder, P

    1993-01-01

    The murine myeloproliferative leukemia virus has previously been shown to contain a fragment of the coding region of the c-mpl gene, a member of the cytokine receptor superfamily. We have isolated cDNA and genomic clones encoding murine c-mpl and localized the c-mpl gene to mouse chromosome 4. Since some members of this superfamily function by transducing a proliferative signal and since the putative ligand of mpl is unknown, we have generated a chimeric receptor to test the functional potential of mpl. The chimera consists of the extracellular domain of the human interleukin-4 receptor and the cytoplasmic domain of mpl. A mouse hematopoietic cell line transfected with this construct proliferates in response to human interleukin-4, thereby demonstrating that the cytoplasmic domain of mpl contains all elements necessary to transmit a growth stimulatory signal. In addition, we show that 25-40% of mpl mRNA found in the spleen corresponds to a novel truncated and potentially soluble isoform of mpl and that both full-length and truncated forms of mpl protein can be immunoprecipitated from lysates of transfected COS cells. Interestingly, however, although the truncated form of the receptor possesses a functional signal sequence and lacks a transmembrane domain, it is not detected in the culture media of transfected cells. PMID:8334987

  3. Quality Scalability Aware Watermarking for Visual Content.

    PubMed

    Bhowmik, Deepayan; Abhayaratne, Charith

    2016-11-01

    Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet-domain blind watermarking algorithm guided by a quantization-based binary tree, and it can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting the watermark data from the watermarked images. The algorithm is further extended to incorporate the bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000, which improves the robustness against the quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality-scalable content adaptation; the proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality-scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.
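
    The bit-plane discarding model referred to above is simple to state: quality truncation zeroes the n least significant magnitude bit-planes of every coefficient, and an embedded watermark must survive that quantization. A minimal sketch of the model (illustrative, not the paper's embedding algorithm):

        import numpy as np

        def discard_bitplanes(coeff, n):
            """Zero the n least significant magnitude bit-planes of each
            coefficient, as JPEG2000-style quality truncation effectively does."""
            mag = np.abs(coeff).astype(np.int64)
            return np.sign(coeff) * ((mag >> n) << n)

        c = np.array([-183, 97, -6, 42, 13, -255])
        for n in range(4):
            print(n, discard_bitplanes(c, n))

    A quantization-aware embedder places watermark bits in magnitude ranges that stay distinguishable after several bit-planes are dropped, which is one intuition for why modeling the quantization improves robustness to quality-scalable adaptation.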

  4. Read-Write-Codes: An Erasure Resilient Encoding System for Flexible Reading and Writing in Storage Networks

    NASA Astrophysics Data System (ADS)

    Mense, Mario; Schindelhauer, Christian

    We introduce the Read-Write-Coding-System (RWC), a very flexible class of linear block codes that generate efficient and flexible erasure codes for storage networks. In particular, given a message x of k symbols and a codeword y of n symbols, an RW code defines additional parameters k ≤ r, w ≤ n that offer enhanced possibilities to adjust the fault-tolerance capability of the code. More precisely, an RWC provides linear (n, k, d)-codes that have (a) minimum distance d = n - r + 1 between any two codewords, and (b) for each codeword, a codeword for each other message within distance at most w. Furthermore, depending on the values of r and w and the code alphabet, different block codes such as parity codes (e.g., RAID 4/5) or Reed-Solomon (RS) codes (if r = k and thus w = n) can be generated. In storage networks, in which I/O accesses are very costly and redundancy is crucial, this flexibility has considerable advantages, as r and w can be adapted optimally to read- or write-intensive applications; only w symbols must be updated if the message x changes completely, which differs from other codes that always need to rewrite y completely as x changes. In this paper, we first state a tight lower bound and basic conditions for all RW codes. Furthermore, we introduce special RW codes in which all mentioned parameters are adjustable even online, that is, RW codes that are adaptive to changing demands. Finally, we point out some useful properties regarding safety and security of the stored data.
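
    The read/write economics are easiest to see in the degenerate parity-code case mentioned above (RAID 4/5): updating one data symbol requires writing only that symbol plus a parity patch, not re-encoding the codeword. A toy sketch; general RW codes extend this tradeoff to arbitrary r and w:

        import numpy as np

        # Parity code as a toy instance: k data symbols plus one parity symbol,
        # n = k + 1.  Reading needs any r = k symbols; changing one data symbol
        # needs writes to only w = 2 symbols (the data symbol and the parity).
        x = np.array([7, 1, 4, 9], dtype=np.uint8)
        y = np.append(x, np.bitwise_xor.reduce(x))   # codeword: data + XOR parity

        new = np.uint8(5)
        y[-1] ^= y[2] ^ new       # parity patch: drop old contribution, add new
        y[2] = new                # write the new data symbol
        assert y[-1] == np.bitwise_xor.reduce(y[:-1])   # still a valid codeword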

  5. Testing of Error-Correcting Sparse Permutation Channel Codes

    NASA Technical Reports Server (NTRS)

    Shcheglov, Kirill, V.; Orlov, Sergei S.

    2008-01-01

    A computer program performs Monte Carlo direct numerical simulations for testing sparse permutation channel codes, which offer strong error-correction capabilities at high code rates and are considered especially suitable for storage of digital data in holographic and volume memories. A word in a code of this type is characterized by, among other things, a sparseness parameter (M) and a fixed number (K) of 1 or "on" bits in a channel block length of N.

  6. Light Infantry in the Defense of Urban Europe.

    DTIC Science & Technology

    1986-12-14


  7. Some partial-unit-memory convolutional codes

    NASA Technical Reports Server (NTRS)

    Abdel-Ghaffar, K.; Mceliece, R. J.; Solomon, G.

    1991-01-01

    The results of a study on a class of error-correcting codes called partial-unit-memory (PUM) codes are presented. This class of codes, though not entirely new, has until now remained relatively unexplored. The possibility of using the well-developed theory of block codes to construct a large family of promising PUM codes is shown. The performance of several specific PUM codes is compared with that of the Voyager standard (2, 1, 6) convolutional code. It was found that these codes can outperform the Voyager code with little or no increase in decoder complexity. This suggests that there may very well be PUM codes that can be used for deep space telemetry offering both increased performance and decreased implementational complexity over current coding systems.

  8. Quantum Kronecker sum-product low-density parity-check codes with finite rate

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Pryadko, Leonid P.

    2013-07-01

    We introduce an ansatz for quantum codes which gives the hypergraph-product (generalized toric) codes of Tillich and Zémor and the generalized bicycle codes of MacKay as limiting cases. The construction allows for both lower and upper bounds on the minimum distance; they scale as the square root of the block length. Many codes defined in this way have a finite rate and limited-weight stabilizer generators, an analog of classical low-density parity-check (LDPC) codes. Compared to the hypergraph-product codes, hyperbicycle codes generally have a wider range of parameters; in particular, they can have a higher rate while preserving the estimated error threshold.
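
    The hypergraph-product construction itself fits in a few lines: Kronecker products of two classical parity-check matrices yield CSS stabilizer matrices that commute by construction. The toy ingredient below, the parity-check matrix of the [3,1] repetition code, produces the small [[13,1,3]] surface-code-like instance:

        import numpy as np

        def hypergraph_product(H1, H2):
            """CSS stabilizer matrices of the hypergraph-product code of two
            binary parity-check matrices (arithmetic mod 2)."""
            r1, n1 = H1.shape
            r2, n2 = H2.shape
            Hx = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                            np.kron(np.eye(r1, dtype=int), H2.T)]) % 2
            Hz = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                            np.kron(H1.T, np.eye(r2, dtype=int))]) % 2
            return Hx, Hz

        H = np.array([[1, 1, 0],
                      [0, 1, 1]])          # [3,1] repetition code checks
        Hx, Hz = hypergraph_product(H, H)
        assert not (Hx @ Hz.T % 2).any()   # X and Z stabilizers commute
        print("physical qubits:", Hx.shape[1])   # n1*n2 + r1*r2 = 13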

  9. Saturation of recognition elements blocks evolution of new tRNA identities

    PubMed Central

    Saint-Léger, Adélaïde; Bello, Carla; Dans, Pablo D.; Torres, Adrian Gabriel; Novoa, Eva Maria; Camacho, Noelia; Orozco, Modesto; Kondrashov, Fyodor A.; Ribas de Pouplana, Lluís

    2016-01-01

    Understanding the principles that led to the current complexity of the genetic code is a central question in evolution. Expansion of the genetic code required the selection of new transfer RNAs (tRNAs) with specific recognition signals that allowed them to be matured, modified, aminoacylated, and processed by the ribosome without compromising the fidelity or efficiency of protein synthesis. We show that saturation of recognition signals blocks the emergence of new tRNA identities and that the rate of nucleotide substitutions in tRNAs is higher in species with fewer tRNA genes. We propose that the growth of the genetic code stalled because a limit was reached in the number of identity elements that can be effectively used in the tRNA structure. PMID:27386510

  10. On complexity of trellis structure of linear block codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1990-01-01

    The trellis structure of linear block codes (LBCs) is discussed. The state and branch complexities of a trellis diagram (TD) for a LBC are investigated. The TD with the minimum number of states is said to be minimal. The branch complexity of a minimal TD for a LBC is expressed in terms of the dimensions of specific subcodes of the given code. Upper and lower bounds are then derived on the number of states of a minimal TD for a LBC, and it is shown that a cyclic (or shortened cyclic) code is the worst in terms of state complexity among the LBCs of the same length and dimension. Furthermore, it is shown that the structural complexity of a minimal TD for a LBC depends on the order of its bit positions. This fact suggests that an appropriate permutation of the bit positions of a code may result in an equivalent code with a much simpler minimal TD. Boolean polynomial representation of the codewords of a LBC is also considered; this representation helps in the study of the trellis structure of the code and is applied to construct its minimal TD. In particular, the construction of minimal trellises for Reed-Muller codes and for the extended and permuted binary primitive BCH codes which contain Reed-Muller codes as subcodes is emphasized. Finally, the structural complexity of minimal trellises for the extended and permuted double-error-correcting BCH codes is analyzed and presented. It is shown that these codes have relatively simple trellis structure and hence can be decoded with the Viterbi decoding algorithm.
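
    The state-complexity computation described here can be sketched directly: bring the generator matrix to minimal-span form (all span starts distinct, all span ends distinct) by greedy row additions, then count at each depth the generators whose span crosses it. Using the cyclic (7,4) Hamming code as the example:

        import numpy as np

        def span(row):
            idx = np.flatnonzero(row)
            return idx[0], idx[-1]

        def state_profile(G):
            """State-space dimensions of the minimal trellis of a binary linear
            block code, via greedy reduction to minimal-span form."""
            G = G.copy() % 2
            changed = True
            while changed:
                changed = False
                for i in range(len(G)):
                    for j in range(i + 1, len(G)):
                        (si, ei), (sj, ej) = span(G[i]), span(G[j])
                        if si == sj:                      # same span start:
                            tgt = i if ei >= ej else j    # shorten the longer row
                        elif ei == ej:                    # same span end:
                            tgt = i if si <= sj else j
                        else:
                            continue
                        G[tgt] = (G[i] + G[j]) % 2
                        changed = True
            n = G.shape[1]
            return [sum(1 for row in G if span(row)[0] < t <= span(row)[1])
                    for t in range(n + 1)]

        G_hamming = np.array([[1, 1, 0, 1, 0, 0, 0],
                              [0, 1, 1, 0, 1, 0, 0],
                              [0, 0, 1, 1, 0, 1, 0],
                              [0, 0, 0, 1, 1, 0, 1]])
        print(state_profile(G_hamming))    # [0, 1, 2, 3, 3, 2, 1, 0]

    The profile peaks at 3, i.e. 2^3 = 8 states, consistent with the observation that cyclic codes are worst-case; permuting the bit positions can lower the peak for codes whose minimal trellis depends on coordinate order.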

  11. ADPAC v1.0: User's Manual

    NASA Technical Reports Server (NTRS)

    Hall, Edward J.; Heidegger, Nathan J.; Delaney, Robert A.

    1999-01-01

    The overall objective of this study was to evaluate the effects of turbulence models in a 3-D numerical analysis on the wake prediction capability. The current version of the computer code resulting from this study is referred to as ADPAC v7 (Advanced Ducted Propfan Analysis Codes -Version 7). This report is intended to serve as a computer program user's manual for the ADPAC code used and modified under Task 15 of NASA Contract NAS3-27394. The ADPAC program is based on a flexible multiple-block and discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. Turbulence models now available in the ADPAC code are: a simple mixing-length model, the algebraic Baldwin-Lomax model with user defined coefficients, the one-equation Spalart-Allmaras model, and a two-equation k-R model. The consolidated ADPAC code is capable of executing in either a serial or parallel computing mode from a single source code.

  12. Compress compound images in H.264/MPEG-4 AVC by exploiting spatial correlation.

    PubMed

    Lan, Cuiling; Shi, Guangming; Wu, Feng

    2010-04-01

    Compound images are a combination of text, graphics, and natural image content. They present strong anisotropic features, especially in the text and graphics parts, and these anisotropic features often render conventional compression inefficient. Thus, this paper proposes a novel coding scheme built on H.264 intraframe coding, in which two new intramodes are developed to better exploit spatial correlation in compound images. The first is the residual scalar quantization (RSQ) mode, where intrapredicted residues are directly quantized and coded without transform. The second is the base colors and index map (BCIM) mode, which can be viewed as an adaptive color quantization: an image block is represented by several representative colors, referred to as base colors, and an index map. Every block selects its coding mode from the two new modes and the existing H.264 intramodes by rate-distortion optimization (RDO). Experimental results show that the proposed scheme improves coding efficiency by more than 10 dB at most bit rates for compound images while keeping performance comparable to H.264 for natural images.
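
    The BCIM mode is essentially per-block palette coding. A toy sketch with a simple k-means palette (the paper's base-color selection and the entropy coding of the index map are not reproduced):

        import numpy as np

        def bcim_encode(block, n_colors=4, iters=10):
            """Represent a pixel block by a few base colors plus an index map."""
            pixels = block.reshape(-1, block.shape[-1]).astype(float)
            order = np.argsort(pixels.sum(axis=1))      # init: spread over range
            picks = np.linspace(0, len(pixels) - 1, n_colors).astype(int)
            palette = pixels[order[picks]].copy()
            for _ in range(iters):                      # k-means refinement
                d = ((pixels[:, None, :] - palette[None]) ** 2).sum(axis=2)
                idx = d.argmin(axis=1)
                for c in range(n_colors):
                    if (idx == c).any():
                        palette[c] = pixels[idx == c].mean(axis=0)
            return palette.round().astype(int), idx.reshape(block.shape[:2])

        rng = np.random.default_rng(0)
        # Hypothetical 8x8 RGB text-like block: two dominant colors plus noise.
        block = np.where(rng.random((8, 8, 1)) < 0.3, 255, 30) \
                + rng.integers(0, 8, (8, 8, 3))
        palette, index_map = bcim_encode(block, n_colors=2)
        print(palette)       # the base colors
        print(index_map)     # per-pixel palette indices, cheap to entropy-code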

  13. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold driven maximum distortion criterion to select the specific coder used. The different coders are built using variable blocksized transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed. The bit allocation algorithm is more fully developed, and can be used to achieve more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
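
    The classical high-rate answer to this allocation problem assigns coefficient i the bits b_i = R_avg + (1/2) log2(var_i / geometric mean of variances). A sketch with a hypothetical variance profile and the usual non-negativity fix-up (the dissertation's algorithm refines this baseline):

        import numpy as np

        def allocate_bits(variances, total_bits):
            """High-rate optimal bit allocation across transform coefficients."""
            n = len(variances)
            geo_mean = np.exp(np.mean(np.log(variances)))
            b = total_bits / n + 0.5 * np.log2(variances / geo_mean)
            b = np.maximum(b, 0.0)                  # no negative bit counts
            active = b > 0
            # Re-spread the surplus or deficit over the active coefficients.
            b[active] += (total_bits - b.sum()) / active.sum()
            return b

        var = np.array([120.0, 40.0, 9.0, 2.5, 0.8, 0.3])   # hypothetical
        bits = allocate_bits(var, total_bits=12)
        print(bits.round(2), "total:", round(bits.sum(), 2))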

  14. A seismic data compression system using subband coding

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.; Pollara, F.

    1995-01-01

    This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.

  15. Enhancements to the GRIDGEN structured grid generation system for internal and external flow applications

    NASA Technical Reports Server (NTRS)

    Steinbrenner, John P.; Chawner, John R.

    1992-01-01

    GRIDGEN is a government-domain software package for interactive generation of multiple-block grids around general configurations. Though it has been freely available since 1989, it has not been widely embraced by the internal flow community due to a misconception that it was designed for external flow use only. In reality, GRIDGEN has always worked for internal flow applications, and ongoing enhancements are increasing the quality of, and the efficiency with which, grids for external and internal flow problems may be constructed. The software consists of four codes used to perform the four steps of the grid generation process. GRIDBLOCK is first used to decompose the flow domain into a collection of component blocks and then to establish interblock connections and flow solver boundary conditions. GRIDGEN2D is then used to generate surface grids on the outer shell of each component block. GRIDGEN3D generates grid points on the interior of each block, and finally GRIDVUE3D is used to inspect the resulting multiple-block grid. Three of these codes (GRIDBLOCK, GRIDGEN2D, and GRIDVUE3D) are highly interactive and graphical in nature, and currently run on Silicon Graphics, Inc., and IBM RS/6000 workstations. The lone batch code (GRIDGEN3D) may be run on any of several Unix-based platforms. Surface grid generation in GRIDGEN2D is being improved with the addition of higher-order surface definitions (NURBS and parametric surfaces input in IGES format and bicubic surfaces input in PATRAN Neutral File format) and double-precision mathematics. In addition, two types of automation have been added to GRIDGEN2D that reduce the learning-curve slope for new users and eliminate work for experienced users. Volume grid generation using GRIDGEN3D has been improved via the addition of an advanced hybrid control function formulation that provides both orthogonality and clustering control at the block faces and clustering control on the block interior.

  16. DIVWAG Model Documentation. Volume II. Programmer/Analyst Manual. Part 5.

    DTIC Science & Technology

    1976-07-01

    [OCR fragments of data-word tables and a flow chart survive: the tables list fire-mission data words such as the mission type (1 = DAFS, 2 = CAS), the estimated X and Y coordinates of the target, a reject code (0 = mission unit ...), an abort indicator (0 = no abort, 1 = abort), and a six-word aircraft munitions item code; the flow chart (Figure VII-3-B-10) shows calls to subroutine TRNSMT to transmit the first and last blocks of data.]

  17. Sequential Prediction of Literacy Achievement for Specific Learning Disabilities Contrasting in Impaired Levels of Language in Grades 4 to 9.

    PubMed

    Sanders, Elizabeth A; Berninger, Virginia W; Abbott, Robert D

    Sequential regression was used to evaluate whether language-related working memory components uniquely predict reading and writing achievement beyond cognitive-linguistic translation for students in Grades 4 through 9 (N = 103) with specific learning disabilities (SLDs) in subword handwriting (dysgraphia, n = 25), word reading and spelling (dyslexia, n = 60), or oral and written language (oral and written language learning disabilities, n = 18). That is, SLDs are defined on the basis of cascading level of language impairment (subword, word, and syntax/text). A five-block regression model sequentially predicted literacy achievement from cognitive-linguistic translation (Block 1); working memory components for word-form coding (Block 2), phonological and orthographic loops (Block 3), and supervisory focused or switching attention (Block 4); and SLD groups (Block 5). Results showed that cognitive-linguistic translation explained an average of 27% and 15% of the variance in reading and writing achievement, respectively, but working memory components explained an additional 39% and 27% of variance. Orthographic word-form coding uniquely predicted nearly every measure, whereas attention switching uniquely predicted only reading. Finally, differences in reading and writing persisted between dyslexia and dysgraphia, with dysgraphia higher, even after controlling for Block 1 to 4 predictors. Differences in literacy achievement between students with dyslexia and oral and written language learning disabilities were largely explained by the Block 1 predictors. Applications to identifying and teaching students with these SLDs are discussed.
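
    The sequential-regression machinery itself is easy to sketch: predictor blocks enter the model in a fixed order, and each block is credited with the increment in R-squared it adds over the preceding blocks. The data below are random stand-ins, not the study's measures:

        import numpy as np

        def sequential_r2(y, blocks):
            """Increment in R^2 contributed by each predictor block in order."""
            X = np.ones((len(y), 1))            # intercept-only baseline
            r2_prev, increments = 0.0, []
            ss_tot = (y - y.mean()) @ (y - y.mean())
            for block in blocks:
                X = np.hstack([X, block])
                beta, *_ = np.linalg.lstsq(X, y, rcond=None)
                resid = y - X @ beta
                r2 = 1 - (resid @ resid) / ss_tot
                increments.append(r2 - r2_prev)
                r2_prev = r2
            return increments

        rng = np.random.default_rng(0)
        n = 103                                    # sample size as in the study
        translation = rng.standard_normal((n, 2))  # hypothetical Block 1
        wordform = rng.standard_normal((n, 2))     # hypothetical Block 2
        y = translation @ [0.5, 0.3] + wordform @ [0.6, 0.2] + rng.standard_normal(n)
        print(sequential_r2(y, [translation, wordform]))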

  18. Multidimensional Trellis Coded Phase Modulation Using a Multilevel Concatenation Approach. Part 1; Code Design

    NASA Technical Reports Server (NTRS)

    Rajpal, Sandeep; Rhee, Do Jun; Lin, Shu

    1997-01-01

    The first part of this paper presents a simple and systematic technique for constructing multidimensional M-ary phase shift keying (MPSK) trellis coded modulation (TCM) codes. The construction is based on a multilevel concatenation approach in which binary convolutional codes with good free branch distances are used as the outer codes and block MPSK modulation codes are used as the inner codes (or the signal spaces). Conditions on phase invariance of these codes are derived and a multistage decoding scheme for these codes is proposed. The proposed technique can be used to construct good codes for both the additive white Gaussian noise (AWGN) and fading channels, as is shown in the second part of this paper.

  19. Dominant genetics using a yeast genomic library under the control of a strong inducible promoter.

    PubMed

    Ramer, S W; Elledge, S J; Davis, R W

    1992-12-01

    In Saccharomyces cerevisiae, numerous genes have been identified by selection from high-copy-number libraries based on "multicopy suppression" or other phenotypic consequences of overexpression. Although fruitful, this approach suffers from two major drawbacks. First, high copy number alone may not permit high-level expression of tightly regulated genes. Conversely, other genes expressed in proportion to dosage cannot be identified if their products are toxic at elevated levels. This work reports construction of a genomic DNA expression library for S. cerevisiae that circumvents both limitations by fusing randomly sheared genomic DNA to the strong, inducible yeast GAL1 promoter, which can be regulated by carbon source. The library obtained contains 5 x 10(7) independent recombinants, representing a breakpoint at every base in the yeast genome. This library was used to examine aberrant gene expression in S. cerevisiae. A screen for dominant activators of yeast mating response identified eight genes that activate the pathway in the absence of exogenous mating pheromone, including one previously unidentified gene. One activator was a truncated STE11 gene lacking approximately 1000 base pairs of amino-terminal coding sequence. In two different clones, the same GAL1 promoter-proximal ATG is in-frame with the coding sequence of STE11, suggesting that internal initiation of translation there results in production of a biologically active, truncated STE11 protein. Thus this library allows isolation based on dominant phenotypes of genes that might have been difficult or impossible to isolate from high-copy-number libraries.

  20. RNA Helicase Associated with AU-rich Element (RHAU/DHX36) Interacts with the 3′-Tail of the Long Non-coding RNA BC200 (BCYRN1)*

    PubMed Central

    Booy, Evan P.; McRae, Ewan K. S.; Howard, Ryan; Deo, Soumya R.; Ariyo, Emmanuel O.; Dzananovic, Edis; Meier, Markus; Stetefeld, Jörg; McKenna, Sean A.

    2016-01-01

    RNA helicase associated with AU-rich element (RHAU) is an ATP-dependent RNA helicase that demonstrates high affinity for quadruplex structures in DNA and RNA. To elucidate the significance of these quadruplex-RHAU interactions, we have performed RNA co-immunoprecipitation screens to identify novel RNAs bound to RHAU and characterize their function. In the course of this study, we have identified the non-coding RNA BC200 (BCYRN1) as specifically enriched upon RHAU immunoprecipitation. Although BC200 does not adopt a quadruplex structure and does not bind the quadruplex-interacting motif of RHAU, it has direct affinity for RHAU in vitro. Specifically designed BC200 truncations and RNase footprinting assays demonstrate that RHAU binds to an adenosine-rich region near the 3′-end of the RNA. RHAU truncations support binding that is dependent upon a region within the C terminus and is specific to RHAU isoform 1. Tests performed to assess whether BC200 interferes with RHAU helicase activity have demonstrated the ability of BC200 to act as an acceptor of unwound quadruplexes via a cytosine-rich region near the 3′-end of the RNA. Furthermore, an interaction between BC200 and the quadruplex-containing telomerase RNA was confirmed by pull-down assays of the endogenous RNAs. This leads to the possibility that RHAU may direct BC200 to bind and exert regulatory functions at quadruplex-containing RNA or DNA sequences. PMID:26740632

  1. Generation of transgenic papaya with double resistance to Papaya ringspot virus and Papaya leaf-distortion mosaic virus.

    PubMed

    Kung, Yi-Jung; Bau, Huey-Jiunn; Wu, Yi-Ling; Huang, Chiung-Huei; Chen, Tsui-Miao; Yeh, Shyi-Dong

    2009-11-01

    During the field tests of coat protein (CP)-transgenic papaya lines resistant to Papaya ringspot virus (PRSV), another Potyvirus sp., Papaya leaf-distortion mosaic virus (PLDMV), appeared as an emerging threat to the transgenic papaya. In this investigation, an untranslatable chimeric construct containing the truncated CP coding region of the PLDMV P-TW-WF isolate and the truncated CP coding region with the complete 3' untranslated region of PRSV YK isolate was transferred into papaya (Carica papaya cv. Thailand) via Agrobacterium-mediated transformation to generate transgenic plants with resistance to PLDMV and PRSV. Seventy-five transgenic lines were obtained and challenged with PRSV YK or PLDMV P-TW-WF by mechanical inoculation under greenhouse conditions. Thirty-eight transgenic lines showing no symptoms 1 month after inoculation were regarded as highly resistant lines. Southern and Northern analyses revealed that four weakly resistant lines have one or two inserts of the construct and accumulate detectable amounts of transgene transcript, whereas nine resistant lines contain two or three inserts without significant accumulation of transgene transcript. The results indicated that double virus resistance in transgenic lines resulted from double or more copies of the insert through the mechanism of RNA-mediated posttranscriptional gene silencing. Furthermore, three of nine resistant lines showed high levels of resistance to heterologous PRSV strains originating from Hawaii, Thailand, and Mexico. Our transgenic lines have great potential for controlling a number of PRSV strains and PLDMV in Taiwan and elsewhere.

  2. Analysis of protein-coding genetic variation in 60,706 humans.

    PubMed

    Lek, Monkol; Karczewski, Konrad J; Minikel, Eric V; Samocha, Kaitlin E; Banks, Eric; Fennell, Timothy; O'Donnell-Luria, Anne H; Ware, James S; Hill, Andrew J; Cummings, Beryl B; Tukiainen, Taru; Birnbaum, Daniel P; Kosmicki, Jack A; Duncan, Laramie E; Estrada, Karol; Zhao, Fengmei; Zou, James; Pierce-Hoffman, Emma; Berghout, Joanne; Cooper, David N; Deflaux, Nicole; DePristo, Mark; Do, Ron; Flannick, Jason; Fromer, Menachem; Gauthier, Laura; Goldstein, Jackie; Gupta, Namrata; Howrigan, Daniel; Kiezun, Adam; Kurki, Mitja I; Moonshine, Ami Levy; Natarajan, Pradeep; Orozco, Lorena; Peloso, Gina M; Poplin, Ryan; Rivas, Manuel A; Ruano-Rubio, Valentin; Rose, Samuel A; Ruderfer, Douglas M; Shakir, Khalid; Stenson, Peter D; Stevens, Christine; Thomas, Brett P; Tiao, Grace; Tusie-Luna, Maria T; Weisburd, Ben; Won, Hong-Hee; Yu, Dongmei; Altshuler, David M; Ardissino, Diego; Boehnke, Michael; Danesh, John; Donnelly, Stacey; Elosua, Roberto; Florez, Jose C; Gabriel, Stacey B; Getz, Gad; Glatt, Stephen J; Hultman, Christina M; Kathiresan, Sekar; Laakso, Markku; McCarroll, Steven; McCarthy, Mark I; McGovern, Dermot; McPherson, Ruth; Neale, Benjamin M; Palotie, Aarno; Purcell, Shaun M; Saleheen, Danish; Scharf, Jeremiah M; Sklar, Pamela; Sullivan, Patrick F; Tuomilehto, Jaakko; Tsuang, Ming T; Watkins, Hugh C; Wilson, James G; Daly, Mark J; MacArthur, Daniel G

    2016-08-18

    Large-scale reference data sets of human genetic variation are critical for the medical and functional interpretation of DNA sequence changes. Here we describe the aggregation and analysis of high-quality exome (protein-coding region) DNA sequence data for 60,706 individuals of diverse ancestries generated as part of the Exome Aggregation Consortium (ExAC). This catalogue of human genetic diversity contains an average of one variant every eight bases of the exome, and provides direct evidence for the presence of widespread mutational recurrence. We have used this catalogue to calculate objective metrics of pathogenicity for sequence variants, and to identify genes subject to strong selection against various classes of mutation; identifying 3,230 genes with near-complete depletion of predicted protein-truncating variants, with 72% of these genes having no currently established human disease phenotype. Finally, we demonstrate that these data can be used for the efficient filtering of candidate disease-causing variants, and for the discovery of human 'knockout' variants in protein-coding genes.

  3. Propagation of Computational Uncertainty Using the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2007-01-01

    This paper describes the use of formally designed experiments to aid in the error analysis of a computational experiment. A method is described by which the underlying code is approximated with relatively low-order polynomial graduating functions represented by truncated Taylor series approximations to the true underlying response function. A resource-minimal approach is outlined by which such graduating functions can be estimated from a minimum number of case runs of the underlying computational code. Certain practical considerations are discussed, including ways and means of coping with high-order response functions. The distributional properties of prediction residuals are presented and discussed. A practical method is presented for quantifying that component of the prediction uncertainty of a computational code that can be attributed to imperfect knowledge of independent variable levels. This method is illustrated with a recent assessment of uncertainty in computational estimates of Space Shuttle thermal and structural reentry loads attributable to ice and foam debris impact on ascent.
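
    The essence of the method is to replace the expensive code with a cheap polynomial graduating function fitted from a small designed set of case runs, and then to propagate the input uncertainty through that surrogate. A minimal sketch with a hypothetical response function standing in for the real code:

        import numpy as np

        rng = np.random.default_rng(0)

        def expensive_code(x):
            """Hypothetical stand-in for the underlying computational code."""
            return np.exp(0.3 * x) * np.sin(x)

        # Fit a low-order graduating function from a handful of designed runs.
        x_design = np.linspace(0.0, 2.0, 7)
        surrogate = np.polynomial.Polynomial.fit(x_design, expensive_code(x_design), deg=3)

        # Imperfect knowledge of the independent variable: x is known only to
        # within a Gaussian uncertainty.  Propagate it through the surrogate.
        x_nominal, x_sigma = 1.2, 0.05
        samples = surrogate(rng.normal(x_nominal, x_sigma, 100_000))
        print("response mean:", samples.mean())
        print("response std due to input uncertainty:", samples.std())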

  4. Feedback Effects in Computer-Based Skill Learning

    DTIC Science & Technology

    1989-09-12

    [Only fragments of the abstract survive OCR: ... rather than tangible feedback (Barringer & Gholson, 1979), and when they receive punishment (either alone or with reward) rather than reward alone ... "graphed" response latencies across the four conditions (r = .58), indicating that subjects were sensitive to block-by-block trends in their responses.]

  5. [Construction and transfection of eucaryotic expression recombinant vector containing truncated region of UL83 gene of human cytomegalovirus and its sheltered effect as DNA vaccine].

    PubMed

    Gao, Rong-Bao; Li, Yan-Qiu; Wang, Ming-Li

    2006-06-01

    To construct a eucaryotic expression recombinant vector containing an in vivo truncated region of the UL83 gene of human cytomegalovirus (HCMV), to achieve its stable expression in Hep-2 cells, and to study the sheltered (protective) effect of the recombinant vector as a DNA vaccine. A truncated UL83 gene fragment encoding truncated HCMV pp65 was obtained by PCR from the genome of the human cytomegalovirus AD169 stock. By gene recombination, the truncated UL83 gene fragment was cloned into the eucaryotic expression vector pEGFP-C1, which carries a reporter gene coding for GFP, to construct the recombinant vector pEGFP-C1-UL83. The recombinant vector was verified by PCR, restriction digestion, and gene sequencing, which showed that it was constructed successfully. After pEGFP-C1-UL83 was transfected into Hep-2 cells by lipofectin mediation, expression of the GFP-truncated pp65 fusion protein was observed at different time points by fluorescence microscopy; the quantity of fusion protein expression was highest at the 36 h point. Hep-2 cells were then cultured selectively in RPMI-1640 containing G418 (200 microg/mL) to obtain a new cell stock stably expressing the truncated UL83 gene fragment. RT-PCR and Western blot results showed that the truncated fragment of the UL83 gene could be expressed stably in Hep-2 cells, establishing a new cell stock expressing Tpp65. This cell stock could be useful in several HCMV research fields; for example, it could be a tool for the study of pp65 and HCMV infection, and it could provide a platform for research into the therapy of HCMV infection. The immune sheltered effect of pEGFP-C1-UL83 as a DNA vaccine was studied in vivo in an HCMV congenital infection mouse model. The mouse model was immunized either with pEGFP-C1-UL83 alone or jointly with pEGFP-C1-UL83 and its expression product. When the mice became pregnant and gave birth, differential antibody against HCMV pp65 was tested by indirect ELISA in the mother mice, infectious virus was sought with the method of virus separation, and pp65 antigen was checked by indirect immunofluorescence staining in the fetal mice. Differential antibody against HCMV pp65 was produced in the mouse model, with titers from 1:2.51 to 1:50.79. Results of virus separation and pp65 checks of fetal mouse brain tissue were negative. It can therefore be concluded that pEGFP-C1-UL83 as a DNA vaccine has an in vivo sheltered effect that can prevent vertical transmission of HCMV from mother mouse to fetus.

  6. Wartime Tracking of Class I Surface Shipments from Production or Procurement to Destination

    DTIC Science & Technology

    1992-04-01


  7. Evolution of the Gorda Escarpment, San Andreas fault and Mendocino triple junction from multichannel seismic data collected across the northern Vizcaino block, offshore northern California

    USGS Publications Warehouse

    Godfrey, N.J.; Meltzer, A.S.; Klemperer, S.L.; Trehu, A.M.; Leitner, B.; Clarke, S.H.; Ondrus, A.

    1998-01-01

    The Gorda Escarpment is a north-facing scarp immediately south of the Mendocino transform fault (the Gorda/Juan de Fuca-Pacific plate boundary) between 126°W and the Mendocino triple junction. It elevates the seafloor at the northern edge of the Vizcaino block, part of the Pacific plate, ~1.5 km above the seafloor of the Gorda/Juan de Fuca plate to the north. Stratigraphy interpreted from multichannel seismic data across and close to the Gorda Escarpment suggests that the escarpment is a relatively recent pop-up feature caused by north-south compression across the plate boundary. Close to 126°W, the Vizcaino block acoustic basement shallows and is overlain by sediments that thin north toward the Gorda Escarpment. These sediments are tilted south and truncated at the seafloor. By contrast, in a localized region at the eastern end of the Gorda Escarpment, close to the Mendocino triple junction, the top of acoustic basement dips north and is overlain by a 2-km-thick wedge of pre-11 Ma sedimentary rocks that thickens north, toward the Gorda Escarpment. This wedge of sediments is restricted to the northeast corner of the Vizcaino block. Unless the wedge of sediments was a preexisting feature on the Vizcaino block before it was transferred from the North American to the Pacific plate, the strong spatial correlation between the sedimentary wedge and the triple junction suggests the entire Vizcaino block, with the San Andreas at its eastern boundary, has been part of the Pacific plate since significantly before 11 Ma.

  8. Binary weight distributions of some Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Arnold, S.

    1992-01-01

    The binary weight distributions of the (7,5) and (15,9) Reed-Solomon (RS) codes and their duals are computed using the MacWilliams identities. Several mappings of symbols to bits are considered and those offering the largest binary minimum distance are found. These results are then used to compute bounds on the soft-decoding performance of these codes in the presence of additive Gaussian noise. These bounds are useful for finding large binary block codes with good performance and for verifying the performance obtained by specific soft-coding algorithms presently under development.
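
    The MacWilliams identity used here converts a code's weight distribution into that of its dual through Krawtchouk polynomials: B_j = (1/|C|) sum_i A_i K_j(i). A sketch, checked against the (7,4) Hamming code rather than the RS codes of the article:

        from math import comb

        def macwilliams_dual(A, n):
            """Weight distribution of the dual code via the MacWilliams identity."""
            size = sum(A)                                  # |C|
            def K(j, i):                                   # Krawtchouk polynomial
                return sum((-1) ** k * comb(i, k) * comb(n - i, j - k)
                           for k in range(j + 1))
            return [sum(A[i] * K(j, i) for i in range(n + 1)) // size
                    for j in range(n + 1)]

        # (7,4) Hamming code; its dual is the (7,3) simplex code, all of whose
        # nonzero words have weight 4.
        A_hamming = [1, 0, 0, 7, 7, 0, 0, 1]
        print(macwilliams_dual(A_hamming, 7))   # -> [1, 0, 0, 0, 7, 0, 0, 0]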

  9. Implementation issues in source coding

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Yun-Chung; Hadenfeldt, A. C.

    1989-01-01

    An edge-preserving image coding scheme that can be operated in both a lossy and a lossless manner was developed. The technique is an extension of the lossless encoding algorithm developed for the Mars Observer spectral data; it can also be viewed as a modification of the DPCM algorithm. A packet video simulator was also developed from an existing modified packet network simulator. The coding scheme for this system is a modification of the mixture block coding (MBC) scheme described in the last report. Coding algorithms for packet video were also investigated.
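
    A DPCM modification that runs in both lossy and lossless modes can be illustrated with previous-sample prediction and a uniform residue quantizer whose step controls the error bound (delta = 0 gives lossless operation). This sketch is illustrative, not the Mars Observer algorithm itself:

        import numpy as np

        def dpcm_encode(x, delta=0):
            """DPCM with previous-sample prediction; errors are bounded by delta."""
            recon, residues = 0, []
            for sample in x:
                e = int(sample) - recon
                q = int(np.round(e / (2 * delta + 1)))   # quantized residue
                residues.append(q)
                recon += q * (2 * delta + 1)             # track decoder state
            return residues

        def dpcm_decode(residues, delta=0):
            recon, out = 0, []
            for q in residues:
                recon += q * (2 * delta + 1)
                out.append(recon)
            return out

        x = [100, 102, 101, 105, 110, 108, 107]
        assert dpcm_decode(dpcm_encode(x)) == x                   # lossless mode
        near = dpcm_decode(dpcm_encode(x, delta=1), delta=1)
        assert all(abs(a - b) <= 1 for a, b in zip(x, near))      # near-lossless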

  10. Personnel-General: Army Substance Abuse Program Civilian Services

    DTIC Science & Technology

    2001-10-15

    [OCR fragments of the form's completion instructions and code tables survive: Block I records the date of report; drug identification codes include A = amphetamines, B = barbiturates, C = cocaine, H = hallucinogens (LSD), M = methaqualone, sedative, hypnotic, or anxiolytic, O = opiates, P = phencyclidine (PCP), and T = cannabis; Table 5-6 diagnosis codes include 30390 = alcohol dependence, 30400 = opioid dependence, and 30410 = sedative, hypnotic, or anxiolytic dependence.]

  11. Inclusion Complexes of Diisopropyl Fluorophosphate with Cyclodextrins.

    DTIC Science & Technology

    1987-09-01

    [Report documentation page only; the supplementary notation reads: for submission to the Journal of Catalysis.]

  12. A Block Preconditioned Conjugate Gradient-type Iterative Solver for Linear Systems in Thermal Reservoir Simulation

    NASA Astrophysics Data System (ADS)

    Betté, Srinivas; Diaz, Julio C.; Jines, William R.; Steihaug, Trond

    1986-11-01

    A preconditioned residual-norm-reducing iterative solver is described. Based on a truncated form of the generalized-conjugate-gradient method for nonsymmetric systems of linear equations, the iterative scheme is very effective for linear systems generated in reservoir simulation of thermal oil recovery processes. As a consequence of employing an adaptive implicit finite-difference scheme to solve the model equations, the number of variables per cell-block varies dynamically over the grid. The data structure allows for 5- and 9-point operators in the areal model, 5-point in the cross-sectional model, and 7- and 11-point operators in the three-dimensional model. Block-diagonal-scaling of the linear system, done prior to iteration, is found to have a significant effect on the rate of convergence. Block-incomplete-LU-decomposition (BILU) and block-symmetric-Gauss-Seidel (BSGS) methods, which result in no fill-in, are used as preconditioning procedures. A full factorization is done on the well terms, and the cells are ordered in a manner which minimizes the fill-in in the well-column due to this factorization. The convergence criterion for the linear (inner) iteration is linked to that of the nonlinear (Newton) iteration, thereby enhancing the efficiency of the computation. The algorithm, with both BILU and BSGS preconditioners, is evaluated in the context of a variety of thermal simulation problems. The solver is robust and can be used with little or no user intervention.
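
    The overall iteration is of preconditioned conjugate-gradient type. The sketch below shows standard PCG with a 2x2 block-Jacobi preconditioner on a symmetric positive-definite toy system; the solver described above additionally handles nonsymmetric matrices through a truncated generalized-CG recurrence and uses block-ILU or block-SGS preconditioning:

        import numpy as np

        def preconditioned_cg(A, b, M_inv, tol=1e-10, max_iter=200):
            """Standard preconditioned conjugate-gradient iteration."""
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv(r)
            p = z.copy()
            for _ in range(max_iter):
                Ap = A @ p
                alpha = (r @ z) / (p @ Ap)
                x += alpha * p
                r_new = r - alpha * Ap
                if np.linalg.norm(r_new) < tol:
                    break
                z_new = M_inv(r_new)
                beta = (r_new @ z_new) / (r @ z)
                p = z_new + beta * p
                r, z = r_new, z_new
            return x

        n = 64                                    # toy SPD system: 1-D Laplacian
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        blocks = [np.linalg.inv(A[i:i + 2, i:i + 2]) for i in range(0, n, 2)]
        def M_inv(r):                             # 2x2 block-Jacobi solve
            return np.concatenate([B @ r[i:i + 2]
                                   for B, i in zip(blocks, range(0, n, 2))])
        x = preconditioned_cg(A, np.ones(n), M_inv)
        print("final residual:", np.linalg.norm(np.ones(n) - A @ x))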

  13. EMG and mechanical changes during sprint starts at different front block obliquities.

    PubMed

    Guissard, N; Duchateau, J; Hainaut, K

    1992-11-01

    The effect of decreased front block obliquity on start velocity was studied during sprint starts. The electromyographic (EMG) activity of the medial gastrocnemius (MG), the soleus (Sol), and the vastus medialis (VM) was recorded and analyzed at angles of 70, 50, and 30 degrees between the foot plate surface and the horizontal. Integrated EMGs (IEMG) were compared with muscle length changes in the MG and Sol in relation to foot and knee movements. The results indicate that decreasing front block obliquity significantly (P < 0.05) increases the start velocity without any change to the total duration of the pushing phase and the overall EMG activity. This improvement in sprint start performance is associated with the enhanced contribution of the MG during the eccentric and concentric phases of calf muscle contraction. In the "set position" the initial length of the MG and Sol is increased at 50 and 30 degrees as compared with 70 degrees. The subsequent stretch-shortening cycle is improved and contributes more effectively to the speed of the muscle shortening. Moreover, lengthening these muscles during the eccentric phase stretches the muscle spindles, and the reflex activities that contribute to the observed increase in the MG IEMG are present when the slope of the block is reduced. The results indicate that decreasing front block obliquity induces neural and mechanical modifications that contribute to increasing the sprint start velocity without any increase in the duration of the pushing phase. (ABSTRACT TRUNCATED AT 250 WORDS)

  14. Numerical Analysis of Convection/Transpiration Cooling

    NASA Technical Reports Server (NTRS)

    Glass, David E.; Dilley, Arthur D.; Kelly, H. Neale

    1999-01-01

    An innovative concept utilizing the natural porosity of refractory-composite materials and hydrogen coolant to provide CONvective and TRANspiration (CONTRAN) cooling and oxidation protection has been numerically studied for surfaces exposed to a high-heat-flux, high-temperature environment such as hypersonic vehicle engine combustor walls. A boundary-layer code and a porous-media finite-difference code were utilized to analyze the effect of convection and transpiration cooling on surface heat flux and temperature. The boundary-layer code determined that the transpiration flow is able to provide blocking of the surface heat flux only if it is above a minimum level, due to heat addition from combustion of the hydrogen transpirant. The porous-media analysis indicated that cooling of the surface is attained with coolant flow rates in the same range as those required for blocking, indicating that a coupled analysis would be beneficial.

  15. Recent update of the RPLUS2D/3D codes

    NASA Technical Reports Server (NTRS)

    Tsai, Y.-L. Peter

    1991-01-01

    The development of the RPLUS2D/3D codes is summarized. These codes utilize LU algorithms to solve chemical non-equilibrium flows in a body-fitted coordinate system. The motivation behind the development of these codes is the need to numerically predict chemical non-equilibrium flows for the National AeroSpace Plane Program. Recent improvements include a vectorization method, blocking algorithms for geometric flexibility, out-of-core storage for large-size problems, and an LU-SW/UP combination for CPU-time efficiency and solution quality.

  16. Problem-Solving Under Time Constraints: Alternatives for the Commander’s Estimate

    DTIC Science & Technology

    1990-03-26


  17. SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations

    NASA Astrophysics Data System (ADS)

    Baes, M.; Camps, P.

    2015-09-01

    The Monte Carlo method is the most popular technique for performing radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. By contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits: for example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks into more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
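
    The decorator idea translates directly into code: a decorator is itself a geometry that wraps another geometry and alters where random positions are drawn, leaving the wrapped component untouched. A Python sketch of the pattern (SKIRT itself is C++; the Plummer building block and the clumpiness parameters here are illustrative):

        import numpy as np

        rng = np.random.default_rng(0)

        class Geometry:
            """Building-block interface: anything that can draw random positions."""
            def random_position(self):
                raise NotImplementedError

        class Plummer(Geometry):
            """Analytical toy model with an invertible cumulative mass profile."""
            def __init__(self, a):
                self.a = a
            def random_position(self):
                u = rng.random()
                r = self.a * (u ** (-2 / 3) - 1) ** -0.5   # invert M(<r)
                v = rng.standard_normal(3)                 # random direction
                return r * v / np.linalg.norm(v)

        class ClumpyDecorator(Geometry):
            """Relocate a fraction of the mass of any geometry into clumps."""
            def __init__(self, base, fraction, n_clumps, clump_radius):
                self.base, self.fraction = base, fraction
                self.clumps = [base.random_position() for _ in range(n_clumps)]
                self.clump_radius = clump_radius
            def random_position(self):
                if rng.random() < self.fraction:           # draw from a clump
                    centre = self.clumps[rng.integers(len(self.clumps))]
                    return centre + self.clump_radius * rng.standard_normal(3)
                return self.base.random_position()         # draw from the base

        model = ClumpyDecorator(Plummer(a=1.0), fraction=0.3,
                                n_clumps=20, clump_radius=0.05)
        positions = np.array([model.random_position() for _ in range(10000)])
        print(positions.shape)

    Because decorators implement the same interface they wrap, they chain freely, which is how very complex models are assembled from simple building blocks in the design described above.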

  18. Xenomicrobiology: a roadmap for genetic code engineering.

    PubMed

    Acevedo-Rocha, Carlos G; Budisa, Nediljko

    2016-09-01

    Biology is an analytical and informational science that is becoming increasingly dependent on chemical synthesis. One example is the high-throughput and low-cost synthesis of DNA, which is a foundation for the research field of synthetic biology (SB). The aim of SB is to provide biotechnological solutions to health, energy and environmental issues as well as unsustainable manufacturing processes in the frame of naturally existing chemical building blocks. Xenobiology (XB) goes a step further by implementing non-natural building blocks in living cells. In this context, genetic code engineering enables the re-design of genes/genomes and proteins/proteomes with non-canonical nucleic acids (XNAs) and amino acids (ncAAs). Besides studying information flow and evolutionary innovation in living systems, XB allows the development of new-to-nature therapeutic proteins/peptides, new biocatalysts for potential applications in synthetic organic chemistry, and biocontainment strategies for enhanced biosafety. In this perspective, we provide a brief history and evolution of the genetic code in the context of XB. We then discuss the latest efforts and challenges ahead for engineering the genetic code, with a focus on substitutions and additions of ncAAs as well as standard amino acid reductions. Finally, we present a roadmap for the directed evolution of artificial microbes for emancipating rare sense codons that could be used to introduce novel building blocks. The development of such xenomicroorganisms endowed with a 'genetic firewall' will also allow us to study and understand the relation between code evolution and horizontal gene transfer. © 2016 The Authors. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.

  19. Structured Set Intra Prediction With Discriminative Learning in a Max-Margin Markov Network for High Efficiency Video Coding

    PubMed Central

    Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen

    2014-01-01

    This paper proposes a novel model for intra coding in High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate distortion. It utilizes the spatial statistical correlation for the optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks to the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection in rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to HEVC intra coding. PMID:25505829
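
    A toy rendering of the max-margin principle at work here, with entirely hypothetical features and losses (this is not the paper's model or feature map): training drives the score of the true joint block prediction above the score of every competing prediction by a margin equal to its task loss.

    ```python
    import numpy as np

    def score(w, features):
        return w @ features

    def structured_hinge(w, feats_true, candidates):
        """candidates: (features, task_loss) pairs for competing joint
        predictions over a set of blocks (enumerable in this toy only)."""
        s_true = score(w, feats_true)
        # Loss-augmented inference: find the most violating candidate.
        worst = max(score(w, f) + loss for f, loss in candidates)
        return max(0.0, worst - s_true)

    w = np.array([0.5, -0.2, 1.0])
    feats_true = np.array([1.0, 0.0, 1.0])
    candidates = [(np.array([0.0, 1.0, 1.0]), 1.0),
                  (np.array([1.0, 1.0, 0.0]), 2.0)]
    print(structured_hinge(w, feats_true, candidates))  # 0 iff margin satisfied
    ```

    In the paper the candidate space is exponential, so this maximisation is performed by inference in the Markov network (via expectation propagation) rather than by enumeration as in this toy.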

  20. A Truncated Spherical Shell Model for Nuclear Collective Excitations: Applications to the Odd Mass Systems, Neutron-Proton Systems and Other Topics.

    NASA Astrophysics Data System (ADS)

    Wu, Hua

    One of the most elusive quantum systems in nature is the nucleus, a strongly interacting many-body system. In the hadronic (i.e., neutron and proton) phase, the primary concern of this thesis, the nucleus' single-particle excitations are intertwined with its various collective excitations. Although the underpinning of the nucleus is the spherical shell model, it is rendered powerless without a severe, but "intelligent", truncation of the infinite Hilbert space. The recently proposed Fermion Dynamical Symmetry Model (FDSM) is precisely such a truncation scheme, and with it a symmetry-dictated truncation scheme is introduced in nuclear physics for the first time. In this thesis, extensions and explorations of the FDSM are made to specifically study the odd-mass systems (where the most intricate mixing of the single-particle and collective excitations is observed) and the neutron-proton systems. In particular, we find that the previously successful phenomenological particle-rotor model of the Copenhagen school can now be well understood microscopically via the FDSM. Furthermore, the well-known Coriolis attenuation and variable moment of inertia effects are naturally understood from the model as well. A computer code FDU0 was written by one of us to study, for the first time, the numerical implications of the FDSM. Several collective modes were found even when the system does not admit a group chain description. In addition, the code is most suitable for studying the connection between level statistical behavior (a la the Gaussian Orthogonal Ensemble) and dynamical symmetry. It is found that there exist critical regions of the interaction parameter space where the system behaves "chaotically". This information is certainly crucial to understanding quantum "chaotic" behavior. Also, some of the primitive assumptions of the FDSM are investigated, and we conclude that the assumption of quasi-spin behavior for the so-called abnormal-parity particles is inadequate and needs to be extended. Suggestions for extensions are made. Finally, the newly developed physical quantity, the collective spin, is explored in terms of dynamical symmetries in the FDSM.

  1. Astrocyte truncated-TrkB mediates BDNF antiapoptotic effect leading to neuroprotection.

    PubMed

    Saba, Julieta; Turati, Juan; Ramírez, Delia; Carniglia, Lila; Durand, Daniela; Lasaga, Mercedes; Caruso, Carla

    2018-05-31

    Astrocytes are glial cells that help maintain brain homeostasis and become reactive in neurodegenerative processes releasing both harmful and beneficial factors. We have demonstrated that brain-derived neurotrophic factor (BDNF) expression is induced by melanocortins in astrocytes but BDNF actions in astrocytes are largely unknown. We hypothesize that BDNF may prevent astrocyte death resulting in neuroprotection. We found that BDNF increased astrocyte viability, preventing apoptosis induced by serum deprivation by decreasing active caspase-3 and p53 expression. The antiapoptotic action of BDNF was abolished by ANA-12 (a specific TrkB antagonist) and by K252a (a general Trk antagonist). Astrocytes only express the BDNF receptor TrkB truncated isoform 1, TrkB-T1. BDNF induced ERK, Akt and Src (a non-receptor tyrosine kinase) activation in astrocytes. Blocking ERK and Akt pathways abolished BDNF protection in serum deprivation-induced cell death. Moreover, BDNF protected astrocytes from death by 3-nitropropionic acid (3-NP), an effect also blocked by ANA-12, K252a, and inhibitors of ERK, calcium and Src. BDNF reduced reactive oxygen species (ROS) levels induced in astrocytes by 3-NP and increased xCT expression and glutathione levels. Astrocyte conditioned media (ACM) from untreated astrocytes partially protected PC12 neurons whereas ACM from BDNF-treated astrocytes completely protected PC12 neurons from 3-NP-induced apoptosis. Both ACM from control and BDNF-treated astrocytes markedly reduced ROS levels induced by 3-NP in PC12 cells. Our results demonstrate that BDNF protects astrocytes from cell death through TrkB-T1 signaling, exerts an antioxidant action, and induces release of neuroprotective factors from astrocytes. This article is protected by copyright. All rights reserved.

  2. Error-correction coding for digital communications

    NASA Astrophysics Data System (ADS)

    Clark, G. C., Jr.; Cain, J. B.

    This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.

  3. Legendre-tau approximations for functional differential equations

    NASA Technical Reports Server (NTRS)

    Ito, K.; Teglas, R.

    1986-01-01

    The numerical approximation of solutions to linear retarded functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison between the latter and cubic spline approximation is made.
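
    Both this record and the next rest on representing the solution as a truncated Legendre series. A small illustration, assuming nothing beyond NumPy, of why such truncations are attractive: for a smooth function the maximum error decays extremely fast as the truncation order N grows (this shows only the approximation property, not the tau method itself).

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    f = lambda x: np.exp(x) * np.sin(3 * x)   # a smooth test function
    x = np.linspace(-1.0, 1.0, 2001)

    for N in (4, 8, 16):
        coeffs = legendre.legfit(x, f(x), N)  # least-squares Legendre fit
        err = np.max(np.abs(legendre.legval(x, coeffs) - f(x)))
        print(f"N = {N:2d}   max error = {err:.2e}")
    ```

    In the tau method the coefficients are not fitted to known data but evolved in time through the system of ordinary differential equations mentioned in the abstract.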

  4. Legendre-Tau approximations for functional differential equations

    NASA Technical Reports Server (NTRS)

    Ito, K.; Teglas, R.

    1983-01-01

    The numerical approximation of solutions to linear functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison between the latter and cubic spline approximations is made.

  5. Preliminary estimates of nucleon fluxes in a water target exposed to solar-flare protons: BRYNTRN versus Monte Carlo code

    NASA Technical Reports Server (NTRS)

    Shinn, Judy L.; Wilson, John W.; Lone, M. A.; Wong, P. Y.; Costen, Robert C.

    1994-01-01

    A baryon transport code (BRYNTRN) has previously been verified using available Monte Carlo results for a solar-flare spectrum as the reference. Excellent results were obtained, but the comparisons were limited to the available data on dose and dose equivalent for moderate penetration studies that involve minor contributions from secondary neutrons. To further verify the code, the secondary energy spectra of protons and neutrons are calculated using BRYNTRN and LAHET (Los Alamos High-Energy Transport code, which is a Monte Carlo code). These calculations are compared for three locations within a water slab exposed to the February 1956 solar-proton spectrum. Reasonable agreement was obtained when various considerations related to the calculational techniques and their limitations were taken into account. Although the Monte Carlo results are preliminary, it appears that the neutron albedo, which is not currently treated in BRYNTRN, might be a cause for the large discrepancy seen at small penetration depths. It also appears that the nonelastic neutron production cross sections in BRYNTRN may underestimate the number of neutrons produced in proton collisions with energies below 200 MeV. The notion that the poor energy resolution in BRYNTRN may cause a large truncation error in neutron elastic scattering requires further study.

  6. VENTURE: a code block for solving multigroup neutronics problems applying the finite-difference diffusion-theory approximation to neutron transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.

    1975-10-01

    The computer code block VENTURE, designed to solve multigroup neutronics problems with application of the finite-difference diffusion-theory approximation to neutron transport (or, alternatively, the simple P1 approximation) in up to three-dimensional geometry, is described. A variety of types of problems may be solved: the usual eigenvalue problem, a direct criticality search on the buckling, on a reciprocal velocity absorber (prompt mode), or on nuclide concentrations, or an indirect criticality search on nuclide concentrations or on dimensions. First-order perturbation analysis capability is available at the macroscopic cross section level. (auth)
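
    To make the eigenvalue problem concrete, here is a heavily reduced sketch, assuming one energy group, slab geometry, and made-up cross sections (VENTURE itself is multigroup and up to 3D): finite-difference diffusion plus power iteration on the fission source yields k-effective and the flux shape.

    ```python
    import numpy as np

    n, L = 100, 50.0                  # interior mesh points, slab width (cm)
    h = L / (n + 1)
    D, Sa, nuSf = 1.2, 0.03, 0.035    # diffusion coeff., absorption, nu * fission

    # Loss operator A = -D d^2/dx^2 + Sa with zero-flux boundaries.
    A = np.diag(np.full(n, 2 * D / h**2 + Sa))
    A += np.diag(np.full(n - 1, -D / h**2), 1)
    A += np.diag(np.full(n - 1, -D / h**2), -1)

    phi, k = np.ones(n), 1.0
    for _ in range(200):              # power iteration: A phi = (1/k) nuSf phi
        phi_new = np.linalg.solve(A, nuSf * phi / k)
        k *= phi_new.sum() / phi.sum()     # update k from the source ratio
        phi = phi_new / np.linalg.norm(phi_new)

    print(f"k-effective = {k:.5f}")   # near the analytic one-group value
    ```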

  7. Initial development of 5D COGENT

    NASA Astrophysics Data System (ADS)

    Cohen, R. H.; Lee, W.; Dorf, M.; Dorr, M.

    2015-11-01

    COGENT is a continuum gyrokinetic edge code being developed by the Edge Simulation Laboratory (ESL) collaboration. Work to date has been primarily focused on a 4D (axisymmetric) version that models transport properties of edge plasmas. We have begun development of an initial 5D version to study edge turbulence, with initial focus on kinetic effects on blob dynamics and drift-wave instability in a shearless magnetic field. We are employing compiler directives and preprocessor macros to create a single source code that can be compiled in 4D or 5D, which helps to ensure consistency of physics representation between the two versions. A key aspect of COGENT is the employment of mapped multi-block grid capability to handle the complexity of divertor geometry. It is planned to eventually exploit this capability to handle magnetic shear through a series of successively skewed unsheared grid blocks. The initial version has an unsheared grid and will be used to explore the degree to which a radial domain must be block decomposed. We report on the status of code development and initial tests. Work performed for USDOE, at LLNL under contract DE-AC52-07NA27344.

  8. The FORTRAN static source code analyzer program (SAP) system description

    NASA Technical Reports Server (NTRS)

    Decker, W.; Taylor, W.; Merwarth, P.; Oneill, M.; Goorevich, C.; Waligora, S.

    1982-01-01

    A source code analyzer program (SAP) designed to assist personnel in conducting studies of FORTRAN programs is described. SAP scans FORTRAN source code and produces reports that present statistics and measures of the statements and structures that make up a module. The processing performed by SAP, and the routines, COMMON blocks, and files used by SAP, are described. The system generation procedure for SAP is also presented.

  9. Chromosomal Targeting by the Type III-A CRISPR-Cas System Can Reshape Genomes in Staphylococcus aureus

    PubMed Central

    Guan, Jing; Wang, Wanying

    2017-01-01

    ABSTRACT CRISPR-Cas (clustered regularly interspaced short palindromic repeat [CRISPR]-CRISPR-associated protein [Cas]) systems can provide protection against invading genetic elements by using CRISPR RNAs (crRNAs) as a guide to locate and degrade the target DNA. CRISPR-Cas systems have been classified into two classes and five types according to the content of cas genes. Previous studies have indicated that CRISPR-Cas systems can avoid viral infection and block plasmid transfer. Here we show that chromosomal targeting by the Staphylococcus aureus type III-A CRISPR-Cas system can drive large-scale genome deletion and alteration within integrated staphylococcal cassette chromosome mec (SCCmec). The targeting activity of the CRISPR-Cas system is associated with the complementarity between crRNAs and protospacers, and 10- to 13-nucleotide truncations of spacers partially block CRISPR attack and more than 13-nucleotide truncation can fully abolish targeting, suggesting that a minimal length is required to license cleavage. Avoiding base pairings in the upstream region of protospacers is also necessary for CRISPR targeting. Successive trinucleotide complementarity between the 5′ tag of crRNAs and protospacers can disrupt targeting. Our findings reveal that type III-A CRISPR-Cas systems can modulate bacterial genome stability and may serve as a high-efficiency tool for deleting resistance or virulence genes in bacteria. IMPORTANCE Staphylococcus aureus is a pathogen that can cause a wide range of infections in humans. Studies have suggested that CRISPR-Cas systems can drive the loss of integrated mobile genetic elements (MGEs) by chromosomal targeting. Here we demonstrate that CRISPR-mediated cleavage contributes to the partial deletion of integrated SCCmec in methicillin-resistant S. aureus (MRSA), which provides a strategy for the treatment of MRSA infections. The spacer within artificial CRISPR arrays should contain more than 25 nucleotides for immunity, and consecutive trinucleotide pairings between a selected target and the 5′ tag of crRNA can block targeting. These findings add to our understanding of the molecular mechanisms of the type III-A CRISPR-Cas system and provide a novel strategy for the exploitation of engineered CRISPR immunity against integrated MGEs in bacteria for clinical and industrial applications. PMID:29152580

  10. Chromosomal Targeting by the Type III-A CRISPR-Cas System Can Reshape Genomes in Staphylococcus aureus.

    PubMed

    Guan, Jing; Wang, Wanying; Sun, Baolin

    2017-01-01

    CRISPR-Cas (clustered regularly interspaced short palindromic repeat [CRISPR]-CRISPR-associated protein [Cas]) systems can provide protection against invading genetic elements by using CRISPR RNAs (crRNAs) as a guide to locate and degrade the target DNA. CRISPR-Cas systems have been classified into two classes and five types according to the content of cas genes. Previous studies have indicated that CRISPR-Cas systems can avoid viral infection and block plasmid transfer. Here we show that chromosomal targeting by the Staphylococcus aureus type III-A CRISPR-Cas system can drive large-scale genome deletion and alteration within integrated staphylococcal cassette chromosome mec (SCCmec). The targeting activity of the CRISPR-Cas system is associated with the complementarity between crRNAs and protospacers, and 10- to 13-nucleotide truncations of spacers partially block CRISPR attack and more than 13-nucleotide truncation can fully abolish targeting, suggesting that a minimal length is required to license cleavage. Avoiding base pairings in the upstream region of protospacers is also necessary for CRISPR targeting. Successive trinucleotide complementarity between the 5' tag of crRNAs and protospacers can disrupt targeting. Our findings reveal that type III-A CRISPR-Cas systems can modulate bacterial genome stability and may serve as a high-efficiency tool for deleting resistance or virulence genes in bacteria. IMPORTANCE Staphylococcus aureus is a pathogen that can cause a wide range of infections in humans. Studies have suggested that CRISPR-Cas systems can drive the loss of integrated mobile genetic elements (MGEs) by chromosomal targeting. Here we demonstrate that CRISPR-mediated cleavage contributes to the partial deletion of integrated SCCmec in methicillin-resistant S. aureus (MRSA), which provides a strategy for the treatment of MRSA infections. The spacer within artificial CRISPR arrays should contain more than 25 nucleotides for immunity, and consecutive trinucleotide pairings between a selected target and the 5' tag of crRNA can block targeting. These findings add to our understanding of the molecular mechanisms of the type III-A CRISPR-Cas system and provide a novel strategy for the exploitation of engineered CRISPR immunity against integrated MGEs in bacteria for clinical and industrial applications.

  11. Robot Task Commander with Extensible Programming Environment

    NASA Technical Reports Server (NTRS)

    Hart, Stephen W (Inventor); Wightman, Brian J (Inventor); Dinh, Duy Paul (Inventor); Yamokoski, John D. (Inventor); Gooding, Dustin R (Inventor)

    2014-01-01

    A system for developing distributed robot application-level software includes a robot having an associated control module which controls motion of the robot in response to a commanded task, and a robot task commander (RTC) in networked communication with the control module over a network transport layer (NTL). The RTC includes a script engine(s) and a GUI, with a processor and a centralized library of library blocks constructed from an interpretive computer programming code and having input and output connections. The GUI provides access to a Visual Programming Language (VPL) environment and a text editor. In executing a method, the VPL is opened, a task for the robot is built from the code library blocks, and data is assigned to input and output connections identifying input and output data for each block. A task sequence(s) is sent to the control module(s) over the NTL to command execution of the task.

  12. Introduction to Forward-Error-Correcting Coding

    NASA Technical Reports Server (NTRS)

    Freeman, Jon C.

    1996-01-01

    This reference publication introduces forward error correcting (FEC) and stresses definitions and basic calculations for use by engineers. The seven chapters include 41 example problems, worked in detail to illustrate points. A glossary of terms is included, as well as an appendix on the Q function. Block and convolutional codes are covered.
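
    As a minimal worked example of the block codes the publication introduces, consider the (7,4) Hamming code, which corrects any single bit error; the code below is an illustrative sketch, not taken from the publication.

    ```python
    import numpy as np

    G = np.array([[1, 0, 0, 0, 1, 1, 0],    # generator matrix (systematic)
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])
    H = np.array([[1, 1, 0, 1, 1, 0, 0],    # parity-check matrix, H G^T = 0 (mod 2)
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])

    def encode(msg):
        return msg @ G % 2

    def decode(word):
        syndrome = H @ word % 2
        if syndrome.any():                   # a nonzero syndrome equals the
            col = np.where((H.T == syndrome).all(axis=1))[0][0]
            word = word.copy()               # H column of the flipped bit
            word[col] ^= 1
        return word[:4]                      # systematic code: data bits first

    msg = np.array([1, 0, 1, 1])
    received = encode(msg)
    received[2] ^= 1                         # inject a single bit error
    assert (decode(received) == msg).all()
    ```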

  13. 14 CFR Sec. 1-4 - System of accounts coding.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... General Accounting Provisions Sec. 1-4 System of accounts coding. (a) A four digit control number is assigned for each balance sheet and profit and loss account. Each balance sheet account is numbered sequentially, within blocks, designating basic balance sheet classifications. The first two digits of the four...

  14. Reevaluation of RINT1 as a breast cancer predisposition gene.

    PubMed

    Li, Na; Thompson, Ella R; Rowley, Simone M; McInerny, Simone; Devereux, Lisa; Goode, David; Investigators, LifePool; Wong-Brown, Michelle W; Scott, Rodney J; Trainer, Alison H; Gorringe, Kylie L; James, Paul A; Campbell, Ian G

    2016-09-01

    Rad50 interactor 1 (RINT1) has recently been reported as an intermediate-penetrance (odds ratio 3.24) breast cancer susceptibility gene, as well as a risk factor for Lynch syndrome. The coding regions and exon-intron boundaries of RINT1 were sequenced in 2024 familial breast cancer cases previously tested negative for BRCA1, BRCA2, and PALB2 mutations and 1886 population-matched cancer-free controls using HaloPlex Targeted Enrichment Assays. Only one RINT1 protein-truncating variant was detected, in a control. No excess was observed in the total number of rare variants (truncating and missense) (28 [1.38%] vs. 27 [1.43%]; P > 0.999) or in the number of variants predicted to be pathogenic by various in silico tools (Condel, Polyphen2, SIFT, and CADD) in the cases compared to the controls. In addition, there was no difference in the incidence of classic Lynch syndrome cancers in RINT1 rare variant-carrying families compared to RINT1 wild-type families. This study had 90% power to detect an odds ratio of at least 2.06, and the results do not provide any support for RINT1 being a moderate-penetrance breast cancer susceptibility gene, although larger studies will be required to exclude more modest effects. This study emphasizes the need for caution before designating a cancer predisposition role for any gene based on very rare truncating variants and in silico-predicted missense variants.

  15. Design of a numerical model of lung by means of a special boundary condition in the truncated branches.

    PubMed

    Tena, Ana F; Fernández, Joaquín; Álvarez, Eduardo; Casan, Pere; Walters, D Keith

    2017-06-01

    The need for a better understanding of pulmonary diseases has led to increased interest in the development of realistic computational models of the human lung. To minimize computational cost, a reduced geometry model is used for the model lung airway geometry up to generation 16. Truncated airway branches require physiologically realistic boundary conditions to accurately represent the effect of the removed airway sections. A user-defined function has been developed which applies velocities mapped from similar locations in fully resolved airway sections. The methodology can be applied in any general-purpose computational fluid dynamics code, with the only limitation that the lung model must be symmetrical in each truncated branch. Unsteady simulations have been performed to verify the operation of the model. The test case simulates spirometry, in which the lung must rapidly perform both inspiration and expiration. Once the simulation was completed, the pressure obtained in the lower level of the lung was used as a boundary condition, and the resulting output velocity (a numerical spirometry) was compared with the experimental spirometry for validation. This model can be applied over a wide range of patient-specific resolution levels. If the upper airway generations have been constructed from a computed tomography scan, it would be possible to quickly obtain a complete reconstruction of the lung specific to an individual patient, which would allow individualized therapies. Copyright © 2016 John Wiley & Sons, Ltd.

  16. Sequencing the GRHL3 Coding Region Reveals Rare Truncating Mutations and a Common Susceptibility Variant for Nonsyndromic Cleft Palate

    PubMed Central

    Mangold, Elisabeth; Böhmer, Anne C.; Ishorst, Nina; Hoebel, Ann-Kathrin; Gültepe, Pinar; Schuenke, Hannah; Klamt, Johanna; Hofmann, Andrea; Gölz, Lina; Raff, Ruth; Tessmann, Peter; Nowak, Stefanie; Reutter, Heiko; Hemprich, Alexander; Kreusch, Thomas; Kramer, Franz-Josef; Braumann, Bert; Reich, Rudolf; Schmidt, Gül; Jäger, Andreas; Reiter, Rudolf; Brosch, Sibylle; Stavusis, Janis; Ishida, Miho; Seselgyte, Rimante; Moore, Gudrun E.; Nöthen, Markus M.; Borck, Guntram; Aldhorae, Khalid A.; Lace, Baiba; Stanier, Philip; Knapp, Michael; Ludwig, Kerstin U.

    2016-01-01

    Nonsyndromic cleft lip with/without cleft palate (nsCL/P) and nonsyndromic cleft palate only (nsCPO) are the most frequent subphenotypes of orofacial clefts. A common syndromic form of orofacial clefting is Van der Woude syndrome (VWS) where individuals have CL/P or CPO, often but not always associated with lower lip pits. Recently, ∼5% of VWS-affected individuals were identified with mutations in the grainy head-like 3 gene (GRHL3). To investigate GRHL3 in nonsyndromic clefting, we sequenced its coding region in 576 Europeans with nsCL/P and 96 with nsCPO. Most strikingly, nsCPO-affected individuals had a higher minor allele frequency for rs41268753 (0.099) than control subjects (0.049; p = 1.24 × 10^-2). This association was replicated in nsCPO/control cohorts from Latvia, Yemen, and the UK (p_combined = 2.63 × 10^-5; OR_allelic = 2.46 [95% CI 1.6-3.7]) and reached genome-wide significance in combination with imputed data from a GWAS in nsCPO triads (p = 2.73 × 10^-9). Notably, rs41268753 is not associated with nsCL/P (p = 0.45). rs41268753 encodes the highly conserved p.Thr454Met (c.1361C>T) (GERP = 5.3), which prediction programs denote as deleterious, has a CADD score of 29.6, and increases protein binding capacity in silico. Sequencing also revealed four novel truncating GRHL3 mutations including two that were de novo in four families, where all nine individuals harboring mutations had nsCPO. This is important for genetic counseling: given that VWS is rare compared to nsCPO, our data suggest that dominant GRHL3 mutations are more likely to cause nonsyndromic than syndromic CPO. Thus, with rare dominant mutations and a common risk variant in the coding region, we have identified an important contribution for GRHL3 in nsCPO. PMID:27018475

  17. Task 7: ADPAC User's Manual

    NASA Technical Reports Server (NTRS)

    Hall, E. J.; Topp, D. A.; Delaney, R. A.

    1996-01-01

    The overall objective of this study was to develop a 3-D numerical analysis for compressor casing treatment flowfields. The current version of the computer code resulting from this study is referred to as ADPAC (Advanced Ducted Propfan Analysis Codes-Version 7). This report is intended to serve as a computer program user's manual for the ADPAC code developed under Tasks 6 and 7 of the NASA Contract. The ADPAC program is based on a flexible multiple-block grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. An iterative implicit algorithm is available for rapid time-dependent flow calculations, and an advanced two-equation turbulence model is incorporated to predict complex turbulent flows. The consolidated code generated during this study is capable of executing in either a serial or parallel computing mode from a single source code. Numerous examples are given in the form of test cases to demonstrate the utility of this approach for predicting the aerodynamics of modern turbomachinery configurations.

  18. Optimal Codes for the Burst Erasure Channel

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2010-01-01

    Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure protection. As can be seen, the simple interleaved RS codes have substantially lower inefficiency over a wide range of transmission lengths.
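
    A short sketch of the interleaving argument, using the simplest MDS code mentioned above (the single parity check code) and made-up parameters: transmitting an interleaved block column by column spreads a burst of up to `depth` erasures across `depth` codewords, leaving each with at most one erasure that its parity can fill.

    ```python
    import numpy as np

    depth, k = 8, 10                  # interleaver depth, data symbols/codeword
    rng = np.random.default_rng(7)
    data = rng.integers(0, 256, size=(depth, k))
    parity = np.bitwise_xor.reduce(data, axis=1, keepdims=True)
    code = np.concatenate([data, parity], axis=1)      # SPC codewords as rows

    tx = code.T.ravel()                                # column-wise transmission
    erased = np.zeros(tx.size, dtype=bool)
    erased[23:23 + depth] = True                       # burst of `depth` erasures

    rx = np.where(erased, -1, tx).reshape(code.T.shape).T
    for row in range(depth):                           # <= 1 hole per codeword
        holes = np.where(rx[row] == -1)[0]
        if holes.size == 1:
            rest = np.delete(rx[row], holes[0])
            rx[row, holes[0]] = np.bitwise_xor.reduce(rest)  # parity fills it
    assert (rx == code).all()
    ```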

  19. Efficient Polar Coding of Quantum Information

    NASA Astrophysics Data System (ADS)

    Renes, Joseph M.; Dupuis, Frédéric; Renner, Renato

    2012-08-01

    Polar coding, introduced in 2008 by Arıkan, is the first (very) efficiently encodable and decodable coding scheme whose information transmission rate provably achieves the Shannon bound for classical discrete memoryless channels in the asymptotic limit of large block sizes. Here, we study the use of polar codes for the transmission of quantum information. Focusing on the case of qubit Pauli channels and qubit erasure channels, we use classical polar codes to construct a coding scheme that asymptotically achieves a net transmission rate equal to the coherent information using efficient encoding and decoding operations and code construction. Our codes generally require preshared entanglement between sender and receiver, but for channels with a sufficiently low noise level we demonstrate that the rate of preshared entanglement required is zero.
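
    The classical ingredient underlying these constructions is Arıkan's polar transform. A compact sketch of the encoder recursion (the quantum construction and the decoding side are well beyond a few lines): for block length N = 2^n it computes x = u F^{⊗n} over GF(2), where F = [[1, 0], [1, 1]].

    ```python
    import numpy as np

    def polar_transform(u):
        """Recursive Arıkan transform x = u F^{⊗n} over GF(2);
        len(u) must be a power of two."""
        u = np.asarray(u) % 2
        n = len(u)
        if n == 1:
            return u
        half = n // 2
        top = polar_transform((u[:half] + u[half:]) % 2)  # u1 XOR u2 branch
        bottom = polar_transform(u[half:])                # u2 branch
        return np.concatenate([top, bottom])

    print(polar_transform([1, 0, 1, 1, 0, 0, 1, 0]))
    ```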

  20. Intra prediction using face continuity in 360-degree video coding

    NASA Astrophysics Data System (ADS)

    Hanhart, Philippe; He, Yuwen; Ye, Yan

    2017-09-01

    This paper presents a new reference sample derivation method for intra prediction in 360-degree video coding. Unlike the conventional reference sample derivation method for 2D video coding, which uses the samples located directly above and on the left of the current block, the proposed method considers the spherical nature of 360-degree video when deriving reference samples located outside the current face to which the block belongs, and derives reference samples that are geometric neighbors on the sphere. The proposed reference sample derivation method was implemented in the Joint Exploration Model 3.0 (JEM-3.0) for the cubemap projection format. Simulation results for the all intra configuration show that, when compared with the conventional reference sample derivation method, the proposed method gives, on average, luma BD-rate reduction of 0.3% in terms of the weighted spherical PSNR (WS-PSNR) and spherical PSNR (SPSNR) metrics.

  1. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    Mac-Neice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.

  2. A Simple Secure Hash Function Scheme Using Multiple Chaotic Maps

    NASA Astrophysics Data System (ADS)

    Ahmad, Musheer; Khurana, Shruti; Singh, Sushmita; AlSharari, Hamed D.

    2017-06-01

    Chaotic maps possess high parameter sensitivity, random-like behavior and one-way computations, which favor the construction of cryptographic hash functions. In this paper, we present a novel hash function scheme which uses multiple chaotic maps to generate efficient variable-sized hash functions. The message is divided into four parts, and each part is processed by a different 1D chaotic map unit, yielding an intermediate hash code. The four codes are concatenated into two blocks, then each block is processed through a 2D chaotic map unit separately. The final hash value is generated by combining the two partial hash codes. Simulation analyses such as distribution of hashes, statistical properties of confusion and diffusion, message and key sensitivity, collision resistance and flexibility are performed. The results reveal that the proposed hash scheme is simple and efficient and holds comparable capabilities when compared with some recent chaos-based hash algorithms.
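
    A toy sketch mirroring the structure described (split the message into four parts, drive a 1D chaotic map per part, combine the partial codes). This is not the authors' algorithm: the logistic-map parameters and mixing steps are invented for illustration, and it carries none of the security analysis the paper performs.

    ```python
    def logistic_unit(part, r=3.99, x=0.5, rounds=8):
        """Absorb one message part into a logistic-map trajectory."""
        for byte in part:
            x = (x + byte / 255.0) / 2.0      # perturb the state with the byte
            for _ in range(rounds):           # iterate the chaotic map
                x = r * x * (1.0 - x)
        return x

    def toy_chaotic_hash(message: bytes, out_bytes=16):
        parts = [message[i::4] for i in range(4)]            # four parts
        states = [logistic_unit(p, x=0.1 + 0.2 * i)          # four map units
                  for i, p in enumerate(parts)]
        x = (sum(states) / 4.0) % 1.0                        # combine codes
        digest = bytearray()
        for _ in range(out_bytes):                           # squeeze output
            x = 3.99 * x * (1.0 - x)
            digest.append(int(x * 256) % 256)
        return bytes(digest)

    print(toy_chaotic_hash(b"block truncation coding").hex())
    print(toy_chaotic_hash(b"block truncation coding!").hex())  # large change
    ```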

  3. Pseudo-polyprotein translated from the full-length ORF1 of capillovirus is important for pathogenicity, but a truncated ORF1 protein without variable and CP regions is sufficient for replication.

    PubMed

    Hirata, Hisae; Yamaji, Yasuyuki; Komatsu, Ken; Kagiwada, Satoshi; Oshima, Kenro; Okano, Yukari; Takahashi, Shuichiro; Ugaki, Masashi; Namba, Shigetou

    2010-09-01

    The first open-reading frame (ORF) of the genus Capillovirus encodes an apparently chimeric polyprotein containing conserved regions for replicase (Rep) and coat protein (CP), while other viruses in the family Flexiviridae have separate ORFs encoding these proteins. To investigate the role of the full-length ORF1 polyprotein of capillovirus, we generated truncation mutants of ORF1 of apple stem grooving virus by inserting a termination codon into the variable region located between the putative Rep- and CP-coding regions. These mutants were capable of systemic infection, although their pathogenicity was attenuated. In vitro translation of ORF1 produced both the full-length polyprotein and the smaller Rep protein. The results of in vivo reporter assays suggested that the mechanism of this early termination is a ribosomal -1 frame-shift occurring downstream from the conserved Rep domains. The mechanism of capillovirus gene expression and the very close evolutionary relationship between the genera Capillovirus and Trichovirus are discussed. Copyright (c) 2010. Published by Elsevier B.V.

  4. BRCA1 and BRCA2 mutation analysis of early-onset and familial breast cancer cases in Mexico.

    PubMed

    Ruiz-Flores, Pablo; Sinilnikova, Olga M; Badzioch, Michael; Calderon-Garcidueñas, A L; Chopin, Sandrine; Fabrice, Odefrey; González-Guerrero, J F; Szabo, Csilla; Lenoir, Gilbert; Goldgar, David E; Barrera-Saldaña, Hugo A

    2002-12-01

    The entire coding regions of BRCA1 and BRCA2 were screened for mutations by heteroduplex analysis in 51 Mexican breast cancer patients. One BRCA1 and one BRCA2 truncating mutation were identified in the group of 32 early-onset breast cancer patients (≤35 years), i.e., in 6% of that group. Besides these two likely deleterious mutations, eight rare variants of unknown significance, mostly in the BRCA2 gene, were detected in six of 32 (19%) early-onset breast cancer cases and in three of 17 (18%) site-specific breast cancer families, one containing a male breast cancer case. No mutations or rare sequence variants were identified in two additional families, each including an early-onset breast cancer case and an ovarian cancer patient. The two truncating mutations (BRCA1 3857delT; BRCA2 2663-2664insA) and six of the rare variants have never been reported before and may be of country-specific origin. The majority of the alterations appeared to be distinct, with only one of them being observed in more than one family. Copyright 2002 Wiley-Liss, Inc.

  5. Truncated presequences of mitochondrial F1-ATPase beta subunit from Nicotiana plumbaginifolia transport CAT and GUS proteins into mitochondria of transgenic tobacco.

    PubMed

    Chaumont, F; Silva Filho, M de C; Thomas, D; Leterme, S; Boutry, M

    1994-02-01

    The mitochondrial F1-ATPase beta subunit (ATPase-beta) of Nicotiana plumbaginifolia is nucleus-encoded as a precursor containing an NH2-terminal extension. By sequencing the mature N. tabacum ATPase-beta, we determined the length of the presequence, viz. 54 residues. To define the essential regions of this presequence, we produced a series of 3' deletions in the sequence coding for the 90 NH2-terminal residues of ATPase-beta. The truncated sequences were fused with the chloramphenicol acetyl transferase (cat) and beta-glucuronidase (gus) genes and introduced into tobacco plants. From the observed distribution of CAT and GUS activity in the plant cells, we conclude that the first 23 amino-acid residues of ATPase-beta remain capable of specifically targeting reporter proteins into mitochondria. Immunodetection in transgenic plants and in vitro import experiments with various CAT fusion proteins show that the precursors are processed at the expected cleavage site but also at a cryptic site located in the linker region between the presequence and the first methionine of native CAT.

  6. Probabilistic low-rank factorization accelerates tensor network simulations of critical quantum many-body ground states.

    PubMed

    Kohn, Lucas; Tschirsich, Ferdinand; Keck, Maximilian; Plenio, Martin B; Tamascelli, Dario; Montangero, Simone

    2018-01-01

    We provide evidence that randomized low-rank factorization is a powerful tool for the determination of the ground-state properties of low-dimensional lattice Hamiltonians through tensor network techniques. In particular, we show that randomized matrix factorization outperforms truncated singular value decomposition based on state-of-the-art deterministic routines in time-evolving block decimation (TEBD)- and density matrix renormalization group (DMRG)-style simulations, even when the system under study gets close to a phase transition: We report linear speedups in the bond or local dimension of up to 24 times in quasi-two-dimensional cylindrical systems.
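
    The key subroutine is easy to sketch. Below is a generic randomized low-rank factorization in the Halko-Martinsson-Tropp style (an illustration of the technique, not the authors' implementation): multiply by a random test matrix, orthonormalise, and recover the truncated SVD from a small projected problem.

    ```python
    import numpy as np

    def randomized_svd(A, rank, oversample=10, n_iter=2, rng=None):
        rng = rng or np.random.default_rng()
        Omega = rng.normal(size=(A.shape[1], rank + oversample))
        Y = A @ Omega                       # sample the range of A
        for _ in range(n_iter):             # power iterations sharpen decay
            Y = A @ (A.T @ Y)
        Q, _ = np.linalg.qr(Y)              # orthonormal basis for the range
        B = Q.T @ A                         # small projected matrix
        Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
        return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

    rng = np.random.default_rng(0)
    A = rng.normal(size=(500, 80)) @ rng.normal(size=(80, 400))  # rank 80
    U, s, Vt = randomized_svd(A, rank=80, rng=rng)
    print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))  # tiny
    ```

    In a TEBD or DMRG sweep this routine would replace the deterministic truncated SVD applied to each bond matrix, which is where the reported speedups originate.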

  7. Probabilistic low-rank factorization accelerates tensor network simulations of critical quantum many-body ground states

    NASA Astrophysics Data System (ADS)

    Kohn, Lucas; Tschirsich, Ferdinand; Keck, Maximilian; Plenio, Martin B.; Tamascelli, Dario; Montangero, Simone

    2018-01-01

    We provide evidence that randomized low-rank factorization is a powerful tool for the determination of the ground-state properties of low-dimensional lattice Hamiltonians through tensor network techniques. In particular, we show that randomized matrix factorization outperforms truncated singular value decomposition based on state-of-the-art deterministic routines in time-evolving block decimation (TEBD)- and density matrix renormalization group (DMRG)-style simulations, even when the system under study gets close to a phase transition: We report linear speedups in the bond or local dimension of up to 24 times in quasi-two-dimensional cylindrical systems.

  8. Truncating mutations in the last exon of NOTCH3 cause lateral meningocele syndrome.

    PubMed

    Gripp, Karen W; Robbins, Katherine M; Sobreira, Nara L; Witmer, P Dane; Bird, Lynne M; Avela, Kristiina; Makitie, Outi; Alves, Daniela; Hogue, Jacob S; Zackai, Elaine H; Doheny, Kimberly F; Stabley, Deborah L; Sol-Church, Katia

    2015-02-01

    Lateral meningocele syndrome (LMS, OMIM %130720), also known as Lehman syndrome, is a very rare skeletal disorder with facial anomalies, hypotonia and meningocele-related neurologic dysfunction. The characteristic lateral meningoceles represent the severe end of the dural ectasia spectrum and are typically most severe in the lower spine. Facial features of LMS include hypertelorism and telecanthus, high arched eyebrows, ptosis, midfacial hypoplasia, micrognathia, high and narrow palate, low-set ears and a hypotonic appearance. Hyperextensibility, hernias and scoliosis reflect a connective tissue abnormality, and aortic dilation, a high-pitched nasal voice, wormian bones and osteolysis may be present. Lateral meningocele syndrome has phenotypic overlap with Hajdu-Cheney syndrome. We performed exome resequencing in five unrelated individuals with LMS and identified heterozygous truncating NOTCH3 mutations. In an additional unrelated individual Sanger sequencing revealed a deleterious variant in the same exon 33. In total, five novel de novo NOTCH3 mutations were identified in six unrelated patients. One had a 26 bp deletion (c.6461_6486del, p.G2154fsTer78), two carried the same single base pair insertion (c.6692_93insC, p.P2231fsTer11), and three individuals had a nonsense point mutation at c.6247A>T (p.K2083*), c.6663C>G (p.Y2221*) or c.6732C>A (p.Y2244*). All mutations cluster into the last coding exon, resulting in premature termination of the protein and truncation of the negative regulatory proline-glutamate-serine-threonine rich PEST domain. Our results suggest that mutant mRNA products escape nonsense mediated decay. The truncated NOTCH3 may cause gain-of-function through decreased clearance of the active intracellular product, resembling NOTCH2 mutations in the clinically related Hajdu-Cheney syndrome and contrasting the NOTCH3 missense mutations causing CADASIL. © 2014 Wiley Periodicals, Inc.

  9. FGF-mediated mesoderm induction involves the Src-family kinase Laloo.

    PubMed

    Weinstein, D C; Marden, J; Carnevali, F; Hemmati-Brivanlou, A

    1998-08-27

    During embryogenesis, inductive interactions underlie the development of much of the body plan. In Xenopus laevis, factors secreted from the vegetal pole induce mesoderm in the adjacent marginal zone; members of both the transforming growth factor-beta (TGF-beta) and fibroblast growth factor (FGF) ligand families seem to have critical roles in this process. Here we report the identification and characterization of laloo, a novel participant in the signal transduction cascade linking extracellular, mesoderm-inducing signals to the nucleus, where alteration of cell fate is driven by changes in gene expression. Overexpression of laloo, a member of the Src-related gene family, in Xenopus embryos gives rise to ectopic posterior structures that frequently contain axial tissue. Laloo induces mesoderm in Xenopus ectodermal explants; this induction is blocked by reagents that disrupt the FGF signalling pathway. Conversely, expression of a dominant-inhibitory Laloo mutant blocks mesoderm induction by FGF and causes severe posterior truncations in vivo. This work provides the first evidence that a Src-related kinase is involved in vertebrate mesoderm induction.

  10. Transfer function verification and block diagram simplification of a very high-order distributed pole closed-loop servo by means of non-linear time-response simulation

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, A. K.

    1975-01-01

    Linear frequency domain methods are inadequate in analyzing the 1975 Viking Orbiter (VO75) digital tape recorder servo due to dominant nonlinear effects such as servo signal limiting, unidirectional servo control, and static/dynamic Coulomb friction. The frequency loop (speed control) servo of the VO75 tape recorder is used to illustrate the analytical tools and methodology of system redundancy elimination and high order transfer function verification. The paper compares time-domain performance parameters derived from a series of nonlinear time responses with the available experimental data in order to select the best possible analytical transfer function representation of the tape transport (mechanical segment of the tape recorder) from several possible candidates. The study also shows how an analytical time-response simulation taking into account most system nonlinearities can pinpoint system redundancy and overdesign stemming from a strictly empirical design approach. System order reduction is achieved through truncation of individual transfer functions and elimination of redundant blocks.

  11. Participation of mitochondrial diazepam binding inhibitor receptors in the anticonflict, antineophobic and anticonvulsant action of 2-aryl-3-indoleacetamide and imidazopyridine derivatives.

    PubMed

    Auta, J; Romeo, E; Kozikowski, A; Ma, D; Costa, E; Guidotti, A

    1993-05-01

    The 2-hexyl-indoleacetamide derivative, FGIN-1-27 [N,N-di-n-hexyl-2-(4-fluorophenyl)indole-3-acetamide], and the imidazopyridine derivative, alpidem, both bind with high affinity to glial mitochondrial diazepam binding inhibitor receptors (MDR) and increase mitochondrial steroidogenesis. Although FGIN-1-27 is selective for the MDR, alpidem also binds to the allosteric modulatory site of the gamma-aminobutyric acid(A) receptor where the benzodiazepines bind. FGIN-1-27 and alpidem, like the neurosteroid 3alpha,21-dihydroxy-5alpha-pregnan-20-one (THDOC), clonazepam and zolpidem (the direct allosteric modulators of gamma-aminobutyric acid(A) receptors), delay the onset of isoniazid- and metrazol-induced convulsions. The anti-isoniazid convulsant action of FGIN-1-27 and alpidem, but not that of THDOC, is blocked by PK 11195. In contrast, flumazenil completely blocked the anticonvulsant action of clonazepam and zolpidem and partially blocked that of alpidem, but it did not affect the anticonvulsant action of THDOC and FGIN-1-27. Alpidem, like clonazepam, zolpidem and diazepam, but not THDOC or FGIN-1-27, delays the onset of bicuculline-induced convulsions. In two animal models of anxiety, neophobic behavior in the elevated plus maze test and conflict-punishment behavior in the Vogel conflict test, THDOC and FGIN-1-27 elicited anxiolytic-like effects in a flumazenil-insensitive manner, whereas alpidem elicited a similar anxiolytic effect that was partially blocked by flumazenil. Whereas PK 11195 blocked the effect of FGIN-1-27 and partially blocked that of alpidem, it did not affect THDOC in either animal model of anxiety.(ABSTRACT TRUNCATED AT 250 WORDS)

  12. Computer Description of the Field Artillery Ammunition Supply Vehicle

    DTIC Science & Technology

    1983-04-01

    Keywords: Combinatorial Geometry (COM-GEOM); GIFT computer code; computer target description. ... input to the GIFT computer code to generate target vulnerability data. ... Combinatorial Geometry (COM-GEOM) description. The "Geometric Information for Targets" (GIFT) computer code accepts the COM-GEOM description and ...

  13. A Combinatorial Geometry Computer Description of the MEP-021A Generator Set

    DTIC Science & Technology

    1979-02-01

    Keywords: Generator computer description; gasoline generator; GIFT; MEP-021A. ... GIFT code is also stored on magnetic tape for future vulnerability analysis. ... the Geometric Information for Targets (GIFT) computer code. The GIFT code traces shotlines through a COM-GEOM description from any specified attack ...

  14. Future Research Needs for Dredgeability of Rock: Rock Dredging Workshop Held in Jacksonville, Florida on 25-26 July 1985.

    DTIC Science & Technology

    1986-09-01

    [Report documentation page (DD Form 1473) residue only; no abstract text is recoverable. Identifiable details: Geotechnical Laboratory (WESGR-M), PO Box 631, Vicksburg, MS 39180; Corps of Engineers; Springfield, VA 22161.]

  15. Bit-wise arithmetic coding for data compression

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.

    1994-01-01

    This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
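
    To fix ideas, here is a minimal floating-point arithmetic coder for independent bits — an illustration of the principle only (the article's coder, like all practical ones, uses integer arithmetic with renormalisation, and its probability model is tied to the quantizer codewords).

    ```python
    def ac_encode(bits, p1):
        """Shrink [0, 1) around the bit sequence; any number in the final
        interval identifies the sequence."""
        lo, hi = 0.0, 1.0
        for b in bits:
            split = lo + (hi - lo) * (1.0 - p1)   # '0' region below, '1' above
            lo, hi = (split, hi) if b else (lo, split)
        return (lo + hi) / 2.0

    def ac_decode(x, p1, n):
        lo, hi = 0.0, 1.0
        bits = []
        for _ in range(n):
            split = lo + (hi - lo) * (1.0 - p1)
            bit = int(x >= split)
            bits.append(bit)
            lo, hi = (split, hi) if bit else (lo, split)
        return bits

    bits = [1, 0, 1, 1, 1, 0, 1, 1, 1, 1]   # skewed source, p(1) = 0.8
    assert ac_decode(ac_encode(bits, 0.8), 0.8, len(bits)) == bits
    ```

    The final interval width equals the product of the modelled bit probabilities, so skewed sources compress well; treating the codeword bits as independent, as the article proposes, keeps this model trivially simple.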

  16. An Examination of the Reliability of the Organizational Assessment Package (OAP).

    DTIC Science & Technology

    1981-07-01

    ... reactivity or pretest sensitization (Bracht and Glass, 1968) may occur. In this case, the change from pretest to posttest can be caused just by the ... content items. The blocks for supervisor's code were left blank, work group code was coded as all ones, and each person's seminar number was coded in ... [The remainder is a flattened table of scale reliability statistics (e.g., Work Group Effectiveness, Job Related Satisfaction); its column structure is not recoverable.]

  17. Neural Coding of Formant-Exaggerated Speech in the Infant Brain

    ERIC Educational Resources Information Center

    Zhang, Yang; Koerner, Tess; Miller, Sharon; Grice-Patil, Zach; Svec, Adam; Akbari, David; Tusler, Liz; Carney, Edward

    2011-01-01

    Speech scientists have long proposed that formant exaggeration in infant-directed speech plays an important role in language acquisition. This event-related potential (ERP) study investigated neural coding of formant-exaggerated speech in 6-12-month-old infants. Two synthetic /i/ vowels were presented in alternating blocks to test the effects of…

  18. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; A Recursive Maximum Likelihood Decoding

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    The Viterbi algorithm is indeed a very simple and efficient method of implementing maximum likelihood decoding. However, if we take advantage of the structural properties in a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis; only some special one-section trellises of relatively small state and branch complexities are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table, which contains only the most likely codeword and its metric for a given received sequence r = (r_1, r_2, ..., r_n). This algorithm basically uses the divide and conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.
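
    What the final table contains can be shown by brute force on a toy code (an illustration of the decoding objective only; the whole point of RMLD is to reach this answer by composing small per-section tables rather than enumerating codewords as done here).

    ```python
    import numpy as np
    from itertools import product

    G = np.array([[1, 0, 1, 1, 0],          # a toy (5,2) linear block code
                  [0, 1, 0, 1, 1]])
    codebook = [np.array(m) @ G % 2 for m in product([0, 1], repeat=2)]

    def ml_decode(r):
        """Soft-decision ML decoding: maximise the correlation metric
        sum_i r_i * (1 - 2 c_i), i.e. BPSK mapping 0 -> +1, 1 -> -1."""
        metrics = [np.dot(r, 1 - 2 * c) for c in codebook]
        best = int(np.argmax(metrics))
        return codebook[best], metrics[best]   # the one-row "table"

    r = np.array([0.9, -1.1, 0.2, 1.3, -0.7])  # noisy received sequence
    print(ml_decode(r))
    ```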

  19. Low-rate image coding using vector quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makur, A.

    1990-01-01

    This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rate. Vector quantization has been known to be an effective compression scheme when a low bit rate is desirable, but the intensive computation required in a vector quantization encoder has been a handicap in using it for low rate image coding. The present work shows that, without substantially increasing the coder complexity, it is indeed possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis. These modifications are shown to offer better subjective quality when compared to the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics in the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.
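
    The computational core the thesis attacks is easy to state: full-search nearest-codeword encoding. A brief sketch with stand-in data (a real coder would train the codebook, e.g. with the LBG algorithm, and this ignores the thesis' distributed blocks and weighted distortions):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    codebook = rng.random((256, 16))      # 256 codewords for 4x4 blocks
    image = rng.random((64, 64))          # stand-in monochrome image

    # Slice the image into non-overlapping 4x4 blocks, one row per block.
    blocks = image.reshape(16, 4, 16, 4).transpose(0, 2, 1, 3).reshape(-1, 16)

    # Squared-error distortion to every codeword; the nearest index is sent.
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    indices = d.argmin(axis=1)

    print(indices.shape, "at", 8 / 16, "bits/pixel")  # 8 bits per 16 pixels
    ```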

  20. Transonic Navier-Stokes wing solution using a zonal approach. Part 1: Solution methodology and code validation

    NASA Technical Reports Server (NTRS)

    Flores, J.; Gundy, K.

    1986-01-01

    A fast diagonalized Beam-Warming algorithm is coupled with a zonal approach to solve the three-dimensional Euler/Navier-Stokes equations. The computer code, called Transonic Navier-Stokes (TNS), uses a total of four zones for wing configurations (or can be extended to complete aircraft configurations by adding zones). In the inner blocks near the wing surface, the thin-layer Navier-Stokes equations are solved, while in the outer two blocks the Euler equations are solved. The diagonal algorithm yields a speedup of as much as a factor of 40 over the original algorithm/zonal method code. The TNS code, in addition, has the capability to model wind tunnel walls. Transonic viscous solutions are obtained on a 150,000-point mesh for a NACA 0012 wing. A three-order-of-magnitude drop in the L2-norm of the residual requires approximately 500 iterations, which takes about 45 min of CPU time on a Cray-XMP processor. Simulations are also conducted for a different geometrical wing called WING C. All cases show good agreement with experimental data.

  1. Resonant Acoustic Determination of Complex Elastic Moduli

    DTIC Science & Technology

    1991-03-01

    [Report documentation page residue; the only other recoverable content is a fragment of a BASIC display routine (lines 4090-4150) that prints the run, mode, date, and mass fields of a measurement record.]

  2. Methods and codes for neutronic calculations of the MARIA research reactor.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrzejewski, K.; Kulikowska, T.; Bretscher, M. M.

    2002-02-18

    The core of the MARIA high flux multipurpose research reactor is highly heterogeneous. It consists of beryllium blocks arranged in a 6 x 8 matrix, tubular fuel assemblies, control rods and irradiation channels. The reflector is also heterogeneous and consists of graphite blocks clad with aluminum. Its structure is perturbed by the experimental beam tubes. This paper presents methods and codes used to calculate the MARIA reactor neutronics characteristics and experience gained thus far at IAE and ANL. At ANL the methods of MARIA calculations were developed in connection with the RERTR program. At IAE the package of programs was developed to help its operator in optimization of fuel utilization.

  3. A grid generation system for multi-disciplinary design optimization

    NASA Technical Reports Server (NTRS)

    Jones, William T.; Samareh-Abolhassani, Jamshid

    1995-01-01

    A general multi-block three-dimensional volume grid generator is presented which is suitable for Multi-Disciplinary Design Optimization. The code is timely, robust, highly automated, and written in ANSI 'C' for platform independence. Algebraic techniques are used to generate and/or modify block face and volume grids to reflect geometric changes resulting from design optimization. Volume grids are generated/modified in a batch environment and controlled via an ASCII user input deck. This allows the code to be incorporated directly into the design loop. Generated volume grids are presented for a High Speed Civil Transport (HSCT) Wing/Body geometry as well as a complex HSCT configuration including horizontal and vertical tails, engine nacelles and pylons, and canard surfaces.

  4. Study on a novel laser target detection system based on software radio technique

    NASA Astrophysics Data System (ADS)

    Song, Song; Deng, Jia-hao; Wang, Xue-tian; Gao, Zhen; Sun, Ji; Sun, Zhi-hui

    2008-12-01

    This paper presents the application of software radio techniques to a laser target detection system with pseudo-random code modulation. Based on the theory of software radio, the basic framework of the system, the hardware platform, and the implementation of the software system are detailed. The block diagram of the system, the DSP circuit, the block diagram of the pseudo-random code generator, and the software flow diagram of the signal processing are also designed. Experimental results show that the application of software radio techniques provides a novel method to realize the modularization, miniaturization and intelligence of the laser target detection system, and makes upgrades and improvements of the system simpler, more convenient, and cheaper.
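
    The pseudo-random code correlation principle behind such detectors can be sketched in a few lines of Python/NumPy: an LFSR-generated m-sequence is correlated against a noisy delayed echo to recover the delay, which maps to target range. The taps, noise level, and delay below are illustrative assumptions, not parameters of the system described.

      import numpy as np

      def lfsr_msequence(taps=(7, 6), nbits=7):
          """Generate a +/-1 m-sequence from a Fibonacci LFSR
          (x^7 + x^6 + 1, an illustrative primitive polynomial)."""
          state = [1] * nbits
          seq = []
          for _ in range(2 ** nbits - 1):
              seq.append(state[-1])
              fb = state[taps[0] - 1] ^ state[taps[1] - 1]
              state = [fb] + state[:-1]
          return 1 - 2 * np.array(seq)

      code = lfsr_msequence()
      delay = 37                               # unknown echo delay to recover
      echo = np.roll(code, delay) + 0.8 * np.random.randn(code.size)

      # Circular correlation peaks at the true delay.
      corr = np.array([np.dot(np.roll(code, d), echo) for d in range(code.size)])
      print("estimated delay:", int(np.argmax(corr)))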

  5. Block structured adaptive mesh and time refinement for hybrid, hyperbolic + N-body systems

    NASA Astrophysics Data System (ADS)

    Miniati, Francesco; Colella, Phillip

    2007-11-01

    We present a new numerical algorithm for the solution of coupled collisional and collisionless systems, based on the block structured adaptive mesh and time refinement strategy (AMR). We describe the issues associated with the discretization of the system equations and the synchronization of the numerical solution on the hierarchy of grid levels. We implement a code based on a higher order, conservative and directionally unsplit Godunov's method for hydrodynamics; a symmetric, time centered modified symplectic scheme for the collisionless component; and a multilevel, multigrid relaxation algorithm for the elliptic equation coupling the two components. Numerical results that illustrate the accuracy of the code and the relative merit of various implemented schemes are also presented.

  6. A modified JPEG-LS lossless compression method for remote sensing images

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua

    2015-12-01

    Like many variable-length source coders, JPEG-LS is highly vulnerable to the channel errors that occur in the transmission of remote sensing images, and error diffusion is one of the important factors affecting its robustness. The common method of improving the error resilience of JPEG-LS is to divide the image into many strips or blocks and then code each of them independently, but this reduces the coding efficiency. In this paper, a block-based JPEG-LS lossless compression method with an adaptive parameter is proposed. In the modified scheme, the threshold parameter RESET is adapted to the image, and the compression efficiency is close to that of conventional JPEG-LS.
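
    The error-containment idea behind strip/block-independent coding can be shown with any codec; the Python sketch below uses zlib purely as a stand-in for JPEG-LS, and the strip height is an assumed parameter. A corrupted byte in the whole-image stream can destroy the entire decode, while with independent strips the damage is confined to one strip, at the cost of a larger total bitstream.

      import zlib
      import numpy as np

      rows = [bytes(np.random.randint(0, 8, 512, dtype=np.uint8))
              for _ in range(64)]

      # Whole-image coding: best compression, worst error diffusion.
      whole = zlib.compress(b"".join(rows))

      # Strip-based coding (8-row strips): an error is confined to one strip,
      # mirroring the rate/robustness trade-off discussed above.
      strips = [zlib.compress(b"".join(rows[i:i + 8])) for i in range(0, 64, 8)]
      print("whole-image size:", len(whole),
            "| sum of strip sizes:", sum(len(s) for s in strips))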

  7. Simulations of pattern dynamics for reaction-diffusion systems via SIMULINK

    PubMed Central

    2014-01-01

    Background Investigation of the nonlinear pattern dynamics of a reaction-diffusion system almost always requires numerical solution of the system’s set of defining differential equations. Traditionally, this would be done by selecting an appropriate differential equation solver from a library of such solvers, then writing computer codes (in a programming language such as C or Matlab) to access the selected solver and display the integrated results as a function of space and time. This “code-based” approach is flexible and powerful, but requires a certain level of programming sophistication. A modern alternative is to use a graphical programming interface such as Simulink to construct a data-flow diagram by assembling and linking appropriate code blocks drawn from a library. The result is a visual representation of the inter-relationships between the state variables whose output can be made completely equivalent to the code-based solution. Results As a tutorial introduction, we first demonstrate application of the Simulink data-flow technique to the classical van der Pol nonlinear oscillator, and compare Matlab and Simulink coding approaches to solving the van der Pol ordinary differential equations. We then show how to introduce space (in one and two dimensions) by solving numerically the partial differential equations for two different reaction-diffusion systems: the well-known Brusselator chemical reactor, and a continuum model for a two-dimensional sheet of human cortex whose neurons are linked by both chemical and electrical (diffusive) synapses. We compare the relative performances of the Matlab and Simulink implementations. Conclusions The pattern simulations by Simulink are in good agreement with theoretical predictions. Compared with traditional coding approaches, the Simulink block-diagram paradigm reduces the time and programming burden required to implement a solution for reaction-diffusion systems of equations. Construction of the block-diagram does not require high-level programming skills, and the graphical interface lends itself to easy modification and use by non-experts. PMID:24725437

  8. Simulations of pattern dynamics for reaction-diffusion systems via SIMULINK.

    PubMed

    Wang, Kaier; Steyn-Ross, Moira L; Steyn-Ross, D Alistair; Wilson, Marcus T; Sleigh, Jamie W; Shiraishi, Yoichi

    2014-04-11

    Investigation of the nonlinear pattern dynamics of a reaction-diffusion system almost always requires numerical solution of the system's set of defining differential equations. Traditionally, this would be done by selecting an appropriate differential equation solver from a library of such solvers, then writing computer codes (in a programming language such as C or Matlab) to access the selected solver and display the integrated results as a function of space and time. This "code-based" approach is flexible and powerful, but requires a certain level of programming sophistication. A modern alternative is to use a graphical programming interface such as Simulink to construct a data-flow diagram by assembling and linking appropriate code blocks drawn from a library. The result is a visual representation of the inter-relationships between the state variables whose output can be made completely equivalent to the code-based solution. As a tutorial introduction, we first demonstrate application of the Simulink data-flow technique to the classical van der Pol nonlinear oscillator, and compare Matlab and Simulink coding approaches to solving the van der Pol ordinary differential equations. We then show how to introduce space (in one and two dimensions) by solving numerically the partial differential equations for two different reaction-diffusion systems: the well-known Brusselator chemical reactor, and a continuum model for a two-dimensional sheet of human cortex whose neurons are linked by both chemical and electrical (diffusive) synapses. We compare the relative performances of the Matlab and Simulink implementations. The pattern simulations by Simulink are in good agreement with theoretical predictions. Compared with traditional coding approaches, the Simulink block-diagram paradigm reduces the time and programming burden required to implement a solution for reaction-diffusion systems of equations. Construction of the block-diagram does not require high-level programming skills, and the graphical interface lends itself to easy modification and use by non-experts.

  9. A fully decompressed synthetic bacteriophage øX174 genome assembled and archived in yeast.

    PubMed

    Jaschke, Paul R; Lieberman, Erica K; Rodriguez, Jon; Sierra, Adrian; Endy, Drew

    2012-12-20

    The 5386 nucleotide bacteriophage øX174 genome has a complicated architecture that encodes 11 gene products via overlapping protein coding sequences spanning multiple reading frames. We designed a 6302 nucleotide synthetic surrogate, øX174.1, that fully separates all primary phage protein coding sequences along with cognate translation control elements. To specify øX174.1f, a decompressed genome the same length as wild type, we truncated the gene F coding sequence. We synthesized DNA encoding fragments of øX174.1f and used a combination of in vitro- and yeast-based assembly to produce yeast vectors encoding natural or designer bacteriophage genomes. We isolated clonal preparations of yeast plasmid DNA and transfected E. coli C strains. We recovered viable øX174 particles containing the øX174.1f genome from E. coli C strains that independently express full-length gene F. We expect that yeast can serve as a genomic 'drydock' within which to maintain and manipulate clonal lineages of other obligate lytic phage. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. Joint three-dimensional inversion of coupled groundwater flow and heat transfer based on automatic differentiation: sensitivity calculation, verification, and synthetic examples

    NASA Astrophysics Data System (ADS)

    Rath, V.; Wolf, A.; Bücker, H. M.

    2006-10-01

    Inverse methods are useful tools not only for deriving estimates of unknown parameters of the subsurface, but also for the appraisal of the models thus obtained. While neither the most general nor the most efficient method, Bayesian inversion based on the calculation of the Jacobian of a given forward model can be used to evaluate many quantities useful in this process. The calculation of the Jacobian, however, is computationally expensive and, if done by divided differences, prone to truncation error. Here, automatic differentiation can be used to produce derivative code by source transformation of an existing forward model. We describe this process for a coupled fluid flow and heat transport finite difference code, which is used in a Bayesian inverse scheme to estimate thermal and hydraulic properties and boundary conditions from measured hydraulic potentials and temperatures. The resulting derivative code was validated by comparison to simple analytical solutions and divided differences. Synthetic examples from different flow regimes demonstrate the use of the inverse scheme and its behaviour in different configurations.
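
    The contrast between AD-derived Jacobians and divided differences can be seen with a toy forward model; the snippet below uses JAX (not the paper's source-transformation tool) and an invented two-parameter model, so everything here is an illustrative assumption.

      import jax.numpy as jnp
      from jax import jacfwd

      def forward(params):
          """Toy forward model standing in for the coupled flow/heat code:
          maps (log-permeability, basal heat flow) to two 'observations'."""
          k, q = params
          return jnp.array([q / jnp.exp(k), q * (1.0 + 0.1 * jnp.exp(k))])

      p = jnp.array([0.5, 2.0])
      J_ad = jacfwd(forward)(p)          # AD Jacobian, exact to machine precision

      eps = 1e-4                         # divided differences: truncation-error prone
      J_fd = jnp.stack([(forward(p + eps * jnp.eye(2)[i]) - forward(p)) / eps
                        for i in range(2)], axis=1)
      print(J_ad, J_fd, sep="\n")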

  11. Landsat Data Continuity Mission (LDCM) - Optimizing X-Band Usage

    NASA Technical Reports Server (NTRS)

    Garon, H. M.; Gal-Edd, J. S.; Dearth, K. W.; Sank, V. I.

    2010-01-01

    The NASA version of the low-density parity check (LDPC) 7/8-rate code, shortened to the dimensions of (8160, 7136), has been implemented as the forward error correction (FEC) scheme for the Landsat Data Continuity Mission (LDCM). This is the first flight application of this code. In order to place a 440 Msps link within the 375 MHz wide X band we found it necessary to heavily bandpass filter the satellite transmitter output. Despite the significant amplitude and phase distortions that accompanied the spectral truncation, the mission-required BER is maintained at < 10^-12 with less than 2 dB of implementation loss. We utilized a band-pass filter designed ostensibly to replicate the link distortions to demonstrate link design viability. The same filter was then used to optimize the adaptive equalizer in the receiver employed at the terminus of the downlink. The excellent results we obtained could be directly attributed to the implementation of the LDPC code and the amplitude and phase compensation provided in the receiver. Similar results were obtained with receivers from several vendors.

  12. Embedding intensity image into a binary hologram with strong noise resistant capability

    NASA Astrophysics Data System (ADS)

    Zhuang, Zhaoyong; Jiao, Shuming; Zou, Wenbin; Li, Xia

    2017-11-01

    A digital hologram can be employed as a host image for image watermarking applications to protect information security. Past research demonstrates that a gray-level intensity image can be embedded into a binary Fresnel hologram by the error diffusion method or the bit truncation coding method. However, the fidelity of the watermark image retrieved from a binary hologram is generally not satisfactory, especially when the binary hologram is contaminated with noise. To address this problem, we propose a JPEG-BCH encoding method in this paper. First, we employ the JPEG standard to compress the intensity image into a binary bit stream. Next, we encode the binary bit stream with a BCH code to obtain error correction capability. Finally, the JPEG-BCH code is embedded into the binary hologram. In this way, the intensity image can be retrieved with high fidelity by a BCH-JPEG decoder even if the binary hologram suffers from serious noise contamination. Numerical simulation results show that the image quality of the retrieved intensity image with our proposed method is superior to that of state-of-the-art work previously reported.

  13. Tutorial on Reed-Solomon error correction coding

    NASA Technical Reports Server (NTRS)

    Geisel, William A.

    1990-01-01

    This tutorial attempts to provide a frank, step-by-step approach to Reed-Solomon (RS) error correction coding. RS encoding and RS decoding both with and without erasing code symbols are emphasized. There is no need to present rigorous proofs and extreme mathematical detail. Rather, the simple concepts of groups and fields, specifically Galois fields, are presented with a minimum of complexity. Before RS codes are presented, other block codes are presented as a technical introduction into coding. A primitive (15, 9) RS coding example is then completely developed from start to finish, demonstrating the encoding and decoding calculations and a derivation of the famous error-locator polynomial. The objective is to present practical information about Reed-Solomon coding in a manner such that it can be easily understood.
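
    For readers who want to experiment alongside the tutorial, the sketch below runs a Reed-Solomon encode/corrupt/decode round trip with the reedsolo Python package (assumed installed via pip). Note it uses the package's default GF(256) byte symbols rather than the tutorial's (15, 9) code over GF(16), so it illustrates the mechanics, not the worked example.

      from reedsolo import RSCodec

      rsc = RSCodec(6)                   # 6 parity symbols -> corrects up to 3 errors
      codeword = rsc.encode(b"block code")
      corrupted = bytearray(codeword)
      corrupted[0] ^= 0xFF               # inject two symbol errors
      corrupted[5] ^= 0x0F

      res = rsc.decode(corrupted)
      # Newer reedsolo versions return (message, full codeword, errata positions);
      # older ones return just the message.
      decoded = res[0] if isinstance(res, tuple) else res
      print(decoded)                     # b'block code'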

  14. Cloning and sequencing of a laccase gene from the lignin-degrading basidiomycete Pleurotus ostreatus.

    PubMed Central

    Giardina, P; Cannio, R; Martirani, L; Marzullo, L; Palmieri, G; Sannia, G

    1995-01-01

    The gene (pox1) encoding a phenol oxidase from Pleurotus ostreatus, a lignin-degrading basidiomycete, was cloned and sequenced, and the corresponding pox1 cDNA was also synthesized and sequenced. The isolated gene consists of 2,592 bp, with the coding sequence being interrupted by 19 introns and flanked by an upstream region in which putative CAAT and TATA consensus sequences could be identified at positions -174 and -84, respectively. The isolation of a second cDNA (pox2 cDNA), showing 84% similarity, and of the corresponding truncated genomic clones demonstrated the existence of a multigene family coding for isoforms of laccase in P. ostreatus. PCR amplifications of specific regions on the DNA of isolated monokaryons proved that the two genes are not allelic forms. The POX1 amino acid sequence deduced was compared with those of other known laccases from different fungi. PMID:7793961

  15. Altruistic functions for selfish DNA.

    PubMed

    Faulkner, Geoffrey J; Carninci, Piero

    2009-09-15

    Mammalian genomes are composed of 30-50% transposed elements (TEs). The vast majority of these TEs are truncated and mutated fragments of retrotransposons that are no longer capable of transposition. Although initially regarded as important factors in the evolution of gene regulatory networks, TEs are now commonly perceived as neutrally evolving and non-functional genomic elements. In a major development, recent works have strongly contradicted this "selfish DNA" or "junk DNA" dogma by demonstrating that TEs use a host of novel promoters to generate RNA on a massive scale across most eukaryotic cells. This transcription frequently functions to control the expression of protein-coding genes via alternative promoters, cis-regulatory non-protein-coding RNAs and the formation of double-stranded short RNAs. If considered in sum, these findings challenge the designation of TEs as selfish and neutrally evolving genomic elements. Here, we will expand upon these themes and discuss challenges in establishing novel TE functions in vivo.

  16. VizieR Online Data Catalog: Blazars in the Swift-BAT hard X-ray sky (Maselli+, 2010)

    NASA Astrophysics Data System (ADS)

    Maselli, A.; Cusumano, G.; Massaro, E.; La Parola, V.; Segreto, A.; Sbarufatti, B.

    2010-06-01

    We report the list of hard X-ray blazars obtained by adopting sigma=3 as the detection threshold; with this choice, about three spurious sources are expected among the total of 121 blazars. Each blazar is identified by a three-letter code, where the first two letters are BZ for blazar and the third specifies the type, followed by the truncated equatorial coordinates (J2000). The codes are defined in the "Note (1)" below. We obtained 69 FSRQs, 24 BL Lac objects and 28 blazars of uncertain classification, representing 4.4%, 2.4% and 11.0% of the corresponding populations classified in the BZCAT, respectively. This sample has been compared with other lists and catalogues found in the literature (Tueller et al., 2010, Cat. J/ApJS/186/378, Ajello et al. 2009ApJ...699..603A, Cusumano et al., 2010, Cat. J/A+A/510/A48). (1 data file).

  17. Intra Frame Coding In Advanced Video Coding Standard (H.264) to Obtain Consistent PSNR and Reduce Bit Rate for Diagonal Down Left Mode Using Gaussian Pulse

    NASA Astrophysics Data System (ADS)

    Manjanaik, N.; Parameshachari, B. D.; Hanumanthappa, S. N.; Banu, Reshma

    2017-08-01

    The intra prediction process of the H.264 video coding standard is used to code the first (intra) frame of a video and obtains better coding efficiency than previous video coding standards. A further benefit of intra frame coding is that it reduces spatial pixel redundancy within the current frame, reduces computational complexity, and provides better rate-distortion performance. Intra frames are conventionally coded with the rate-distortion optimization (RDO) method, which increases computational complexity and bit rate and reduces picture quality, making it difficult to implement in real-time applications; many researchers have therefore developed fast mode decision algorithms for intra frame coding. Previous work on intra frame coding in H.264 using fast mode decision intra prediction algorithms suffered from increased bit rate and degraded picture quality (PSNR) at different quantization parameters; many earlier approaches only reduced computational complexity or saved encoding time, at the cost of increased bit rate and loss of picture quality. To avoid the increase in bit rate and the loss of picture quality, this paper develops a better approach: a Gaussian pulse applied to intra frame coding with the diagonal down-left intra prediction mode, to achieve higher coding efficiency in terms of PSNR and bit rate. In the proposed method, a Gaussian pulse is multiplied with each 4x4 block of frequency-domain coefficients of the 4x4 sub-macroblocks of the current frame before the quantization process. Multiplying each 4x4 integer-transform coefficient block by the Gaussian pulse scales the information of the coefficients in a reversible manner: frequency samples are modified in a known and controllable way without intermixing of coefficients, which prevents the picture from degrading badly at higher values of the quantization parameter. The proposed work was implemented using MATLAB and the JM 18.6 reference software, and measures PSNR, bit rate, and compression of intra frames of YUV video sequences at QCIF resolution for different quantization parameter values with the Gaussian pulse applied to the diagonal down-left intra prediction mode. The simulation results of the proposed algorithm are tabulated and compared with a previous algorithm, the method of Tian et al. The proposed algorithm reduces bit rate by 30.98% on average and maintains consistent picture quality for QCIF sequences compared to the method of Tian et al.
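
    The reversible coefficient-scaling idea can be checked numerically; the NumPy sketch below multiplies a stand-in 4x4 coefficient block by an assumed Gaussian weight matrix before quantization and inverts the scaling afterwards (the sigma, step size, and coefficients are all illustrative, not values from the paper).

      import numpy as np

      # Illustrative 2-D Gaussian weights over a 4x4 coefficient block.
      x = np.arange(4) - 1.5
      gauss = np.exp(-np.add.outer(x**2, x**2) / (2 * 1.5**2))

      coeffs = np.random.randint(-64, 64, (4, 4)).astype(float)  # stand-in block
      qp_step = 8.0

      # Scale, quantize, then invert: element-wise scaling keeps coefficients
      # separate (no intermixing), so the operation is reversible up to the
      # quantization error.
      q = np.round((coeffs * gauss) / qp_step)
      rec = (q * qp_step) / gauss
      print("max reconstruction error:", np.abs(rec - coeffs).max())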

  18. Tail Biting Trellis Representation of Codes: Decoding and Construction

    NASA Technical Reports Server (NTRS)

    Shao. Rose Y.; Lin, Shu; Fossorier, Marc

    1999-01-01

    This paper presents two new iterative algorithms for decoding linear codes based on their tail biting trellises; one is unidirectional and the other is bidirectional. Both algorithms are computationally efficient and achieve virtually optimum error performance with a small number of decoding iterations. They outperform all previous suboptimal decoding algorithms, and the bidirectional algorithm also reduces decoding delay. Also presented in the paper is a method for constructing tail biting trellises for linear block codes.

  19. Rapid Prediction of Unsteady Three-Dimensional Viscous Flows in Turbopump Geometries

    NASA Technical Reports Server (NTRS)

    Dorney, Daniel J.

    1998-01-01

    A program is underway to improve the efficiency of a three-dimensional Navier-Stokes code and generalize it for nozzle and turbopump geometries. Code modifications will include the implementation of parallel processing software, incorporating new physical models and generalizing the multi-block capability to allow the simultaneous simulation of nozzle and turbopump configurations. The current report contains details of code modifications, numerical results of several flow simulations and the status of the parallelization effort.

  20. Predictions of GPS X-Set Performance during the Places Experiment

    DTIC Science & Technology

    1979-07-01

    A previously existing GPS X-set receiver simulation was modified to include the received signal spectrum and the receiver code correlation operation. The X-set receiver simulation documented in Reference 3-1 is a direct sampled-data digital implementation of the GPS X-set receiver. [Figure 3-6: Simplified block diagram of code correlator operation and I-Q sampling.]

  1. Unitals and ovals of symmetric block designs in LDPC and space-time coding

    NASA Astrophysics Data System (ADS)

    Andriamanalimanana, Bruno R.

    2004-08-01

    An approach to the design of LDPC (low density parity check) error-correction and space-time modulation codes involves starting with known mathematical and combinatorial structures, and deriving code properties from structure properties. This paper reports on an investigation of unital and oval configurations within generic symmetric combinatorial designs, not just classical projective planes, as the underlying structure for classes of space-time LDPC outer codes. Of particular interest are the encoding and iterative (sum-product) decoding gains that these codes may provide. Various small-length cases have been numerically implemented in Java and Matlab for a number of channel models.

  2. Investigation of advanced counterrotation blade configuration concepts for high speed turboprop systems. Task 5: Unsteady counterrotation ducted propfan analysis

    NASA Technical Reports Server (NTRS)

    Hall, Edward J.; Delaney, Robert A.

    1993-01-01

    The primary objective of this study was the development of a time-marching three-dimensional Euler/Navier-Stokes aerodynamic analysis to predict steady and unsteady compressible transonic flows about ducted and unducted propfan propulsion systems employing multiple blade rows. The computer codes resulting from this study are referred to as ADPAC-AOACR (Advanced Ducted Propfan Analysis Codes-Angle of Attack Coupled Row). This document is the final report describing the theoretical basis and analytical results from the ADPAC-AOACR codes developed under Task 5 of NASA Contract NAS3-25270, Unsteady Counterrotating Ducted Propfan Analysis. The ADPAC-AOACR program is based on a flexible multiple blocked grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. For convenience, several standard mesh block structures are described for turbomachinery applications. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. Numerical calculations are compared with experimental data for several test cases to demonstrate the utility of this approach for predicting the aerodynamics of modern turbomachinery configurations employing multiple blade rows.

  3. Investigation of advanced counterrotation blade configuration concepts for high speed turboprop systems. Task 5: Unsteady counterrotation ducted propfan analysis. Computer program user's manual

    NASA Technical Reports Server (NTRS)

    Hall, Edward J.; Delaney, Robert A.; Adamczyk, John J.; Miller, Christopher J.; Arnone, Andrea; Swanson, Charles

    1993-01-01

    The primary objective of this study was the development of a time-marching three-dimensional Euler/Navier-Stokes aerodynamic analysis to predict steady and unsteady compressible transonic flows about ducted and unducted propfan propulsion systems employing multiple blade rows. The computer codes resulting from this study are referred to as ADPAC-AOACR (Advanced Ducted Propfan Analysis Codes-Angle of Attack Coupled Row). This report is intended to serve as a computer program user's manual for the ADPAC-AOACR codes developed under Task 5 of NASA Contract NAS3-25270, Unsteady Counterrotating Ducted Propfan Analysis. The ADPAC-AOACR program is based on a flexible multiple blocked grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. For convenience, several standard mesh block structures are described for turbomachinery applications. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. Numerical calculations are compared with experimental data for several test cases to demonstrate the utility of this approach for predicting the aerodynamics of modern turbomachinery configurations employing multiple blade rows.

  4. Full f-p Shell Calculation of 51Ca and 51Sc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novoselsky, A.; Vallieres, M.; Laadan, O.

    The spectra and the electromagnetic transitions of the nuclei 51Ca and 51Sc with 11 nucleons in the f-p shell are described in the nuclear shell-model approach by using two different two-body effective interactions. The full f-p shell basis functions are used with no truncation. The new parallel shell-model computer code DUPSM (Drexel University parallel shell model), which we recently developed, has been used. The calculations were done on the MOSIX parallel machine at the Hebrew University of Jerusalem. © 1997 The American Physical Society

  5. Block-Parallel Data Analysis with DIY2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Peterka, Tom

    DIY2 is a programming model and runtime for block-parallel analytics on distributed-memory machines. Its main abstraction is block-structured data parallelism: data are decomposed into blocks; blocks are assigned to processing elements (processes or threads); computation is described as iterations over these blocks, and communication between blocks is defined by reusable patterns. By expressing computation in this general form, the DIY2 runtime is free to optimize the movement of blocks between slow and fast memories (disk and flash vs. DRAM) and to concurrently execute blocks residing in memory with multiple threads. This enables the same program to execute in-core, out-of-core, serial, parallel, single-threaded, multithreaded, or combinations thereof. This paper describes the implementation of the main features of the DIY2 programming model and optimizations to improve performance. DIY2 is evaluated on benchmark test cases to establish baseline performance for several common patterns and on larger complete analysis codes running on large-scale HPC machines.
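
    The block-parallel pattern DIY2 abstracts (decompose data into blocks, assign blocks to processing elements, iterate a computation over them, reduce the results) can be caricatured in plain Python; the sketch below uses the multiprocessing module and is in no way the DIY2 C++ API, only an illustration of the programming model.

      import numpy as np
      from multiprocessing import Pool

      def process_block(block):
          """Per-block computation; DIY2 analogously iterates a callback
          over the blocks assigned to each processing element."""
          return block.mean()

      if __name__ == "__main__":
          data = np.arange(1_000_000, dtype=float)
          blocks = np.array_split(data, 16)      # decompose data into blocks
          with Pool(4) as pool:                  # assign blocks to processes
              partials = pool.map(process_block, blocks)
          # Reduce the per-block results into a global answer.
          total = sum(p * len(b) for p, b in zip(partials, blocks))
          print(total / data.size)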

  6. Function and distribution of 5-HT2 receptors in the honeybee (Apis mellifera).

    PubMed

    Thamm, Markus; Rolke, Daniel; Jordan, Nadine; Balfanz, Sabine; Schiffer, Christian; Baumann, Arnd; Blenau, Wolfgang

    2013-01-01

    Serotonin plays a pivotal role in regulating and modulating physiological and behavioral processes in both vertebrates and invertebrates. In the honeybee (Apis mellifera), serotonin has been implicated in division of labor, visual processing, and learning processes. Here, we present the cloning, heterologous expression, and detailed functional and pharmacological characterization of two honeybee 5-HT2 receptors. Honeybee 5-HT2 receptor cDNAs were amplified from brain cDNA. Recombinant cell lines were established constitutively expressing receptor variants. Pharmacological properties of the receptors were investigated by Ca(2+) imaging experiments. Quantitative PCR was applied to explore the expression patterns of receptor mRNAs. The honeybee 5-HT2 receptor class consists of two subtypes, Am5-HT2α and Am5-HT2β. Each receptor gene also gives rise to alternatively spliced mRNAs that possibly code for truncated receptors. Only activation of the full-length receptors with serotonin caused an increase in the intracellular Ca(2+) concentration. The effect was mimicked by the agonists 5-methoxytryptamine and 8-OH-DPAT at low micromolar concentrations. Receptor activities were blocked by established 5-HT receptor antagonists such as clozapine, methiothepin, or mianserin. High transcript numbers were detected in exocrine glands suggesting that 5-HT2 receptors participate in secretory processes in the honeybee. This study marks the first molecular and pharmacological characterization of two 5-HT2 receptor subtypes in the same insect species. The results presented should facilitate further attempts to unravel central and peripheral effects of serotonin mediated by these receptors.

  7. Function and Distribution of 5-HT2 Receptors in the Honeybee (Apis mellifera)

    PubMed Central

    Thamm, Markus; Rolke, Daniel; Jordan, Nadine; Balfanz, Sabine; Schiffer, Christian; Baumann, Arnd; Blenau, Wolfgang

    2013-01-01

    Background Serotonin plays a pivotal role in regulating and modulating physiological and behavioral processes in both vertebrates and invertebrates. In the honeybee (Apis mellifera), serotonin has been implicated in division of labor, visual processing, and learning processes. Here, we present the cloning, heterologous expression, and detailed functional and pharmacological characterization of two honeybee 5-HT2 receptors. Methods Honeybee 5-HT2 receptor cDNAs were amplified from brain cDNA. Recombinant cell lines were established constitutively expressing receptor variants. Pharmacological properties of the receptors were investigated by Ca2+ imaging experiments. Quantitative PCR was applied to explore the expression patterns of receptor mRNAs. Results The honeybee 5-HT2 receptor class consists of two subtypes, Am5-HT2α and Am5-HT2β. Each receptor gene also gives rise to alternatively spliced mRNAs that possibly code for truncated receptors. Only activation of the full-length receptors with serotonin caused an increase in the intracellular Ca2+ concentration. The effect was mimicked by the agonists 5-methoxytryptamine and 8-OH-DPAT at low micromolar concentrations. Receptor activities were blocked by established 5-HT receptor antagonists such as clozapine, methiothepin, or mianserin. High transcript numbers were detected in exocrine glands suggesting that 5-HT2 receptors participate in secretory processes in the honeybee. Conclusions This study marks the first molecular and pharmacological characterization of two 5-HT2 receptor subtypes in the same insect species. The results presented should facilitate further attempts to unravel central and peripheral effects of serotonin mediated by these receptors. PMID:24324783

  8. 21 CFR 803.42 - If I am an importer, what information must I submit in my individual adverse event reports?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICE REPORTING... preexisting medical conditions. (c) Device information (Form 3500A, Block D). You must submit the following... device code (refer to FDA MEDWATCH Medical Device Reporting Code Instructions); (11) Whether a report was...

  9. 21 CFR 803.42 - If I am an importer, what information must I submit in my individual adverse event reports?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICE REPORTING... preexisting medical conditions. (c) Device information (Form 3500A, Block D). You must submit the following... device code (refer to FDA MEDWATCH Medical Device Reporting Code Instructions); (11) Whether a report was...

  10. Reed-Solomon decoder

    NASA Technical Reports Server (NTRS)

    Lahmeyer, Charles R. (Inventor)

    1987-01-01

    A Reed-Solomon decoder with dedicated hardware for five sequential algorithms was designed with overall pipelining by memory swapping between input, processing and output memories, and internal pipelining through the five algorithms. The code definition used in decoding is specified by a keyword received with each block of data so that a number of different code formats may be decoded by the same hardware.

  11. Application of a multi-block CFD code to investigate the impact of geometry modeling on centrifugal compressor flow field predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hathaway, M.D.; Wood, J.R.

    1997-10-01

    CFD codes capable of utilizing multi-block grids provide the capability to analyze the complete geometry of centrifugal compressors. Attendant with this increased capability is potentially increased grid setup time and more computational overhead, with the resultant increase in wall clock time to obtain a solution. If the increased difficulty of obtaining a solution yields a significantly better solution than one obtained by modeling the features of the tip clearance flow or the typical bluntness of a centrifugal compressor's trailing edge, then the additional burden is worthwhile. However, if the additional information obtained is of marginal use, then modeling of certain features of the geometry may provide reasonable solutions for designers to make comparative choices when pursuing a new design. In this spirit a sequence of grids was generated to study the relative importance of modeling versus detailed gridding of the tip gap and blunt trailing edge regions of the NASA large low-speed centrifugal compressor, for which considerable detailed internal laser anemometry data are available for comparison. The results indicate: (1) There is no significant difference in predicted tip clearance mass flow rate whether the tip gap is gridded or modeled. (2) Gridding rather than modeling the trailing edge results in better predictions of some flow details downstream of the impeller, but otherwise appears to offer no great benefits. (3) The pitchwise variation of absolute flow angle decreases rapidly up to 8% impeller radius ratio and much more slowly thereafter. Although some improvements in prediction of flow field details are realized as a result of analyzing the actual geometry, there is no clear consensus that any of the grids investigated produced superior results in every case when compared to the measurements. However, if a multi-block code is available, it should be used, as it has the propensity for enabling better predictions than a single-block code.

  12. Grid Work

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Pointwise Inc.'s Gridgen software is a system for the generation of 3D (three dimensional) multiple block, structured grids. Gridgen is a visually-oriented, graphics-based interactive code used to decompose a 3D domain into blocks, distribute grid points on curves, initialize and refine grid points on surfaces and initialize volume grid points. Gridgen is available to U.S. citizens and American-owned companies by license.

  13. Program EAGLE User’s Manual. Volume 3. Grid Generation Code

    DTIC Science & Technology

    1988-09-01

    [Excerpts from the report's front matter: 1. Composite Grid Structure; 2. Block Interfaces; 3. Fundamental ...] ... in principle it is possible to establish a correspondence between any physical region and a single empty rectangular block for general three ... differences. Since this second surrounding layer is not involved in the grid generation, no further account will be taken of its presence in the present ...

  14. Total x-ray power measurements in the Sandia LIGA program.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malinowski, Michael E.; Ting, Aili

    2005-08-01

    Total X-ray power measurements using aluminum block calorimetry and other techniques were made at LIGA X-ray scanner synchrotron beamlines located at both the Advanced Light Source (ALS) and the Advanced Photon Source (APS). This block calorimetry work was initially performed on LIGA beamline 3.3.1 of the ALS to provide experimental checks of predictions of the LEX-D (LIGA Exposure-Development) code for LIGA X-ray exposures, version 7.56, the version of the code in use at the time calorimetry was done. These experiments showed that it was necessary to use bend magnet field strengths and electron storage ring energies different from the default values originally in the code in order to obtain good agreement between experiment and theory. The results indicated that agreement between LEX-D predictions and experiment could be as good as 5% only if (1) more accurate values of the ring energies, (2) local values of the magnet field at the beamline source point, and (3) the NIST database for X-ray/materials interactions were used as code inputs. These local magnetic field values and accurate ring energies, together with the NIST database, are now defaults in the newest release of LEX-D, version 7.61. Three-dimensional simulations of the temperature distributions in the aluminum calorimeter block for a typical ALS power measurement were made with the ABAQUS code and found to be in good agreement with the experimental temperature data. As an application of the block calorimetry technique, the X-ray power exiting the mirror in place at a LIGA scanner located at the APS beamline 10 BM was measured with a calorimeter similar to the one used at the ALS. The overall results at the APS demonstrated the utility of calorimetry in helping to characterize the total X-ray power in LIGA beamlines. In addition to the block calorimetry work at the ALS and APS, a preliminary comparison of the use of heat flux sensors, photodiodes and modified beam calorimeters as total X-ray power monitors was made at the ALS, beamline 3.3.1. This work showed that a modification of a commercially available heat flux sensor could result in a simple, direct-reading beam power meter that could be useful for monitoring total X-ray power in Sandia's LIGA exposure stations at the ALS, APS and Stanford Synchrotron Radiation Laboratory (SSRL).

  15. Genotypic and phenotypic analysis of 396 individuals with mutations in Sonic Hedgehog.

    PubMed

    Solomon, Benjamin D; Bear, Kelly A; Wyllie, Adrian; Keaton, Amelia A; Dubourg, Christele; David, Veronique; Mercier, Sandra; Odent, Sylvie; Hehr, Ute; Paulussen, Aimee; Clegg, Nancy J; Delgado, Mauricio R; Bale, Sherri J; Lacbawan, Felicitas; Ardinger, Holly H; Aylsworth, Arthur S; Bhengu, Ntombenhle Louisa; Braddock, Stephen; Brookhyser, Karen; Burton, Barbara; Gaspar, Harald; Grix, Art; Horovitz, Dafne; Kanetzke, Erin; Kayserili, Hulya; Lev, Dorit; Nikkel, Sarah M; Norton, Mary; Roberts, Richard; Saal, Howard; Schaefer, G B; Schneider, Adele; Smith, Erika K; Sowry, Ellen; Spence, M Anne; Shalev, Stavit A; Steiner, Carlos E; Thompson, Elizabeth M; Winder, Thomas L; Balog, Joan Z; Hadley, Donald W; Zhou, Nan; Pineda-Alvarez, Daniel E; Roessler, Erich; Muenke, Maximilian

    2012-07-01

    Holoprosencephaly (HPE), the most common malformation of the human forebrain, may result from mutations in over 12 genes. Sonic Hedgehog (SHH) was the first such gene discovered; mutations in SHH remain the most common cause of non-chromosomal HPE. The severity spectrum is wide, ranging from incompatibility with extrauterine life to isolated midline facial differences. To characterise genetic and clinical findings in individuals with SHH mutations. Through the National Institutes of Health and collaborating centres, DNA from approximately 2000 individuals with HPE spectrum disorders were analysed for SHH variations. Clinical details were examined and combined with published cases. This study describes 396 individuals, representing 157 unrelated kindreds, with SHH mutations; 141 (36%) have not been previously reported. SHH mutations more commonly resulted in non-HPE (64%) than frank HPE (36%), and non-HPE was significantly more common in patients with SHH than in those with mutations in the other common HPE related genes (p<0.0001 compared to ZIC2 or SIX3). Individuals with truncating mutations were significantly more likely to have frank HPE than those with non-truncating mutations (49% vs 35%, respectively; p=0.012). While mutations were significantly more common in the N-terminus than in the C-terminus (including accounting for the relative size of the coding regions, p=0.00010), no specific genotype-phenotype correlations could be established regarding mutation location. SHH mutations overall result in milder disease than mutations in other common HPE related genes. HPE is more frequent in individuals with truncating mutations, but clinical predictions at the individual level remain elusive.

  16. BRCA1 sequence variations in 160 individuals referred to a breast/ovarian family cancer clinic. Institut Curie Breast Cancer Group.

    PubMed Central

    Stoppa-Lyonnet, D; Laurent-Puig, P; Essioux, L; Pagès, S; Ithier, G; Ligot, L; Fourquet, A; Salmon, R J; Clough, K B; Pouillart, P; Bonaïti-Pellié, C; Thomas, G

    1997-01-01

    An account of familial aggregation in breast/ovarian cancer has become possible with the identification of BRCA1 germ-line mutations. We evaluated, for 249 individuals registered with the Institut Curie in Paris, the prior probability that an individual carried a mutation that predisposes to these diseases. We chose 160 women for BRCA1 analysis: 103 with a family history of breast cancer and 57 with a family history of breast-ovarian cancer. To detect small mutations, we generated and analyzed 35 overlapping genomic PCR products that cover the coding portion of the gene, by using denaturing gradient gel electrophoresis. Thirty-eight truncating mutations (32 frameshifts, 4 nonsense mutations, and 2 splice variants) were observed in 15% of women with a family history of breast cancer only and in 40% of those with a history of breast-ovarian cancer. Twelve of 25 distinct truncating mutations identified were novel and unique. Most BRCA1 mutations that had been reported more than five times in the Breast Cancer Information Core were present in our series. One mutation (5149del4) observed in two apparently unrelated families most likely originates from a common ancestor. The position of truncating mutations did not significantly affect the ratio of the risk of breast cancer to that of ovarian cancer. In addition, 15 DNA variants (14 missense mutations and 1 neutral mutation) were identified, 9 of which were novel. Indirect evidence suggests that seven of these mutations are deleterious. PMID:9150149

  17. Conversion of S–phenylsulfonylcysteine residues to mixed disulfides at pH 4.0: utility in protein thiol blocking and in protein–S–nitrosothiol detection

    PubMed Central

    Reeves, B. D.; Joshi, N.; Campanello, G. C.; Hilmer, J. K.; Chetia, L.; Vance, J. A.; Reinschmidt, J. N.; Miller, C. G.; Giedroc, D. P.; Dratz, E. A.; Singel, D. J.; Grieco, P. A.

    2014-01-01

    A three step protocol for protein S-nitrosothiol conversion to fluorescent mixed disulfides with purified proteins, referred to as the thiosulfonate switch, is explored which involves: 1) thiol blocking at pH 4.0 using S-phenylsulfonylcysteine (SPSC); 2) trapping of protein S-nitrosothiols as their S-phenylsulfonylcysteines employing sodium benzenesulfinate; and 3) tagging the protein thiosulfonate with a fluorescent rhodamine based probe bearing a reactive thiol (Rhod-SH), which forms a mixed disulfide between the probe and the formerly S-nitrosated cysteine residue. S-nitrosated bovine serum albumin and the S-nitrosated C-terminally truncated form of AdhR-SH (alcohol dehydrogenase regulator) designated as AdhR*-SNO were selectively labelled by the thiosulfonate switch both individually and in protein mixtures containing free thiols. This protocol features the facile reaction of thiols with S-phenylsulfonylcysteines forming mixed disulfides at mild acidic pH (pH = 4.0) in both the initial blocking step as well as in the conversion of protein-S-sulfonylcysteines to form stable fluorescent disulfides. Labelling was monitored by TOF-MS and gel electrophoresis. Proteolysis and peptide analysis of the resulting digest identified the cysteine residues containing mixed disulfides bearing the fluorescent probe, Rhod-SH. PMID:24986430

  18. Complementary Reliability-Based Decodings of Binary Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1997-01-01

    This correspondence presents a hybrid reliability-based decoding algorithm which combines the reprocessing method based on the most reliable basis and a generalized Chase-type algebraic decoder based on the least reliable positions. It is shown that reprocessing with a simple additional algebraic decoding effort achieves significant coding gain. For long codes, the order of reprocessing required to achieve asymptotic optimum error performance is reduced by approximately 1/3. This significantly reduces the computational complexity, especially for long codes. Also, a more efficient criterion for stopping the decoding process is derived based on the knowledge of the algebraic decoding solution.

  19. Zero-truncated negative binomial - Erlang distribution

    NASA Astrophysics Data System (ADS)

    Bodhisuwan, Winai; Pudprommarat, Chookait; Bodhisuwan, Rujira; Saothayanun, Luckhana

    2017-11-01

    The zero-truncated negative binomial-Erlang distribution is introduced. It is developed from the negative binomial-Erlang distribution. In this work, the probability mass function is derived and some properties are included. The parameters of the zero-truncated negative binomial-Erlang distribution are estimated by using maximum likelihood estimation. Finally, the proposed distribution is applied to real data on methamphetamine counts in Bangkok, Thailand. Based on the results, the zero-truncated negative binomial-Erlang distribution provides a better fit than the zero-truncated Poisson, zero-truncated negative binomial, zero-truncated generalized negative-binomial and zero-truncated Poisson-Lindley distributions for these data.
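
    Zero-truncated likelihoods are straightforward to fit numerically; as a simpler stand-in for the negative binomial-Erlang case, the SciPy sketch below computes the MLE for a zero-truncated Poisson model on invented count data, using the truncated pmf P(X=k | X>0) = exp(-lam) lam^k / (k! (1 - exp(-lam))).

      import numpy as np
      from scipy.optimize import minimize_scalar
      from scipy.special import gammaln

      def zt_poisson_nll(lam, data):
          """Negative log-likelihood of the zero-truncated Poisson model."""
          k = np.asarray(data, dtype=float)
          ll = (-lam + k * np.log(lam) - gammaln(k + 1)
                - np.log1p(-np.exp(-lam)))
          return -ll.sum()

      data = [1, 1, 2, 1, 3, 2, 1, 4, 2, 1]   # invented counts; zeros unobservable
      fit = minimize_scalar(zt_poisson_nll, bounds=(1e-6, 50), args=(data,),
                            method="bounded")
      print("MLE of lambda:", fit.x)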

  20. PMD compensation in multilevel coded-modulation schemes with coherent detection using BLAST algorithm and iterative polarization cancellation.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2008-09-15

    We present two PMD compensation schemes suitable for use in multilevel (M>=2) block-coded modulation schemes with coherent detection. The first scheme is based on a BLAST-type polarization-interference cancellation scheme, and the second is based on iterative polarization cancellation. Both schemes use LDPC codes as channel codes. The proposed PMD compensation schemes are evaluated by employing coded-OFDM and coherent detection. When used in combination with girth-10 LDPC codes, those schemes outperform polarization-time-coding-based OFDM by 1 dB at a BER of 10^-9, and provide two times higher spectral efficiency. The proposed schemes perform comparably and are able to compensate even 1200 ps of differential group delay with negligible penalty.

  1. Algorithm 782: codes for rank-revealing QR factorizations of dense matrices.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bischof, C. H.; Quintana-Orti, G.; Mathematics and Computer Science

    1998-06-01

    This article describes a suite of codes, as well as associated testing and timing drivers, for computing rank-revealing QR (RRQR) factorizations of dense matrices. The main contribution is an efficient block algorithm for approximating an RRQR factorization, employing a windowed version of the commonly used Golub pivoting strategy and improved versions of the RRQR algorithms for triangular matrices originally suggested by Chandrasekaran and Ipsen and by Pan and Tang, respectively. We highlight usage and features of these codes.
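
    SciPy exposes the Golub column-pivoting strategy these codes build upon; the sketch below shows rank revelation via pivoted QR on an invented low-rank test matrix with an assumed tolerance (a simpler relative of the windowed block algorithm described above).

      import numpy as np
      from scipy.linalg import qr

      # Rank-2 matrix plus a tiny noise floor; pivoted QR exposes the numerical
      # rank through the decay of |R[i, i]|.
      rng = np.random.default_rng(0)
      A = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 6))
      A += 1e-10 * rng.standard_normal(A.shape)

      Q, R, piv = qr(A, pivoting=True)       # Golub column pivoting
      diag = np.abs(np.diag(R))
      rank = int(np.sum(diag > 1e-8 * diag[0]))
      print("estimated numerical rank:", rank)   # -> 2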

  2. A Combinatorial Geometry Target Description of the High Mobility Multipurpose Wheeled Vehicle (HMMWV)

    DTIC Science & Technology

    1985-10-01

    [Keywords: regions, Com-Geom, region identification, GIFT, material ...] ... the technique of Combinatorial Geometry (Com-Geom). The Com-Geom data is used as input to the Geometric Information for Targets (GIFT) computer code ... This report documents the combinatorial geometry (Com-Geom) target description data which is the input data for the GIFT code.

  3. Genetic code, hamming distance and stochastic matrices.

    PubMed

    He, Matthew X; Petoukhov, Sergei V; Ricci, Paolo E

    2004-09-01

    In this paper we use the Gray code representation of the genetic code C=00, U=10, G=11 and A=01 (C pairs with G, A pairs with U) to generate a sequence of genetic code-based matrices. In connection with these code-based matrices, we use the Hamming distance to generate a sequence of numerical matrices. We then further investigate the properties of the numerical matrices and show that they are doubly stochastic and symmetric. We determine the frequency distributions of the Hamming distances, building blocks of the matrices, decomposition and iterations of matrices. We present an explicit decomposition formula for the genetic code-based matrix in terms of permutation matrices, which provides a hypercube representation of the genetic code. It is also observed that there is a Hamiltonian cycle in a genetic code-based hypercube.
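
    The single-letter case is easy to reproduce; the NumPy sketch below builds the Hamming-distance matrix from the Gray-code assignment above, checks the constant row/column sums behind the doubly stochastic property, and forms a dinucleotide-level matrix via the identity d(xy) = d(x) + d(y) (a Kronecker-sum construction; details here are an illustrative reading of the paper, not its exact matrices).

      import numpy as np

      code = {"C": (0, 0), "U": (1, 0), "G": (1, 1), "A": (0, 1)}
      bases = "CUGA"

      # Hamming distances between the 2-bit Gray-code labels of the four bases.
      H = np.array([[sum(a != b for a, b in zip(code[x], code[y]))
                     for y in bases] for x in bases])
      print(H)
      print("row sums:", H.sum(axis=1), "col sums:", H.sum(axis=0))
      # Every row and column sums to 4, so H/4 is doubly stochastic and symmetric.

      # Dinucleotide-level matrix: Hamming distance adds over positions.
      J = np.ones((4, 4), dtype=int)
      H2 = np.kron(H, J) + np.kron(J, H)
      print("H2 row sums all equal:", np.unique(H2.sum(axis=1)))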

  4. Multi-blocking strategies for the INS3D incompressible Navier-Stokes code

    NASA Technical Reports Server (NTRS)

    Gatlin, Boyd

    1990-01-01

    With the continuing development of bigger and faster supercomputers, computational fluid dynamics (CFD) has become a useful tool for real-world engineering design and analysis. However, the number of grid points necessary to resolve realistic flow fields numerically can easily exceed the memory capacity of available computers. In addition, geometric shapes of flow fields, such as those in the Space Shuttle Main Engine (SSME) power head, may be impossible to fill with continuous grids upon which to obtain numerical solutions to the equations of fluid motion. The solution to this dilemma is simply to decompose the computational domain into subblocks of manageable size. Computer codes that are single-block by construction can be modified to handle multiple blocks, but ad-hoc changes in the FORTRAN have to be made for each geometry treated. For engineering design and analysis, what is needed is generalization so that the blocking arrangement can be specified by the user. INS3D is a computer program for the solution of steady, incompressible flow problems. It is used frequently to solve engineering problems in the CFD Branch at Marshall Space Flight Center. INS3D uses an implicit solution algorithm and the concept of artificial compressibility to provide the necessary coupling between the pressure field and the velocity field. The development of generalized multi-block capability in INS3D is described.

  5. The analysis of convolutional codes via the extended Smith algorithm

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Onyszchuk, I.

    1993-01-01

    Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.
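
    For background, the restricted rate-1/n class the article generalizes beyond is simple to encode; below is a minimal Python shift-register encoder for the common (2, 1) code with generator polynomials 1 + D^2 and 1 + D + D^2 (the code choice is illustrative, and the algebraic quantities the article computes describe minimal realizations of such encoders).

      def conv_encode(bits, gens=((1, 0, 1), (1, 1, 1))):
          """Rate-1/2 feedforward convolutional encoder. `gens` holds the
          binary generator polynomials, lowest degree first (here 1 + D^2
          and 1 + D + D^2, a common constraint-length-3 code)."""
          m = len(gens[0]) - 1          # encoder memory
          state = [0] * m
          out = []
          for b in bits:
              window = [b] + state      # current input followed by past inputs
              for g in gens:
                  out.append(sum(gi * wi for gi, wi in zip(g, window)) % 2)
              state = window[:-1]       # shift the register
          return out

      print(conv_encode([1, 0, 1, 1]))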

  6. Evolutionary Construction of Block-Based Neural Networks in Consideration of Failure

    NASA Astrophysics Data System (ADS)

    Takamori, Masahito; Koakutsu, Seiichi; Hamagami, Tomoki; Hirata, Hironori

    In this paper we propose a modified gene coding and a failure-aware procedure for the evolutionary construction of Block-Based Neural Networks (BBNNs). In the modified gene coding, the genes for the weights are arranged on the chromosome according to the positional relation between the weight genes and the structure genes. This increases the efficiency of the crossover search and is therefore expected to improve the convergence rate of construction and shorten construction time. In the failure-aware evolutionary construction, a structure adapted to the failure is built in the failed state, so the BBNN can be reconstructed in a short time when a failure occurs. To evaluate the proposed method, we apply it to pattern classification and autonomous mobile robot control problems. The computational experiments indicate that the proposed method can improve the convergence rate of construction and shorten construction and reconstruction times.

  7. Protograph LDPC Codes for the Erasure Channel

    NASA Technical Reports Server (NTRS)

    Pollara, Fabrizio; Dolinar, Samuel J.; Divsalar, Dariush

    2006-01-01

    This viewgraph presentation reviews the use of protograph Low Density Parity Check (LDPC) codes for erasure channels. A protograph is a Tanner graph with a relatively small number of nodes. A "copy-and-permute" operation can be applied to the protograph to obtain larger derived graphs of various sizes. For very high code rates and short block sizes, a low asymptotic threshold criterion is not the best approach to designing LDPC codes. Simple protographs with much regularity and low maximum node degrees appear to be the best choices. Quantized-rateless protograph LDPC codes can be built by careful design of the protograph such that multiple puncturing patterns will still permit message-passing decoding to proceed.
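    A minimal sketch of the copy-and-permute (lifting) operation may help: each edge of the protograph becomes a Z×Z permutation block in the derived graph's parity-check matrix. The random circulant shifts and the toy protograph are illustrative assumptions, and parallel edges (which protographs allow) would need sums of distinct permutations, omitted here.

        import numpy as np

        rng = np.random.default_rng(0)

        def lift(base, Z):
            """Lift an m x n 0/1 protograph matrix to an (m*Z) x (n*Z) parity-check matrix."""
            m, n = base.shape
            H = np.zeros((m * Z, n * Z), dtype=np.uint8)
            I = np.eye(Z, dtype=np.uint8)
            for i in range(m):
                for j in range(n):
                    if base[i, j]:
                        shift = rng.integers(Z)           # a random circulant permutation
                        H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, shift, axis=1)
            return H

        proto = np.array([[1, 1, 1, 0],                   # toy protograph, not from the talk
                          [0, 1, 1, 1]], dtype=np.uint8)
        H = lift(proto, Z=4)
        print(H.shape)                                    # (8, 16): a larger derived graph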

  8. The Design and Implementation of a Read Prediction Buffer

    DTIC Science & Technology

    1992-12-01

    (Only report-documentation and table-of-contents fragments survive in this record; they identify chapters on the read prediction algorithm and the read prediction buffer design, but no abstract is recoverable.)

  9. Automatic-repeat-request error control schemes

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.; Miller, M. J.

    1983-01-01

    Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
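    As a hedged illustration of the stop-and-wait flavor of ARQ surveyed here, the sketch below appends a CRC-32 check to each frame, corrupts it on a toy channel, and retransmits until the receiver's error detection passes; the channel model and CRC choice are assumptions, not from the survey.

        import random
        import struct
        import zlib

        def make_frame(payload: bytes) -> bytes:
            """Append a CRC-32 tag so the receiver can detect channel errors."""
            return payload + struct.pack(">I", zlib.crc32(payload))

        def frame_ok(frame: bytes) -> bool:
            payload, tag = frame[:-4], frame[-4:]
            return struct.pack(">I", zlib.crc32(payload)) == tag

        def noisy(frame: bytes, p_flip=0.01) -> bytes:
            """Toy channel: each byte independently gets one random bit flipped."""
            out = bytearray(frame)
            for i in range(len(out)):
                if random.random() < p_flip:
                    out[i] ^= 1 << random.randrange(8)
            return bytes(out)

        def stop_and_wait(payload: bytes, max_tries=1000) -> int:
            """Retransmit until the receiver's error detection accepts (sends ACK)."""
            frame = make_frame(payload)
            for attempt in range(1, max_tries + 1):
                if frame_ok(noisy(frame)):
                    return attempt                        # ACK: done
                # NAK or timeout: fall through and retransmit
            raise RuntimeError("retry limit exceeded")

        random.seed(1)
        print(stop_and_wait(b"x" * 200))                  # number of transmissions needed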

  10. Scene-aware joint global and local homographic video coding

    NASA Astrophysics Data System (ADS)

    Peng, Xiulian; Xu, Jizheng; Sullivan, Gary J.

    2016-09-01

    Perspective motion is commonly represented in video content that is captured and compressed for various applications including cloud gaming, vehicle and aerial monitoring, etc. Existing approaches based on an eight-parameter homography motion model cannot deal with this efficiently, either due to low prediction accuracy or excessive bit rate overhead. In this paper, we consider the camera motion model and scene structure in such video content and propose a joint global and local homography motion coding approach for video with perspective motion. The camera motion is estimated by a computer vision approach, and camera intrinsic and extrinsic parameters are globally coded at the frame level. The scene is modeled as piece-wise planes, and three plane parameters are coded at the block level. Fast gradient-based approaches are employed to search for the plane parameters for each block region. In this way, improved prediction accuracy and low bit costs are achieved. Experimental results based on the HEVC test model show that up to 9.1% bit rate savings can be achieved (with equal PSNR quality) on test video content with perspective motion. Test sequences for the example applications showed a bit rate savings ranging from 3.7 to 9.1%.
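    A minimal sketch of the plane-induced homography underlying such a scheme: globally coded camera parameters (K, R, t) combine with per-block plane parameters (n, d) to give a 3×3 warp for prediction. All numeric values below are illustrative assumptions.

        import numpy as np

        def plane_homography(K, R, t, n, d):
            """3x3 warp induced by the plane n.X = d (reference-view coordinates)."""
            return K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)

        K = np.array([[800.0, 0.0, 320.0],      # intrinsics (globally coded per frame)
                      [0.0, 800.0, 240.0],
                      [0.0,   0.0,   1.0]])
        R = np.eye(3)                           # toy motion: small pure translation
        t = np.array([0.02, 0.0, 0.0])
        n = np.array([0.0, 0.0, 1.0])           # per-block plane normal ...
        d = 5.0                                 # ... and distance (three coded parameters)

        H = plane_homography(K, R, t, n, d)
        px = np.array([100.0, 80.0, 1.0])       # a reference pixel, homogeneous coordinates
        q = H @ px
        print(q[:2] / q[2])                     # predicted position in the current frame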

  11. Synthetic Helizyme Enzymes.

    DTIC Science & Technology

    1987-08-18

    (Only report-documentation fragments survive in this record. Subject terms: synthetic enzymes; chymotrypsin; molecular modeling; peptide synthesis. The recoverable text mentions models for AChE and notes that synthetic models of α-chymotrypsin built using cyclodextrins show catalytic activity over a limited pH range.)

  12. Building Toward an Unmanned Aircraft System Training Strategy

    DTIC Science & Technology

    2014-01-01

    (Only text and table fragments survive in this record. They indicate that Global Hawk UAS operators are either trained into a new career field or cross-trained from another Air Force Specialty Code, with those for Global Hawk drawn from the imagery analyst field, and they list the RQ-4A Global Hawk/BAMS-D Block 10 and RQ-4B Global Hawk Block 20/30 platforms.)

  13. Anomaly-Based Intrusion Detection Systems Utilizing System Call Data

    DTIC Science & Technology

    2012-03-01

    (Only text fragments survive in this record. They describe malware camouflage techniques such as renaming its image or appending its image to a victim process, and an attack targeting a particular industrial plant, with the majority of the attacks taking place in Iran, driving the plant to an unstable phase and eventual physical damage; a particular block of code, block DB8061, is noted as executing automatically.)

  14. Good Trellises for IC Implementation of Viterbi Decoders for Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Moorthy, Hari T.; Lin, Shu; Uehara, Gregory T.

    1997-01-01

    This paper investigates trellis structures of linear block codes for the integrated circuit (IC) implementation of Viterbi decoders capable of achieving high decoding speed while satisfying a constraint on the structural complexity of the trellis in terms of the maximum number of states at any particular depth. Only uniform sectionalizations of the code trellis diagram are considered. An upper bound on the number of parallel and structurally identical (or isomorphic) subtrellises in a proper trellis for a code without exceeding the maximum state complexity of the minimal trellis of the code is first derived. Parallel structures of trellises with various section lengths for binary BCH and Reed-Muller (RM) codes of lengths 32 and 64 are analyzed. Next, the complexity of IC implementation of a Viterbi decoder based on an L-section trellis diagram for a code is investigated. A structural property of a Viterbi decoder called add-compare-select (ACS)-connectivity, which is related to state connectivity, is introduced. This parameter affects the complexity of wire-routing (interconnections within the IC). The effect of five parameters, namely: (1) effective computational complexity; (2) complexity of the ACS-circuit; (3) traceback complexity; (4) ACS-connectivity; and (5) branch complexity of a trellis diagram on the very large scale integration (VLSI) complexity of a Viterbi decoder is investigated. It is shown that an IC implementation of a Viterbi decoder based on a nonminimal trellis requires less area and is capable of operation at higher speed than one based on the minimal trellis when the commonly used ACS-array architecture is considered.

  15. Good trellises for IC implementation of viterbi decoders for linear block codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Moorthy, Hari T.; Uehara, Gregory T.

    1996-01-01

    This paper investigates trellis structures of linear block codes for the IC (integrated circuit) implementation of Viterbi decoders capable of achieving high decoding speed while satisfying a constraint on the structural complexity of the trellis in terms of the maximum number of states at any particular depth. Only uniform sectionalizations of the code trellis diagram are considered. An upper bound on the number of parallel and structurally identical (or isomorphic) subtrellises in a proper trellis for a code without exceeding the maximum state complexity of the minimal trellis of the code is first derived. Parallel structures of trellises with various section lengths for binary BCH and Reed-Muller (RM) codes of lengths 32 and 64 are analyzed. Next, the complexity of IC implementation of a Viterbi decoder based on an L-section trellis diagram for a code is investigated. A structural property of a Viterbi decoder called ACS-connectivity which is related to state connectivity is introduced. This parameter affects the complexity of wire-routing (interconnections within the IC). The effect of five parameters namely: (1) effective computational complexity; (2) complexity of the ACS-circuit; (3) traceback complexity; (4) ACS-connectivity; and (5) branch complexity of a trellis diagram on the VLSI complexity of a Viterbi decoder is investigated. It is shown that an IC implementation of a Viterbi decoder based on a non-minimal trellis requires less area and is capable of operation at higher speed than one based on the minimal trellis when the commonly used ACS-array architecture is considered.
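    For a concrete picture of Viterbi decoding on a block-code trellis, the sketch below runs hard-decision Viterbi over the syndrome ("Wolf") trellis of a (7,4) Hamming code, where the states at each depth are partial syndromes; the Hamming code is an illustrative stand-in for the BCH and RM codes analyzed in the paper.

        import numpy as np

        # Parity-check matrix of a (7,4) Hamming code; columns label the trellis steps.
        H = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

        def viterbi_block_decode(r):
            """Hard-decision ML decoding of a received 0/1 vector over the Wolf trellis."""
            n = H.shape[1]
            survivors = {0: ([], 0)}                     # syndrome state -> (path, metric)
            for t in range(n):
                col = int("".join(str(b) for b in H[:, t]), 2)
                nxt = {}
                for state, (path, metric) in survivors.items():
                    for bit in (0, 1):
                        s2 = state ^ col if bit else state
                        m2 = metric + int(bit != r[t])    # Hamming branch metric
                        if s2 not in nxt or m2 < nxt[s2][1]:
                            nxt[s2] = (path + [bit], m2)
                survivors = nxt
            return np.array(survivors[0][0], dtype=np.uint8)  # accept only zero syndrome

        r = np.array([1, 1, 0, 1, 0, 0, 0], dtype=np.uint8)   # a codeword with its last bit flipped
        print(viterbi_block_decode(r))                        # -> [1 1 0 1 0 0 1]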

  16. The complete mitochondrial genome of Pauropus longiramus (Myriapoda: Pauropoda): implications on early diversification of the myriapods revealed from comparative analysis.

    PubMed

    Dong, Yan; Sun, Hongying; Guo, Hua; Pan, Da; Qian, Changyuan; Hao, Sijing; Zhou, Kaiya

    2012-08-15

    Myriapods are among the earliest arthropods and may have evolved to become part of the terrestrial biota more than 400 million years ago. A noticeable lack of mitochondrial genome data from Pauropoda hampers phylogenetic and evolutionary studies within the subphylum Myriapoda. We sequenced the first complete mitochondrial genome of a microscopic pauropod, Pauropus longiramus (Arthropoda: Myriapoda), and conducted comprehensive mitogenomic analyses across the Myriapoda. The pauropod mitochondrial genome is a circular molecule of 14,487 bp long and contains the entire set of thirty-seven genes. Frequent intergenic overlaps occurred between adjacent tRNAs, and between tRNA and protein-coding genes. This is the first example of a mitochondrial genome with multiple intergenic overlaps and reveals a strategy for arthropods to effectively compact the mitochondrial genome by overlapping and truncating tRNA genes with neighbor genes, instead of only truncating tRNAs. Phylogenetic analyses based on protein-coding genes provide strong evidence that the sister group of Pauropoda is Symphyla. Additionally, approximately unbiased (AU) tests strongly support the Progoneata and confirm the basal position of Chilopoda in Myriapoda. This study provides an estimation of myriapod origins around 555 Ma (95% CI: 444-704 Ma) and this date is comparable with that of the Cambrian explosion and candidate myriapod-like fossils. A new time-scale suggests that deep radiations during early myriapod diversification occurred at least three times, not once as previously proposed. A Carboniferous origin of pauropods is congruent with the idea that these taxa are derived, rather than basal, progoneatans. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. The influence of state-level policy environments on the activation of the Medicaid SBIRT reimbursement codes.

    PubMed

    Hinde, Jesse; Bray, Jeremy; Kaiser, David; Mallonee, Erin

    2017-02-01

    To examine how institutional constraints, comprising federal actions and states' substance abuse policy environments, influence states' decisions to activate Medicaid reimbursement codes for screening and brief intervention for risky substance use in the United States. A discrete-time duration model was used to estimate the effect of institutional constraints on the likelihood of activating the Medicaid reimbursement codes. Primary constraints included federal Screening, Brief Intervention and Referral to Treatment (SBIRT) grant funding, substance abuse priority, economic climate, political climate and interstate diffusion. Study data came from publicly available secondary data sources. Federal SBIRT grant funding did not affect significantly the likelihood of activation (P = 0.628). A $1 increase in per-capita block grant funding was associated with a 10-percentage point reduction in the likelihood of activation (P = 0.003) and a $1 increase in per-capita state substance use disorder expenditures was associated with a 2-percentage point increase in the likelihood of activation (P = 0.004). States with enacted parity laws (P = 0.016) and a Democratic-controlled state government were also more likely to activate the codes. In the United States, the determinants of state activation of Medicaid Screening, Brief Intervention and Referral to Treatment (SBIRT) reimbursement codes are complex, and include more than financial considerations. Federal block grant funding is a strong disincentive to activating the SBIRT reimbursement codes, while more direct federal SBIRT grant funding has no detectable effects. © 2017 Society for the Study of Addiction.

  18. Video compression of coronary angiograms based on discrete wavelet transform with block classification.

    PubMed

    Ho, B T; Tsai, M J; Wei, J; Ma, M; Saipetch, P

    1996-01-01

    A new method of video compression for angiographic images has been developed to achieve a high compression ratio (~20:1) while eliminating the block artifacts that lead to loss of diagnostic accuracy. This method adopts the Moving Picture Experts Group (MPEG) motion-compensated prediction to take advantage of frame-to-frame correlation. However, in contrast to MPEG, the error images arising from mismatches in the motion estimation are encoded by the discrete wavelet transform (DWT) rather than the block discrete cosine transform (DCT). Furthermore, the authors developed a classification scheme that labels each block in an image as intra, error, or background type and encodes it accordingly. This hybrid coding can significantly improve the compression efficiency in certain cases. The method can be generalized to any dynamic image sequence application sensitive to block artifacts.
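    As a stand-in for the DWT stage, a one-level 2-D Haar decomposition of an error image is sketched below (the paper does not specify the wavelet; Haar is an assumption chosen for brevity).

        import numpy as np

        def haar2d(x):
            """One 2-D Haar decomposition level; x must have even height and width."""
            a = (x[:, 0::2] + x[:, 1::2]) / 2.0          # horizontal averages
            d = (x[:, 0::2] - x[:, 1::2]) / 2.0          # horizontal details
            ll = (a[0::2, :] + a[1::2, :]) / 2.0         # low-low: coarse approximation
            lh = (a[0::2, :] - a[1::2, :]) / 2.0
            hl = (d[0::2, :] + d[1::2, :]) / 2.0
            hh = (d[0::2, :] - d[1::2, :]) / 2.0
            return ll, lh, hl, hh

        error_image = np.arange(64, dtype=float).reshape(8, 8)   # stand-in residual frame
        ll, lh, hl, hh = haar2d(error_image)
        print(ll.shape, lh.shape, hl.shape, hh.shape)            # four 4x4 subbands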

  19. SimITK: visual programming of the ITK image-processing library within Simulink.

    PubMed

    Dickinson, Andrew W L; Abolmaesumi, Purang; Gobbi, David G; Mousavi, Parvin

    2014-04-01

    The Insight Segmentation and Registration Toolkit (ITK) is a software library used for image analysis, visualization, and image-guided surgery applications. ITK is a collection of C++ classes that poses the challenge of a steep learning curve should the user not have appropriate C++ programming experience. To remove the programming complexities and facilitate rapid prototyping, an implementation of ITK within a higher-level visual programming environment is presented: SimITK. ITK functionalities are automatically wrapped into "blocks" within Simulink, the visual programming environment of MATLAB, where these blocks can be connected to form workflows: visual schematics that closely represent the structure of a C++ program. The heavily templated C++ nature of ITK does not facilitate direct interaction between Simulink and ITK; an intermediary is required to convert respective data types and allow intercommunication. As such, a SimITK "Virtual Block" has been developed that serves as a wrapper around an ITK class and is capable of resolving the ITK data types to native Simulink data types. Part of the challenge surrounding this implementation involves automatically capturing and storing the pertinent class information that needs to be refined from an initial state prior to being reflected within the final block representation. The primary result of the SimITK wrapping procedure is multiple Simulink block libraries. From these libraries, blocks are selected and interconnected to demonstrate two examples: a 3D segmentation workflow and a 3D multimodal registration workflow. Compared to their pure-code equivalents, the workflows highlight ITK usability through an alternative visual interpretation of the code that abstracts away potentially confusing technicalities.

  20. A Ka-Band (26 GHz) Circularly Polarized 2x2 Microstrip Patch Sub-Array with Compact Feed

    NASA Technical Reports Server (NTRS)

    Chrysler, Andrew; Furse, Cynthia; Simons, Rainee N.; Miranda, Felix A.

    2017-01-01

    A Ka-Band (26 GHz) 2 by 2 sub-array with square-shaped microstrip patch antenna elements having two truncated corners for circular polarization (CP) is presented. In addition, the layout for a new compact microstrip feed network for the sub-array is also presented. The compact feed network offers a footprint size reduction of nearly 60 percent over a traditional sub-array at 26 GHz. Experimental data indicate that a truncation amount a = 0.741 mm for an isolated patch element results in a return loss (S11) of -35 dB at 26.3 GHz. Furthermore, the measured S11 for the proof-of-concept sub-array with the above elements is better than -10.0 dB at 27.7 GHz. However, the impedance match and the operating frequency can be fine-tuned to 26 GHz by adjusting the feed network dimensions. Lastly, good agreement is observed between the measured and simulated S11 for the sub-array for both right-hand and left-hand CP. The goal of this effort is to utilize the above sub-array as a building block for a larger N by N element array, which would serve as a feed for a reflector antenna for satellite communications.

  1. Truncation of the TAR DNA-binding protein 43 is not a prerequisite for cytoplasmic relocalization, and is suppressed by caspase inhibition and by introduction of the A90V sequence variant

    PubMed Central

    Brandon, Nicholas J.; Moss, Stephen J.

    2017-01-01

    The RNA-binding and -processing protein TAR DNA-binding protein 43 (TDP-43) is heavily linked to the underlying causes and pathology of neurodegenerative diseases such as amyotrophic lateral sclerosis and frontotemporal lobar degeneration. In these diseases, TDP-43 is mislocalized, hyperphosphorylated, ubiquitinated, aggregated and cleaved. The importance of TDP-43 cleavage in the disease pathogenesis is still poorly understood. Here we detail the use of D-sorbitol as an exogenous stressor that causes TDP-43 cleavage in HeLa cells, resulting in a 35 kDa truncated product that accumulates in the cytoplasm within one hour of treatment. We confirm that the formation of this 35 kDa cleavage product is mediated by the activation of caspases. Inhibition of caspases blocks the cleavage of TDP-43, but does not prevent the accumulation of full-length protein in the cytoplasm. Using D-sorbitol as a stressor and caspase activator, we also demonstrate that the A90V variant of TDP-43, which lies adjacent to the caspase cleavage site within the nuclear localization sequence of TDP-43, confers partial resistance against caspase-mediated generation of the 35 kDa cleavage product. PMID:28510586

  2. Iteration and superposition encryption scheme for image sequences based on multi-dimensional keys

    NASA Astrophysics Data System (ADS)

    Han, Chao; Shen, Yuzhen; Ma, Wenlin

    2017-12-01

    An iteration and superposition encryption scheme for image sequences based on multi-dimensional keys is proposed for high-security, high-capacity and low-noise information transmission. The multiple images to be encrypted are transformed into phase-only images with an iterative algorithm and each is then encrypted with a different random phase. The encrypted phase-only images are inverse Fourier transformed, generating new object functions. These functions are located in different blocks and zero-padded for a sparse distribution; they then propagate to a specific region over different distances by angular spectrum diffraction and are superposed to form a single image. The single image is multiplied by a random phase in the frequency domain, and then the phase part of the frequency spectrum is truncated while the amplitude information is retained. The random phases, propagation distances, and truncated phase information in the frequency domain are employed as multi-dimensional keys. The iteration processing and sparse distribution greatly reduce the crosstalk among the multiple encrypted images. The superposition of image sequences greatly improves the capacity of the encrypted information. Several numerical experiments based on a designed optical system demonstrate that the proposed scheme can enhance the encrypted information capacity and allow image transmission at a highly desired security level.
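    A hedged sketch of the angular-spectrum propagation step used to place each block at its distance: the field's spectrum is multiplied by the free-space transfer function and transformed back. The grid size, wavelength and aperture below are illustrative assumptions.

        import numpy as np

        def angular_spectrum(u0, wavelength, dx, z):
            """Propagate a complex field u0 (N x N, sample pitch dx) a distance z."""
            n = u0.shape[0]
            fx = np.fft.fftfreq(n, d=dx)                    # spatial frequencies
            FX, FY = np.meshgrid(fx, fx)
            arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
            kz = 2j * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
            Hz = np.exp(kz * z) * (arg > 0)                 # drop evanescent components
            return np.fft.ifft2(np.fft.fft2(u0) * Hz)

        u0 = np.zeros((256, 256), dtype=complex)
        u0[96:160, 96:160] = 1.0                            # a simple square aperture
        u1 = angular_spectrum(u0, wavelength=632.8e-9, dx=8e-6, z=0.05)
        print(float(np.abs(u1).max()))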

  3. Bandwidth efficient coding for satellite communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Costello, Daniel J., Jr.; Miller, Warner H.; Morakis, James C.; Poland, William B., Jr.

    1992-01-01

    An error control coding scheme was devised to achieve large coding gain and high reliability by using coded modulation with reduced decoding complexity. To achieve a 3 to 5 dB coding gain and moderate reliability, the decoding complexity is quite modest. In fact, to achieve a 3 dB coding gain, the decoding complexity is quite simple, whether trellis coded modulation or block coded modulation is used. However, to achieve coding gains exceeding 5 dB, the decoding complexity increases drastically, and the implementation of the decoder becomes very expensive and impractical. The use of coded modulation in conjunction with concatenated (or cascaded) coding is proposed. A good short bandwidth-efficient modulation code is used as the inner code and a relatively powerful Reed-Solomon code is used as the outer code. With properly chosen inner and outer codes, a concatenated coded modulation scheme not only can achieve large coding gains and high reliability with good bandwidth efficiency but also can be practically implemented. This combination of coded modulation and concatenated coding offers a way of achieving the best of three worlds: reliability and coding gain, bandwidth efficiency, and decoding complexity.

  4. Expression, purification and characterisation of two variant cysteine peptidases from Trypanosoma congolense with active site substitutions.

    PubMed

    Pillay, Davita; Boulangé, Alain F; Coetzer, Theresa H T

    2010-12-01

    Congopain, the major cysteine peptidase of Trypanosoma congolense is an attractive candidate for an anti-disease vaccine and target for the design of specific inhibitors. A complicating factor for the inclusion of congopain in a vaccine is that multiple variants of congopain are present in the genome of the parasite. In order to determine whether the variant congopain-like genes code for peptidases with enzymatic activities different to those of congopain, two variants were cloned and expressed. Two truncated catalytic domain variants were recombinantly expressed in Pichia pastoris. The two expressed catalytic domain variants differed slightly from one another in substrate preferences and also from that of C2 (the recombinant truncated form of congopain). Surprisingly, a variant with the catalytic triad Ser(25), His(159) and Asn(175) was shown to be active against classical cysteine peptidase substrates and inhibited by E-64, a class-specific cysteine protease inhibitor. Both catalytic domain clones and C2 had pH optima of either 6.0 or 6.5 implying that these congopain-like proteases are likely to be expressed and active in the bloodstream of the host animal. Copyright © 2010 Elsevier Inc. All rights reserved.

  5. A New Distribution Family for Microarray Data †

    PubMed Central

    Kelmansky, Diana Mabel; Ricci, Lila

    2017-01-01

    The traditional approach with microarray data has been to apply transformations that approximately normalize them, with the drawback of losing the original scale. The alternative standpoint taken here is to search for models that fit the data, characterized by the presence of negative values, preserving their scale; one advantage of this strategy is that it facilitates a direct interpretation of the results. A new family of distributions named gpower-normal indexed by p∈R is introduced and it is proven that these variables become normal or truncated normal when a suitable gpower transformation is applied. Expressions are given for moments and quantiles, in terms of the truncated normal density. This new family can be used to model asymmetric data that include non-positive values, as required for microarray analysis. Moreover, it has been proven that the gpower-normal family is a special case of pseudo-dispersion models, inheriting all the good properties of these models, such as asymptotic normality for small variances. A combined maximum likelihood method is proposed to estimate the model parameters, and it is applied to microarray and contamination data. R codes are available from the authors upon request. PMID:28208652

  6. Structure of Lmaj006129AAA, a hypothetical protein from Leishmania major

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arakaki, Tracy; Le Trong, Isolde; Structural Genomics of Pathogenic Protozoa

    2006-03-01

    The crystal structure of a conserved hypothetical protein from L. major, Pfam sequence family PF04543, structural genomics target ID Lmaj006129AAA, has been determined at a resolution of 1.6 Å. The gene product of structural genomics target Lmaj006129 from Leishmania major codes for a 164-residue protein of unknown function. When SeMet expression of the full-length gene product failed, several truncation variants were created with the aid of Ginzu, a domain-prediction method. 11 truncations were selected for expression, purification and crystallization based upon secondary-structure elements and disorder. The structure of one of these variants, Lmaj006129AAH, was solved by multiple-wavelength anomalous diffraction (MAD) using ELVES, an automatic protein crystal structure-determination system. This model was then successfully used as a molecular-replacement probe for the parent full-length target, Lmaj006129AAA. The final structure of Lmaj006129AAA was refined to an R value of 0.185 (R_free = 0.229) at 1.60 Å resolution. Structure and sequence comparisons based on Lmaj006129AAA suggest that proteins belonging to Pfam sequence families PF04543 and PF01878 may share a common ligand-binding motif.

  7. Overexpression, purification, and characterization of SHPTP1, a Src homology 2-containing protein-tyrosine-phosphatase.

    PubMed Central

    Pei, D; Neel, B G; Walsh, C T

    1993-01-01

    A protein-tyrosine-phosphatase (PTPase; EC 3.1.3.48) containing two Src homology 2 (SH2) domains, SHPTP1, was previously identified in hematopoietic and epithelial cells. By placing the coding sequence of the PTPase behind a bacteriophage T7 promoter, we have overexpressed both the full-length enzyme and a truncated PTPase domain in Escherichia coli. In each case, the soluble enzyme was expressed at levels of 3-4% of total soluble E. coli protein. The recombinant proteins had molecular weights of 63,000 and 45,000 for the full-length protein and the truncated PTPase domain, respectively, as determined by SDS/PAGE. The recombinant enzymes dephosphorylated p-nitrophenyl phosphate, phosphotyrosine, and phosphotyrosyl peptides but not phosphoserine, phosphothreonine, or phosphoseryl peptides. The enzymes showed a strong dependence on pH and ionic strength for their activity, with pH optima of 5.5 and 6.3 for the full-length enzyme and the catalytic domain, respectively, and an optimal NaCl concentration of 250-300 mM. The recombinant PTPases had high Km values for p-nitrophenyl phosphate and exhibited non-Michaelis-Menten kinetics for phosphotyrosyl peptides. PMID:8430079

  8. A Noninvasive In Vitro Monitoring System Reporting Skeletal Muscle Differentiation.

    PubMed

    Öztürk-Kaloglu, Deniz; Hercher, David; Heher, Philipp; Posa-Markaryan, Katja; Sperger, Simon; Zimmermann, Alice; Wolbank, Susanne; Redl, Heinz; Hacobian, Ara

    2017-01-01

    Monitoring of cell differentiation is a crucial aspect of cell-based therapeutic strategies depending on tissue maturation. In this study, we have developed a noninvasive reporter system to trace murine skeletal muscle differentiation. Either a secreted bioluminescent reporter (Metridia luciferase) or a fluorescent reporter (green fluorescent protein [GFP]) was placed under the control of the truncated muscle creatine kinase (MCK) basal promoter enhanced by variable numbers of upstream MCK E-boxes. The engineered pE3MCK vector, coding a triple tandem of E-Boxes and the truncated MCK promoter, showed twentyfold higher levels of luciferase activation compared with a Cytomegalovirus (CMV) promoter. This newly developed reporter system allowed noninvasive monitoring of myogenic differentiation in a straining bioreactor. Additionally, binding sequences of endogenous microRNAs (miRNAs; seed sequences) that are known to be downregulated in myogenesis were ligated as complementary seed sequences into the reporter vector to reduce nonspecific signal background. The insertion of seed sequences improved the signal-to-noise ratio up to 25% compared with pE3MCK. Due to the highly specific, fast, and convenient expression analysis for cells undergoing myogenic differentiation, this reporter system provides a powerful tool for application in skeletal muscle tissue engineering.

  9. Genome defense against exogenous nucleic acids in eukaryotes by non-coding DNA occurs through CRISPR-like mechanisms in the cytosol and the bodyguard protection in the nucleus.

    PubMed

    Qiu, Guo-Hua

    2016-01-01

    In this review, the protective function of the abundant non-coding DNA in the eukaryotic genome is discussed from the perspective of genome defense against exogenous nucleic acids. Peripheral non-coding DNA has been proposed to act as a bodyguard that protects the genome and the central protein-coding sequences from ionizing radiation-induced DNA damage. In the proposed mechanism of protection, the radicals generated by water radiolysis in the cytosol and IR energy are absorbed, blocked and/or reduced by peripheral heterochromatin; then, the DNA damage sites in the heterochromatin are removed and expelled from the nucleus to the cytoplasm through nuclear pore complexes, most likely through the formation of extrachromosomal circular DNA. To strengthen this hypothesis, this review summarizes the experimental evidence supporting the protective function of non-coding DNA against exogenous nucleic acids. Based on these data, I hypothesize herein about the presence of an additional line of defense formed by small RNAs in the cytosol in addition to their bodyguard protection mechanism in the nucleus. Therefore, exogenous nucleic acids may be initially inactivated in the cytosol by small RNAs generated from non-coding DNA via mechanisms similar to the prokaryotic CRISPR-Cas system. Exogenous nucleic acids may enter the nucleus, where some are absorbed and/or blocked by heterochromatin and others integrate into chromosomes. The integrated fragments and the sites of DNA damage are removed by repetitive non-coding DNA elements in the heterochromatin and excluded from the nucleus. Therefore, the normal eukaryotic genome and the central protein-coding sequences are triply protected by non-coding DNA against invasion by exogenous nucleic acids. This review provides evidence supporting the protective role of non-coding DNA in genome defense. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. An object-oriented approach for parallel self adaptive mesh refinement on block structured grids

    NASA Technical Reports Server (NTRS)

    Lemke, Max; Witsch, Kristian; Quinlan, Daniel

    1993-01-01

    Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions to separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library to permit efficient development of architecture independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmers' work is greatly simplified to primarily specifying the serial single grid application and obtaining the parallel and self-adaptive mesh refinement code with minimal effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), being implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g. in the area of computational fluid dynamics. They usually have solutions with layers which require adaptive mesh refinement and fast basic solvers in order to be resolved efficiently.

  11. The study on dynamic cadastral coding rules based on kinship relationship

    NASA Astrophysics Data System (ADS)

    Xu, Huan; Liu, Nan; Liu, Renyi; Lu, Jingfeng

    2007-06-01

    Cadastral coding rules are an important supplement to the existing national and local standard specifications for building cadastral database. After analyzing the course of cadastral change, especially the parcel change with the method of object-oriented analysis, a set of dynamic cadastral coding rules based on kinship relationship corresponding to the cadastral change is put forward and a coding format composed of street code, block code, father parcel code, child parcel code and grandchild parcel code is worked out within the county administrative area. The coding rule has been applied to the development of an urban cadastral information system called "ReGIS", which is not only able to figure out the cadastral code automatically according to both the type of parcel change and the coding rules, but also capable of checking out whether the code is spatiotemporally unique before the parcel is stored in the database. The system has been used in several cities of Zhejiang Province and got a favorable response. This verifies the feasibility and effectiveness of the coding rules to some extent.
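    A minimal sketch of such a kinship-based code might look as follows; the field widths and the use of 0 to mark "no subdivision" are illustrative assumptions, not the paper's specification.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class CadastralCode:
            street: int
            block: int
            father: int
            child: int = 0          # 0 marks "no subdivision at this level"
            grandchild: int = 0

            def __str__(self):
                return (f"{self.street:03d}-{self.block:03d}-{self.father:04d}"
                        f"-{self.child:02d}-{self.grandchild:02d}")

            def subdivide(self, seq):
                """Derive the code of a parcel produced by splitting this one."""
                if self.child == 0:
                    return CadastralCode(self.street, self.block, self.father, seq)
                if self.grandchild == 0:
                    return CadastralCode(self.street, self.block, self.father,
                                         self.child, seq)
                raise ValueError("coding depth exhausted; a new father code is needed")

        parcel = CadastralCode(street=12, block=7, father=31)
        print(parcel, "->", parcel.subdivide(1))
        # 012-007-0031-00-00 -> 012-007-0031-01-00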

  12. Coding and decoding for code division multiple user communication systems

    NASA Technical Reports Server (NTRS)

    Healy, T. J.

    1985-01-01

    A new algorithm is introduced which decodes code division multiple user communication signals. The algorithm makes use of the distinctive form or pattern of each signal to separate it from the composite signal created by the multiple users. Although the algorithm is presented in terms of frequency-hopped signals, the actual transmitter modulator can use any of the existing digital modulation techniques. The algorithm is applicable to error-free codes or to codes where controlled interference is permitted. It can be used when block synchronization is assumed, and in some cases when it is not. The paper also discusses briefly some of the codes which can be used in connection with the algorithm, and relates the algorithm to past studies which use other approaches to the same problem.

  13. Partial Adaptation of Obtained and Observed Value Signals Preserves Information about Gains and Losses

    PubMed Central

    Baddeley, Michelle; Tobler, Philippe N.; Schultz, Wolfram

    2016-01-01

    Given that the range of rewarding and punishing outcomes of actions is large but neural coding capacity is limited, efficient processing of outcomes by the brain is necessary. One mechanism to increase efficiency is to rescale neural output to the range of outcomes expected in the current context, and process only experienced deviations from this expectation. However, this mechanism comes at the cost of not being able to discriminate between unexpectedly low losses when times are bad versus unexpectedly high gains when times are good. Thus, too much adaptation would result in disregarding information about the nature and absolute magnitude of outcomes, preventing learning about the longer-term value structure of the environment. Here we investigate the degree of adaptation in outcome coding brain regions in humans, for directly experienced outcomes and observed outcomes. We scanned participants while they performed a social learning task in gain and loss blocks. Multivariate pattern analysis showed two distinct networks of brain regions adapt to the most likely outcomes within a block. Frontostriatal areas adapted to directly experienced outcomes, whereas lateral frontal and temporoparietal regions adapted to observed social outcomes. Critically, in both cases, adaptation was incomplete and information about whether the outcomes arose in a gain block or a loss block was retained. Univariate analysis confirmed incomplete adaptive coding in these regions but also detected nonadapting outcome signals. Thus, although neural areas rescale their responses to outcomes for efficient coding, they adapt incompletely and keep track of the longer-term incentives available in the environment. SIGNIFICANCE STATEMENT Optimal value-based choice requires that the brain precisely and efficiently represents positive and negative outcomes. One way to increase efficiency is to adapt responding to the most likely outcomes in a given context. However, too strong adaptation would result in loss of precise representation (e.g., when the avoidance of a loss in a loss-context is coded the same as receipt of a gain in a gain-context). We investigated an intermediate form of adaptation that is efficient while maintaining information about received gains and avoided losses. We found that frontostriatal areas adapted to directly experienced outcomes, whereas lateral frontal and temporoparietal regions adapted to observed social outcomes. Importantly, adaptation was intermediate, in line with influential models of reference dependence in behavioral economics. PMID:27683899

  14. Truncated Gaussians as tolerance sets

    NASA Technical Reports Server (NTRS)

    Cozman, Fabio; Krotkov, Eric

    1994-01-01

    This work focuses on the use of truncated Gaussian distributions as models for bounded data measurements that are constrained to appear between fixed limits. The authors prove that the truncated Gaussian can be viewed as a maximum entropy distribution for truncated bounded data, when mean and covariance are given. The characteristic function for the truncated Gaussian is presented; from this, algorithms are derived for calculation of mean, variance, summation, application of Bayes rule and filtering with truncated Gaussians. As an example of the power of their methods, a derivation of the disparity constraint (used in computer vision) from their models is described. The authors' approach complements results in statistics, but their proposal is not only to use the truncated Gaussian as a model for selected data; they propose to model measurements fundamentally in terms of truncated Gaussians.
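    For readers who want the basic quantities numerically, SciPy's truncnorm exposes the mean, variance and sampling of a Gaussian restricted to fixed limits; the numbers below are illustrative.

        from scipy import stats

        mu, sigma = 0.0, 2.0                         # underlying Gaussian
        lo, hi = -1.0, 3.0                           # fixed measurement limits
        a, b = (lo - mu) / sigma, (hi - mu) / sigma  # SciPy expects standardized limits
        tn = stats.truncnorm(a, b, loc=mu, scale=sigma)

        print(tn.mean(), tn.var())                   # pulled toward the interval interior
        print(tn.rvs(5, random_state=0))             # samples always respect the bounds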

  15. Structured Low-Density Parity-Check Codes with Bandwidth Efficient Modulation

    NASA Technical Reports Server (NTRS)

    Cheng, Michael K.; Divsalar, Dariush; Duy, Stephanie

    2009-01-01

    In this work, we study the performance of structured Low-Density Parity-Check (LDPC) Codes together with bandwidth efficient modulations. We consider protograph-based LDPC codes that facilitate high-speed hardware implementations and have minimum distances that grow linearly with block sizes. We cover various higher-order modulations such as 8-PSK, 16-APSK, and 16-QAM. During demodulation, a demapper transforms the received in-phase and quadrature samples into reliability information that feeds the binary LDPC decoder. We will compare various low-complexity demappers and provide simulation results for assorted coded-modulation combinations on the additive white Gaussian noise and independent Rayleigh fading channels.
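    As one example of the low-complexity demappers compared, a max-log LLR demapper for 8-PSK is sketched below; the Gray labeling and noise level are illustrative assumptions.

        import numpy as np

        M, BITS = 8, 3
        labels = np.array([0, 1, 3, 2, 6, 7, 5, 4])        # a Gray map over the 8 phases
        points = np.exp(2j * np.pi * np.arange(M) / M)     # unit-energy 8-PSK constellation

        def maxlog_llrs(y, noise_var):
            """Per-bit max-log LLRs; noise_var is the total complex noise variance."""
            d2 = np.abs(y[:, None] - points[None, :]) ** 2  # distances to every symbol
            llrs = np.empty((len(y), BITS))
            for b in range(BITS):
                ones = ((labels >> b) & 1) == 1
                # positive LLR favors bit value 0
                llrs[:, b] = (d2[:, ones].min(axis=1) - d2[:, ~ones].min(axis=1)) / noise_var
            return llrs

        rng = np.random.default_rng(0)
        tx = points[rng.integers(M, size=4)]
        y = tx + (rng.normal(size=4) + 1j * rng.normal(size=4)) * 0.1
        print(np.sign(maxlog_llrs(y, noise_var=0.02)))     # hard decisions from the LLRs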

  16. High-efficiency reconciliation for continuous variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Bai, Zengliang; Yang, Shenshen; Li, Yongmin

    2017-04-01

    Quantum key distribution (QKD) is the most mature application of quantum information technology. Information reconciliation is a crucial step in QKD and significantly affects the final secret key rates shared between two legitimate parties. We analyze and compare various construction methods of low-density parity-check (LDPC) codes and design high-performance irregular LDPC codes with a block length of 10^6. Starting from these good codes and exploiting the slice reconciliation technique based on multilevel coding and multistage decoding, we realize high-efficiency Gaussian key reconciliation with efficiency higher than 95% for signal-to-noise ratios above 1. Our demonstrated method can be readily applied in continuous variable QKD.

  17. Comparison of the PHISICS/RELAP5-3D Ring and Block Model Results for Phase I of the OECD MHTGR-350 Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom

    2014-04-01

    The INL PHISICS code system consists of three modules providing improved core simulation capability: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. Coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been finalized, and as part of the code verification and validation program the exercises defined for Phase I of the OECD/NEA MHTGR 350 MW Benchmark were completed. This paper provides an overview of the MHTGR Benchmark, and presents selected results of the three steady state exercises 1-3 defined for Phase I. For Exercise 1, a stand-alone steady-state neutronics solution for an End of Equilibrium Cycle Modular High Temperature Reactor (MHTGR) was calculated with INSTANT, using the provided geometry, material descriptions, and detailed cross-section libraries. Exercise 2 required the modeling of a stand-alone thermal fluids solution. The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 combined the first two exercises in a coupled neutronics and thermal fluids solution, and the coupled code suite PHISICS/RELAP5-3D was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of the traditional RELAP5-3D "ring" model approach vs. a much more detailed model that includes kinetics feedback on individual block level and thermal feedbacks on a triangular sub-mesh. The higher fidelity of the block model is illustrated with comparison results on the temperature, power density and flux distributions, and the typical under-predictions produced by the ring model approach are highlighted.

  18. Computational Study on the Inhibitor Binding Mode and Allosteric Regulation Mechanism in Hepatitis C Virus NS3/4A Protein

    PubMed Central

    Xue, Weiwei; Yang, Ying; Wang, Xiaoting; Liu, Huanxiang; Yao, Xiaojun

    2014-01-01

    HCV NS3/4A protein is an attractive therapeutic target responsible for harboring serine protease and RNA helicase activities during the viral replication. Small molecules binding at the interface between the protease and helicase domains can stabilize the closed conformation of the protein and thus block the catalytic function of HCV NS3/4A protein via an allosteric regulation mechanism. But the detailed mechanism remains elusive. Here, we aimed to provide some insight into the inhibitor binding mode and allosteric regulation mechanism of HCV NS3/4A protein by using computational methods. Four simulation systems were investigated. They include: apo state of HCV NS3/4A protein, HCV NS3/4A protein in complex with an allosteric inhibitor and the truncated form of the above two systems. The molecular dynamics simulation results indicate HCV NS3/4A protein in complex with the allosteric inhibitor 4VA adopts a closed conformation (inactive state), while the truncated apo protein adopts an open conformation (active state). Further residue interaction network analysis suggests the communication of the domain-domain interface play an important role in the transition from closed to open conformation of HCV NS3/4A protein. However, the inhibitor stabilizes the closed conformation through interaction with several key residues from both the protease and helicase domains, including His57, Asp79, Asp81, Asp168, Met485, Cys525 and Asp527, which blocks the information communication between the functional domains interface. Finally, a dynamic model about the allosteric regulation and conformational changes of HCV NS3/4A protein was proposed and could provide fundamental insights into the allosteric mechanism of HCV NS3/4A protein function regulation and design of new potent inhibitors. PMID:24586263

  19. C-TERMINAL FRAGMENT OF TRANSFORMING GROWTH FACTOR BETA-INDUCED PROTEIN (TGFBIp) IS REQUIRED FOR APOPTOSIS IN HUMAN OSTEOSARCOMA CELLS

    PubMed Central

    Zamilpa, Rogelio; Rupaimoole, Rajesha; Phelix, Clyde F.; Somaraki-Cormier, Maria; Haskins, William; Asmis, Reto; LeBaron, Richard G.

    2009-01-01

    Transforming growth factor beta induced protein (TGFBIp) is secreted into the extracellular space. When fragmentation of C-terminal portions is blocked, apoptosis is low, even when the protein is overexpressed. If fragmentation occurs, apoptosis is observed. Whether full-length TGFBIp or integrin-binding fragments released from its C-terminus is necessary for apoptosis remains equivocal. More importantly, the exact portion of the C-terminus that conveys the pro-apoptotic property of TGFBIp is uncertain. It is reportedly within the final 166 amino acids. We sought to determine if this property is dependent upon the final 69 amino acids containing the integrin-binding, EPDIM and RGD, sequences. With MG-63 osteosarcoma cells, transforming growth factor (TGF)-β1 treatment increased expression of TGFBIp over 72 hours (p<0.001). At this time point, apoptosis was significantly increased (p<0.001) and was prevented by an anti-TGFBIp polyclonal antibody (p<0.05). Overexpression of TGFBIp by transient transfection produced a 2-fold increase in apoptosis (p<0.01). Exogenous purified TGFBIp at concentrations of 37 to 150 nM produced a dose-dependent increase in apoptosis (p<0.001). Mass spectrometry analysis of TGFBIp isolated from conditioned medium of cells treated with TGF-β1 revealed truncated forms of TGFBIp that lacked integrin-binding sequences in the C-terminus. Recombinant TGFBIp truncated, similarly, at amino acid 614 failed to induce apoptosis. A recombinant fragment encoding the final 69 amino acids of the TGFBIp C-terminus produced significant apoptosis. This apoptosis level was comparable to that induced by TGF-β1 upregulation of endogenous TGFBIp. Mutation of the integrin-binding sequence EPDIM, but not RGD, blocked apoptosis (p<0.001). These pro-apoptotic actions are dependent on the C-terminus most likely to interact with integrins. PMID:19505574

  20. A Plasmodium falciparum 48/45 single epitope R0.6C subunit protein elicits high levels of transmission blocking antibodies.

    PubMed

    Singh, Susheel K; Roeffen, Will; Andersen, Gorm; Bousema, Teun; Christiansen, Michael; Sauerwein, Robert; Theisen, Michael

    2015-04-15

    The sexual stage Pfs48/45 antigen is a well-established lead candidate for a transmission blocking (TB) vaccine because of its critical role in parasite fertilization. We have recently produced the carboxy-terminal 10C-fragment of Pfs48/45 containing three known epitopes for TB antibodies as a chimera with the N-terminal region of GLURP (R0). The resulting fusion protein elicited high titer TB antibodies in rodents. To increase the relatively low yield of correctly folded Pfs48/45 we have generated a series of novel chimera truncating the 10C-fragments to 6 cysteine residues containing sub-units (6C). All constructs harbor the major epitope I for TB antibodies. One of these sub-units (R0.6Cc), produced high yields of correctly folded conformers, which could be purified by a simple 2-step procedure. Purified R0.6Cc was stable and elicits high titer TB antibodies in rats. The yield, purity and stability of R0.6Cc allows for further clinical development. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Tissue Distribution, Excretion, and Hepatic Biotransformation of Microcystin-LR in Mice

    DTIC Science & Technology

    1990-07-09

    (Only report-documentation fragments survive in this record. Subject terms: microcystin-LR; pharmacokinetics; biotransformation; protein binding. The recoverable figure captions refer to column calibration measured with blue dextran and to an Econo-Pac 10DG desalting column profile of hepatic-cytosolic radiolabel under denaturing conditions.)

  2. A CU-Level Rate and Distortion Estimation Scheme for RDO of Hardware-Friendly HEVC Encoders Using Low-Complexity Integer DCTs.

    PubMed

    Lee, Bumshik; Kim, Munchurl

    2016-08-01

    In this paper, a low-complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for hardware-friendly implementation of High Efficiency Video Coding (HEVC), where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes more difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, the DCT for rate-distortion optimization (RDO) is computationally expensive, because it requires a large number of multiplication and addition operations for the various transform block sizes of orders 4, 8, 16, and 32, and requires recursive computations to decide the optimal depths of the CU or transform unit. Therefore, full RDO-based encoding is highly complex, especially for low-power implementation of HEVC encoders. In this paper, a rate and distortion estimation scheme is proposed at the CU level based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in the prediction stages. For rate and distortion estimation at the CU level, two orthogonal matrices of sizes 4×4 and 8×8, newly designed in a butterfly structure using only addition and shift operations, are applied to the WHT. By applying the integer DCT based on the WHT and the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization using the number of non-zero quantized coefficients, and the distortion can be precisely estimated in the transform domain without the de-quantization and inverse transform otherwise required. In addition, a non-texture rate estimation is proposed by using a pseudo-entropy code to obtain accurate total rate estimates. The proposed rate and distortion estimation scheme can effectively be used for HW-friendly implementation of HEVC encoders with a 9.8% loss over HEVC full RDO, which is much less than the 20.3% and 30.2% losses of a conventional approach and a Hadamard-only scheme, respectively.
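    A minimal sketch of the underlying building block: a 4-point WHT butterfly (additions and subtractions only) applied separably to a residual block, with the nonzero count of quantized coefficients as the texture-rate proxy and the quantization error measured in the transform domain (up to the transform's scale factor). The quantization step is an illustrative assumption.

        import numpy as np

        def wht4(v):
            """4-point Walsh-Hadamard transform via two butterfly stages (unnormalized)."""
            a, b, c, d = v
            s0, s1, s2, s3 = a + c, b + d, a - c, b - d
            return np.array([s0 + s1, s0 - s1, s2 + s3, s2 - s3])

        def wht4x4(block):
            t = np.apply_along_axis(wht4, 1, block)        # transform rows ...
            return np.apply_along_axis(wht4, 0, t)         # ... then columns

        resid = np.array([[3, 1, 0, -2],                   # toy 4x4 prediction residual
                          [2, 2, -1, 0],
                          [0, 1, 1, 1],
                          [-1, 0, 2, 3]], dtype=np.int64)
        coeffs = wht4x4(resid)
        q = 8                                              # illustrative quantization step
        quant = np.sign(coeffs) * (np.abs(coeffs) // q)    # dead-zone style quantizer

        print(int(np.count_nonzero(quant)))                # texture-rate proxy
        print(int(((coeffs - quant * q) ** 2).sum()))      # distortion in transform domain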

  3. PELEC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-05-17

    PeleC is an adaptive-mesh compressible hydrodynamics code for reacting flows. It solves the compressible Navier-Stokes equations with multispecies transport in a block-structured framework. The resulting algorithm is well suited for flows with localized resolution requirements and is robust to discontinuities. User-controllable refinement criteria have the potential to result in extremely small numerical dissipation and dispersion, making this code appropriate for both research and applied usage. The code is built on the AMReX library, which facilitates hierarchical parallelism and manages distributed-memory parallelism. PeleC algorithms are implemented to express shared-memory parallelism.

  4. An Advanced simulation Code for Modeling Inductive Output Tubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thuc Bui; R. Lawrence Ives

    2012-04-27

    During the Phase I program, CCR completed several major building blocks for a 3D large-signal, inductive output tube (IOT) code using modern computer language and programming techniques. These included a 3D, Helmholtz, time-harmonic field solver with a fully functional graphical user interface (GUI), automeshing and adaptivity. Other building blocks included the improved electrostatic Poisson solver with temporal boundary conditions to provide temporal fields for the time-stepping particle pusher, as well as the self electric field caused by time-varying space charge. The magnetostatic field solver was also updated to solve for the self magnetic field caused by time-changing current density in the output cavity gap. The goal function to optimize an IOT cavity was also formulated, and the optimization methodologies were investigated.

  5. A look at scalable dense linear algebra libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, J.J.; Van de Geijn, R.A.; Walker, D.W.

    1992-01-01

    We discuss the essential design features of a library of scalable software for performing dense linear algebra computations on distributed memory concurrent computers. The square block scattered decomposition is proposed as a flexible and general-purpose way of decomposing most, if not all, dense matrix problems. An object-oriented interface to the library permits more portable applications to be written, and is easy to learn and use, since details of the parallel implementation are hidden from the user. Experiments on the Intel Touchstone Delta system with a prototype code that uses the square block scattered decomposition to perform LU factorization are presented and analyzed. It was found that the code was both scalable and efficient, performing at about 14 GFLOPS (double precision) for the largest problem considered.

  6. A look at scalable dense linear algebra libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, J.J.; Van de Geijn, R.A.; Walker, D.W.

    1992-08-01

    We discuss the essential design features of a library of scalable software for performing dense linear algebra computations on distributed memory concurrent computers. The square block scattered decomposition is proposed as a flexible and general-purpose way of decomposing most, if not all, dense matrix problems. An object-oriented interface to the library permits more portable applications to be written, and is easy to learn and use, since details of the parallel implementation are hidden from the user. Experiments on the Intel Touchstone Delta system with a prototype code that uses the square block scattered decomposition to perform LU factorization are presented and analyzed. It was found that the code was both scalable and efficient, performing at about 14 GFLOPS (double precision) for the largest problem considered.
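    The square block scattered (2-D block-cyclic) decomposition can be pictured with a few lines of code: blocks are dealt out cyclically over a P×Q process grid. The sizes here are illustrative.

        import numpy as np

        def owner_grid(block_rows, block_cols, P, Q):
            """Process (p, q) owning each matrix block under block-cyclic distribution."""
            owners = np.empty((block_rows, block_cols), dtype=object)
            for i in range(block_rows):
                for j in range(block_cols):
                    owners[i, j] = (i % P, j % Q)          # deal blocks out cyclically
            return owners

        # A 4x4 grid of blocks scattered over a 2x2 process grid: every process ends
        # up with an interleaved set of blocks, which balances load as the active
        # region of LU factorization shrinks.
        print(owner_grid(4, 4, P=2, Q=2))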

  7. Direct migration motion estimation and mode decision to decoder for a low-complexity decoder Wyner-Ziv video coding

    NASA Astrophysics Data System (ADS)

    Lei, Ted Chih-Wei; Tseng, Fan-Shuo

    2017-07-01

    This paper addresses the problem of the high computational complexity of decoding in traditional Wyner-Ziv video coding (WZVC). The key focus is the migration to the decoder of two traditionally computationally complex encoder algorithms, namely motion estimation and mode decision. In order to reduce the computational burden in this process, the proposed architecture adopts the partial boundary matching algorithm and four flexible types of block mode decision at the decoder. This approach does away with the need for motion estimation and mode decision at the encoder. The experimental results show that the proposed padding-block-based WZVC not only decreases decoder complexity to approximately one hundredth that of the state-of-the-art DISCOVER decoding but also outperforms the DISCOVER codec by up to 3 to 4 dB.

  8. Expansion and improvements of the FORMA system for response and load analysis. Volume 1: Programming manual

    NASA Technical Reports Server (NTRS)

    Wohlen, R. L.

    1976-01-01

    Techniques are presented for the solution of structural dynamic systems on an electronic digital computer using FORMA (FORTRAN Matrix Analysis). FORMA is a library of subroutines coded in FORTRAN 4 for the efficient solution of structural dynamics problems. These subroutines are in the form of building blocks that can be put together to solve a large variety of structural dynamics problems. The obvious advantage of the building block approach is that programming and checkout time are limited to that required for putting the blocks together in the proper order.

  9. Combinatorics associated with inflections and bitangents of plane quartics

    NASA Astrophysics Data System (ADS)

    Gizatullin, M. Kh

    2013-08-01

    After a preliminary survey and a description of some small Steiner systems from the standpoint of the theory of invariants of binary forms, we construct a binary Golay code (of length 24) using ideas from J. Grassmann's thesis of 1875. One of our tools is a pair of disjoint Fano planes. Another application of such pairs and properties of plane quartics is a construction of a new block design on 28 objects. This block design is a part of a dissection of the set of 288 Aronhold sevens. The dissection distributes the Aronhold sevens into 8 disjoint block designs of this type.

  10. Parallel Gaussian elimination of a block tridiagonal matrix using multiple microcomputers

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.

    1989-01-01

    The solution of a block tridiagonal matrix using parallel processing is demonstrated. The multiprocessor system on which results were obtained and the software environment used to program that system are described. Theoretical partitioning and resource allocation for the Gaussian elimination method used to solve the matrix are discussed. The results obtained from running 1-, 2- and 3-processor versions of the block tridiagonal solver are presented. The PASCAL source code for these solvers is given in the appendix, and may be transportable to other shared-memory parallel processors provided that the synchronization routines are reproduced on the target system.
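
    The block elimination described above can be sketched in a few lines; this is a sequential Python analogue with assumed dense NumPy blocks, not the PASCAL solver from the report's appendix.

        import numpy as np

        def block_thomas(D, U, L, b):
            # D: n diagonal blocks; U: n-1 super-diagonal blocks;
            # L: n-1 sub-diagonal blocks; b: n right-hand-side vectors.
            n = len(D)
            D = [d.astype(float).copy() for d in D]
            b = [v.astype(float).copy() for v in b]
            for i in range(1, n):                        # forward elimination
                f = L[i - 1] @ np.linalg.inv(D[i - 1])
                D[i] = D[i] - f @ U[i - 1]
                b[i] = b[i] - f @ b[i - 1]
            x = [None] * n                               # back substitution
            x[n - 1] = np.linalg.solve(D[n - 1], b[n - 1])
            for i in range(n - 2, -1, -1):
                x[i] = np.linalg.solve(D[i], b[i] - U[i] @ x[i + 1])
            return x

    In a parallel version, consecutive groups of block rows are assigned to different processors, and the synchronization routines guard the updates that cross partition boundaries.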

  11. The Correlation Between Subjective and Objective Measures of Coded Speech Quality and Intelligibility Following Noise Corruption

    DTIC Science & Technology

    1981-12-01

    [OCR residue from a FORTRAN listing in the report: declarations and comments for the arrays ISTORE and ASTORE, used to convert the values of each block to be printed into voltages between -5.00 V and +5.00 V and to write them to the file named by JFILE; no abstract text is recoverable.]

  12. Image compression using quad-tree coding with morphological dilation

    NASA Astrophysics Data System (ADS)

    Wu, Jiaji; Jiang, Weiwei; Jiao, Licheng; Wang, Lei

    2007-11-01

    In this paper, we propose a new algorithm that integrates a morphological dilation operation into quad-tree coding, so that each technique compensates for the other's drawback. The new algorithm can not only quickly find the seed significant coefficient for dilation but also break the block-boundary limit of quad-tree coding. We also make full use of both within-subband and cross-subband correlation to avoid the expensive cost of representing insignificant coefficients. Experimental results show that our algorithm outperforms SPECK and SPIHT. Without using any arithmetic coding, our algorithm achieves good performance at low computational cost, making it well suited to mobile devices and scenarios with strict real-time requirements.
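
    The dilation step can be illustrated with a toy significance map; the structuring element and threshold below are assumptions for illustration, not the paper's parameters.

        import numpy as np
        from scipy.ndimage import binary_dilation

        coeffs = np.array([[0, 1, 9, 8],
                           [0, 0, 7, 1],
                           [0, 0, 0, 1],
                           [1, 0, 0, 0]], dtype=float)
        seed = np.abs(coeffs) >= 4       # seed significant coefficients
        grown = binary_dilation(seed)    # grow the cluster, ignoring
                                         # quad-tree block boundaries
        print(grown.astype(int))

    Dilation exploits the clustering of significant wavelet coefficients, while the quad-tree handles large insignificant regions cheaply.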

  13. Programmed optoelectronic time-pulse coded relational processor as base element for sorting neural networks

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Bardachenko, Vitaliy F.; Nikolsky, Alexander I.; Lazarev, Alexander A.

    2007-04-01

    In the paper we show that the biologically motivated concept of time-pulse encoding offers a number of advantages (a single methodological basis, universality, and simplicity of tuning, training, and programming, among others) in the creation and design of sensor systems with parallel input-output and processing, and of 2D structures for hybrid and neuro-fuzzy neurocomputers of the next generations. We present the principles of construction of programmable relational optoelectronic time-pulse coded processors, based on continuous logic, order logic, and temporal wave processes. We consider a structure that extracts an analog signal of a given grade (order) and sorts analog and time-pulse coded variables. We propose an optoelectronic realization of the basic relational elements of order logic, consisting of time-pulse coded phototransformers (pulse-width and pulse-phase modulators) with direct and complementary outputs, a sorting network built from logic elements, and programmable commutation blocks. Simulation and experimental research yield estimates of the basic technical parameters of such devices and processors: optical input signal power of 0.2-20 μW, processing time on the order of microseconds, supply voltage of 1.5-10 V, and power consumption of hundreds of microwatts per element, with extended functional and training possibilities. We discuss possible rules and principles for training and for programmable tuning to a required function or relational operation, and the realization of hardware blocks for modifications of such processors. Finally, we show how, on the basis of such quasi-universal hardware blocks and flexible programmable tuning, one can create sorting machines, neural networks, and hybrid data-processing systems with untraditional numerical systems and picture operands.

  14. Deletion of Brca2 exon 27 causes hypersensitivity to DNA crosslinks, chromosomal instability, and reduced life span in mice

    NASA Technical Reports Server (NTRS)

    Donoho, Greg; Brenneman, Mark A.; Cui, Tracy X.; Donoviel, Dorit; Vogel, Hannes; Goodwin, Edwin H.; Chen, David J.; Hasty, Paul

    2003-01-01

    The Brca2 tumor-suppressor gene contributes to genomic stability, at least in part through a role in homologous recombinational repair. BRCA2 protein is presumed to function in homologous recombination through interactions with RAD51. Both exons 11 and 27 of Brca2 code for domains that interact with RAD51; exon 11 encodes eight BRC motifs, whereas exon 27 encodes a single, distinct interaction domain. Deletion of all RAD51-interacting domains causes embryonic lethality in mice. A less severe phenotype is seen with BRCA2 truncations that preserve some, but not all, of the BRC motifs. These mice can survive beyond weaning, but are runted and infertile, and die very young from cancer. Cells from such mice show hypersensitivity to some genotoxic agents and chromosomal instability. Here, we have analyzed mice and cells with a deletion of only the RAD51-interacting region encoded by exon 27. Mice homozygous for this mutation (called brca2(lex1)) have a shorter life span than that of control littermates, possibly because of early onset of cancer and sepsis. No other phenotype was observed in these animals; therefore, the brca2(lex1) mutation is less severe than truncations that delete some BRC motifs. However, at the cellular level, the brca2(lex1) mutation causes reduced viability, hypersensitivity to the DNA interstrand crosslinking agent mitomycin C, and gross chromosomal instability, much like more severe truncations. Thus, the extreme carboxy-terminal region encoded by exon 27 is important for BRCA2 function, probably because it is required for a fully functional interaction between BRCA2 and RAD51. Copyright 2003 Wiley-Liss, Inc.

  15. The bZIP Transcription Factor Fgap1 Mediates Oxidative Stress Response and Trichothecene Biosynthesis But Not Virulence in Fusarium graminearum

    PubMed Central

    Montibus, Mathilde; Ducos, Christine; Bonnin-Verdal, Marie-Noelle; Bormann, Jorg; Ponts, Nadia; Richard-Forget, Florence; Barreau, Christian

    2013-01-01

    Redox sensing is of primary importance for fungi to cope with oxidant compounds found in their environment. Plant pathogens are particularly subject to the oxidative burst during the primary steps of infection. In the budding yeast Saccharomyces cerevisiae, the transcription factor Yap1 mediates the response to oxidative stress via activation of genes coding for detoxification enzymes. In the cereal pathogen Fusarium graminearum, Fgap1, a homologue of Yap1, was identified and its role investigated. During infection, this pathogen produces mycotoxins belonging to the trichothecene family that accumulate in the grains. The global regulation of toxin biosynthesis is not completely understood. However, it is now clearly established that oxidative stress activates the production of toxins by F. graminearum. The involvement of Fgap1 in this activation was investigated. A deletion mutant and a strain expressing a truncated constitutive form of Fgap1 were constructed. Neither mutant was affected in pathogenicity. The deletion mutant showed a higher level of trichothecene production, associated with overexpression of Tri genes; moreover, activation of toxin accumulation in response to oxidative stress was no longer observed. In the mutant with the truncated constitutive form of Fgap1, toxin production was strongly reduced. Expression of oxidative stress response genes was not activated in the deletion mutant, and expression of the gene encoding the mitochondrial superoxide dismutase MnSOD1 was up-regulated in the mutant with the truncated constitutive form of Fgap1. Our results demonstrate that Fgap1 plays a key role in the link between oxidative stress response and F. graminearum secondary metabolism. PMID:24349499

  16. Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.

    PubMed

    Böhning, Dankmar; Kuhnert, Ronny

    2006-12-01

    This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
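
    As a small numerical companion (not the authors' code), the zero-truncated Poisson case can be fitted by maximum likelihood and turned into a Horvitz-Thompson estimate of population size.

        import numpy as np
        from scipy.optimize import brentq

        counts = np.array([1] * 60 + [2] * 25 + [3] * 10 + [4] * 5)  # zeros unobserved
        n, xbar = len(counts), counts.mean()

        # The truncated-Poisson MLE of lambda solves xbar = lam / (1 - exp(-lam)).
        lam = brentq(lambda l: l / (1.0 - np.exp(-l)) - xbar, 1e-9, 50.0)
        p0 = np.exp(-lam)        # estimated probability of observing a zero
        N_hat = n / (1.0 - p0)   # Horvitz-Thompson population-size estimate
        print(round(lam, 3), round(N_hat, 1))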

  17. Residential exposure to aircraft noise and hospital admissions for cardiovascular diseases: multi-airport retrospective study.

    PubMed

    Correia, Andrew W; Peters, Junenette L; Levy, Jonathan I; Melly, Steven; Dominici, Francesca

    2013-10-08

    To investigate whether exposure to aircraft noise increases the risk of hospitalization for cardiovascular diseases in older people (≥ 65 years) residing near airports. Multi-airport retrospective study of approximately 6 million older people residing near airports in the United States. We superimposed contours of aircraft noise levels (in decibels, dB) for 89 airports for 2009 provided by the US Federal Aviation Administration on census block resolution population data to construct two exposure metrics applicable to zip code resolution health insurance data: population weighted noise within each zip code, and 90th centile of noise among populated census blocks within each zip code. 2218 zip codes surrounding 89 airports in the contiguous states. 6 027 363 people eligible to participate in the national medical insurance (Medicare) program (aged ≥ 65 years) residing near airports in 2009. Percentage increase in the hospitalization admission rate for cardiovascular disease associated with a 10 dB increase in aircraft noise, for each airport and on average across airports adjusted by individual level characteristics (age, sex, race), zip code level socioeconomic status and demographics, zip code level air pollution (fine particulate matter and ozone), and roadway density. Averaged across all airports and using the 90th centile noise exposure metric, a zip code with 10 dB higher noise exposure had a 3.5% higher (95% confidence interval 0.2% to 7.0%) cardiovascular hospital admission rate, after controlling for covariates. Despite limitations related to potential misclassification of exposure, we found a statistically significant association between exposure to aircraft noise and risk of hospitalization for cardiovascular diseases among older people living near airports.
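
    The two exposure metrics are straightforward to compute once noise contours have been intersected with census blocks; a minimal sketch with made-up data for one zip code follows.

        import numpy as np

        # Census blocks in one zip code: modelled noise (dB) and population.
        noise = np.array([52.0, 55.0, 61.0, 48.0, 65.0])
        pop = np.array([120, 300, 80, 0, 40])

        populated = pop > 0
        pw_noise = np.average(noise[populated], weights=pop[populated])
        p90_noise = np.percentile(noise[populated], 90)
        print(pw_noise, p90_noise)  # population-weighted and 90th-centile metrics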

  18. Residential exposure to aircraft noise and hospital admissions for cardiovascular diseases: multi-airport retrospective study

    PubMed Central

    Correia, Andrew W; Peters, Junenette L; Levy, Jonathan I; Melly, Steven

    2013-01-01

    Objective To investigate whether exposure to aircraft noise increases the risk of hospitalization for cardiovascular diseases in older people (≥65 years) residing near airports. Design Multi-airport retrospective study of approximately 6 million older people residing near airports in the United States. We superimposed contours of aircraft noise levels (in decibels, dB) for 89 airports for 2009 provided by the US Federal Aviation Administration on census block resolution population data to construct two exposure metrics applicable to zip code resolution health insurance data: population weighted noise within each zip code, and 90th centile of noise among populated census blocks within each zip code. Setting 2218 zip codes surrounding 89 airports in the contiguous states. Participants 6 027 363 people eligible to participate in the national medical insurance (Medicare) program (aged ≥65 years) residing near airports in 2009. Main outcome measures Percentage increase in the hospitalization admission rate for cardiovascular disease associated with a 10 dB increase in aircraft noise, for each airport and on average across airports adjusted by individual level characteristics (age, sex, race), zip code level socioeconomic status and demographics, zip code level air pollution (fine particulate matter and ozone), and roadway density. Results Averaged across all airports and using the 90th centile noise exposure metric, a zip code with 10 dB higher noise exposure had a 3.5% higher (95% confidence interval 0.2% to 7.0%) cardiovascular hospital admission rate, after controlling for covariates. Conclusions Despite limitations related to potential misclassification of exposure, we found a statistically significant association between exposure to aircraft noise and risk of hospitalization for cardiovascular diseases among older people living near airports. PMID:24103538

  19. Bandwidth efficient CCSDS coding standard proposals

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Perez, Lance C.; Wang, Fu-Quan

    1992-01-01

    The basic concatenated coding system for the space telemetry channel consists of a Reed-Solomon (RS) outer code, a symbol interleaver/deinterleaver, and a bandwidth efficient trellis inner code. A block diagram of this configuration is shown. The system may operate with or without the outer code and interleaver. In this recommendation, the outer code remains the (255,223) RS code over GF(2^8) with an error-correcting capability of t = 16 eight-bit symbols. This code's excellent performance and the existence of fast, cost-effective decoders justify its continued use. The purpose of the interleaver/deinterleaver is to distribute burst errors out of the inner decoder over multiple codewords of the outer code. This utilizes the error-correcting capability of the outer code more efficiently and reduces the probability of an RS decoder failure. Since the space telemetry channel is not considered bursty, the required interleaving depth is primarily a function of the inner decoding method. A diagram of an interleaver with depth 4 that is compatible with the (255,223) RS code is shown. Specific interleaver requirements are discussed after the inner code recommendations.
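
    A depth-4 block interleaver of the kind described takes only a few lines. This sketch (illustrative, not the CCSDS specification text) writes RS symbols row-wise into a 4 x 255 array and reads them out column-wise, so an inner-decoder burst is spread over four outer codewords.

        def interleave(symbols, depth=4, n=255):
            rows = [symbols[i * n:(i + 1) * n] for i in range(depth)]
            return [rows[r][c] for c in range(n) for r in range(depth)]

        def deinterleave(symbols, depth=4, n=255):
            out = [None] * (depth * n)
            k = 0
            for c in range(n):
                for r in range(depth):
                    out[r * n + c] = symbols[k]
                    k += 1
            return [out[i * n:(i + 1) * n] for i in range(depth)]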

  20. A Dual-Channel Acquisition Method Based on Extended Replica Folding Algorithm for Long Pseudo-Noise Code in Inter-Satellite Links.

    PubMed

    Zhao, Hongbo; Chen, Yuying; Feng, Wenquan; Zhuang, Chen

    2018-05-25

    Inter-satellite links are an important component of the new generation of satellite navigation systems, characterized by low signal-to-noise ratio (SNR), complex electromagnetic interference, and the short time slot of each satellite, which brings difficulties to the acquisition stage. The inter-satellite links in both the Global Positioning System (GPS) and the BeiDou Navigation Satellite System (BDS) adopt a long-code spread-spectrum system. However, long code acquisition is a difficult and time-consuming task due to the long code period. Traditional folding methods such as the extended replica folding acquisition search technique (XFAST) and direct averaging are largely restricted because of code Doppler and the additional SNR loss caused by replica folding. The dual folding method (DF-XFAST) and the dual-channel method have been proposed to achieve long code acquisition in low-SNR and high-dynamic situations, respectively, but the former is easily affected by code Doppler and the latter is not fast enough. Considering the environment of inter-satellite links and the problems of existing algorithms, this paper proposes a new long code acquisition algorithm named the dual-channel acquisition method based on the extended replica folding algorithm (DC-XFAST). This method employs dual channels for verification. Each channel contains an incoming signal block. Local code samples are folded and zero-padded to the length of the incoming signal block. After a circular FFT operation, the correlation results contain two peaks of the same magnitude and specified relative position. The detection process is eased through finding the two largest values. The verification takes all the full and partial peaks into account. Numerical results reveal that the DC-XFAST method can improve acquisition performance while acquisition speed is guaranteed. The method has a significantly higher acquisition probability than the folding methods XFAST and DF-XFAST. Moreover, with the advantage of higher detection probability and lower false alarm probability, it has a lower mean acquisition time than traditional XFAST, DF-XFAST and zero-padding.
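
    The folding idea at the heart of XFAST-style methods can be sketched briefly (illustrative NumPy, not the authors' implementation): the long replica is summed into one block-length segment, and a single circular FFT correlation then tests all folded code phases at once.

        import numpy as np

        def folded_acquisition(incoming, local_code, fold):
            # incoming: received block of length N; local_code: length fold * N.
            N = len(incoming)
            folded = local_code.reshape(fold, N).sum(axis=0)  # fold the replica
            # Circular cross-correlation via FFT.
            spec = np.conj(np.fft.fft(incoming)) * np.fft.fft(folded)
            return np.abs(np.fft.ifft(spec))

        rng = np.random.default_rng(1)
        code = rng.choice([-1.0, 1.0], size=4096)  # long PN code, noiseless toy case
        rx = np.roll(code, -1234)[:512]            # block received at code phase 1234
        peak = folded_acquisition(rx, code, fold=8).argmax()
        print(peak)   # expected peak at 1234 % 512 == 210

    The folding buys speed at the price of an SNR loss and code-Doppler sensitivity, which is what the dual-channel verification in DC-XFAST is designed to counter.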

  1. On codes with multi-level error-correction capabilities

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1987-01-01

    In conventional coding for error control, all the information symbols of a message are regarded as equally significant, and hence codes are devised to provide equal protection for each information symbol against channel errors. However, on some occasions, some information symbols in a message are more significant than the others. As a result, it is desirable to devise codes with multi-level error-correcting capabilities. Another situation where codes with multi-level error-correcting capabilities are desired is in broadcast communication systems. An m-user broadcast channel has one input and m outputs. The single input and each output form a component channel. The component channels may have different noise levels, and hence the messages transmitted over the component channels require different levels of protection against errors. Block codes with multi-level error-correcting capabilities are also known as unequal error protection (UEP) codes. Structural properties of these codes are derived. Based on these structural properties, two classes of UEP codes are constructed.

  2. Construction of Protograph LDPC Codes with Linear Minimum Distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.

  3. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1991-01-01

    Shannon's capacity bound shows that coding can achieve large reductions in the required signal-to-noise ratio per information bit (E_b/N_0, where E_b is the energy per bit and N_0/2 is the double-sided noise density) in comparison to uncoded schemes. For bandwidth efficiencies of 2 bit/sym or greater, these improvements were obtained through the use of Trellis Coded Modulation and Block Coded Modulation. A method of obtaining these high efficiencies using multidimensional Multiple Phase Shift Keying (MPSK) and Quadrature Amplitude Modulation (QAM) signal sets with trellis coding is described. These schemes have advantages in decoding speed, phase transparency, and coding gain in comparison to other trellis coding schemes. Finally, a general parity check equation for rotationally invariant trellis codes is introduced from which non-linear codes for two dimensional MPSK and QAM signal sets are found. These codes are fully transparent to all rotations of the signal set.

  4. Unusual seismogenic soft-sediment deformation structures in Cambrian epicratonic carbonate deposits, western Colorado, U.S.A

    NASA Astrophysics Data System (ADS)

    Myrow, P.; Chen, J.

    2013-12-01

    A wide variety of unusual penecontemporaneous deformation structures exist in grainstone and flat-pebble conglomerate beds of the Upper Cambrian strata of western Colorado, including slide scarps, thrusted beds, irregular blocks, and internally deformed beds. Slide scarps are characterized by concave-up, sharp surfaces that truncate one or more underlying beds. Thrusted beds record movement of a part of a bed onto itself along a moderately to steeply inclined (generally 25°-40°) ramp. The hanging-wall lenses in some cases show fault-bend geometries, with either intact or mildly deformed bedding. Irregular bedded to internally deformed blocks isolated on generally flat upper bedding surfaces are similar in composition to the underlying beds. These features represent parts of beds that were detached, moved up onto, and some distance across, the laterally adjacent undisturbed bed surfaces. The blocks moved either at the sediment-water interface or intrastratally at shallow depths within overlying muddy deposits. Finally, internally deformed beds contain large blocks, fitted fabrics of highly irregular fragments, and contorted lamination, which represent heterogeneous deformation such as brecciation and liquefaction. The various deformation structures were most probably triggered by earthquakes, considering the nature of the deformation (the regional distribution of liquefaction structures, and the brittle segmentation and subsequent transportation of semi-consolidated beds) and the reactivation of Mesoproterozoic, crustal-scale shear zones in the central Rockies during the Late Cambrian. Features produced by initial brittle deformation are unusual relative to most reported seismites, and may represent poorly recognized to unrecognized seismogenic structures in the rock record.

  5. Finite-block-length analysis in classical and quantum information theory.

    PubMed

    Hayashi, Masahito

    2017-01-01

    Coding technology is used in several information processing tasks. In particular, when noise during transmission disturbs communications, coding technology is employed to protect the information. However, there are two types of coding technology: coding in classical information theory and coding in quantum information theory. Although the physical media used to transmit information ultimately obey quantum mechanics, we need to choose the type of coding depending on the kind of information device, classical or quantum, that is being used. In both branches of information theory, there are many elegant theoretical results under the ideal assumption that an infinitely large system is available. In a realistic situation, we need to account for finite size effects. The present paper reviews finite size effects in classical and quantum information theory with respect to various topics, including applied aspects.

  6. Finite-block-length analysis in classical and quantum information theory

    PubMed Central

    HAYASHI, Masahito

    2017-01-01

    Coding technology is used in several information processing tasks. In particular, when noise during transmission disturbs communications, coding technology is employed to protect the information. However, there are two types of coding technology: coding in classical information theory and coding in quantum information theory. Although the physical media used to transmit information ultimately obey quantum mechanics, we need to choose the type of coding depending on the kind of information device, classical or quantum, that is being used. In both branches of information theory, there are many elegant theoretical results under the ideal assumption that an infinitely large system is available. In a realistic situation, we need to account for finite size effects. The present paper reviews finite size effects in classical and quantum information theory with respect to various topics, including applied aspects. PMID:28302962

  7. Optical LDPC decoders for beyond 100 Gbits/s optical transmission.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2009-05-01

    We present an optical low-density parity-check (LDPC) decoder suitable for implementation above 100 Gbits/s, which provides large coding gains when based on large-girth LDPC codes. We show that a basic building block, the probabilities multiplier circuit, can be implemented using a Mach-Zehnder interferometer, and we propose a corresponding probabilistic-domain sum-product algorithm (SPA). We perform simulations of a fully parallel implementation employing girth-10 LDPC codes and the proposed SPA. The girth-10 LDPC(24015,19212) code of rate 0.8 outperforms the BCH(128,113)xBCH(256,239) turbo-product code of rate 0.82 by 0.91 dB (for binary phase-shift keying at 100 Gbits/s and a bit error rate of 10^-9), and provides a net effective coding gain of 10.09 dB.

  8. The proposed coding standard at GSFC

    NASA Technical Reports Server (NTRS)

    Morakis, J. C.; Helgert, H. J.

    1977-01-01

    As part of the continuing effort to introduce standardization of spacecraft and ground equipment in satellite systems, NASA's Goddard Space Flight Center and other NASA facilities have supported the development of a set of standards for the use of error control coding in telemetry subsystems. These standards are intended to ensure compatibility between spacecraft and ground encoding equipment, while allowing sufficient flexibility to meet all anticipated mission requirements. The standards which have been developed to date cover the application of block codes in error detection and error correction modes, as well as short and long constraint length convolutional codes decoded via the Viterbi and sequential decoding algorithms, respectively. Included are detailed specifications of the codes, and their implementation. Current effort is directed toward the development of standards covering channels with burst noise characteristics, channels with feedback, and code concatenation.

  9. Parallel deterministic neutronics with AMR in 3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clouse, C.; Ferguson, J.; Hendrickson, C.

    1997-12-31

    AMTRAN, a three-dimensional Sn neutronics code with adaptive mesh refinement (AMR), has been parallelized over spatial domains and energy groups and runs on the Meiko CS-2 with MPI message passing. Block-refined AMR is used with linear finite element representations for the fluxes, which allows for a straightforward interpretation of fluxes at block interfaces with zoning differences. The load balancing algorithm assumes 8 spatial domains, which minimizes idle time among processors.

  10. The Antimicrobial Effects of Various Nutrient Electrolyte Beverages

    DTIC Science & Technology

    1986-05-01

    [OCR residue from the report documentation page; recoverable fragments: the purpose of this study was to determine if Staphylococcus aureus, Saccharomyces cerevisiae ... (a medium containing sodium benzoate and maltodextrin) inoculated with A. flavus was incubated for various time periods at 30°C.]

  11. Synthesis of ganglioside epitopes for oligosaccharide-specific immunoadsorption therapy of Guillain-Barré syndrome.

    PubMed

    Andersen, Søren M; Ling, Chang-Chun; Zhang, Ping; Townson, Kate; Willison, Hugh J; Bundle, David R

    2004-04-21

    Guillain-Barré syndrome is a postinfectious, autoimmune neuropathy resulting in neuromuscular paralysis. Auto-antibodies, often induced by bacterial infection, bind to human gangliosides possessing monosialoside and diasialoside epitopes and impair the function of nerve junctions, where these ganglioside structures are highly enriched. Truncated gangliosides representative of GD3, GQ1b and GM2 epitopes have been synthesized as methyl glycosides and as glycosides of an eleven-carbon tether. The synthetic oligosaccharide ligands are structural mimics of these highly complex ganglioside epitopes and, via their ability to neutralize or remove auto-antibodies, have the potential for therapy, either as soluble blocking ligands administered systemically, or as immuno-affinity ligands for use as extracorporeal immunoadsorbents.

  12. Software Library: A Reusable Software Issue.

    DTIC Science & Technology

    1984-06-01

    [OCR residue from the report documentation page; recoverable subject terms: Software Library; Program Library; Reusability; Generator. Recoverable fragments: the Program Library, a particular example of the Software Library, is described as a prototype of a reusable library; programming libraries and non-code products in the Software Library are also discussed.]

  13. Programming in HAL/S

    NASA Technical Reports Server (NTRS)

    Ryer, M. J.

    1978-01-01

    HAL/S is a computer programming language; it is a representation for algorithms which can be interpreted by either a person or a computer. HAL/S compilers transform blocks of HAL/S code into machine language which can then be directly executed by a computer. When the machine language is executed, the algorithm specified by the HAL/S code (source) is performed. This document describes how to read and write HAL/S source.

  14. Toward enhancing the distributed video coder under a multiview video codec framework

    NASA Astrophysics Data System (ADS)

    Lee, Shih-Chieh; Chen, Jiann-Jone; Tsai, Yao-Hong; Chen, Chin-Hua

    2016-11-01

    The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be utilized to shift the encoder complexity to the decoder under the MVV coding framework, denoted as multiview distributed video coding (MDVC). We proposed to exploit both inter- and intraview video correlations to enhance side information (SI) and improve MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) was proposed to yield a high-quality SI frame for better DVC reconstructed images; (2) the block transform coefficient properties, i.e., DCs and ACs, were exploited to design the priority rate control for the turbo code, such that DVC decoding can be carried out with the fewest parity bits. In comparison, the proposed COMPETE method demonstrated lower time complexity while presenting better reconstructed video quality. Simulations show that the proposed COMPETE can reduce the time complexity of MVME by a factor of 1.29 to 2.56 compared to previous hybrid MVME methods, while the peak signal-to-noise ratios (PSNRs) of the decoded video can be improved by 0.2 to 3.5 dB compared to H.264/AVC intracoding.

  15. Progress Towards a Rad-Hydro Code for Modern Computing Architectures LA-UR-10-02825

    NASA Astrophysics Data System (ADS)

    Wohlbier, J. G.; Lowrie, R. B.; Bergen, B.; Calef, M.

    2010-11-01

    We are entering an era of high performance computing where data movement, rather than the speed of floating-point operations per processor, is the overwhelming bottleneck to scalable performance. All multi-core hardware paradigms, whether heterogeneous or homogeneous, be it the Cell processor, GPGPU, or multi-core x86, share this common trait. In multi-physics applications such as inertial confinement fusion or astrophysics, one may be solving multi-material hydrodynamics with tabular equation of state data lookups, radiation transport, nuclear reactions, and charged particle transport in a single time cycle. The algorithms are intensely data dependent (e.g., EOS, opacity, and nuclear data lookups), and multi-core hardware memory restrictions are forcing code developers to rethink code and algorithm design. For the past two years LANL has been funding a small effort, referred to as Multi-Physics on Multi-Core, to explore ideas for code design pertaining to inertial confinement fusion and astrophysics applications. The near-term goals of this project are to have a multi-material radiation hydrodynamics capability, with tabular equation of state lookups, on Cartesian and curvilinear block-structured meshes. In the longer term we plan to add fully implicit multi-group radiation diffusion and material heat conduction, and block-structured AMR. We will report on our progress to date.

  16. Helioseismic Constraints on New Solar Models from the MoSEC Code

    NASA Technical Reports Server (NTRS)

    Elliott, J. R.

    1998-01-01

    Evolutionary solar models are computed using a new stellar evolution code, MOSEC (Modular Stellar Evolution Code). This code has been designed with carefully controlled truncation errors in order to achieve a precision which reflects the increasingly accurate determination of solar interior structure by helioseismology. A series of models is constructed to investigate the effects of the choice of equation of state (OPAL or MHD-E, the latter being a version of the MHD equation of state recalculated by the author), the inclusion of helium and heavy-element settling and diffusion, and the inclusion of a simple model of mixing associated with the solar tachocline. The neutrino flux predictions are discussed, while the sound speed of the computed models is compared to that of the Sun via the latest inversion of SOI-MDI p-mode frequency data. The comparison between models calculated with the OPAL and MHD-E equations of state is particularly interesting because the MHD-E equation of state includes relativistic effects for the electrons, whereas neither MHD nor OPAL does. This has a significant effect on the sound speed of the computed model, worsening the agreement with the solar sound speed. Using the OPAL equation of state and including the settling and diffusion of helium and heavy elements produces agreement in sound speed with the helioseismic results to within about ±0.2%; the inclusion of mixing slightly improves the agreement.

  17. Metabolic engineering for high glycerol production by the anaerobic cultures of Saccharomyces cerevisiae.

    PubMed

    Semkiv, Marta V; Dmytruk, Kostyantyn V; Abbas, Charles A; Sibirny, Andriy A

    2017-06-01

    Glycerol is used by the cosmetic, paint, automotive, food, and pharmaceutical industries and for production of explosives. Currently, glycerol is available in commercial quantities as a by-product of biodiesel production, but the purity and the cost of its purification are prohibitive. The industrial production of glycerol by aerobic glucose fermentation using osmotolerant strains of the yeasts Candida sp. and Saccharomyces cerevisiae has been described. A major drawback of the aerobic process is the high cost of production. For this reason, the development of yeast strains that effectively convert glucose to glycerol anaerobically is of great importance. Due to its ability to grow under anaerobic conditions, the yeast S. cerevisiae is an ideal system for the development of this new biotechnological platform. To increase glycerol production and accumulation from glucose, we lowered the expression of the TPI1 gene coding for triose phosphate isomerase; overexpressed a fused gene consisting of the GPD1 and GPP2 parts, coding for glycerol-3-phosphate dehydrogenase and glycerol-3-phosphate phosphatase, respectively; overexpressed an engineered FPS1 gene that codes for aquaglyceroporin; and overexpressed a truncated ILV2 gene that codes for acetolactate synthase. The best constructed strain produced more than 20 g of glycerol/L from glucose under micro-aerobic conditions and 16 g of glycerol/L under anaerobic conditions.

  18. Novel modes and adaptive block scanning order for intra prediction in AV1

    NASA Astrophysics Data System (ADS)

    Hadar, Ofer; Shleifer, Ariel; Mukherjee, Debargha; Joshi, Urvang; Mazar, Itai; Yuzvinsky, Michael; Tavor, Nitzan; Itzhak, Nati; Birman, Raz

    2017-09-01

    The demand for streaming video content is on the rise and growing exponentially. Network bandwidth is very costly, and therefore there is a constant effort to improve video compression rates and enable the sending of reduced data volumes while retaining quality of experience (QoE). One basic feature that utilizes the spatial correlation of pixels for video compression is intra-prediction, which determines the codec's compression efficiency. Intra-prediction enables significant reduction of the intra-frame (I-frame) size and, therefore, contributes to efficient exploitation of bandwidth. In this presentation, we propose new intra-prediction algorithms that improve the AV1 prediction model and provide better compression ratios. Two types of methods are considered: (1) a new scanning order method that maximizes spatial correlation in order to reduce prediction error; and (2) new intra-prediction modes implemented in AV1. Modern video coding standards, including the AV1 codec, utilize fixed scan orders in processing blocks during intra coding. The fixed scan orders typically result in residual blocks with high prediction error, mainly in blocks with edges. This means that the fixed scan orders cannot fully exploit the content-adaptive spatial correlations between adjacent blocks, so the bitrate after compression tends to be large. To reduce the bitrate induced by inaccurate intra prediction, the proposed approach adaptively chooses the scanning order of blocks, predicting first those blocks with the maximum number of surrounding, already inter-predicted blocks. Using the modified scanning order and the new modes has reduced the MSE by up to five times when compared to the conventional TM mode with raster scan, and up to two times when compared to the conventional CALIC mode with raster scan, depending on the image characteristics (which determine the percentage of blocks predicted with inter-prediction, which in turn impacts the efficiency of the new scanning method). For the same cases, the PSNR was shown to improve by up to 7.4 dB and up to 4 dB, respectively. The new modes yielded a 5% improvement in BD-rate over traditionally used modes when run on K-frames, which is expected to yield a 1% overall improvement.
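
    A toy version of the adaptive scanning rule (grid size, neighbourhood, and tie-breaking here are assumptions, not the authors' specification) is to repeatedly code the block with the most already-predicted neighbours:

        def adaptive_scan(rows, cols, pre_coded):
            # pre_coded: set of (r, c) blocks already inter-predicted.
            coded = set(pre_coded)
            todo = {(r, c) for r in range(rows) for c in range(cols)} - coded
            order = []

            def support(block):
                r, c = block
                return sum(nb in coded
                           for nb in [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])

            while todo:
                nxt = max(sorted(todo), key=support)  # most coded neighbours first
                order.append(nxt)
                coded.add(nxt)
                todo.remove(nxt)
            return order

        print(adaptive_scan(3, 3, pre_coded={(1, 1)}))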

  19. Simulations of Laboratory Astrophysics Experiments using the CRASH code

    NASA Astrophysics Data System (ADS)

    Trantham, Matthew; Kuranz, Carolyn; Fein, Jeff; Wan, Willow; Young, Rachel; Keiter, Paul; Drake, R. Paul

    2015-11-01

    Computer simulations can assist in the design and analysis of laboratory astrophysics experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport, electron heat conduction and laser ray tracing. This poster will demonstrate some of the experiments the CRASH code has helped design or analyze including: Kelvin-Helmholtz, Rayleigh-Taylor, magnetized flows, jets, and laser-produced plasmas. This work is funded by the following grants: DEFC52-08NA28616, DE-NA0001840, and DE-NA0002032.

  20. A novel fractal image compression scheme with block classification and sorting based on Pearson's correlation coefficient.

    PubMed

    Wang, Jianji; Zheng, Nanning

    2013-09-01

    Fractal image compression (FIC) is an image coding technology based on the local similarity of image structure. It is widely used in many fields such as image retrieval, image denoising, image authentication, and encryption. FIC, however, suffers from the high computational complexity in encoding. Although many schemes are published to speed up encoding, they do not easily satisfy the encoding time or the reconstructed image quality requirements. In this paper, a new FIC scheme is proposed based on the fact that the affine similarity between two blocks in FIC is equivalent to the absolute value of Pearson's correlation coefficient (APCC) between them. First, all blocks in the range and domain pools are chosen and classified using an APCC-based block classification method to increase the matching probability. Second, by sorting the domain blocks with respect to APCCs between these domain blocks and a preset block in each class, the matching domain block for a range block can be searched in the selected domain set in which these APCCs are closer to APCC between the range block and the preset block. Experimental results show that the proposed scheme can significantly speed up the encoding process in FIC while preserving the reconstructed image quality well.
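
    The APCC criterion itself is a one-liner in NumPy; this sketch (function names are illustrative) scores candidate domain blocks for a range block.

        import numpy as np

        def apcc(a, b):
            # Absolute Pearson correlation between two equal-size blocks.
            a, b = a.ravel().astype(float), b.ravel().astype(float)
            if a.std() == 0 or b.std() == 0:
                return 0.0   # flat block: correlation undefined, no affine match
            return abs(np.corrcoef(a, b)[0, 1])

        def best_domain_block(range_block, domain_blocks):
            # The block with APCC closest to 1 gives the smallest affine error.
            return int(np.argmax([apcc(range_block, d) for d in domain_blocks]))

    Classifying and sorting blocks by APCC narrows the search to domain blocks whose correlation with a preset block is close to that of the range block, which is where the speed-up comes from.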

  1. Development of V/STOL methodology based on a higher order panel method

    NASA Technical Reports Server (NTRS)

    Bhateley, I. C.; Howell, G. A.; Mann, H. W.

    1983-01-01

    The development of a computational technique to predict the complex flowfields of V/STOL aircraft was initiated, in which a number of modules and a potential flow aerodynamic code were combined in a comprehensive computer program. The modules were developed in a building-block approach to assist the user in preparing the geometric input and to compute parameters needed to simulate certain flow phenomena that cannot be handled directly within a potential flow code. The PAN AIR aerodynamic code, which is a higher-order panel method, forms the nucleus of this program. PAN AIR's extensive capability for allowing generalized boundary conditions allows the modules to interact with the aerodynamic code through the input and output files, thereby requiring no changes to the basic code and easy replacement of updated modules.

  2. A General Sparse Tensor Framework for Electronic Structure Theory

    DOE PAGES

    Manzer, Samuel; Epifanovsky, Evgeny; Krylov, Anna I.; ...

    2017-01-24

    Linear-scaling algorithms must be developed in order to extend the domain of applicability of electronic structure theory to molecules of any desired size. However, the increasing complexity of modern linear-scaling methods makes code development and maintenance a significant challenge. A major contributor to this difficulty is the lack of robust software abstractions for handling block-sparse tensor operations. We therefore report the development of a highly efficient symbolic block-sparse tensor library in order to provide access to high-level software constructs to treat such problems. Our implementation supports arbitrary multi-dimensional sparsity in all input and output tensors. We avoid cumbersome machine-generated code by implementing all functionality as a high-level symbolic C++ language library, and demonstrate that our implementation attains very high performance for linear-scaling sparse tensor contractions.
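
    The core abstraction can be mimicked in a few lines (a toy dict-of-blocks model in Python, not the paper's C++ library): only nonzero blocks are stored, and a contraction loops over compatible stored blocks.

        import numpy as np

        def block_sparse_matmul(A, B):
            # A, B map block indices (I, K) / (K, J) to dense NumPy blocks.
            C = {}
            for (I, K), a in A.items():
                for (K2, J), b in B.items():
                    if K == K2:
                        C[(I, J)] = C.get((I, J), 0) + a @ b
            return C

        bs = 4
        A = {(0, 0): np.eye(bs), (1, 2): np.random.rand(bs, bs)}
        B = {(0, 1): np.random.rand(bs, bs), (2, 1): np.eye(bs)}
        print(sorted(block_sparse_matmul(A, B)))   # -> [(0, 1), (1, 1)]

    Work and storage then scale with the number of nonzero blocks rather than with the full tensor dimensions, which is the essence of linear scaling.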

  3. Representation of deformable motion for compression of dynamic cardiac image data

    NASA Astrophysics Data System (ADS)

    Weinlich, Andreas; Amon, Peter; Hutter, Andreas; Kaup, André

    2012-02-01

    We present a new approach for efficient estimation and storage of tissue deformation in dynamic medical image data like 3-D+t computed tomography reconstructions of human heart acquisitions. Tissue deformation between two points in time can be described by means of a displacement vector field indicating for each voxel of a slice, from which position in the previous slice at a fixed position in the third dimension it has moved to this position. Our deformation model represents the motion in a compact manner using a down-sampled potential function of the displacement vector field. This function is obtained by a Gauss-Newton minimization of the estimation error image, i. e., the difference between the current and the deformed previous slice. For lossless or lossy compression of volume slices, the potential function and the error image can afterwards be coded separately. By assuming deformations instead of translational motion, a subsequent coding algorithm using this method will achieve better compression ratios for medical volume data than with conventional block-based motion compensation known from video coding. Due to the smooth prediction without block artifacts, particularly whole-image transforms like wavelet decomposition as well as intra-slice prediction methods can benefit from this approach. We show that with discrete cosine as well as with Karhunen-Loève transform the method can achieve a better energy compaction of the error image than block-based motion compensation while reaching approximately the same prediction error energy.

  4. A hybrid video codec based on extended block sizes, recursive integer transforms, improved interpolation, and flexible motion representation

    NASA Astrophysics Data System (ADS)

    Karczewicz, Marta; Chen, Peisong; Joshi, Rajan; Wang, Xianglin; Chien, Wei-Jung; Panchal, Rahul; Coban, Muhammed; Chong, In Suk; Reznik, Yuriy A.

    2011-01-01

    This paper describes the video coding technology proposal submitted by Qualcomm Inc. in response to a joint call for proposals (CfP) issued by ITU-T SG16 Q.6 (VCEG) and ISO/IEC JTC1/SC29/WG11 (MPEG) in January 2010. The proposed video codec follows a hybrid coding approach based on temporal prediction, followed by transform, quantization, and entropy coding of the residual. Some of its key features are extended block sizes (up to 64x64), recursive integer transforms, single pass switched interpolation filters with offsets (single pass SIFO), mode dependent directional transform (MDDT) for intra-coding, luma and chroma high precision filtering, geometry motion partitioning, and adaptive motion vector resolution. It also incorporates internal bit-depth increase (IBDI) and modified quadtree based adaptive loop filtering (QALF). Simulation results are presented for a variety of bit rates, resolutions and coding configurations to demonstrate the high compression efficiency achieved by the proposed video codec at a moderate level of encoding and decoding complexity. For the random access hierarchical B configuration (HierB), the proposed video codec achieves an average BD-rate reduction of 30.88% compared to the H.264/AVC alpha anchor. For the low delay hierarchical P (HierP) configuration, the proposed video codec achieves an average BD-rate reduction of 32.96% and 48.57%, compared to the H.264/AVC beta and gamma anchors, respectively.

  5. Rate adaptive multilevel coded modulation with high coding gain in intensity modulation direct detection optical communication

    NASA Astrophysics Data System (ADS)

    Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao

    2018-02-01

    A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on fixed code length, together with a corresponding decoding scheme, is proposed. The RA-MLC scheme combines multilevel coding and modulation technology with a binary linear block code at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to a preset rule, and the signal is then transmitted through a standard single-mode fiber span of 100 km. The receiver improves decoding accuracy by passing soft information through the different layers, which enhances performance. Simulations are carried out in an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduced the number of decoders by 72% and realized 22 rate adaptations without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER=1E-3.

  6. A Golay complementary TS-based symbol synchronization scheme in variable rate LDPC-coded MB-OFDM UWBoF system

    NASA Astrophysics Data System (ADS)

    He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin

    2015-09-01

    In this paper, a Golay complementary training sequence (TS)-based symbol synchronization scheme is proposed and experimentally demonstrated in a multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system with a variable-rate low-density parity-check (LDPC) code. Meanwhile, the coding gain and spectral efficiency in the variable-rate LDPC-coded MB-OFDM UWBoF system are investigated. By utilizing the non-periodic auto-correlation property of the Golay complementary pair, the start point of the LDPC-coded MB-OFDM UWB signal can be estimated accurately. After 100 km standard single-mode fiber (SSMF) transmission, at a bit error rate of 1×10^-3, the experimental results show that the short-block-length 64QAM-LDPC coding provides a coding gain of 4.5 dB, 3.8 dB and 2.9 dB for code rates of 62.5%, 75% and 87.5%, respectively.
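
    The synchronization property rests on the defining feature of a Golay complementary pair: the aperiodic autocorrelations of the two sequences cancel at every nonzero lag. A short check follows, using the length-8 pair from the standard recursive construction (an illustration, not the paper's actual TS).

        import numpy as np

        a = np.array([1, 1, 1, -1, 1, 1, -1, 1])
        b = np.array([1, 1, 1, -1, -1, -1, 1, -1])

        def acorr(x):
            return np.correlate(x, x, mode='full')

        print(acorr(a) + acorr(b))   # 16 at zero lag, 0 at all other lags

    Correlating the received TS against both sequences and summing therefore yields a single sharp peak, from which the symbol start point can be estimated accurately.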

  7. Convolutional encoding of self-dual codes

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1994-01-01

    There exist almost complete convolutional encodings of self-dual codes, i.e., block codes of rate 1/2 with weights w, w = 0 mod 4. The codes are of length 8m with the convolutional portion of length 8m-2 and the nonsystematic information of length 4m-1. The last two bits are parity checks on the two (4m-1) length parity sequences. The final information bit complements one of the extended parity sequences of length 4m. Solomon and van Tilborg have developed algorithms to generate these for the Quadratic Residue (QR) Codes of lengths 48 and beyond. For these codes and reasonable constraint lengths, there are sequential decodings for both hard and soft decisions. There are also possible Viterbi-type decodings that may be simple, as in a convolutional encoding/decoding of the extended Golay Code. In addition, the previously found constraint length K = 9 for the QR (48, 24;12) Code is lowered here to K = 8.

  8. Gibraltar v 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CURRY, MATTHEW LEON; WARD, H. LEE; & SKJELLUM, ANTHONY

    Gibraltar is a library and associated test suite which performs Reed-Solomon coding and decoding of data buffers using graphics processing units which support NVIDIA's CUDA technology. This library is used to generate redundant data allowing for recovery of lost information. For example, a user can generate m new blocks of data from n original blocks, distributing those pieces over n+m devices. If any m devices fail, the contents of those devices can be recovered from the contents of the other n devices, even if some of the original blocks are lost. This is a generalized description of RAID, a technique for increasing data storage speed and size.
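
    The n+m recovery property can be illustrated with Lagrange interpolation over the small prime field GF(257); this is a conceptual stand-in for Gibraltar's CUDA Reed-Solomon coding over GF(2^8), not its API.

        P = 257  # field modulus (illustration only)

        def encode(data, m):
            # Treat the n data elements as polynomial coefficients and
            # emit n + m shares (x, p(x)).
            n = len(data)
            return [(x, sum(c * pow(x, i, P) for i, c in enumerate(data)) % P)
                    for x in range(1, n + m + 1)]

        def recover(shares, n):
            # Rebuild the coefficients from any n surviving shares.
            shares = shares[:n]
            coeffs = [0] * n
            for j, (xj, yj) in enumerate(shares):
                basis, denom = [1], 1
                for k, (xk, _) in enumerate(shares):
                    if k != j:   # multiply the basis polynomial by (x - xk)
                        basis = ([(-xk * basis[0]) % P] +
                                 [(basis[i - 1] - xk * basis[i]) % P
                                  for i in range(1, len(basis))] + [basis[-1]])
                        denom = denom * (xj - xk) % P
                inv = pow(denom, P - 2, P)   # modular inverse via Fermat
                for i in range(n):
                    coeffs[i] = (coeffs[i] + yj * inv * basis[i]) % P
            return coeffs

        data = [10, 20, 30, 40]                       # n = 4 original blocks
        shares = encode(data, m=2)                    # 6 shares; any 2 may be lost
        print(recover(shares[1:3] + shares[4:], 4))   # -> [10, 20, 30, 40]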

  9. Solution of 3-dimensional time-dependent viscous flows. Part 2: Development of the computer code

    NASA Technical Reports Server (NTRS)

    Weinberg, B. C.; Mcdonald, H.

    1980-01-01

    There is considerable interest in developing a numerical scheme for solving the time-dependent viscous compressible three-dimensional flow equations to aid in the design of helicopter rotors. The development of a computer code to solve a three-dimensional unsteady approximate form of the Navier-Stokes equations employing a linearized block implicit technique in conjunction with a QR operator scheme is described. Results of calculations of several Cartesian test cases are presented. The computer code can be applied to more complex flow fields such as those encountered on rotating airfoils.

  10. Small RNA populations revealed by blocking rRNA fragments in Drosophila melanogaster reproductive tissues

    PubMed Central

    Dalmay, Tamas

    2018-01-01

    RNA interference (RNAi) is a complex and highly conserved regulatory mechanism mediated via small RNAs (sRNAs). Recent technical advances in high throughput sequencing have enabled an increasingly detailed analysis of sRNA abundances and profiles in specific body parts and tissues. This enables investigations of the localized roles of microRNAs (miRNAs) and small interfering RNAs (siRNAs). However, variation in the proportions of non-coding RNAs in the samples being compared can hinder these analyses. Specific tissues may vary significantly in the proportions of fragments of longer non-coding RNAs (such as ribosomal RNA or transfer RNA) present, potentially reflecting tissue-specific differences in biological functions. For example, in Drosophila, some tissues contain a highly abundant 30nt rRNA fragment (the 2S rRNA) as well as abundant 5’ and 3’ terminal rRNA fragments. These can pose difficulties for the construction of sRNA libraries as they can swamp the sequencing space and obscure sRNA abundances. Here we addressed this problem and present a modified “rRNA blocking” protocol for the construction of high-definition (HD) adapter sRNA libraries, in D. melanogaster reproductive tissues. The results showed that 2S rRNAs targeted by blocking oligos were reduced from >80% to < 0.01% total reads. In addition, the use of multiple rRNA blocking oligos to bind the most abundant rRNA fragments allowed us to reveal the underlying sRNA populations at increased resolution. Side-by-side comparisons of sequencing libraries of blocked and non-blocked samples revealed that rRNA blocking did not change the miRNA populations present, but instead enhanced their abundances. We suggest that this rRNA blocking procedure offers the potential to improve the in-depth analysis of differentially expressed sRNAs within and across different tissues. PMID:29474379

  11. A platform-independent method to reduce CT truncation artifacts using discriminative dictionary representations.

    PubMed

    Chen, Yang; Budde, Adam; Li, Ke; Li, Yinsheng; Hsieh, Jiang; Chen, Guang-Hong

    2017-01-01

    When the scan field of view (SFOV) of a CT system is not large enough to enclose the entire cross-section of the patient, or the patient needs to be positioned partially outside the SFOV for certain clinical applications, truncation artifacts often appear in the reconstructed CT images. Many truncation artifact correction methods perform extrapolations of the truncated projection data based on certain a priori assumptions. The purpose of this work was to develop a novel CT truncation artifact reduction method that directly operates on DICOM images. The blooming of pixel values associated with truncation was modeled using exponential decay functions, and based on this model, a discriminative dictionary was constructed to represent truncation artifacts and nonartifact image information in a mutually exclusive way. The discriminative dictionary consists of a truncation artifact subdictionary and a nonartifact subdictionary. The truncation artifact subdictionary contains 1000 atoms with different decay parameters, while the nonartifact subdictionary contains 1000 independent realizations of Gaussian white noise that are exclusive with the artifact features. By sparsely representing an artifact-contaminated CT image with this discriminative dictionary, the image was separated into a truncation artifact-dominated image and a complementary image with reduced truncation artifacts. The artifact-dominated image was then subtracted from the original image with an appropriate weighting coefficient to generate the final image with reduced artifacts. This proposed method was validated via physical phantom studies and retrospective human subject studies. Quantitative image evaluation metrics including the relative root-mean-square error (rRMSE) and the universal image quality index (UQI) were used to quantify the performance of the algorithm. For both phantom and human subject studies, truncation artifacts at the peripheral region of the SFOV were effectively reduced, revealing soft tissue and bony structure once buried in the truncation artifacts. For the phantom study, the proposed method reduced the relative RMSE from 15% (original images) to 11%, and improved the UQI from 0.34 to 0.80. A discriminative dictionary representation method was developed to mitigate CT truncation artifacts directly in the DICOM image domain. Both phantom and human subject studies demonstrated that the proposed method can effectively reduce truncation artifacts without access to projection data. © 2016 American Association of Physicists in Medicine.
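
    A toy numpy sketch of the separation step is given below, under heavy simplifying assumptions: a 1-D profile stands in for the DICOM image, 50 atoms per subdictionary replace the paper's 1000, greedy matching pursuit stands in for whatever sparse solver the authors used, and the subtraction weight of 0.9 is picked by hand.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 128                          # samples along a radial profile near the SFOV edge
      x = np.arange(n)

      # Discriminative dictionary: exponential-decay "artifact" atoms spanning a
      # range of decay rates, plus white-noise "non-artifact" atoms.
      decay = np.stack([np.exp(-x / t) for t in np.linspace(1.0, 40.0, 50)])
      noise = rng.standard_normal((50, n))
      D = np.vstack([decay, noise])
      D /= np.linalg.norm(D, axis=1, keepdims=True)

      def matching_pursuit(y, D, k=12):
          # Greedy sparse coding: at each step add the atom best correlated
          # with the current residual.
          r, coef = y.astype(float).copy(), np.zeros(len(D))
          for _ in range(k):
              c = D @ r
              i = np.argmax(np.abs(c))
              coef[i] += c[i]
              r -= c[i] * D[i]
          return coef

      signal = 1.0 + 0.1 * np.sin(x / 9.0)       # stand-in anatomy
      artifact = 3.0 * np.exp(-x / 12.0)         # truncation "blooming" at the edge
      profile = signal + artifact

      coef = matching_pursuit(profile, D)
      artifact_est = coef[:50] @ D[:50]          # artifact-subdictionary component only
      corrected = profile - 0.9 * artifact_est   # weighted subtraction, weight hand-picked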

  12. Supplemental Cultural Resources Investigations and Site Testing for the Pointe Coupee to Arbroth Levee Enlargement and Seepage Control Project, West Baton Rouge Parish, Louisiana

    DTIC Science & Technology

    1993-07-01

    Only report-documentation-page fragments of the abstract survive. Subject terms: Bayou Plantation, archeology, tenements, Yatton Plantation, revetment projects. Abstract fragment: "This report presents the results of Phase 1 ..."

  13. Role of Interfaces and Interphases in the Evolution Mechanics of Material Systems

    DTIC Science & Technology

    1992-03-26

    Only report-documentation-page fragments survive. Contributors include K. Reifsnider, W. Stinchcomb, D. Dillard, R. Swain, K. Jayaraman, Y. Chiang, J. Lesko, M. Elahi, Z. Gao, and A. Razvan of the Materials Response Group. Abstract fragment: "This final report summarizes the activities conducted under this ..."

  14. Design and Implementation of a CMOS Chip for a Prolog

    DTIC Science & Technology

    1988-03-01

    Only fragments of the abstract survive. The design uses a P-circuit with pre-conditioning and post-conditioning circuits to generate the carry; a calculation block generates the propagate (P) and generate (G) signals needed for carry calculation, and a sum block supplies the final result. A companion software system generates vertical microcode for a general-purpose processor, the NCR 9300, from W-code.

  15. Genes involved in androgen biosynthesis and the male phenotype.

    PubMed

    Waterman, M R; Keeney, D S

    1992-01-01

    A series of enzymatic steps in the testis lead to the conversion of cholesterol to the male sex steroid hormones, testosterone and 5 alpha-dihydrotestosterone. Mutations in any one of these steps are presumed to alter or block the development of the male phenotype. Most of the genes encoding the enzymes involved in this pathway have now been cloned, and mutations within the coding regions of these genes do, in fact, block development of the male phenotype.

  16. Novel variants of the 5S rRNA genes in Eruca sativa.

    PubMed

    Singh, K; Bhatia, S; Lakshmikumaran, M

    1994-02-01

    The 5S ribosomal RNA (rRNA) genes of Eruca sativa were cloned and characterized. They are organized into clusters of tandemly repeated units. Each repeat unit consists of a 119-bp coding region followed by a noncoding spacer region that separates it from the coding region of the next repeat unit. Our study reports novel gene variants of the 5S rRNA genes in plants. Two families of the 5S rDNA, the 0.5-kb size family and the 1-kb size family, coexist in the E. sativa genome. The 0.5-kb size family consists of the 5S rRNA genes (S4) that have coding regions similar to those of other reported plant 5S rDNA sequences, whereas the 1-kb size family consists of the 5S rRNA gene variants (S1) that exist as 1-kb BamHI tandem repeats. S1 is made up of two variant units (V1 and V2) of 5S rDNA where the BamHI site between the two units is mutated. Sequence heterogeneity among S4, V1, and V2 units exists throughout the sequence and is not limited to the noncoding spacer region. The coding regions of V1 and V2 show approximately 20% dissimilarity to the coding regions of S4 and other reported plant 5S rDNA sequences. Such a large variation in the coding regions of the 5S rDNA units within the same plant species has been observed for the first time. Restriction site variation is observed between the two size classes of 5S rDNA in E. sativa. (ABSTRACT TRUNCATED AT 250 WORDS)

  17. Return Difference Feedback Design for Robust Uncertainty Tolerance in Stochastic Multivariable Control Systems.

    DTIC Science & Technology

    1982-11-01

    Only report-documentation-page fragments survive. Performing organization: Department of Electrical Engineering, University of Southern California, Los Angeles. Subject terms: systems theory; control; feedback; automatic control.

  18. COM-GEOM Interactive Display Debugger (CIDD)

    DTIC Science & Technology

    1984-08-01

    Only fragments of the abstract survive: a program was written to speed up the process of formulating the COM-GEOM target-description data used by the Geometric Information for Targets (GIFT) computer code. Subject terms: target description, GIFT, interactive computer graphics, solid geometry, combinatorial geometry. Cited: Lawrence W. Bain and Mathew J. Reisinger, "The GIFT Code User Manual; Volume I, Introduction and Input Requirements," BRL Report No. 1802.

  19. NTRFACE for MAGIC

    DTIC Science & Technology

    1989-07-31

    Only fragments of the abstract survive: "The NTRFACE system was developed ... made concrete by applying it to a specific application: a mature, highly complex plasma physics particle-in-cell simulation code named MAGIC."

  20. Rapid Trust Establishment for Transient Use of Unmanaged Hardware

    DTIC Science & Technology

    2006-12-01

    Only form and figure residue survives. Recoverable steps: (a) boot with trust initiator; (b) boot trusted host OS (from disk), validating the OS; (c) launch and validate applications, separating trusted from untrusted code. Sample user notification: "Execution of process with Id 3535 has been blocked to minimize security risks."

  1. ARES: A System for Real-Time Operational and Tactical Decision Support

    DTIC Science & Technology

    1986-12-01

    Only report-documentation-page fragments survive. Naval Postgraduate School thesis (Monterey, California). Subject terms: Decision Support System, Logistics Model, Operational ...

  2. The Role of the National Training Center during Full Mobilization

    DTIC Science & Technology

    1991-06-07

    Only report-documentation-page fragments survive (217 pages; subject terms: National Training Center (NTC), training, mobilization, combat). Recoverable fragments mention resources proposed by the study, the Regular Army and a transfer of their roles to the Reserve Component, and note that "the end of the Cold War makes future mobilization needs less likely and argues for ..."

  3. The predictive value of self-report questions in a clinical decision rule for pediatric lead poisoning screening.

    PubMed

    Kaplowitz, Stan A; Perlstadt, Harry; D'Onofrio, Gail; Melnick, Edward R; Baum, Carl R; Kirrane, Barbara M; Post, Lori A

    2012-01-01

    We derived a clinical decision rule for determining which young children need testing for lead poisoning. We developed an equation that combines lead exposure self-report questions with the child's census-block housing and socioeconomic characteristics, personal demographic characteristics, and Medicaid status. This equation better predicts elevated blood lead level (EBLL) than one using ZIP code and Medicaid status. A survey regarding potential lead exposure was administered from October 2001 to January 2003 to Michigan parents at pediatric clinics (n=3,396). These self-report survey data were linked to a statewide clinical registry of blood lead level (BLL) tests. Sensitivity and specificity were calculated and then used to estimate the cost-effectiveness of the equation. The census-block group prediction equation explained 18.1% of the variance in BLLs. Replacing block group characteristics with the self-report questions and dichotomized ZIP code risk explained only 12.6% of the variance. Adding three self-report questions to the census-block group model increased the variance explained to 19.9% and increased specificity with no loss in sensitivity in detecting EBLLs of ≥ 10 micrograms per deciliter. Relying solely on self-reports of lead exposure predicted BLL less effectively than the block group model. However, adding three of 13 self-report questions to our clinical decision rule significantly improved prediction of which children require a BLL test. Using the equation as the clinical decision rule would annually eliminate more than 7,200 unnecessary tests in Michigan and save more than $220,000.

  4. Turbine Internal and Film Cooling Modeling For 3D Navier-Stokes Codes

    NASA Technical Reports Server (NTRS)

    DeWitt, Kenneth; Garg, Vijay; Ameri, Ali

    2005-01-01

    The aim of this research project is to make use of NASA Glenn on-site computational facilities in order to develop, validate and apply aerodynamic, heat transfer, and turbine cooling models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes such as the Glenn-HT code. Specific areas of effort include: Application of the Glenn-HT code to specific configurations made available under Turbine Based Combined Cycle (TBCC), and Ultra Efficient Engine Technology (UEET) projects. Validating the use of a multi-block code for the time accurate computation of the detailed flow and heat transfer of cooled turbine airfoils. The goal of the current research is to improve the predictive ability of the Glenn-HT code. This will enable one to design more efficient turbine components for both aviation and power generation. The models will be tested against specific configurations provided by NASA Glenn.

  5. Delay Analysis of Car-to-Car Reliable Data Delivery Strategies Based on Data Mulling with Network Coding

    NASA Astrophysics Data System (ADS)

    Park, Joon-Sang; Lee, Uichin; Oh, Soon Young; Gerla, Mario; Lun, Desmond Siumen; Ro, Won Woo; Park, Joonseok

    Vehicular ad hoc networks (VANETs) aim to enhance vehicle navigation safety by providing an early warning system: any chance of accidents is communicated through wireless links between vehicles. For the warning system to work, it is crucial that safety messages be reliably delivered to the target vehicles in a timely manner; reliable and timely data dissemination service is thus the key building block of VANETs. A data mulling technique combined with three strategies, network coding, erasure coding and repetition coding, is proposed for the reliable and timely data dissemination service. In particular, vehicles in the opposite direction on a highway are exploited as data mules, mobile nodes physically delivering data to destinations, to overcome intermittent network connectivity caused by sparse vehicle traffic. Using analytic models, we show that in such a highway data mulling scenario the network coding based strategy outperforms erasure coding and repetition based strategies.

  6. Adaptive distributed source coding.

    PubMed

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.

  7. 75 FR 22165 - Request for Certification of Compliance-Rural Industrialization Loan and Grant Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-27

    ...-fit an existing manufacturing facility to produce autoclaved aerated concrete (AAC) "green" building materials. The NAICS industry code for this enterprise is: 327331 Concrete Block and Brick Manufacturing...

  8. A purified truncated form of yeast Gal4 expressed in Escherichia coli and used to functionalize poly(lactic acid) nanoparticle surface is transcriptionally active in cellulo.

    PubMed

    Legaz, Sophie; Exposito, Jean-Yves; Borel, Agnès; Candusso, Marie-Pierre; Megy, Simon; Montserret, Roland; Lahaye, Vincent; Terzian, Christophe; Verrier, Bernard

    2015-09-01

    The Gal4/UAS system is a powerful tool for the analysis of numerous biological processes. Gal4 is a large yeast transcription factor that activates genes that include UAS sequences in their promoter. Here, we have synthesized a minimal Gal4 DNA sequence coding for the binding and dimerization regions, but also part of the transcriptional activation domain. This truncated Gal4 protein was expressed as inclusion bodies in Escherichia coli. A structured and active form of this recombinant protein was purified and used to cover poly(lactic acid) (PLA) nanoparticles. In cellulo, these Gal4 vehicles were able to activate the expression of a Green Fluorescent Protein (GFP) gene under the control of UAS sequences, demonstrating that the decorated Gal4 variant can be delivered into cells where it still retains its transcription factor capacities. Thus, we have produced in E. coli and purified a short active form of Gal4 that retains its functions at the surface of PLA nanoparticles in a cellular assay. These decorated Gal4 nanoparticles will be useful to decipher their tissue distribution and their potential after ingestion or injection in UAS-GFP recombinant animal models. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Reengineering a transmembrane protein to treat muscular dystrophy using exon skipping.

    PubMed

    Gao, Quan Q; Wyatt, Eugene; Goldstein, Jeff A; LoPresti, Peter; Castillo, Lisa M; Gazda, Alec; Petrossian, Natalie; Earley, Judy U; Hadhazy, Michele; Barefield, David Y; Demonbreun, Alexis R; Bönnemann, Carsten; Wolf, Matthew; McNally, Elizabeth M

    2015-11-02

    Exon skipping uses antisense oligonucleotides as a treatment for genetic diseases. The antisense oligonucleotides used for exon skipping are designed to bypass premature stop codons in the target RNA and restore the disrupted reading frame. Exon skipping is currently being tested in humans with dystrophin gene mutations who have Duchenne muscular dystrophy. For Duchenne muscular dystrophy, the rationale for exon skipping derived from observations in patients with naturally occurring dystrophin gene mutations that generated internally deleted but partially functional dystrophin proteins. We have now expanded the potential for exon skipping by testing whether an internal, in-frame truncation of the transmembrane protein γ-sarcoglycan is functional. We generated an internally truncated γ-sarcoglycan protein that we have termed Mini-Gamma by deleting a large portion of the extracellular domain. Mini-Gamma provided functional and pathological benefits to correct the loss of γ-sarcoglycan in a Drosophila model, in heterologous cell expression studies, and in transgenic mice lacking γ-sarcoglycan. We generated a cellular model of human muscle disease and showed that multiple exon skipping could be induced in RNA that encodes a mutant human γ-sarcoglycan. Since Mini-Gamma represents removal of 4 of the 7 coding exons in γ-sarcoglycan, this approach provides a viable strategy to treat the majority of patients with γ-sarcoglycan gene mutations.

  10. Termination and read-through proteins encoded by genome segment 9 of Colorado tick fever virus.

    PubMed

    Mohd Jaafar, Fauziah; Attoui, Houssam; De Micco, Philippe; De Lamballerie, Xavier

    2004-08-01

    Genome segment 9 (Seg-9) of Colorado tick fever virus (CTFV) is 1884 bp long and contains a large open reading frame (ORF; 1845 nt in length overall), although a single in-frame stop codon (at nt 1052-1054) reduces the ORF coding capacity by approximately 40%. However, analyses of highly conserved RNA sequences in the vicinity of the stop codon indicate that it belongs to a class of 'leaky terminators'. The third nucleotide positions of codons situated both before and after the stop codon show the highest variability, suggesting that both regions are translated during virus replication. This also suggests that the stop signal is functionally leaky, allowing read-through translation to occur. Indeed, both the truncated 'termination' protein and the full-length 'read-through' protein (VP9 and VP9', respectively) were detected in CTFV-infected cells, in cells transfected with a plasmid expressing only Seg-9 protein products, and in the in vitro translation products from undenatured Seg-9 ssRNA. The ratios of full-length and truncated proteins generated suggest that read-through may be down-regulated by other viral proteins. Western blot analysis of infected cells and purified CTFV showed that VP9 is a structural component of the virion, while VP9' is a non-structural protein.

  12. Phenotypic spectrum associated with PTCHD1 deletions and truncating mutations includes intellectual disability and autism spectrum disorder.

    PubMed

    Chaudhry, A; Noor, A; Degagne, B; Baker, K; Bok, L A; Brady, A F; Chitayat, D; Chung, B H; Cytrynbaum, C; Dyment, D; Filges, I; Helm, B; Hutchison, H T; Jeng, L J B; Laumonnier, F; Marshall, C R; Menzel, M; Parkash, S; Parker, M J; Raymond, L F; Rideout, A L; Roberts, W; Rupps, R; Schanze, I; Schrander-Stumpel, C T R M; Speevak, M D; Stavropoulos, D J; Stevens, S J C; Thomas, E R A; Toutain, A; Vergano, S; Weksberg, R; Scherer, S W; Vincent, J B; Carter, M T

    2015-09-01

    Studies of genomic copy number variants (CNVs) have identified genes associated with autism spectrum disorder (ASD) and intellectual disability (ID) such as NRXN1, SHANK2, SHANK3 and PTCHD1. Deletions have been reported in PTCHD1; however, little information has been available regarding the clinical presentation of these individuals. Herein we present 23 individuals with PTCHD1 deletions or truncating mutations with detailed phenotypic descriptions. The results suggest that individuals with disruption of the PTCHD1 coding region may have subtle dysmorphic features including a long face, prominent forehead, puffy eyelids and a thin upper lip. They do not have a consistent pattern of associated congenital anomalies or growth abnormalities. They have mild to moderate global developmental delay, variable degrees of ID, and many have prominent behavioral issues. Over 40% of subjects have ASD or ASD-like behaviors. The only consistent neurological findings in our cohort are orofacial hypotonia and mild motor incoordination. Our findings suggest that hemizygous PTCHD1 loss of function causes an X-linked neurodevelopmental disorder with a strong propensity to autistic behaviors. Detailed neuropsychological studies are required to better define the cognitive and behavioral phenotype. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  13. Efficient analysis of mouse genome sequences reveal many nonsense variants

    PubMed Central

    Steeland, Sophie; Timmermans, Steven; Van Ryckeghem, Sara; Hulpiau, Paco; Saeys, Yvan; Van Montagu, Marc; Vandenbroucke, Roosmarijn E.; Libert, Claude

    2016-01-01

    Genetic polymorphisms in coding genes play an important role when using mouse inbred strains as research models. They have been shown to influence research results, explain phenotypical differences between inbred strains, and increase the amount of interesting gene variants present in the many available inbred lines. SPRET/Ei is an inbred strain derived from Mus spretus that has ∼1% sequence difference with the C57BL/6J reference genome. We obtained a listing of all SNPs and insertions/deletions (indels) present in SPRET/Ei from the Mouse Genomes Project (Wellcome Trust Sanger Institute) and processed these data to obtain an overview of all transcripts having nonsynonymous coding sequence variants. We identified 8,883 unique variants affecting 10,096 different transcripts from 6,328 protein-coding genes, which is about 28% of all coding genes. Because only a subset of these variants results in drastic changes in proteins, we focused on variations that are nonsense mutations that ultimately resulted in a gain of a stop codon. These genes were identified by in silico changing the C57BL/6J coding sequences to the SPRET/Ei sequences, converting them to amino acid (AA) sequences, and comparing the AA sequences. All variants and transcripts affected were also stored in a database, which can be browsed using a SPRET/Ei M. spretus variants web tool (www.spretus.org), including a manual. We validated the tool by demonstrating the loss of function of three proteins predicted to be severely truncated, namely Fas, IRAK2, and IFNγR1. PMID:27147605
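
    The gained-stop-codon test at the heart of that pipeline fits in a few lines. The sketch below is illustrative only: the sequence and variant position are invented, and a real pipeline must also handle indels, strand orientation, and transcript coordinates.

      STOPS = {"TAA", "TAG", "TGA"}

      def first_inframe_stop(cds):
          # Index (in codons) of the first in-frame stop, or None.
          for i in range(0, len(cds) - 2, 3):
              if cds[i:i + 3] in STOPS:
                  return i // 3
          return None

      def gains_premature_stop(ref_cds, pos, alt):
          # True if substituting base `alt` at 0-based `pos` creates an
          # earlier in-frame stop codon than the reference has.
          var_cds = ref_cds[:pos] + alt + ref_cds[pos + 1:]
          ref_stop = first_inframe_stop(ref_cds)
          var_stop = first_inframe_stop(var_cds)
          return var_stop is not None and (ref_stop is None or var_stop < ref_stop)

      # TGG (Trp) -> TGA (stop) in the second codon of a toy CDS:
      assert gains_premature_stop("ATGTGGAAATAA", 5, "A")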

  14. Chromosomal localization and sequence analysis of a human episomal sequence with in vitro differentiating activity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boccaccio, C.; Deshatrette, J.; Meunier-Rotival, M.

    1994-05-01

    The genomic fragment carrying the human activator of liver function, previously described as an episome capable of inducing differentiation upon transfection into a dedifferentiated rat hepatoma cell line, was mapped on human chromosome 12q24.2-12q24.3. This chromosomal location was indistinguishable by in situ hybridization from that of the gene coding for the hepatic transcription factor HNF1. The sequence of the integrated form of the episome as well as its flanking sequences show that it is rich in retroposons. It contains a human ribosomal protein L21 processed pseudogene, one truncated L1Hs sequence, and 10 Alu repeats, which belong to different subfamilies.

  15. NLO renormalization in the Hamiltonian truncation

    NASA Astrophysics Data System (ADS)

    Elias-Miró, Joan; Rychkov, Slava; Vitale, Lorenzo G.

    2017-09-01

    Hamiltonian truncation (also known as "truncated spectrum approach") is a numerical technique for solving strongly coupled quantum field theories, in which the full Hilbert space is truncated to a finite-dimensional low-energy subspace. The accuracy of the method is limited only by the available computational resources. The renormalization program improves the accuracy by carefully integrating out the high-energy states, instead of truncating them away. In this paper, we develop the most accurate variant of Hamiltonian truncation to date, which implements renormalization at the cubic order in the interaction strength. The novel idea is to interpret the renormalization procedure as a result of integrating out exactly a certain class of high-energy "tail states." We demonstrate the power of the method with high-accuracy computations in the strongly coupled two-dimensional quartic scalar theory and benchmark it against other existing approaches. Our work will also be useful for the future goal of extending Hamiltonian truncation to higher spacetime dimensions.
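
    The basic mechanics of the method, before any renormalization improvement, can be shown on a quantum-mechanical toy rather than the two-dimensional field theory treated in the paper: diagonalize H = p^2/2 + x^2/2 + g*x^4 in the first N harmonic-oscillator states and watch the low-lying eigenvalues converge as the cutoff N grows. A minimal numpy sketch:

      import numpy as np

      def truncated_spectrum(g, N):
          # Raw Hamiltonian truncation for the quartic anharmonic oscillator:
          # build x from ladder operators in an N-state basis and diagonalize.
          a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
          x = (a + a.T) / np.sqrt(2.0)
          H = np.diag(np.arange(N) + 0.5) + g * np.linalg.matrix_power(x, 4)
          return np.linalg.eigvalsh(H)[:4]

      # Accuracy is limited only by the cutoff; renormalization schemes like
      # the one in the paper accelerate this convergence.
      for N in (10, 20, 40, 80):
          print(N, truncated_spectrum(1.0, N))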

  16. National Centers for Environmental Prediction

    Science.gov Websites

    Spectral truncation and approximate equivalent horizontal resolution:
    T574: ~23 km
    T382: ~37 km
    T254: ~50-55 km
    T190: ~70 km
    T126: ~100 km
    UM: Unified ...

  17. Turbulence excited frequency domain damping measurement and truncation effects

    NASA Technical Reports Server (NTRS)

    Soovere, J.

    1976-01-01

    Existing frequency domain modal frequency and damping analysis methods are discussed. The effects of truncation in the Laplace and Fourier transform data analysis methods are described. Methods for eliminating truncation errors from measured damping are presented. Implications of truncation effects in fast Fourier transform analysis are discussed. Limited comparison with test data is presented.

  18. Impact of degree truncation on the spread of a contagious process on networks.

    PubMed

    Harling, Guy; Onnela, Jukka-Pekka

    2018-03-01

    Understanding how person-to-person contagious processes spread through a population requires accurate information on connections between population members. However, such connectivity data, when collected via interview, is often incomplete due to partial recall, respondent fatigue or study design, e.g., fixed choice designs (FCD) truncate out-degree by limiting the number of contacts each respondent can report. Past research has shown how FCD truncation affects network properties, but its implications for predicted speed and size of spreading processes remain largely unexplored. To study the impact of degree truncation on predictions of spreading process outcomes, we generated collections of synthetic networks containing specific properties (degree distribution, degree-assortativity, clustering), and also used empirical social network data from 75 villages in Karnataka, India. We simulated FCD using various truncation thresholds and ran a susceptible-infectious-recovered (SIR) process on each network. We found that spreading processes propagated on truncated networks resulted in slower and smaller epidemics, with a sudden decrease in prediction accuracy at a level of truncation that varied by network type. Our results have implications beyond FCD to truncation due to any limited sampling from a larger network. We conclude that knowledge of network structure is important for understanding the accuracy of predictions of process spread on degree truncated networks.

  19. Analysis of view synthesis prediction architectures in modern coding standards

    NASA Astrophysics Data System (ADS)

    Tian, Dong; Zou, Feng; Lee, Chris; Vetro, Anthony; Sun, Huifang

    2013-09-01

    Depth-based 3D formats are currently being developed as extensions to both AVC and HEVC standards. The availability of depth information facilitates the generation of intermediate views for advanced 3D applications and displays, and also enables more efficient coding of the multiview input data through view synthesis prediction techniques. This paper outlines several approaches that have been explored to realize view synthesis prediction in modern video coding standards such as AVC and HEVC. The benefits and drawbacks of various architectures are analyzed in terms of performance, complexity, and other design considerations. It is hence concluded that block-based VSP prediction for multiview video signals provides attractive coding gains with comparable complexity as traditional motion/disparity compensation.

  20. Reduction of PAPR in coded OFDM using fast Reed-Solomon codes over prime Galois fields

    NASA Astrophysics Data System (ADS)

    Motazedi, Mohammad Reza; Dianat, Reza

    2017-02-01

    In this work, two new techniques using Reed-Solomon (RS) codes over GF(257) and GF(65,537) are proposed for peak-to-average power ratio (PAPR) reduction in coded orthogonal frequency division multiplexing (OFDM) systems. The lengths of these codes are well-matched to the length of OFDM frames. Over these fields, the block lengths of codes are powers of two and we fully exploit the radix-2 fast Fourier transform algorithms. Multiplications and additions are simple modulus operations. These codes provide desirable randomness with a small perturbation in information symbols that is essential for generation of different statistically independent candidates. Our simulations show that the PAPR reduction ability of RS codes is the same as that of conventional selected mapping (SLM), but contrary to SLM, we can get error correction capability. Also for the second proposed technique, the transmission of side information is not needed. To the best of our knowledge, this is the first work using RS codes for PAPR reduction in single-input single-output systems.
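
    For reference, the conventional selected mapping (SLM) baseline mentioned above is easy to state in code. The sketch below uses random +/-1 phase sequences, where the paper instead substitutes RS codewords over GF(257) or GF(65,537) to gain error-correction capability (and, in the second technique, to avoid transmitting side information); the candidate count of 8 and frame length of 256 are illustrative.

      import numpy as np

      rng = np.random.default_rng(1)

      def papr_db(x):
          p = np.abs(x) ** 2
          return 10 * np.log10(p.max() / p.mean())

      def slm(symbols, n_candidates=8):
          # Selected mapping: rotate the frequency-domain symbols by random
          # +/-1 sequences and keep the candidate with the lowest PAPR.
          best = (np.inf, None, None)
          for u in range(n_candidates):
              phases = rng.choice([1, -1], size=len(symbols))
              cand = np.fft.ifft(symbols * phases)
              best = min(best, (papr_db(cand), u, cand), key=lambda t: t[0])
          return best  # (PAPR in dB, candidate index = side information, signal)

      # QPSK frame whose length is a power of two, matching radix-2 FFTs.
      qpsk = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=256)
      print("plain PAPR (dB):", papr_db(np.fft.ifft(qpsk)))
      print("SLM PAPR (dB): ", slm(qpsk)[0])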

  1. Design, Construction and Cloning of Truncated ORF2 and tPAsp-PADRE-Truncated ORF2 Gene Cassette From Hepatitis E Virus in the pVAX1 Expression Vector

    PubMed Central

    Farshadpour, Fatemeh; Makvandi, Manoochehr; Taherkhani, Reza

    2015-01-01

    Background: Hepatitis E Virus (HEV) is the causative agent of enterically transmitted acute hepatitis and has a high mortality rate of up to 30% among pregnant women. Therefore, development of a novel vaccine is a desirable goal. Objectives: The aim of this study was to construct tPAsp-PADRE-truncated open reading frame 2 (ORF2) and truncated ORF2 DNA plasmids, which can assist future studies with the preparation of an effective vaccine against Hepatitis E Virus. Materials and Methods: A synthetic codon-optimized gene cassette encoding tPAsp-PADRE-truncated ORF2 protein was designed, constructed and analyzed by some bioinformatics software. Furthermore, a codon-optimized truncated ORF2 gene was amplified by the polymerase chain reaction (PCR), with a specific primer from the previous construct. The constructs were sub-cloned in the pVAX1 expression vector and finally expressed in eukaryotic cells. Results: Sequence analysis and bioinformatics studies of the codon-optimized gene cassette revealed that codon adaptation index (CAI), GC content, and frequency of optimal codon usage (Fop) value were improved, and performance of the secretory signal was confirmed. Cloning and sub-cloning of the tPAsp-PADRE-truncated ORF2 gene cassette and truncated ORF2 gene were confirmed by colony PCR, restriction enzymes digestion and DNA sequencing of the recombinant plasmids pVAX-tPAsp-PADRE-truncated ORF2 (aa 112-660) and pVAX-truncated ORF2 (aa 112-660). The expression of truncated ORF2 protein in eukaryotic cells was confirmed by an immunofluorescence assay (IFA) and the reverse transcriptase polymerase chain reaction (RT-PCR) method. Conclusions: The results of this study demonstrated that the tPAsp-PADRE-truncated ORF2 gene cassette and the truncated ORF2 gene in recombinant plasmids are successfully expressed in eukaryotic cells. The immunogenicity of the two recombinant plasmids with different formulations will be evaluated as a novel DNA vaccine in future investigations. PMID:26865938

  2. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into subblocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented for the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544
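
    A compressed sketch of that pipeline is shown below, with one big simplification: a fixed 4x4 block size replaces the LFD-driven quadtree partition, and a random array stands in for a medical image. It assumes the PyWavelets (pywt) and scikit-learn packages are available.

      import numpy as np
      import pywt
      from sklearn.cluster import KMeans

      def vq_subband(band, block=4, codebook_size=32):
          # Tile the subband into block x block vectors and train a K-means
          # codebook; labels + centers are what the encoder would transmit.
          h = band.shape[0] // block * block
          w = band.shape[1] // block * block
          vecs = (band[:h, :w]
                  .reshape(h // block, block, w // block, block)
                  .swapaxes(1, 2)
                  .reshape(-1, block * block))
          km = KMeans(n_clusters=codebook_size, n_init=4, random_state=0).fit(vecs)
          return km.labels_, km.cluster_centers_

      image = np.random.default_rng(2).random((256, 256))   # stand-in image
      coeffs = pywt.wavedec2(image, "haar", level=2)
      lowpass = coeffs[0]                  # lowest-frequency subband: kept lossless
      for triplet in coeffs[1:]:           # (horizontal, vertical, diagonal) details
          for band in triplet:
              labels, codebook = vq_subband(band)
              # decoder reconstruction for this band: codebook[labels], reshaped back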

  4. Long non-coding RNA CRYBG3 blocks cytokinesis by directly binding G-actin.

    PubMed

    Pei, Hailong; Hu, Wentao; Guo, Ziyang; Chen, Huaiyuan; Ma, Ji; Mao, Weidong; Li, Bingyan; Wang, Aiqing; Wan, Jianmei; Zhang, Jian; Nie, Jing; Zhou, Guangming; Hei, Tom K

    2018-06-22

    The dynamic interchange between monomeric globular actin (G-actin) and polymeric filamentous actin filaments (F-actin) is fundamental and essential to many cellular processes including cytokinesis and maintenance of genomic stability. Here we report that the long non-coding RNA LNC CRYBG3 directly binds G-actin to inhibit its polymerization and formation of contractile rings, resulting in M-Phase cell arrest. Knockdown of LNC CRYBG3 in tumor cells enhanced their malignant phenotypes. Nucleotide sequence 228-237 of the full-length LNC CRYBG3 and the ser14 domain of beta-actin are essential for their interaction, and mutation of either of these sites abrogated binding of LNC CRYBG3 to G-actin. Binding of LNC CRYBG3 to G-actin blocked nuclear localization of MAL, which consequently kept serum response factor (SRF) away from the promoter region of several immediate early genes, including JUNB and Arp3, which are necessary for cellular proliferation, tumor growth, adhesion, movement, and metastasis. These findings reveal a novel lncRNA-actin-MAL-SRF pathway and highlight LNC CRYBG3 as a means to block cytokinesis and treat cancer by targeting the actin cytoskeleton. Copyright ©2018, American Association for Cancer Research.

  5. Thin-layer and full Navier-Stokes calculations for turbulent supersonic flow over a cone at an angle of attack

    NASA Technical Reports Server (NTRS)

    Smith, Crawford F.; Podleski, Steve D.

    1993-01-01

    The proper use of a computational fluid dynamics code requires a good understanding of the particular code being applied. In this report the application of CFL3D, a thin-layer Navier-Stokes code, is compared with the results obtained from PARC3D, a full Navier-Stokes code. In order to gain an understanding of the use of this code, a simple problem was chosen in which several key features of the code could be exercised. The problem chosen is a cone in supersonic flow at an angle of attack. The issues of grid resolution, grid blocking, and multigridding with CFL3D are explored. The use of multigridding resulted in a significant reduction in the computational time required to solve the problem. Solutions obtained are compared with the results using the full Navier-Stokes equations solver PARC3D. The results obtained with the CFL3D code compared well with the PARC3D solutions.

  6. Adaptive variable-length coding for efficient compression of spacecraft television data.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
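
    The per-block code selection is simple enough to sketch. The snippet below is not the flight algorithm: it chooses among three Rice codes with parameters k = 0, 1, 2 (an assumed code set) for each block of 21 mapped prediction residuals, and it ignores the line-to-line mode switching described above.

      def zigzag(d):
          # Map a signed residual to a non-negative integer: 0, -1, 1, -2 -> 0, 1, 2, 3.
          return 2 * d if d >= 0 else -2 * d - 1

      def rice_bits(v, k):
          # Length of the Rice codeword: unary quotient + stop bit + k remainder bits.
          return (v >> k) + 1 + k

      def encode_block(residuals):
          mapped = [zigzag(d) for d in residuals]
          costs = {k: sum(rice_bits(v, k) for v in mapped) for k in (0, 1, 2)}
          k = min(costs, key=costs.get)     # a 2-bit code id would be signalled
          return k, costs[k]

      pixels = [100, 101, 99, 99, 102, 104, 103, 103, 101, 100, 100,
                98, 97, 99, 100, 103, 105, 104, 102, 101, 100, 100]
      residuals = [b - a for a, b in zip(pixels, pixels[1:])]   # sample-to-sample prediction
      print(encode_block(residuals))       # 21 residuals -> (chosen k, bits used)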

  7. Sub-block motion derivation for merge mode in HEVC

    NASA Astrophysics Data System (ADS)

    Chien, Wei-Jung; Chen, Ying; Chen, Jianle; Zhang, Li; Karczewicz, Marta; Li, Xiang

    2016-09-01

    The new state-of-the-art video coding standard, H.265/HEVC, was finalized in 2013 and achieves roughly 50% bit rate saving compared to its predecessor, H.264/MPEG-4 AVC. In this paper, two additional merge candidates, an advanced temporal motion vector predictor and a spatial-temporal motion vector predictor, are developed to improve the motion information prediction scheme under the HEVC structure. The proposed method allows each Prediction Unit (PU) to fetch multiple sets of motion information from multiple blocks smaller than the current PU. By splitting a large PU into sub-PUs and filling in motion information for all the sub-PUs of the large PU, the signaling cost of motion information is reduced. This paper describes the above-mentioned techniques in detail and evaluates their coding performance benefits based on the common test conditions used during HEVC development. Simulation results show that a 2.4% performance improvement over HEVC can be achieved.
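
    The core idea, one motion vector per sub-block instead of per PU, is easy to sketch. The toy below is loosely modeled on the advanced temporal candidate: it fetches a vector for every 4x4 sub-block from a collocated motion field, and omits everything a real codec needs (reference-index scaling, availability checks, the spatial-temporal candidate's derivation).

      import numpy as np

      # Toy collocated motion field from the reference picture: one (mvx, mvy)
      # per 4x4 unit; values are arbitrary.
      ref_motion = np.random.default_rng(3).integers(-8, 8, size=(16, 16, 2))

      def sub_pu_motion(pu_x, pu_y, pu_w, pu_h, sub=4):
          # Each sub x sub sub-block of the PU fetches its own motion vector
          # from the collocated field rather than sharing one merge candidate.
          mvs = {}
          for y in range(pu_y, pu_y + pu_h, sub):
              for x in range(pu_x, pu_x + pu_w, sub):
                  mvs[(x, y)] = tuple(ref_motion[y // sub, x // sub])
          return mvs

      # A 16x16 PU signals one merge index but inherits 16 distinct vectors.
      print(len(sub_pu_motion(0, 0, 16, 16)))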

  8. Investigation of adaptive filtering and MDL mitigation based on space-time block-coding for spatial division multiplexed coherent receivers

    NASA Astrophysics Data System (ADS)

    Weng, Yi; He, Xuan; Yao, Wang; Pacheco, Michelle C.; Wang, Junyi; Pan, Zhongqi

    2017-07-01

    In this paper, we explored the performance of a space-time block-coding (STBC) assisted multiple-input multiple-output (MIMO) scheme for modal dispersion and mode-dependent loss (MDL) mitigation in spatial-division multiplexed optical communication systems, while the weight matrices of frequency-domain equalization (FDE) were updated heuristically using a decision-directed recursive least squares (RLS) algorithm for convergence and channel estimation. The proposed STBC-RLS algorithm can achieve a 43.6% enhancement in convergence rate over conventional least mean squares (LMS) for quadrature phase-shift keying (QPSK) signals with merely a 16.2% increase in hardware complexity. The overall optical signal to noise ratio (OSNR) tolerance can be improved via STBC by approximately 3.1, 4.9, and 7.8 dB for QPSK, 16-quadrature amplitude modulation (QAM) and 64-QAM with their respective bit-error rates (BER) and minimum mean-square error (MMSE).
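
    The space-time block code underneath such schemes is, in its simplest 2x1 form, the Alamouti code. The numpy sketch below shows only that building block on a flat, noiseless channel; the paper's actual setting (mode-division multiplexing, RLS-updated frequency-domain equalizers, MDL) is far richer.

      import numpy as np

      rng = np.random.default_rng(4)

      def alamouti_encode(s):
          # Symbol pairs (s1, s2) are sent as antenna 1: [s1, -conj(s2)],
          # antenna 2: [s2, conj(s1)].
          s1, s2 = s[0::2], s[1::2]
          ant1 = np.empty(len(s), complex)
          ant2 = np.empty(len(s), complex)
          ant1[0::2], ant1[1::2] = s1, -np.conj(s2)
          ant2[0::2], ant2[1::2] = s2, np.conj(s1)
          return ant1, ant2

      def alamouti_combine(r, h1, h2):
          # Linear ML combining for one receive antenna with flat gains h1, h2.
          r1, r2 = r[0::2], r[1::2]
          s1 = np.conj(h1) * r1 + h2 * np.conj(r2)
          s2 = np.conj(h2) * r1 - h1 * np.conj(r2)
          return np.column_stack([s1, s2]).ravel() / (abs(h1)**2 + abs(h2)**2)

      qpsk = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=64) / np.sqrt(2)
      h1, h2 = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
      a1, a2 = alamouti_encode(qpsk)
      r = h1 * a1 + h2 * a2                      # noiseless channel for clarity
      assert np.allclose(alamouti_combine(r, h1, h2), qpsk)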

  9. Vectorization, threading, and cache-blocking considerations for hydrocodes on emerging architectures

    DOE PAGES

    Fung, J.; Aulwes, R. T.; Bement, M. T.; ...

    2015-07-14

    This work reports on considerations for improving computational performance in preparation for current and expected changes to computer architecture. The algorithms studied include increasingly complex prototypes for radiation hydrodynamics codes, such as gradient routines and diffusion matrix assembly (e.g., in [1-6]). The meshes considered for the algorithms are structured or unstructured meshes. The considerations applied for performance improvements are meant to be general in terms of architecture (not specific to graphical processing units (GPUs) or multi-core machines, for example) and include techniques for vectorization, threading, tiling, and cache blocking. Out of a survey of optimization techniques on applications such as diffusion and hydrodynamics, we make general recommendations with a view toward making these techniques conceptually accessible to the applications code developer. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
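
    Of those techniques, cache blocking (tiling) is the easiest to show in miniature. The sketch below tiles a matrix multiply so each working set stays cache-resident; in a production hydrocode the loop nest would live in C or Fortran, and numpy is used here only to keep the example short. The tile size of 64 is illustrative.

      import numpy as np

      def blocked_matmul(A, B, tile=64):
          # Cache-blocked (tiled) matrix multiply: operate on tile x tile
          # sub-blocks so each sub-problem fits in cache.
          n, k, m = A.shape[0], A.shape[1], B.shape[1]
          C = np.zeros((n, m))
          for i0 in range(0, n, tile):
              for k0 in range(0, k, tile):
                  for j0 in range(0, m, tile):
                      C[i0:i0+tile, j0:j0+tile] += (
                          A[i0:i0+tile, k0:k0+tile] @ B[k0:k0+tile, j0:j0+tile])
          return C

      A = np.random.default_rng(5).random((256, 256))
      B = np.random.default_rng(6).random((256, 256))
      assert np.allclose(blocked_matmul(A, B), A @ B)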

  10. Acceleration of block-matching algorithms using a custom instruction-based paradigm on a Nios II microprocessor

    NASA Astrophysics Data System (ADS)

    González, Diego; Botella, Guillermo; García, Carlos; Prieto, Manuel; Tirado, Francisco

    2013-12-01

    This contribution focuses on the optimization of matching-based motion estimation algorithms widely used for video coding standards using an Altera custom instruction-based paradigm and a combination of synchronous dynamic random access memory (SDRAM) with on-chip memory in Nios II processors. A complete profile of the algorithms is achieved before the optimization, which locates code leaks, and afterward, creates a custom instruction set, which is then added to the specific design, enhancing the original system. As well, every possible memory combination between on-chip memory and SDRAM has been tested to achieve the best performance. The final throughput of the complete designs are shown. This manuscript outlines a low-cost system, mapped using very large scale integration technology, which accelerates software algorithms by converting them into custom hardware logic blocks and showing the best combination between on-chip memory and SDRAM for the Nios II processor.
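
    The kernel being accelerated is the classic full-search block matcher. A plain reference version is sketched below; this is the software baseline, not the custom-instruction implementation, and the block size and search range are illustrative.

      import numpy as np

      def full_search(cur, ref, bx, by, bsize=16, srange=8):
          # Exhaustive block matching: test every integer-pel offset within
          # +/-srange and return the motion vector minimizing the SAD.
          block = cur[by:by + bsize, bx:bx + bsize]
          best, best_sad = (0, 0), np.inf
          for dy in range(-srange, srange + 1):
              for dx in range(-srange, srange + 1):
                  y, x = by + dy, bx + dx
                  if 0 <= y and 0 <= x and y + bsize <= ref.shape[0] \
                          and x + bsize <= ref.shape[1]:
                      sad = np.abs(block - ref[y:y + bsize, x:x + bsize]).sum()
                      if sad < best_sad:
                          best, best_sad = (dx, dy), sad
          return best, best_sad

      rng = np.random.default_rng(7)
      ref = rng.random((64, 64))
      cur = np.roll(ref, shift=(2, -3), axis=(0, 1))   # shift content by (dy, dx) = (2, -3)
      mv, sad = full_search(cur, ref, 24, 24)
      assert mv == (3, -2) and sad == 0                # the matcher undoes the shift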

  11. Fast image interpolation for motion estimation using graphics hardware

    NASA Astrophysics Data System (ADS)

    Kelly, Francis; Kokaram, Anil

    2004-05-01

    Motion estimation and compensation are key to high quality video coding. Block matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and for post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However, sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Often interpolation can be a bottleneck in these applications, especially in motion estimation, due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full search block matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
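
    The arithmetic that graphics hardware provides essentially for free in its texture units is bilinear interpolation. A numpy reference is sketched below; a sub-pixel motion search would sample at fractional positions like this (real codecs often specify longer interpolation filters, and bilinear is simply the most basic case).

      import numpy as np

      def bilinear(img, ys, xs):
          # Sample img at fractional (y, x) positions with bilinear weights.
          y0 = np.floor(ys).astype(int)
          x0 = np.floor(xs).astype(int)
          y1 = np.minimum(y0 + 1, img.shape[0] - 1)
          x1 = np.minimum(x0 + 1, img.shape[1] - 1)
          wy, wx = ys - y0, xs - x0
          return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1]
                  + wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])

      img = np.arange(16.0).reshape(4, 4)
      # Half-pel sample centered between pixels (1,1), (1,2), (2,1), (2,2):
      print(bilinear(img, np.array([1.5]), np.array([1.5])))   # -> [7.5]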

  12. Numerical Solution of the Three-Dimensional Navier-Stokes Equation.

    DTIC Science & Technology

    1982-03-01

    Only fragments of the abstract survive: "... compressible, viscous fluid in an arbitrary geometry. We wish to use a grid-generating scheme, so we assume that the geometry of the physical problem given in ... [the Jaco]bian J of the mapping are provided. (For work on grid-generating schemes see [4], [5] or [6].) ... these limitations, the data structure used in the ILLIAC code is to partition the grid into 8 x 8 x 8 blocks. A row of these blocks in a given ..."

  13. Beer Drinking Games: Categories, Level of Risk, and their Correlation with Sensation Seeking

    DTIC Science & Technology

    1994-07-01

  14. Proposed Standard For Variable Format Picture Processing And A Codec Approach To Match Diverse Imaging Devices

    NASA Astrophysics Data System (ADS)

    Wendler, Th.; Meyer-Ebrecht, D.

    1982-01-01

    Picture archiving and communication systems, especially those for medical applications, will offer the potential to integrate the various image sources of different nature. A major problem, however, is the incompatibility of the different matrix sizes and data formats. This may be overcome by a novel hierarchical coding process, which could lead to a unified picture format standard. A picture coding scheme is described which decomposes a given (2^n)^2 picture matrix into a basic (2^m)^2 coarse information matrix (representing lower spatial frequencies) and a set of n-m detail matrices containing information of increasing spatial resolution. Thus, the picture is described by an ordered set of data blocks rather than by a full-resolution matrix of pixels. The blocks of data are transferred and stored using data formats which have to be standardized throughout the system. Picture sources which produce pictures of different resolution will provide the coarse-matrix data block and additionally only those detail matrices that correspond to their required resolution. Correspondingly, only those detail-matrix blocks need to be retrieved from the picture base that are actually required for softcopy or hardcopy output. Thus, picture sources and retrieval terminals of diverse nature, and retrieval processes for diverse purposes, are easily made compatible. Furthermore, this approach will yield an economic use of storage space and transmission capacity: in contrast to fixed formats, redundant data blocks are always skipped. The user will get a coarse representation even of a high-resolution picture almost instantaneously, with details added gradually, and may abort transmission at any desired detail level. The coding scheme applies the S-transform, which is a simple add/subtract algorithm basically derived from the Hadamard transform. Thus, an additional data compression can easily be achieved, especially for high-resolution pictures, by applying appropriate non-linear and/or adaptive quantizing.
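
    The pairwise step can be sketched directly. Below is the standard integer S-transform pair plus the recursion that yields the coarse-plus-details pyramid described above; the 2-D scheme applies the same step along rows and then columns, and the paper's exact variant may differ in detail.

      def s_forward(seq):
          # One S-transform level on an even-length sequence:
          # (a, b) -> (floor((a + b) / 2), a - b). Integer and exactly reversible.
          s = [(a + b) >> 1 for a, b in zip(seq[0::2], seq[1::2])]
          d = [a - b for a, b in zip(seq[0::2], seq[1::2])]
          return s, d

      def s_inverse(s, d):
          a = [si + ((di + 1) >> 1) for si, di in zip(s, d)]
          b = [ai - di for ai, di in zip(a, d)]
          return [v for pair in zip(a, b) for v in pair]

      row = [12, 14, 200, 202, 7, 3, 0, 255]
      coarse, detail = s_forward(row)
      assert s_inverse(coarse, detail) == row

      # Hierarchical pyramid: keep transforming the coarse part. Transmitting
      # the final coarse block first and the detail lists afterwards gives the
      # progressive, resolution-scalable stream described above.
      levels, cur = [], row
      while len(cur) > 1:
          cur, detail = s_forward(cur)
          levels.append(detail)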

  15. Preprocessor that Enables the Use of GridProTM Grids for Unsteady Reynolds-Averaged Navier-Stokes Code TURBO

    NASA Technical Reports Server (NTRS)

    Shyam, Vikram

    2010-01-01

    A preprocessor for the Computational Fluid Dynamics (CFD) code TURBO has been developed and tested. The preprocessor converts grids produced by GridPro (Program Development Company (PDC)) into a format readable by TURBO and generates the necessary input files associated with the grid. The preprocessor also generates information that enables the user to decide how to allocate the computational load in a multiple block per processor scenario.

  16. Modulation/demodulation techniques for satellite communications. Part 1: Background

    NASA Technical Reports Server (NTRS)

    Omura, J. K.; Simon, M. K.

    1981-01-01

    Basic characteristics of digital data transmission systems described include the physical communication links, the notion of bandwidth, FCC regulations, and performance measurements such as bit rates, bit error probabilities, throughputs, and delays. The error probability performance and spectral characteristics of various modulation/demodulation techniques commonly used or proposed for use in radio and satellite communication links are summarized. Forward error correction with block or convolutional codes is also discussed along with the important coding parameter, channel cutoff rate.

  17. One-sided truncated sequential t-test: application to natural resource sampling

    Treesearch

    Gary W. Fowler; William G. O'Regan

    1974-01-01

    A new procedure for constructing one-sided truncated sequential t-tests and its application to natural resource sampling are described. Monte Carlo procedures were used to develop a series of one-sided truncated sequential t-tests and the associated approximations to the operating characteristic and average sample number functions. Different truncation points and...

  18. Computing correct truncated excited state wavefunctions

    NASA Astrophysics Data System (ADS)

    Bacalis, N. C.; Xiong, Z.; Zang, J.; Karaoulanis, D.

    2016-12-01

    We demonstrate that, if a wave function's truncated expansion is small, then the standard excited-state computational method of optimizing one "root" of a secular equation may lead to an incorrect wave function - despite the correct energy, according to the theorem of Hylleraas, Undheim and McDonald - whereas our proposed method [J. Comput. Meth. Sci. Eng. 8, 277 (2008)] (independent of orthogonality to lower-lying approximants) leads to correct, reliable small truncated wave functions. The demonstration is done on He excited states, using truncated series expansions in Hylleraas coordinates, as well as standard configuration-interaction truncated expansions.

  19. A comprehensive study of MPI parallelism in three-dimensional discrete element method (DEM) simulation of complex-shaped granular particles

    NASA Astrophysics Data System (ADS)

    Yan, Beichuan; Regueiro, Richard A.

    2018-02-01

    A three-dimensional (3D) DEM code for simulating complex-shaped granular particles is parallelized using message-passing interface (MPI). The concepts of link-block, ghost/border layer, and migration layer are put forward for design of the parallel algorithm, and theoretical scalability function of 3-D DEM scalability and memory usage is derived. Many performance-critical implementation details are managed optimally to achieve high performance and scalability, such as: minimizing communication overhead, maintaining dynamic load balance, handling particle migrations across block borders, transmitting C++ dynamic objects of particles between MPI processes efficiently, eliminating redundant contact information between adjacent MPI processes. The code executes on multiple US Department of Defense (DoD) supercomputers and tests up to 2048 compute nodes for simulating 10 million three-axis ellipsoidal particles. Performance analyses of the code including speedup, efficiency, scalability, and granularity across five orders of magnitude of simulation scale (number of particles) are provided, and they demonstrate high speedup and excellent scalability. It is also discovered that communication time is a decreasing function of the number of compute nodes in strong scaling measurements. The code's capability of simulating a large number of complex-shaped particles on modern supercomputers will be of value in both laboratory studies on micromechanical properties of granular materials and many realistic engineering applications involving granular materials.
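
    The ghost/border-layer exchange at the heart of such a design can be sketched with mpi4py in one dimension; the paper's link-blocks are the 3-D analogue, and its implementation transmits C++ particle objects rather than pickled numpy arrays. Run with, e.g., mpiexec -n 4; all sizes and the layer width are illustrative.

      # Run with: mpiexec -n 4 python ghost_exchange.py
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      # Each rank owns one slab of the x axis.
      lo, hi = rank / size, (rank + 1) / size
      x = lo + (hi - lo) * np.random.default_rng(rank).random(100)

      layer = 0.2 * (hi - lo)      # border-layer width (illustrative)
      left = rank - 1 if rank > 0 else MPI.PROC_NULL
      right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

      # Particles near a face are copied to the neighbour as ghosts; paired
      # sendrecv calls avoid deadlock without any manual ordering.
      from_right = comm.sendrecv(x[x < lo + layer], dest=left, source=right)
      from_left = comm.sendrecv(x[x > hi - layer], dest=right, source=left)

      ghosts = [g for g in (from_left, from_right) if g is not None]
      # Contact detection now runs on owned particles plus ghosts only.
      print(rank, "owns", x.size, "ghosts", sum(g.size for g in ghosts))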

  20. Weight distributions for turbo codes using random and nonrandom permutations

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Divsalar, D.

    1995-01-01

    This article takes a preliminary look at the weight distributions achievable for turbo codes using random, nonrandom, and semirandom permutations. Due to the recursiveness of the encoders, it is important to distinguish between self-terminating and non-self-terminating input sequences. The non-self-terminating sequences have little effect on decoder performance, because they accumulate high encoded weight until they are artificially terminated at the end of the block. From probabilistic arguments based on selecting the permutations randomly, it is concluded that the self-terminating weight-2 data sequences are the most important consideration in the design of constituent codes; higher-weight self-terminating sequences have successively decreasing importance. Also, increasing the number of codes and, correspondingly, the number of permutations makes it more and more likely that the bad input sequences will be broken up by one or more of the permuters. It is possible to design nonrandom permutations that ensure that the minimum distance due to weight-2 input sequences grows roughly as the square root of (2N), where N is the block length. However, these nonrandom permutations amplify the bad effects of higher-weight inputs, and as a result they are inferior in performance to randomly selected permutations. But there are 'semirandom' permutations that perform nearly as well as the designed nonrandom permutations with respect to weight-2 input sequences and are not as susceptible to being foiled by higher-weight inputs.
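
    One common way to realize such "semirandom" permutations is the S-random construction: draw outputs at random, but reject any candidate within S of the outputs already assigned to the last S inputs, which keeps nearby input bits far apart after interleaving. A sketch follows; whether this matches the authors' exact construction is not stated in the abstract.

      import random

      def s_random_permutation(N, S, seed=0, max_tries=1000):
          # Greedy S-random interleaver: any two inputs within S of each other
          # are mapped at least S apart. Restart if the construction stalls.
          rng = random.Random(seed)
          for _ in range(max_tries):
              remaining = list(range(N))
              rng.shuffle(remaining)
              perm = []
              while remaining:
                  for idx, cand in enumerate(remaining):
                      if all(abs(cand - p) >= S for p in perm[-S:]):
                          perm.append(remaining.pop(idx))
                          break
                  else:
                      break        # stalled; reshuffle and try again
              if not remaining:
                  return perm
          raise RuntimeError("no S-random permutation found; lower S")

      perm = s_random_permutation(N=256, S=11)   # S near sqrt(N/2) is the usual pick
      assert sorted(perm) == list(range(256))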
