Sample records for variable length code

  1. High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin

    2016-01-01

    Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycle. In this paper, by combining the PEG algorithm and the quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding, with an efficiency of 93.7%.

  2. Real-time transmission of digital video using variable-length coding

    NASA Technical Reports Server (NTRS)

    Bizon, Thomas P.; Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.

    1993-01-01

    Huffman coding is a variable-length lossless compression technique where data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.
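
    The Huffman construction described in this record can be sketched in a few lines. The following is a minimal illustration (not the hardware implementation the paper describes): it builds a prefix code from symbol frequencies so that the dominant symbol receives the shortest codeword.

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build a Huffman prefix code from a {symbol: weight} mapping."""
    # Heap entries are (weight, tiebreak, tree); a tree is either a symbol
    # or a (left, right) pair. The tiebreak keeps tuple comparison well defined.
    heap = [(w, i, sym) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        counter += 1
        heapq.heappush(heap, (w1 + w2, counter, (t1, t2)))
    code = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            code[tree] = prefix or "0"   # single-symbol alphabet edge case
    walk(heap[0][2], "")
    return code

data = "aaaaabbbc"                       # 'a' dominates the distribution
code = huffman_code(Counter(data))
encoded = "".join(code[s] for s in data)
```

    Because 'a' occurs most often, it gets a one-bit codeword, and the nine input symbols compress to fewer bits than a fixed two-bit-per-symbol code would need.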

  3. An adaptable binary entropy coder

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.

    2001-01-01

    We present a novel entropy coding technique which is based on recursive interleaving of variable-to-variable length binary source codes. We discuss code design and performance estimation methods, as well as practical encoding and decoding algorithms.

  4. Non-tables look-up search algorithm for efficient H.264/AVC context-based adaptive variable length coding decoding

    NASA Astrophysics Data System (ADS)

    Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong

    2014-09-01

    In general, context-based adaptive variable length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to unstructured variable length coding tables (VLCTs), consuming a significant number of memory accesses. Heavy memory access causes high power consumption and time delays, which are serious problems for portable multimedia devices. We propose a method for high-efficiency CAVLC decoding that uses a program in place of all the VLCTs, so each codeword can be decoded without any table look-up or memory access. The experimental results show that the proposed algorithm achieves 100% memory-access saving and 40% decoding-time saving without degrading video quality. Additionally, the proposed algorithm outperforms conventional CAVLC decoding approaches such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.

  5. An Efficient Variable Length Coding Scheme for an IID Source

    NASA Technical Reports Server (NTRS)

    Cheung, K. -M.

    1995-01-01

    A scheme is examined for using two alternating Huffman codes to encode a discrete independent and identically distributed source with a dominant symbol. This combined strategy, or alternating runlength Huffman (ARH) coding, was found to be more efficient than ordinary Huffman coding in certain circumstances.

  6. Adaptive variable-length coding for efficient compression of spacecraft television data.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
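
    The per-block code selection that the Basic Compressor performs can be illustrated with a toy sketch. The Golomb-Rice coder family below is a hypothetical stand-in for the three concatenated codes of the actual Rice system (which the record does not specify); the point is the mechanism of trying each candidate code on a block and keeping the shortest output.

```python
def rice_encode(value, k):
    """Golomb-Rice codeword: unary quotient, terminating 0, k remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    bits = "1" * q + "0"
    if k:
        bits += format(r, "0{}b".format(k))
    return bits

def encode_block(block, options=(0, 1, 2)):
    """Encode one block with each candidate code and keep the shortest,
    mirroring per-block selection among a small set of codes."""
    candidates = {k: "".join(rice_encode(v, k) for v in block) for k in options}
    best = min(candidates, key=lambda k: len(candidates[k]))
    return best, candidates[best]
```

    A block of mostly-zero prediction residuals selects the k=0 code, while a block of large residuals selects k=2; the adaptation costs only a small code-identifier per block, not a stored codebook.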

  7. Variable weight spectral amplitude coding for multiservice OCDMA networks

    NASA Astrophysics Data System (ADS)

    Seyedzadeh, Saleh; Rahimian, Farzad Pour; Glesk, Ivan; Kakaee, Majid H.

    2017-09-01

    The emergence of heterogeneous data traffic such as voice over IP, video streaming and online gaming has demanded networks capable of supporting quality of service (QoS) at the physical layer with traffic prioritisation. This paper proposes a new variable-weight code based on spectral amplitude coding for optical code-division multiple-access (OCDMA) networks to support QoS differentiation. The proposed variable-weight multi-service (VW-MS) code relies on a basic matrix construction. A mathematical model is developed for performance evaluation of VW-MS OCDMA networks. It is shown that the proposed code provides an optimal code length with minimum cross-correlation value when compared to other codes. Numerical results are presented for a VW-MS OCDMA network designed for triple-play services operating at 0.622 Gb/s, 1.25 Gb/s and 2.5 Gb/s.

  8. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

    Table look-up plays a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up results in heavy table memory access and, in turn, high table power consumption. To reduce the memory access, and hence the power consumption, of current methods, a memory-efficient table look-up optimized algorithm is presented for CAVLD. The contribution of this paper is an index-search technique that reduces the memory accesses required for table look-up and thereby its power consumption. Specifically, our scheme uses index search to reduce the searching and matching operations for code_word by exploiting the internal relationship among the number of zeros in code_prefix, the value of code_suffix, and code_length. The experimental results show that the proposed index-search-based table look-up algorithm lowers memory-access consumption by about 60% compared with table look-up by sequential search, saving considerable power for CAVLD in H.264/AVC.
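
    The structure these two CAVLC/CAVLD records exploit (deducing a codeword from its prefix of zeros rather than searching a table) is easiest to see with H.264's Exp-Golomb codes, which are decodable purely arithmetically. The sketch below is an analogy for illustration, not the proposed algorithm; the actual coeff_token tables are more involved.

```python
def decode_ue(bits, pos=0):
    """Decode one unsigned Exp-Golomb codeword from a bit string.
    Count n leading zeros, skip the marker '1', read n info bits:
    value = 2**n - 1 + info. No look-up table or memory access needed."""
    n = 0
    while bits[pos + n] == "0":
        n += 1
    pos += n + 1
    info = bits[pos:pos + n]
    value = (1 << n) - 1 + (int(info, 2) if info else 0)
    return value, pos + n
```

    Decoding the stream "1" + "010" + "00100" yields the values 0, 1, 3 with nothing but bit counting and one addition per codeword.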

  9. Performance Analysis of Hybrid ARQ Protocols in a Slotted Code Division Multiple-Access Network

    DTIC Science & Technology

    1989-08-01

    Convolutional Codes . in Proc Int. Conf. Commun., 21.4.1-21.4.5, 1987. [27] J. Hagenauer. Rate Compatible Punctured Convolutional Codes . in Proc Int. Conf...achieved by using a low rate (r = 0.5), high constraint length (e.g., 32) punctured convolutional code . Code puncturing provides for a variable rate code ...investigated the use of convolutional codes in Type II Hybrid ARQ protocols. The error

  10. Magnetic resonance image compression using scalar-vector quantization

    NASA Astrophysics Data System (ADS)

    Mohsenian, Nader; Shahri, Homayoun

    1995-12-01

    A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation which is typical of coding schemes which use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit-rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original, when displayed on a monitor. This makes our SVQ based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all digital radiology environment in hospitals, where reliable transmission, storage, and high fidelity reconstruction of images are desired.

  11. Variational learning and bits-back coding: an information-theoretic view to Bayesian learning.

    PubMed

    Honkela, Antti; Valpola, Harri

    2004-07-01

    Bits-back coding, first introduced by Wallace in 1990 and later by Hinton and van Camp in 1993, provides an interesting link between Bayesian learning and information-theoretic minimum-description-length (MDL) learning approaches. Bits-back coding allows the cost function used in the variational Bayesian method called ensemble learning to be interpreted as a code length, in addition to the Bayesian view of it as the misfit of the posterior approximation and a lower bound of model evidence. Combining these two viewpoints provides interesting insights into the learning process and the functions of different parts of the model. In this paper, the problem of variational Bayesian learning of hierarchical latent variable models is used to demonstrate the benefits of the two views. The code-length interpretation provides new views of many parts of the problem, such as model comparison and pruning, and helps explain many phenomena occurring in learning.

  12. Spectral analysis of variable-length coded digital signals

    NASA Astrophysics Data System (ADS)

    Cariolaro, G. L.; Pierobon, G. L.; Pupolin, S. G.

    1982-05-01

    A spectral analysis is conducted for a variable-length word sequence produced by an encoder driven by a stationary memoryless source. A finite-state sequential machine is considered as a model of the line encoder, and the spectral analysis of the encoded message is performed under the assumption that the sourceword sequence is composed of independent identically distributed words. Closed-form expressions for both the continuous and discrete parts of the spectral density are derived in terms of the encoder law and sourceword statistics. The discrete part exhibits spectral lines at integer multiples of 1/(lambda_0 T), where lambda_0 is the greatest common divisor of the possible codeword lengths and T is the symbol period. The derivation of the continuous part can be conveniently factorized, and the theory is applied to the spectral analysis of BnZS and HDBn codes.
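
    The spacing of the discrete spectral lines follows directly from the greatest common divisor of the admissible codeword lengths; a tiny helper (hypothetical, for illustration only) makes the relation concrete.

```python
from functools import reduce
from math import gcd

def line_spacing(codeword_lengths, symbol_period):
    """Frequency spacing of the discrete spectral lines: 1/(lambda_0 * T),
    where lambda_0 is the gcd of the possible codeword lengths."""
    lambda_0 = reduce(gcd, codeword_lengths)
    return 1.0 / (lambda_0 * symbol_period)
```

    For codeword lengths {4, 6, 10} the gcd is 2, so with a unit symbol period the lines sit at multiples of 0.5; a code whose lengths are coprime (gcd 1) spaces its lines at the full symbol rate 1/T.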

  13. A Golay complementary TS-based symbol synchronization scheme in variable rate LDPC-coded MB-OFDM UWBoF system

    NASA Astrophysics Data System (ADS)

    He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin

    2015-09-01

    In this paper, a Golay complementary training sequence (TS)-based symbol synchronization scheme is proposed and experimentally demonstrated in multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system with a variable rate low-density parity-check (LDPC) code. Meanwhile, the coding gain and spectral efficiency in the variable rate LDPC-coded MB-OFDM UWBoF system are investigated. By utilizing the non-periodic auto-correlation property of the Golay complementary pair, the start point of LDPC-coded MB-OFDM UWB signal can be estimated accurately. After 100 km standard single-mode fiber (SSMF) transmission, at the bit error rate of 1×10-3, the experimental results show that the short block length 64QAM-LDPC coding provides a coding gain of 4.5 dB, 3.8 dB and 2.9 dB for a code rate of 62.5%, 75% and 87.5%, respectively.

  14. High-efficiency reconciliation for continuous variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Bai, Zengliang; Yang, Shenshen; Li, Yongmin

    2017-04-01

    Quantum key distribution (QKD) is the most mature application of quantum information technology. Information reconciliation is a crucial step in QKD and significantly affects the final secret key rates shared between two legitimate parties. We analyze and compare various construction methods of low-density parity-check (LDPC) codes and design high-performance irregular LDPC codes with a block length of 106. Starting from these good codes and exploiting the slice reconciliation technique based on multilevel coding and multistage decoding, we realize high-efficiency Gaussian key reconciliation with efficiency higher than 95% for signal-to-noise ratios above 1. Our demonstrated method can be readily applied in continuous variable QKD.
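
    Reconciliation efficiency in this literature is conventionally defined as the ratio of the achieved code rate to the AWGN channel capacity at the working signal-to-noise ratio; a minimal sketch under that assumption (the exact definition used in the paper may differ in detail, and the slice-reconciliation machinery is omitted):

```python
from math import log2

def reconciliation_efficiency(code_rate, snr):
    """beta = R / C, with C = 0.5 * log2(1 + SNR) bits per channel use,
    the capacity of the real AWGN channel with Gaussian signalling."""
    capacity = 0.5 * log2(1 + snr)
    return code_rate / capacity
```

    At SNR = 1 the capacity is 0.5 bit per channel use, so a rate-0.475 code corresponds to beta = 0.95, matching the ">95% above SNR 1" regime the record reports.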

  15. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    NASA Astrophysics Data System (ADS)

    Guillemot, Christine; Siohan, Pierre

    2005-12-01

    Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.

  16. Evaluation of Variable-Depth Liner Configurations for Increased Broadband Noise Reduction

    NASA Technical Reports Server (NTRS)

    Jones, M. G.; Watson, W. R.; Nark, D. M.; Howerton, B. M.

    2015-01-01

    This paper explores the effects of variable-depth geometry on the amount of noise reduction that can be achieved with acoustic liners. Results for two variable-depth liners tested in the NASA Langley Grazing Flow Impedance Tube demonstrate significant broadband noise reduction. An impedance prediction model is combined with two propagation codes to predict corresponding sound pressure level profiles over the length of the Grazing Flow Impedance Tube. The comparison of measured and predicted sound pressure level profiles is sufficiently favorable to support use of these tools for investigation of a number of proposed variable-depth liner configurations. Predicted sound pressure level profiles for these proposed configurations reveal a number of interesting features. Liner orientation clearly affects the sound pressure level profile over the length of the liner, but the effect on the total attenuation is less pronounced. The axial extent of attenuation at an individual frequency continues well beyond the location where the liner depth is optimally tuned to the quarter-wavelength of that frequency. The sound pressure level profile is significantly affected by the way in which variable-depth segments are distributed over the length of the liner. Given the broadband noise reduction capability for these liner configurations, further development of impedance prediction models and propagation codes specifically tuned for this application is warranted.

  17. HLA-E regulatory and coding region variability and haplotypes in a Brazilian population sample.

    PubMed

    Ramalho, Jaqueline; Veiga-Castelli, Luciana C; Donadi, Eduardo A; Mendes-Junior, Celso T; Castelli, Erick C

    2017-11-01

    The HLA-E gene is characterized by low but wide expression in different tissues. HLA-E is considered a conserved gene, being one of the least polymorphic class I HLA genes. The HLA-E molecule interacts with Natural Killer cell receptors and T lymphocyte receptors, and may activate or inhibit immune responses depending on the peptide associated with HLA-E and on the receptors with which it interacts. Variable sites within the HLA-E regulatory and coding segments may influence gene function by modifying its expression pattern or the encoded molecule, thus influencing its interaction with receptors and the peptide. Here we propose an approach to evaluate the gene structure, haplotype pattern and the complete HLA-E variability, including regulatory (promoter and 3'UTR) and coding segments (with introns), using massively parallel sequencing. With this approach we investigated the variability of 420 samples from a highly admixed population, Brazilians. Considering a segment of about 7 kb, 63 variable sites were detected, arranged into 75 extended haplotypes. We detected 37 different promoter sequences (but few frequent ones), 27 different coding sequences (15 representing new HLA-E alleles) and 12 haplotypes at the 3'UTR segment, two of which presented a summed frequency of 90%. Despite the number of coding alleles, they encode mainly two different full-length molecules, known as E*01:01 and E*01:03, which together correspond to about 90% of all alleles. In addition, differently from what has been previously observed for other non-classical HLA genes, the relationship among the HLA-E promoter, coding and 3'UTR haplotypes is not straightforward, because the same promoter and 3'UTR haplotypes were often associated with different HLA-E coding haplotypes. These data reinforce the presence of only two main full-length HLA-E molecules encoded by the many HLA-E alleles detected in our population sample. They also indicate that the distal HLA-E promoter is by far the most variable segment. Further analyses involving the binding of transcription factors and non-coding RNAs, as well as HLA-E expression in different tissues, are necessary to evaluate whether these variable sites in regulatory segments (or even in the coding sequence) may influence the gene expression profile.

  18. Federal Logistics Information Systems. FLIS Procedures Manual. Document Identifier Code Input/Output Formats (Variable Length). Volume 9.

    DTIC Science & Technology

    1997-04-01

    DATA COLLABORATORS 0001N B NQ 8380 NUMBER OF DATA RECEIVERS 0001N B NQ 2533 AUTHORIZED ITEM IDENTIFICATION DATA COLLABORATOR CODE 0002 ,X B 03 18 TD...01 NC 8268 DATA ELEMENT TERMINATOR CODE 000iX VT 9505 TYPE OF SCREENING CODE 0001A 01 NC 8268 DATA ELEMENT TERMINATOR CODE 000iX VT 4690 OUTPUT DATA... 9505 TYPE OF SCREENING CODE 0001A 2 89 2910 REFERENCE NUMBER CATEGORY CODE (RNCC) 0001X 2 89 4780 REFERENCE NUMBER VARIATION CODE (RNVC) 0001 N 2 89

  19. Adaptive distributed source coding.

    PubMed

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.

  20. Complete mitochondrial genome of Xingguo red carp (Cyprinus carpio var. singuonensis) and purse red carp (Cyprinus carpio var. wuyuanensis).

    PubMed

    Hu, Guang-Fu; Liu, Xiang-Jiang; Li, Zhong; Liang, Hong-Wei; Hu, Shao-Na; Zou, Gui-Wei

    2016-01-01

    The complete mitochondrial genomes of Xingguo red carp (Cyprinus carpio var. singuonensis) and purse red carp (Cyprinus carpio var. wuyuanensis) were sequenced. Comparison of the two mitochondrial genomes revealed that the mtDNAs of these two common carp varieties were remarkably similar in genome length, gene order and content, and AT content. However, the two genomes differed at 39 sites over their length: 2 in the rRNAs, 3 in the tRNAs, 3 in the control region, and 31 in protein-coding genes. The thirty-one variable bases in the protein-coding regions led to three variable amino acids, mainly located in the proteins ND5 and ND4.
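
    The site-difference tally in this comparison reduces to counting mismatched positions between aligned sequences; a minimal sketch with a hypothetical helper name:

```python
def count_site_differences(seq_a, seq_b):
    """Count positions at which two aligned, equal-length sequences differ."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    return sum(x != y for x, y in zip(seq_a, seq_b))
```

    Applied region by region (rRNAs, tRNAs, control region, protein-coding genes), such per-position counts give exactly the kind of breakdown reported above.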

  1. 28-Bit serial word simulator/monitor

    NASA Technical Reports Server (NTRS)

    Durbin, J. W.

    1979-01-01

    Modular interface unit transfers data at high speeds along four channels. Device expedites variable-word-length communication between computers. Operation eases exchange of bit information by automatically reformatting coded input data and status information to match requirements of output.

  2. Partially Key Distribution with Public Key Cryptosystem Based on Error Control Codes

    NASA Astrophysics Data System (ADS)

    Tavallaei, Saeed Ebadi; Falahati, Abolfazl

    Due to the low level of security of public key cryptosystems based on number theory, and to fundamental difficulties such as key escrow in Public Key Infrastructure (PKI) and the need for a secure channel in ID-based cryptography, a new key distribution cryptosystem based on Error Control Codes (ECC) is proposed. The idea is realized through modifications to the McEliece cryptosystem. The security of the ECC cryptosystem derives from the NP-completeness of decoding general block codes. Using ECC also provides the capability of generating public keys with variable lengths, suitable for different applications. As the security of number-theoretic cryptosystems decreases and their key lengths grow, the use of such code-based cryptosystems may become unavoidable in the future.

  3. Fault-tolerant measurement-based quantum computing with continuous-variable cluster states.

    PubMed

    Menicucci, Nicolas C

    2014-03-28

    A long-standing open question about Gaussian continuous-variable cluster states is whether they enable fault-tolerant measurement-based quantum computation. The answer is yes. Initial squeezing in the cluster above a threshold value of 20.5 dB ensures that errors from finite squeezing acting on encoded qubits are below the fault-tolerance threshold of known qubit-based error-correcting codes. By concatenating with one of these codes and using ancilla-based error correction, fault-tolerant measurement-based quantum computation of theoretically indefinite length is possible with finitely squeezed cluster states.

  4. Ensemble Weight Enumerators for Protograph LDPC Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush

    2006-01-01

    Recently, LDPC codes with projected graph, or protograph, structures have been proposed. In this paper, finite length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. The derived results on ensemble weight enumerators show that the linear minimum distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.

  5. Protograph based LDPC codes with minimum distance linearly growing with block size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends to not exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have linearly increasing minimum distance in block size, outperform those of regular LDPC codes. Furthermore, a family of low to high rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
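
    The protograph (projected graph) idea in these two records is to replicate a small base graph into a full-size parity-check matrix. The sketch below shows one common realization, lifting each base-matrix entry into an N x N circulant permutation block; the toy base matrix and the random-shift choice are illustrative assumptions, not the ensembles analyzed in the papers.

```python
import random

def lift_protograph(base, N, seed=0):
    """Lift a small 0/1 protograph base matrix into a large parity-check
    matrix: each 1 becomes an N x N circulant permutation block, each 0
    becomes the N x N zero block. Node degrees of the protograph are
    preserved exactly in the lifted graph."""
    rng = random.Random(seed)
    rows, cols = len(base) * N, len(base[0]) * N
    H = [[0] * cols for _ in range(rows)]
    for i, row in enumerate(base):
        for j, entry in enumerate(row):
            if entry:
                s = rng.randrange(N)          # circulant shift for this block
                for k in range(N):
                    H[i * N + k][j * N + (k + s) % N] = 1
    return H

proto = [[1, 1, 1, 0],
         [0, 1, 1, 1]]   # toy protograph: 2 check nodes, 4 variable nodes
H = lift_protograph(proto, N=8)
```

    Lifting by N multiplies the block length by N while keeping the degree distribution fixed, which is why properties tied to degree structure, such as sensitivity to degree-2 variable nodes, carry over from the protograph to the full code.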

  6. Measurement error and outcome distributions: Methodological issues in regression analyses of behavioral coding data.

    PubMed

    Holsclaw, Tracy; Hallgren, Kevin A; Steyvers, Mark; Smyth, Padhraic; Atkins, David C

    2015-12-01

    Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased Type I and Type II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the nature in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in online supplemental materials.

  7. Measurement error and outcome distributions: Methodological issues in regression analyses of behavioral coding data

    PubMed Central

    Holsclaw, Tracy; Hallgren, Kevin A.; Steyvers, Mark; Smyth, Padhraic; Atkins, David C.

    2015-01-01

    Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased type-I and type-II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally-technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the nature in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in supplementary materials. PMID:26098126

  8. Improving the diagnosis related grouping model's ability to explain length of stay of elderly medical inpatients by incorporating function-linked variables.

    PubMed

    Sahadevan, S; Earnest, A; Koh, Y L; Lee, K M; Soh, C H; Ding, Y Y

    2004-09-01

    This study first aimed to determine the adequacy of the Diagnosis Related Grouping (DRG) model's ability to explain (1) the variance in the actual length of stay (LOS) of elderly medical inpatients and (2) the LOS difference in the same cohort between the departments of Geriatric Medicine (GRM) and General Medicine (GM). We then looked at how these explanatory abilities of the DRG changed when patients' function-linked variables (ignored by DRG) were incorporated into the model. Basic demographic data of a consecutively hospitalised cohort of elderly medical inpatients from GRM and GM, as well as their actual LOS, discharge DRG codes [with their corresponding trimmed average length of stay (ALOS)] and selected function-linked variables (including premorbid functional status, change in functional profile during hospitalisation and number of therapists seen) were recorded. Beginning with ALOS, function-linked variables that were significantly associated with LOS were then added into two multiple linear regression models so as to quantify how the functional dimension improved the DRGs' abilities to explain LOS variances and interdepartmental LOS differences. A forward selection procedure was employed to determine the final models. For the interdepartmental analysis, the study sample was restricted to patients who shared common DRG codes. 114 GRM and 118 GM patients were studied. Trimmed ALOS alone explained 8% of the actual LOS variance. With the addition of function-linked variables, the adjusted R2 of the final model increased to 28%. Due to common code restrictions, the data of 79 GRM and 78 GM patients were available for the analysis of interdepartmental LOS differences. At the unadjusted stage, the median stay of GRM patients was 4.3 days longer than GM's and with adjustments made for the DRGs, this difference was reduced to 3.9 days.
Additionally adjusting for the patients' functional features diminished the interdepartmental LOS discrepancy even further, to 2.1 days. This study demonstrates that for elderly medical inpatients, the incorporation of patients' functional status significantly improves the DRG model's ability to predict the patients' actual LOS as well as to explain interdepartmental LOS differences between GRM and GM.

  9. Analysis of variable sites between two complete South China tiger (Panthera tigris amoyensis) mitochondrial genomes.

    PubMed

    Zhang, Wenping; Yue, Bisong; Wang, Xiaofang; Zhang, Xiuyue; Xie, Zhong; Liu, Nonglin; Fu, Wenyuan; Yuan, Yaohua; Chen, Daqing; Fu, Danghua; Zhao, Bo; Yin, Yuzhong; Yan, Xiahui; Wang, Xinjing; Zhang, Rongying; Liu, Jie; Li, Maoping; Tang, Yao; Hou, Rong; Zhang, Zhihe

    2011-10-01

    In order to investigate the mitochondrial genome of Panthera tigris amoyensis, two South China tigers (P25 and P27) were analyzed using 15 cymt-specific primer sets. The entire mtDNA sequence was found to be 16,957 bp and 17,001 bp long for P25 and P27, respectively; this difference in length arose from the number of tandem repeats in the RS-3 segment of the control region. The structural characteristics of the complete P. t. amoyensis mitochondrial genomes were also highly similar to those of P. uncia. Additionally, the rate of point mutation was only 0.3%, and a total of 59 variable sites between P25 and P27 were found. Of the 59 variable sites, 6 were located in 6 different tRNA genes, 6 in the 2 rRNA genes, 7 in non-coding regions (one located between tRNA-Asn and tRNA-Tyr and six in the D-loop), and 40 in 10 protein-coding genes. COI contained the most variable sites (9) and Cytb showed the highest rate of variation (0.7%) across the complete sequences. Moreover, of the 40 variable sites located in the 10 protein-coding genes, 12 were nonsynonymous.

  10. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    NASA Astrophysics Data System (ADS)

    Marinkovic, Slavica; Guillemot, Christine

    2006-12-01

    Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as a hypothesis testing problem over candidate impulse noise positions. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.
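
    The pseudoinverse receiver mentioned in this abstract can be illustrated with a toy, quantization-free frame expansion (the matrix, sizes, and erasure positions below are invented; the paper's OFB structure and syndrome decoding are far more elaborate): the redundancy of a "tall" synthesis matrix lets a least-squares receiver recover the message exactly after erasures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy frame expansion: an 8x4 tall matrix F stands in for an oversampled
# decomposition with redundancy 2.
F = rng.standard_normal((8, 4))
x = rng.standard_normal(4)    # message signal
y = F @ x                     # redundant representation (8 samples)

# Erase two transmitted samples (indices 2 and 5, chosen arbitrarily)
kept = [0, 1, 3, 4, 6, 7]

# Least-squares reconstruction from the surviving samples via pseudoinverse;
# the 6 remaining rows still have full column rank, so recovery is exact.
x_hat = np.linalg.pinv(F[kept]) @ y[kept]
```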

  11. On the optimality of code options for a universal noiseless coder

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Rice, Robert F.; Miller, Warner

    1991-01-01

    A universal noiseless coding structure was developed that provides efficient performance over an extremely broad range of source entropy. This is accomplished by adaptively selecting the best of several easily implemented variable-length coding algorithms. Custom VLSI coder and decoder modules capable of processing over 20 million samples per second are currently under development. The first of the code options used in this module development is shown to be equivalent to a class of Huffman codes under the Humblet condition; other options are shown to be equivalent to the Huffman codes of a modified Laplacian symbol set, at specified symbol entropy values. Simulation results are obtained on actual aerial imagery, and they confirm the optimality of the scheme. On sources having Gaussian or Poisson distributions, coder performance is also projected through analysis and simulation.
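
    The "easily implemented variable-length coding algorithms" in this family are Golomb-Rice codes selected per block of samples; a minimal sketch of one code option and of choosing the best option for a block (the parameter range and selection rule are simplified assumptions, not the flight VLSI design):

```python
def rice_encode(n, k):
    # Rice code option k: unary-coded quotient terminated by '0',
    # followed by the k low-order remainder bits
    q, r = n >> k, n & ((1 << k) - 1)
    bits = "1" * q + "0"
    if k:
        bits += format(r, "0{}b".format(k))
    return bits

def rice_decode(bits, k):
    # returns (value, number of bits consumed)
    q = 0
    i = 0
    while bits[i] == "1":
        q += 1
        i += 1
    i += 1                                   # skip the terminating '0'
    r = int(bits[i:i + k], 2) if k else 0
    return (q << k) | r, i + k

def best_option(block, kmax=8):
    # adaptively pick the option giving the shortest total encoding
    return min(range(kmax + 1),
               key=lambda k: sum(len(rice_encode(n, k)) for n in block))
```

For a block of small sample values, option k=0 (pure unary) wins; for larger, more dispersed values, a larger k wins, which is how a single simple structure covers a broad entropy range.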

  12. Neural correlates of word production stages delineated by parametric modulation of psycholinguistic variables.

    PubMed

    Wilson, Stephen M; Isenberg, Anna Lisette; Hickok, Gregory

    2009-11-01

    Word production is a complex multistage process linking conceptual representations, lexical entries, phonological forms and articulation. Previous studies have revealed a network of predominantly left-lateralized brain regions supporting this process, but many details regarding the precise functions of different nodes in this network remain unclear. To better delineate the functions of regions involved in word production, we used event-related functional magnetic resonance imaging (fMRI) to identify brain areas where blood oxygen level-dependent (BOLD) responses to overt picture naming were modulated by three psycholinguistic variables: concept familiarity, word frequency, and word length, and one behavioral variable: reaction time. Each of these variables has been suggested by prior studies to be associated with different aspects of word production. Processing of less familiar concepts was associated with greater BOLD responses in bilateral occipitotemporal regions, reflecting visual processing and conceptual preparation. Lower frequency words produced greater BOLD signal in left inferior temporal cortex and the left temporoparietal junction, suggesting involvement of these regions in lexical selection and retrieval and encoding of phonological codes. Word length was positively correlated with signal intensity in Heschl's gyrus bilaterally, extending into the mid-superior temporal gyrus (STG) and sulcus (STS) in the left hemisphere. The left mid-STS site was also modulated by reaction time, suggesting a role in the storage of lexical phonological codes.

  13. Convolutional encoding of self-dual codes

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1994-01-01

    There exist almost complete convolutional encodings of self-dual codes, i.e., block codes of rate 1/2 with weights w, w = 0 mod 4. The codes are of length 8m with the convolutional portion of length 8m-2 and the nonsystematic information of length 4m-1. The last two bits are parity checks on the two (4m-1) length parity sequences. The final information bit complements one of the extended parity sequences of length 4m. Solomon and van Tilborg have developed algorithms to generate these for the Quadratic Residue (QR) Codes of lengths 48 and beyond. For these codes and reasonable constraint lengths, there are sequential decodings for both hard and soft decisions. There are also possible Viterbi-type decodings that may be simple, as in a convolutional encoding/decoding of the extended Golay Code. In addition, the previously found constraint length K = 9 for the QR (48, 24;12) Code is lowered here to K = 8.

  14. Variability and transmission by Aphis glycines of North American and Asian Soybean mosaic virus isolates.

    PubMed

    Domier, L L; Latorre, I J; Steinlage, T A; McCoppin, N; Hartman, G L

    2003-10-01

    The variability of North American and Asian strains and isolates of Soybean mosaic virus was investigated. First, polymerase chain reaction (PCR) products representing the coat protein (CP)-coding regions of 38 SMVs were analyzed for restriction fragment length polymorphisms (RFLP). Second, the nucleotide and predicted amino acid sequence variability of the P1-coding region of 18 SMVs and the helper component/protease (HC/Pro) and CP-coding regions of 25 SMVs were assessed. The CP nucleotide and predicted amino acid sequences were the most similar and predicted phylogenetic relationships similar to those obtained from RFLP analysis. Neither RFLP nor sequence analyses of the CP-coding regions grouped the SMVs by geographical origin. The P1 and HC/Pro sequences were more variable and separated the North American and Asian SMV isolates into two groups similar to previously reported differences in pathogenic diversity of the two sets of SMV isolates. The P1 region was the most informative of the three regions analyzed. To assess the biological relevance of the sequence differences in the HC/Pro and CP coding regions, the transmissibility of 14 SMV isolates by Aphis glycines was tested. All field isolates of SMV were transmitted efficiently by A. glycines, but the laboratory isolates analyzed were transmitted poorly. The amino acid sequences from most, but not all, of the poorly transmitted isolates contained mutations in the aphid transmission-associated DAG and/or KLSC amino acid sequence motifs of CP and HC/Pro, respectively.

  15. Linear chirp phase perturbing approach for finding binary phased codes

    NASA Astrophysics Data System (ADS)

    Li, Bing C.

    2017-05-01

    Binary phased codes have many applications in communication and radar systems. These applications require binary phased codes to have low sidelobes in order to reduce interferences and false detection. Barker codes are the ones that satisfy these requirements and they have lowest maximum sidelobes. However, Barker codes have very limited code lengths (equal or less than 13) while many applications including low probability of intercept radar, and spread spectrum communication, require much higher code lengths. The conventional techniques of finding binary phased codes in literatures include exhaust search, neural network, and evolutionary methods, and they all require very expensive computation for large code lengths. Therefore these techniques are limited to find binary phased codes with small code lengths (less than 100). In this paper, by analyzing Barker code, linear chirp, and P3 phases, we propose a new approach to find binary codes. Experiments show that the proposed method is able to find long low sidelobe binary phased codes (code length >500) with reasonable computational cost.
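
    The sidelobe criterion driving this search can be made concrete with the well-known length-13 Barker code (shown here purely for illustration): its aperiodic autocorrelation peaks at 13 at zero shift, while every sidelobe has magnitude at most 1.

```python
def aperiodic_autocorrelation(code):
    # c[s] = sum_i code[i] * code[i + s], for shifts s = 0 .. N-1
    N = len(code)
    return [sum(code[i] * code[i + s] for i in range(N - s)) for s in range(N)]

# Length-13 Barker code: + + + + + - - + + - + - +
barker13 = [+1, +1, +1, +1, +1, -1, -1, +1, +1, -1, +1, -1, +1]
acf = aperiodic_autocorrelation(barker13)
peak = acf[0]                              # mainlobe
max_sidelobe = max(abs(v) for v in acf[1:])
```

Search methods like the one in this paper look for much longer binary sequences whose `max_sidelobe` stays small relative to `peak`.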

  16. Successful Recovery of Nuclear Protein-Coding Genes from Small Insects in Museums Using Illumina Sequencing.

    PubMed

    Kanda, Kojun; Pflug, James M; Sproul, John S; Dasenko, Mark A; Maddison, David R

    2015-01-01

    In this paper we explore high-throughput Illumina sequencing of nuclear protein-coding, ribosomal, and mitochondrial genes in small, dried insects stored in natural history collections. We sequenced one tenebrionid beetle and 12 carabid beetles ranging in size from 3.7 to 9.7 mm in length that have been stored in various museums for 4 to 84 years. Although we chose a number of old, small specimens for which we expected low sequence recovery, we successfully recovered at least some low-copy nuclear protein-coding genes from all specimens. For example, in one 56-year-old beetle, 4.4 mm in length, our de novo assembly recovered about 63% of approximately 41,900 nucleotides in a target suite of 67 nuclear protein-coding gene fragments, and 70% using a reference-based assembly. Even in the least successfully sequenced carabid specimen, reference-based assembly yielded fragments that were at least 50% of the target length for 34 of 67 nuclear protein-coding gene fragments. Exploration of alternative references for reference-based assembly revealed few signs of bias created by the reference. For all specimens we recovered almost complete copies of ribosomal and mitochondrial genes. We verified the general accuracy of the sequences through comparisons with sequences obtained from PCR and Sanger sequencing, including of conspecific, fresh specimens, and through phylogenetic analysis that tested the placement of sequences in predicted regions. A few possible inaccuracies in the sequences were detected, but these rarely affected the phylogenetic placement of the samples. 
Although our sample sizes are low, an exploratory regression study suggests that the dominant factor in predicting success at recovering nuclear protein-coding genes is a high number of Illumina reads, with success at PCR of COI and killing by immersion in ethanol being secondary factors; in analyses of only high-read samples, the primary significant explanatory variable was body length, with small beetles being more successfully sequenced.

  17. Successful Recovery of Nuclear Protein-Coding Genes from Small Insects in Museums Using Illumina Sequencing

    PubMed Central

    Dasenko, Mark A.

    2015-01-01

    In this paper we explore high-throughput Illumina sequencing of nuclear protein-coding, ribosomal, and mitochondrial genes in small, dried insects stored in natural history collections. We sequenced one tenebrionid beetle and 12 carabid beetles ranging in size from 3.7 to 9.7 mm in length that have been stored in various museums for 4 to 84 years. Although we chose a number of old, small specimens for which we expected low sequence recovery, we successfully recovered at least some low-copy nuclear protein-coding genes from all specimens. For example, in one 56-year-old beetle, 4.4 mm in length, our de novo assembly recovered about 63% of approximately 41,900 nucleotides in a target suite of 67 nuclear protein-coding gene fragments, and 70% using a reference-based assembly. Even in the least successfully sequenced carabid specimen, reference-based assembly yielded fragments that were at least 50% of the target length for 34 of 67 nuclear protein-coding gene fragments. Exploration of alternative references for reference-based assembly revealed few signs of bias created by the reference. For all specimens we recovered almost complete copies of ribosomal and mitochondrial genes. We verified the general accuracy of the sequences through comparisons with sequences obtained from PCR and Sanger sequencing, including of conspecific, fresh specimens, and through phylogenetic analysis that tested the placement of sequences in predicted regions. A few possible inaccuracies in the sequences were detected, but these rarely affected the phylogenetic placement of the samples. 
Although our sample sizes are low, an exploratory regression study suggests that the dominant factor in predicting success at recovering nuclear protein-coding genes is a high number of Illumina reads, with success at PCR of COI and killing by immersion in ethanol being secondary factors; in analyses of only high-read samples, the primary significant explanatory variable was body length, with small beetles being more successfully sequenced. PMID:26716693

  18. An algorithm for the design and tuning of RF accelerating structures with variable cell lengths

    NASA Astrophysics Data System (ADS)

    Lal, Shankar; Pant, K. K.

    2018-05-01

    An algorithm is proposed for the design of a π mode standing wave buncher structure with variable cell lengths. It employs a two-parameter, multi-step approach for the design of the structure with desired resonant frequency and field flatness. The algorithm, along with analytical scaling laws for the design of the RF power coupling slot, makes it possible to accurately design the structure employing a freely available electromagnetic code like SUPERFISH. To compensate for machining errors, a tuning method has been devised to achieve desired RF parameters for the structure, which has been qualified by the successful tuning of a 7-cell buncher to π mode frequency of 2856 MHz with field flatness <3% and RF coupling coefficient close to unity. The proposed design algorithm and tuning method have demonstrated the feasibility of developing an S-band accelerating structure for desired RF parameters with a relatively relaxed machining tolerance of ∼ 25 μm. This paper discusses the algorithm for the design and tuning of an RF accelerating structure with variable cell lengths.

  19. HIV1 V3 loop hypermutability is enhanced by the guanine usage bias in the part of env gene coding for it.

    PubMed

    Khrustalev, Vladislav Victorovich

    2009-01-01

    Guanine is the most mutable nucleotide in HIV genes because of frequently occurring G to A transitions, which are caused by cytosine deamination in viral DNA minus strands catalyzed by APOBEC enzymes. Distribution of guanine between three codon positions should influence the probability for G to A mutation to be nonsynonymous (to occur in first or second codon position). We discovered that nucleotide sequences of env genes coding for third variable regions (V3 loops) of gp120 from HIV1 and HIV2 have different kinds of guanine usage biases. In the HIV1 reference strain and 100 additionally analyzed HIV1 strains the guanine usage bias in V3 loop coding regions (2G>1G>3G) should lead to elevated nonsynonymous G to A transitions occurrence rates. In the HIV2 reference strain and 100 other HIV2 strains guanine usage bias in V3 loop coding regions (3G>2G>1G) should protect V3 loops from hypermutability. According to the HIV1 and HIV2 V3 alignment, insertion of the sequence enriched with 2G (21 codons in length) occurred during the evolution of HIV1 predecessor, while insertion of the different sequence enriched with 3G (19 codons in length) occurred during the evolution of HIV2 predecessor. The higher is the level of 3G in the V3 coding region, the lower should be the immune escaping mutation occurrence rates. This hypothesis was tested in this study by comparing the guanine usage in V3 loop coding regions from HIV1 fast and slow progressors. All calculations have been performed by our algorithms "VVK In length", "VVK Dinucleotides" and "VVK Consensus" (www.barkovsky.hotmail.ru).
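
    The codon-position bookkeeping behind this argument is simple to sketch (the sequence below is an invented toy, not an HIV sequence): since G to A transitions in codon positions 1 or 2 are more likely to be nonsynonymous, per-position guanine counts indicate how vulnerable a coding region is to APOBEC-driven amino acid change.

```python
def guanine_by_codon_position(seq):
    """Count G at codon positions 1, 2 and 3 of an in-frame coding sequence."""
    counts = {1: 0, 2: 0, 3: 0}
    for i, base in enumerate(seq.upper()):
        if base == "G":
            counts[i % 3 + 1] += 1
    return counts

toy = "GGTAGCATGGACGGA"   # hypothetical in-frame sequence (5 codons)
counts = guanine_by_codon_position(toy)
```

A bias like 2G>1G>3G (more G in positions 1 and 2 than 3, as reported for HIV1 V3) means a larger share of G to A transitions would change the encoded amino acid.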

  20. 3D radiation belt diffusion model results using new empirical models of whistler chorus and hiss

    NASA Astrophysics Data System (ADS)

    Cunningham, G.; Chen, Y.; Henderson, M. G.; Reeves, G. D.; Tu, W.

    2012-12-01

    3D diffusion codes model the energization, radial transport, and pitch angle scattering due to wave-particle interactions. Diffusion codes are powerful but are limited by the lack of knowledge of the spatial & temporal distribution of waves that drive the interactions for a specific event. We present results from the 3D DREAM model using diffusion coefficients driven by new, activity-dependent, statistical models of chorus and hiss waves. Most 3D codes parameterize the diffusion coefficients or wave amplitudes as functions of magnetic activity indices like Kp, AE, or Dst. These functional representations produce the average value of the wave intensities for a given level of magnetic activity; however, the variability of the wave population at a given activity level is lost with such a representation. Our 3D code makes use of the full sample distributions contained in a set of empirical wave databases (one database for each wave type, including plasmaspheric hiss and lower- and upper-band chorus) that were recently produced by our team using CRRES and THEMIS observations. The wave databases store the full probability distribution of observed wave intensity binned by AE, MLT, MLAT and L*. In this presentation, we show results that make use of the wave intensity sample probability distributions for lower-band and upper-band chorus by sampling the distributions stochastically during a representative CRRES-era storm. The sampling of the wave intensity probability distributions produces a collection of possible evolutions of the phase space density, which quantifies the uncertainty in the model predictions caused by the uncertainty of the chorus wave amplitudes for a specific event. A significant issue is the determination of an appropriate model for the spatio-temporal correlations of the wave intensities, since the diffusion coefficients are computed as spatio-temporal averages of the waves over MLT, MLAT and L*.
The spatiotemporal correlations cannot be inferred from the wave databases. In this study we use a temporal correlation of ~1 hour for the sampled wave intensities, informed by the observed autocorrelation in the AE index; a spatial correlation length of ~100 km in the two directions perpendicular to the magnetic field; and a spatial correlation length of 5000 km in the direction parallel to the magnetic field, following the work of Santolik et al. (2003), who used multi-spacecraft measurements from Cluster to quantify the correlation length scales for equatorial chorus. We find that, despite the small correlation length scale for chorus, there remains significant variability in the model outcomes driven by variability in the chorus wave intensities.
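
    The stochastic sampling strategy described here can be sketched in a few lines (the bin centers and probabilities below are invented placeholders, not the CRRES/THEMIS values): each Monte Carlo realization draws wave intensities from the binned empirical distribution for the current activity level, rather than always using the mean intensity, and the spread across realizations quantifies the prediction uncertainty.

```python
import random

random.seed(1)

# Hypothetical binned wave-intensity distribution for one (AE, MLT, MLAT, L*)
# bin: bin-center intensities and their observed probabilities (invented).
centers = [1, 3, 10, 30, 100]
probs   = [0.40, 0.30, 0.15, 0.10, 0.05]

def sample_intensity(k=1):
    # one stochastic draw per model time step and spatial bin
    return random.choices(centers, weights=probs, k=k)

# Each Monte Carlo run of the diffusion model would use its own intensity
# history; here we just check that many draws reproduce the bin-mean.
draws = sample_intensity(k=5000)
mean = sum(draws) / len(draws)
```

In the full model, consecutive draws would additionally be correlated in time (~1 hour) and space, per the correlation model discussed above.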

  1. Complete mitochondrial genome of Yangtze River wild common carp (Cyprinus carpio haematopterus) and Russian scattered scale mirror carp (Cyprinus carpio carpio).

    PubMed

    Hu, Guang Fu; Liu, Xiang Jiang; Zou, Gui Wei; Li, Zhong; Liang, Hong-Wei; Hu, Shao-Na

    2016-01-01

    We sequenced the complete mitogenomes of Yangtze River wild common carp (Cyprinus carpio haematopterus) and Russian scattered scale mirror carp (Cyprinus carpio carpio). Comparison of these two mitogenomes revealed that the mitogenomes of these two common carp strains were remarkably similar in genome length, gene order and content, and AT content. There were only 55 bp variations in 16,581 nucleotides. Of these, 1 bp variation was located in rRNAs, 2 bp in tRNAs, 9 bp in the control region and 43 bp in protein-coding genes. Furthermore, the forty-three variable nucleotides in the protein-coding genes of the two strains led to four variable amino acids, which were located in the ND2, ATPase 6, ND5 and ND6 genes, respectively.

  2. Optimal Codes for the Burst Erasure Channel

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2010-01-01

    Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. 
The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure protection. As can be seen, the simple interleaved RS codes have substantially lower inefficiency over a wide range of transmission lengths.
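
    The block-interleaving idea can be demonstrated with the simplest MDS component code, a single parity check over bits (the sizes and data below are invented for illustration): with interleaving depth d, a burst of up to d consecutive erasures touches each codeword at most once, so every codeword is repairable from its parity.

```python
def spc_encode(word):
    # single parity check code: append the XOR of the data symbols
    p = 0
    for s in word:
        p ^= s
    return word + [p]

def interleave(codewords):
    # transmit symbol 0 of every codeword, then symbol 1, and so on
    n = len(codewords[0])
    return [cw[i] for i in range(n) for cw in codewords]

def recover(stream, d, n):
    # de-interleave into d codewords of length n; a burst of <= d erasures
    # (None) hits each codeword at most once and is repaired via parity
    cws = [[stream[i * d + j] for i in range(n)] for j in range(d)]
    data = []
    for cw in cws:
        if None in cw:
            miss = cw.index(None)
            p = 0
            for i, s in enumerate(cw):
                if i != miss:
                    p ^= s
            cw[miss] = p
        data.append(cw[:-1])               # strip parity
    return data

data = [[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 0, 1]]
cws = [spc_encode(w) for w in data]        # four codewords, n = 4, depth d = 4
stream = interleave(cws)
for t in range(5, 9):                      # burst erasure of length d = 4
    stream[t] = None
recovered = recover(stream, d=4, n=4)
```

Replacing the SPC component with a Reed-Solomon code extends the same construction to multiple erasures per codeword, which is the near-optimal scheme the abstract describes.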

  3. Code development for ships -- A demonstration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ayyub, B.; Mansour, A.E.; White, G.

    1996-12-31

    A demonstration summary of a reliability-based structural design code for ships is presented for two ship types, a cruiser and a tanker. For both ship types, code requirements cover four failure modes: hull girder buckling, unstiffened plate yielding and buckling, stiffened plate buckling, and fatigue of critical details. Both serviceability and ultimate limit states are considered. Because of length limitations, only hull girder modes are presented in this paper. Code requirements for other modes will be presented in a future publication. A specific provision of the code will be a safety check expression. The design variables are to be taken at their nominal values, typically values on the safe side of the respective distributions. Other safety check expressions for hull girder failure that include load combination factors, as well as consequence of failure factors, are considered. This paper provides a summary of safety check expressions for the hull girder modes.

  4. Factors Related to Outcome in a School-Based Intensive Mental Health Program: An Examination of Nonresponders

    ERIC Educational Resources Information Center

    Jacobs, Anne K.; Roberts, Michael C.; Vernberg, Eric M.; Nyre, Joseph E.; Randall, Camille J.; Puddy, Richard W.

    2008-01-01

    We examined factors related to treatment responders (n = 35) and nonresponders (n = 16) in a group of 51 children admitted to the Intensive Mental Health Program (IMHP). Children's response to treatment was coded based on their functioning at intake and discharge using total CAFAS scores. Demographic variables, length of treatment, number of…

  5. 2D hydrodynamic simulations of a variable length gas target for density down-ramp injection of electrons into a laser wakefield accelerator

    NASA Astrophysics Data System (ADS)

    Kononenko, O.; Lopes, N. C.; Cole, J. M.; Kamperidis, C.; Mangles, S. P. D.; Najmudin, Z.; Osterhoff, J.; Poder, K.; Rusby, D.; Symes, D. R.; Warwick, J.; Wood, J. C.; Palmer, C. A. J.

    2016-09-01

    In this work, two-dimensional (2D) hydrodynamic simulations of a variable length gas cell were performed using the open source fluid code OpenFOAM. The gas cell was designed to study controlled injection of electrons into a laser-driven wakefield at the Astra Gemini laser facility. The target consists of two compartments: an accelerator and an injector section connected via an aperture. A sharp transition between the peak and plateau density regions in the injector and accelerator compartments, respectively, was observed in simulations with various inlet pressures. The fluid simulations indicate that the length of the down-ramp connecting the sections depends on the aperture diameter, as does the density drop outside the entrance and the exit cones. Further studies showed that increasing the inlet pressure leads to turbulence and strong fluctuations in density along the axial profile during target filling and, consequently, is expected to negatively impact the accelerator stability.

  6. [Main results of the Swiss study on DRGs (Casemix Study)].

    PubMed

    Casemix, E

    1989-01-01

    Sponsored by the Health Administrations of nine cantons, this study was conducted by the University Institute of Social and Preventive Medicine in Lausanne in order to assess how DRGs could be used within the Swiss context. A data base mainly provided by the Swiss VESKA statistics was used. The first step involved the translation of Swiss diagnostic and intervention codes into US codes, allowing direct use of the Yale Grouper for DRG assignment. The second step showed that the overall performance of DRGs in terms of reducing the variability of the length of stay was similar to that observed in the US; there are, however, problems when the homogeneity of medicotechnical procedures within a DRG is considered. The third step showed how DRGs could be used as an accounting unit in hospitals, and how costs per DRG could be estimated. Other examples of applications of DRGs were examined, for example comparison of casemix or length of stay between hospitals.

  7. Reconciliation of international administrative coding systems for comparison of colorectal surgery outcome.

    PubMed

    Munasinghe, A; Chang, D; Mamidanna, R; Middleton, S; Joy, M; Penninckx, F; Darzi, A; Livingston, E; Faiz, O

    2014-07-01

    Significant variation in colorectal surgery outcomes exists between different countries. Better understanding of the sources of variable outcomes using administrative data requires alignment of differing clinical coding systems. We aimed to map similar diagnoses and procedures across administrative coding systems used in different countries. Administrative data were collected in a central database as part of the Global Comparators (GC) Project. In order to unify these data, a systematic translation of diagnostic and procedural codes was undertaken. Codes for colorectal diagnoses, resections, operative complications and reoperative interventions were mapped across the respective national healthcare administrative coding systems. Discharge data from January 2006 to June 2011 for patients who had undergone colorectal surgical resections were analysed to generate risk-adjusted models for mortality, length of stay, readmissions and reoperations. In all, 52 544 case records were collated from 31 institutions in five countries. Mapping of all the coding systems was achieved so that diagnosis and procedures from the participant countries could be compared. Using the aligned coding systems to develop risk-adjusted models, the 30-day mortality rate for colorectal surgery was 3.95% (95% CI 0.86-7.54), the 30-day readmission rate was 11.05% (5.67-17.61), the 28-day reoperation rate was 6.13% (3.68-9.66) and the mean length of stay was 14 (7.65-46.76) days. The linkage of international hospital administrative data that we developed enabled comparison of documented surgical outcomes between countries. This methodology may facilitate international benchmarking. Colorectal Disease © 2014 The Association of Coloproctology of Great Britain and Ireland.

  8. Analysis of the Length of Braille Texts in English Braille American Edition, the Nemeth Code, and Computer Braille Code versus the Unified English Braille Code

    ERIC Educational Resources Information Center

    Knowlton, Marie; Wetzel, Robin

    2006-01-01

    This study compared the length of text in English Braille American Edition, the Nemeth code, and the computer braille code with the Unified English Braille Code (UEBC)--also known as Unified English Braille (UEB). The findings indicate that differences in the length of text are dependent on the type of material that is transcribed and the grade…

  9. Vocal tract length and formant frequency dispersion correlate with body size in rhesus macaques.

    PubMed

    Fitch, W T

    1997-08-01

Body weight, length, and vocal tract length were measured for 23 rhesus macaques (Macaca mulatta) of various sizes using radiographs and computer graphic techniques. Linear predictive coding analysis of tape-recorded threat vocalizations was used to determine vocal tract resonance frequencies ("formants") for the same animals. A new acoustic variable is proposed, "formant dispersion," which should theoretically depend upon vocal tract length. Formant dispersion is the averaged difference between successive formant frequencies, and was found to be closely tied to both vocal tract length and body size. Despite the common claim that voice fundamental frequency (F0) provides an acoustic indication of body size, repeated investigations have failed to support such a relationship in many vertebrate species including humans. Formant dispersion, unlike voice pitch, is proposed to be a reliable predictor of body size in macaques, and probably many other species.
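As defined above, formant dispersion is the average spacing between successive formant frequencies, which telescopes to (FN - F1)/(N - 1). A minimal sketch (the formant values below are hypothetical, not measurements from the study):

```python
def formant_dispersion(formants):
    """Average spacing between successive formant frequencies (Hz).

    For formants F1..FN this telescopes to (FN - F1) / (N - 1).
    """
    if len(formants) < 2:
        raise ValueError("need at least two formants")
    diffs = [b - a for a, b in zip(formants, formants[1:])]
    return sum(diffs) / len(diffs)

# Hypothetical formant estimates from an LPC analysis (Hz):
print(formant_dispersion([1100.0, 2500.0, 3900.0, 5300.0]))  # 1400.0
```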

  10. Advanced Noise Control Fan: A 20-Year Retrospective

    NASA Technical Reports Server (NTRS)

    Sutliff, Dan

    2016-01-01

The ANCF test bed is used for evaluating fan noise reduction concepts, developing noise measurement technologies, and providing a database for aero-acoustic code development. Rig capabilities: 4-foot, 16-bladed rotor at 2500 rpm; auxiliary air delivery system (3 lbm/sec at 6/12 psi); variable configuration (rotor pitch angle, stator count/position, duct length); synthetic acoustic noise generation (tone/broadband). Measurement capabilities: 112-channel dynamic data system; unique rotating rake mode measurement; farfield (variable radius); duct wall microphones; stator vane microphones; two-component CTA with traversing; ESP for static pressures.

  11. Applications of the generalized information processing system (GIPSY)

    USGS Publications Warehouse

    Moody, D.W.; Kays, Olaf

    1972-01-01

The Generalized Information Processing System (GIPSY) stores and retrieves variable-field, variable-length records consisting of numeric data, textual data, or codes. A particularly noteworthy feature of GIPSY is its ability to search records for words, word stems, prefixes, and suffixes as well as for numeric values. Moreover, retrieved records may be printed on pre-defined formats or formatted as fixed-field, fixed-length records for direct input to other programs, which facilitates the exchange of data with other systems. At present there are some 22 applications of GIPSY falling in the general areas of bibliography, natural resources information, and management science. This report presents a description of each application including a sample input form, dictionary, and a typical formatted record. It is hoped that these examples will stimulate others to experiment with innovative uses of computer technology.
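The stem/prefix/suffix search capability described above can be illustrated with a small matcher. The wildcard syntax here is a hypothetical stand-in; the report does not specify GIPSY's actual query notation:

```python
def term_matches(word: str, pattern: str) -> bool:
    """Stem/prefix/suffix matching sketch: a trailing '*' matches a word
    stem or prefix, a leading '*' matches a suffix, otherwise exact match.
    (Illustrative syntax only; GIPSY's real query notation is not shown here.)
    """
    if pattern.endswith("*"):
        return word.startswith(pattern[:-1])
    if pattern.startswith("*"):
        return word.endswith(pattern[1:])
    return word == pattern

record = "ground water quality in limestone hydrology".split()
print(any(term_matches(w, "hydro*") for w in record))   # True
print(any(term_matches(w, "*ology") for w in record))   # True
print(any(term_matches(w, "granite") for w in record))  # False
```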

  12. A specific indel marker for the Philippines Schistosoma japonicum revealed by analysis of mitochondrial genome sequences.

    PubMed

    Li, Juan; Chen, Fen; Sugiyama, Hiromu; Blair, David; Lin, Rui-Qing; Zhu, Xing-Quan

    2015-07-01

In the present study, near-complete mitochondrial (mt) genome sequences for Schistosoma japonicum from different regions in the Philippines and Japan were amplified and sequenced. Comparisons among S. japonicum from the Philippines, Japan, and China revealed a geographically based length difference in mt genomes, but the mt genomic organization and gene arrangement were the same. Sequence differences among samples from the Philippines and all samples from the three endemic areas were 0.57-2.12% and 0.76-3.85%, respectively. The most variable part of the mt genome was the non-coding region. In the coding portion of the genome, protein-coding genes varied more than rRNA genes and tRNAs. The near-complete mt genome sequences for Philippine specimens were identical in length (14,091 bp), which was 4 bp longer than those of S. japonicum samples from Japan and China. This indel provides a unique genetic marker for S. japonicum samples from the Philippines. Phylogenetic analyses based on the concatenated amino acids of 12 protein-coding genes showed that samples of S. japonicum clustered according to their geographical origins. The identified mitochondrial indel marker will be useful for tracing the source of S. japonicum infection in humans and animals in Southeast Asia.

  13. Video data compression using artificial neural network differential vector quantization

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.

    1991-01-01

An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential vector quantization is used to preserve edge features, and a new adaptive algorithm, known as Frequency-Sensitive Competitive Learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in superior performance against channel bit errors compared with methods that use variable-length codes.
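Frequency-sensitive competitive learning penalizes codewords that win often, so every codebook entry ends up being used. A minimal sketch, assuming a simple distance-times-win-count penalty and a fixed learning rate (the paper's exact update rule is not given here):

```python
import numpy as np

def fscl_codebook(vectors, k, epochs=10, lr=0.1, seed=0):
    """Frequency-sensitive competitive learning codebook sketch.

    Winner selection scales each codeword's distance by its win count,
    penalizing frequent winners and preventing codebook under-utilization.
    """
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), k, replace=False)].astype(float)
    wins = np.ones(k)
    for _ in range(epochs):
        for x in vectors:
            d = np.linalg.norm(codebook - x, axis=1) * wins  # frequency-scaled distance
            w = int(np.argmin(d))
            codebook[w] += lr * (x - codebook[w])  # move winner toward the input
            wins[w] += 1
    return codebook
```

To quantize, each input vector is replaced by the index of its nearest codeword; in differential VQ the inputs would be prediction residual blocks rather than raw pixels.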

  14. Improving 1D Stellar Models with 3D Atmospheres

    NASA Astrophysics Data System (ADS)

    Mosumgaard, Jakob Rørsted; Silva Aguirre, Víctor; Weiss, Achim; Christensen-Dalsgaard, Jørgen; Trampedach, Regner

    2017-10-01

Stellar evolution codes play a major role in present-day astrophysics, yet they share common issues. In this work we seek to remedy some of those by the use of results from realistic and highly detailed 3D hydrodynamical simulations of stellar atmospheres. First, we have implemented a new temperature stratification, extracted directly from the 3D simulations, into the Garching Stellar Evolution Code to replace the simplified atmosphere normally used. Second, we have implemented a variable mixing-length parameter, which changes as a function of the stellar surface gravity and temperature, also derived from the 3D simulations. Furthermore, to make our models consistent, we have calculated new opacity tables to match the atmospheric simulations. Here, we present the modified code and initial results on stellar evolution using it.
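A variable mixing-length parameter of the kind described above is typically interpolated from a grid calibrated against the 3D simulations. The sketch below shows one way this could look; the grid values are invented for illustration and are not the calibration from this work:

```python
import numpy as np

# Hypothetical calibration grid: alpha as a function of (log g, Teff).
# These numbers are made up for illustration only.
LOGG = np.array([3.5, 4.0, 4.5])
TEFF = np.array([4500.0, 5500.0, 6500.0])
ALPHA = np.array([[1.9, 1.8, 1.6],
                  [2.0, 1.9, 1.7],
                  [2.1, 2.0, 1.8]])  # rows: log g, columns: Teff

def mixing_length(logg: float, teff: float) -> float:
    """Bilinear interpolation of the mixing-length parameter, sketching how
    a per-model alpha(log g, Teff) could replace one calibrated constant."""
    along_teff = np.array([np.interp(teff, TEFF, row) for row in ALPHA])
    return float(np.interp(logg, LOGG, along_teff))

print(round(mixing_length(4.25, 5000.0), 3))  # 2.0
```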

  15. Inlet Turbulence and Length Scale Measurements in a Large Scale Transonic Turbine Cascade

    NASA Technical Reports Server (NTRS)

    Thurman, Douglas; Flegel, Ashlie; Giel, Paul

    2014-01-01

    Constant temperature hotwire anemometry data were acquired to determine the inlet turbulence conditions of a transonic turbine blade linear cascade. Flow conditions and angles were investigated that corresponded to the take-off and cruise conditions of the Variable Speed Power Turbine (VSPT) project and to an Energy Efficient Engine (EEE) scaled rotor blade tip section. Mean and turbulent flowfield measurements including intensity, length scale, turbulence decay, and power spectra were determined for high and low turbulence intensity flows at various Reynolds numbers and spanwise locations. The experimental data will be useful for establishing the inlet boundary conditions needed to validate turbulence models in CFD codes.

  16. Online Performance-Improvement Algorithms

    DTIC Science & Technology

    1994-08-01

fault rate as the request sequence length approaches infinity. Their algorithms are based on an innovative use of the classical Ziv-Lempel [85] data ...Report CS-TR-348-91. [85] J. Ziv and A. Lempel. Compression of individual sequences via variable-rate coding. IEEE Trans. Inf. Theory, 24:530-536, 1978. 94...Deferred Data Structuring Recall that our incremental multi-trip algorithm spreads the building of the fence-tree over several trips in order to

  17. Bitstream decoding processor for fast entropy decoding of variable length coding-based multiformat videos

    NASA Astrophysics Data System (ADS)

    Jo, Hyunho; Sim, Donggyu

    2014-06-01

    We present a bitstream decoding processor for entropy decoding of variable length coding-based multiformat videos. Since most of the computational complexity of entropy decoders comes from bitstream accesses and table look-up process, the developed bitstream processing unit (BsPU) has several designated instructions to access bitstreams and to minimize branch operations in the table look-up process. In addition, the instruction for bitstream access has the capability to remove emulation prevention bytes (EPBs) of H.264/AVC without initial delay, repeated memory accesses, and additional buffer. Experimental results show that the proposed method for EPB removal achieves a speed-up of 1.23 times compared to the conventional EPB removal method. In addition, the BsPU achieves speed-ups of 5.6 and 3.5 times in entropy decoding of H.264/AVC and MPEG-4 Visual bitstreams, respectively, compared to an existing processor without designated instructions and a new table mapping algorithm. The BsPU is implemented on a Xilinx Virtex5 LX330 field-programmable gate array. The MPEG-4 Visual (ASP, Level 5) and H.264/AVC (Main Profile, Level 4) are processed using the developed BsPU with a core clock speed of under 250 MHz in real time.
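The EPB rule referenced above is standard H.264/AVC: an 0x03 byte inserted after two consecutive zero bytes must be discarded when extracting the raw bitstream payload. The paper's processor does this with designated instructions and no extra buffer; the byte-scan loop below only illustrates the rule itself:

```python
def remove_epb(nal_payload: bytes) -> bytes:
    """Strip H.264/AVC emulation prevention bytes: drop each 0x03 that
    immediately follows two consecutive 0x00 bytes."""
    out = bytearray()
    zeros = 0
    for b in nal_payload:
        if zeros >= 2 and b == 0x03:
            zeros = 0  # the 0x03 is an EPB: discard it and reset the zero run
            continue
        out.append(b)
        zeros = zeros + 1 if b == 0 else 0
    return bytes(out)

print(remove_epb(bytes([0x00, 0x00, 0x03, 0x01, 0x00, 0x00, 0x03, 0x00])).hex())
# 000001000000
```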

  18. On the optimality of a universal noiseless coder

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Rice, Robert F.; Miller, Warner H.

    1993-01-01

Rice developed a universal noiseless coding structure that provides efficient performance over an extremely broad range of source entropy. This is accomplished by adaptively selecting the best of several easily implemented variable-length coding algorithms. Variations of such noiseless coders have been used in many NASA applications. Custom VLSI coder and decoder modules capable of processing over 50 million samples per second have been fabricated and tested. In this study, the first of the code options used in this module development is shown to be equivalent to a class of Huffman code under the Humblet condition, for source symbol sets having a Laplacian distribution. Except for the default option, other options are shown to be equivalent to the Huffman codes of a modified Laplacian symbol set, at specified symbol entropy values. Simulation results are obtained on actual aerial imagery over a wide entropy range, and they confirm the optimality of the scheme. Comparisons with other known techniques are performed on several widely used images, and the results further validate the coder's optimality.
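The split-sample options in a Rice coder correspond to Golomb-Rice codes indexed by a parameter k, and the adaptive selection amounts to picking the k that produces the fewest bits for the current block. A simplified sketch of a single codeword plus a brute-force option selector (an illustration of the idea, not the module's actual selection logic):

```python
def rice_encode(n: int, k: int) -> str:
    """Golomb-Rice codeword: unary quotient (q ones plus a terminating
    zero) followed by the k low-order remainder bits."""
    q = n >> k
    bits = "1" * q + "0"
    if k:
        bits += format(n & ((1 << k) - 1), f"0{k}b")
    return bits

def best_k(block, k_max=8):
    """Pick the k minimizing total code length for a block of non-negative
    samples, a simplified stand-in for the adaptive option selection."""
    return min(range(k_max + 1),
               key=lambda k: sum(len(rice_encode(n, k)) for n in block))

print(rice_encode(9, 2))        # 9 = 2*4 + 1 -> "110" + "01" = "11001"
print(best_k([3, 5, 9, 2, 6]))  # 2
```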

  19. Classification Techniques for Digital Map Compression

    DTIC Science & Technology

    1989-03-01

classification improved the performance of the K-means classification algorithm resulting in a compression of 8.06:1 with Lempel-Ziv coding. Run-length coding... compression performance are run-length coding [2], [8] and Lempel-Ziv coding [10], [11]. These techniques are chosen because they are most efficient when...investigated. After the classification, some standard file compression methods, such as Lempel-Ziv and run-length encoding were applied to the
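Run-length coding suits classified map imagery precisely because it contains long constant-value runs. A minimal run-length coder:

```python
from itertools import groupby

def run_length_encode(pixels):
    """Run-length coding: collapse each run of identical values into a
    (value, run length) pair; effective when runs are long."""
    return [(v, len(list(g))) for v, g in groupby(pixels)]

print(run_length_encode([7, 7, 7, 0, 0, 7]))  # [(7, 3), (0, 2), (7, 1)]
```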

  20. Pseudo-polyprotein translated from the full-length ORF1 of capillovirus is important for pathogenicity, but a truncated ORF1 protein without variable and CP regions is sufficient for replication.

    PubMed

    Hirata, Hisae; Yamaji, Yasuyuki; Komatsu, Ken; Kagiwada, Satoshi; Oshima, Kenro; Okano, Yukari; Takahashi, Shuichiro; Ugaki, Masashi; Namba, Shigetou

    2010-09-01

    The first open-reading frame (ORF) of the genus Capillovirus encodes an apparently chimeric polyprotein containing conserved regions for replicase (Rep) and coat protein (CP), while other viruses in the family Flexiviridae have separate ORFs encoding these proteins. To investigate the role of the full-length ORF1 polyprotein of capillovirus, we generated truncation mutants of ORF1 of apple stem grooving virus by inserting a termination codon into the variable region located between the putative Rep- and CP-coding regions. These mutants were capable of systemic infection, although their pathogenicity was attenuated. In vitro translation of ORF1 produced both the full-length polyprotein and the smaller Rep protein. The results of in vivo reporter assays suggested that the mechanism of this early termination is a ribosomal -1 frame-shift occurring downstream from the conserved Rep domains. The mechanism of capillovirus gene expression and the very close evolutionary relationship between the genera Capillovirus and Trichovirus are discussed. Copyright (c) 2010. Published by Elsevier B.V.

  1. SEADYN Analysis of a Tow Line for a High Altitude Towed Glider

    NASA Technical Reports Server (NTRS)

    Colozza, Anthony J.

    1996-01-01

The concept of using a system, consisting of a tow aircraft, glider and tow line, which would enable subsonic flight at altitudes above 24 km (78 kft) has previously been investigated. The preliminary results from these studies seem encouraging. Under certain conditions these studies indicate the concept is feasible. However, the previous studies did not accurately take into account the forces acting on the tow line. Therefore, in order to investigate the concept further, a more detailed analysis was needed. The code that was selected was the SEADYN cable dynamics computer program which was developed at the Naval Facilities Engineering Service Center. The program is a finite-element-based structural analysis code that was developed over a period of 10 years. The results have been validated by the Navy in both laboratory and actual sea conditions. This code was used to simulate arbitrarily-configured cable structures subjected to excitations encountered in real-world operations. The Navy's interest was mainly in modeling underwater tow lines; however, the code is also usable for tow lines in air when the change in fluid properties is taken into account. For underwater applications the fluid properties are basically constant over the length of the tow line. For the tow aircraft/glider application the change in fluid properties is considerable along the length of the tow line. Therefore the code had to be modified in order to take into account the variation in atmospheric properties that would be encountered in this application. This modification consisted of adding a variable density to the fluid based on the altitude of the node being calculated. This change in the way the code handled the fluid density had no effect on the method of calculation or any other factor related to the code's validation.
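The report does not give the density model used in the modification; a standard isothermal exponential atmosphere is one simple possibility for an altitude-dependent density, sketched here with nominal constants as an illustrative assumption:

```python
import math

RHO0 = 1.225  # sea-level air density, kg/m^3
H = 8500.0    # nominal scale height, m (isothermal-atmosphere approximation)

def air_density(altitude_m: float) -> float:
    """Exponential-atmosphere density, a stand-in for the kind of
    altitude-dependent fluid density evaluated at each cable node."""
    return RHO0 * math.exp(-altitude_m / H)

# Density at each node's altitude replaces the single constant value
# that suffices for underwater tow lines:
for z in (0.0, 12000.0, 24000.0):
    print(f"{z:7.0f} m -> {air_density(z):.4f} kg/m^3")
```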

  2. Comparing hospital outcomes between open and closed tibia fractures treated with intramedullary fixation.

    PubMed

    Smith, Evan J; Kuang, Xiangyu; Pandarinath, Rajeev

    2017-07-01

Tibial shaft fractures comprise a large portion of operatively treated long bone fractures, and present with the highest rate of open injuries. Intramedullary fixation has become the standard of care for both open and closed injuries. The rates of short-term complications and hospital length of stay for open and closed fractures treated with intramedullary fixation are not fully known. Previous series on tibia fractures were performed at high-volume centers, and their data were not generalizable; further, they did not report on length of stay or the impact of preoperative variables on infections, complications, and reoperation. We used a large surgical database to compare these outcomes while adjusting for preoperative risk factors. Data were extracted from the ACS-NSQIP database from 2005 to 2014. Cases were identified based on CPT codes for intramedullary fixation and categorized as closed vs. open based on ICD-9 code. In addition to demographic and case data, primary analysis examined the correlation between open and closed fracture status and infection, complications, reoperation, and hospital length of stay. Secondary analysis examined the effect of preoperative variables, including gender, race, age, BMI, and diabetes, on outcomes. There were 272 cases identified. There were no significant demographic differences between open and closed tibia fracture cases. Open fracture status did not increase the rate of infection, 30-day complications, reoperation, or length of stay. The only preoperative factor that correlated with length of stay was age. There was no correlation between BMI, presence of insulin-dependent or non-insulin-dependent diabetes, and any outcome measure. When considering the complication rates for open and closed tibial shaft fractures treated with intramedullary fixation, there is no difference between 30-day complication rate, length of stay, or return to the operating room.
Our reported postoperative infection rates were comparable to previous series, adding validity to our results. The heterogeneity of the hospitals included in ACS-NSQIP database allow our data to be generalizable. These methods may underrepresent the true occurrence of infection as operatively treated tibia infections may present late, requiring late revision. Despite limitations, the data reflect on the current burden of managing these once devastating injuries. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. A modified JPEG-LS lossless compression method for remote sensing images

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua

    2015-12-01

Like many variable-length source coders, JPEG-LS is highly vulnerable to the channel errors which occur in the transmission of remote sensing images. Error diffusion is one of the important factors that limit its robustness. The common method of improving the error resilience of JPEG-LS is to divide the image into many strips or blocks and then code each of them independently, but this method reduces the coding efficiency. In this paper, a block-based JPEG-LS lossless compression method with an adaptive parameter is proposed. In the modified scheme, the threshold parameter RESET is adapted to each image, and the compression efficiency is close to that of the conventional JPEG-LS.

  4. A combined paging alert and web-based instrument alters clinician behavior and shortens hospital length of stay in acute pancreatitis.

    PubMed

    Dimagno, Matthew J; Wamsteker, Erik-Jan; Rizk, Rafat S; Spaete, Joshua P; Gupta, Suraj; Sahay, Tanya; Costanzo, Jeffrey; Inadomi, John M; Napolitano, Lena M; Hyzy, Robert C; Desmond, Jeff S

    2014-03-01

There are many published clinical guidelines for acute pancreatitis (AP). Implementation of these recommendations is variable. We hypothesized that a clinical decision support (CDS) tool would change clinician behavior and shorten hospital length of stay (LOS). Observational study, entitled The AP Early Response (TAPER) Project. Tertiary center emergency department (ED) and hospital. Two consecutive samplings of patients having ICD-9 code (577.0) for AP were generated from the emergency department (ED) or hospital admissions. Diagnosis of AP was based on conventional Atlanta criteria. The pre-TAPER-CDS-Tool group (5/30/06-6/22/07) had 110 patients presenting to the ED with AP per 976 ICD-9 (577.0) codes, and the post-TAPER-CDS-Tool group (7/14/10-5/5/11) had 113 per 907 ICD-9 codes. The TAPER-CDS-Tool, developed 12/2008-7/14/2010, combines an early, automated paging-alert system, which text-pages ED clinicians about a patient with AP, with an intuitive web-based point-of-care instrument consisting of seven early management recommendations. The pre- vs. post-TAPER-CDS-Tool groups had similar baseline characteristics. The post-TAPER-CDS-Tool group met two management goals more frequently than the pre-TAPER-CDS-Tool group: risk stratification (P<0.0001) and intravenous fluids >6L/1st 0-24 h (P=0.0003). Mean (s.d.) hospital LOS was significantly shorter in the post-TAPER-CDS-Tool group (4.6 (3.1) vs. 6.7 (7.0) days, P=0.0126). Multivariate analysis identified four independent variables for hospital LOS: the TAPER-CDS-Tool, associated with shorter LOS (P=0.0049), and three variables associated with longer LOS: Japanese severity score (P=0.0361), persistent organ failure (P=0.0088), and local pancreatic complications (P<0.0001). The TAPER-CDS-Tool is associated with changed clinician behavior and shortened hospital LOS, which has significant financial implications.

  5. The complete mitochondrial genome sequence of the Tibetan red fox (Vulpes vulpes montana).

    PubMed

    Zhang, Jin; Zhang, Honghai; Zhao, Chao; Chen, Lei; Sha, Weilai; Liu, Guangshuai

    2015-01-01

In this study, the complete mitochondrial genome of the Tibetan red fox (Vulpes vulpes montana) was sequenced for the first time using blood samples obtained from a wild female red fox captured from Lhasa in Tibet, China. The Qinghai-Tibet Plateau is the highest plateau in the world, with an average elevation above 3500 m. Sequence analysis showed that the genome contains the 12S rRNA gene, the 16S rRNA gene, 22 tRNA genes, 13 protein-coding genes, and 1 control region (CR). Variable tandem repeats in the CR are the main reason for the length variability of the mitochondrial genome among canid species.

  6. Population-specific association of genes for telomere-associated proteins with longevity in an Italian population.

    PubMed

    Crocco, Paolina; Barale, Roberto; Rose, Giuseppina; Rizzato, Cosmeri; Santoro, Aurelia; De Rango, Francesco; Carrai, Maura; Fogar, Paola; Monti, Daniela; Biondi, Fiammetta; Bucci, Laura; Ostan, Rita; Tallaro, Federica; Montesanto, Alberto; Zambon, Carlo-Federico; Franceschi, Claudio; Canzian, Federico; Passarino, Giuseppe; Campa, Daniele

    2015-06-01

Leukocyte telomere length (LTL) has been observed to be heritable and correlated with longevity. However, contrasting results have been reported in different populations on the value of LTL heritability and on how the biology of telomeres influences longevity. We investigated whether the variability of genes correlated to telomere maintenance is associated with telomere length and affects longevity in a population from Southern Italy (20-106 years). For this purpose we analyzed thirty-one polymorphisms in eight telomerase-associated genes, of which twelve are in the genes coding for the core enzyme (TERT and TERC) and the remainder in genes coding for components of the telomerase complex (TERF1, TERF2, TERF2IP, TNKS, TNKS2 and TEP1). We did not observe (after correcting for multiple testing) statistically significant associations between SNPs and LTL, possibly suggesting a low genetic influence of the variability of these genes on LTL in the elderly. On the other hand, we found that the variability of genes encoding TERF1 and TNKS2, not directly involved in determining LTL but important for keeping the integrity of the structure, shows a significant association with longevity. This suggests that the maintenance of these chromosomal structures may be critically important for preventing, or delaying, senescence and aging. Such a correlation was not observed in a population from northern Italy that we used as an independent replication set. This discrepancy is in line with previous reports regarding both the population specificity of results on telomere biology and the differences in aging between northern and southern Italy.

  7. Throughput Optimization Via Adaptive MIMO Communications

    DTIC Science & Technology

    2006-05-30

End-to-end MATLAB packet simulation platform. * Low-density parity-check code (LDPCC). * Field trials with Silvus DSP MIMO testbed. * High mobility...incorporate advanced LDPC (low-density parity-check) codes. Realizing that the power of LDPC codes comes at the price of decoder complexity, we also...Channel Coding: Binary Convolutional Code or LDPC; Packet Length: 0 - 2^16-1 bytes; Coding Rate: 1/2, 2/3, 3/4, 5/6; MIMO Channel Training Length: 0 - 4 symbols

  8. The escape of high explosive products: An exact-solution problem for verification of hydrodynamics codes

    DOE PAGES

    Doebling, Scott William

    2016-10-22

This paper documents the escape of high explosive (HE) products problem. The problem, first presented by Fickett & Rivard, tests the implementation and numerical behavior of a high explosive detonation and energy release model and its interaction with an associated compressible hydrodynamics simulation code. The problem simulates the detonation of a finite-length, one-dimensional piece of HE that is driven by a piston from one end and adjacent to a void at the other end. The HE equation of state is modeled as a polytropic ideal gas. The HE detonation is assumed to be instantaneous with an infinitesimal reaction zone. Via judicious selection of the material specific heat ratio, the problem has an exact solution with linear characteristics, enabling a straightforward calculation of the physical variables as a function of time and space. Lastly, implementation of the exact solution in the Python code ExactPack is discussed, as are verification cases for the exact solution code.

  9. Using the NASA GRC Sectored-One-Dimensional Combustor Simulation

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Mehta, Vishal R.

    2014-01-01

The document is a user manual for the NASA GRC Sectored-One-Dimensional (S-1-D) Combustor Simulation. It consists of three sections. The first is a very brief outline of the mathematical and numerical background of the code along with a description of the non-dimensional variables on which it operates. The second section describes how to run the code and includes an explanation of the input file. The input file contains the parameters necessary to establish an operating point as well as the associated boundary conditions (i.e. how it is fed and terminated) of a geometrically configured combustor. It also describes the code output. The third section describes the configuration process and utilizes a specific example combustor to do so. Configuration consists of geometrically describing the combustor (section lengths, axial locations, and cross sectional areas) and locating the fuel injection point and flame region. Configuration requires modifying the source code and recompiling. As such, an executable utility is included with the code which will guide the requisite modifications and ensure that they are done correctly.

  10. An efficient coding algorithm for the compression of ECG signals using the wavelet transform.

    PubMed

    Rajoub, Bashar A

    2002-04-01

A wavelet-based electrocardiogram (ECG) data compression algorithm is proposed in this paper. The ECG signal is first preprocessed, and the discrete wavelet transform (DWT) is then applied to the preprocessed signal. Preprocessing guarantees that the magnitudes of the wavelet coefficients are less than one, and reduces the reconstruction errors near both ends of the compressed signal. The DWT coefficients are divided into three groups, and each group is thresholded using a threshold based on a desired energy packing efficiency. A binary significance map is then generated by scanning the wavelet decomposition coefficients and outputting a binary one if the scanned coefficient is significant, and a binary zero if it is insignificant. Compression is achieved by 1) using a variable-length code based on run-length encoding to compress the significance map and 2) using direct binary representation for the significant coefficients. The ability of the coding algorithm to compress ECG signals is investigated, and the results were obtained by compressing and decompressing the test signals. The proposed algorithm is compared with direct-based and wavelet-based compression algorithms and showed superior performance. A compression ratio of 24:1 was achieved for MIT-BIH record 117 with a percent root-mean-square difference as low as 1.08%.
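The significance-map step described above can be sketched directly: mark each coefficient whose magnitude clears the threshold, then keep only the marked values (the run-length coding of the map would follow as in the algorithm; the threshold here is an arbitrary illustrative value, not one derived from energy packing efficiency):

```python
import numpy as np

def significance_map(coeffs, threshold):
    """Binary significance map: 1 where |coefficient| >= threshold.

    Returns the map (to be run-length coded) and the significant
    coefficients (to be stored in direct binary representation)."""
    sig = (np.abs(coeffs) >= threshold).astype(np.uint8)
    significant = coeffs[sig == 1]
    return sig, significant

coeffs = np.array([0.9, 0.02, -0.4, 0.01, 0.7])
sig, vals = significance_map(coeffs, 0.1)
print(sig.tolist())   # [1, 0, 1, 0, 1]
print(vals.tolist())  # [0.9, -0.4, 0.7]
```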

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott William

This paper documents the escape of high explosive (HE) products problem. The problem, first presented by Fickett & Rivard, tests the implementation and numerical behavior of a high explosive detonation and energy release model and its interaction with an associated compressible hydrodynamics simulation code. The problem simulates the detonation of a finite-length, one-dimensional piece of HE that is driven by a piston from one end and adjacent to a void at the other end. The HE equation of state is modeled as a polytropic ideal gas. The HE detonation is assumed to be instantaneous with an infinitesimal reaction zone. Via judicious selection of the material specific heat ratio, the problem has an exact solution with linear characteristics, enabling a straightforward calculation of the physical variables as a function of time and space. Lastly, implementation of the exact solution in the Python code ExactPack is discussed, as are verification cases for the exact solution code.

  12. Cerebellar Nuclear Neurons Use Time and Rate Coding to Transmit Purkinje Neuron Pauses.

    PubMed

    Sudhakar, Shyam Kumar; Torben-Nielsen, Benjamin; De Schutter, Erik

    2015-12-01

    Neurons of the cerebellar nuclei convey the final output of the cerebellum to their targets in various parts of the brain. Within the cerebellum their direct upstream connections originate from inhibitory Purkinje neurons. Purkinje neurons have a complex firing pattern of regular spikes interrupted by intermittent pauses of variable length. How can the cerebellar nucleus process this complex input pattern? In this modeling study, we investigate different forms of Purkinje neuron simple spike pause synchrony and its influence on candidate coding strategies in the cerebellar nuclei. That is, we investigate how different alignments of synchronous pauses in synthetic Purkinje neuron spike trains affect either time-locking or rate-changes in the downstream nuclei. We find that Purkinje neuron synchrony is mainly represented by changes in the firing rate of cerebellar nuclei neurons. Pause beginning synchronization produced a unique effect on nuclei neuron firing, while the effect of pause ending and pause overlapping synchronization could not be distinguished from each other. Pause beginning synchronization produced better time-locking of nuclear neurons for short length pauses. We also characterize the effect of pause length and spike jitter on the nuclear neuron firing. Additionally, we find that the rate of rebound responses in nuclear neurons after a synchronous pause is controlled by the firing rate of Purkinje neurons preceding it.

  13. Information quality measurement of medical encoding support based on usability.

    PubMed

    Puentes, John; Montagner, Julien; Lecornu, Laurent; Cauvin, Jean-Michel

    2013-12-01

Medical encoding support systems for diagnoses and medical procedures are an emerging technology that is beginning to play a key role in billing, reimbursement, and health policy decisions. A significant problem in exploiting these systems is how to measure the appropriateness of any automatically generated list of codes in terms of fitness for use, i.e. their quality. Until now, only information retrieval performance measurements have been applied to estimate the accuracy of codes lists as a quality indicator. Such measurements do not give the value of codes lists for practical medical encoding, and cannot be used to globally compare the quality of multiple codes lists. This paper defines and validates a new encoding information quality measure that addresses the problem of measuring medical codes lists quality. It is based on a usability study of how expert coders and physicians apply computer-assisted medical encoding. The proposed measure, named ADN, evaluates codes Accuracy, Dispersion and Noise, and is adapted to the variable length and content of generated codes lists, coping with limitations of previous measures. According to the ADN measure, the information quality of a codes list is fully represented by a single point within a suitably constrained feature space. Using a single scheme, our approach can reliably measure and compare the information quality of hundreds of codes lists, showing their practical value for medical encoding. Its pertinence is demonstrated by simulation and application to real data corresponding to 502 inpatient stays in four clinic departments. Results are compared to the consensus of three expert coders who also coded this anonymized database of discharge summaries, and to five information retrieval measures. Information quality assessment applying the ADN measure showed the degree of encoding-support system variability from one clinic department to another, providing a global evaluation of quality measurement trends.
Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  14. Compression of digital images over local area networks. Appendix 1: Item 3. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Gorjala, Bhargavi

    1991-01-01

    Differential Pulse Code Modulation (DPCM) has been used with speech for many years. It has not been as successful for images because of poor edge performance. The only corruption in DPCM is quantizer error, but this corruption becomes quite large in the region of an edge because of the abrupt changes in the statistics of the signal. We introduce two improved DPCM schemes: Edge Correcting DPCM and Edge Preserving Differential Coding. These two coding schemes detect edges and take action to correct them. In the Edge Correcting scheme, the quantizer error for an edge is encoded using a recursive quantizer with entropy coding and sent to the receiver as side information. In the Edge Preserving scheme, when the quantizer input falls in the overload region, the quantizer error is encoded and sent to the receiver repeatedly until the quantizer input falls in the inner levels. These coding schemes therefore increase the bit rate in the region of an edge and require variable-rate channels. We implement these two variable-rate coding schemes on a token ring network. The timed token protocol supports two classes of messages: asynchronous and synchronous. The synchronous class provides a pre-allocated bandwidth and guaranteed response time; the remaining bandwidth is dynamically allocated to the asynchronous class. The Edge Correcting DPCM is simulated by carrying the edge information under the asynchronous class. For the simulation of the Edge Preserving scheme, the amount of information sent each time is fixed, but the length of the packet, or the bit rate for that packet, is chosen depending on the available capacity. The performance of the network and of the image coding algorithms is studied.
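    As a point of reference for the schemes above, here is a minimal sketch of plain first-order DPCM with a uniform quantizer (not the authors' Edge Correcting or Edge Preserving scheme; the step size, level count, and test signals are illustrative assumptions). It shows the behavior the abstract describes: reconstruction error stays small on a smooth ramp but grows sharply at an abrupt edge, where the residual overloads the quantizer.

```python
import numpy as np

def dpcm(signal, step=4.0, levels=8):
    """Plain first-order DPCM with a uniform mid-rise quantizer.

    The only corruption is quantizer error, which grows when the
    prediction residual overloads the quantizer (e.g. at an edge).
    """
    half = levels // 2
    recon = np.zeros(len(signal), dtype=float)
    pred = 0.0
    for i, x in enumerate(signal):
        residual = x - pred                       # prediction error
        # quantize the residual; indices are clamped in the overload region
        idx = int(np.clip(np.floor(residual / step), -half, half - 1))
        q = (idx + 0.5) * step                    # reconstruction level
        recon[i] = pred + q
        pred = recon[i]                           # predictor tracks the decoder
    return recon

# smooth ramp vs. abrupt edge: only the edge overloads the quantizer
smooth = np.linspace(0, 20, 40)
edge = np.concatenate([np.zeros(20), np.full(20, 100.0)])
err_smooth = np.abs(dpcm(smooth) - smooth).max()
err_edge = np.abs(dpcm(edge) - edge).max()
```

    The edge-correcting idea in the abstract amounts to additionally transmitting (an encoded version of) the large residual error as side information whenever the overload branch of the clip is taken.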

  15. Image coding using entropy-constrained residual vector quantization

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    The residual vector quantization (RVQ) structure is exploited to produce a variable-length-codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages: it can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements, and it can be used to design high-rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.
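    The two-stage residual structure that this work builds on can be sketched as follows. The toy 2-D codebooks are invented for illustration and are neither trained nor entropy-constrained; an entropy-constrained variant would add a Lagrangian rate term (lambda times the codeword length) to the distortion minimized in `nearest`.

```python
import numpy as np

def nearest(codebook, x):
    """Index of the nearest codevector (squared Euclidean distance)."""
    return int(np.argmin(((codebook - x) ** 2).sum(axis=1)))

# toy 2-D codebooks for two RVQ stages (illustrative, not trained)
stage1 = np.array([[-4.0, -4.0], [-4.0, 4.0], [4.0, -4.0], [4.0, 4.0]])
stage2 = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, -1.0], [0.0, 1.0]])

x = np.array([3.2, -4.8])
i1 = nearest(stage1, x)          # stage 1 quantizes the input
r = x - stage1[i1]               # residual is passed to stage 2
i2 = nearest(stage2, r)          # stage 2 quantizes the residual
xhat = stage1[i1] + stage2[i2]   # decoder sums the two codevectors
err2 = x - xhat                  # two-stage error
```

    Each stage only needs a small codebook, which is why RVQ keeps memory and search cost low compared with a single flat codebook of equivalent rate.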

  16. Optimizing the use of a sensor resource for opponent polarization coding

    PubMed Central

    Heras, Francisco J.H.

    2017-01-01

    Flies use specialized photoreceptors R7 and R8 in the dorsal rim area (DRA) to detect skylight polarization. R7 and R8 form a tiered waveguide (central rhabdomere pair, CRP) with R7 on top, filtering light delivered to R8. We examine how the division of a given resource, CRP length, between R7 and R8 affects their ability to code polarization angle. We model optical absorption to show how the length fractions allotted to R7 and R8 determine the rates at which they transduce photons, and correct these rates for transduction unit saturation. The rates give the polarization signal and photon noise in R7 and in R8. Their signals are combined in an opponent unit, intrinsic noise is added, and the unit's output is analysed to extract two measures of coding ability: the number of discriminable polarization angles and the mutual information. A very long R7 maximizes opponent signal amplitude but codes inefficiently due to photon noise in the very short R8. Discriminability and mutual information are optimized by maximizing the signal-to-noise ratio, SNR. At lower light levels, approximately equal lengths of R7 and R8 are optimal because photon noise dominates. At higher light levels, intrinsic noise comes to dominate and a shorter R8 is optimum; the optimum R8 length fraction falls to one third. This intensity-dependent range of optimal length fractions corresponds to the range observed in different fly species and is not affected by transduction unit saturation. We conclude that a limited resource, rhabdom length, can be divided between two polarization sensors, R7 and R8, to optimize opponent coding. We also find that coding ability increases sub-linearly with total rhabdom length, according to the law of diminishing returns. Consequently, the specialized shorter central rhabdom in the DRA codes polarization twice as efficiently, with respect to rhabdom length, as the longer rhabdom used in the rest of the eye. PMID:28316880

  17. A long constraint length VLSI Viterbi decoder for the DSN

    NASA Technical Reports Server (NTRS)

    Statman, J. I.; Zimmerman, G.; Pollara, F.; Collins, O.

    1988-01-01

    A Viterbi decoder, capable of decoding convolutional codes with constraint lengths up to 15, is under development for the Deep Space Network (DSN). The objective is to complete a prototype of this decoder by late 1990, and demonstrate its performance using the (15, 1/4) encoder in Galileo. The decoder is expected to provide 1 to 2 dB improvement in bit SNR, compared to the present (7, 1/2) code and existing Maximum Likelihood Convolutional Decoder (MCD). The decoder will be fully programmable for any code up to constraint length 15, and code rate 1/2 to 1/6. The decoder architecture and top-level design are described.

  18. SecureQEMU: Emulation-Based Software Protection Providing Encrypted Code Execution and Page Granularity Code Signing

    DTIC Science & Technology

    2008-12-01

    [Abstract not indexed; the excerpt for this record consists of fragmented source code showing PE section manipulation (a ".SigStub" section), SHA-256 digest-length arithmetic, and AES key/IV setup, consistent with the page-granularity code signing and encrypted code execution described in the title.]

  19. Performance of convolutional codes on fading channels typical of planetary entry missions

    NASA Technical Reports Server (NTRS)

    Modestino, J. W.; Mui, S. Y.; Reale, T. J.

    1974-01-01

    The performance of convolutional codes in fading channels typical of the planetary entry channel is examined in detail. The signal fading is due primarily to turbulent atmospheric scattering of the RF signal transmitted from an entry probe through a planetary atmosphere. Short constraint length convolutional codes are considered in conjunction with binary phase-shift keyed modulation and Viterbi maximum likelihood decoding, and for longer constraint length codes sequential decoding utilizing both the Fano and Zigangirov-Jelinek (ZJ) algorithms is considered. Careful consideration is given to the modeling of the channel in terms of a few meaningful parameters which can be correlated closely with theoretical propagation studies. For short constraint length codes the bit error probability performance was investigated as a function of Eb/No, parameterized by the fading channel parameters. For longer constraint length codes the effect of the fading channel parameters on the computational requirements of both the Fano and ZJ algorithms was examined. The effects of simple block interleaving in combatting the memory of the channel are explored, using analytic approaches and digital computer simulation.

  20. Reduction of PAPR in coded OFDM using fast Reed-Solomon codes over prime Galois fields

    NASA Astrophysics Data System (ADS)

    Motazedi, Mohammad Reza; Dianat, Reza

    2017-02-01

    In this work, two new techniques using Reed-Solomon (RS) codes over GF(257) and GF(65,537) are proposed for peak-to-average power ratio (PAPR) reduction in coded orthogonal frequency division multiplexing (OFDM) systems. The lengths of these codes are well-matched to the length of OFDM frames. Over these fields, the block lengths of the codes are powers of two, so we can fully exploit radix-2 fast Fourier transform algorithms; multiplications and additions are simple modulus operations. These codes provide desirable randomness with a small perturbation in information symbols, which is essential for generating different statistically independent candidates. Our simulations show that the PAPR reduction ability of RS codes is the same as that of conventional selected mapping (SLM), but, unlike SLM, we also gain error-correction capability. Moreover, the second proposed technique does not require the transmission of side information. To the best of our knowledge, this is the first work using RS codes for PAPR reduction in single-input single-output systems.
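    PAPR and the generic selected-mapping idea the paper compares against can be sketched as follows. The QPSK frame, candidate count, and random phase rotations are illustrative assumptions; this does not reproduce the paper's RS-based candidate generation.

```python
import numpy as np

def papr_db(freq_symbols):
    """PAPR (in dB) of the time-domain OFDM frame obtained by IFFT."""
    t = np.fft.ifft(freq_symbols)
    p = np.abs(t) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(1)
N = 256
qpsk = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)

# generic SLM: generate U phase-rotated candidates carrying the same data,
# transmit the one with the lowest PAPR (plus side information naming it)
U = 8
candidates = [qpsk] + [qpsk * np.exp(2j * np.pi * rng.random(N))
                       for _ in range(U)]
best = min(candidates, key=papr_db)
```

    The paper's contribution is, in effect, a structured way of generating such candidates with RS codewords, so that the side information can carry error protection or be dropped entirely.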

  1. On the error probability of general tree and trellis codes with applications to sequential decoding

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1973-01-01

    An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.

  2. Modulation and coding for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Yuen, Joseph H.; Simon, Marvin K.; Pollara, Fabrizio; Divsalar, Dariush; Miller, Warner H.; Morakis, James C.; Ryan, Carl R.

    1990-01-01

    Several modulation and coding advances supported by NASA are summarized. To support long-constraint-length convolutional codes, a VLSI maximum-likelihood decoder utilizing parallel processing techniques, which is being developed to decode convolutional codes of constraint length 15 and code rates as low as 1/6, is discussed. A VLSI high-speed 8-b Reed-Solomon decoder which is being developed for advanced tracking and data relay satellite (ATDRS) applications is discussed. A 300-Mb/s modem with continuous phase modulation (CPM) and coding which is being developed for ATDRS is discussed. Trellis-coded modulation (TCM) techniques are discussed for satellite-based mobile communication applications.

  3. Molecular and morphologic approaches to discrimination of variability patterns in chub mackerel, Scomber japonicus.

    PubMed

    Roldán; Perrotta; Cortey; Pla

    2000-10-05

    The systematic status and the evolutionary biology of chub mackerel (Scomber japonicus) in the Southwest Atlantic Ocean remain unclear, with an unknown degree of genetic differentiation and reproductive isolation between units. Simultaneous genetic and morphologic analyses were made on 227 fish collected from two areas of the Southwest Atlantic Ocean and one from the Mediterranean Sea. The genetic analysis was based on 36 protein-coding loci, 16 of which were variable. The morphologic analyses included six morphometric length measurements and a meristic character. Correspondence between genetic and morphologic variability patterns indicates isolated Mediterranean and Southwest Atlantic subgroups of S. japonicus and, less clearly, possible additional divergence in two regional stocks within the latter group. The most conservative approach to management is to manage the stocks independently of one another.

  4. HippDB: a database of readily targeted helical protein-protein interactions.

    PubMed

    Bergey, Christina M; Watkins, Andrew M; Arora, Paramjit S

    2013-11-01

    HippDB catalogs every protein-protein interaction whose structure is available in the Protein Data Bank and which exhibits one or more helices at the interface. The Web site accepts queries on variables such as helix length and sequence, and it provides computational alanine scanning and change in solvent-accessible surface area values for every interfacial residue. HippDB is intended to serve as a starting point for structure-based small molecule and peptidomimetic drug development. HippDB is freely available on the web at http://www.nyu.edu/projects/arora/hippdb. The Web site is implemented in PHP, MySQL and Apache. Source code freely available for download at http://code.google.com/p/helidb, implemented in Perl and supported on Linux. arora@nyu.edu.

  5. Complete mitochondrial genomes of Trisidos kiyoni and Potiarca pilula: Varied mitochondrial genome size and highly rearranged gene order in Arcidae

    PubMed Central

    Sun, Shao’e; Li, Qi; Kong, Lingfeng; Yu, Hong

    2016-01-01

    We present the complete mitochondrial genomes (mitogenomes) of Trisidos kiyoni and Potiarca pilula, both important species from the family Arcidae (Arcoida: Arcacea). Typical bivalve mtDNA features were described, such as the relatively conserved gene number (36 and 37), a high A + T content (62.73% and 61.16%), the preference for A + T-rich codons, and the evidence of non-optimal codon usage. The mitogenomes of Arcidae species are exceptional for their extraordinarily large and variable sizes and substantial gene rearrangements. The mitogenomes of T. kiyoni (19,614 bp) and P. pilula (28,470 bp) are the two smallest Arcidae mitogenomes. The compact mitogenomes are weakly associated with gene number and primarily reflect shrinkage of the non-coding regions. The varied size of Arcidae mitogenomes reflects a dynamic history of expansion. A significant positive correlation is observed between mitogenome size and the combined length of cox1-3, the length of Cytb, and the combined length of rRNAs (rrnS and rrnL) (P < 0.001). Both protein-coding gene (PCG) and tRNA rearrangements are observed in the P. pilula and T. kiyoni mitogenomes. This analysis implies that the complicated gene rearrangement in the mitochondrial genome could be considered one of the key characters in inferring higher-level phylogenetic relationships of Arcidae. PMID:27653979

  6. Box codes of lengths 48 and 72

    NASA Technical Reports Server (NTRS)

    Solomon, G.; Jin, Y.

    1993-01-01

    A self-dual code of length 48, dimension 24, with Hamming distance essentially equal to 12 is constructed here. There are only six code words of weight eight. All the other code words have weights that are multiples of four and have a minimum weight equal to 12. This code may be encoded systematically and arises from a strict binary representation of the (8,4;5) Reed-Solomon (RS) code over GF(64). The code may be considered as six interrelated (8,7;2) codes. The Mattson-Solomon representation of the cyclic decomposition of these codes and their parity sums are used to detect an odd number of errors in any of the six codes. These may then be used in a correction algorithm for hard or soft decision decoding. A (72,36;15) box code was constructed from a (63,35;8) cyclic code. The theoretical justification is presented herein. A second (72,36;15) code is constructed from an inner (63,27;16) Bose-Chaudhuri-Hocquenghem (BCH) code and expanded to length 72 using box code algorithms for extension. This code was simulated and verified to have a minimum distance of 15 with even-weight words congruent to zero modulo four. The decoding for hard and soft decision is still more complex than for the first code constructed above. Finally, an (8,4;5) RS code over GF(512) in the binary representation of the (72,36;15) box code gives rise to a (72,36;16*) code with nine words of weight eight, and all the rest have weights greater than or equal to 16.

  7. Investigation of the performance characteristics of Doppler radar technique for aircraft collision hazard warning, phase 3

    NASA Technical Reports Server (NTRS)

    1972-01-01

    System studies, equipment simulation, hardware development and flight tests which were conducted during the development of aircraft collision hazard warning system are discussed. The system uses a cooperative, continuous wave Doppler radar principle with pseudo-random frequency modulation. The report presents a description of the system operation and deals at length with the use of pseudo-random coding techniques. In addition, the use of mathematical modeling and computer simulation to determine the alarm statistics and system saturation characteristics in terminal area traffic of variable density is discussed.

  8. Comparison of Calculations and Measurements of the Off-Axis Radiation Dose (SI) in Liquid Nitrogen as a Function of Radiation Length.

    DTIC Science & Technology

    1984-12-01

    The off-axis dose in silicon was calculated for various path lengths out to 2 radiation lengths using the electron/photon transport code CYLTRAN and measured using thermal luminescent dosimeters (TLDs). Calculations were performed on a CDC-7600 computer at Los Alamos National Laboratory and measurements

  9. Optimal periodic binary codes of lengths 28 to 64

    NASA Technical Reports Server (NTRS)

    Tyler, S.; Keston, R.

    1980-01-01

    Results from computer searches performed to find repeated binary phase-coded waveforms with optimal periodic autocorrelation functions are discussed. The best results for lengths 28 to 64 are given. The code features of major concern are (1) a small peak sidelobe in the autocorrelation function and (2) a small sum of the squares of the sidelobes in the autocorrelation function.
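    Both figures of merit can be computed directly from the periodic (cyclic) autocorrelation. The length-7 m-sequence below is a standard textbook example with a two-valued periodic autocorrelation, not one of the length-28-to-64 search results.

```python
import numpy as np

def periodic_autocorr(code):
    """Periodic (cyclic) autocorrelation of a +/-1 sequence at all lags."""
    c = np.asarray(code, dtype=float)
    return np.array([np.dot(c, np.roll(c, k)) for k in range(len(c))])

# length-7 m-sequence: autocorrelation is 7 at lag 0 and -1 at all other lags
mseq = np.array([1, 1, 1, -1, -1, 1, -1])
r = periodic_autocorr(mseq)
peak_sidelobe = np.abs(r[1:]).max()      # criterion (1): peak sidelobe
sidelobe_energy = np.sum(r[1:] ** 2)     # criterion (2): sum of squared sidelobes
```

    A search of the kind described in the abstract would evaluate these two quantities for each candidate code of a given length and keep the codes minimizing them.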

  10. Two Upper Bounds for the Weighted Path Length of Binary Trees. Report No. UIUCDCS-R-73-565.

    ERIC Educational Resources Information Center

    Pradels, Jean Louis

    Rooted binary trees with weighted nodes are structures encountered in many areas, such as coding theory, searching and sorting, information storage and retrieval. The path length is a meaningful quantity which gives indications about the expected time of a search or the length of a code, for example. In this paper, two sharp bounds for the total…

  11. Short-term memory coding in children with intellectual disabilities.

    PubMed

    Henry, Lucy

    2008-05-01

    To examine visual and verbal coding strategies, I asked children with intellectual disabilities and peers matched for MA and CA to perform picture memory span tasks with phonologically similar, visually similar, long, or nonsimilar named items. The CA group showed effects consistent with advanced verbal memory coding (phonological similarity and word length effects). Neither the intellectual disabilities nor MA groups showed evidence for memory coding strategies. However, children in these groups with MAs above 6 years showed significant visual similarity and word length effects, broadly consistent with an intermediate stage of dual visual and verbal coding. These results suggest that developmental progressions in memory coding strategies are independent of intellectual disabilities status and consistent with MA.

  12. Truncation Depth Rule-of-Thumb for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Moision, Bruce

    2009-01-01

    In this innovation, it is shown that a commonly used rule of thumb (that the truncation depth of a convolutional code should be five times the memory length, m, of the code) is accurate only for rate 1/2 codes. In fact, the truncation depth should be 2.5 m/(1 - r), where r is the code rate. The accuracy of this new rule is demonstrated by tabulating the distance properties of a large set of known codes. This new rule was derived by bounding the losses due to truncation as a function of the code rate. With regard to particular codes, a good indicator of the required truncation depth is the path length at which all paths that diverge from a particular path have accumulated the minimum distance of the code. It is shown that the new rule of thumb provides an accurate prediction of this depth for codes of varying rates.
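    The new rule can be checked with a one-liner; the memory length m = 6 and the rate-3/4 example are arbitrary illustrations.

```python
def truncation_depth(m, r):
    """Truncation depth rule from the text: 2.5 * m / (1 - r)."""
    return 2.5 * m / (1.0 - r)

# for rate 1/2 the new rule reduces to the classic "5 times the memory length"
d12 = truncation_depth(6, 0.5)    # 5 * 6 = 30
# higher-rate codes need proportionally deeper truncation
d34 = truncation_depth(6, 0.75)   # rate 3/4 -> 60
```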

  13. Comparisons between Arabidopsis thaliana and Drosophila melanogaster in relation to Coding and Noncoding Sequence Length and Gene Expression

    PubMed Central

    Caldwell, Rachel; Lin, Yan-Xia; Zhang, Ren

    2015-01-01

    There is a continuing interest in the analysis of gene architecture and gene expression to determine the relationship that may exist. Advances in high-quality sequencing technologies and large-scale resource datasets have increased the understanding of relationships and cross-referencing of expression data to the large genome data. Although a negative correlation between expression level and gene (especially transcript) length has been generally accepted, there have been some conflicting results arising from the literature concerning the impacts of different regions of genes, and the underlying reason is not well understood. The research aims to apply quantile regression techniques for statistical analysis of coding and noncoding sequence length and gene expression data in the plant, Arabidopsis thaliana, and fruit fly, Drosophila melanogaster, to determine if a relationship exists and if there is any variation or similarities between these species. The quantile regression analysis found that the coding sequence length and gene expression correlations varied, and similarities emerged for the noncoding sequence length (5′ and 3′ UTRs) between animal and plant species. In conclusion, the information described in this study provides the basis for further exploration into gene regulation with regard to coding and noncoding sequence length. PMID:26114098
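    At the core of quantile regression is the pinball (check) loss. As a minimal sketch with synthetic data (not the gene-expression datasets of the study), the constant predictor minimizing the average pinball loss at level tau recovers the empirical tau-quantile, which is why fitting this loss at several tau values traces out conditional quantiles.

```python
import numpy as np

def pinball_loss(tau, y, q):
    """Average pinball (check) loss of the constant predictor q at level tau."""
    d = y - q
    return np.mean(np.where(d >= 0, tau * d, (tau - 1) * d))

rng = np.random.default_rng(2)
y = rng.exponential(scale=2.0, size=1001)   # skewed synthetic "expression" data

# minimize the pinball loss over constant predictors by scanning the data points
tau = 0.9
candidates = np.sort(y)
losses = [pinball_loss(tau, y, q) for q in candidates]
best = candidates[int(np.argmin(losses))]
emp = np.quantile(y, tau)                   # empirical 0.9-quantile
```

    In the full regression setting, q is replaced by a function of the covariate (here, sequence length), and the same loss is minimized over its parameters.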

  14. Identification of a new genotype H wild-type mumps virus strain and its molecular relatedness to other virulent and attenuated strains.

    PubMed

    Amexis, Georgios; Rubin, Steven; Chatterjee, Nando; Carbone, Kathryn; Chumakov, Kostantin

    2003-06-01

    A single clinical isolate of mumps virus designated 88-1961 was obtained from a patient hospitalized with a clinical history of upper respiratory tract infection, parotitis, severe headache, fever and lymphadenopathy. We have sequenced the full-length genome of 88-1961 and compared it against all available full-length sequences of mumps virus. Based upon the nucleotide sequence of its SH gene, 88-1961 was identified as a genotype H mumps strain. The overall extent of nucleotide and amino acid differences between each individual gene and protein of 88-1961 and the full-length mumps samples showed that the missense-to-silent ratios were unevenly distributed. Upon evaluation of the consensus sequence of 88-1961, four positions were found to be clearly heterogeneous at the nucleotide level (NP 315C/T, NP 318C/T, F 271A/C, and HN 855C/T). Sequence analysis revealed that the amino acid sequences for the NP, M, and L proteins were the most conserved, whereas the SH protein exhibited the highest variability among the compared mumps genotypes A, B, and G. No identifying molecular patterns in the non-coding (intergenic) or coding regions of 88-1961 were found when we compared it against relatively virulent (Urabe AM9 B, Glouc1/UK96, 87-1004 and 87-1005) and non-virulent mumps strains (Jeryl Lynn and all Urabe AM9 A substrains). Copyright 2003 Wiley-Liss, Inc.

  15. Resource utilization in primary repair of cleft palate.

    PubMed

    Owusu, James A; Liu, Meixia; Sidman, James D; Scott, Andrew R

    2013-03-01

    To estimate the current incidence of cleft palate in the United States and to determine national variations in resource utilization for primary repair of cleft palate. Retrospective analysis of a national, pediatric database (2009 Kids Inpatient Database). Patients aged 3 and below admitted for cleft palate repair were selected, using ICD-9 codes for cleft palate and procedure code for primary (initial) repair of cleft palate. A number of demographic variables were analyzed, and hospital charges were considered as a measure of resource utilization. Primary repair of cleft palate was performed on 1,943 patients. The estimated incidence was 0.11% with male to female ratio of 1.2:1. Regional incidence ranged from 0.09% (Northeast) to 0.12% (Midwest). The mean age at surgery was 13.4 months. The average length of stay was 1.9 days. The average total charge nationwide was $22,982, ranging from $17,972 (South) to $25,671 (Northeast). Average charge in a teaching institution was $4,925 higher than for nonteaching institutions. The strongest predictor of charge was length of stay, increasing charge by $7,663 for every additional hospital day (P < 0.01). National variations exist in resource utilization for primary repair of cleft palate, with higher charges in Northeastern states and teaching hospitals. The strongest predictor of increased resource use was length of stay, which was significantly higher at teaching institutions. Copyright © 2012 The American Laryngological, Rhinological, and Otological Society, Inc.

  16. Evaluation of large girth LDPC codes for PMD compensation by turbo equalization.

    PubMed

    Minkov, Lyubomir L; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Kueppers, Franko

    2008-08-18

    Large-girth quasi-cyclic LDPC codes have been experimentally evaluated for use in PMD compensation by turbo equalization for a 10 Gb/s NRZ optical transmission system, observing one sample per bit. The net effective coding gain improvement of the girth-10, rate-0.906 code of length 11936 over a maximum a posteriori probability (MAP) detector, for a differential group delay of 125 ps, is 6.25 dB at a BER of 10^-6. The girth-10 LDPC code of rate 0.8 outperforms the girth-10 code of rate 0.906 by 2.75 dB, providing a net effective coding gain improvement of 9 dB at the same BER. It is experimentally determined that girth-10 LDPC codes of length around 15000 approach the channel capacity limit within 1.25 dB.

  17. The random coding bound is tight for the average code.

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.

    1973-01-01

    The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.

  18. Error Control Coding Techniques for Space and Satellite Communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Takeshita, Oscar Y.; Cabral, Hermano A.

    1998-01-01

    It is well known that the BER performance of a parallel concatenated turbo-code improves roughly as 1/N, where N is the information block length. However, it has been observed by Benedetto and Montorsi that for most parallel concatenated turbo-codes, the FER performance does not improve monotonically with N. In this report, we study the FER of turbo-codes, and the effects of their concatenation with an outer code. Two methods of concatenation are investigated: across several frames and within each frame. Some asymmetric codes are shown to have excellent FER performance with an information block length of 16384. We also show that the proposed outer coding schemes can improve the BER performance as well by eliminating pathological frames generated by the iterative MAP decoding process.

  19. Computer search for binary cyclic UEP codes of odd length up to 65

    NASA Technical Reports Server (NTRS)

    Lin, Mao-Chao; Lin, Chi-Chang; Lin, Shu

    1990-01-01

    Using an exhaustive computation, the unequal error protection capabilities of all binary cyclic codes of odd length up to 65 that have minimum distances at least 3 are found. For those codes that can only have upper bounds on their unequal error protection capabilities computed, an analytic method developed by Dynkin and Togonidze (1976) is used to show that the upper bounds meet the exact unequal error protection capabilities.

  20. Some practical universal noiseless coding techniques, part 3, module PSI14,K+

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.

    1991-01-01

    The algorithmic definitions, performance characterizations, and application notes for a high-performance adaptive noiseless coding module are provided. Subsets of these algorithms are currently under development in custom very large scale integration (VLSI) at three NASA centers. The generality of coding algorithms recently reported is extended. The module incorporates a powerful adaptive noiseless coder for Standard Data Sources (i.e., sources whose symbols can be represented by uncorrelated non-negative integers, where smaller integers are more likely than larger ones). Coders can be specified to provide performance close to the data entropy over any desired dynamic range (of entropy) above 0.75 bit/sample. This is accomplished by adaptively choosing the best of many efficient variable-length coding options to use on each short block of data (e.g., 16 samples). All code options used for entropies above 1.5 bits/sample are 'Huffman equivalent', but they require no table lookups to implement. The coding can be performed directly on data that have been preprocessed to exhibit the characteristics of a standard source. Alternatively, a built-in predictive preprocessor can be used where applicable. This built-in preprocessor includes the familiar 1-D predictor followed by a function that maps the prediction error sequences into the desired standard form. Additionally, an external prediction can be substituted if desired. A broad range of issues dealing with the interface between the coding module and the data systems it might serve are further addressed. These issues include: multidimensional prediction, archival access, sensor noise, rate control, code rate improvements outside the module, and the optimality of certain internal code options.
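    The per-block choice among variable-length code options can be illustrated with the Golomb power-of-2 (Rice) splits that underlie this family of coders. The sample blocks and the range of split parameters below are illustrative assumptions, and only codeword lengths (not the actual bitstreams) are computed.

```python
def rice_codeword_len(value, k):
    """Bit length of a non-negative integer under a Rice (Golomb power-of-2)
    split: unary quotient (value >> k, plus a stop bit) then k remainder bits."""
    return (value >> k) + 1 + k

def best_option(block, ks=range(0, 8)):
    """Pick the split parameter k minimizing the coded block length,
    mimicking the module's per-block adaptive choice among code options."""
    costs = {k: sum(rice_codeword_len(v, k) for v in block) for k in ks}
    k = min(costs, key=costs.get)
    return k, costs[k]

# a low-entropy block favors a small split; a high-entropy block a large one
low_entropy = [0, 1, 0, 0, 2, 1, 0, 0, 1, 0, 0, 0, 1, 0, 2, 0]
high_entropy = [37, 12, 55, 90, 3, 61, 28, 44, 70, 9, 33, 81, 17, 50, 5, 66]
k_low, bits_low = best_option(low_entropy)
k_high, bits_high = best_option(high_entropy)
```

    In the module described above, the chosen option identifier is transmitted with each block, which is what lets performance track the local entropy of the source.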

  1. Performance Analysis of New Binary User Codes for DS-CDMA Communication

    NASA Astrophysics Data System (ADS)

    Usha, Kamle; Jaya Sankar, Kottareddygari

    2016-03-01

    This paper analyzes new binary spreading codes through their correlation properties and also presents their performance over an additive white Gaussian noise (AWGN) channel. The proposed codes are constructed using gray and inverse gray codes. In this paper, 2n-length binary user codes constructed by appending an n-bit gray code with its n-bit inverse gray code are discussed. Like Walsh codes, these binary user codes are available in sizes of powers of two; additionally, code sets of length 6 and their even multiples are available. The simple construction technique and the generation of code sets of different sizes are the salient features of the proposed codes. Walsh codes and gold codes are considered for comparison in this paper, as these are popularly used for synchronous and asynchronous multi-user communications, respectively. In the current work the auto- and cross-correlation properties of the proposed codes are compared with those of Walsh codes and gold codes. Performance of the proposed binary user codes for both synchronous and asynchronous direct sequence CDMA communication over an AWGN channel is also discussed. The proposed binary user codes are found to be suitable for both synchronous and asynchronous DS-CDMA communication.
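    A sketch of the construction, assuming the standard n-bit Gray code g(i) = i XOR (i >> 1) and reading "inverse gray code" as its bitwise complement (an assumption for illustration; the paper's exact definition may differ). Under that reading, appending the complement makes every 2n-chip code word balanced (equal numbers of +1 and -1 chips), a desirable property for spreading codes.

```python
import numpy as np

def to_bits(x, n):
    """Most-significant-bit-first n-bit representation of x."""
    return [(x >> (n - 1 - i)) & 1 for i in range(n)]

def user_code(i, n):
    """2n-chip user code: n-bit Gray code of i followed by its bitwise
    complement (one plausible reading of 'inverse gray code')."""
    gray = to_bits(i ^ (i >> 1), n)
    inverse = [1 - b for b in gray]
    return np.array([1 if b else -1 for b in gray + inverse])

n = 3
codes = np.array([user_code(i, n) for i in range(2 ** n)])  # 8 codes, length 6

# every code word is balanced, giving zero DC bias
balanced = np.all(codes.sum(axis=1) == 0)
# normalized zero-lag correlation matrix between the codes
xc = codes @ codes.T / (2 * n)
```

    Note that n = 3 yields the length-6 code sets mentioned in the abstract, a size Walsh codes cannot provide.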

  2. A biological inspired fuzzy adaptive window median filter (FAWMF) for enhancing DNA signal processing.

    PubMed

    Ahmad, Muneer; Jung, Low Tan; Bhuiyan, Al-Amin

    2017-10-01

    Digital signal processing techniques commonly employ fixed-length window filters to process the signal contents. DNA signals differ in characteristics from common digital signals since they carry nucleotides as contents. Nucleotides carry genetic-code context and exhibit fuzzy behavior owing to their special structure and order in the DNA strand. Employing conventional fixed-length window filters for DNA signal processing produces spectral leakage and hence results in signal noise. A biological-context-aware adaptive window filter is required to process DNA signals. This paper introduces a biologically inspired fuzzy adaptive window median filter (FAWMF) that computes the fuzzy membership strength of nucleotides in each window slide and filters nucleotides based on median filtering with a combination of s-shaped and z-shaped filters. Since coding regions exhibit 3-base periodicity caused by an unbalanced nucleotide distribution, producing a relatively high bias in nucleotide usage, this fundamental characteristic has been exploited in FAWMF to suppress signal noise. Along with the adaptive response of FAWMF, a strong correlation between median nucleotides and the Π-shaped filter was observed, which produced enhanced discrimination between coding and non-coding regions contrary to fixed-length conventional window filters. The proposed FAWMF attains a significant enhancement in coding-region identification, i.e., 40% to 125%, compared to other conventional window filters tested over more than 250 benchmarked and randomly taken DNA datasets of different organisms. This study shows that conventional fixed-length window filters applied to DNA signals do not achieve significant results since the nucleotides carry genetic code context. The proposed FAWMF algorithm is adaptive and significantly outperforms fixed-length filters in processing DNA signal contents. Applied to a variety of DNA datasets, it produced noteworthy discrimination between coding and non-coding regions contrary to fixed-length conventional window filters. Copyright © 2017 Elsevier B.V. All rights reserved.
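
    The FAWMF itself is not reproduced here, but the 3-base periodicity it exploits can be demonstrated with a standard indicator-sequence spectral measure (a minimal sketch, not the paper's algorithm): the DFT power at frequency N/3 is high for coding-like sequences and low otherwise.

```python
import cmath

def period3_power(seq):
    """DFT power at frequency N/3, summed over the four base indicator
    sequences: a standard measure of 3-base periodicity."""
    N = len(seq)
    k = N // 3
    total = 0.0
    for base in "ACGT":
        X = sum(cmath.exp(-2j * cmath.pi * k * n / N)
                for n, c in enumerate(seq) if c == base)
        total += abs(X) ** 2
    return total

coding_like = "ATGGCA" * 20   # artificial repeat with strong period-3 structure
neutral = "ACGT" * 30         # period-4 repeat with no period-3 component
```

    A window-by-window version of this measure is what fixed-length filters estimate poorly near coding/non-coding boundaries, which is the leakage problem the adaptive filter addresses.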

  3. Viterbi decoding for satellite and space communication.

    NASA Technical Reports Server (NTRS)

    Heller, J. A.; Jacobs, I. M.

    1971-01-01

    Convolutional coding and Viterbi decoding, along with binary phase-shift keyed modulation, are presented as an efficient system for reliable communication on power-limited satellite and space channels. Performance results, obtained theoretically and through computer simulation, are given for optimum short constraint length codes for a range of code constraint lengths and code rates. System efficiency is compared for hard receiver quantization and 4- and 8-level soft quantization. The effects on performance of varying certain parameters relevant to decoder complexity and cost are examined. Quantitative performance degradation due to imperfect carrier phase coherence is evaluated and compared to that of an uncoded system. As an example of decoder performance versus complexity, a recently implemented 2-Mbit/sec constraint-length-7 Viterbi decoder is discussed. Finally, a comparison is made between Viterbi and sequential decoding in terms of suitability to various system requirements.
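
    As a toy companion to the system described (hard decisions only, and a much shorter code than those studied), here is a minimal Viterbi decoder for the classic rate-1/2, constraint-length-3 convolutional code with generators 7 and 5 (octal):

```python
# Rate-1/2, constraint-length-3 convolutional code, generators 7 and 5 (octal)
G = (0b111, 0b101)

def encode(bits):
    """Shift each input bit into a 3-bit register and emit two parity bits."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        out += [bin(state & g).count("1") & 1 for g in G]
    return out

def viterbi(received):
    """Hard-decision Viterbi decoding over 4 states (the last two input bits)."""
    INF = float("inf")
    metric = [0, INF, INF, INF]          # start in the all-zero state
    path = [[], [], [], []]
    for i in range(0, len(received), 2):
        r0, r1 = received[i], received[i + 1]
        new_metric = [INF] * 4
        new_path = [None] * 4
        for prev in range(4):
            if metric[prev] == INF:
                continue
            for b in (0, 1):
                reg = ((prev << 1) | b) & 0b111
                o0 = bin(reg & G[0]).count("1") & 1
                o1 = bin(reg & G[1]).count("1") & 1
                nxt = reg & 0b11
                m = metric[prev] + (o0 != r0) + (o1 != r1)   # Hamming metric
                if m < new_metric[nxt]:
                    new_metric[nxt] = m
                    new_path[nxt] = path[prev] + [b]
        metric, path = new_metric, new_path
    return path[min(range(4), key=lambda s: metric[s])]
```

    This code has free distance 5, so a single channel bit error per short frame is always corrected; soft (4- or 8-level) quantization would replace the Hamming branch metric with a quantized-likelihood one.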

  4. SAW correlator spread spectrum receiver

    DOEpatents

    Brocato, Robert W

    2014-04-01

    A surface acoustic wave (SAW) correlator spread-spectrum (SS) receiver is disclosed which utilizes a first demodulation stage with a chip length n and a second demodulation stage with a chip length m to decode a transmitted SS signal having a code length l = n×m which can be very long (e.g. up to 2000 chips or more). The first demodulation stage utilizes a pair of SAW correlators which demodulate the SS signal to generate an appropriate code sequence at an intermediate frequency which can then be fed into the second demodulation stage which can be formed from another SAW correlator, or by a digital correlator. A compound SAW correlator comprising two input transducers and a single output transducer is also disclosed which can be used to form the SAW correlator SS receiver, or for use in processing long code length signals.
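
    The two-stage idea, factoring a length l = n×m composite code into an inner length-n correlation followed by an outer length-m correlation over the inner outputs, can be sketched numerically (a simplified baseband model with hypothetical codes, not the SAW hardware):

```python
def correlate_two_stage(rx, inner, outer):
    """Despread a length n*m composite code in two stages, mirroring the
    cascade of short correlators: inner code first, then the outer code
    over the inner correlator's once-per-n-chips outputs."""
    n = len(inner)
    stage1 = [sum(rx[i * n + j] * inner[j] for j in range(n))
              for i in range(len(outer))]
    return sum(stage1[i] * outer[i] for i in range(len(outer)))

inner = [1, -1, 1, 1]                         # hypothetical length-4 inner code
outer = [1, 1, -1]                            # hypothetical length-3 outer code
tx = [o * c for o in outer for c in inner]    # composite code, length 12
peak = correlate_two_stage(tx, inner, outer)  # full correlation peak = n*m
```

    Each stage only ever correlates against a short code, which is what lets two modest correlators handle a code of length n×m.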

  5. Turbofan forced mixer-nozzle internal flowfield. Volume 1: A benchmark experimental study

    NASA Technical Reports Server (NTRS)

    Paterson, R. W.

    1982-01-01

    An experimental investigation of the flow field within a model turbofan forced mixer nozzle is described. Velocity and thermodynamic state variable data for use in assessing the accuracy and assisting the further development of computational procedures for predicting the flow field within mixer nozzles are provided. Velocity and temperature data suggested that the nozzle mixing process was dominated by circulations (secondary flows) of a length scale on the order of the lobe dimensions which were associated with strong radial velocities observed near the lobe exit plane. The 'benchmark' model mixer experiment conducted for code assessment purposes is discussed.

  6. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

    This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. The LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression comprises three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and discrete wavelet transformation. In both methods, most of the image information is contained in relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
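
    The benefit of the DPCM step can be sketched numerically (a synthetic scanline; the article's test images are not available here): differencing a smooth signal concentrates values near zero and lowers the first-order entropy that Huffman or LZW coding can approach.

```python
from collections import Counter
from math import log2

def entropy(values):
    """First-order (memoryless) entropy in bits per symbol."""
    n = len(values)
    return -sum(c / n * log2(c / n) for c in Counter(values).values())

# a smooth synthetic scanline: neighboring pixels are highly correlated
row = [min(255, 2 * i + (i % 3)) for i in range(256)]

# DPCM: keep the first pixel, then store differences from the previous pixel
diff = [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]
```

    Differencing collapses the alphabet to a few small values, so the first-order entropy, and hence the coded size any of the lossless methods above can reach, drops sharply.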

  7. Argument structure and the representation of abstract semantics.

    PubMed

    Rodríguez-Ferreiro, Javier; Andreu, Llorenç; Sanz-Torrent, Mònica

    2014-01-01

    According to the dual coding theory, differences in the ease of retrieval between concrete and abstract words are related to the exclusive dependence of abstract semantics on linguistic information. Argument structure can be considered a measure of the complexity of the linguistic contexts that accompany a verb. If the retrieval of abstract verbs relies more on the linguistic codes they are associated with, we could expect a larger effect of argument structure for the processing of abstract verbs. In this study, sets of length- and frequency-matched verbs including 40 intransitive verbs, 40 transitive verbs taking simple complements, and 40 transitive verbs taking sentential complements were presented in separate lexical and grammatical decision tasks. Half of the verbs were concrete and half were abstract. Similar results were obtained in the two tasks, with significant effects of imageability and transitivity. However, the interaction between these two variables was not significant. These results conflict with hypotheses assuming a stronger reliance of abstract semantics on linguistic codes. In contrast, our data are in line with theories that link the ease of retrieval with availability and robustness of semantic information.

  8. Video Transmission for Third Generation Wireless Communication Systems

    PubMed Central

    Gharavi, H.; Alamouti, S. M.

    2001-01-01

    This paper presents a twin-class, unequally protected video transmission system over wireless channels. Video partitioning based on a separation of the Variable Length Coded (VLC) Discrete Cosine Transform (DCT) coefficients within each block is considered for constant bit-rate (CBR) transmission. In the splitting process the fraction of bits assigned to each of the two partitions is adjusted according to the requirements of the unequal error protection scheme employed. Subsequently, partitioning is applied to the ITU-T H.263 coding standard. As a transport vehicle, we have considered one of the leading third generation cellular radio standards known as WCDMA. A dual-priority transmission system is then invoked on the WCDMA system where the video data, after being broken into two streams, is unequally protected. We use a very simple error correction coding scheme for illustration and then propose more sophisticated forms of unequal protection of the digitized video signals. We show that this strategy results in a significantly higher quality of the reconstructed video data when it is transmitted over time-varying multipath fading channels. PMID:27500033

  9. Desert bird associations with broad-scale boundary length: Applications in avian conservation

    USGS Publications Warehouse

    Gutzwiller, K.J.; Barrow, W.C.

    2008-01-01

    1. Current understanding regarding the effects of boundaries on bird communities has originated largely from studies of forest-non-forest boundaries in mesic systems. To assess whether broad-scale boundary length can affect bird community structure in deserts, and to identify patterns and predictors of species' associations useful in avian conservation, we studied relations between birds and boundary-length variables in Chihuahuan Desert landscapes. Operationally, a boundary was the border between two adjoining land covers, and broad-scale boundary length was the total length of such borders in a large area. 2. Within 2-km radius areas, we measured six boundary-length variables. We analysed bird-boundary relations for 26 species, tested for assemblage-level patterns in species' associations with boundary-length variables, and assessed whether body size, dispersal ability and cowbird-host status were correlates of these associations. 3. The abundances or occurrences of a significant majority of species were associated with boundary-length variables, and similar numbers of species were related positively and negatively to boundary-length variables. 4. Disproportionately small numbers of species were correlated with total boundary length, land-cover boundary length and shrubland-grassland boundary length (variables responsible for large proportions of boundary length). Disproportionately large numbers of species were correlated with roadside boundary length and riparian vegetation-grassland boundary length (variables responsible for small proportions of boundary length). Roadside boundary length was associated (positively and negatively) with the most species. 5. Species' associations with boundary-length variables were not correlated with body size, dispersal ability or cowbird-host status. 6. Synthesis and applications. 
For the species we studied, conservationists can use the regressions we report as working models to anticipate influences of boundary-length changes on bird abundance and occurrence, and to assess avifaunal composition for areas under consideration for protection. Boundary-length variables associated with a disproportionate or large number of species can be used as foci for landscape management. Assessing the underlying causes of bird-boundary relations may improve the prediction accuracy of associated models. We therefore advocate local- and broad-scale manipulative experiments involving the boundary types with which species were correlated, as indicated by the regressions. © 2008 The Authors.

  10. Resident Participation in Fixation of Intertrochanteric Hip Fractures: Analysis of the NSQIP Database.

    PubMed

    Neuwirth, Alexander L; Stitzlein, Russell N; Neuwirth, Madalyn G; Kelz, Rachel K; Mehta, Samir

    2018-01-17

    Future generations of orthopaedic surgeons must continue to be trained in the surgical management of hip fractures. This study assesses the effect of resident participation on outcomes for the treatment of intertrochanteric hip fractures. The National Surgical Quality Improvement Program (NSQIP) database (2010 to 2013) was queried for intertrochanteric hip fractures (International Classification of Diseases, 9th Revision, Clinical Modification [ICD-9-CM] code 820.21) treated with either extramedullary (Current Procedural Terminology [CPT] code 27244) or intramedullary (CPT code 27245) fixation. Demographic variables, including resident participation, as well as primary (death and serious morbidity) and secondary outcome variables were extracted for analysis. Univariate, propensity score-matched, and multivariate logistic regression analyses were performed to evaluate outcome variables. Data on resident participation were available for 1,764 cases (21.0%). Univariate analyses for all intertrochanteric hip fractures demonstrated no significant difference in 30-day mortality (6.3% versus 7.8%; p = 0.264) or serious morbidity (44.9% versus 43.2%; p = 0.506) between the groups with and without resident participation. Multivariate and propensity score-matched analyses gave similar results. Resident involvement was associated with prolonged operating-room time, length of stay, and time to discharge when a prolonged case was defined as one above the 90th percentile for time parameters. Resident participation was not associated with an increase in morbidity or mortality but was associated with an increase in time-related secondary outcome measures. While attending surgeon supervision is necessary, residents can and should be involved in the care of these patients without concern that resident involvement negatively impacts perioperative morbidity and mortality. Therapeutic Level III. See Instructions for Authors for a complete description of levels of evidence.

  11. Increased length of inpatient stay and poor clinical coding: audit of patients with diabetes.

    PubMed

    Daultrey, Harriet; Gooday, Catherine; Dhatariya, Ketan

    2011-11-01

    People with diabetes stay in hospital for longer than those without diabetes for similar conditions. Clinical coding is poor across all specialties. Inpatients with diabetes often have unrecognized foot problems. We wanted to look at the relationships between these factors. A single day audit, looking at the prevalence of diabetes in all adult inpatients. Also looking at their feet to find out how many were high-risk or had existing problems. A 998-bed university teaching hospital. All adult inpatients. (a) To see if patients with diabetes and foot problems were in hospital for longer than the national average length of stay compared with national data; (b) to see if there were people in hospital with acute foot problems who were not known to the specialist diabetic foot team; and (c) to assess the accuracy of clinical coding. We identified 110 people with diabetes. However, discharge coding data for inpatients on that day showed 119 people with diabetes. Length of stay (LOS) was substantially higher for those with diabetes compared to those without (± SD) at 22.39 (22.26) days, vs. 11.68 (6.46) (P < 0.001). Finally, clinical coding was poor with some people who had been identified as having diabetes on the audit, who were not coded as such on discharge. Clinical coding - which is dependent on discharge summaries - poorly reflects diagnoses. Additionally, length of stay is significantly longer than previous estimates. The discrepancy between coding and diagnosis needs addressing by increasing the levels of awareness and education of coders and physicians. We suggest that our data be used by healthcare planners when deciding on future tariffs.

  12. Increased length of inpatient stay and poor clinical coding: audit of patients with diabetes

    PubMed Central

    Daultrey, Harriet; Gooday, Catherine; Dhatariya, Ketan

    2011-01-01

    Objectives People with diabetes stay in hospital for longer than those without diabetes for similar conditions. Clinical coding is poor across all specialties. Inpatients with diabetes often have unrecognized foot problems. We wanted to look at the relationships between these factors. Design A single day audit, looking at the prevalence of diabetes in all adult inpatients. Also looking at their feet to find out how many were high-risk or had existing problems. Setting A 998-bed university teaching hospital. Participants All adult inpatients. Main outcome measures (a) To see if patients with diabetes and foot problems were in hospital for longer than the national average length of stay compared with national data; (b) to see if there were people in hospital with acute foot problems who were not known to the specialist diabetic foot team; and (c) to assess the accuracy of clinical coding. Results We identified 110 people with diabetes. However, discharge coding data for inpatients on that day showed 119 people with diabetes. Length of stay (LOS) was substantially higher for those with diabetes compared to those without (± SD) at 22.39 (22.26) days, vs. 11.68 (6.46) (P < 0.001). Finally, clinical coding was poor with some people who had been identified as having diabetes on the audit, who were not coded as such on discharge. Conclusion Clinical coding – which is dependent on discharge summaries – poorly reflects diagnoses. Additionally, length of stay is significantly longer than previous estimates. The discrepancy between coding and diagnosis needs addressing by increasing the levels of awareness and education of coders and physicians. We suggest that our data be used by healthcare planners when deciding on future tariffs. PMID:22140609

  13. Resource utilization in primary repair of cleft lip.

    PubMed

    Owusu, James A; Liu, Meixia; Sidman, James D; Scott, Andrew R

    2013-03-01

    To determine national variations in resource utilization for primary repair of cleft lip, identify patient and institutional factors associated with high resource use, and estimate the current incidence of cleft lip in the United States. Retrospective analysis of a national, pediatric database (2009 Kids' Inpatient Database [KID]). Patients aged 1 year and younger were selected using international classification of disease codes for cleft lip and procedure codes for cleft lip repair. A number of demographic variables were analyzed, and hospital charges were considered as a measure of resource utilization. There were 1318 patients identified. The national incidence was 0.09%, with a male to female ratio of 1.8:1. Regional incidence varied from 0.07% (Northeast) to 0.10% (West). The mean age at surgery was 4.2 months. The average length of stay was 1.4 days. The national average hospital charge was $20,147, ranging from $14,635 (South) to $23,663 (West). Teaching hospitals charge an average of $9764 higher than nonteaching hospitals. The strongest predictor of charge was length of stay, increasing charge by $8102 for every additional hospital day (P < .01). Regional variations exist in resource utilization for primary cleft lip repair. Resource use is higher in the West and among teaching hospitals.

  14. A Very Efficient Transfer Function Bounding Technique on Bit Error Rate for Viterbi Decoded, Rate 1/N Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1984-01-01

    For rate 1/N convolutional codes, a recursive algorithm for finding the transfer function bound on bit error rate (BER) at the output of a Viterbi decoder is described. This technique is very fast and requires very little storage since all the unnecessary operations are eliminated. Using this technique, we find and plot bounds on the BER performance of known codes of rate 1/2 with K ≤ 18 and rate 1/3 with K ≤ 14. When more than one reported code with the same parameter is known, we select the code that minimizes the required signal-to-noise ratio for a desired bit error rate of 0.000001. This criterion of determining the goodness of a code had previously been found to be more useful than the maximum free distance criterion and was used in the code search procedures for very short constraint length codes. This very efficient technique can also be used for searches of longer constraint length codes.

  15. Variable-Length Computerized Adaptive Testing: Adaptation of the A-Stratified Strategy in Item Selection with Content Balancing

    ERIC Educational Resources Information Center

    Huo, Yan

    2009-01-01

    Variable-length computerized adaptive testing (CAT) can provide examinees with tailored test lengths. With the fixed standard error of measurement ("SEM") termination rule, variable-length CAT can achieve predetermined measurement precision by using relatively shorter tests compared to fixed-length CAT. To explore the application of…

  16. Utilization trends in inpatient endoscopic retrograde cholangiopancreatography (ERCP): A cross-sectional US experience

    PubMed Central

    Ahmed, Moiz; Kanotra, Ritesh; Savani, Ghanshyambhai T.; Kotadiya, Fenilkumar; Patel, Nileshkumar; Tareen, Sarah; Fasullo, Matthew J.; Kesavan, Mayurathan; Kahn, Ahsan; Nalluri, Nikhil; Khan, Hafiz M.; Pau, Dhaval; Abergel, Jeffrey; Deeb, Liliane; Andrawes, Sherif; Das, Ananya

    2017-01-01

    Study aims The goal of our study was to determine the current trends for inpatient utilization for endoscopic retrograde cholangiopancreatography (ERCP) and its economic impact in the United States between 2002 and 2013. Patients and methods A Nationwide Inpatient Sample from 2002 through 2013 was examined. We identified ERCPs using International Classification of Diseases (ICD-9) codes; procedure codes 51.10, 51.11, 52.13, 51.14, 51.15, 52.14, and 52.92 for diagnostic ERCP, and codes 51.84, 51.86, and 52.97 for therapeutic ERCP, were studied. The rate of inpatient ERCP was calculated. The trends for therapeutic ERCPs were compared to the diagnostic ones. We analyzed patient and hospital characteristics, length of hospital stay, and cost of care after adjusting for weighted samples. We used the Cochran-Armitage test for categorical variables and linear regression for continuous variables. Results A total of 411,409 ERCPs were performed from 2002 to 2013. The mean age was 59 ± 19 years; 61 % were female and 57 % were white. The total number of ERCPs increased by 12 % from 2002 to 2011, which was followed by a 10 % decrease in the number of ERCPs between 2011 and 2013. There was a significant increase in therapeutic ERCPs by 37 %, and a decrease in diagnostic ERCPs by 57 % from 2002 to 2013. Mean length of stay was 7 days (SE = 0.01) and the mean cost of hospitalization was $20,022 (SE = 41). Conclusions Our large cross-sectional study shows a significant shift in ERCPs towards therapeutic indications and a decline in its conventional diagnostic utility. Overall there has been a reduction in inpatient ERCPs. PMID:28382324

  17. On decoding of multi-level MPSK modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Gupta, Alok Kumar

    1990-01-01

    The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short-length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder by reducing the branch metric and path metric, using a non-uniform floating-point to integer mapping scheme, is proposed and discussed. The simulation results of the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that soft-decision MSD reduces the decoding complexity drastically and is suboptimum. Hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard-decision multistage decoding.

  18. Administrative Databases Can Yield False Conclusions-An Example of Obesity in Total Joint Arthroplasty.

    PubMed

    George, Jaiben; Newman, Jared M; Ramanathan, Deepak; Klika, Alison K; Higuera, Carlos A; Barsoum, Wael K

    2017-09-01

    Research using large administrative databases has substantially increased in recent years. Accuracy with which comorbidities are represented in these databases has been questioned. The purpose of this study was to evaluate the extent of errors in obesity coding and its impact on arthroplasty research. Eighteen thousand thirty primary total knee arthroplasties (TKAs) and 10,475 total hip arthroplasties (THAs) performed at a single healthcare system from 2004-2014 were included. Patients were classified as obese or nonobese using 2 methods: (1) body mass index (BMI) ≥30 kg/m² and (2) International Classification of Diseases, 9th Revision codes. Length of stay, operative time, and 90-day complications were collected. The effect of obesity on various outcomes was analyzed separately for both BMI- and coding-based obesity. From 2004 to 2014, the prevalence of BMI-based obesity increased from 54% to 63% and 40% to 45% in TKA and THA, respectively. The prevalence of coding-based obesity increased from 15% to 28% and 8% to 17% in TKA and THA, respectively. Coding overestimated the growth of obesity in TKA and THA by 5.6 and 8.4 times, respectively. When obesity was defined by coding, obesity was falsely shown to be a significant risk factor for deep vein thrombosis (TKA), pulmonary embolism (THA), and longer hospital stay (TKA and THA). The growth in obesity observed in administrative databases may be an artifact because of improvements in coding over the years. Obesity defined by coding can overestimate the actual effect of obesity on complications after arthroplasty. Therefore, studies using large databases should be interpreted with caution, especially when variables prone to coding errors are involved. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. The Mitochondrial Cytochrome Oxidase Subunit I Gene Occurs on a Minichromosome with Extensive Heteroplasmy in Two Species of Chewing Lice, Geomydoecus aurei and Thomomydoecus minor

    PubMed Central

    Pietan, Lucas L.; Spradling, Theresa A.

    2016-01-01

    In animals, mitochondrial DNA (mtDNA) typically occurs as a single circular chromosome with 13 protein-coding genes and 22 tRNA genes. The various species of lice examined previously, however, have shown mitochondrial genome rearrangements with a range of chromosome sizes and numbers. Our research demonstrates that the mitochondrial genomes of two species of chewing lice found on pocket gophers, Geomydoecus aurei and Thomomydoecus minor, are fragmented with the 1,536 base-pair (bp) cytochrome-oxidase subunit I (cox1) gene occurring as the only protein-coding gene on a 1,916–1,964 bp minicircular chromosome in the two species, respectively. The cox1 gene of T. minor begins with an atypical start codon, while that of G. aurei does not. Components of the non-protein coding sequence of G. aurei and T. minor include a tRNA (isoleucine) gene, inverted repeat sequences consistent with origins of replication, and an additional non-coding region that is smaller than the non-coding sequence of other lice with such fragmented mitochondrial genomes. Sequences of cox1 minichromosome clones for each species reveal extensive length and sequence heteroplasmy in both coding and noncoding regions. The highly variable non-gene regions of G. aurei and T. minor have little sequence similarity with one another except for a 19-bp region of phylogenetically conserved sequence with unknown function. PMID:27589589

  20. Maximum-likelihood soft-decision decoding of block codes using the A* algorithm

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.

    1994-01-01

    The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.
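
    The search can be sketched for a toy code (a (7,4) Hamming code standing in for the up-to-64-bit codes; this is an assumption-laden miniature, not the article's implementation). Bits are decided most-reliable-first, the path cost is the correlation lost relative to the unconstrained bitwise choice, and a zero heuristic is admissible, so the first completed node popped is maximum-likelihood:

```python
import heapq
from itertools import count

# (7,4) Hamming code in systematic form; stands in for the longer block codes
G_ROWS = (0b1000110, 0b0100101, 0b0010011, 0b0001111)
CODEBOOK = []
for m in range(16):
    w = 0
    for bit, row in enumerate(G_ROWS):
        if (m >> bit) & 1:
            w ^= row
    CODEBOOK.append(tuple((w >> (6 - i)) & 1 for i in range(7)))

def astar_ml_decode(r):
    """ML soft-decision decoding by best-first search over codeword bits.
    r[i] > 0 favors bit 0 (BPSK convention); each step cost is non-negative."""
    order = sorted(range(7), key=lambda i: -abs(r[i]))   # most reliable first
    tick = count()                  # tie-breaker so dicts are never compared
    heap = [(0.0, 0, next(tick), {})]
    while heap:
        cost, depth, _, fixed = heapq.heappop(heap)
        if depth == 7:              # all bits decided: the fitting codeword is ML
            return next(c for c in CODEBOOK
                        if all(c[i] == v for i, v in fixed.items()))
        i = order[depth]
        for bit in (0, 1):
            trial = dict(fixed)
            trial[i] = bit
            # prune partial codewords that no codeword can complete
            if any(all(c[j] == v for j, v in trial.items()) for c in CODEBOOK):
                step = abs(r[i]) - r[i] * (1 - 2 * bit)
                heapq.heappush(heap, (cost + step, depth + 1, next(tick), trial))
```

    For long codes the consistency check would use the code's structure rather than an explicit codebook, but the search order and metric are the same idea: reliable positions cost nothing when they agree with the received signs, so the ML codeword is usually reached after expanding only a few nodes.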

  1. Verification of relationships between anthropometric variables among ureteral stents recipients and ureteric lengths: a challenge for Vitruvian-da Vinci theory.

    PubMed

    Acelam, Philip A

    2015-01-01

    To determine and verify how anthropometric variables correlate to ureteric lengths and how well statistical models approximate the actual ureteric lengths. In this work, 129 charts of endourological patients (71 females and 58 males) were studied retrospectively. Data were gathered from various research centers from North and South America. Continuous data were studied using descriptive statistics. Anthropometric variables (age, body surface area, body weight, obesity, and stature) were utilized as predictors of ureteric lengths. Linear regressions and correlations were used for studying relationships between the predictors and the outcome variables (ureteric lengths); P-value was set at 0.05. To assess how well statistical models were capable of predicting the actual ureteric lengths, percentages (or ratios of matched to mismatched results) were employed. The results of the study show that anthropometric variables do not correlate well to ureteric lengths. Statistical models can partially estimate ureteric lengths. Out of the five anthropometric variables studied, three of them: body frame, stature, and weight, each with a P<0.0001, were significant. Two of the variables: age (R2=0.01; P=0.20) and obesity (R2=0.03; P=0.06), were found to be poor estimators of ureteric lengths. None of the predictors reached the expected (match:above:below) ratio of 1:0:0 to qualify as reliable predictors of ureteric lengths. There is not sufficient evidence to conclude that anthropometric variables can reliably predict ureteric lengths. These variables appear to lack adequate specificity as they failed to reach the expected (match:above:below) ratio of 1:0:0. Consequently, selections of ureteral stents continue to remain a challenge. However, height (R2=0.68) with the (match:above:below) ratio of 3:3:4 appears suited for use as estimator, but on the basis of decision rule. Additional research is recommended for stent improvements and ureteric length determinations.

  2. Verification of relationships between anthropometric variables among ureteral stents recipients and ureteric lengths: a challenge for Vitruvian-da Vinci theory

    PubMed Central

    Acelam, Philip A

    2015-01-01

    Objective To determine and verify how anthropometric variables correlate to ureteric lengths and how well statistical models approximate the actual ureteric lengths. Materials and methods In this work, 129 charts of endourological patients (71 females and 58 males) were studied retrospectively. Data were gathered from various research centers from North and South America. Continuous data were studied using descriptive statistics. Anthropometric variables (age, body surface area, body weight, obesity, and stature) were utilized as predictors of ureteric lengths. Linear regressions and correlations were used for studying relationships between the predictors and the outcome variables (ureteric lengths); P-value was set at 0.05. To assess how well statistical models were capable of predicting the actual ureteric lengths, percentages (or ratios of matched to mismatched results) were employed. Results The results of the study show that anthropometric variables do not correlate well to ureteric lengths. Statistical models can partially estimate ureteric lengths. Out of the five anthropometric variables studied, three of them: body frame, stature, and weight, each with a P<0.0001, were significant. Two of the variables: age (R2=0.01; P=0.20) and obesity (R2=0.03; P=0.06), were found to be poor estimators of ureteric lengths. None of the predictors reached the expected (match:above:below) ratio of 1:0:0 to qualify as reliable predictors of ureteric lengths. Conclusion There is not sufficient evidence to conclude that anthropometric variables can reliably predict ureteric lengths. These variables appear to lack adequate specificity as they failed to reach the expected (match:above:below) ratio of 1:0:0. Consequently, selections of ureteral stents continue to remain a challenge. However, height (R2=0.68) with the (match:above:below) ratio of 3:3:4 appears suited for use as estimator, but on the basis of decision rule. 
Additional research is recommended for stent improvements and ureteric length determinations. PMID:26317082
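The regression-and-R2 machinery used in this record can be sketched in a few lines of ordinary least squares; the height and ureteric-length values below are hypothetical illustrations, not data from the study.

```python
# Minimal sketch: ordinary least-squares fit of ureteric length on height,
# with R^2 as the fraction of variance explained. The data points are
# hypothetical illustrations, not values from the study.

def ols_fit(xs, ys):
    """Return slope, intercept, and R^2 for a simple linear regression."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1.0 - ss_res / ss_tot
    return slope, intercept, r2

heights = [150, 160, 165, 170, 180, 185]        # cm (hypothetical)
ureters = [22.0, 23.5, 24.0, 25.0, 26.5, 27.0]  # cm (hypothetical)
slope, intercept, r2 = ols_fit(heights, ureters)
print(round(slope, 3), round(r2, 3))
```

An R2 near the study's reported 0.68 would mean about two-thirds of the variance in ureteric length is explained by height alone, which is why the record treats height as a candidate estimator only under a decision rule.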

  3. Desmoglein 4 diversity and correlation analysis with coat color in goat.

    PubMed

    E, G X; Zhao, Y J; Ma, Y H; Cao, G L; He, J N; Na, R S; Zhao, Z Q; Jiang, C D; Zhang, J H; Arlvd, S; Chen, L P; Qiu, X Y; Hu, W; Huang, Y F

    2016-03-04

Desmoglein 4 (DSG4) has an important role in the development of wool traits in domestic animals. The full-length DSG4 gene, which spans 3918 bp, contains a complete open reading frame, and encodes a 1040-amino-acid protein, was amplified from the Liaoning cashmere goat. The sequence was compared with that of DSG4 from other animals, and the results show that the DSG4 coding region is conserved across species. Thirteen single-nucleotide polymorphisms (SNPs) were identified in a highly variable region of DSG4, and one SNP (M-1, G>T) was significantly correlated with white versus black coat color in goat. Haplotype distribution of the highly variable region of DSG4 was assessed in 179 individuals from seven goat breeds to investigate its association with coat color and its differentiation among populations. However, the lack of a signature result suggests that DSG4 haplotypes are only weakly related to goat coat color.

  4. Recent advances in coding theory for near error-free communications

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.

    1991-01-01

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.

  5. Augmented burst-error correction for UNICON laser memory. [digital memory

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1974-01-01

    A single-burst-error correction system is described for data stored in the UNICON laser memory. In the proposed system, a long fire code with code length n greater than 16,768 bits was used as an outer code to augment an existing inner shorter fire code for burst error corrections. The inner fire code is a (80,64) code shortened from the (630,614) code, and it is used to correct a single-burst-error on a per-word basis with burst length b less than or equal to 6. The outer code, with b less than or equal to 12, would be used to correct a single-burst-error on a per-page basis, where a page consists of 512 32-bit words. In the proposed system, the encoding and error detection processes are implemented by hardware. A minicomputer, currently used as a UNICON memory management processor, is used on a time-demanding basis for error correction. Based upon existing error statistics, this combination of an inner code and an outer code would enable the UNICON system to obtain a very low error rate in spite of flaws affecting the recorded data.
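The burst length b by which both Fire codes above are specified can be illustrated with a small sketch: a single burst's length is the span from its first to its last erroneous bit, inclusive. The error pattern below is a made-up example, not UNICON data.

```python
# Minimal sketch: the burst length of a binary error pattern is the span from
# its first to its last erroneous bit, inclusive. A burst-b code such as the
# (80,64) inner Fire code above can correct any single burst of span <= b.

def burst_length(error_bits):
    """Span of the nonzero region of an error pattern (0 if error-free)."""
    positions = [i for i, bit in enumerate(error_bits) if bit]
    if not positions:
        return 0
    return positions[-1] - positions[0] + 1

# A burst spanning positions 3..6 has length 4: correctable with b <= 6,
# and also by the outer code with b <= 12.
print(burst_length([0, 0, 0, 1, 0, 1, 1, 0]))  # -> 4
```

Note that cyclic burst-correcting codes also count bursts that wrap around the end of the codeword; this sketch uses only the simple non-wrapping definition.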

  6. Implementation of a tree algorithm in MCNP code for nuclear well logging applications.

    PubMed

    Li, Fusheng; Han, Xiaogang

    2012-07-01

The goal of this paper is to develop some modeling capabilities that are missing in the current MCNP code. These missing capabilities can greatly help with certain nuclear tool designs, such as a nuclear lithology/mineralogy spectroscopy tool. The new capabilities developed in this paper include the following: zone tally, neutron interaction tally, gamma-ray index tally, and enhanced pulse-height tally. The patched MCNP code can also be used to compute the neutron slowing-down length and the thermal neutron diffusion length. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Phonological, visual, and semantic coding strategies and children's short-term picture memory span.

    PubMed

    Henry, Lucy A; Messer, David; Luger-Klein, Scarlett; Crane, Laura

    2012-01-01

    Three experiments addressed controversies in the previous literature on the development of phonological and other forms of short-term memory coding in children, using assessments of picture memory span that ruled out potentially confounding effects of verbal input and output. Picture materials were varied in terms of phonological similarity, visual similarity, semantic similarity, and word length. Older children (6/8-year-olds), but not younger children (4/5-year-olds), demonstrated robust and consistent phonological similarity and word length effects, indicating that they were using phonological coding strategies. This confirmed findings initially reported by Conrad (1971), but subsequently questioned by other authors. However, in contrast to some previous research, little evidence was found for a distinct visual coding stage at 4 years, casting doubt on assumptions that this is a developmental stage that consistently precedes phonological coding. There was some evidence for a dual visual and phonological coding stage prior to exclusive use of phonological coding at around 5-6 years. Evidence for semantic similarity effects was limited, suggesting that semantic coding is not a key method by which young children recall lists of pictures.

  8. Structure-related statistical singularities along protein sequences: a correlation study.

    PubMed

    Colafranceschi, Mauro; Colosimo, Alfredo; Zbilut, Joseph P; Uversky, Vladimir N; Giuliani, Alessandro

    2005-01-01

    A data set composed of 1141 proteins representative of all eukaryotic protein sequences in the Swiss-Prot Protein Knowledge base was coded by seven physicochemical properties of amino acid residues. The resulting numerical profiles were submitted to correlation analysis after the application of a linear (simple mean) and a nonlinear (Recurrence Quantification Analysis, RQA) filter. The main RQA variables, Recurrence and Determinism, were subsequently analyzed by Principal Component Analysis. The RQA descriptors showed that (i) within protein sequences is embedded specific information neither present in the codes nor in the amino acid composition and (ii) the most sensitive code for detecting ordered recurrent (deterministic) patterns of residues in protein sequences is the Miyazawa-Jernigan hydrophobicity scale. The most deterministic proteins in terms of autocorrelation properties of primary structures were found (i) to be involved in protein-protein and protein-DNA interactions and (ii) to display a significantly higher proportion of structural disorder with respect to the average data set. A study of the scaling behavior of the average determinism with the setting parameters of RQA (embedding dimension and radius) allows for the identification of patterns of minimal length (six residues) as possible markers of zones specifically prone to inter- and intramolecular interactions.
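The two RQA descriptors named above, Recurrence and Determinism, can be sketched directly from their definitions: the density of recurrent points, and the fraction of recurrent points lying on diagonal lines of at least a minimal length. The series and radius below are hypothetical; the study coded residues with physicochemical scales such as Miyazawa-Jernigan hydrophobicity.

```python
# Minimal sketch of Recurrence (%REC) and Determinism (%DET) over a numeric
# profile. rec[i][j] marks pairs within `radius`; %DET counts recurrent points
# on off-main-diagonal lines of length >= lmin. Values are illustrative.

def rqa(series, radius=0.1, lmin=2):
    n = len(series)
    rec = [[abs(series[i] - series[j]) <= radius for j in range(n)]
           for i in range(n)]
    total = sum(rec[i][j] for i in range(n) for j in range(n) if i != j)
    if total == 0:
        return 0.0, 0.0
    det_points = 0
    for d in range(1, n):                      # scan both off-diagonals
        for di, dj in ((d, 0), (0, d)):
            run = 0
            for k in range(n - d):
                if rec[k + di][k + dj]:
                    run += 1
                else:
                    if run >= lmin:
                        det_points += run
                    run = 0
            if run >= lmin:
                det_points += run
    return total / (n * (n - 1)), det_points / total

rec_rate, det = rqa([0.1, 0.5, 0.1, 0.5, 0.1, 0.9])
print(round(rec_rate, 3), round(det, 3))  # recurrence ~0.267, determinism 0.75
```

High %DET relative to %REC is exactly the kind of ordered recurrent patterning the study associates with interaction-prone sequence zones.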

  9. Convolutional coding combined with continuous phase modulation

    NASA Technical Reports Server (NTRS)

    Pizzi, S. V.; Wilson, S. G.

    1985-01-01

    Background theory and specific coding designs for combined coding/modulation schemes utilizing convolutional codes and continuous-phase modulation (CPM) are presented. In this paper the case of r = 1/2 coding onto a 4-ary CPM is emphasized, with short-constraint length codes presented for continuous-phase FSK, double-raised-cosine, and triple-raised-cosine modulation. Coding buys several decibels of coding gain over the Gaussian channel, with an attendant increase of bandwidth. Performance comparisons in the power-bandwidth tradeoff with other approaches are made.
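A rate-1/2 convolutional encoder of the short-constraint-length kind studied here can be sketched as a shift register with two parity taps; the generators (7, 5) in octal and the input bits are illustrative choices (a standard textbook pair for constraint length 3), and the CPM stage is not modeled.

```python
# Minimal sketch of a rate-1/2 convolutional encoder, constraint length 3,
# generators (7, 5) octal. Each input bit yields two output bits: the parities
# of the register bits selected by each generator. The CPM mapping used in the
# record above is not modeled here.

def conv_encode(bits, g1=0b111, g2=0b101):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111       # 3-bit shift register
        out.append(bin(state & g1).count("1") % 2)  # parity over taps of g1
        out.append(bin(state & g2).count("1") % 2)  # parity over taps of g2
    return out

print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 0, 0, 0, 1]
```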

  10. Tuning iteration space slicing based tiled multi-core code implementing Nussinov's RNA folding.

    PubMed

    Palkowski, Marek; Bielecki, Wlodzimierz

    2018-01-15

RNA folding is an ongoing compute-intensive task of bioinformatics. Parallelization and improving code locality for this kind of algorithm are among the most relevant areas in computational biology. Fortunately, RNA secondary structure approaches, such as Nussinov's recurrence, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. This allows us to apply powerful polyhedral compilation techniques based on the transitive closure of dependence graphs to generate parallel tiled code implementing Nussinov's RNA folding. Such techniques fall within the iteration space slicing framework: the transitive dependences are applied to the statement instances of interest to produce valid tiles. The main problem in generating parallel tiled code is choosing a proper tile size and tile dimension, which affect the degree of parallelism and code locality. To choose the best tile size and tile dimension, we first construct parallel parametric tiled code (the parameters are variables defining tile size). To this end, we generate two nonparametric tiled codes with different fixed tile sizes but the same code structure, and then derive a general affine model describing all integer factors appearing in the expressions of those codes. Using this model and the known integer factors present in those expressions (they define the left-hand side of the model), we solve for the unknown integers corresponding to each integer factor occurring at the same position in the fixed tiled code, and replace the expressions containing integer factors with expressions containing parameters. We then use this parallel parametric tiled code to implement the well-known tile size selection (TSS) technique, which allows us to discover, within a given search space, the tile size and tile dimension that maximize target code performance.
For a given search space, the presented approach selects the best tile size and tile dimension for parallel tiled code implementing Nussinov's RNA folding. Experimental results obtained on modern Intel multi-core processors demonstrate that this code outperforms known closely related implementations when the length of the RNA strands exceeds 2500.
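Nussinov's recurrence itself, whose affine loop nest is what the tiling above operates on, can be sketched as the textbook O(n³) dynamic program (untiled, and with no minimum hairpin length, which real folding tools usually impose):

```python
# Minimal sketch of Nussinov's recurrence: N[i][j] is the maximum number of
# non-crossing base pairs in seq[i..j]. Either i is unpaired, or i pairs with
# some k, splitting the problem in two. Untiled, no minimum loop length.

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov(seq):
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = N[i + 1][j]                 # leave position i unpaired
            for k in range(i + 1, j + 1):      # or pair i with k
                if (seq[i], seq[k]) in PAIRS:
                    left = N[i + 1][k - 1] if k > i + 1 else 0
                    right = N[k + 1][j] if k < j else 0
                    best = max(best, 1 + left + right)
            N[i][j] = best
    return N[0][n - 1] if n else 0

print(nussinov("GGGAAACCC"))  # three G-C pairs -> 3
```

The triangular dependence pattern visible here (each cell needs cells below and to its left) is what makes valid tiling nontrivial and motivates the transitive-closure approach of the paper.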

  11. Using Different Standardized Methods for Species Identification: A Case Study Using Beaks from Three Ommastrephid Species

    NASA Astrophysics Data System (ADS)

    Hu, Guanyu; Fang, Zhou; Liu, Bilin; Chen, Xinjun; Staples, Kevin; Chen, Yong

    2018-04-01

The cephalopod beak is a vital hard structure with a stable configuration and has been widely used for the identification of cephalopod species. This study was conducted to determine the best standardization method for identifying different species by measuring 12 morphological variables of the beaks of Illex argentinus, Ommastrephes bartramii, and Dosidicus gigas that were collected by Chinese jigging vessels. To remove the effects of size, these morphometric variables were standardized using three methods. The average ratios of the upper beak morphological variables and upper crest length of O. bartramii and D. gigas were found to be greater than those of I. argentinus. However, for lower beaks, only the average of LRL (lower rostrum length)/LCL (lower crest length), LRW (lower rostrum width)/LCL, and LLWL (lower lateral wall length)/LCL of O. bartramii and D. gigas were greater than those of I. argentinus. The ratios of beak morphological variables and crest length were found to be all significantly different among the three species (P < 0.001). Among the three standardization methods, the correct classification rate of stepwise discriminant analysis (SDA) was the highest using the ratios of beak morphological variables and crest length. Compared with hood length, the correct classification rate was slightly higher when using beak variables standardized by crest length using an allometric model. The correct classification rate of the lower beak was also found to be greater than that of the upper beak. This study indicates that the ratios of beak morphological variables to crest length could be used for interspecies and intraspecies identification. Meanwhile, the lower beak variables were found to be more effective than upper beak variables in classifying beaks found in the stomachs of predators.

  12. Electromagnetic plasma simulation in realistic geometries

    NASA Astrophysics Data System (ADS)

    Brandon, S.; Ambrosiano, J. J.; Nielsen, D.

    1991-08-01

Particle-in-Cell (PIC) calculations have become an indispensable tool to model the nonlinear collective behavior of charged particle species in electromagnetic fields. Traditional finite difference codes, such as CONDOR (2-D) and ARGUS (3-D), are used extensively to design experiments and develop new concepts. A wide variety of physical processes can be modeled simply and efficiently by these codes. However, experiments have become more complex. Geometrical shapes and length scales are becoming increasingly difficult to model. Spatial resolution requirements for the electromagnetic calculation force large grids and small time steps. Many hours of CRAY YMP time may be required to complete a 2-D calculation, and many more for 3-D calculations. In principle, the number of mesh points and particles need only be increased until all relevant physical processes are resolved. In practice, the size of a calculation is limited by the computer budget. As a result, experimental design is being limited by the ability to calculate, not by the experimenter's ingenuity or understanding of the physical processes involved. Several approaches to meet these computational demands are being pursued. Traditional PIC codes continue to be the major design tools. These codes are being actively maintained, optimized, and extended to handle larger and more complex problems. Two new formulations are being explored to relax the geometrical constraints of the finite difference codes. A modified finite volume test code, TALUS, uses a data structure compatible with that of standard finite difference meshes. This allows a basic conformal boundary/variable grid capability to be retrofitted to CONDOR. We are also pursuing an unstructured grid finite element code, MadMax. The unstructured mesh approach provides maximum flexibility in the geometrical model while also allowing local mesh refinement.
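The particle-push half of a PIC time step can be sketched as a leapfrog update; the field solve and charge deposition that dominate the cost discussed above are omitted, and the constant unit field is purely illustrative (normalized units).

```python
# Minimal sketch of the particle-push phase of a PIC step: a leapfrog
# "kick-drift" update of each particle in a prescribed electric field E(x).
# The field solve and charge deposition are omitted; units are normalized.

def push(particles, e_field, qm, dt):
    """Advance (x, v) pairs one leapfrog step; qm is the charge-to-mass ratio."""
    out = []
    for x, v in particles:
        v_new = v + qm * e_field(x) * dt   # kick: accelerate in local field
        x_new = x + v_new * dt             # drift: move with updated velocity
        out.append((x_new, v_new))
    return out

uniform_e = lambda x: 1.0                  # constant field for illustration
state = [(0.0, 0.0)]
for _ in range(10):
    state = push(state, uniform_e, qm=1.0, dt=0.1)
# after 10 steps in a unit field, v = qm * E * 10 * dt = 1.0
```

In a full electrostatic PIC loop this push alternates with depositing charge onto the grid and solving Poisson's equation for the new field, which is where the resolution constraints described above arise.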

  13. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1989-01-01

    The performance of bandwidth efficient trellis codes on channels with phase jitter, or those disturbed by jamming and impulse noise is analyzed. An heuristic algorithm for construction of bandwidth efficient trellis codes with any constraint length up to about 30, any signal constellation, and any code rate was developed. Construction of good distance profile trellis codes for sequential decoding and comparison of random coding bounds of trellis coded modulation schemes are also discussed.

  14. Variability in Standard Outcomes of Posterior Lumbar Fusion Determined by National Databases.

    PubMed

    Joseph, Jacob R; Smith, Brandon W; Park, Paul

    2017-01-01

National databases are used with increasing frequency in spine surgery literature to evaluate patient outcomes. The differences between individual databases in relationship to outcomes of lumbar fusion are not known. We evaluated the variability in standard outcomes of posterior lumbar fusion between the University HealthSystem Consortium (UHC) database and the Healthcare Cost and Utilization Project National Inpatient Sample (NIS). NIS and UHC databases were queried for all posterior lumbar fusions (International Classification of Diseases, Ninth Revision code 81.07) performed in 2012. Patient demographics, comorbidities (including obesity), length of stay (LOS), in-hospital mortality, and complications such as urinary tract infection, deep venous thrombosis, pulmonary embolism, myocardial infarction, durotomy, and surgical site infection were collected using specific International Classification of Diseases, Ninth Revision codes. Analysis included 21,470 patients from the NIS database and 14,898 patients from the UHC database. Demographic data were not significantly different between databases. Obesity was more prevalent in UHC (P = 0.001). Mean LOS was 3.8 days in NIS and 4.55 days in UHC (P < 0.0001). Complications were significantly higher in UHC, including urinary tract infection, deep venous thrombosis, pulmonary embolism, myocardial infarction, surgical site infection, and durotomy. In-hospital mortality was similar between databases. NIS and UHC databases had similar demographic patient populations undergoing posterior lumbar fusion. However, the UHC database reported a significantly higher complication rate and longer LOS. This difference may reflect academic institutions treating higher-risk patients; however, a definitive reason for the variability between databases is unknown. The inability to precisely determine the basis of the variability between databases highlights the limitations of using administrative databases for spinal outcome analysis.
Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Binary image encryption in a joint transform correlator scheme by aid of run-length encoding and QR code

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Wang, Zhipeng; Wang, Hongjuan; Gong, Qiong

    2018-07-01

We propose a binary image encryption method in a joint transform correlator (JTC) aided by run-length encoding (RLE) and Quick Response (QR) codes, which enables lossless retrieval of the primary image. The binary image is encoded with RLE to obtain highly compressed data, and the compressed binary image is then further scrambled using a chaos-based method. The compressed and scrambled binary image is transformed into a QR code that is finally encrypted in the JTC. The proposed method, for the first time to the best of our knowledge, encodes a binary image into a QR code of identical size, and may therefore pave a new way for extending the application of QR codes in optical security. Moreover, the preprocessing operations, including RLE, chaos scrambling, and the QR code translation, add an additional security level to the JTC. We present digital results that confirm our approach.
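The RLE preprocessing step of the scheme can be sketched on a row of binary pixels; the chaos scrambling and QR translation stages are not reproduced here, and the example row is arbitrary.

```python
# Minimal sketch of run-length encoding for a binary pixel row: store the
# first bit plus the lengths of alternating runs. The chaos scrambling and
# QR translation stages of the scheme above are not reproduced.

def rle_encode(bits):
    """Encode a bit sequence as (first_bit, list of run lengths)."""
    if not bits:
        return None, []
    runs = [1]
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            runs[-1] += 1
        else:
            runs.append(1)
    return bits[0], runs

def rle_decode(first_bit, runs):
    """Invert rle_encode: expand alternating runs back into bits."""
    bits, bit = [], first_bit
    for r in runs:
        bits.extend([bit] * r)
        bit ^= 1
    return bits

row = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
first, runs = rle_encode(row)
print(first, runs)                      # -> 1 [3, 2, 1, 4]
assert rle_decode(first, runs) == row   # lossless round trip
```

Because the round trip is exact, any later scrambling of the run-length data can still be undone to recover the primary image losslessly, which is the property the record emphasizes.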

  16. Two high-density recording methods with run-length limited turbo code for holographic data storage system

    NASA Astrophysics Data System (ADS)

    Nakamura, Yusuke; Hoshizawa, Taku

    2016-09-01

Two methods for increasing the data capacity of a holographic data storage system (HDSS) were developed. The first method is called “run-length-limited (RLL) high-density recording”. An RLL modulation has the same effect as enlarging the pixel pitch; namely, it optically reduces the hologram size. Accordingly, the method doubles the raw-data recording density. The second method is called “RLL turbo signal processing”. The RLL turbo code consists of RLL(1,∞) trellis modulation and an optimized convolutional code. The remarkable point of the developed turbo code is that it employs the RLL modulator and demodulator as parts of the error-correction process. The turbo code improves the capability of error correction more than a conventional LDPC code, even though interpixel interference is generated. These two methods will increase the data density 1.78-fold. Moreover, by simulation and experiment, a data density of 2.4 Tbit/in.2 is confirmed.
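The RLL(1,∞) constraint used by the trellis modulation admits a one-line check: at least one 0 between consecutive 1s (d = 1), with no upper bound on zero-run length (k = ∞).

```python
# Minimal sketch of the RLL(1, inf) constraint: a bit sequence satisfies it
# when no two 1s are adjacent (d = 1 zero minimum between 1s, no upper bound
# k on zero runs). This is the constraint the trellis modulation enforces.

def satisfies_rll_1_inf(bits):
    return all(not (a == 1 and b == 1) for a, b in zip(bits, bits[1:]))

print(satisfies_rll_1_inf([1, 0, 0, 1, 0, 1]))  # -> True
print(satisfies_rll_1_inf([1, 1, 0, 0]))        # -> False
```

Spacing the 1s apart is what lets the recorded pattern behave like one with a larger pixel pitch, which is the capacity argument the record makes.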

  17. Moderate Deviation Analysis for Classical Communication over Quantum Channels

    NASA Astrophysics Data System (ADS)

    Chubb, Christopher T.; Tan, Vincent Y. F.; Tomamichel, Marco

    2017-11-01

We analyse families of codes for classical data transmission over quantum channels that have both a vanishing probability of error and a code rate approaching capacity as the code length increases. To characterise the fundamental tradeoff between decoding error, code rate and code length for such codes we introduce a quantum generalisation of the moderate deviation analysis proposed by Altuğ and Wagner as well as Polyanskiy and Verdú. We derive such a tradeoff for classical-quantum (as well as image-additive) channels in terms of the channel capacity and the channel dispersion, giving further evidence that the latter quantity characterises the necessary backoff from capacity when transmitting finite blocks of classical data. To derive these results we also study asymmetric binary quantum hypothesis testing in the moderate deviations regime. Due to the central importance of the latter task, we expect that our techniques will find further applications in the analysis of other quantum information processing tasks.
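For orientation, the classical moderate-deviation result of Altuğ-Wagner and Polyanskiy-Verdú that this paper generalises can be recalled (stated from memory, so treat the exact constants as indicative rather than authoritative): for any sequence a_n → 0 with n·a_n² → ∞, codes backing off from capacity C by a_n achieve an optimal error probability decaying as exp(−n·a_n²/(2V)), where V is the channel dispersion:

```latex
% Moderate-deviation scaling (classical case, recalled from memory; the
% paper above proves a quantum analogue for classical-quantum channels):
R_n = C - a_n
\quad\Longrightarrow\quad
\lim_{n\to\infty} \frac{1}{n a_n^2}\,\log \varepsilon^*(n, R_n) = -\frac{1}{2V}.
```

This regime interpolates between the central-limit (second-order) regime, where the backoff scales as 1/√n, and the large-deviation regime of fixed rates below capacity.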

  18. The complete mitochondrial genome of Rapana venosa (Gastropoda, Muricidae).

    PubMed

    Sun, Xiujun; Yang, Aiguo

    2016-01-01

    The complete mitochondrial (mt) genome of the veined rapa whelk, Rapana venosa, was determined using genome walking techniques in this study. The total length of the mt genome sequence of R. venosa was 15,271 bp, which is comparable to the reported Muricidae mitogenomes to date. It contained 13 protein-coding genes, 21 transfer RNA genes, and two ribosomal RNA genes. A bias towards a higher representation of nucleotides A and T (69%) was detected in the mt genome of R. venosa. A small number of non-coding nucleotides (302 bp) was detected, and the largest non-coding region was 74 bp in length.
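The A+T bias reported for the mitogenome is a one-line computation; the short sequence below is illustrative, not the R. venosa sequence.

```python
# Minimal sketch of the nucleotide-bias computation reported above: the A+T
# fraction of a sequence. A value near 0.69 corresponds to the 69% bias
# reported for the R. venosa mt genome; the sequence here is illustrative.

def at_content(seq):
    seq = seq.upper()
    return sum(seq.count(b) for b in "AT") / len(seq)

print(round(at_content("ATATGCAT"), 2))  # -> 0.75
```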

  19. 27 CFR 53.96 - Constructive sale price; special rule for arm's-length sales.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...; special rule for arm's-length sales. 53.96 Section 53.96 Alcohol, Tobacco Products and Firearms ALCOHOL... sale price; special rule for arm's-length sales. (a) In general. Section 4216(b)(2) of the Code... distributors in arm's-length transactions, and the manufacturer establishes that its prices in such cases are...

  20. 27 CFR 53.96 - Constructive sale price; special rule for arm's-length sales.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...; special rule for arm's-length sales. 53.96 Section 53.96 Alcohol, Tobacco Products and Firearms ALCOHOL... sale price; special rule for arm's-length sales. (a) In general. Section 4216(b)(2) of the Code... distributors in arm's-length transactions, and the manufacturer establishes that its prices in such cases are...

  1. 27 CFR 53.96 - Constructive sale price; special rule for arm's-length sales.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...; special rule for arm's-length sales. 53.96 Section 53.96 Alcohol, Tobacco Products and Firearms ALCOHOL... sale price; special rule for arm's-length sales. (a) In general. Section 4216(b)(2) of the Code... distributors in arm's-length transactions, and the manufacturer establishes that its prices in such cases are...

  2. 27 CFR 53.96 - Constructive sale price; special rule for arm's-length sales.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...; special rule for arm's-length sales. 53.96 Section 53.96 Alcohol, Tobacco Products and Firearms ALCOHOL... sale price; special rule for arm's-length sales. (a) In general. Section 4216(b)(2) of the Code... distributors in arm's-length transactions, and the manufacturer establishes that its prices in such cases are...

  3. Convolutional coding results for the MVM '73 X-band telemetry experiment

    NASA Technical Reports Server (NTRS)

    Layland, J. W.

    1978-01-01

    Results of simulation of several short-constraint-length convolutional codes using a noisy symbol stream obtained via the turnaround ranging channels of the MVM'73 spacecraft are presented. First operational use of this coding technique is on the Voyager mission. The relative performance of these codes in this environment is as previously predicted from computer-based simulations.

  4. Cost-effective sequencing of full-length cDNA clones powered by a de novo-reference hybrid assembly.

    PubMed

    Kuroshu, Reginaldo M; Watanabe, Junichi; Sugano, Sumio; Morishita, Shinichi; Suzuki, Yutaka; Kasahara, Masahiro

    2010-05-07

Sequencing full-length cDNA clones is important to determine gene structures including alternative splice forms, and provides valuable resources for experimental analyses to reveal the biological functions of coded proteins. However, previous approaches for sequencing cDNA clones were expensive or time-consuming, and therefore a fast and efficient sequencing approach was needed. We developed a program, MuSICA 2, that assembles millions of short (36-nucleotide) reads collected from a single flow cell lane of an Illumina Genome Analyzer to shotgun-sequence approximately 800 human full-length cDNA clones. MuSICA 2 performs a hybrid assembly in which an external de novo assembler is run first and the result is then improved by reference alignment of shotgun reads. We compared the MuSICA 2 assembly with 200 pooled full-length cDNA clones finished independently by conventional primer-walking using Sanger sequencers. The exon-intron structure of the coding sequence was correct for more than 95% of the clones with coding sequence annotation when we excluded cDNA clones insufficiently represented in the shotgun library due to PCR failure (42 out of 200 clones excluded), and the nucleotide-level accuracy of the coding sequences of those correct clones was over 99.99%. We also applied MuSICA 2 to full-length cDNA clones from Toxoplasma gondii, confirming that it performs well even for non-human species. The entire sequencing and shotgun assembly takes less than 1 week, and the consumables cost only approximately US$3 per clone, demonstrating a significant advantage over previous approaches.

  5. Documentation of the GLAS fourth order general circulation model. Volume 2: Scalar code

    NASA Technical Reports Server (NTRS)

    Kalnay, E.; Balgovind, R.; Chao, W.; Edelmann, D.; Pfaendtner, J.; Takacs, L.; Takano, K.

    1983-01-01

Volume 2 of a three-volume technical memorandum contains detailed documentation of the GLAS fourth-order general circulation model. It covers the CYBER 205 scalar and vector codes of the model, the list of variables, and cross references. A variable-name dictionary for the scalar code and the code listings are also provided.

  6. Error Correcting Codes and Related Designs

    DTIC Science & Technology

    1990-09-30

Theory, IT-37 (1991), 1222-1224. 6. Codes and designs, existence and uniqueness, Discrete Math., to appear. 7. (with R. Brualdi and N. Cai), Orphan...structure of the first order Reed-Muller codes, Discrete Math., to appear. 8. (with J. H. Conway and N.J.A. Sloane), The binary self-dual codes of length up...18, 1988. 4. "Codes and Designs," Mathematics Colloquium, Technion, Haifa, Israel, March 6, 1989. 5. "On the Covering Radius of Codes," Discrete Math. Group

  7. Adaptive EAGLE dynamic solution adaptation and grid quality enhancement

    NASA Technical Reports Server (NTRS)

    Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.

    1992-01-01

    In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called, either by a computational fluid dynamics (CFD) code to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.

  8. Correlations of Handgrip Strength with Selected Hand-Arm-Anthropometric Variables in Indian Inter-university Female Volleyball Players

    PubMed Central

    Koley, Shyamal; Pal Kaur, Satinder

    2011-01-01

Purpose The purpose of this study was to estimate the dominant handgrip strength and its correlations with some hand and arm anthropometric variables in 101 randomly selected Indian inter-university female volleyball players aged 18-25 years (mean age 20.52±1.40) from six Indian universities. Methods Three anthropometric variables, i.e. height, weight, BMI, two hand anthropometric variables, viz. right and left hand width and length, four arm anthropometric variables, i.e. upper arm length, lower arm length, upper extremity length, upper arm circumference and dominant right and non-dominant handgrip strength were measured among Indian inter-university female volleyball players by standard anthropometric techniques. Results The findings of the present study indicated that Indian female volleyball players had higher mean values in eleven variables and lower mean values in two variables than their control counterparts, showing significant differences (P<0.032-0.001) in height (t=2.63), weight (t=8.66), left hand width (t=2.10), left and right hand length (t=9.99 and 10.40 respectively), right upper arm length (t=8.48), right forearm length (t=5.41), dominant (right) and non-dominant (left) handgrip strength (t=9.37 and 6.76 respectively). In female volleyball players, dominant handgrip strength had significantly positive correlations (P=0.01) with all the variables studied. Conclusion It may be concluded that dominant handgrip strength had strong positive correlations with all the variables studied in Indian inter-university female volleyball players. PMID:22375242

  9. Continuous-variable quantum network coding for coherent states

    NASA Astrophysics Data System (ADS)

    Shang, Tao; Li, Ke; Liu, Jian-wei

    2017-04-01

    As far as the spectral characteristics of quantum information are concerned, the existing quantum network coding schemes can be regarded as discrete-variable quantum network coding schemes. Considering the practical advantages of continuous variables, in this paper we explore two feasible continuous-variable quantum network coding (CVQNC) schemes. Basic operations and CVQNC schemes are both provided. The first scheme is based on Gaussian cloning and ADD/SUB operators and can transmit two coherent states across the network with a fidelity of 1/2, while the second scheme utilizes continuous-variable quantum teleportation and can transmit two coherent states perfectly. By encoding classical information on quantum states, quantum network coding schemes can be utilized to transmit classical information. Scheme analysis shows that, compared with the discrete-variable paradigms, the proposed CVQNC schemes provide better network throughput from the viewpoint of classical information transmission. By modulating the amplitude and phase quadratures of coherent states with classical characters, the first scheme and the second scheme can transmit 4 log2 N and 2 log2 N bits of information per single network use, respectively.

  10. Are neighborhood-level characteristics associated with indoor allergens in the household?

    PubMed

    Rosenfeld, Lindsay; Rudd, Rima; Chew, Ginger L; Emmons, Karen; Acevedo-García, Dolores

    2010-02-01

    Individual home characteristics have been associated with indoor allergen exposure; however, the influence of neighborhood-level characteristics has not been well studied. We defined neighborhoods as community districts determined by the New York City Department of City Planning. We examined the relationship between neighborhood-level characteristics and the presence of dust mite (Der f 1), cat (Fel d 1), cockroach (Bla g 2), and mouse (MUP) allergens in the household. Using data from the Puerto Rican Asthma Project, a birth cohort of Puerto Rican children at risk of allergic sensitization (n = 261), we examined associations between neighborhood characteristics (percent tree canopy, asthma hospitalizations per 1,000 children, roadway length within 100 meters of buildings, serious housing code violations per 1000 rental units, poverty rates, and felony crime rates), and the presence of indoor allergens. Allergen cutpoints were used for categorical analyses and defined as follows: dust mite: >0.25 microg/g; cat: >1 microg/g; cockroach: >1 U/g; mouse: >1.6 microg/g. Serious housing code violations were statistically significantly positively associated with dust mite, cat, and mouse allergens (continuous variables), adjusting for mother's income and education, and all neighborhood-level characteristics. In multivariable logistic regression analyses, medium levels of housing code violations were associated with higher dust mite and cat allergens (1.81, 95%CI: 1.08, 3.03 and 3.10, 95%CI: 1.22, 7.92, respectively). A high level of serious housing code violations was associated with higher mouse allergen (2.04, 95%CI: 1.15, 3.62). A medium level of housing code violations was associated with higher cockroach allergen (3.30, 95%CI: 1.11, 9.78). Neighborhood-level characteristics, specifically housing code violations, appear to be related to indoor allergens, which may have implications for future research explorations and policy decisions.

  11. Concatenated coding for low data rate space communications.

    NASA Technical Reports Server (NTRS)

    Chen, C. H.

    1972-01-01

    In deep space communications with distant planets, the data rate as well as the operating SNR may be very low. To maintain the error rate at a very low level, it is necessary to use a sophisticated coding system (a longer code) without excessive decoding complexity. Concatenated coding has been shown to meet such requirements, in that the error rate decreases exponentially with the overall length of the code while the decoder complexity increases only algebraically. Three methods of concatenating an inner code with an outer code are considered. Performance comparison of the three concatenated codes is made.
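The concatenation idea can be sketched with a toy example. The record does not specify which inner and outer codes were compared, so this sketch pairs a Hamming(7,4) outer code with a rate-1/3 repetition inner code, chosen only for brevity:

```python
# A toy concatenated code (assumed for illustration; not the codes in this
# record): the outer Hamming(7,4) corrects one bit error per block, while the
# inner rate-1/3 repetition code cleans up most raw channel flips first.

def hamming74_encode(d):
    d1, d2, d3, d4 = d
    return [d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d1, d2 ^ d3 ^ d4, d2, d3, d4]

def hamming74_decode(c):
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3          # 1-based position of a single error
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

def inner_encode(bits):                 # repetition-3 inner code
    return [b for b in bits for _ in range(3)]

def inner_decode(bits):                 # majority vote per received triple
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

data = [1, 0, 1, 1]
noisy = inner_encode(hamming74_encode(data))
noisy[0] ^= 1                           # a raw channel bit flip
recovered = hamming74_decode(inner_decode(noisy))
```

Residual errors that slip past the inner majority vote appear to the outer decoder as isolated symbol errors, which is the division of labor that lets the combined error rate fall much faster than the decoding cost grows.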

  12. Energy dynamics and current sheet structure in fluid and kinetic simulations of decaying magnetohydrodynamic turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makwana, K. D., E-mail: kirit.makwana@gmx.com; Cattaneo, F.; Zhdankin, V.

    Simulations of decaying magnetohydrodynamic (MHD) turbulence are performed with a fluid and a kinetic code. The initial condition is an ensemble of long-wavelength, counter-propagating, shear-Alfvén waves, which interact and rapidly generate strong MHD turbulence. The total energy is conserved and the rate of turbulent energy decay is very similar in both codes, although the fluid code has numerical dissipation, whereas the kinetic code has kinetic dissipation. The inertial-range power spectrum index is similar in both codes. The fluid code shows a perpendicular wavenumber spectral slope of k⊥^(−1.3). The kinetic code shows a spectral slope of k⊥^(−1.5) for the smaller simulation domain, and k⊥^(−1.3) for the larger domain. We estimate that collisionless damping mechanisms in the kinetic code can account for the dissipation of the observed nonlinear energy cascade. Current sheets are geometrically characterized. Their lengths and widths are in good agreement between the two codes. The length scales linearly with the driving scale of the turbulence. In the fluid code, their thickness is determined by the grid resolution, as there is no explicit diffusivity. In the kinetic code, their thickness is very close to the skin depth, irrespective of the grid resolution. This work shows that kinetic codes can reproduce the MHD inertial-range dynamics at large scales, while at the same time capturing important kinetic physics at small scales.

  13. Error Control Coding Techniques for Space and Satellite Communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Takeshita, Oscar Y.; Cabral, Hermano A.; He, Jiali; White, Gregory S.

    1997-01-01

    Turbo coding using iterative SOVA decoding and M-ary differentially coherent or non-coherent modulation can provide an effective coded modulation solution: (1) it is energy efficient with relatively simple SOVA decoding and small packet lengths, depending on the BEP required; (2) only a low number of decoding iterations is required; and (3) it is robust in fading with channel interleaving.

  14. Complexity, information loss, and model building: from neuro- to cognitive dynamics

    NASA Astrophysics Data System (ADS)

    Arecchi, F. Tito

    2007-06-01

    A scientific problem described within a given code is mapped to a corresponding computational problem. We call (algorithmic) complexity the bit length of the shortest instruction which solves the problem. Deterministic chaos in general affects a dynamical system, making the corresponding problem experimentally and computationally heavy, since one must reset the initial conditions at a rate higher than that of information loss (Kolmogorov entropy). One can control chaos by adding new degrees of freedom to the system (information swapping: information lost by chaos is replaced by that arising from the new degrees of freedom). This implies a change of code, or a new augmented model. Within a single code, changing hypotheses is equivalent to fixing different sets of control parameters, each with a different a-priori probability, to be then confirmed and transformed into an a-posteriori probability via Bayes' theorem. Sequential application of Bayes' rule is nothing other than the Darwinian strategy in evolutionary biology. The sequence is a steepest-ascent algorithm, which stops once maximum probability has been reached. At this point the hypothesis exploration stops. By changing code (and hence the set of relevant variables) one can start again to formulate new classes of hypotheses. We call semantic complexity the number of accessible scientific codes, or models, that describe a situation. It is however a fuzzy concept, insofar as this number changes due to interaction of the operator with the system under investigation. These considerations are illustrated with reference to a cognitive task, starting from synchronization of neuron arrays in a perceptual area and tracing the putative path toward model building.
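The sequential-Bayes/steepest-ascent picture in this abstract can be made concrete with a minimal sketch; the hypotheses and observations below are invented purely for illustration:

```python
# Hypothetical illustration (our own toy, not from the paper): sequential
# Bayesian updating over three candidate hypotheses for a binary source.
# The posterior climbs toward the hypothesis that best explains the data,
# matching the steepest-ascent picture described in the abstract.

def bayes_update(prior, likelihoods):
    post = [p * l for p, l in zip(prior, likelihoods)]  # posterior ∝ prior × likelihood
    z = sum(post)
    return [p / z for p in post]

hypotheses = [0.2, 0.5, 0.8]        # candidate probabilities of observing a 1
belief = [1 / 3] * 3                # uniform a-priori probabilities
data = [1, 1, 0, 1, 1, 1, 0, 1]     # invented observations

for x in data:
    belief = bayes_update(belief, [h if x else 1 - h for h in hypotheses])

best = hypotheses[belief.index(max(belief))]
```

Each pass of the loop is one application of Bayes' rule; the ascent "stops" in the sense that, once the data favor one hypothesis, further consistent observations only sharpen the same maximum.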

  15. Design and System Implications of a Family of Wideband HF Data Waveforms

    DTIC Science & Technology

    2010-09-01

    code rates (i.e. 8/9, 9/10) will be used to attain the highest data rates for surface wave links. Very high puncturing of convolutional codes can...Communication Links”, Edition 1, North Atlantic Treaty Organization, 2009. [14] Yasuda, Y., Kashiki, K., Hirata, Y. “High-Rate Punctured Convolutional Codes ...length 7 convolutional code that has been used for over two decades in 110A. In addition, repetition coding and puncturing was

  16. High Order Modulation Protograph Codes

    NASA Technical Reports Server (NTRS)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods for designing protograph-based bit-interleaved coded modulation that are general and apply to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.
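A minimal sketch of the circulant-lifting stage may help; the protomatrix and the shift values below are invented for illustration and are not taken from the patent:

```python
# Sketch of circulant lifting (the second stage described above): each nonzero
# protograph entry becomes an n x n cyclically shifted identity block, each
# zero entry a zero block. A real design would pick the shifts to maximize
# girth; here they are arbitrary.

def circulant(n, shift):
    # n x n identity matrix with columns cyclically shifted by `shift`
    return [[1 if (c - r) % n == shift else 0 for c in range(n)] for r in range(n)]

def lift(proto, shifts, n):
    H = [[0] * (len(proto[0]) * n) for _ in range(len(proto) * n)]
    for i, row in enumerate(proto):
        for j, entry in enumerate(row):
            if entry:
                blk = circulant(n, shifts[i][j])
                for r in range(n):
                    for c in range(n):
                        H[i * n + r][j * n + c] = blk[r][c]
    return H

proto = [[1, 1, 0],
         [0, 1, 1]]              # toy 2 x 3 protograph (assumed)
shifts = [[0, 1, 0],
          [0, 2, 1]]             # circulant shift per nonzero entry (assumed)
H = lift(proto, shifts, 4)       # 8 x 12 lifted parity-check matrix
```

Each protograph node becomes n nodes in the lifted graph, so the degree profile the protograph was designed around is preserved by construction.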

  17. Identification of a Conserved Non-Protein-Coding Genomic Element that Plays an Essential Role in Alphabaculovirus Pathogenesis

    PubMed Central

    Kikhno, Irina

    2014-01-01

    Highly homologous sequences 154–157 bp in length grouped under the name of “conserved non-protein-coding element” (CNE) were revealed in all of the sequenced genomes of baculoviruses belonging to the genus Alphabaculovirus. A CNE alignment led to the detection of a set of highly conserved nucleotide clusters that occupy strictly conserved positions in the CNE sequence. The significant length of the CNE and conservation of both its length and cluster architecture were identified as a combination of characteristics that make this CNE different from known viral non-coding functional sequences. The essential role of the CNE in the Alphabaculovirus life cycle was demonstrated through the use of a CNE-knockout Autographa californica multiple nucleopolyhedrovirus (AcMNPV) bacmid. It was shown that the essential function of the CNE was not mediated by the presumed expression activities of the protein- and non-protein-coding genes that overlap the AcMNPV CNE. On the basis of the presented data, the AcMNPV CNE was categorized as a complex-structured, polyfunctional genomic element involved in an essential DNA transaction that is associated with an undefined function of the baculovirus genome. PMID:24740153

  18. Two fundamental questions about protein evolution.

    PubMed

    Penny, David; Zhong, Bojian

    2015-12-01

    Two basic questions are considered that approach protein evolution from different directions; the problems arising from using Markov models for the deeper divergences, and then the origin of proteins themselves. The real problem for the first question (going backwards in time) is that at deeper phylogenies the Markov models of sequence evolution must lose information exponentially at deeper divergences, and several testable methods are suggested that should help resolve these deeper divergences. For the second question (coming forwards in time) a problem is that most models for the origin of protein synthesis do not give a role for the very earliest stages of the process. From our knowledge of the importance of replication accuracy in limiting the length of a coding molecule, a testable hypothesis is proposed. The length of the code, the code itself, and tRNAs would all have prior roles in increasing the accuracy of RNA replication; thus proteins would have been formed only after the tRNAs and the length of the triplet code are already formed. Both questions lead to testable predictions. Copyright © 2014 Elsevier B.V. and Société Française de Biochimie et Biologie Moléculaire (SFBBM). All rights reserved.

  19. Adaptive partially hidden Markov models with application to bilevel image coding.

    PubMed

    Forchhammer, S; Rasmussen, T S

    1999-01-01

    Partially hidden Markov models (PHMMs) have previously been introduced. The transition and emission/output probabilities from hidden states, as known from the HMMs, are conditioned on the past. This way, the HMM may be applied to images introducing the dependencies of the second dimension by conditioning. In this paper, the PHMM is extended to multiple sequences with a multiple token version and adaptive versions of PHMM coding are presented. The different versions of the PHMM are applied to lossless bilevel image coding. To reduce and optimize the model cost and size, the contexts are organized in trees and effective quantization of the parameters is introduced. The new coding methods achieve results that are better than the JBIG standard on selected test images, although at the cost of increased complexity. By the minimum description length principle, the methods presented for optimizing the code length may apply as guidance for training (P)HMMs for, e.g., segmentation or recognition purposes. Thereby, the PHMM models provide a new approach to image modeling.
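The link between conditioning on the past and code length can be illustrated with a far simpler model than a PHMM: a first-order adaptive context model and the ideal code length −log2 p summed over a bit sequence. Everything below is our own toy construction, not the paper's coder:

```python
import math

# Our own toy stand-in for the context-conditioning idea: the context is just
# the previous bit, counts are Laplace-smoothed and adapt as we go, and we
# accumulate the ideal code length -log2 p per bit. A PHMM conditions its
# transition/emission probabilities on the past in the same spirit.

def adaptive_code_length(bits):
    counts = {0: [1, 1], 1: [1, 1]}      # per-context counts of 0s and 1s
    total, ctx = 0.0, 0
    for b in bits:
        c0, c1 = counts[ctx]
        p = (c1 if b else c0) / (c0 + c1)
        total += -math.log2(p)           # ideal code length of this bit
        counts[ctx][b] += 1              # adapt the model after coding
        ctx = b
    return total

smooth = [0] * 50 + [1] * 50             # highly predictable "pixel row"
busy = [0, 0, 1, 1] * 25                 # unpredictable under this model
```

The smooth row costs a small fraction of its raw 100 bits while the busy one costs close to 100; choosing the model (and context tree) that minimizes this total is exactly the minimum description length guidance mentioned at the end of the abstract.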

  20. Categorical Variables in Multiple Regression: Some Cautions.

    ERIC Educational Resources Information Center

    O'Grady, Kevin E.; Medoff, Deborah R.

    1988-01-01

    Limitations of dummy coding and nonsense coding as methods of coding categorical variables for use as predictors in multiple regression analysis are discussed. The combination of these approaches often yields estimates and tests of significance that are not intended by researchers for inclusion in their models. (SLD)
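As a concrete reminder of what dummy coding does mechanically, here is a minimal sketch (variable names invented); the cautions in the record concern how such columns are combined and tested in a regression, not the mechanics themselves:

```python
# Minimal dummy-coding sketch (all names invented for the example): a k-level
# categorical predictor becomes k-1 indicator columns, with one level held out
# as the reference cell whose effect is absorbed by the intercept.

def dummy_code(values, reference):
    levels = [l for l in sorted(set(values)) if l != reference]
    X = [[1 if v == l else 0 for l in levels] for v in values]
    return X, levels

region = ["east", "west", "north", "east", "north"]
X, cols = dummy_code(region, reference="east")   # indicator columns for north, west
```

Each regression coefficient on such a column is then a contrast against the reference level, which is where the interpretive cautions in the paper come in.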

  1. Episiotomy increases perineal laceration length in primiparous women.

    PubMed

    Nager, C W; Helliwell, J P

    2001-08-01

    The aim of this study was to determine the clinical factors that contribute to posterior perineal laceration length. A prospective observational study was performed in 80 consenting, mostly primiparous women with term pregnancies. Posterior perineal lacerations were measured immediately after delivery. Numerous maternal, fetal, and operator variables were evaluated against laceration length and degree of tear. Univariate and multivariate regression analyses were performed to evaluate laceration length and parametric clinical variables. Nonparametric clinical variables were evaluated against laceration length by the Mann-Whitney U test. A multivariate stepwise linear regression equation revealed that episiotomy adds nearly 3 cm to perineal lacerations. Tear length was highly associated with the degree of tear (R = 0.86, R^2 = 0.73) and the risk of recognized anal sphincter disruption. None of 35 patients without an episiotomy had a recognized anal sphincter disruption, but 6 of 27 patients with an episiotomy did (P <.001). Body mass index was the only maternal or fetal variable that showed even a slight correlation with laceration length (R = 0.30, P =.04). Episiotomy is the overriding determinant of perineal laceration length and recognized anal sphincter disruption.

  2. Full-length genome sequences of porcine epidemic diarrhoea virus strain CV777; Use of NGS to analyse genomic and sub-genomic RNAs

    PubMed Central

    Rasmussen, Thomas Bruun; Boniotti, Maria Beatrice; Papetti, Alice; Grasland, Béatrice; Frossard, Jean-Pierre; Dastjerdi, Akbar; Hulst, Marcel; Hanke, Dennis; Pohlmann, Anne; Blome, Sandra; van der Poel, Wim H. M.; Steinbach, Falko; Blanchard, Yannick; Lavazza, Antonio; Bøtner, Anette

    2018-01-01

    Porcine epidemic diarrhoea virus, strain CV777, was initially characterized in 1978 as the causative agent of a disease first identified in the UK in 1971. This coronavirus has been widely distributed among laboratories and has been passaged both within pigs and in cell culture. To determine the variability between different stocks of the PEDV strain CV777, sequencing of the full-length genome (ca. 28 kb) has been performed in 6 different laboratories, using different protocols. Not surprisingly, each of the different full genome sequences was distinct from the others and from the reference sequence (Accession number AF353511), but they are >99% identical. Unique and shared differences between sequences were identified. The coding region for the surface-exposed spike protein showed the highest proportion of variability, including both point mutations and small deletions. The predicted expression of the ORF3 gene product was more dramatically affected in three different variants of this virus through either loss of the initiation codon or gain of a premature termination codon. The genome of one isolate had a substantially rearranged 5´-terminal sequence. This rearrangement was validated through the analysis of sub-genomic mRNAs from infected cells. It is clearly important to know the features of the specific sample of CV777 being used for experimental studies. PMID:29494671

  3. Motion makes sense: an adaptive motor-sensory strategy underlies the perception of object location in rats.

    PubMed

    Saraf-Sinik, Inbar; Assa, Eldad; Ahissar, Ehud

    2015-06-10

    Tactile perception is obtained by coordinated motor-sensory processes. We studied the processes underlying the perception of object location in freely moving rats. We trained rats to identify the relative location of two vertical poles placed in front of them and measured at high resolution the motor and sensory variables (19 and 2 variables, respectively) associated with this whiskers-based perceptual process. We found that the rats developed stereotypic head and whisker movements to solve this task, in a manner that can be described by several distinct behavioral phases. During two of these phases, the rats' whiskers coded object position by first temporal and then angular coding schemes. We then introduced wind (in two opposite directions) and remeasured their perceptual performance and motor-sensory variables. Our rats continued to perceive object location in a consistent manner under wind perturbations while maintaining all behavioral phases and relatively constant sensory coding. Constant sensory coding was achieved by keeping one group of motor variables (the "controlled variables") constant, despite the perturbing wind, at the cost of strongly modulating another group of motor variables (the "modulated variables"). The controlled variables included coding-relevant variables, such as head azimuth and whisker velocity. These results indicate that consistent perception of location in the rat is obtained actively, via a selective control of perception-relevant motor variables. Copyright © 2015 the authors 0270-6474/15/358777-13$15.00/0.

  4. Cost-Effective Sequencing of Full-Length cDNA Clones Powered by a De Novo-Reference Hybrid Assembly

    PubMed Central

    Sugano, Sumio; Morishita, Shinichi; Suzuki, Yutaka

    2010-01-01

    Background Sequencing full-length cDNA clones is important to determine gene structures including alternative splice forms, and provides valuable resources for experimental analyses to reveal the biological functions of coded proteins. However, previous approaches for sequencing cDNA clones were expensive or time-consuming, and therefore, a fast and efficient sequencing approach was demanded. Methodology We developed a program, MuSICA 2, that assembles millions of short (36-nucleotide) reads collected from a single flow cell lane of Illumina Genome Analyzer to shotgun-sequence ∼800 human full-length cDNA clones. MuSICA 2 performs a hybrid assembly in which an external de novo assembler is run first and the result is then improved by reference alignment of shotgun reads. We compared the MuSICA 2 assembly with 200 pooled full-length cDNA clones finished independently by the conventional primer-walking using Sanger sequencers. The exon-intron structure of the coding sequence was correct for more than 95% of the clones with coding sequence annotation when we excluded cDNA clones insufficiently represented in the shotgun library due to PCR failure (42 out of 200 clones excluded), and the nucleotide-level accuracy of coding sequences of those correct clones was over 99.99%. We also applied MuSICA 2 to full-length cDNA clones from Toxoplasma gondii, to confirm that its ability was competent even for non-human species. Conclusions The entire sequencing and shotgun assembly takes less than 1 week and the consumables cost only ∼US$3 per clone, demonstrating a significant advantage over previous approaches. PMID:20479877

  5. n-Nucleotide circular codes in graph theory.

    PubMed

    Fimmel, Elena; Michel, Christian J; Strüngmann, Lutz

    2016-03-13

    The circular code theory proposes that genes are constituted of two trinucleotide codes: the classical genetic code with 61 trinucleotides for coding the 20 amino acids (except the three stop codons {TAA,TAG,TGA}) and a circular code based on 20 trinucleotides for retrieving, maintaining and synchronizing the reading frame. It relies on two main results: the identification of a maximal C(3) self-complementary trinucleotide circular code X in genes of bacteria, eukaryotes, plasmids and viruses (Michel 2015 J. Theor. Biol. 380, 156-177. (doi:10.1016/j.jtbi.2015.04.009); Arquès & Michel 1996 J. Theor. Biol. 182, 45-58. (doi:10.1006/jtbi.1996.0142)) and the finding of X circular code motifs in tRNAs and rRNAs, in particular in the ribosome decoding centre (Michel 2012 Comput. Biol. Chem. 37, 24-37. (doi:10.1016/j.compbiolchem.2011.10.002); El Soufi & Michel 2014 Comput. Biol. Chem. 52, 9-17. (doi:10.1016/j.compbiolchem.2014.08.001)). The universally conserved nucleotides A1492 and A1493 and the conserved nucleotide G530 are included in X circular code motifs. Recently, dinucleotide circular codes were also investigated (Michel & Pirillo 2013 ISRN Biomath. 2013, 538631. (doi:10.1155/2013/538631); Fimmel et al. 2015 J. Theor. Biol. 386, 159-165. (doi:10.1016/j.jtbi.2015.08.034)). As the genetic motifs of different lengths are ubiquitous in genes and genomes, we introduce a new approach based on graph theory to study in full generality n-nucleotide circular codes X, i.e. of length 2 (dinucleotide), 3 (trinucleotide), 4 (tetranucleotide), etc. Indeed, we prove that an n-nucleotide code X is circular if and only if the corresponding graph [Formula: see text] is acyclic. Moreover, the maximal length of a path in [Formula: see text] corresponds to the window of nucleotides in a sequence for detecting the correct reading frame. Finally, the graph theory of tournaments is applied to the study of dinucleotide circular codes.
This graph-theoretic approach is fully equivalent to both the combinatorics theory (Michel & Pirillo 2013 ISRN Biomath. 2013, 538631. (doi:10.1155/2013/538631)) and the group theory (Fimmel et al. 2015 J. Theor. Biol. 386, 159-165. (doi:10.1016/j.jtbi.2015.08.034)) of dinucleotide circular codes, while its mathematical machinery is simpler. © 2016 The Author(s).
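The acyclicity criterion stated in the abstract is straightforward to operationalize. The sketch below follows the paper's construction for trinucleotides: each b1b2b3 in X contributes the edges b1 → b2b3 and b1b2 → b3, and X is circular if and only if the resulting directed graph has no cycle:

```python
# Sketch of the acyclicity criterion for trinucleotide codes, following the
# graph construction of Fimmel, Michel & Struengmann (2016): vertices are
# nucleotides and dinucleotides, and each trinucleotide b1b2b3 in the code
# contributes the edges b1 -> b2b3 and b1b2 -> b3.

def code_graph(code):
    edges = {}
    for t in code:
        edges.setdefault(t[0], set()).add(t[1:])   # b1 -> b2b3
        edges.setdefault(t[:2], set()).add(t[2])   # b1b2 -> b3
    return edges

def is_acyclic(edges):
    state = {}                        # vertex -> "visiting" or "done"
    def dfs(v):
        state[v] = "visiting"
        for w in edges.get(v, ()):
            if state.get(w) == "visiting":
                return False          # back edge: directed cycle found
            if state.get(w) is None and not dfs(w):
                return False
        state[v] = "done"
        return True
    for v in list(edges):
        if state.get(v) is None and not dfs(v):
            return False
    return True

def is_circular(code):
    return is_acyclic(code_graph(code))
```

For example, is_circular({"AAA"}) is False (the periodic word AAA admits shifted decompositions), while is_circular({"AAC"}) is True; {"ACG", "CGA"} fails because the circular word CGACGA can be read as either CGA·CGA or ACG·ACG.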

  6. Data Processing Aspects of MEDLARS

    PubMed Central

    Austin, Charles J.

    1964-01-01

    The speed and volume requirements of MEDLARS necessitate the use of high-speed data processing equipment, including paper-tape typewriters, a digital computer, and a special device for producing photo-composed output. Input to the system is of three types: variable source data, including citations from the literature and search requests; changes to such master files as the medical subject headings list and the journal record file; and operating instructions such as computer programs and procedures for machine operators. MEDLARS builds two major stores of data on magnetic tape. The Processed Citation File includes bibliographic citations in expanded form for high-quality printing at periodic intervals. The Compressed Citation File is a coded, time-sequential citation store which is used for high-speed searching against demand request input. Major design considerations include converting variable-length, alphanumeric data to mechanical form quickly and accurately; serial searching by the computer within a reasonable period of time; high-speed printing that must be of graphic quality; and efficient maintenance of various complex computer files. PMID:14119287

  7. DATA PROCESSING ASPECTS OF MEDLARS.

    PubMed

    AUSTIN, C J

    1964-01-01

    The speed and volume requirements of MEDLARS necessitate the use of high-speed data processing equipment, including paper-tape typewriters, a digital computer, and a special device for producing photo-composed output. Input to the system is of three types: variable source data, including citations from the literature and search requests; changes to such master files as the medical subject headings list and the journal record file; and operating instructions such as computer programs and procedures for machine operators. MEDLARS builds two major stores of data on magnetic tape. The Processed Citation File includes bibliographic citations in expanded form for high-quality printing at periodic intervals. The Compressed Citation File is a coded, time-sequential citation store which is used for high-speed searching against demand request input. Major design considerations include converting variable-length, alphanumeric data to mechanical form quickly and accurately; serial searching by the computer within a reasonable period of time; high-speed printing that must be of graphic quality; and efficient maintenance of various complex computer files.

  8. Sources of Interactional Problems in a Survey of Racial/Ethnic Discrimination

    PubMed Central

    Johnson, Timothy P.; Shariff-Marco, Salma; Willis, Gordon; Cho, Young Ik; Breen, Nancy; Gee, Gilbert C.; Krieger, Nancy; Grant, David; Alegria, Margarita; Mays, Vickie M.; Williams, David R.; Landrine, Hope; Liu, Benmei; Reeve, Bryce B.; Takeuchi, David; Ponce, Ninez A.

    2014-01-01

    Cross-cultural variability in respondent processing of survey questions may bias results from multiethnic samples. We analyzed behavior codes, which identify difficulties in the interactions of respondents and interviewers, from a discrimination module contained within a field test of the 2007 California Health Interview Survey. In all, 553 (English) telephone interviews yielded 13,999 interactions involving 22 items. Multilevel logistic regression modeling revealed that respondent age and several item characteristics (response format, customized questions, length, and first item with new response format), but not race/ethnicity, were associated with interactional problems. These findings suggest that item function within a multi-cultural, albeit English language, survey may be largely influenced by question features, as opposed to respondent characteristics such as race/ethnicity. PMID:26166949

  9. Multiple channel optical data acquisition system

    DOEpatents

    Fasching, G.E.; Goff, D.R.

    1985-02-22

    A multiple channel optical data acquisition system is provided in which a plurality of remote sensors monitoring specific process variables are interrogated by means of a single optical fiber connecting the remote station/sensors to a base station. The remote station/sensors derive all power from light transmitted through the fiber from the base station. Each station/sensor is individually accessed by means of a light-modulated address code sent over the fiber. The remote station/sensors use a single light emitting diode to both send and receive light signals to communicate with the base station and to provide power for the remote station. The system described can power at least 100 remote station/sensors over an optical fiber one mile in length.

  10. On the error statistics of Viterbi decoding and the performance of concatenated codes

    NASA Technical Reports Server (NTRS)

    Miller, R. L.; Deutsch, L. J.; Butman, S. A.

    1981-01-01

    Computer simulation results are presented on the performance of convolutional codes of constraint lengths 7 and 10 concatenated with the (255, 223) Reed-Solomon code (a proposed NASA standard). These results indicate that as much as 0.8 dB can be gained by concatenating this Reed-Solomon code with a (10, 1/3) convolutional code, instead of the (7, 1/2) code currently used by the DSN. A mathematical model of Viterbi decoder burst-error statistics is developed and is validated through additional computer simulations.

  11. Enhancing the Remote Variable Operations in NPSS/CCDK

    NASA Technical Reports Server (NTRS)

    Sang, Janche; Follen, Gregory; Kim, Chan; Lopez, Isaac; Townsend, Scott

    2001-01-01

    Many scientific applications in aerodynamics and solid mechanics are written in Fortran. Refitting these legacy Fortran codes with distributed objects can increase code reusability. The remote variable scheme provided in NPSS/CCDK helps programmers easily migrate Fortran codes towards a client-server platform. This scheme gives the client the capability of accessing variables at the server site. In this paper, we review and enhance the remote variable scheme by using the operator overloading features of C++. The enhancement enables NPSS programmers to use remote variables in much the same way as traditional variables. The remote variable scheme adopts a lazy update approach and a prefetch method. The design strategies and implementation techniques are described in detail. Preliminary performance evaluation shows that communication overhead can be greatly reduced.
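The NPSS/CCDK enhancement itself relies on C++ operator overloading; as a language-neutral illustration of the lazy-update and deferred-write ideas only, here is a small Python sketch. The FakeServer stand-in and all names are invented for the example:

```python
# Illustrative sketch (not the NPSS/CCDK implementation): reads hit a local
# cache and fetch from the "server" only when the cache is stale; writes mark
# the cache dirty and defer the network send until flush().

class FakeServer:
    def __init__(self):
        self.store = {}
        self.fetches = 0          # counts round trips, to show caching works
    def get(self, name):
        self.fetches += 1
        return self.store[name]
    def put(self, name, value):
        self.store[name] = value

class RemoteVariable:
    def __init__(self, server, name):
        self._server, self._name = server, name
        self._cache, self._fresh, self._dirty = None, False, False
    @property
    def value(self):
        if not self._fresh:                  # lazy update: fetch only when stale
            self._cache = self._server.get(self._name)
            self._fresh = True
        return self._cache
    @value.setter
    def value(self, v):
        self._cache, self._fresh, self._dirty = v, True, True
    def flush(self):                         # push the deferred write
        if self._dirty:
            self._server.put(self._name, self._cache)
            self._dirty = False

server = FakeServer()
server.store["mach"] = 0.8
mach = RemoteVariable(server, "mach")
```

Repeated reads cost one round trip instead of many, which is the overhead reduction the abstract reports; in the C++ scheme, overloaded operators make these proxied accesses look like plain variable uses.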

  12. Social networking profile correlates of schizotypy.

    PubMed

    Martin, Elizabeth A; Bailey, Drew H; Cicero, David C; Kerns, John G

    2012-12-30

    Social networking sites, such as Facebook, are extremely popular and have become a primary method for socialization and communication. Despite a report of increased use among those on the schizophrenia-spectrum, few details are known about their actual practices. In the current research, undergraduate participants completed measures of schizotypy and personality, and provided access to their Facebook profiles. Information from the profiles were then systematically coded and compared to the questionnaire data. As predicted, social anhedonia (SocAnh) was associated with a decrease in social participation variables, including a decrease in number of friends and number of photos, and an increase in length of time since communication with a friend, but SocAnh was also associated with an increase in profile length. Also, SocAnh was highly correlated with extraversion. Relatedly, extraversion uniquely predicted the number of friends and photos and length of time since communication with a friend. In addition, perceptual aberration/magical ideation (PerMag) was associated with an increased number of "black outs" on Facebook profile print-outs, a measure of paranoia. Overall, results from this naturalistic-like study show that SocAnh and extraversion are associated with decreased social participation and PerMag with increased paranoia related to information on social networking sites. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  13. Comparative Analysis of the Mitochondrial Genomes of Callitettixini Spittlebugs (Hemiptera: Cercopidae) Confirms the Overall High Evolutionary Speed of the AT-Rich Region but Reveals the Presence of Short Conservative Elements at the Tribal Level

    PubMed Central

    Liu, Jie; Bu, Cuiping; Wipfler, Benjamin; Liang, Aiping

    2014-01-01

    The present study compares the mitochondrial genomes of five species of the spittlebug tribe Callitettixini (Hemiptera: Cercopoidea: Cercopidae) from eastern Asia. All genomes of the five species sequenced are circular double-stranded DNA molecules and range from 15,222 to 15,637 bp in length. They contain 22 tRNA genes, 13 protein coding genes (PCGs) and 2 rRNA genes and share the putative ancestral gene arrangement of insects. The PCGs show an extreme bias of nucleotide and amino acid composition. Significant differences of the substitution rates among the different genes as well as the different codon position of each PCG are revealed by the comparative evolutionary analyses. The substitution speeds of the first and second codon position of different PCGs are negatively correlated with their GC content. Among the five species, the AT-rich region features great differences in length and pattern and generally shows a 2–5 times higher substitution rate than the fastest PCG in the mitochondrial genome, atp8. Despite the significant variability in length, short conservative segments were identified in the AT-rich region within Callitettixini, although absent from the other groups of the spittlebug superfamily Cercopoidea. PMID:25285442

  14. Termination and read-through proteins encoded by genome segment 9 of Colorado tick fever virus.

    PubMed

    Mohd Jaafar, Fauziah; Attoui, Houssam; De Micco, Philippe; De Lamballerie, Xavier

    2004-08-01

    Genome segment 9 (Seg-9) of Colorado tick fever virus (CTFV) is 1884 bp long and contains a large open reading frame (ORF; 1845 nt in length overall), although a single in-frame stop codon (at nt 1052-1054) reduces the ORF coding capacity by approximately 40 %. However, analyses of highly conserved RNA sequences in the vicinity of the stop codon indicate that it belongs to a class of 'leaky terminators'. The third nucleotide positions in codons situated both before and after the stop codon show the highest variability, suggesting that both regions are translated during virus replication. This also suggests that the stop signal is functionally leaky, allowing read-through translation to occur. Indeed, both the truncated 'termination' protein and the full-length 'read-through' protein (VP9 and VP9', respectively) were detected in CTFV-infected cells, in cells transfected with a plasmid expressing only Seg-9 protein products, and in the in vitro translation products from undenatured Seg-9 ssRNA. The ratios of full-length and truncated proteins generated suggest that read-through may be down-regulated by other viral proteins. Western blot analysis of infected cells and purified CTFV showed that VP9 is a structural component of the virion, while VP9' is a non-structural protein.

  15. Social networking profile correlates of schizotypy

    PubMed Central

    Martin, Elizabeth A.; Bailey, Drew H.; Cicero, David C.; Kerns, John G.

    2015-01-01

    Social networking sites, such as Facebook, are extremely popular and have become a primary method for socialization and communication. Despite a report of increased use among those on the schizophrenia-spectrum, few details are known about their actual practices. In the current research, undergraduate participants completed measures of schizotypy and personality, and provided access to their Facebook profiles. Information from the profiles was then systematically coded and compared to the questionnaire data. As predicted, social anhedonia (SocAnh) was associated with a decrease in social participation variables, including a decrease in number of friends and number of photos, and an increase in length of time since communication with a friend, but SocAnh was also associated with an increase in profile length. Also, SocAnh was highly correlated with extraversion. Relatedly, extraversion uniquely predicted the number of friends and photos and length of time since communication with a friend. In addition, perceptual aberration/magical ideation (PerMag) was associated with an increased number of “black outs” on Facebook profile print-outs, a measure of paranoia. Overall, results from this naturalistic-like study show that SocAnh and extraversion are associated with decreased social participation and PerMag with increased paranoia related to information on social networking sites. PMID:22796101

  16. Evaluation of three coding schemes designed for improved data communication

    NASA Technical Reports Server (NTRS)

    Snelsire, R. W.

    1974-01-01

    Three coding schemes designed for improved data communication are evaluated. Four block codes are evaluated relative to a quality function, which is a function of both the amount of data rejected and the error rate. The Viterbi maximum likelihood decoding algorithm as a decoding procedure is reviewed. This evaluation is obtained by simulating the system on a digital computer. Short constraint length rate 1/2 quick-look codes are studied, and their performance is compared to general nonsystematic codes.
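
    The Viterbi maximum-likelihood decoding procedure reviewed above can be sketched for a short-constraint-length rate 1/2 code. The generators (7,5) octal are a standard textbook choice, not necessarily the codes evaluated in the report.

```python
# Minimal rate-1/2, constraint-length-3 convolutional encoder and
# hard-decision Viterbi decoder over the 4-state trellis.

def conv_encode(bits):
    """Encode with generators g1=111, g2=101 (octal 7,5)."""
    s1 = s2 = 0
    out = []
    for u in bits + [0, 0]:           # two flush bits drive the state to zero
        out += [u ^ s1 ^ s2, u ^ s2]
        s1, s2 = u, s1
    return out

def viterbi_decode(received, nbits):
    """Maximum-likelihood decoding by minimum Hamming path metric."""
    INF = float("inf")
    metric = [0, INF, INF, INF]       # path metric per state 2*s1 + s2
    paths = [[], [], [], []]          # survivor input sequence per state
    for t in range(nbits + 2):        # +2 trellis steps for the flush bits
        r = received[2 * t:2 * t + 2]
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for state in range(4):
            if metric[state] == INF:
                continue
            s1, s2 = state >> 1, state & 1
            for u in (0, 1):
                expected = [u ^ s1 ^ s2, u ^ s2]
                branch = sum(a != b for a, b in zip(r, expected))
                nxt = (u << 1) | s1
                if metric[state] + branch < new_metric[nxt]:
                    new_metric[nxt] = metric[state] + branch
                    new_paths[nxt] = paths[state] + [u]
        metric, paths = new_metric, new_paths
    return paths[0][:nbits]           # code was terminated in the zero state
```

    This (7,5) code has free distance 5, so any single channel bit error in a block is corrected, which is the error-correcting behavior a simulation like the one described would measure.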

  17. Ultrasonic Inspection and Fatigue Evaluation of Critical Pore Size in Welds.

    DTIC Science & Technology

    1981-09-01

    Boiler and Pressure Vessel Code)...Five porosity levels were produced that paralleled the ASME Boiler and Pressure Vessel Code specification (Section VIII). Appendix IV of the pressure...Figure 2 shows porosity charts (ASME Boiler and Pressure Vessel Code) which classify and designate the number and size of pores in any six-inch length

  18. Bit-Wise Arithmetic Coding For Compression Of Data

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron

    1996-01-01

    Bit-wise arithmetic coding is a data-compression scheme intended especially for use with uniformly quantized data from a source with a Gaussian, Laplacian, or similar probability distribution function. Code words are of fixed length, and bits are treated as being independent. The scheme serves as a means of progressive transmission or of overcoming buffer-overflow or rate-constraint limitations that sometimes arise when data compression is used.
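
    The payoff of treating the bits of fixed-length code words as independent can be sketched by bookkeeping alone: an arithmetic coder spends about the binary entropy H(p_i) bits on position i, so the ideal coded size per word is the sum of per-position entropies. This is only an entropy accounting sketch under that independence assumption, not the actual coder.

```python
import math

def bit_entropy(p):
    """Binary entropy in bits; 0 by convention when p is 0 or 1."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def ideal_bits_per_word(words, width):
    """Sum of per-position binary entropies over fixed-width integer words."""
    n = len(words)
    total = 0.0
    for i in range(width):
        ones = sum((w >> i) & 1 for w in words)
        total += bit_entropy(ones / n)
    return total
```

    For Gaussian- or Laplacian-distributed quantized samples, the high-order bit positions are almost always 0, so their entropy (and coded cost) is near zero; that is where the compression comes from.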

  19. Superconducting Cavity Development for Free Electron Lasers.

    DTIC Science & Technology

    1986-06-30

    effects have been modeled extensively using the code PARMELA, including finite space charge. The conflict is resolved through the use of harmonically...depends on the specifics of how the whole accelerator is run, i.e., bunch length, interpulse spacing, macrobunch length, charge per bunch, external...this indicates that the bunch length should be as long as possible. 2.4 OPTIMUM BUNCH LENGTH...Although wakefield, HOM excitation, and space charge

  20. An investigation of messy genetic algorithms

    NASA Technical Reports Server (NTRS)

    Goldberg, David E.; Deb, Kalyanmoy; Korb, Bradley

    1990-01-01

    Genetic algorithms (GAs) are search procedures based on the mechanics of natural selection and natural genetics. They combine the use of string codings or artificial chromosomes and populations with the selective and juxtapositional power of reproduction and recombination to motivate a surprisingly powerful search heuristic in many problems. Despite their empirical success, there has been a long standing objection to the use of GAs in arbitrarily difficult problems. A new approach was launched. Results to a 30-bit, order-three-deception problem were obtained using a new type of genetic algorithm called a messy genetic algorithm (mGAs). Messy genetic algorithms combine the use of variable-length strings, a two-phase selection scheme, and messy genetic operators to effect a solution to the fixed-coding problem of standard simple GAs. The results of the study of mGAs in problems with nonuniform subfunction scale and size are presented. The mGA approach is summarized, both its operation and the theory of its use. Experiments on problems of varying scale, varying building-block size, and combined varying scale and size are presented.

  1. Quantum Kronecker sum-product low-density parity-check codes with finite rate

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Pryadko, Leonid P.

    2013-07-01

    We introduce an ansatz for quantum codes which gives the hypergraph-product (generalized toric) codes by Tillich and Zémor and generalized bicycle codes by MacKay as limiting cases. The construction allows for both the lower and the upper bounds on the minimum distance; they scale as a square root of the block length. Many thus defined codes have a finite rate and limited-weight stabilizer generators, an analog of classical low-density parity-check (LDPC) codes. Compared to the hypergraph-product codes, hyperbicycle codes generally have a wider range of parameters; in particular, they can have a higher rate while preserving the estimated error threshold.
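
    The hypergraph-product construction of Tillich and Zémor, which the ansatz above generalizes, builds a CSS pair (Hx, Hz) from two classical parity-check matrices via Kronecker products; the commutation condition Hx Hz^T = 0 (mod 2) then holds identically. A sketch with small repetition-code inputs (the example matrices are my own choice, not from the paper):

```python
import numpy as np

def hypergraph_product(H1, H2):
    """CSS matrices of the hypergraph product of classical checks H1, H2."""
    r1, n1 = H1.shape
    r2, n2 = H2.shape
    # X-type checks: [H1 (x) I_n2 | I_r1 (x) H2^T]
    Hx = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(r1, dtype=int), H2.T)]) % 2
    # Z-type checks: [I_n1 (x) H2 | H1^T (x) I_r2]
    Hz = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(r2, dtype=int))]) % 2
    return Hx, Hz
```

    The product Hx Hz^T expands to H1 (x) H2^T + H1 (x) H2^T, which vanishes mod 2, and sparse inputs yield the limited-weight stabilizer generators mentioned above.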

  2. Industry and Occupation in the Electronic Health Record: An Investigation of the National Institute for Occupational Safety and Health Industry and Occupation Computerized Coding System.

    PubMed

    Schmitz, Matthew; Forst, Linda

    2016-02-15

    Inclusion of information about a patient's work, industry, and occupation in the electronic health record (EHR) could facilitate occupational health surveillance, better health outcomes, prevention activities, and identification of workers' compensation cases. The US National Institute for Occupational Safety and Health (NIOSH) has developed an autocoding system for "industry" and "occupation" based on 1990 Bureau of Census codes; its effectiveness requires evaluation in conjunction with promoting the mandatory addition of these variables to the EHR. The objective of the study was to evaluate the intercoder reliability of NIOSH's Industry and Occupation Computerized Coding System (NIOCCS) when applied to data collected in a community survey conducted under the Affordable Care Act, and to determine the proportion of records that are autocoded using NIOCCS. Standard Occupational Classification (SOC) codes are used by several federal agencies in databases that capture demographic, employment, and health information to harmonize variables related to work activities among these data sources. A total of 359 industry and occupation responses were hand coded by 2 investigators, who came to a consensus on every code. The same variables were autocoded using NIOCCS at the high and moderate criteria levels. Kappa was .84 for agreement between hand coders and between the hand coder consensus code versus NIOCCS high confidence level codes for the first 2 digits of the SOC code. For 4 digits, NIOCCS coding versus investigator coding ranged from kappa=.56 to .70. In this study, NIOCCS was able to achieve production rates (i.e., to autocode) for 31%-36% of entered variables at the "high confidence" level and 49%-58% at the "medium confidence" level. Autocoding (production) rates are somewhat lower than those reported by NIOSH. Agreement between manually coded and autocoded data is "substantial" at the 2-digit level, but only "fair" to "good" at the 4-digit level.
This work serves as a baseline for performance of NIOCCS by investigators in the field. Further field testing will clarify NIOCCS effectiveness in terms of ability to assign codes and coding accuracy and will clarify its value as inclusion of these occupational variables in the EHR is promoted.
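
    The agreement statistic reported above is Cohen's kappa: observed agreement corrected for the agreement expected by chance from each coder's marginal label frequencies. A minimal sketch of its computation; the example label sequences in the test are invented, not the study's data.

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two coders' label sequences of equal length."""
    n = len(labels_a)
    cats = set(labels_a) | set(labels_b)
    # observed proportion of exact agreement
    p_obs = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # chance agreement from each coder's marginal frequencies
    p_exp = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
                for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)
```

    Kappa is 1 for perfect agreement and 0 when agreement is no better than chance, which is why .84 at two digits counts as "substantial" and .56 at four digits only "fair" to "good".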

  3. On the existence of binary simplex codes. [using combinatorial construction

    NASA Technical Reports Server (NTRS)

    Taylor, H.

    1977-01-01

    Using a simple combinatorial construction, the existence of a binary simplex code with m codewords for all m greater than or equal to 1 is proved. The problem of the shortest possible length is left open.

  4. Inferring the expression variability of human transposable element-derived exons by linear model analysis of deep RNA sequencing data.

    PubMed

    Zhang, Wensheng; Edwards, Andrea; Fan, Wei; Fang, Zhide; Deininger, Prescott; Zhang, Kun

    2013-08-28

    The exonization of transposable elements (TEs) has proven to be a significant mechanism for the creation of novel exons. Existing knowledge of the retention patterns of TE exons in mRNAs was mainly established by the analysis of Expressed Sequence Tag (EST) data and microarray data. This study seeks to validate and extend previous studies on the expression of TE exons by an integrative statistical analysis of high-throughput RNA sequencing data. We collected 26 RNA-seq datasets spanning multiple tissues and cancer types. The exon-level digital expressions (indicating retention rates in mRNAs) were quantified by a double-normalized measure, called the rescaled RPKM (Reads Per Kilobase of exon model per Million mapped reads). We analyzed the distribution profiles and the variability (across samples and between tissue/disease groups) of TE exon expressions, and compared them with those of other constitutive or cassette exons. We inferred the effects of four genomic factors, including the location, length, cognate TE family, and TE nucleotide proportion (RTE, see Methods section) of a TE exon, on the exons' expression level and expression variability. We also investigated the biological implications of an assembly of highly-expressed TE exons. Our analysis confirmed prior studies from the following four aspects. First, with relatively high expression variability, most TE exons in mRNAs, especially those without exact counterparts in the UCSC RefSeq (Reference Sequence) gene tables, demonstrate low but still detectable expression levels in most tissue samples. Second, the TE exons in coding DNA sequences (CDSs) are less highly expressed than those in 3' (5') untranslated regions (UTRs). Third, the exons derived from chronologically ancient repeat elements, such as MIRs, tend to be highly expressed in comparison with those derived from younger TEs.
Fourth, the previously observed negative relationship between the lengths of exons and the inclusion levels in transcripts is also true for exonized TEs. Furthermore, our study resulted in several novel findings. They include: (1) for the TE exons with non-zero expression and as shown in most of the studied biological samples, a high TE nucleotide proportion leads to their lower retention rates in mRNAs; (2) the considered genomic features (i.e. a continuous variable such as the exon length or a category indicator such as 3'UTR) influence the expression level and the expression variability (CV) of TE exons in an inverse manner; (3) not only the exons derived from Alu elements but also the exons from the TEs of other families were preferentially established in zinc finger (ZNF) genes.
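
    The base measure behind the rescaled RPKM used above follows directly from its expansion (Reads Per Kilobase of exon model per Million mapped reads); the paper's additional rescaling step is specific to its pipeline and is omitted from this sketch.

```python
def rpkm(exon_reads, exon_length_bp, total_mapped_reads):
    """Reads Per Kilobase of exon model per Million mapped reads."""
    kilobases = exon_length_bp / 1000.0
    million_reads = total_mapped_reads / 1e6
    return exon_reads / (kilobases * million_reads)
```

    Dividing by exon length and by library size is what makes retention rates comparable across exons of different lengths and across samples of different sequencing depths.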

  5. Testing of Error-Correcting Sparse Permutation Channel Codes

    NASA Technical Reports Server (NTRS)

    Shcheglov, Kirill, V.; Orlov, Sergei S.

    2008-01-01

    A computer program performs Monte Carlo direct numerical simulations for testing sparse permutation channel codes, which offer strong error-correction capabilities at high code rates and are considered especially suitable for storage of digital data in holographic and volume memories. A word in a code of this type is characterized by, among other things, a sparseness parameter (M) and a fixed number (K) of 1 or "on" bits in a channel block length of N.
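
    A word with exactly K "on" bits in a block of length N, as described above, can be indexed by the combinatorial number system, which gives a bijection between integers in [0, C(N,K)) and the positions of the on-bits. This is a generic illustration of that constant-weight indexing, not the actual permutation channel code tested by the program.

```python
from math import comb

def index_to_word(index, n, k):
    """Map an index in [0, C(n,k)) to the sorted positions of k on-bits."""
    positions = []
    for bit in range(n - 1, -1, -1):
        if k == 0:
            break
        # number of words whose highest remaining on-bit lies below `bit`
        if index >= comb(bit, k):
            positions.append(bit)
            index -= comb(bit, k)
            k -= 1
    return sorted(positions)

def word_to_index(positions):
    """Inverse map: on-bit positions back to the integer index."""
    k = len(positions)
    return sum(comb(p, k - i)
               for i, p in enumerate(sorted(positions, reverse=True)))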

  6. On the Application of Time-Reversed Space-Time Block Code to Aeronautical Telemetry

    DTIC Science & Technology

    2014-06-01

    Keying (SOQPSK), bit error rate (BER), Orthogonal Frequency Division Multiplexing (OFDM), Generalized time-reversed space-time block codes (GTR-STBC)...Alamouti code [4]) is optimum [2]. Although OFDM is generally applied on a per subcarrier basis in frequency selective fading, it is not a viable...Calderbank, "Finite-length MIMO decision feedback equalization for space-time block-coded signals over multipath-fading channels," IEEE Transactions on

  7. Energy dynamics and current sheet structure in fluid and kinetic simulations of decaying magnetohydrodynamic turbulence

    DOE PAGES

    Makwana, K. D.; Zhdankin, V.; Li, H.; ...

    2015-04-10

    We performed simulations of decaying magnetohydrodynamic (MHD) turbulence with a fluid and a kinetic code. The initial condition is an ensemble of long-wavelength, counter-propagating, shear-Alfvén waves, which interact and rapidly generate strong MHD turbulence. The total energy is conserved and the rate of turbulent energy decay is very similar in both codes, although the fluid code has numerical dissipation, whereas the kinetic code has kinetic dissipation. The inertial range power spectrum index is similar in both codes. The fluid code shows a perpendicular wavenumber spectral slope of k⊥^-1.3. The kinetic code shows a spectral slope of k⊥^-1.5 for the smaller simulation domain, and k⊥^-1.3 for the larger domain. We then estimate that collisionless damping mechanisms in the kinetic code can account for the dissipation of the observed nonlinear energy cascade. Current sheets are geometrically characterized. Their lengths and widths are in good agreement between the two codes. The length scales linearly with the driving scale of the turbulence. In the fluid code, their thickness is determined by the grid resolution as there is no explicit diffusivity. In the kinetic code, their thickness is very close to the skin depth, irrespective of the grid resolution. Finally, this work shows that kinetic codes can reproduce the MHD inertial range dynamics at large scales, while at the same time capturing important kinetic physics at small scales.
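
    The perpendicular spectral slopes quoted in this record are power-law exponents fitted over the inertial range. A minimal sketch of such a fit is linear least squares in log-log space; the synthetic spectrum in the test is an assumption for illustration, not simulation data.

```python
import numpy as np

def spectral_slope(k, E):
    """Fit E(k) ~ k**alpha over the supplied range and return alpha."""
    # a power law is a straight line in log-log coordinates
    alpha, _ = np.polyfit(np.log(k), np.log(E), 1)
    return alpha
```

    In practice one restricts k to the inertial range before fitting, since both the forcing scale and the dissipation scale bend the spectrum away from a pure power law.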

  8. Energy dynamics and current sheet structure in fluid and kinetic simulations of decaying magnetohydrodynamic turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makwana, K. D.; Zhdankin, V.; Li, H.

    We performed simulations of decaying magnetohydrodynamic (MHD) turbulence with a fluid and a kinetic code. The initial condition is an ensemble of long-wavelength, counter-propagating, shear-Alfvén waves, which interact and rapidly generate strong MHD turbulence. The total energy is conserved and the rate of turbulent energy decay is very similar in both codes, although the fluid code has numerical dissipation, whereas the kinetic code has kinetic dissipation. The inertial range power spectrum index is similar in both codes. The fluid code shows a perpendicular wavenumber spectral slope of k⊥^-1.3. The kinetic code shows a spectral slope of k⊥^-1.5 for the smaller simulation domain, and k⊥^-1.3 for the larger domain. We then estimate that collisionless damping mechanisms in the kinetic code can account for the dissipation of the observed nonlinear energy cascade. Current sheets are geometrically characterized. Their lengths and widths are in good agreement between the two codes. The length scales linearly with the driving scale of the turbulence. In the fluid code, their thickness is determined by the grid resolution as there is no explicit diffusivity. In the kinetic code, their thickness is very close to the skin depth, irrespective of the grid resolution. Finally, this work shows that kinetic codes can reproduce the MHD inertial range dynamics at large scales, while at the same time capturing important kinetic physics at small scales.

  9. Pediatric sports-related lower extremity fractures: hospital length of stay and charges: what is the role of the primary payer?

    PubMed

    Gao, Yubo; Johnston, Richard C; Karam, Matthew

    2010-01-01

    The purposes of this study were (a) to evaluate the distribution by primary payer (public vs. private) of U.S. pediatric patients aged 5-18 years who were hospitalized with a sports-related lower extremity fracture and (b) to discern the adjusted mean hospital length of stay and mean charge per day by payer type. Children who were aged 5 to 18 years and had diagnoses of lower extremity fracture and sports-related injury in the 2006 Healthcare Cost and Utilization Project Kids' Inpatient Database were included. Lower extremity fractures are defined as International Classification of Diseases, 9th Revision, Clinical Modification codes 820-829 under Section "Injury and Poisoning (800-999)," while sports-related external cause of injury codes (E-codes) are E886.0, E917.0, and E917.5. Differences in hospital length of stay and cost per day by payer type were assessed via adjusted least square mean analysis. The adjusted mean hospital length of stay was 20% higher for patients with a public payer (2.50 days) versus a private payer (2.08 days). The adjusted mean charge per day differed by about 10% by payer type (public, US$7,900; private, US$8,794). Further research is required to identify factors that are associated with different length of stay and mean charge per day by payer type, and explore whether observed differences in hospital length of stay are the result of private payers enhancing patient care, thereby discharging patients in a more efficient manner.

  10. Introduction and application of the multiscale coefficient of variation analysis.

    PubMed

    Abney, Drew H; Kello, Christopher T; Balasubramaniam, Ramesh

    2017-10-01

    Quantifying how patterns of behavior relate across multiple levels of measurement typically requires long time series for reliable parameter estimation. We describe a novel analysis that estimates patterns of variability across multiple scales of analysis suitable for time series of short duration. The multiscale coefficient of variation (MSCV) measures the distance between local coefficient of variation estimates within particular time windows and the overall coefficient of variation across all time samples. We first describe the MSCV analysis and provide an example analytical protocol with corresponding MATLAB implementation and code. Next, we present a simulation study testing the new analysis using time series generated by ARFIMA models that span white noise, short-term and long-term correlations. The MSCV analysis was observed to be sensitive to specific parameters of ARFIMA models varying in the type of temporal structure and time series length. We then apply the MSCV analysis to short time series of speech phrases and musical themes to show commonalities in multiscale structure. The simulation and application studies provide evidence that the MSCV analysis can discriminate between time series varying in multiscale structure and length.
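
    A simplified reading of the MSCV definition above: at each window size, average the distance between windowed coefficient-of-variation estimates and the global coefficient of variation. The details here (non-overlapping windows, absolute distance) are my own assumptions, not necessarily the authors' MATLAB protocol.

```python
import statistics

def cv(xs):
    """Coefficient of variation: population SD over mean."""
    return statistics.pstdev(xs) / statistics.mean(xs)

def mscv(series, window_sizes):
    """Mean |local CV - overall CV| for each requested window size."""
    overall = cv(series)
    out = {}
    for w in window_sizes:
        # local CV in consecutive non-overlapping windows of length w
        local = [cv(series[i:i + w])
                 for i in range(0, len(series) - w + 1, w)]
        out[w] = sum(abs(c - overall) for c in local) / len(local)
    return out
```

    A perfectly periodic series whose windows reproduce the global variability yields MSCV near zero, while multiscale structure shows up as window-size-dependent deviations.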

  11. Self-complementary circular codes in coding theory.

    PubMed

    Fimmel, Elena; Michel, Christian J; Starman, Martin; Strüngmann, Lutz

    2018-04-01

    Self-complementary circular codes are involved in pairing genetic processes. A maximal [Formula: see text] self-complementary circular code X of trinucleotides was identified in genes of bacteria, archaea, eukaryotes, plasmids and viruses (Michel in Life 7(20):1-16 2017, J Theor Biol 380:156-177, 2015; Arquès and Michel in J Theor Biol 182:45-58 1996). In this paper, self-complementary circular codes are investigated using the graph theory approach recently formulated in Fimmel et al. (Philos Trans R Soc A 374:20150058, 2016). A directed graph [Formula: see text] associated with any code X mirrors the properties of the code. In the present paper, we demonstrate a necessary condition for the self-complementarity of an arbitrary code X in terms of the graph theory. The same condition has been proven to be sufficient for codes which are circular and of large size [Formula: see text] trinucleotides, in particular for maximal circular codes ([Formula: see text] trinucleotides). For codes of small-size [Formula: see text] trinucleotides, some very rare counterexamples have been constructed. Furthermore, the length and the structure of the longest paths in the graphs associated with the self-complementary circular codes are investigated. It has been proven that the longest paths in such graphs determine the reading frame for the self-complementary circular codes. By applying this result, the reading frame in any arbitrary sequence of trinucleotides is retrieved after at most 15 nucleotides, i.e., 5 consecutive trinucleotides, from the circular code X identified in genes. Thus, an X motif of a length of at least 15 nucleotides in an arbitrary sequence of trinucleotides (not necessarily all of them belonging to X) uniquely defines the reading (correct) frame, an important criterion for analyzing the X motifs in genes in the future.
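
    The graph construction referenced above (Fimmel et al. 2016) associates with a trinucleotide code X a directed graph whose nodes are nucleotides and dinucleotides, with edges b1 -> b2b3 and b1b2 -> b3 for each trinucleotide b1b2b3 in X; their theorem states that X is circular exactly when this graph is acyclic. A small sketch with a depth-first cycle check:

```python
def code_graph(code):
    """Directed graph of a trinucleotide code, as adjacency sets."""
    edges = {}
    for t in code:
        edges.setdefault(t[0], set()).add(t[1:])   # b1 -> b2b3
        edges.setdefault(t[:2], set()).add(t[2])   # b1b2 -> b3
    return edges

def is_acyclic(edges):
    """Depth-first search; a gray-node back edge reveals a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def dfs(v):
        color[v] = GRAY
        for w in edges.get(v, ()):
            if color.get(w, WHITE) == GRAY:
                return False
            if color.get(w, WHITE) == WHITE and not dfs(w):
                return False
        color[v] = BLACK
        return True
    return all(color.get(v, WHITE) != WHITE or dfs(v) for v in list(edges))

def is_circular(code):
    return is_acyclic(code_graph(code))
```

    The periodic code {AAA} produces the cycle A -> AA -> A and is correctly rejected, and a code together with a circular permutation of one of its words (e.g. {ACG, CGA}) likewise creates a cycle.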

  12. Analysis of copy number variations in Holstein-Friesian cow genomes based on whole-genome sequence data.

    PubMed

    Mielczarek, M; Frąszczak, M; Giannico, R; Minozzi, G; Williams, John L; Wojdak-Maksymiec, K; Szyda, J

    2017-07-01

    Thirty-two whole genome DNA sequences of cows were analyzed to evaluate inter-individual variability in the distribution and length of copy number variations (CNV) and to functionally annotate CNV breakpoints. The total number of deletions per individual varied between 9,731 and 15,051, whereas the number of duplications was between 1,694 and 5,187. Most of the deletions (81%) and duplications (86%) were unique to a single cow. No relation between the pattern of variant sharing and a family relationship or disease status was found. The animal-averaged length of deletions was from 5,234 to 9,145 bp and the average length of duplications was between 7,254 and 8,843 bp. Highly significant inter-individual variation in length and number of CNV was detected for both deletions and duplications. The majority of deletion and duplication breakpoints were located in intergenic regions and introns, whereas fewer were identified in noncoding transcripts and splice regions. Only 1.35 and 0.79% of the deletion and duplication breakpoints were observed within coding regions. A gene with the highest number of deletion breakpoints codes for protein kinase cGMP-dependent type I, whereas the T-cell receptor α constant gene had the most duplication breakpoints. The functional annotation of genes with the largest incidence of deletion/duplication breakpoints identified 87/112 Kyoto Encyclopedia of Genes and Genomes pathways, but none of the pathways were significantly enriched or depleted with breakpoints. The analysis of Gene Ontology (GO) terms revealed that a cluster with the highest enrichment score among genes with many deletion breakpoints was represented by GO terms related to ion transport, whereas the GO term cluster mostly enriched among the genes with many duplication breakpoints was related to binding of macromolecules. 
Furthermore, when considering the number of deletion breakpoints per gene functional category, no significant differences were observed between the "housekeeping" and "strong selection" categories, but genes representing the "low selection pressure" group showed a significantly higher number of breakpoints. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  13. Fundamental differences between optimization code test problems in engineering applications

    NASA Technical Reports Server (NTRS)

    Eason, E. D.

    1984-01-01

    The purpose here is to suggest that there is at least one fundamental difference between the problems used for testing optimization codes and the problems that engineers often need to solve; in particular, the level of precision that can be practically achieved in the numerical evaluation of the objective function, derivatives, and constraints. This difference affects the performance of optimization codes, as illustrated by two examples. Two classes of optimization problem were defined. Class One functions and constraints can be evaluated to a high precision that depends primarily on the word length of the computer. Class Two functions and/or constraints can only be evaluated to a moderate or a low level of precision for economic or modeling reasons, regardless of the computer word length. Optimization codes have not been adequately tested on Class Two problems. There are very few Class Two test problems in the literature, while there are literally hundreds of Class One test problems. The relative performance of two codes may be markedly different for Class One and Class Two problems. Less sophisticated direct search type codes may be less likely to be confused or to waste many function evaluations on Class Two problems. The analysis accuracy and minimization performance are related in a complex way that probably varies from code to code. On a problem where the analysis precision was varied over a range, the simple Hooke and Jeeves code was more efficient at low precision while the Powell code was more efficient at high precision.
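
    A minimal Hooke and Jeeves style direct pattern search, the kind of less-sophisticated code the abstract suggests may cope better with low-precision Class Two evaluations. This is a simplified generic sketch (exploratory coordinate moves plus mesh shrinking; the classical pattern-move extrapolation is omitted), not the code actually benchmarked.

```python
def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=10_000):
    """Derivative-free minimization by coordinate exploration."""
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        # exploratory moves: probe +/- step along each coordinate
        y, fy = list(x), fx
        for i in range(len(y)):
            for d in (step, -step):
                trial = list(y)
                trial[i] += d
                ft = f(trial)
                if ft < fy:
                    y, fy = trial, ft
                    break
        if fy < fx:
            x, fx = y, fy             # accept the improved point
        else:
            step *= shrink            # no improvement: refine the mesh
    return x, fx
```

    Because it only compares function values and never differences them, such a search tolerates noisy, low-precision objectives better than methods that build curvature estimates from near-equal values.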

  14. Documentation of the GLAS fourth order general circulation model. Volume 3: Vectorized code for the Cyber 205

    NASA Technical Reports Server (NTRS)

    Kalnay, E.; Balgovind, R.; Chao, W.; Edelmann, D.; Pfaendtner, J.; Takacs, L.; Takano, K.

    1983-01-01

    Volume 3 of a three-volume technical memorandum containing documentation of the GLAS fourth order general circulation model is presented. The volume contains the CYBER 205 scalar and vector codes of the model, a list of variables, and cross references. A dictionary of FORTRAN variables used in the scalar version, and listings of the FORTRAN code compiled with the C-option, are included. Cross-reference maps of local variables are included for each subroutine.

  15. Variability in interhospital trauma data coding and scoring: A challenge to the accuracy of aggregated trauma registries.

    PubMed

    Arabian, Sandra S; Marcus, Michael; Captain, Kevin; Pomphrey, Michelle; Breeze, Janis; Wolfe, Jennefer; Bugaev, Nikolay; Rabinovici, Reuven

    2015-09-01

    Analyses of data aggregated in state and national trauma registries provide the platform for clinical, research, development, and quality improvement efforts in trauma systems. However, the interhospital variability and accuracy in data abstraction and coding have not yet been directly evaluated. This multi-institutional, Web-based, anonymous study examines interhospital variability and accuracy in data coding and scoring by registrars. Eighty-two American College of Surgeons (ACS)/state-verified Level I and II trauma centers were invited to determine different data elements including diagnostic, procedure, and Abbreviated Injury Scale (AIS) coding as well as selected National Trauma Data Bank definitions for the same fictitious case. Variability and accuracy in data entries were assessed by the maximal percent agreement among the registrars for the tested data elements, and 95% confidence intervals were computed to compare this level of agreement to the ideal value of 100%. Variability and accuracy in all elements were compared (χ² testing) based on Trauma Quality Improvement Program (TQIP) membership, level of trauma center, ACS verification, and registrar's certifications. Fifty registrars (61%) completed the survey. The overall accuracy for all tested elements was 64%. Variability was noted in all examined parameters except for the place of occurrence code in all groups and the lower extremity AIS code in Level II trauma centers and in the Certified Specialist in Trauma Registry- and Certified Abbreviated Injury Scale Specialist-certified registrar groups. No differences in variability were noted when groups were compared based on TQIP membership, level of center, ACS verification, and registrar's certifications, except for prehospital Glasgow Coma Scale (GCS), where TQIP respondents agreed more than non-TQIP centers (p = 0.004). There is variability and inaccuracy in interhospital data coding and scoring of injury information.
This finding casts doubt on the validity of registry data used in all aspects of trauma care and injury surveillance.

  16. Variable Coded Modulation software simulation

    NASA Astrophysics Data System (ADS)

    Sielicki, Thomas A.; Hamkins, Jon; Thorsen, Denise

    This paper reports on the design and performance of a new Variable Coded Modulation (VCM) system. This VCM system comprises eight of NASA's recommended codes from the Consultative Committee for Space Data Systems (CCSDS) standards, including four turbo and four AR4JA/C2 low-density parity-check codes, together with six modulation types (BPSK, QPSK, 8-PSK, 16-APSK, 32-APSK, 64-APSK). The signaling protocol for the transmission mode is based on a CCSDS recommendation. The coded modulation may be dynamically chosen, block to block, to optimize throughput.
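
The per-block selection logic described above can be sketched as a throughput-maximizing table lookup. The modcod pairings, SNR thresholds, and link margin below are illustrative assumptions for the sketch, not CCSDS-specified values:

```python
# Illustrative sketch of per-block modcod selection in a VCM system.
# The SNR thresholds below are made-up placeholders, not standard values.
MODCODS = [
    # (name, bits_per_symbol, code_rate, min_snr_db)
    ("BPSK r1/2",    1, 1/2,  1.0),
    ("QPSK r1/2",    2, 1/2,  4.0),
    ("8-PSK r2/3",   3, 2/3,  9.0),
    ("16-APSK r3/4", 4, 3/4, 12.5),
    ("32-APSK r4/5", 5, 4/5, 16.0),
    ("64-APSK r5/6", 6, 5/6, 19.5),
]

def select_modcod(est_snr_db, margin_db=1.0):
    """Pick the highest-throughput modcod whose threshold fits the link."""
    usable = [m for m in MODCODS if m[3] + margin_db <= est_snr_db]
    if not usable:
        return MODCODS[0]                          # fall back to the most robust mode
    return max(usable, key=lambda m: m[1] * m[2])  # bits/symbol * code rate

print(select_modcod(13.0)[0])  # -> 8-PSK r2/3
```

As the estimated SNR changes block to block, the selected modcod tracks it, which is the throughput-optimizing behavior the abstract describes.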

  17. Before You Bring Back School Dress Codes, Recognize that the Courts Frown upon Attempts to "Restrict" Students' Rights.

    ERIC Educational Resources Information Center

    Sparks, Richard K.

    1983-01-01

    Courts will support school boards' dress codes if based on needs rather than opinions. Courts have affirmed that minors have constitutional rights. Hair length, clothing style, and beards may be protected by students' right to freedom of expression. Codes must be carefully written and consistent with schools' legitimate goals. (PB)

  18. The effect of word length and other sublexical, lexical, and semantic variables on developmental reading deficits.

    PubMed

    De Luca, Maria; Barca, Laura; Burani, Cristina; Zoccolotti, Pierluigi

    2008-12-01

    To examine the effect of word length and several sublexical and lexico-semantic variables on the reading of Italian children with a developmental reading deficit. Previous studies indicated the role of word length in transparent orthographies. However, several factors that may interact with word length were not controlled for. Seventeen impaired and 34 skilled sixth-grade readers were presented words of different lengths, matched for initial phoneme, bigram frequency, word frequency, age of acquisition, and imageability. Participants were asked to read aloud, as quickly and as accurately as possible. Reaction times at the onset of pronunciation and mispronunciations were recorded. Impaired readers' reaction times indicated a marked effect of word length; in skilled readers, there was no length effect for short words but, rather, a monotonic increase from 6-letter words on. Regression analyses confirmed the role of word length and indicated the influence of word frequency (similar in impaired and skilled readers). No other variables predicted reading latencies. Word length differentially influenced word recognition in impaired versus skilled readers, irrespective of the action of (potentially interfering) sublexical, lexical, and semantic variables. It is proposed that the locus of the length effect is at a perceptual level of analysis. The independent influence of word frequency on the reading performance of both groups of participants indicates the sparing of lexical activation in impaired readers.

  19. General phase spaces: from discrete variables to rotor and continuum limits

    NASA Astrophysics Data System (ADS)

    Albert, Victor V.; Pascazio, Saverio; Devoret, Michel H.

    2017-12-01

    We provide a basic introduction to discrete-variable, rotor, and continuous-variable quantum phase spaces, explaining how the latter two can be understood as limiting cases of the first. We extend the limit-taking procedures used to travel between phase spaces to a general class of Hamiltonians (including many local stabilizer codes) and provide six examples: the Harper equation, the Baxter parafermionic spin chain, the Rabi model, the Kitaev toric code, the Haah cubic code (which we generalize to qudits), and the Kitaev honeycomb model. We obtain continuous-variable generalizations of all models, some of which are novel. The Baxter model is mapped to a chain of coupled oscillators and the Rabi model to the optomechanical radiation pressure Hamiltonian. The procedures also yield rotor versions of all models, five of which are novel many-body extensions of the almost Mathieu equation. The toric and cubic codes are mapped to lattice models of rotors, with the toric code case related to U(1) lattice gauge theory.

  20. Variable Coding and Modulation Experiment Using NASA's Space Communication and Navigation Testbed

    NASA Technical Reports Server (NTRS)

    Downey, Joseph A.; Mortensen, Dale J.; Evans, Michael A.; Tollis, Nicholas S.

    2016-01-01

    National Aeronautics and Space Administration (NASA)'s Space Communication and Navigation Testbed on the International Space Station provides a unique opportunity to evaluate advanced communication techniques in an operational system. The experimental nature of the Testbed allows for rapid demonstrations while using flight hardware in a deployed system within NASA's networks. One example is variable coding and modulation, which is a method to increase data-throughput in a communication link. This paper describes recent flight testing with variable coding and modulation over S-band using a direct-to-earth link between the SCaN Testbed and the Glenn Research Center. The testing leverages the established Digital Video Broadcasting Second Generation (DVB-S2) standard to provide various modulation and coding options. The experiment was conducted in a challenging environment due to the multipath and shadowing caused by the International Space Station structure. Performance of the variable coding and modulation system is evaluated and compared to the capacity of the link, as well as standard NASA waveforms.

  1. Analysis of construction accidents in Spain, 2003-2008.

    PubMed

    López Arquillos, Antonio; Rubio Romero, Juan Carlos; Gibb, Alistair

    2012-12-01

    The research objective for this paper is to obtain a new extended and updated insight into the likely causes of construction accidents in Spain, in order to identify suitable mitigating actions. The paper analyzes all construction sector accidents in Spain between 2003 and 2008. Ten variables were chosen and the influence of each variable is evaluated with respect to the severity of the accident. The descriptive analysis is based on a total of 1,163,178 accidents. Results showed that the severity of accidents was related to variables including age, CNAE (National Classification of Economic Activities) code, size of company, length of service, location of accident, day of the week, days of absence, deviation, injury, and climatic zones. According to the data analyzed, a large company is not always necessarily safer than a small company with respect to fatal accidents, experienced workers do not have the best accident fatality rates, and accidents occurring away from the usual workplace had more severe consequences. Results obtained in this paper can be used by companies in their occupational safety strategies, and in their safety training programs. Copyright © 2012 National Safety Council and Elsevier Ltd. All rights reserved.

  2. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
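
The core idea, treating the source block as an error pattern and keeping only its syndrome, can be illustrated with the (7,4) Hamming code. This is a toy sketch of the principle rather than Ancheta's construction; the choice of code and the exhaustive coset-leader search are assumptions made for clarity:

```python
# Sketch of syndrome-source-coding with the (7,4) Hamming code: a sparse
# binary source block is treated as an error pattern and compressed to its
# 3-bit syndrome; decoding recovers the lowest-weight (coset leader) preimage.
H = [  # parity-check matrix of the (7,4) Hamming code
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(pattern):
    return tuple(sum(h * p for h, p in zip(row, pattern)) % 2 for row in H)

def compress(source_block):          # 7 source bits -> 3 compressed bits
    return syndrome(source_block)

def decompress(syn):                 # brute-force coset leader search
    best = None
    for n in range(128):
        pattern = [(n >> i) & 1 for i in range(7)]
        if syndrome(pattern) == syn and (best is None or sum(pattern) < sum(best)):
            best = pattern
    return best

block = [0, 0, 0, 0, 1, 0, 0]        # sparse source block (weight 1)
assert decompress(compress(block)) == block   # lossless for weight <= 1
```

Because the Hamming code corrects single errors, any source block of weight at most one is reconstructed exactly, which mirrors the abstract's claim of small distortion for sources close to their entropy rate.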

  3. Variable length adjacent partitioning for PTS based PAPR reduction of OFDM signal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibraheem, Zeyid T.; Rahman, Md. Mijanur; Yaakob, S. N.

    2015-05-15

    Peak-to-Average Power Ratio (PAPR) is a major drawback in OFDM communication. It drives the power amplifier into nonlinear operation, resulting in loss of data integrity. As such, there is a strong motivation to find techniques to reduce PAPR. Partial Transmit Sequence (PTS) is an attractive scheme for this purpose. Judicious partitioning of the OFDM data frame into disjoint subsets is a pivotal component of any PTS scheme. Of the existing partitioning techniques, adjacent partitioning is characterized by an attractive trade-off between cost and performance. With an aim of determining the effects of length variability of adjacent partitions, we performed an investigation into the performance of variable length adjacent partitioning (VL-AP) and fixed length adjacent partitioning in comparison with other partitioning schemes such as pseudorandom partitioning. Simulation results with different modulation and partitioning scenarios showed that fixed length adjacent partitioning had better performance than variable length adjacent partitioning. As expected, simulation results showed a slightly better performance of the pseudorandom partitioning technique compared to fixed and variable adjacent partitioning schemes. However, as the pseudorandom technique incurs high computational complexity, adjacent partitioning schemes were still seen as favorable candidates for PAPR reduction.
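
A minimal sketch of PTS with adjacent partitioning may clarify the scheme. The subcarrier count, partition count, and binary phase set below are illustrative choices, not those of the paper's simulations:

```python
# Sketch of PAPR reduction via Partial Transmit Sequence (PTS) with
# fixed-length adjacent partitioning of the OFDM subcarriers.
import cmath, itertools, math, random

def idft(X):
    """Time-domain OFDM symbol from subcarrier values (inverse DFT)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def papr_db(x):
    powers = [abs(v) ** 2 for v in x]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

def pts_adjacent(X, n_parts=4, phases=(1, -1)):
    """Split subcarriers into contiguous (adjacent) blocks, then search
    per-block phase factors exhaustively for the lowest PAPR."""
    N = len(X)
    size = N // n_parts
    parts = [idft([X[k] if p * size <= k < (p + 1) * size else 0 for k in range(N)])
             for p in range(n_parts)]
    best = None
    for combo in itertools.product(phases, repeat=n_parts):
        x = [sum(b * part[n] for b, part in zip(combo, parts)) for n in range(N)]
        papr = papr_db(x)
        if best is None or papr < best:
            best = papr
    return best

random.seed(1)
X = [random.choice([1, -1]) + 1j * random.choice([1, -1]) for _ in range(16)]  # QPSK
print(round(papr_db(idft(X)), 2), ">=", round(pts_adjacent(X), 2))
```

Since the all-ones phase combination reproduces the original symbol, the PTS search can never do worse than the unmodified signal; larger phase sets or pseudorandom partitions trade more search complexity for further reduction, as the abstract discusses.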

  4. Application of grammar-based codes for lossless compression of digital mammograms

    NASA Astrophysics Data System (ADS)

    Li, Xiaoli; Krishnan, Srithar; Ma, Ngok-Wah

    2006-01-01

    A newly developed grammar-based lossless source coding theory and its implementation were proposed in 1999 and 2000, respectively, by Yang and Kieffer. The code first transforms the original data sequence into an irreducible context-free grammar, which is then compressed using arithmetic coding. In the study of grammar-based coding for mammography applications, we encountered two issues: processing time and the limited number of single-character grammar G variables. For the first issue, we discovered a feature that can simplify the matching subsequence search in the irreducible grammar transform process. Using this discovery, an extended grammar code technique is proposed and the processing time of the grammar code can be significantly reduced. For the second issue, we propose to use double-character symbols to increase the number of grammar variables. Under the condition that all the G variables have the same probability of being used, our analysis shows that the double- and single-character approaches have the same compression rates. By using the methods proposed, we show that the grammar code can outperform three other schemes: Lempel-Ziv-Welch (LZW), arithmetic, and Huffman on compression ratio, and has similar error tolerance capabilities as LZW coding under similar circumstances.
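
The grammar-transform idea can be illustrated with a minimal Re-Pair-style pair replacement, which is related in spirit to, but is not, the Yang-Kieffer irreducible grammar transform:

```python
# Minimal Re-Pair-style grammar transform: repeatedly replace the most
# frequent adjacent pair with a new grammar variable. This illustrates the
# idea of grammar-based compression; it is NOT the Yang-Kieffer algorithm.
from collections import Counter

def repair(seq):
    rules, next_var = {}, 0
    seq = list(seq)
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        pair, count = pairs.most_common(1)[0] if pairs else (None, 0)
        if count < 2:                      # no pair repeats: grammar is final
            return seq, rules
        var = f"G{next_var}"; next_var += 1
        rules[var] = pair                  # new rule: var -> pair
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(var); i += 2
            else:
                out.append(seq[i]); i += 1
        seq = out

def expand(seq, rules):
    out = []
    for s in seq:
        out.extend(expand(rules[s], rules) if s in rules else [s])
    return out

start, rules = repair("abababab")
assert "".join(expand(start, rules)) == "abababab"
```

The final start sequence plus rule set is the grammar; in a full coder it would then be entropy-coded (arithmetic coding in the Yang-Kieffer scheme).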

  5. Flexible high speed codec

    NASA Technical Reports Server (NTRS)

    Boyd, R. W.; Hartman, W. F.

    1992-01-01

    The project's objective is to develop an advanced high speed coding technology that provides substantial coding gains with limited bandwidth expansion for several common modulation types. The resulting technique is applicable to several continuous and burst communication environments. Decoding provides a significant gain with hard decisions alone and can utilize soft decision information when available from the demodulator to increase the coding gain. The hard decision codec will be implemented using a single application specific integrated circuit (ASIC) chip. It will be capable of coding and decoding as well as some formatting and synchronization functions at data rates up to 300 megabits per second (Mb/s). Code rate is a function of the block length and can vary from 7/8 to 15/16. Length of coded bursts can be any multiple of 32 that is greater than or equal to 256 bits. Coding may be switched in or out on a burst by burst basis with no change in the throughput delay. Reliability information in the form of 3-bit (8-level) soft decisions, can be exploited using applique circuitry around the hard decision codec. This applique circuitry will be discrete logic in the present contract. However, ease of transition to LSI is one of the design guidelines. Discussed here is the selected coding technique. Its application to some communication systems is described. Performance with 4, 8, and 16-ary Phase Shift Keying (PSK) modulation is also presented.

  6. Gait variability in community dwelling adults with Alzheimer disease.

    PubMed

    Webster, Kate E; Merory, John R; Wittwer, Joanne E

    2006-01-01

    Studies have shown that measures of gait variability are associated with falling in older adults. However, few studies have measured gait variability in people with Alzheimer disease, despite the high incidence of falls in Alzheimer disease. The purpose of this study was to compare gait variability of community-dwelling older adults with Alzheimer disease and control subjects at various walking speeds. Ten subjects with mild-moderate Alzheimer disease and ten matched control subjects underwent gait analysis using an electronic walkway. Participants were required to walk at self-selected slow, preferred, and fast speeds. Stride length and step width variability were determined using the coefficient of variation. Results showed that stride length variability was significantly greater in the Alzheimer disease group compared with the control group at all speeds. In both groups, increases in walking speed were significantly correlated with decreases in stride length variability. Step width variability was significantly reduced in the Alzheimer disease group compared with the control group at slow speed only. In conclusion, there is an increase in stride length variability in Alzheimer disease at all walking speeds that may contribute to the increased incidence of falls in Alzheimer disease.
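
The coefficient-of-variation measure used in the study is simply the standard deviation normalized by the mean. A sketch with made-up stride-length data (not the study's measurements):

```python
# Sketch of the coefficient of variation (CV) used to quantify gait
# variability: CV = (standard deviation / mean) * 100.
from statistics import mean, stdev

def cv_percent(values):
    return stdev(values) / mean(values) * 100

strides_control = [1.32, 1.30, 1.34, 1.31, 1.33]   # metres, illustrative data
strides_ad      = [1.25, 1.10, 1.35, 1.05, 1.30]   # more variable, illustrative

print(round(cv_percent(strides_control), 2), round(cv_percent(strides_ad), 2))
```

Normalizing by the mean lets variability be compared across walking speeds, where the mean stride length itself changes.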

  7. Low Density Parity Check Codes: Bandwidth Efficient Channel Coding

    NASA Technical Reports Server (NTRS)

    Fong, Wai; Lin, Shu; Maki, Gary; Yeh, Pen-Shu

    2003-01-01

    Low Density Parity Check (LDPC) Codes provide near-Shannon Capacity performance for NASA Missions. These codes have high coding rates R = 0.82 and 0.875 with moderate code lengths, n = 4096 and 8176. Their decoders have inherently parallel structures which allow for high-speed implementation. Two codes based on Euclidean Geometry (EG) were selected for flight ASIC implementation. These codes are cyclic and quasi-cyclic in nature and therefore have a simple encoder structure. This results in power and size benefits. These codes also have a large minimum distance, as large as dmin = 65, giving them powerful error-correcting capabilities and very low error floors. This paper will present development of the LDPC flight encoder and decoder, its applications and status.

  8. A Multiscale Progressive Failure Modeling Methodology for Composites that Includes Fiber Strength Stochastics

    NASA Technical Reports Server (NTRS)

    Ricks, Trenton M.; Lacy, Thomas E., Jr.; Bednarcyk, Brett A.; Arnold, Steven M.; Hutchins, John W.

    2014-01-01

    A multiscale modeling methodology was developed for continuous fiber composites that incorporates a statistical distribution of fiber strengths into coupled multiscale micromechanics/finite element (FE) analyses. A modified two-parameter Weibull cumulative distribution function, which accounts for the effect of fiber length on the probability of failure, was used to characterize the statistical distribution of fiber strengths. A parametric study using the NASA Micromechanics Analysis Code with the Generalized Method of Cells (MAC/GMC) was performed to assess the effect of variable fiber strengths on local composite failure within a repeating unit cell (RUC) and subsequent global failure. The NASA code FEAMAC and the ABAQUS finite element solver were used to analyze the progressive failure of a unidirectional SCS-6/TIMETAL 21S metal matrix composite tensile dogbone specimen at 650 °C. Multiscale progressive failure analyses were performed to quantify the effect of spatially varying fiber strengths on the RUC-averaged and global stress-strain responses and failure. The ultimate composite strengths and distribution of failure locations (predominately within the gage section) reasonably matched the experimentally observed failure behavior. The predicted composite failure behavior suggests that use of macroscale models that exploit global geometric symmetries is inappropriate for cases where the actual distribution of local fiber strengths displays no such symmetries. This issue has not received much attention in the literature. Moreover, the model discretization at a specific length scale can have a profound effect on the computational costs associated with multiscale simulations and on obtaining models that yield accurate yet tractable results.
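
The length-dependent two-parameter Weibull model can be sketched via inverse-CDF sampling. The scale, modulus, and reference length below are illustrative placeholders, not calibrated SCS-6 fiber parameters:

```python
# Sketch of a length-dependent two-parameter Weibull strength model:
# P_f(sigma) = 1 - exp(-(L/L0) * (sigma/sigma0)**m).
# All parameter values here are illustrative placeholders.
import math, random

def sample_strength(L, L0=25.4, sigma0=4000.0, m=10.0, rng=random):
    """Inverse-CDF sample of fiber strength (MPa) at gauge length L (mm)."""
    u = rng.random()
    return sigma0 * (-math.log(1.0 - u) * L0 / L) ** (1.0 / m)

random.seed(0)
short = [sample_strength(L=5.0) for _ in range(2000)]
long_ = [sample_strength(L=50.0) for _ in range(2000)]
# Longer fibers sample more flaws, so their mean strength is lower.
print(sum(short) / len(short) > sum(long_) / len(long_))  # -> True
```

Assigning an independent sample like this to each fiber in the RUC is what produces the spatially varying local strengths the parametric study examines.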

  9. Coherent Somatic Mutation in Autoimmune Disease

    PubMed Central

    Ross, Kenneth Andrew

    2014-01-01

    Background Many aspects of autoimmune disease are not well understood, including the specificities of autoimmune targets, and patterns of co-morbidity and cross-heritability across diseases. Prior work has provided evidence that somatic mutation caused by gene conversion and deletion at segmentally duplicated loci is relevant to several diseases. Simple tandem repeat (STR) sequence is highly mutable, both somatically and in the germ-line, and somatic STR mutations are observed under inflammation. Results Protein-coding genes spanning STRs having markers of mutability, including germ-line variability, high total length, repeat count and/or repeat similarity, are evaluated in the context of autoimmunity. For the initiation of autoimmune disease, antigens whose autoantibodies are the first observed in a disease, termed primary autoantigens, are informative. Three primary autoantigens, thyroid peroxidase (TPO), phogrin (PTPRN2) and filaggrin (FLG), include STRs that are among the eleven longest STRs spanned by protein-coding genes. This association of primary autoantigens with long STR sequence is highly significant. Long STRs occur within twenty genes that are associated with sixteen common autoimmune diseases and atherosclerosis. The repeat within the TTC34 gene is an outlier in terms of length and a link with systemic lupus erythematosus is proposed. Conclusions The results support the hypothesis that many autoimmune diseases are triggered by immune responses to proteins whose DNA sequence mutates somatically in a coherent, consistent fashion. Other autoimmune diseases may be caused by coherent somatic mutations in immune cells. The coherent somatic mutation hypothesis has the potential to be a comprehensive explanation for the initiation of many autoimmune diseases. PMID:24988487

  10. Industry and Occupation in the Electronic Health Record: An Investigation of the National Institute for Occupational Safety and Health Industry and Occupation Computerized Coding System

    PubMed Central

    2016-01-01

    Background Inclusion of information about a patient’s work, industry, and occupation, in the electronic health record (EHR) could facilitate occupational health surveillance, better health outcomes, prevention activities, and identification of workers’ compensation cases. The US National Institute for Occupational Safety and Health (NIOSH) has developed an autocoding system for “industry” and “occupation” based on 1990 Bureau of Census codes; its effectiveness requires evaluation in conjunction with promoting the mandatory addition of these variables to the EHR. Objective The objective of the study was to evaluate the intercoder reliability of NIOSH’s Industry and Occupation Computerized Coding System (NIOCCS) when applied to data collected in a community survey conducted under the Affordable Care Act; to determine the proportion of records that are autocoded using NIOCCS. Methods Standard Occupational Classification (SOC) codes are used by several federal agencies in databases that capture demographic, employment, and health information to harmonize variables related to work activities among these data sources. There are 359 industry and occupation responses that were hand coded by 2 investigators, who came to a consensus on every code. The same variables were autocoded using NIOCCS at the high and moderate criteria level. Results Kappa was .84 for agreement between hand coders and between the hand coder consensus code versus NIOCCS high confidence level codes for the first 2 digits of the SOC code. For 4 digits, NIOCCS coding versus investigator coding ranged from kappa=.56 to .70. In this study, NIOCCS was able to achieve production rates (ie, to autocode) 31%-36% of entered variables at the “high confidence” level and 49%-58% at the “medium confidence” level. Autocoding (production) rates are somewhat lower than those reported by NIOSH. 
Agreement between manually coded and autocoded data is “substantial” at the 2-digit level, but only “fair” to “good” at the 4-digit level. Conclusions This work serves as a baseline for performance of NIOCCS by investigators in the field. Further field testing will clarify NIOCCS effectiveness in terms of ability to assign codes and coding accuracy and will clarify its value as inclusion of these occupational variables in the EHR is promoted. PMID:26878932
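
The kappa statistic reported above measures chance-corrected intercoder agreement. A sketch with hypothetical two-digit codes (not the study's data):

```python
# Sketch of Cohen's kappa, the intercoder agreement statistic used in the
# study. The two coders' label lists below are illustrative, not real SOC codes.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    n = len(coder_a)
    p_obs = sum(a == b for a, b in zip(coder_a, coder_b)) / n   # observed agreement
    ca, cb = Counter(coder_a), Counter(coder_b)
    p_exp = sum(ca[k] * cb[k] for k in ca) / n ** 2             # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

a = ["11", "11", "29", "47", "29", "11", "47", "29"]
b = ["11", "29", "29", "47", "29", "11", "47", "11"]
print(round(cohens_kappa(a, b), 2))  # -> 0.62
```

Because kappa subtracts the agreement expected by chance, it is a stricter measure than raw percent agreement, which is why the 4-digit results look weaker than the 2-digit ones.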

  11. Context dependent prediction and category encoding for DPCM image compression

    NASA Technical Reports Server (NTRS)

    Beaudet, Paul R.

    1989-01-01

    Efficient compression of image data requires the understanding of the noise characteristics of sensors as well as the redundancy expected in imagery. Herein, the techniques of Differential Pulse Code Modulation (DPCM) are reviewed and modified for information-preserving data compression. The modifications include: mapping from intensity to an equal variance space; context dependent one and two dimensional predictors; rationale for nonlinear DPCM encoding based upon an image quality model; context dependent variable length encoding of 2x2 data blocks; and feedback control for constant output rate systems. Examples are presented at compression rates between 1.3 and 2.8 bits per pixel. The need for larger block sizes, 2D context dependent predictors, and the hope for sub-bits-per-pixel compression which maintains spatial resolution (information preserving) are discussed.
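
The DPCM principle, predicting each sample and then variable-length-coding the small residuals, can be sketched as follows. The previous-sample predictor and unary residual code here are deliberate simplifications of the paper's context-dependent predictors and 2x2-block codes:

```python
# Sketch of lossless DPCM: predict each sample from the previous one and
# encode the (usually small) residual with a simple variable-length code.
def zigzag(v):                       # map signed residual to non-negative int
    return 2 * v if v >= 0 else -2 * v - 1

def unzigzag(u):
    return u // 2 if u % 2 == 0 else -(u + 1) // 2

def dpcm_encode(samples):
    prev, bits = 0, []
    for s in samples:
        r = s - prev                 # residual against the prediction
        bits.append("1" * zigzag(r) + "0")   # unary: small residual, short word
        prev = s
    return "".join(bits)

def dpcm_decode(bitstream):
    out, prev, run = [], 0, 0
    for b in bitstream:
        if b == "1":
            run += 1
        else:
            prev += unzigzag(run)    # undo the prediction
            out.append(prev)
            run = 0
    return out

line = [100, 101, 101, 103, 102, 102, 104]   # smooth scan line: small residuals
assert dpcm_decode(dpcm_encode(line)) == line
```

On smooth image data the residuals cluster near zero, so the variable-length code spends few bits per pixel, which is exactly the redundancy DPCM exploits.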

  12. Numerical model for learning concepts of streamflow simulation

    USGS Publications Warehouse

    DeLong, L.L.; ,

    1993-01-01

    Numerical models are useful for demonstrating principles of open-channel flow. Such models can allow experimentation with cause-and-effect relations, testing concepts of physics and numerical techniques. Four PT is a numerical model written primarily as a teaching supplement for a course in one-dimensional stream-flow modeling. Four PT options particularly useful in training include selection of governing equations, boundary-value perturbation, and user-programmable constraint equations. The model can simulate non-trivial concepts such as flow in complex interconnected channel networks, meandering channels with variable effective flow lengths, hydraulic structures defined by unique three-parameter relations, and density-driven flow. The model is coded in FORTRAN 77, and data encapsulation is used extensively to simplify maintenance and modification and to enhance the use of Four PT modules by other programs and programmers.

  13. Novel methodologies for spectral classification of exon and intron sequences

    NASA Astrophysics Data System (ADS)

    Kwan, Hon Keung; Kwan, Benjamin Y. M.; Kwan, Jennifer Y. Y.

    2012-12-01

    Digital processing of a nucleotide sequence requires it to be mapped to a numerical sequence in which the choice of nucleotide to numeric mapping affects how well its biological properties can be preserved and reflected from nucleotide domain to numerical domain. Digital spectral analysis of nucleotide sequences unfolds a period-3 power spectral value which is more prominent in an exon sequence as compared to that of an intron sequence. The success of a period-3 based exon and intron classification depends on the choice of a threshold value. The main purposes of this article are to introduce novel codes for 1-sequence numerical representations for spectral analysis and compare them to existing codes to determine appropriate representation, and to introduce novel thresholding methods for more accurate period-3 based exon and intron classification of an unknown sequence. The main findings of this study are summarized as follows: Among sixteen 1-sequence numerical representations, the K-Quaternary Code I offers an attractive performance. A windowed 1-sequence numerical representation (with window length of 9, 15, and 24 bases) offers a possible speed gain over non-windowed 4-sequence Voss representation which increases as sequence length increases. A winner threshold value (chosen from the best among two defined threshold values and one other threshold value) offers a top precision for classifying an unknown sequence of specified fixed lengths. An interpolated winner threshold value applicable to an unknown and arbitrary length sequence can be estimated from the winner threshold values of fixed length sequences with a comparable performance. In general, precision increases as sequence length increases. The study contributes an effective spectral analysis of nucleotide sequences to better reveal embedded properties, and has potential applications in improved genome annotation.
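
The period-3 measure can be sketched by mapping a sequence to four binary indicator (Voss) sequences and summing DFT power at the N/3 bin. The sequences below are synthetic illustrations, not data from the article:

```python
# Sketch of period-3 spectral analysis using the 4-sequence Voss (binary
# indicator) representation and DFT power at the k = N/3 frequency bin.
import cmath

def period3_power(seq):
    N = len(seq)
    k = N / 3.0                      # period-3 frequency bin
    total = 0.0
    for base in "ACGT":
        x = [1.0 if c == base else 0.0 for c in seq]
        X = sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
        total += abs(X) ** 2
    return total / N

codon_like = "ATGGCCATGGCCATGGCCATGGCC"   # 6-base repeat: strong period-3 harmonic
period2    = "ATATATATATATATATATATATAT"   # period-2 control: no period-3 energy
print(period3_power(codon_like) > period3_power(period2))  # -> True
```

Classifying an unknown sequence then reduces to comparing this power value to a threshold, which is where the article's winner-threshold selection comes in.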

  14. Coded spread spectrum digital transmission system design study

    NASA Technical Reports Server (NTRS)

    Heller, J. A.; Odenwalder, J. P.; Viterbi, A. J.

    1974-01-01

    Results are presented of a comprehensive study of the performance of Viterbi-decoded convolutional codes in the presence of nonideal carrier tracking and bit synchronization. A constraint length 7, rate 1/3 convolutional code and parameters suitable for the space shuttle coded communications links are used. Mathematical models are developed and theoretical and simulation results are obtained to determine the tracking and acquisition performance of the system. Pseudorandom sequence spread spectrum techniques are also considered to minimize potential degradation caused by multipath.
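
A rate-1/3, constraint-length-7 convolutional encoder can be sketched as a shift register with three parity taps. The generator polynomials below are illustrative; the abstract does not specify the shuttle code's generators:

```python
# Sketch of a constraint-length-7, rate-1/3 convolutional encoder. The
# octal generator polynomials are illustrative placeholders, not the
# actual space shuttle code's generators.
GENERATORS = [0o133, 0o145, 0o175]   # one 7-tap polynomial per output stream

def conv_encode(bits, gens=GENERATORS, K=7):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)    # shift in the new bit
        for g in gens:
            out.append(bin(state & g).count("1") % 2)  # parity of the tapped bits
    return out

msg = [1, 0, 1, 1, 0, 0, 1]
print(len(conv_encode(msg)))  # rate 1/3: three coded bits per input bit -> 21
```

A Viterbi decoder for this code searches the 64-state trellis implied by the 6 memory bits, which is the decoding setting the study's tracking and acquisition analysis assumes.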

  15. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1992-01-01

    Work performed during the reporting period is summarized. A method for constructing robustly good trellis codes for use with sequential decoding was developed. The robustly good trellis codes provide a much better trade-off between free distance and distance profile. The unequal error protection capabilities of convolutional codes were studied. The problem of finding good large constraint length, low rate convolutional codes for deep space applications was investigated. A formula for computing the free distance of rate 1/n convolutional codes was discovered. Double memory (DM) codes, codes with two memory units per unit bit position, were studied; a search for optimal DM codes is being conducted. An algorithm for constructing convolutional codes from a given quasi-cyclic code was developed. Papers based on the above work are included in the appendix.

  16. Neural Network and Regression Methods Demonstrated in the Design Optimization of a Subsonic Aircraft

    NASA Technical Reports Server (NTRS)

    Hopkins, Dale A.; Lavelle, Thomas M.; Patnaik, Surya

    2003-01-01

    The neural network and regression methods of NASA Glenn Research Center's COMETBOARDS design optimization testbed were used to generate approximate analysis and design models for a subsonic aircraft operating at Mach 0.85 cruise speed. The analytical model is defined by nine design variables: wing aspect ratio, engine thrust, wing area, sweep angle, chord-thickness ratio, turbine temperature, pressure ratio, bypass ratio, and fan pressure; and eight response parameters: weight, landing velocity, takeoff and landing field lengths, approach thrust, overall efficiency, and compressor pressure and temperature. The variables were adjusted to optimally balance the engines to the airframe. The solution strategy included a sensitivity model and the soft analysis model. Researchers generated the sensitivity model by training the approximators to predict an optimum design. The trained neural network predicted all response variables within 5-percent error. This was reduced to 1 percent by the regression method. The soft analysis model was developed to replace aircraft analysis as the reanalyzer in design optimization. Soft models have been generated for a neural network method, a regression method, and a hybrid method obtained by combining the approximators. The performance of the models is graphed for aircraft weight versus thrust as well as for wing area and turbine temperature. The regression method followed the analytical solution with little error. The neural network exhibited 5-percent maximum error over all parameters. Performance of the hybrid method was intermediate in comparison to the individual approximators. Error in the response variable is smaller than that shown in the figure because of a distortion scale factor. The overall performance of the approximators was considered to be satisfactory because aircraft analysis with NASA Langley Research Center's FLOPS (Flight Optimization System) code is a synthesis of diverse disciplines: weight estimation, aerodynamic analysis, engine cycle analysis, propulsion data interpolation, mission performance, airfield length for landing and takeoff, noise footprint, and others.

  17. The complete mitochondrial genome of Chrysopa pallens (Insecta, Neuroptera, Chrysopidae).

    PubMed

    He, Kun; Chen, Zhe; Yu, Dan-Na; Zhang, Jia-Yong

    2012-10-01

    The complete mitochondrial genome of Chrysopa pallens (Neuroptera, Chrysopidae) was sequenced. It consists of 13 protein-coding genes, 22 transfer RNA genes, 2 ribosomal RNA (rRNA) genes, and a control region (AT-rich region). The total length of the C. pallens mitogenome is 16,723 bp with 79.5% AT content, and the length of the control region is 1905 bp with 89.1% AT content. The non-coding regions of C. pallens include the control region, located between the 12S rRNA and trnI genes, and a 75-bp space region between the trnI and trnQ genes.

  18. Color visualization for fluid flow prediction

    NASA Technical Reports Server (NTRS)

    Smith, R. E.; Speray, D. E.

    1982-01-01

    High-resolution raster scan color graphics allow variables to be presented as a continuum, in a color-coded picture that is referenced to a geometry such as a flow field grid or a boundary surface. Software is used to map a scalar variable such as pressure or temperature, defined on a two-dimensional slice of a flow field. The geometric shape is preserved in the resulting picture, and the relative magnitude of the variable is color-coded onto the geometric shape. The primary numerical process for color coding is an efficient search along a raster scan line to locate the quadrilateral block in the grid that bounds each pixel on the line. Tension spline interpolation is performed relative to the grid for specific values of the scalar variable, which is then color coded. When all pixels for the field of view are color-defined, a picture is played back from a memory device onto a television screen.
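
The core color-coding step, mapping a scalar such as pressure to a color, can be sketched with a simple linear two-color ramp; the paper's pipeline adds a grid search and tension-spline interpolation, which this sketch omits:

```python
# Sketch of mapping a scalar field value to an RGB color over a fixed range,
# the core operation of color-coded flow visualization. The blue-to-red ramp
# and the pressure values below are illustrative choices.
def scalar_to_rgb(value, vmin, vmax):
    t = (value - vmin) / (vmax - vmin)
    t = min(1.0, max(0.0, t))                      # clamp out-of-range values
    return (int(255 * t), 0, int(255 * (1 - t)))   # blue (low) -> red (high)

pressures = [101.3, 98.0, 104.5, 110.2]
colors = [scalar_to_rgb(p, 95.0, 112.0) for p in pressures]
print(colors[0])  # -> (94, 0, 160)
```

Applying this per pixel, after interpolating the scalar at the pixel's grid location, yields the continuous color-coded picture the abstract describes.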

  19. A homozygous mutation in the endothelin-3 gene associated with a combined Waardenburg type 2 and Hirschsprung phenotype (Shah-Waardenburg syndrome).

    PubMed

    Hofstra, R M; Osinga, J; Tan-Sindhunata, G; Wu, Y; Kamsteeg, E J; Stulp, R P; van Ravenswaaij-Arts, C; Majoor-Krakauer, D; Angrist, M; Chakravarti, A; Meijers, C; Buys, C H

    1996-04-01

    Hirschsprung disease (HSCR) or colonic aganglionosis is a congenital disorder characterized by an absence of intramural ganglia along variable lengths of the colon resulting in intestinal obstruction. The incidence of HSCR is 1 in 5,000 live births. Mutations in the RET gene, which codes for a receptor tyrosine kinase, and in EDNRB which codes for the endothelin-B receptor, have been shown to be associated with HSCR in humans. The lethal-spotted mouse which has pigment abnormalities, but also colonic aganglionosis, carries a mutation in the gene coding for endothelin 3 (Edn3), the ligand for the receptor protein encoded by EDNRB. Here, we describe a mutation of the human gene for endothelin 3 (EDN3), homozygously present in a patient with a combined Waardenburg syndrome type 2 (WS2) and HSCR phenotype (Shah-Waardenburg syndrome). The mutation, Cys159Phe, in exon 3 in the ET-3 like domain of EDN3, presumably affects the proteolytic processing of the preproendothelin to the mature peptide EDN3. The patient's parents were first cousins. A previous child in this family had been diagnosed with a similar combination of HSCR, depigmentation and deafness. Depigmentation and deafness were present in other relatives. Moreover, we present a further indication for the involvement of EDNRB in HSCR by reporting a novel mutation detected in one of 40 unselected HSCR patients.

  20. The complete chloroplast genome of Cinnamomum camphora and its comparison with related Lauraceae species.

    PubMed

    Chen, Caihui; Zheng, Yongjie; Liu, Sian; Zhong, Yongda; Wu, Yanfang; Li, Jiang; Xu, Li-An; Xu, Meng

    2017-01-01

Cinnamomum camphora, a member of the Lauraceae family, is a valuable aromatic and timber tree that is indigenous to the south of China and Japan. All parts of Cinnamomum camphora have secretory cells containing different volatile chemical compounds that are utilized as herbal medicines and essential oils. Here, we reported the complete sequencing of the chloroplast genome of Cinnamomum camphora using Illumina technology. The chloroplast genome of Cinnamomum camphora is 152,570 bp in length and characterized by a relatively conserved quadripartite structure containing a large single copy region of 93,705 bp, a small single copy region of 19,093 bp and two inverted repeat (IR) regions of 19,886 bp. Overall, the genome contained 123 coding regions, of which 15 were repeated in the IR regions. An analysis of chloroplast sequence divergence revealed that the small single copy region was highly variable among the different genera in the Lauraceae family. A total of 40 repeat structures and 83 simple sequence repeats were detected in both the coding and non-coding regions. A phylogenetic analysis indicated that Calycanthus is most closely related to Lauraceae, both being members of Laurales, which forms a sister group to Magnoliids. The complete sequence of the chloroplast of Cinnamomum camphora will aid in in-depth taxonomical studies of the Lauraceae family in the future. The genetic sequence information will also have valuable applications for chloroplast genetic engineering.

  1. Wild-Type Measles Viruses with Non-Standard Genome Lengths

    PubMed Central

    Bankamp, Bettina; Liu, Chunyu; Rivailler, Pierre; Bera, Jayati; Shrivastava, Susmita; Kirkness, Ewen F.; Bellini, William J.; Rota, Paul A.

    2014-01-01

    The length of the single stranded, negative sense RNA genome of measles virus (MeV) is highly conserved at 15,894 nucleotides (nt). MeVs can be grouped into 24 genotypes based on the highly variable 450 nucleotides coding for the carboxyl-terminus of the nucleocapsid protein (N-450). Here, we report the genomic sequences of 2 wild-type viral isolates of genotype D4 with genome lengths of 15,900 nt. Both genomes had a 7 nt insertion in the 3′ untranslated region (UTR) of the matrix (M) gene and a 1 nt deletion in the 5′ UTR of the fusion (F) gene. The net gain of 6 nt complies with the rule-of-six required for replication competency of the genomes of morbilliviruses. The insertions and deletion (indels) were confirmed in a patient sample that was the source of one of the viral isolates. The positions of the indels were identical in both viral isolates, even though epidemiological data and the 3 nt differences in N-450 between the two genomes suggested that the viruses represented separate chains of transmission. Identical indels were found in the M-F intergenic regions of 14 additional genotype D4 viral isolates that were imported into the US during 2007–2010. Viral isolates with and without indels produced plaques of similar size and replicated efficiently in A549/hSLAM and Vero/hSLAM cells. This is the first report of wild-type MeVs with genome lengths other than 15,894 nt and demonstrates that the length of the M-F UTR of wild-type MeVs is flexible. PMID:24748123

  2. Investigation of Navier-Stokes Code Verification and Design Optimization

    NASA Technical Reports Server (NTRS)

    Vaidyanathan, Rajkumar

    2004-01-01

With rapid progress made in employing computational techniques for various complex Navier-Stokes fluid flow problems, design optimization problems traditionally based on empirical formulations and experiments are now being addressed with the aid of computational fluid dynamics (CFD). To be able to carry out an effective CFD-based optimization study, it is essential that the uncertainty and appropriate confidence limits of the CFD solutions be quantified over the chosen design space. The present dissertation investigates the issues related to code verification, surrogate model-based optimization and sensitivity evaluation. For Navier-Stokes (NS) CFD code verification a least square extrapolation (LSE) method is assessed. This method projects numerically computed NS solutions from multiple, coarser base grids onto a finer grid and improves solution accuracy by minimizing the residual of the discretized NS equations over the projected grid. This dissertation focuses on the finite volume (FV) formulation. The interplay between these concepts and the outcome of LSE, and the effects of solution gradients and singularities, nonlinear physics, and coupling of flow variables on the effectiveness of LSE are investigated. A CFD-based design optimization of a single element liquid rocket injector is conducted with surrogate models developed using response surface methodology (RSM) based on CFD solutions. The computational model consists of the NS equations, finite rate chemistry, and the k-ε turbulence closure. With the aid of these surrogate models, sensitivity and trade-off analyses are carried out for the injector design whose geometry (hydrogen flow angle, hydrogen and oxygen flow areas and oxygen post tip thickness) is optimized to attain desirable goals in performance (combustion length) and life/survivability (the maximum temperatures on the oxidizer post tip and injector face and a combustion chamber wall temperature).
A preliminary multi-objective optimization study is carried out using a geometric mean approach. Following this, sensitivity analyses with the aid of variance-based non-parametric approach and partial correlation coefficients are conducted using data available from surrogate models of the objectives and the multi-objective optima to identify the contribution of the design variables to the objective variability and to analyze the variability of the design variables and the objectives. In summary the present dissertation offers insight into an improved coarse to fine grid extrapolation technique for Navier-Stokes computations and also suggests tools for a designer to conduct design optimization study and related sensitivity analyses for a given design problem.

  3. Methodology for fast detection of false sharing in threaded scientific codes

    DOEpatents

    Chung, I-Hsin; Cong, Guojing; Murata, Hiroki; Negishi, Yasushi; Wen, Hui-Fang

    2014-11-25

    A profiling tool identifies a code region with a false sharing potential. A static analysis tool classifies variables and arrays in the identified code region. A mapping detection library correlates memory access instructions in the identified code region with variables and arrays in the identified code region while a processor is running the identified code region. The mapping detection library identifies one or more instructions at risk, in the identified code region, which are subject to an analysis by a false sharing detection library. A false sharing detection library performs a run-time analysis of the one or more instructions at risk while the processor is re-running the identified code region. The false sharing detection library determines, based on the performed run-time analysis, whether two different portions of the cache memory line are accessed by the generated binary code.
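
The cache-line reasoning behind false-sharing detection can be illustrated with a toy classifier (a sketch only, not the patented tool; the 64-byte line size and the access-trace format are assumptions):

```python
CACHE_LINE = 64  # bytes; a typical x86 line size (assumption)

def same_cache_line(a, b, line=CACHE_LINE):
    """True if two byte addresses fall on the same cache line."""
    return a // line == b // line

def flag_false_sharing(accesses, line=CACHE_LINE):
    """accesses: iterable of (thread_id, byte_address, is_write).
    Flag pairs of distinct addresses on one cache line touched by
    different threads with at least one write -- the pattern a
    run-time false-sharing analysis would examine further."""
    flagged = set()
    for t1, a1, w1 in accesses:
        for t2, a2, w2 in accesses:
            if t1 != t2 and a1 != a2 and (w1 or w2) \
                    and same_cache_line(a1, a2, line):
                flagged.add(tuple(sorted((a1, a2))))
    return flagged
```

Two threads writing adjacent variables at byte offsets 0 and 8 would be flagged; a variable at offset 128 sits on a different line and would not.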

  4. New optimal asymmetric quantum codes constructed from constacyclic codes

    NASA Astrophysics Data System (ADS)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Lü, Liangdong

    2017-02-01

In this paper, we propose the construction of asymmetric quantum codes from two families of constacyclic codes over the finite field 𝔽_q² of code length n, where for the first family, q is an odd prime power of the form 4t + 1 (t ≥ 1 an integer) or 4t - 1 (t ≥ 2 an integer) and n1 = (q² + 1)/2; for the second family, q is an odd prime power of the form 10t + 3 or 10t + 7 (t ≥ 0 an integer) and n2 = (q² + 1)/5. As a result, families of new asymmetric quantum codes [[n,k,dz/dx
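
The two code lengths follow directly from the formulas above; a small sketch (function names are illustrative):

```python
def n1(q):
    """First family: q an odd prime power of the form 4t+1 (t >= 1)
    or 4t-1 (t >= 2); length n1 = (q^2 + 1) / 2."""
    assert q % 2 == 1, "q must be odd"
    return (q * q + 1) // 2

def n2(q):
    """Second family: q an odd prime power of the form 10t+3 or
    10t+7; length n2 = (q^2 + 1) / 5."""
    assert q % 10 in (3, 7), "q must be 3 or 7 mod 10"
    return (q * q + 1) // 5
```

For example, q = 5 gives n1 = 13, and q = 13 gives n2 = 34.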

  5. Bilingual Dual Coding in Japanese Returnee Students.

    ERIC Educational Resources Information Center

    Taura, Hideyuki

    1998-01-01

    Investigates effects of second-language acquisition age, length of exposure to the second language, and Japanese language specificity on the bilingual dual coding hypothesis proposed by Paivio and Desrochers (1980). Balanced Japanese-English bilingual returnee (having resided in an English-speaking country) subjects were presented with pictures to…

  6. Evaluation of Spanwise Variable Impedance Liners with Three-Dimensional Aeroacoustics Propagation Codes

    NASA Technical Reports Server (NTRS)

    Jones, M. G.; Watson, W. R.; Nark, D. M.; Schiller, N. H.

    2017-01-01

    Three perforate-over-honeycomb liner configurations, one uniform and two with spanwise variable impedance, are evaluated based on tests conducted in the NASA Grazing Flow Impedance Tube (GFIT) with a plane-wave source. Although the GFIT is only 2" wide, spanwise impedance variability clearly affects the measured acoustic pressure field, such that three-dimensional (3D) propagation codes are required to properly predict this acoustic pressure field. Three 3D propagation codes (CHE3D, COMSOL, and CDL) are used to predict the sound pressure level and phase at eighty-seven microphones flush-mounted in the GFIT (distributed along all four walls). The CHE3D and COMSOL codes compare favorably with the measured data, regardless of whether an exit acoustic pressure or anechoic boundary condition is employed. Except for those frequencies where the attenuation is large, the CDL code also provides acceptable estimates of the measured acoustic pressure profile. The CHE3D and COMSOL predictions diverge slightly from the measured data for frequencies away from resonance, where the attenuation is noticeably reduced, particularly when an exit acoustic pressure boundary condition is used. For these conditions, the CDL code actually provides slightly more favorable comparison with the measured data. Overall, the comparisons of predicted and measured data suggest that any of these codes can be used to understand data trends associated with spanwise variable-impedance liners.

  7. Computation of Sound Generated by Flow Over a Circular Cylinder: An Acoustic Analogy Approach

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.; Cox, Jared S.; Rumsey, Christopher L.; Younis, Bassam A.

    1997-01-01

    The sound generated by viscous flow past a circular cylinder is predicted via the Lighthill acoustic analogy approach. The two dimensional flow field is predicted using two unsteady Reynolds-averaged Navier-Stokes solvers. Flow field computations are made for laminar flow at three Reynolds numbers (Re = 1000, Re = 10,000, and Re = 90,000) and two different turbulent models at Re = 90,000. The unsteady surface pressures are utilized by an acoustics code that implements Farassat's formulation 1A to predict the acoustic field. The acoustic code is a 3-D code - 2-D results are found by using a long cylinder length. The 2-D predictions overpredict the acoustic amplitude; however, if correlation lengths in the range of 3 to 10 cylinder diameters are used, the predicted acoustic amplitude agrees well with experiment.

  8. SEISRISK II; a computer program for seismic hazard estimation

    USGS Publications Warehouse

    Bender, Bernice; Perkins, D.M.

    1982-01-01

The computer program SEISRISK II calculates probabilistic ground motion values for use in seismic hazard mapping. SEISRISK II employs a model that allows earthquakes to occur as points within source zones and as finite-length ruptures along faults. It assumes that earthquake occurrences have a Poisson distribution, that occurrence rates remain constant during the time period considered, that ground motion resulting from an earthquake is a known function of magnitude and distance, that seismically homogeneous source zones are defined, that fault locations are known, that fault rupture lengths depend on magnitude, and that earthquake rates as a function of magnitude are specified for each source. SEISRISK II calculates for each site on a grid of sites the level of ground motion that has a specified probability of being exceeded during a given time period. The program was designed to process a large (essentially unlimited) number of sites and sources efficiently and has been used to produce regional and national maps of seismic hazard. It is a substantial revision of an earlier program, SEISRISK I, which has never been documented. SEISRISK II runs considerably faster and gives more accurate results than the earlier program and in addition includes rupture length and acceleration variability, which were not contained in the original version. We describe the model and how it is implemented in the computer program and provide a flowchart and listing of the code.
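
Under the Poisson occurrence assumption, the probability that a ground-motion level with mean annual exceedance rate λ is exceeded at least once in T years is 1 - e^(-λT). A sketch of this standard relationship (illustrative Python, not SEISRISK II itself):

```python
import math

def prob_exceedance(annual_rate, years):
    """Poisson model: P(at least one exceedance in `years` years)."""
    return 1.0 - math.exp(-annual_rate * years)

def rate_for_probability(p, years):
    """Invert the relation: the annual rate whose exceedance
    probability over `years` years equals p."""
    return -math.log(1.0 - p) / years
```

For example, a 10% exceedance probability in 50 years corresponds to a return period of about 475 years.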

  9. Determinants of Major League Baseball Pitchers' Career Length.

    PubMed

    Hardy, Rich; Ajibewa, Tiwaloluwa; Bowman, Ray; Brand, Jefferson C

    2017-02-01

To investigate variables (injury, position, performance, and pitching volume) that affect the career longevity of Major League Baseball pitchers. To be eligible, pitchers must have entered Major League Baseball between 1989 and 1992 without missing information for the variables on the website http://www.baseball-reference.com. The variables assessed were average innings pitched per year before and after age 25 years, earned run average, walks and hits divided by innings pitched, strikeout to walk ratio, pitching position, time on the disabled list, length of career, and starting and retirement age. We used analysis of variance to compare the differences between groups and a regression model to assess the relationship between variables before age 25 years and career length. Mean retirement age for the group was 31.74 (95% confidence interval 30.83-32.65) and mean career length was 10.97 (95% confidence interval, 10.02-11.92) years. Innings pitched after age 25 years increased slightly, but not significantly, from the number of innings pitched before age 25 years, 85.35 versus 74.25, P = .5063. Career earned run average was not significantly different after age 25 years compared with before age 25 years, 4.83 versus 5.58, respectively, P = .8834. Both strikeout to walk ratio, 1.55 to 1.77, P = .0022, and walks and hits divided by innings pitched, 1.63 to 1.50, P = .0339, improved significantly after age 25 years compared with before age 25 years. The position the player started and ended his career (starter or reliever) did not influence career length. Multiple regression analysis comparing the variables from before age 25 revealed that only the number of innings pitched before age 25 was positively related to career length, R² = 0.1408, P < .0001. All other variables analyzed before age 25 years were not significantly related to career length. 
The only studied variable with a significant, albeit weak, relationship to career length was innings pitched per year before age 25 years. Level IV, case series. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
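
The reported R² = 0.1408 means that innings pitched before age 25 explains roughly 14% of the variance in career length. As a sketch of how R² for a simple linear fit is computed (synthetic data, not the study's):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of a simple linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals ** 2)          # unexplained variation
    ss_tot = np.sum((y - np.mean(y)) ** 2)   # total variation
    return 1.0 - ss_res / ss_tot
```

A perfectly linear relationship yields R² = 1; a curved one fit by a line yields something less.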

  10. New quantum codes constructed from quaternary BCH codes

    NASA Astrophysics Data System (ADS)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena

    2016-10-01

In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes is determined to be much larger than the result given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each different code length. Thus, families of new QECCs are obtained, and the constructed QECCs have larger distance than those in the previous literature. Second, we apply a combinatorial construction to the imprimitive BCH codes and their corresponding primitive counterparts and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.

  11. The Influence of Item Calibration Error on Variable-Length Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2013-01-01

    Variable-length computerized adaptive testing (VL-CAT) allows both items and test length to be "tailored" to examinees, thereby achieving the measurement goal (e.g., scoring precision or classification) with as few items as possible. Several popular test termination rules depend on the standard error of the ability estimate, which in turn depends…
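
A standard-error termination rule of the kind referred to above can be sketched with a toy Rasch-model CAT (the item model, thresholds, and item-count limits are illustrative assumptions, not taken from the article):

```python
import math

def rasch_info(theta, b):
    """Fisher information of a Rasch item with difficulty b
    at ability level theta: I = p * (1 - p)."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def should_stop(theta, administered_bs, se_target=0.3,
                min_items=5, max_items=30):
    """Variable-length termination: stop once the standard error
    of the ability estimate falls below se_target (or limits hit)."""
    n = len(administered_bs)
    if n < min_items:
        return False
    if n >= max_items:
        return True
    info = sum(rasch_info(theta, b) for b in administered_bs)
    se = 1.0 / math.sqrt(info)  # SE of the ML ability estimate
    return se < se_target
```

Calibration error in the item difficulties `b` would bias `rasch_info`, and hence the stopping decision, which is the effect the article investigates.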

  12. Heuristic rules embedded genetic algorithm for in-core fuel management optimization

    NASA Astrophysics Data System (ADS)

    Alim, Fatih

The objective of this study was to develop a unique methodology and a practical tool for designing the loading pattern (LP) and burnable poison (BP) pattern for a given Pressurized Water Reactor (PWR) core. Because of the large number of possible combinations for the fuel assembly (FA) loading in the core, the design of the core configuration is a complex optimization problem. It requires finding an optimal FA arrangement and BP placement in order to achieve maximum cycle length while satisfying the safety constraints. Genetic Algorithms (GA) have already been used to solve this problem for LP optimization for both PWR and Boiling Water Reactor (BWR) cores. The GA, a stochastic method, works with a group of solutions and uses random variables to make decisions. Based on the theory of evolution, the GA involves natural selection and reproduction of the individuals in the population for the next generation. The GA works by creating an initial population, evaluating it, and then improving the population by using the evolutionary operators. To solve this optimization problem, a LP optimization package, the GARCO (Genetic Algorithm Reactor Code Optimization) code, is developed in the framework of this thesis. This code is applicable to all types of PWR cores having different geometries and structures with an unlimited number of FA types in the inventory. To reach this goal, an innovative GA is developed by modifying the classical representation of the genotype. To obtain the best result in a shorter time, not only the representation but also the algorithm is changed, to use in-core fuel management heuristic rules. The improved GA code was tested to demonstrate and verify the advantages of the new enhancements. The developed methodology is explained in this thesis and preliminary results are shown for the VVER-1000 reactor hexagonal geometry core and the TMI-1 PWR.
The core physics code used for the VVER in this research is Moby-Dick, which was developed by SKODA Inc. to analyze VVER cores. The SIMULATE-3 code, an advanced two-group nodal code, is used to analyze the TMI-1.
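
The GA loop described in the abstract — create an initial population, evaluate it, then improve it through selection, crossover, and mutation — can be sketched for a permutation-encoded loading pattern (a toy objective stands in for the core-physics evaluation; GARCO's genotype and heuristic rules are not reproduced):

```python
import random

def evaluate(pattern):
    """Toy stand-in for a core-physics evaluation: reward placing
    high-value assemblies (large numbers) at high-index positions."""
    return sum(fa * pos for pos, fa in enumerate(pattern))

def crossover(p1, p2):
    """Order crossover: keep the head of p1, fill the tail in p2's
    order, so each assembly still appears exactly once."""
    head = p1[:len(p1) // 2]
    return head + [fa for fa in p2 if fa not in head]

def ga(n_assemblies=8, pop_size=20, generations=40, seed=1):
    """Elitist GA over permutation-encoded loading patterns."""
    rng = random.Random(seed)
    pop = [rng.sample(range(n_assemblies), n_assemblies)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate, reverse=True)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = crossover(a, b)
            if rng.random() < 0.2:               # swap mutation
                i, j = rng.sample(range(n_assemblies), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=evaluate)
```

In a real tool the `evaluate` call would invoke a core simulator, which is why heuristic rules that prune poor candidates before evaluation pay off.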

  13. A novel concatenated code based on the improved SCG-LDPC code for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Xie, Ya; Wang, Lin; Huang, Sheng; Wang, Yong

    2013-01-01

Based on the optimization and improvement of the construction method for the systematically constructed Gallager (SCG) (4, k) code, a novel SCG low-density parity-check (SCG-LDPC) (3969, 3720) code suitable for optical transmission systems is constructed. A novel SCG-LDPC (6561, 6240) code with a code rate of 95.1% is then constructed by increasing the length of the SCG-LDPC (3969, 3720) code, so that the code rate better meets the high requirements of optical transmission systems. A novel concatenated code is then constructed by concatenating the SCG-LDPC (6561, 6240) code with a BCH (127, 120) code of code rate 94.5%. The simulation results and analyses show that the net coding gain (NCG) of the BCH (127, 120) + SCG-LDPC (6561, 6240) concatenated code is 2.28 dB and 0.48 dB more than those of the classic RS (255, 239) code and the SCG-LDPC (6561, 6240) code, respectively, at a bit error rate (BER) of 10^-7.
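
The quoted rates follow from R = k/n, and for a serial concatenation the component rates multiply; a quick arithmetic check:

```python
def rate(n, k):
    """Code rate of an (n, k) block code."""
    return k / n

ldpc_short = rate(3969, 3720)  # SCG-LDPC(3969,3720): ~0.937
ldpc_long = rate(6561, 6240)   # SCG-LDPC(6561,6240): ~0.951
bch = rate(127, 120)           # BCH(127,120): ~0.945
overall = ldpc_long * bch      # concatenated overall rate: ~0.899
```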

  14. Trace-shortened Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Solomon, G.

    1994-01-01

Reed-Solomon (RS) codes have been part of standard NASA telecommunications systems for many years. RS codes are character-oriented error-correcting codes, and their principal use in space applications has been as outer codes in concatenated coding systems. However, for a given character size, say m bits, RS codes are limited to a length of at most 2^m. It is known in theory that longer character-oriented codes would be superior to RS codes in concatenation applications, but until recently no practical class of 'long' character-oriented codes had been discovered. In 1992, however, Solomon discovered an extensive class of such codes, which are now called trace-shortened Reed-Solomon (TSRS) codes. In this article, we will continue the study of TSRS codes. Our main result is a formula for the dimension of any TSRS code, as a function of its error-correcting power. Using this formula, we will give several examples of TSRS codes, some of which look very promising as candidate outer codes in high-performance coded telecommunications systems.

  15. New quantum codes derived from a family of antiprimitive BCH codes

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Li, Ruihu; Lü, Liangdong; Guo, Luobin

The Bose-Chaudhuri-Hocquenghem (BCH) codes have been studied for more than 57 years and have found wide application in classical communication systems and quantum information theory. In this paper, we study the construction of quantum codes from a family of q²-ary BCH codes with length n = q^(2m) + 1 (also called antiprimitive BCH codes in the literature), where q ≥ 4 is a power of 2 and m ≥ 2. By a detailed analysis of some useful properties of q²-ary cyclotomic cosets modulo n, Hermitian dual-containing conditions for a family of non-narrow-sense antiprimitive BCH codes are presented, which are similar to those of q²-ary primitive BCH codes. Consequently, via the Hermitian construction, a family of new quantum codes can be derived from these dual-containing BCH codes. Some of these new antiprimitive quantum BCH codes are comparable with those derived from primitive BCH codes.
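
The q²-ary cyclotomic cosets that drive the dual-containing analysis can be computed directly (a generic sketch, shown below for a small base and length rather than the paper's n = q^(2m) + 1 with q ≥ 4):

```python
import math

def cyclotomic_cosets(q2, n):
    """Partition {0, ..., n-1} into q2-ary cyclotomic cosets
    C_s = {s, s*q2, s*q2^2, ...} (mod n); requires gcd(q2, n) = 1."""
    assert math.gcd(q2, n) == 1
    seen, cosets = set(), []
    for s in range(n):
        if s in seen:
            continue
        coset, x = [], s
        while x not in coset:   # multiply by q2 until the cycle closes
            coset.append(x)
            x = (x * q2) % n
        seen.update(coset)
        cosets.append(sorted(coset))
    return cosets
```

For instance, the 4-ary cosets modulo 17 include {0} and {1, 4, 13, 16}; the paper's setting is the same computation at larger parameters.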

  16. Patterns in the species-environment relationship depend on both scale and choice of response variables

    Treesearch

    Samuel A. Cushman; Kevin McGarigal

    2004-01-01

    Multi-scale investigations of species/environment relationships are an important tool in ecological research. The scale at which independent and dependent variables are measured, and how they are coded for analysis, can strongly influence the relationships that are discovered. However, little is known about how the coding of the dependent variable set influences...

  17. Asymptotic Expansion Homogenization for Multiscale Nuclear Fuel Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hales, J. D.; Tonks, M. R.; Chockalingam, K.

    2015-03-01

Engineering scale nuclear fuel performance simulations can benefit by utilizing high-fidelity models running at a lower length scale. Lower length-scale models provide a detailed view of the material behavior that is used to determine the average material response at the macroscale. These lower length-scale calculations may provide insight into material behavior where experimental data is sparse or nonexistent. This multiscale approach is especially useful in the nuclear field, since irradiation experiments are difficult and expensive to conduct. The lower length-scale models complement the experiments by influencing the types of experiments required and by reducing the total number of experiments needed. This multiscale modeling approach is a central motivation in the development of the BISON-MARMOT fuel performance codes at Idaho National Laboratory. These codes seek to provide more accurate and predictive solutions for nuclear fuel behavior. One critical aspect of multiscale modeling is the ability to extract the relevant information from the lower length-scale simulations. One approach, the asymptotic expansion homogenization (AEH) technique, has proven to be an effective method for determining homogenized material parameters. The AEH technique prescribes a system of equations to solve at the microscale that are used to compute homogenized material constants for use at the engineering scale. In this work, we employ AEH to explore the effect of evolving microstructural thermal conductivity and elastic constants on nuclear fuel performance. We show that the AEH approach fits cleanly into the BISON and MARMOT codes and provides a natural, multidimensional homogenization capability.
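
The AEH equations themselves are beyond the scope of an abstract, but the notion of a homogenized material constant can be illustrated with the classical one-dimensional limit: heat conduction across a layered microstructure homogenizes to the volume-weighted harmonic mean of the layer conductivities, and conduction along the layers to the arithmetic mean (an illustration of homogenization in general, not of AEH specifically):

```python
def k_series(fractions, ks):
    """Homogenized conductivity for conduction ACROSS 1D layers:
    volume-weighted harmonic mean of layer conductivities."""
    assert abs(sum(fractions) - 1.0) < 1e-12
    return 1.0 / sum(f / k for f, k in zip(fractions, ks))

def k_parallel(fractions, ks):
    """Homogenized conductivity for conduction ALONG the layers:
    volume-weighted arithmetic mean."""
    assert abs(sum(fractions) - 1.0) < 1e-12
    return sum(f * k for f, k in zip(fractions, ks))
```

The series value never exceeds the parallel value; a full AEH calculation produces the analogous effective tensors for arbitrary microstructures.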

  18. Variable disparity-motion estimation based fast three-view video coding

    NASA Astrophysics Data System (ADS)

    Bae, Kyung-Hoon; Kim, Seung-Cheol; Hwang, Yong Seok; Kim, Eun-Soo

    2009-02-01

In this paper, variable disparity-motion estimation (VDME) based three-view video coding is proposed. In the encoding, key-frame coding (KFC) based motion estimation and variable disparity estimation (VDE) are processed for effectively fast three-view video encoding. These proposed algorithms enhance the performance of the 3-D video encoding/decoding system in terms of disparity-estimation accuracy and computational overhead. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm's PSNRs are 37.66 and 40.55 dB, and the processing times are 0.139 and 0.124 sec/frame, respectively.

  19. Are Neighborhood-Level Characteristics Associated with Indoor Allergens in the Household?

    PubMed Central

    Rosenfeld, Lindsay; Rudd, Rima; Chew, Ginger L.; Emmons, Karen; Acevedo-García, Dolores

    2010-01-01

    Background Individual home characteristics have been associated with indoor allergen exposure; however, the influence of neighborhood-level characteristics has not been well-studied. We defined neighborhoods as community districts determined by the New York Department of City Planning. Objective We examined the relationship between neighborhood-level characteristics and the presence of dust mite (Der f 1), cat (Fel d 1), cockroach (Bla g 2), and mouse (MUP) allergens in the household. Methods Using data from the Puerto Rican Asthma Project, a birth cohort of Puerto Rican children at risk of allergic sensitization (n=261) we examined associations between neighborhood characteristics (percent tree canopy, asthma hospitalizations per 1000 children, roadway length within 100 meters of buildings, serious housing code violations per 1000 rental units, poverty rates, and felony crime rates) and the presence of indoor allergens. Allergen cutpoints were used for categorical analyses and defined as follows: dust mite: >0.25 μg/g; cat: >1 μg/g; cockroach: >1 U/g; mouse: >1.6 μg/g. Results Serious housing code violations were statistically significantly positively associated with dust mite, cat and mouse allergens (continuous variables), adjusting for mother's income and education, and all neighborhood-level characteristics. In multivariable logistic regression analyses, medium levels of housing code violations were associated with higher dust mite and cat allergens (1.81, 95%CI: 1.08, 3.03 and 3.10, 95%CI: 1.22, 7.92, respectively). A high level of serious housing code violations was associated with higher mouse allergen (2.04, 95%CI: 1.15, 3.62). A medium level of housing code violations was associated with higher cockroach allergen (3.30, 95%CI: 1.11, 9.78). Conclusions Neighborhood-level characteristics, specifically housing code violations, appear to be related to indoor allergens, which may have implications for future research explorations and policy decisions. 
PMID:20100024

  20. Coding completeness and quality of relative survival-related variables in the National Program of Cancer Registries Cancer Surveillance System, 1995-2008.

    PubMed

    Wilson, Reda J; O'Neil, M E; Ntekop, E; Zhang, Kevin; Ren, Y

    2014-01-01

    Calculating accurate estimates of cancer survival is important for various analyses of cancer patient care and prognosis. Current US survival rates are estimated based on data from the National Cancer Institute's (NCI's) Surveillance, Epidemiology, and End RESULTS (SEER) program, covering approximately 28 percent of the US population. The National Program of Cancer Registries (NPCR) covers about 96 percent of the US population. Using a population-based database with greater US population coverage to calculate survival rates at the national, state, and regional levels can further enhance the effective monitoring of cancer patient care and prognosis in the United States. The first step is to establish the coding completeness and coding quality of the NPCR data needed for calculating survival rates and conducting related validation analyses. Using data from the NPCR-Cancer Surveillance System (CSS) from 1995 through 2008, we assessed coding completeness and quality on 26 data elements that are needed to calculate cancer relative survival estimates and conduct related analyses. Data elements evaluated consisted of demographic, follow-up, prognostic, and cancer identification variables. Analyses were performed showing trends of these variables by diagnostic year, state of residence at diagnosis, and cancer site. Mean overall percent coding completeness by each NPCR central cancer registry averaged across all data elements and diagnosis years ranged from 92.3 percent to 100 percent. RESULTS showing the mean percent coding completeness for the relative survival-related variables in NPCR data are presented. All data elements but 1 have a mean coding completeness greater than 90 percent as was the mean completeness by data item group type. Statistically significant differences in coding completeness were found in the ICD revision number, cause of death, vital status, and date of last contact variables when comparing diagnosis years. 
The majority of data items had a coding quality greater than 90 percent, with exceptions found in the cause of death, follow-up source, SEER Summary Stage 1977, and SEER Summary Stage 2000 variables. Percent coding completeness and quality are very high for variables in the NPCR-CSS that are covariates for calculating relative survival. NPCR provides the opportunity to calculate relative survival estimates that may be more generalizable to the US population.

  1. A variable mixing-length ratio for convection theory

    NASA Technical Reports Server (NTRS)

    Chan, K. L.; Wolff, C. L.; Sofia, S.

    1981-01-01

It is argued that a natural choice for the local mixing length in the mixing-length theory of convection is a value proportional to the local density scale height of the convective bubbles. The resultant variable mixing-length ratio (the ratio between the mixing length and the pressure scale height) of this theory is enhanced in the superadiabatic region and approaches a constant in deeper layers. Numerical tests show that the new mixing length eliminates most of the density inversion that typically plagues conventional results. The new approach also seems to indicate the existence of granular motion at the top of the convection zone.
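
    A short derivation makes the claimed behavior explicit (a sketch under assumed notation: H_rho and H_p are the density and pressure scale heights, nabla = d ln T / d ln P, and the proportionality constant beta is hypothetical, not taken from the paper):

    ```latex
    l = \beta H_\rho , \qquad
    \frac{1}{H_\rho} = \left|\frac{d\ln\rho}{dr}\right|
                     = \frac{1-\nabla}{H_p}
    \quad\Longrightarrow\quad
    \alpha \equiv \frac{l}{H_p} = \frac{\beta}{1-\nabla}
    ```

    Here the middle equality uses the ideal-gas relation \(\ln\rho = \ln P - \ln T + \mathrm{const}\). The ratio \(\alpha\) is therefore enhanced where \(\nabla\) is large (the superadiabatic region) and tends to the constant \(\beta/(1-\nabla_{\rm ad})\) in deep, nearly adiabatic layers.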

  2. Shuttle Debris Impact Tool Assessment Using the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    DeLoach, R.; Rayos, E. M.; Campbell, C. H.; Rickman, S. L.

    2006-01-01

Computational tools have been developed to estimate thermal and mechanical reentry loads experienced by the Space Shuttle Orbiter as the result of cavities in the Thermal Protection System (TPS). Such cavities can be caused by impact from ice or insulating foam debris shed from the External Tank (ET) on liftoff. The reentry loads depend on cavity geometry and certain Shuttle state variables, among other factors. Certain simplifying assumptions have been made in the tool development about the cavity geometry variables. For example, the cavities are all modeled as "shoeboxes," with rectangular cross-sections and planar walls. So an actual cavity is typically approximated with an idealized cavity described in terms of its length, width, and depth, as well as its entry angle, exit angle, and side angles (assumed to be the same for both sides). As part of a comprehensive assessment of the uncertainty in reentry loads estimated by the debris impact assessment tools, an effort has been initiated to quantify the component of the uncertainty that is due to imperfect geometry specifications for the debris impact cavities. The approach is to compute predicted loads for a set of geometry factor combinations sufficient to develop polynomial approximations to the complex, nonparametric underlying computational models. Such polynomial models are continuous and feature estimable, continuous derivatives, conditions that facilitate the propagation of independent variable errors. As an additional benefit, once the polynomial models have been developed, they require fewer computational resources to execute than the underlying finite element and computational fluid dynamics codes, and can generate reentry loads estimates in significantly less time. This provides a practical screening capability, in which a large number of debris impact cavities can be quickly classified either as harmless, or subject to additional analysis with the more comprehensive underlying computational tools. 
The polynomial models also provide useful insights into the sensitivity of reentry loads to various cavity geometry variables, and reveal complex interactions among those variables that indicate how the sensitivity of one variable depends on the level of one or more other variables. For example, the effect of cavity length on certain reentry loads depends on the depth of the cavity. Such interactions are clearly displayed in the polynomial response models.
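
    The role of such interaction terms can be illustrated with a minimal least-squares sketch (all names and numbers below are hypothetical and synthetic, not taken from the Shuttle tools): a polynomial response model with a length-by-depth cross term recovers a load whose sensitivity to cavity length depends on cavity depth.

    ```python
    import numpy as np

    # Hypothetical synthetic "reentry load" depending on cavity length L and
    # depth D, with an interaction: the effect of L grows with D.
    rng = np.random.default_rng(0)
    L = rng.uniform(1.0, 10.0, 200)   # cavity length, arbitrary units
    D = rng.uniform(0.1, 2.0, 200)    # cavity depth
    load = 2.0 + 0.5 * L + 1.5 * D + 0.8 * L * D + rng.normal(0, 0.01, 200)

    # Polynomial response model with an L*D interaction term, fit by least squares.
    X = np.column_stack([np.ones_like(L), L, D, L * D])
    coef, *_ = np.linalg.lstsq(X, load, rcond=None)

    # The fitted coefficients recover the depth-dependent sensitivity:
    # d(load)/dL = 0.5 + 0.8 * D, i.e. length matters more for deep cavities.
    print(coef)  # approximately [2.0, 0.5, 1.5, 0.8]
    ```

    The cross-term coefficient is exactly the kind of interaction the abstract describes: a continuous, differentiable surrogate that is cheap to evaluate once fitted.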

  3. Dopamine Modulates Adaptive Prediction Error Coding in the Human Midbrain and Striatum.

    PubMed

    Diederen, Kelly M J; Ziauddeen, Hisham; Vestergaard, Martin D; Spencer, Tom; Schultz, Wolfram; Fletcher, Paul C

    2017-02-15

    Learning to optimally predict rewards requires agents to account for fluctuations in reward value. Recent work suggests that individuals can efficiently learn about variable rewards through adaptation of the learning rate, and coding of prediction errors relative to reward variability. Such adaptive coding has been linked to midbrain dopamine neurons in nonhuman primates, and evidence in support for a similar role of the dopaminergic system in humans is emerging from fMRI data. Here, we sought to investigate the effect of dopaminergic perturbations on adaptive prediction error coding in humans, using a between-subject, placebo-controlled pharmacological fMRI study with a dopaminergic agonist (bromocriptine) and antagonist (sulpiride). Participants performed a previously validated task in which they predicted the magnitude of upcoming rewards drawn from distributions with varying SDs. After each prediction, participants received a reward, yielding trial-by-trial prediction errors. Under placebo, we replicated previous observations of adaptive coding in the midbrain and ventral striatum. Treatment with sulpiride attenuated adaptive coding in both midbrain and ventral striatum, and was associated with a decrease in performance, whereas bromocriptine did not have a significant impact. Although we observed no differential effect of SD on performance between the groups, computational modeling suggested decreased behavioral adaptation in the sulpiride group. These results suggest that normal dopaminergic function is critical for adaptive prediction error coding, a key property of the brain thought to facilitate efficient learning in variable environments. Crucially, these results also offer potential insights for understanding the impact of disrupted dopamine function in mental illness. SIGNIFICANCE STATEMENT To choose optimally, we have to learn what to expect. 
Humans dampen learning when there is a great deal of variability in reward outcome, and two brain regions that are modulated by the brain chemical dopamine are sensitive to reward variability. Here, we aimed to directly relate dopamine to learning about variable rewards and the neural encoding of associated teaching signals. We perturbed dopamine in healthy individuals using dopaminergic medication and asked them to predict variable rewards while we acquired brain scans. Dopamine perturbations impaired learning and the neural encoding of reward variability, thus establishing a direct link between dopamine and adaptation to reward variability. These results aid our understanding of clinical conditions associated with dopaminergic dysfunction, such as psychosis. Copyright © 2017 Diederen et al.
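
    The computational idea, coding prediction errors relative to reward variability so that learning is dampened when rewards are noisy, can be sketched with a toy delta-rule learner (this is an illustrative model, not the study's actual fitting procedure; all parameters are hypothetical):

    ```python
    import random

    def adaptive_predictions(rewards, lr=0.2):
        """Delta-rule learner whose prediction errors are scaled by a running
        estimate of reward SD, so the effective learning rate falls as reward
        variability rises (a sketch of adaptive prediction error coding)."""
        pred, var = 0.0, 1.0
        preds = []
        for r in rewards:
            preds.append(pred)
            delta = r - pred                            # raw prediction error
            var += lr * (delta ** 2 - var)              # track reward variability
            pred += lr * delta / max(var ** 0.5, 1e-6)  # SD-scaled update
        return preds

    random.seed(1)
    low = adaptive_predictions([random.gauss(10, 1) for _ in range(300)])
    high = adaptive_predictions([random.gauss(10, 5) for _ in range(300)])
    # Both learners converge near the true mean of 10; the SD-scaled update
    # damps trial-to-trial swings when reward variability is high.
    ```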

  4. CRIB; the mineral resources data bank of the U.S. Geological Survey

    USGS Publications Warehouse

    Calkins, James Alfred; Kays, Olaf; Keefer, Eleanor K.

    1973-01-01

    The recently established Computerized Resources Information Bank (CRIB) of the U.S. Geological Survey is expected to play an increasingly important role in the study of United States' mineral resources. CRIB provides a rapid means for organizing and summarizing information on mineral resources and for displaying the results. CRIB consists of a set of variable-length records containing the basic information needed to characterize one or more mineral commodities, a mineral deposit, or several related deposits. The information consists of text, numeric data, and codes. Some topics covered are: name, location, commodity information, geology, production, reserves, potential resources, and references. The data are processed by the GIPSY program, which performs all the processing tasks needed to build, operate, and maintain the CRIB file. The sophisticated retrieval program allows the user to make highly selective searches of the files for words, parts of words, phrases, numeric data, word ranges, numeric ranges, and others, and to interrelate variables by logic statements to any degree of refinement desired. Three print options are available, or the retrieved data can be passed to another program for further processing.

  5. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of coders for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
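
    A stripped-down sketch of the threshold-driven selection (not the MBC codec itself; block size, candidate rates, and the threshold are illustrative): compute a block's 2-D DCT, then keep the smallest low-frequency coefficient set whose reconstruction error falls below the distortion threshold.

    ```python
    import numpy as np

    def dct_matrix(n):
        """Orthonormal DCT-II matrix (rows = frequencies, columns = samples)."""
        k = np.arange(n)
        C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        C[0] /= np.sqrt(2)
        return C * np.sqrt(2 / n)

    def code_block(block, rates=(4, 16, 64), threshold=0.01):
        """Keep the fewest low-frequency DCT coefficients whose reconstruction
        meets a mean-squared-error threshold: a sketch of threshold-driven
        coder selection, with the kept-coefficient count standing in for rate."""
        n = block.shape[0]
        C = dct_matrix(n)
        coeffs = C @ block @ C.T
        for r in rates:
            k = int(np.sqrt(r))              # keep a k-by-k low-frequency corner
            kept = np.zeros_like(coeffs)
            kept[:k, :k] = coeffs[:k, :k]
            recon = C.T @ kept @ C
            if np.mean((recon - block) ** 2) <= threshold:
                return r, recon
        return rates[-1], recon

    # A smooth gradient block passes at the lowest rate; a noisy block does not.
    smooth = np.outer(np.linspace(0.0, 1.0, 8), np.ones(8))
    noisy = np.random.default_rng(0).normal(0.0, 1.0, (8, 8))
    rate_smooth, _ = code_block(smooth)
    rate_noisy, _ = code_block(noisy)
    ```

    Smooth regions are cheap to code because their DCT energy concentrates in low frequencies, which is exactly why a mixture of coders beats a single fixed-rate coder.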

  6. OPTIMASS: a package for the minimization of kinematic mass functions with constraints

    NASA Astrophysics Data System (ADS)

    Cho, Won Sang; Gainer, James S.; Kim, Doojin; Lim, Sung Hak; Matchev, Konstantin T.; Moortgat, Filip; Pape, Luc; Park, Myeonghun

    2016-01-01

Reconstructed mass variables, such as M2, M2C, MT*, and MT2W, play an essential role in searches for new physics at hadron colliders. The calculation of these variables generally involves constrained minimization in a large parameter space, which is numerically challenging. We provide a C++ code, Optimass, which interfaces with the Minuit library to perform this constrained minimization using the Augmented Lagrangian Method. The code can be applied to arbitrarily general event topologies, thus allowing the user to significantly extend the existing set of kinematic variables. We describe this code, explain its physics motivation, and demonstrate its use in the analysis of the fully leptonic decay of pair-produced top quarks using M2 variables.
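
    The Augmented Lagrangian Method itself is easy to sketch (a toy gradient-descent version in Python, not the actual C++/Minuit implementation; the example problem is hypothetical): an equality constraint g(x) = 0 is enforced by alternating unconstrained minimization of f + lam*g + (mu/2)*g^2 with a multiplier update.

    ```python
    import numpy as np

    def augmented_lagrangian(f_grad, g, g_grad, x0, mu=10.0, iters=20):
        """Minimize f subject to g(x) = 0 with the augmented Lagrangian method,
        using plain gradient steps for the inner unconstrained minimization."""
        x, lam = np.asarray(x0, float), 0.0
        for _ in range(iters):
            for _ in range(500):             # inner unconstrained minimization
                grad = f_grad(x) + (lam + mu * g(x)) * g_grad(x)
                x -= 0.01 * grad
            lam += mu * g(x)                 # multiplier update
        return x

    # Example: minimize (x-1)^2 + (y-2)^2 subject to x + y = 1.
    sol = augmented_lagrangian(
        f_grad=lambda v: 2 * (v - np.array([1.0, 2.0])),
        g=lambda v: v[0] + v[1] - 1.0,
        g_grad=lambda v: np.array([1.0, 1.0]),
        x0=[0.0, 0.0],
    )
    # sol is close to [0, 1], the point on the line nearest to (1, 2)
    ```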

  7. Experimental Study of an Axisymmetric Dual Throat Fluidic Thrust Vectoring Nozzle for Supersonic Aircraft Application

    NASA Technical Reports Server (NTRS)

    Flamm, Jeffrey D.; Deere, Karen A.; Mason, Mary L.; Berrier, Bobby L.; Johnson, Stuart K.

    2007-01-01

An axisymmetric version of the Dual Throat Nozzle concept with a variable expansion ratio has been studied to determine the impacts on thrust vectoring and nozzle performance. The nozzle design, applicable to a supersonic aircraft, was guided using the unsteady Reynolds-averaged Navier-Stokes computational fluid dynamics code PAB3D. The axisymmetric Dual Throat Nozzle concept was tested statically in the Jet Exit Test Facility at the NASA Langley Research Center. The nozzle geometric design variables included circumferential span of injection, cavity length, cavity convergence angle, and nozzle expansion ratio for conditions corresponding to take-off and landing, mid-climb, and cruise. Internal nozzle performance and thrust vectoring performance were determined for nozzle pressure ratios up to 10, with secondary injection rates up to 10 percent of the primary flow rate. The 60 degree span of injection generally performed better than the 90 degree span of injection using an equivalent injection area and number of holes, in agreement with computational results. For injection rates less than 7 percent, the thrust vector angle for the 60 degree span of injection was 1.5 to 2 degrees higher than for the 90 degree span of injection. Decreasing cavity length improved thrust ratio and discharge coefficient, but decreased thrust vector angle and thrust vectoring efficiency. Increasing cavity convergence angle from 20 to 30 degrees increased thrust vector angle by 1 degree over the range of injection rates tested, but adversely affected system thrust ratio and discharge coefficient. The Dual Throat Nozzle concept generated the best thrust vectoring performance with an expansion ratio of 1.0 (a cavity in between two equal minimum areas). The variable expansion ratio geometry did not provide the expected improvements in discharge coefficient and system thrust ratio throughout the flight envelope of a typical supersonic aircraft. 
At mid-climb and cruise conditions, the variable geometry design compromised thrust vector angle achieved, but some thrust vector control would be available, potentially for aircraft trim. The fixed area, expansion ratio of 1.0, Dual Throat Nozzle provided the best overall compromise for thrust vectoring and nozzle internal performance over the range of NPR tested compared to the variable geometry Dual Throat Nozzle.

  8. Fast and Accurate Multivariate Gaussian Modeling of Protein Families: Predicting Residue Contacts and Protein-Interaction Partners

    PubMed Central

    Feinauer, Christoph; Procaccini, Andrea; Zecchina, Riccardo; Weigt, Martin; Pagnani, Andrea

    2014-01-01

In the course of evolution, proteins show a remarkable conservation of their three-dimensional structure and their biological function, leading to strong evolutionary constraints on the sequence variability between homologous proteins. Our method aims at extracting such constraints from rapidly accumulating sequence data, and thereby at inferring protein structure and function from sequence information alone. Recently, global statistical inference methods (e.g. direct-coupling analysis, sparse inverse covariance estimation) have achieved a breakthrough towards this aim, and their predictions have been successfully implemented into tertiary and quaternary protein structure prediction methods. However, due to the discrete nature of the underlying variable (amino-acids), exact inference requires exponential time in the protein length, and efficient approximations are needed for practical applicability. Here we propose a very efficient multivariate Gaussian modeling approach as a variant of direct-coupling analysis: the discrete amino-acid variables are replaced by continuous Gaussian random variables. The resulting statistical inference problem is efficiently and exactly solvable. We show that the quality of inference is comparable or superior to that achieved by mean-field approximations to inference with discrete variables, as done by direct-coupling analysis. This is true for (i) the prediction of residue-residue contacts in proteins, and (ii) the identification of protein-protein interaction partners in bacterial signal transduction. An implementation of our multivariate Gaussian approach is available at the website http://areeweb.polito.it/ricerca/cmp/code. PMID:24663061
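
    The core of the Gaussian approximation can be sketched on a toy alignment (binary symbols rather than 21-state amino acids; all data synthetic and hypothetical): direct couplings are read off the inverse of a regularized empirical covariance, and the strongest off-diagonal coupling recovers the co-evolving pair.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy "alignment": 500 sequences, 6 binary positions; positions 1 and 4
    # are made to co-evolve exactly.
    n_seq, L = 500, 6
    X = rng.integers(0, 2, size=(n_seq, L)).astype(float)
    X[:, 4] = X[:, 1]

    # Gaussian modeling: treat the symbols as continuous variables, regularize
    # the empirical covariance (the analogue of a pseudocount), and read
    # direct couplings off its inverse.
    C = np.cov(X, rowvar=False) + 0.1 * np.eye(L)
    J = np.linalg.inv(C)

    scores = np.abs(J)
    np.fill_diagonal(scores, 0.0)            # keep only pairwise couplings
    i, j = (int(v) for v in np.unravel_index(np.argmax(scores), scores.shape))
    print(sorted((i, j)))                    # the coupled pair: [1, 4]
    ```

    The inversion step is what makes the Gaussian variant "exact": unlike discrete models, the continuous relaxation has a closed-form maximum-likelihood solution.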

  9. Rate-compatible protograph LDPC code families with linear minimum distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J. (Inventor); Jones, Christopher R. (Inventor)

    2012-01-01

    Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds.
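
    The protograph-to-code construction can be sketched directly: a generic "copy-and-permute" lifting in which every edge of the base matrix becomes a circulant permutation block. The base matrix below is a hypothetical example, not one of the patented designs; note that it has two degree-1 variable nodes (degree less than 3, as in the first method described above).

    ```python
    import numpy as np

    def lift_protograph(base, z, rng):
        """Copy-and-permute lifting: each edge of the protograph becomes a
        random z-by-z circulant permutation block, each absent edge a zero
        block; multiple edges accumulate modulo 2."""
        m, n = base.shape
        H = np.zeros((m * z, n * z), dtype=int)
        for r in range(m):
            for c in range(n):
                for _ in range(base[r, c]):
                    shift = int(rng.integers(z))
                    H[r*z:(r+1)*z, c*z:(c+1)*z] ^= np.roll(
                        np.eye(z, dtype=int), shift, axis=1)
        return H

    rng = np.random.default_rng(0)
    base = np.array([[1, 1, 1, 0],       # hypothetical base matrix with
                     [0, 1, 1, 1]])      # two degree-1 variable nodes
    H = lift_protograph(base, z=8, rng=rng)
    # Row and column weights of H reproduce the protograph's node degrees.
    ```

    Rate-compatible families as in the second and third methods would then be obtained by designating some variable-node column groups of H as non-transmitted (punctured) or fixed to zero.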

  10. Spin wave based parallel logic operations for binary data coded with domain walls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urazuka, Y.; Oyabu, S.; Chen, H.

    2014-05-07

We numerically investigate the feasibility of spin wave (SW) based parallel logic operations, where the phase of a SW packet (SWP) is exploited as a state variable and the phase shift caused by the interaction with a domain wall (DW) is utilized as a logic inversion functionality. The designed functional element consists of parallel ferromagnetic nanowires (6 nm thickness, 36 nm width, 5120 nm length, and 200 nm separation) with perpendicular magnetization and sub-μm scale overlaid conductors. The logic outputs for binary data, coded with the existence ("1") or absence ("0") of the DW, are inductively read out from the interferometric aspect of the superposed SWPs, one of them propagating through the stored data area. A practical exclusive-or operation, based on 2π periodicity in the phase logic, is demonstrated for the individual nanowire, with an order-of-magnitude difference in output voltage V_out depending on the logic output for the stored data. The inductive output from the two nanowires exhibits three well-defined signal levels, corresponding to the information distance (Hamming distance) between the 2-bit data stored in the multiple nanowires.
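
    The interferometric readout reduces to a simple phase-superposition rule, sketched below as a toy model (not the paper's micromagnetic simulation; amplitudes and the decision threshold are illustrative): a domain wall flips the data-path packet's phase by π, so superposition with a reference packet yields a large signal when phases agree and cancellation when they differ.

    ```python
    import math

    def xor_output(stored_bit, input_bit, amplitude=1.0):
        """Interferometric XOR sketch: each '1' contributes a pi phase shift;
        superposing the data-path packet with the reference packet gives
        2*amplitude when the phases agree and ~0 when they differ, so a low
        output level encodes XOR = 1."""
        phase_data = math.pi * stored_bit
        phase_ref = math.pi * input_bit
        return 2.0 * amplitude * abs(math.cos((phase_data - phase_ref) / 2.0))

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "XOR =", int(xor_output(a, b) < 1.0))
    ```

    Summing such outputs over two nanowires gives three distinct levels (0, 1, or 2 mismatches), which is the Hamming-distance readout the abstract describes.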

  11. Sequence of a cDNA encoding pancreatic preprosomatostatin-22.

    PubMed Central

    Magazin, M; Minth, C D; Funckes, C L; Deschenes, R; Tavianini, M A; Dixon, J E

    1982-01-01

We report the nucleotide sequence of a precursor to somatostatin that upon proteolytic processing may give rise to a hormone of 22 amino acids. The nucleotide sequence of a cDNA from the channel catfish (Ictalurus punctatus) encodes a precursor to somatostatin that is 105 amino acids long (Mr 11,500). The cDNA coding for somatostatin-22 consists of 36 nucleotides in the 5' untranslated region, 315 nucleotides that code for the precursor to somatostatin-22, 269 nucleotides in the 3' untranslated region, and a variable length of poly(A). The putative preprohormone contains a sequence of hydrophobic amino acids at the amino terminus that has the properties of a "signal" peptide. A connecting sequence of approximately 57 amino acids is followed by a single Arg-Arg sequence, which immediately precedes the hormone. Somatostatin-22 is homologous to somatostatin-14 in 7 of the 14 amino acids, including the Phe-Trp-Lys sequence. Hybridization selection of mRNA, followed by its translation in a wheat germ cell-free system, resulted in the synthesis of a single polypeptide having a molecular weight of approximately 10,000 as estimated on NaDodSO4/polyacrylamide gels. PMID:6127673

  12. A new and inexpensive non-bit-for-bit solution reproducibility test based on time step convergence (TSC1.0)

    NASA Astrophysics Data System (ADS)

    Wan, Hui; Zhang, Kai; Rasch, Philip J.; Singh, Balwinder; Chen, Xingyuan; Edwards, Jim

    2017-02-01

A test procedure is proposed for identifying numerically significant solution changes in evolution equations used in atmospheric models. The test issues a fail signal when any code modifications or computing environment changes lead to solution differences that exceed the known time step sensitivity of the reference model. Initial evidence is provided, using the Community Atmosphere Model (CAM) version 5.3, that the proposed procedure can distinguish rounding-level solution changes from impacts of compiler optimization or parameter perturbation, which are known to cause substantial differences in the simulated climate. The test is not exhaustive, since it does not detect issues associated with diagnostic calculations that do not feed back to the model state variables. Nevertheless, it provides a practical and objective way to assess the significance of solution changes. The short simulation length implies low computational cost. The independence between ensemble members allows for parallel execution of all simulations, thus facilitating fast turnaround. The new method is simple to implement since it does not require any code modifications. We expect that the same methodology can be used for any geophysical model to which the concept of time step convergence is applicable.
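
    The principle can be sketched with a one-variable toy "model" (hypothetical dynamics, not CAM): calibrate a threshold from the solution change caused by halving the time step, then flag any modification whose solution difference exceeds it.

    ```python
    import math

    def step(x, dt):
        """One forward-Euler step of a toy 'model' dx/dt = -x + sin(x)."""
        return x + dt * (-x + math.sin(x))

    def run(x0, dt, t_end=1.0):
        x = x0
        for _ in range(round(t_end / dt)):
            x = step(x, dt)
        return x

    # Reference time-step sensitivity: the solution change from halving dt.
    ref, half = run(1.0, 0.01), run(1.0, 0.005)
    threshold = abs(ref - half)

    # A rounding-level perturbation (standing in for a compiler change) stays
    # below the threshold; a genuine parameter change exceeds it -> test fails.
    rounding = abs(run(1.0 + 1e-12, 0.01) - ref)
    param_change = abs(run(0.9, 0.01) - ref)   # hypothetical physics change
    print(rounding < threshold, param_change > threshold)  # True True
    ```

    In the actual procedure the comparison is made over an ensemble of short runs, which is what keeps the computational cost low.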

  13. Nonlinear, nonbinary cyclic group codes

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1992-01-01

New cyclic group codes of length 2^m - 1 over (m - j)-bit symbols are introduced. These codes can be systematically encoded and decoded algebraically. The code rates are very close to Reed-Solomon (RS) codes and are much better than Bose-Chaudhuri-Hocquenghem (BCH) codes (a former alternative). The binary (m - j)-tuples are identified with a subgroup of the binary m-tuples which represents the field GF(2^m). Encoding is systematic and involves a two-stage procedure consisting of the usual linear feedback register (using the division or check polynomial) and a small table lookup. For low rates, a second shift-register encoding operation may be invoked. Decoding uses the RS error-correcting procedures for the m-tuple codes for m = 4, 5, and 6.

  14. A Simulation Testbed for Adaptive Modulation and Coding in Airborne Telemetry (Brief)

    DTIC Science & Technology

    2014-10-01

    Briefing slides; only fragments survive extraction. Recoverable content: the testbed is implemented as a MATLAB GUI and exercises OFDM waveforms in the style of IEEE 802.11a, with BPSK, QPSK, 16-QAM, and 64-QAM modulations, selectable cyclic prefix lengths and subcarrier counts, and LDPC coding at rates 1/2, 2/3, 3/4, and 4/5; SOQPSK is also referenced. Distribution Statement A: approved for public release, distribution unlimited.

  15. Trinary signed-digit arithmetic using an efficient encoding scheme

    NASA Astrophysics Data System (ADS)

    Salim, W. Y.; Alam, M. S.; Fyath, R. S.; Ali, S. A.

    2000-09-01

The trinary signed-digit (TSD) number system is of interest for ultrafast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary-length numbers in constant time. In this paper, a simple coding scheme is proposed to encode decimal numbers directly into TSD form. The coding scheme enables one to perform parallel one-step TSD arithmetic operations. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table reported recently for the recoded TSD arithmetic technique.
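
    The TSD representation itself (radix 3 with the digit set {-1, 0, 1}, i.e. balanced ternary in its non-redundant form) is easy to sketch; the generic conversion below is illustrative and is not the paper's 5-combination coding table, which is not reproduced here.

    ```python
    def to_tsd(n):
        """Encode an integer in balanced-ternary/TSD form: radix 3, digits
        {-1, 0, 1}, least significant digit first."""
        digits = []
        while n != 0:
            r = n % 3
            if r == 2:
                r = -1            # digit -1, carrying 1 into the next position
            digits.append(r)
            n = (n - r) // 3
        return digits or [0]

    def tsd_value(digits):
        """Evaluate a least-significant-first TSD digit list."""
        return sum(d * 3 ** i for i, d in enumerate(digits))

    print(to_tsd(5))   # [-1, -1, 1]  since 5 = 9 - 3 - 1
    ```

    The redundancy of the signed digit set is what allows each output digit of a sum to be determined from a bounded neighborhood of input digits, giving the constant-time, carry-free addition the abstract refers to.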

  16. One-step trinary signed-digit arithmetic using an efficient encoding scheme

    NASA Astrophysics Data System (ADS)

    Salim, W. Y.; Fyath, R. S.; Ali, S. A.; Alam, Mohammad S.

    2000-11-01

The trinary signed-digit (TSD) number system is of interest for ultrafast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary-length numbers in constant time. In this paper, a simple coding scheme is proposed to encode decimal numbers directly into TSD form. The coding scheme enables one to perform parallel one-step TSD arithmetic operations. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table reported recently for the recoded TSD arithmetic technique.

  17. Bit-wise arithmetic coding for data compression

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.

    1994-01-01

    This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
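
    The arithmetic-coding stage can be sketched with a float-interval coder over a bit string whose bits are treated as independent, as in the article's bit-wise scheme (this is an illustrative sketch; practical coders use integer arithmetic with renormalization, and the probability and bit pattern below are hypothetical):

    ```python
    def encode_bits(bits, p1):
        """Arithmetic-code a bit string with independent bits, P(bit=1) = p1.
        Returns a single number inside the final interval."""
        lo, hi = 0.0, 1.0
        for b in bits:
            mid = lo + (hi - lo) * (1.0 - p1)   # [lo, mid) codes 0, [mid, hi) codes 1
            lo, hi = (mid, hi) if b else (lo, mid)
        return (lo + hi) / 2

    def decode_bits(x, n, p1):
        """Retrace the interval subdivisions to recover n bits from x."""
        lo, hi = 0.0, 1.0
        out = []
        for _ in range(n):
            mid = lo + (hi - lo) * (1.0 - p1)
            b = int(x >= mid)
            out.append(b)
            lo, hi = (mid, hi) if b else (lo, mid)
        return out

    # Skewed bit statistics compress well: the final interval has width
    # close to 2^(-n*H), so only about n*H bits are needed to specify x.
    bits = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
    x = encode_bits(bits, p1=0.125)
    assert decode_bits(x, len(bits), p1=0.125) == bits
    ```

    In the article's setting, each codeword bit position would get its own probability estimate, and the coder would be restarted per block when used block-adaptively.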

  18. Power Aware Signal Processing Environment (PASPE) for PAC/C

    DTIC Science & Technology

    2003-02-01

    Report fragments; only portions survive extraction. Recoverable content: for this implementation, the Annapolis FFT core was radix-256, and therefore the smallest PN code length that could be processed was PN-64; a C-code version of the correlator was compared to the FPGA implementation, with results (Figure 68 of the report) shown for a PN-1024 case. Approved for public release; distribution unlimited.

  19. An integrated PCR colony hybridization approach to screen cDNA libraries for full-length coding sequences.

    PubMed

    Pollier, Jacob; González-Guzmán, Miguel; Ardiles-Diaz, Wilson; Geelen, Danny; Goossens, Alain

    2011-01-01

    cDNA-Amplified Fragment Length Polymorphism (cDNA-AFLP) is a commonly used technique for genome-wide expression analysis that does not require prior sequence knowledge. Typically, quantitative expression data and sequence information are obtained for a large number of differentially expressed gene tags. However, most of the gene tags do not correspond to full-length (FL) coding sequences, which is a prerequisite for subsequent functional analysis. A medium-throughput screening strategy, based on integration of polymerase chain reaction (PCR) and colony hybridization, was developed that allows in parallel screening of a cDNA library for FL clones corresponding to incomplete cDNAs. The method was applied to screen for the FL open reading frames of a selection of 163 cDNA-AFLP tags from three different medicinal plants, leading to the identification of 109 (67%) FL clones. Furthermore, the protocol allows for the use of multiple probes in a single hybridization event, thus significantly increasing the throughput when screening for rare transcripts. The presented strategy offers an efficient method for the conversion of incomplete expressed sequence tags (ESTs), such as cDNA-AFLP tags, to FL-coding sequences.

  20. Action-oriented colour-coded foot length calliper for primary healthcare workers as a proxy for birth weight & gestational period.

    PubMed

    Pratinidhi, Asha K; Bagade, Abhijit C; Kakade, Satish V; Kale, Hemangi P; Kshirsagar, Vinayak Y; Babar, Rohini; Bagal, Shilpa

    2017-03-01

Foot length of the newborn has a good correlation with birth weight and is recommended for use as a proxy measure. There can, however, be variations in the measurement of foot length. A study was, therefore, carried out to develop a foot length calliper for accurate foot length measurement and to find cut-off values for birth weight and gestational age groups to be used by primary healthcare workers. This study was undertaken on 645 apparently healthy newborn infants with known gestational age. Nude birth weight was taken within 24 h of birth on a standard electronic weighing machine. A foot length calliper was developed. Correlation between foot length and birth weight as well as gestational age was calculated. Correctness of cut-off values was tested using another set of 133 observations on apparently healthy newborns. Action-oriented colour coding was done to make the calliper easy for primary healthcare workers to use. There was a significant correlation of foot length with birth weight (r=0.75) and gestational age (r=0.63). Cut-off values were 6.1, 6.8 and 7.3 cm for birth weight groups and 6.1, 6.8 and 7.0 cm for gestational age groups. Correctness of these cut-off values ranged between 77.1 and 95.7 per cent for birth weight and between 60 and 93.3 per cent for gestational age. Considering 2.5 kg as the cut-off between normal birth weight and low birth weight (LBW), cut-off values of 6.1, 6.8 and 7.3 cm were chosen. Action-oriented colour coding was done by superimposing the colours on the scale of the calliper: green indicating home care, yellow indicating supervised home care, orange indicating care at newborn care units at primary health centres, and red indicating Neonatal Intensive Care Unit care for infants. 
A simple device was developed so that primary healthcare workers and trained Accredited Social Health Activist workers can identify the risk of LBW in the absence of accurate weighing facilities, decide on the type of care needed by the newborn, and take action accordingly.
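
    The decision logic amounts to comparing a measurement against the reported birth-weight cut-offs (6.1, 6.8, and 7.3 cm); the sketch below is illustrative, and the assignment of colour bands to the intervals follows the risk ordering described in the abstract (shorter foot, higher risk) rather than any table from the paper.

    ```python
    def foot_length_action(cm):
        """Map a newborn foot length in cm to the calliper's colour zones,
        using the study's birth-weight cut-offs of 6.1, 6.8, and 7.3 cm."""
        if cm < 6.1:
            return "red"     # refer for Neonatal Intensive Care Unit care
        if cm < 6.8:
            return "orange"  # newborn care unit at a primary health centre
        if cm < 7.3:
            return "yellow"  # supervised home care
        return "green"       # routine home care

    print([foot_length_action(x) for x in (5.8, 6.5, 7.0, 7.6)])
    # ['red', 'orange', 'yellow', 'green']
    ```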

  1. Relationships between Translation and Transcription Processes during fMRI Connectivity Scanning and Coded Translation and Transcription in Writing Products after Scanning in Children with and without Transcription Disabilities

    PubMed Central

    Wallis, Peter; Richards, Todd; Boord, Peter; Abbott, Robert; Berninger, Virginia

    2018-01-01

    Students with transcription disabilities (dysgraphia/impaired handwriting, n = 13 or dyslexia/impaired word spelling, n = 16) or without transcription disabilities (controls) completed transcription and translation (idea generating, planning, and creating) writing tasks during fMRI connectivity scanning and compositions after scanning, which were coded for transcription and translation variables. Compositions in both groups showed diversity in genre beyond usual narrative-expository distinction; groups differed in coded transcription but not translation variables. For the control group specific transcription or translation tasks during scanning correlated with corresponding coded transcription or translation skills in composition, but connectivity during scanning was not correlated with coded handwriting during composing in dysgraphia group and connectivity during translating was not correlated with any coded variable during composing in dyslexia group. Results are discussed in reference to the trend in neuroscience to use connectivity from relevant seed points while performing tasks and trends in education to recognize the generativity (creativity) of composing at both the genre and syntax levels. PMID:29600113

  2. Variability in the Length and Frequency of Steps of Sighted and Visually Impaired Walkers

    ERIC Educational Resources Information Center

    Mason, Sarah J.; Legge, Gordon E.; Kallie, Christopher S.

    2005-01-01

    The variability of the length and frequency of steps was measured in sighted and visually impaired walkers at three different paces. The variability was low, especially at the preferred pace, and similar for both groups. A model incorporating step counts and step frequency provides good estimates of the distance traveled. Applications to…
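
    The distance model described above combines step counts with step length, or equivalently walking time with step frequency; a minimal sketch (all numbers hypothetical, not the study's data):

    ```python
    def estimated_distance(step_count, step_length_m):
        """Distance estimate from a step count and a pace-specific mean
        step length, as in the step-integration model described above."""
        return step_count * step_length_m

    def estimated_distance_from_time(walk_time_s, step_freq_hz, step_length_m):
        """Equivalent estimate when only walking time is known:
        steps = step frequency * time."""
        return step_freq_hz * walk_time_s * step_length_m

    # e.g. 120 steps of 0.7 m equals 60 s of walking at 2 steps/s
    d1 = estimated_distance(120, 0.7)
    d2 = estimated_distance_from_time(60, 2.0, 0.7)
    ```

    The study's finding that step-length variability is low, especially at the preferred pace, is what makes a single mean step length per pace a usable calibration constant.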

  3. The complete mitochondrial genome and phylogenetic analysis of the giant panda (Ailuropoda melanoleuca).

    PubMed

    Peng, Rui; Zeng, Bo; Meng, Xiuxiang; Yue, Bisong; Zhang, Zhihe; Zou, Fangdong

    2007-08-01

The complete mitochondrial genome sequence of the giant panda, Ailuropoda melanoleuca, was determined by the long and accurate polymerase chain reaction (LA-PCR) with conserved primers and primer walking sequencing methods. The complete mitochondrial DNA is 16,805 nucleotides in length and contains two ribosomal RNA genes, 13 protein-coding genes, 22 transfer RNA genes and one control region. The total length of the 13 protein-coding genes is longer than that of the American black bear, brown bear and polar bear by three amino acids at the end of the NADH dehydrogenase subunit 5 (ND5) gene. The codon usage also followed the typical vertebrate pattern except for an unusual ATT start codon, which initiates the ND5 gene. The molecular phylogenetic analysis was performed on the sequences of 12 concatenated heavy-strand encoded protein-coding genes, and suggested that the giant panda is most closely related to bears.

  4. Variable focal length deformable mirror

    DOEpatents

    Headley, Daniel [Albuquerque, NM; Ramsey, Marc [Albuquerque, NM; Schwarz, Jens [Albuquerque, NM

    2007-06-12

    A variable focal length deformable mirror has an inner ring and an outer ring that simply support and push axially on opposite sides of a mirror plate. The resulting variable clamping force deforms the mirror plate to provide a parabolic mirror shape. The rings are parallel planar sections of a single paraboloid and can provide an on-axis focus, if the rings are circular, or an off-axis focus, if the rings are elliptical. The focal length of the deformable mirror can be varied by changing the variable clamping force. The deformable mirror can generally be used in any application requiring the focusing or defocusing of light, including with both coherent and incoherent light sources.

  5. Unitals and ovals of symmetric block designs in LDPC and space-time coding

    NASA Astrophysics Data System (ADS)

    Andriamanalimanana, Bruno R.

    2004-08-01

    An approach to the design of LDPC (low density parity check) error-correction and space-time modulation codes involves starting with known mathematical and combinatorial structures, and deriving code properties from structure properties. This paper reports on an investigation of unital and oval configurations within generic symmetric combinatorial designs, not just classical projective planes, as the underlying structure for classes of space-time LDPC outer codes. Of particular interest are the encoding and iterative (sum-product) decoding gains that these codes may provide. Various small-length cases have been numerically implemented in Java and Matlab for a number of channel models.

  6. Impact of Forecast and Model Error Correlations In 4dvar Data Assimilation

    NASA Astrophysics Data System (ADS)

    Zupanski, M.; Zupanski, D.; Vukicevic, T.; Greenwald, T.; Eis, K.; Vonder Haar, T.

    A weak-constraint 4DVAR data assimilation system has been developed at the Cooperative Institute for Research in the Atmosphere (CIRA), Colorado State University. It is based on NCEP's ETA 4DVAR system, and it is fully parallel (MPI coding). CIRA's 4DVAR system is aimed at satellite data assimilation research, with current focus on assimilation of cloudy radiances and microwave satellite measurements. The most important improvement over the previous 4DVAR system is a degree of generality introduced into the new algorithm, namely for applications with different NWP models (e.g., RAMS, WRF, ETA, etc.) and for the choice of control variable. In current applications, the non-hydrostatic RAMS model and its adjoint are used, including all microphysical processes. The control variable includes potential temperature, velocity potential and stream function, vertical velocity, and seven mixing ratios with respect to all water phases. Since the statistics of the microphysical components of the control variable are not well known, special attention will be paid to the impact of the forecast and model (prior) error correlations on the 4DVAR analysis. In particular, the sensitivity of the analysis with respect to decorrelation length will be examined. The prior error covariances are modelled using the compactly supported, space-limited correlations developed at NASA DAO.

  7. 26 CFR 1.441-1 - Period for computation of taxable income.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Internal Revenue Code, and the regulations thereunder. (2) Length of taxable year. Except as otherwise provided in the Internal Revenue Code and the regulations thereunder (e.g., § 1.441-2 regarding 52-53-week... and definitions. The general rules and definitions in this paragraph (b) apply for purposes of...

  8. 26 CFR 1.441-1 - Period for computation of taxable income.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Internal Revenue Code, and the regulations thereunder. (2) Length of taxable year. Except as otherwise provided in the Internal Revenue Code and the regulations thereunder (e.g., § 1.441-2 regarding 52-53-week... and definitions. The general rules and definitions in this paragraph (b) apply for purposes of...

  9. 26 CFR 1.441-1 - Period for computation of taxable income.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Internal Revenue Code, and the regulations thereunder. (2) Length of taxable year. Except as otherwise provided in the Internal Revenue Code and the regulations thereunder (e.g., § 1.441-2 regarding 52-53-week... and definitions. The general rules and definitions in this paragraph (b) apply for purposes of...

  10. Expressed gene sequence of the IFN-gamma-response chemokine CXCL9 of cattle, horses, and swine

    USDA-ARS?s Scientific Manuscript database

    This report describes the cloning and characterization of expressed gene sequences of bovine, equine, and swine CXCL9 from RNA obtained from peripheral blood mononuclear cell (PBMC) or other tissues. The bovine coding region was 378 nucleotides in length, while the equine and swine coding regions w...

  11. 76 FR 1059 - Publicly Available Mass Market Encryption Software and Other Specified Publicly Available...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-07

    .... 100108014-0121-01] RIN 0694-AE82 Publicly Available Mass Market Encryption Software and Other Specified Publicly Available Encryption Software in Object Code AGENCY: Bureau of Industry and Security, Commerce... encryption object code software with a symmetric key length greater than 64-bits, and ``publicly available...

  12. Compression of Index Term Dictionary in an Inverted-File-Oriented Database: Some Effective Algorithms.

    ERIC Educational Resources Information Center

    Wisniewski, Janusz L.

    1986-01-01

    Discussion of a new method of index term dictionary compression in an inverted-file-oriented database highlights a technique of word coding, which generates short fixed-length codes obtained from the index terms themselves by analysis of monogram and bigram statistical distributions. Substantial savings in communication channel utilization are…

  13. Rate-compatible protograph LDPC code families with linear minimum distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J (Inventor); Jones, Christopher R. (Inventor)

    2012-01-01

    Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds, and families of such codes of different rates can be decoded efficiently using a common decoding architecture.
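    The rate bookkeeping behind such rate-compatible families can be illustrated with a toy base (protograph) matrix; the matrix below is a made-up example, not one of the patented constructions.

```python
import numpy as np

# A protograph is specified by a small base matrix (check nodes x variable
# nodes); entries count parallel edges. The design rate of the derived LDPC
# code is (V - C) / (V - P), where P variable nodes are punctured
# (non-transmitted). This base matrix is illustrative only.
B = np.array([[1, 2, 1, 1],
              [2, 1, 1, 0]])  # 2 check nodes, 4 variable nodes

def protograph_rate(base, punctured=()):
    checks, variables = base.shape
    transmitted = variables - len(punctured)
    return (variables - checks) / transmitted

r_all = protograph_rate(B)                    # all nodes sent -> rate 1/2
r_punct = protograph_rate(B, punctured=(3,))  # puncture one -> rate 2/3
```

Puncturing leaves the protograph structurally identical and only changes which variable nodes are transmitted, which is what lets one decoder architecture serve every rate in the family.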

  14. OPTIMASS: A package for the minimization of kinematic mass functions with constraints

    DOE PAGES

    Cho, Won Sang; Gainer, James S.; Kim, Doojin; ...

    2016-01-07

    Reconstructed mass variables, such as M2, M2C, M*T, and MT2W, play an essential role in searches for new physics at hadron colliders. The calculation of these variables generally involves constrained minimization in a large parameter space, which is numerically challenging. We provide a C++ code, Optimass, which interfaces with the Minuit library to perform this constrained minimization using the Augmented Lagrangian Method. The code can be applied to arbitrarily general event topologies, thus allowing the user to significantly extend the existing set of kinematic variables. Here, we describe this code, explain its physics motivation, and demonstrate its use in the analysis of the fully leptonic decay of pair-produced top quarks using M2 variables.
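    The Augmented Lagrangian Method that Optimass delegates to Minuit can be sketched on a toy problem; this is not the Optimass or Minuit interface, and the objective, constraint, and step sizes are invented for illustration.

```python
import numpy as np

# Minimal augmented-Lagrangian sketch: minimize f(x) = x0^2 + x1^2
# subject to g(x) = x0 + x1 - 1 = 0. The exact minimizer is (0.5, 0.5).
def augmented_lagrangian(x0, mu=10.0, lam=0.0, outer=20, inner=200, lr=0.01):
    x = np.asarray(x0, dtype=float)
    for _ in range(outer):
        # inner loop: gradient descent on L_A = f + lam*g + (mu/2) g^2
        for _ in range(inner):
            g = x.sum() - 1.0
            grad = 2 * x + (lam + mu * g)  # same constraint term per coord
            x -= lr * grad
        lam += mu * (x.sum() - 1.0)  # multiplier update drives g -> 0
    return x

x = augmented_lagrangian([0.0, 0.0])  # converges near (0.5, 0.5)
```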

  15. Development of new two-dimensional spectral/spatial code based on dynamic cyclic shift code for OCDMA system

    NASA Astrophysics Data System (ADS)

    Jellali, Nabiha; Najjar, Monia; Ferchichi, Moez; Rezig, Houria

    2017-07-01

    In this paper, a new two-dimensional spectral/spatial code family, named two-dimensional dynamic cyclic shift codes (2D-DCS), is introduced. The 2D-DCS codes are derived from the dynamic cyclic shift code for the spectral and spatial coding. The proposed system can fully eliminate the multiple access interference (MAI) by using the MAI cancellation property. The effects of shot noise, phase-induced intensity noise, and thermal noise are used to analyze the code performance. In comparison with existing two-dimensional (2D) codes, such as 2D perfect difference (2D-PD), 2D Extended Enhanced Double Weight (2D-Extended-EDW), and 2D hybrid (2D-FCC/MDW) codes, the numerical results show that our proposed codes have the best performance. By keeping the same code length and increasing the spatial code, the performance of our 2D-DCS system is enhanced: it provides higher data rates while using lower transmitted power and a smaller spectral width.
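    The cyclic-shift idea underlying such code families can be illustrated in a few lines; the seed sequence and correlation check below are a generic sketch, not the paper's actual 2D-DCS construction.

```python
# Generic sketch: build a code family by cyclically shifting one binary
# seed sequence, then check in-phase cross-correlation between users.
# The seed is invented for illustration.
def cyclic_shifts(seed):
    n = len(seed)
    return [seed[k:] + seed[:k] for k in range(n)]

def cross_correlation(a, b):
    # number of chip positions where both codes have a '1'
    return sum(x & y for x, y in zip(a, b))

seed = [1, 0, 0, 1, 0]
family = cyclic_shifts(seed)  # 5 codes, one per shift
```

For this seed, adjacent shifts share no '1' positions, which is the kind of low cross-correlation property spectral codes rely on to suppress multiple access interference.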

  16. Select injury-related variables are affected by stride length and foot strike style during running.

    PubMed

    Boyer, Elizabeth R; Derrick, Timothy R

    2015-09-01

    Some frontal plane and transverse plane variables have been associated with running injury, but it is not known if they differ with foot strike style or as stride length is shortened. To identify if step width, iliotibial band strain and strain rate, positive and negative free moment, pelvic drop, hip adduction, knee internal rotation, and rearfoot eversion differ between habitual rearfoot and habitual mid-/forefoot strikers when running with both a rearfoot strike (RFS) and a mid-/forefoot strike (FFS) at 3 stride lengths. Controlled laboratory study. A total of 42 healthy runners (21 habitual rearfoot, 21 habitual mid-/forefoot) ran overground at 3.35 m/s with both a RFS and a FFS at their preferred stride lengths and 5% and 10% shorter. Variables did not differ between habitual groups. Step width was 1.5 cm narrower for FFS, widening to 0.8 cm as stride length shortened. Iliotibial band strain and strain rate did not differ between foot strikes but decreased as stride length shortened (0.3% and 1.8%/s, respectively). Pelvic drop was reduced 0.7° for FFS compared with RFS, and both pelvic drop and hip adduction decreased as stride length shortened (0.8° and 1.5°, respectively). Peak knee internal rotation was not affected by foot strike or stride length. Peak rearfoot eversion was not different between foot strikes but decreased 0.6° as stride length shortened. Peak positive free moment (normalized to body weight [BW] and height [h]) was not affected by foot strike or stride length. Peak negative free moment was -0.0038 BW·m/h greater for FFS and decreased -0.0004 BW·m/h as stride length shortened. The small decreases in most variables as stride length shortened were likely associated with the concomitant wider step width. RFS had slightly greater pelvic drop, while FFS had slightly narrower step width and greater negative free moment. 
Shortening one's stride length may decrease or at least not increase propensity for running injuries based on the variables that we measured. One foot strike style does not appear universally better than the other; rather, different foot strike styles may predispose runners to different types of injuries. © 2015 The Author(s).

  17. Factors that influence length of stay for in-patient gynaecology surgery: is the Case Mix Group (CMG) or type of procedure more important?

    PubMed

    Carey, Mark S; Victory, Rahi; Stitt, Larry; Tsang, Nicole

    2006-02-01

    To compare the association between the Case Mix Group (CMG) code and length of stay (LOS) with the association between the type of procedure and LOS in patients admitted for gynaecology surgery. We examined the records of women admitted for surgery in CMG 579 (major uterine/adnexal procedure, no malignancy) or 577 (major surgery ovary/adnexa with malignancy) between April 1997 and March 1999. Factors thought to influence LOS included age, weight, American Society of Anesthesiologists (ASA) score, physician, day of the week on which surgery was performed, and procedure type. Procedures were divided into six categories, four for CMG 579 and two for CMG 577. Data were abstracted from the hospital information costing system (T2 system) and by retrospective chart review. Multivariable analysis was performed using linear regression with backwards elimination. There were 606 patients in CMG 579 and 101 patients in CMG 577, and the corresponding median LOS was four days (range 1-19) for CMG 579 and nine days (range 3-30) for CMG 577. Combined analysis of both CMGs 577 and 579 revealed the following factors as highly significant determinants of LOS: procedure, age, physician, and ASA score. Although confounded by procedure type, the CMG did not significantly account for differences in LOS in the model if procedure was considered. Pairwise comparisons of procedure categories were all found to be statistically significant, even when controlled for other important variables. The type of procedure better accounts for differences in LOS by describing six statistically distinct procedure groups rather than the traditional two CMGs. It is reasonable therefore to consider changing the current CMG codes for gynaecology to a classification based on the type of procedure.

  18. Documentation of a numerical code for the simulation of variable density ground-water flow in three dimensions

    USGS Publications Warehouse

    Kuiper, L.K.

    1985-01-01

    A numerical code is documented for the simulation of variable density time dependent groundwater flow in three dimensions. The groundwater density, although variable with distance, is assumed to be constant in time. The Integrated Finite Difference grid elements in the code follow the geologic strata in the modeled area. If appropriate, the determination of hydraulic head in confining beds can be deleted to decrease computation time. The strongly implicit procedure (SIP), successive over-relaxation (SOR), and eight different preconditioned conjugate gradient (PCG) methods are used to solve the approximating equations. The use of the computer program that performs the calculations in the numerical code is emphasized. Detailed instructions are given for using the computer program, including input data formats. An example simulation and the Fortran listing of the program are included. (USGS)
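    One of the solvers listed, successive over-relaxation (SOR), can be sketched on a small symmetric positive-definite system; the matrix below is a toy example, unrelated to the groundwater equations.

```python
import numpy as np

# Successive over-relaxation: a Gauss-Seidel sweep blended with the previous
# iterate via the relaxation factor omega (0 < omega < 2 for SPD systems).
def sor(A, b, omega=1.25, iters=200):
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # sum of off-diagonal terms using already-updated components
            sigma = A[i] @ x - A[i, i] * x[i]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = sor(A, b)  # exact solution is (1/11, 7/11)
```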

  19. Systematic correlation of environmental exposure and physiological and self-reported behaviour factors with leukocyte telomere length.

    PubMed

    Patel, Chirag J; Manrai, Arjun K; Corona, Erik; Kohane, Isaac S

    2017-02-01

    It is hypothesized that environmental exposures and behaviour influence telomere length, an indicator of cellular ageing. We systematically associated 461 indicators of environmental exposures, physiology and self-reported behaviour with telomere length in data from the US National Health and Nutrition Examination Survey (NHANES) in 1999-2002. Further, we tested whether factors identified in the NHANES participants are also correlated with gene expression of telomere length modifying genes. We correlated 461 environmental exposures, behaviours and clinical variables with telomere length, using survey-weighted linear regression, adjusting for sex, age, age squared, race/ethnicity, poverty level, education and born outside the USA, and estimated the false discovery rate to adjust for multiple hypotheses. We conducted a secondary analysis to investigate the correlation between identified environmental variables and gene expression levels of telomere-associated genes in publicly available gene expression samples. After correlating 461 variables with telomere length, we found 22 variables significantly associated with telomere length after adjustment for multiple hypotheses. Of these variables, 14 were associated with longer telomeres, including biomarkers of polychlorinated biphenyls [PCBs; 0.1 to 0.2 standard deviation (SD) increase for 1 SD increase in PCB level, P < 0.002] and a form of vitamin A, retinyl stearate. Eight variables were associated with shorter telomeres, including biomarkers of cadmium, C-reactive protein and lack of physical activity. We could not conclude that PCBs are correlated with gene expression of telomere-associated genes. Both environmental exposures and chronic disease-related risk factors may play a role in telomere length. Our secondary analysis found no evidence of association between PCBs/smoking and gene expression of telomere-associated genes. All correlations between exposures, behaviours and clinical factors and changes in telomere length will require further investigation regarding the biological influence of exposure. © The Author 2016. Published by Oxford University Press on behalf of the International Epidemiological Association.
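    The false-discovery-rate adjustment used when screening 461 variables is commonly the Benjamini-Hochberg procedure; a minimal sketch follows (the p-values are invented, and the paper's exact estimator may differ).

```python
# Benjamini-Hochberg FDR control: sort p-values, find the largest rank k
# with p_(k) <= q * k / m, and declare the k smallest p-values significant.
def benjamini_hochberg(pvals, q=0.05):
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    keep = -1
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            keep = rank
    return sorted(order[:keep]) if keep > 0 else []

# indices of hypotheses surviving FDR control at q = 0.05
significant = benjamini_hochberg([0.001, 0.20, 0.013, 0.04, 0.8])
```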

  20. The Number, Organization, and Size of Polymorphic Membrane Protein Coding Sequences as well as the Most Conserved Pmp Protein Differ within and across Chlamydia Species.

    PubMed

    Van Lent, Sarah; Creasy, Heather Huot; Myers, Garry S A; Vanrompay, Daisy

    2016-01-01

    Variation is a central trait of the polymorphic membrane protein (Pmp) family. The number of pmp coding sequences differs between Chlamydia species, but it is unknown whether the number of pmp coding sequences is constant within a Chlamydia species. The level of conservation of the Pmp proteins has previously only been determined for Chlamydia trachomatis. As different Pmp proteins might be indispensable for the pathogenesis of different Chlamydia species, this study investigated the conservation of Pmp proteins both within and across C. trachomatis, C. pneumoniae, C. abortus, and C. psittaci. The pmp coding sequences were annotated in 16 C. trachomatis, 6 C. pneumoniae, 2 C. abortus, and 16 C. psittaci genomes. The number and organization of polymorphic membrane protein coding sequences differed within and across the analyzed Chlamydia species. The length of the coding sequences of pmpA, pmpB, and pmpH was conserved among all analyzed genomes, while the length of pmpE/F and pmpG, and remarkably also of the subtype pmpD, differed among the analyzed genomes. PmpD, PmpA, PmpH, and PmpA were the most conserved Pmps in C. trachomatis, C. pneumoniae, C. abortus, and C. psittaci, respectively. PmpB was the most conserved Pmp across the 4 analyzed Chlamydia species. © 2016 S. Karger AG, Basel.

  1. An improved and validated RNA HLA class I SBT approach for obtaining full length coding sequences.

    PubMed

    Gerritsen, K E H; Olieslagers, T I; Groeneweg, M; Voorter, C E M; Tilanus, M G J

    2014-11-01

    The functional relevance of human leukocyte antigen (HLA) class I allele polymorphism beyond exons 2 and 3 is difficult to address because more than 70% of the HLA class I alleles are defined by exons 2 and 3 sequences only. For routine application on clinical samples we improved and validated the HLA sequence-based typing (SBT) approach based on RNA templates, using either a single locus-specific or two overlapping group-specific polymerase chain reaction (PCR) amplifications, with three forward and three reverse sequencing reactions for full length sequencing. Locus-specific HLA typing with RNA SBT of a reference panel, representing the major antigen groups, showed identical results compared to DNA SBT typing. Alleles encountered with unknown exons in the IMGT/HLA database and three samples, two with Null and one with a Low expressed allele, have been addressed by the group-specific RNA SBT approach to obtain full length coding sequences. This RNA SBT approach has proven its value in our routine full length definition of alleles. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  2. Accumulate repeat accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate codes' (ARA). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, thus belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder structure for this class can be viewed as a precoded Repeat Accumulate (RA) code or as a precoded Irregular Repeat Accumulate (IRA) code, where simply an accumulator is chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure when representing LDPC codes. Based on density evolution for LDPC codes, through some examples of ARA codes we show that for maximum variable node degree 5, a minimum bit SNR as low as 0.08 dB from channel capacity can be achieved for rate 1/2 as the block size goes to infinity. Thus, for a fixed low maximum variable node degree, its threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high-rate codes close to code rate 1 can be obtained, with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. The ARA codes also have a projected graph or protograph representation that allows for high-speed decoder implementation.
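    The accumulator that gives ARA codes their name is just a running mod-2 sum (a rate-1 1/(1+D) convolutional code). Below is a minimal repeat-then-accumulate sketch; a full ARA encoder would also include the accumulator precoder and an interleaver, which are omitted here.

```python
# Repeat stage: each information bit is copied q times.
def repeat(bits, q):
    return [b for b in bits for _ in range(q)]

# Accumulate stage: running XOR of all bits seen so far,
# i.e. the 1/(1+D) convolution over GF(2).
def accumulate(bits):
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

codeword = accumulate(repeat([1, 0, 1], 2))  # -> [1, 0, 0, 0, 1, 0]
```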

  3. New nonbinary quantum codes with larger distance constructed from BCH codes over 𝔽q2

    NASA Astrophysics Data System (ADS)

    Xu, Gen; Li, Ruihu; Fu, Qiang; Ma, Yuena; Guo, Luobin

    2017-03-01

    This paper concentrates on the construction of new nonbinary quantum error-correcting codes (QECCs) from three classes of narrow-sense imprimitive BCH codes over the finite field 𝔽q2 (q ≥ 3 is an odd prime power). By a careful analysis of the properties of cyclotomic cosets in the defining set T of these BCH codes, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing BCH codes is determined to be much larger than the result given by Aly et al. [S. A. Aly, A. Klappenecker and P. K. Sarvepalli, IEEE Trans. Inf. Theory 53, 1183 (2007)] for each different code length. Thus families of new nonbinary QECCs are constructed, and the newly obtained QECCs have larger distance than those in the previous literature.
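    The cyclotomic cosets analyzed in such constructions are easy to compute; a small sketch with toy parameters (q = 3 and n = 80 are chosen only for illustration, not taken from the paper's code lengths).

```python
# The q^2-ary cyclotomic coset of s modulo n is the orbit of s under
# repeated multiplication by q^2 (mod n); defining sets of BCH codes are
# unions of such cosets.
def cyclotomic_coset(s, base, n):
    coset, x = set(), s % n
    while x not in coset:
        coset.add(x)
        x = (x * base) % n
    return sorted(coset)

q = 3
n = q**4 - 1  # 80: a toy imprimitive-style length
coset1 = cyclotomic_coset(1, q**2, n)  # orbit of 1 under multiplication by 9
```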

  4. The optimal code searching method with an improved criterion of coded exposure for remote sensing image restoration

    NASA Astrophysics Data System (ADS)

    He, Lirong; Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2015-03-01

    Coded exposure photography makes motion de-blurring a well-posed problem. The integration pattern of light is modulated by opening and closing the shutter within the exposure time, changing the traditional shutter frequency spectrum into a wider frequency band in order to preserve more image information in the frequency domain. The method used to search for the optimal code is significant for coded exposure. In this paper, an improved criterion for the optimal code search is proposed by analyzing the relationship between the code length and the number of ones in the code, considering the noise effect on code selection with the affine noise model. The optimal code is then obtained using a genetic search algorithm based on the proposed selection criterion. Experimental results show that the time consumed in searching for the optimal code decreases with the presented method. The restored image shows better subjective quality and superior objective evaluation values.
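    A common way to express such a frequency-domain criterion is to score a candidate code by the minimum magnitude of its DFT, penalizing spectral nulls that make deconvolution ill-posed; the sketch below uses made-up 8-chip codes, far shorter than practical exposure codes, and is not the paper's exact criterion.

```python
import numpy as np

# Score a shutter code by the smallest DFT magnitude: a zero means some
# frequency is destroyed by the blur and cannot be recovered.
def spectrum_score(code):
    mag = np.abs(np.fft.fft(np.asarray(code, dtype=float)))
    return mag.min()

box = [1] * 8                       # traditional open shutter (box filter)
coded = [1, 1, 0, 1, 0, 0, 0, 1]    # a candidate fluttered-shutter code
# The box filter has exact spectral zeros; this coded pattern does not.
```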

  5. Applying Decision Tree Analysis to Risk Factors Associated with Pressure Ulcers in Long-Term Care Facilities.

    PubMed

    Moon, Mikyung; Lee, Soo-Kyoung

    2017-01-01

    The purpose of this study was to use decision tree analysis to explore the factors associated with pressure ulcers (PUs) among elderly people admitted to Korean long-term care facilities. The data were extracted from the 2014 National Inpatient Sample (NIS) data of the Health Insurance Review and Assessment Service (HIRA). A MapReduce-based program was implemented to join and filter 5 tables of the NIS. The outcome predicted by the decision tree model was the prevalence of PUs as defined by the Korean Standard Classification of Disease-7 (KCD-7; code L89*). Using R 3.3.1, a decision tree was generated with the finalized 15,856 cases and 830 variables. The decision tree displayed 15 subgroups with 8 variables, showing 0.804 accuracy, 0.820 sensitivity, and 0.787 specificity. The most significant primary predictor of PUs was a length of stay of less than 0.5 day. Other predictors were the presence of an infectious wound dressing, followed by having fewer than 3.5 diagnoses and the presence of a simple dressing. Among diagnoses, "injuries to the hip and thigh" was the top predictor, ranking 5th overall. Total hospital cost exceeding 2,200,000 Korean won (US $2,000) rounded out the top 7. These results support previous studies that showed length of stay, comorbidity, and total hospital cost were associated with PUs. Moreover, wound dressings were commonly used to treat PUs. They also show that machine learning, such as a decision tree, could effectively predict PUs using big data.
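    The core of a decision tree is the threshold search at each node; a minimal one-split (decision stump) sketch on invented length-of-stay data echoes the paper's top predictor. All numbers below are illustrative, not from HIRA.

```python
# Decision stump: try every threshold on one feature and keep the split
# that misclassifies fewest cases (left branch predicts class 1, right 0).
def best_stump(xs, ys):
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        err = sum(1 for y in left if y != 1) + sum(1 for y in right if y != 0)
        if best is None or err < best[1]:
            best = (t, err)
    return best

los = [0.2, 0.3, 0.4, 3.0, 5.0, 7.0]  # length of stay in days (invented)
pu  = [1,   1,   1,   0,   0,   0  ]  # 1 = pressure ulcer recorded
threshold, errors = best_stump(los, pu)  # splits cleanly at 0.4 days
```

A full tree applies this search recursively over all features, which is how "length of stay" can emerge as the top-level split.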

  6. Migration of patients between five urban teaching hospitals in Chicago.

    PubMed

    Galanter, William L; Applebaum, Andrew; Boddipalli, Viveka; Kho, Abel; Lin, Michael; Meltzer, David; Roberts, Anna; Trick, Bill; Walton, Surrey M; Lambert, Bruce L

    2013-04-01

    To quantify the extent of patient sharing and inpatient care fragmentation among patients discharged from a cohort of Chicago hospitals. Admission and discharge dates and patient ZIP codes from 5 hospitals over 2 years were matched with an encryption algorithm. Admission to more than one hospital was considered fragmented care. The association between fragmentation and socio-economic variables was measured using ZIP-code data from the 2000 US Census. Using validation from one hospital, patient matching using encrypted identifiers had a sensitivity of 99.3% and a specificity of 100%. The cohort contained 228,151 unique patients and 334,828 admissions. Roughly 2% of the patients received fragmented care, accounting for 5.8% of admissions and 6.4% of hospital days. In 3 of 5 hospitals, and overall, the length of stay of patients with fragmented care was longer than that of those without. Fragmentation varied by hospital and was associated with the proportion of non-Caucasian persons, the proportion of residents whose income fell in the lowest quartile, and the proportion of children being raised by single mothers in the patient's ZIP code. Patients receiving fragmented care accounted for 6.4% of hospital days. This percentage is a low estimate for our region, since not all regional hospitals participated, but high enough to suggest value in creating a Health Information Exchange. Fragmentation varied by hospital, per capita income, race, and proportion of single-mother homes. This secure methodology and fragmentation analysis may prove useful for future analyses.
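    One standard way to match records across hospitals without exchanging raw identifiers is keyed hashing; the sketch below uses HMAC-SHA-256 with a hypothetical shared key and a hypothetical normalization rule (the paper's actual encryption algorithm is not described here).

```python
import hashlib
import hmac

# Hypothetical shared key agreed by all participating hospitals.
SECRET = b"shared-site-key"

def pseudonym(name, dob):
    # Normalize identifiers so trivial formatting differences still match,
    # then derive a keyed token that cannot be reversed without the key.
    msg = f"{name.strip().lower()}|{dob}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

# The same patient hashed at two hospitals yields the same token:
a = pseudonym("Jane Doe", "1980-01-31")
b = pseudonym(" JANE DOE ", "1980-01-31")
```

Each site computes tokens locally and only the tokens are pooled, so admissions can be linked across hospitals while names and dates of birth never leave the site.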

  7. 2D-pattern matching image and video compression: theory, algorithms, and experiments.

    PubMed

    Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth

    2002-01-01

    In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.

  8. Modern multicore and manycore architectures: Modelling, optimisation and benchmarking a multiblock CFD code

    NASA Astrophysics Data System (ADS)

    Hadade, Ioan; di Mare, Luca

    2016-08-01

    Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied on two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at great length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques together with optimisations pertaining to alleviating NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assert their efficiency for each distinct architecture. We report significant speedups for single thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.

  9. Complete nucleotide sequence of pig (Sus scrofa) mitochondrial genome and dating evolutionary divergence within Artiodactyla.

    PubMed

    Lin, C S; Sun, Y L; Liu, C Y; Yang, P C; Chang, L C; Cheng, I C; Mao, S J; Huang, M C

    1999-08-05

    The complete nucleotide sequence of the pig (Sus scrofa) mitochondrial genome, containing 16,613 bp, is presented in this report. The genome does not have a fixed length because of the presence of variable numbers of the tandem repeat 5'-CGTGCGTACA in the displacement loop (D-loop). Genes responsible for 12S and 16S rRNAs, 22 tRNAs, and 13 protein-coding regions are found. The genome carries very few intergenic nucleotides, with several instances of overlap between protein-coding or tRNA genes, except in the D-loop region. For evaluating the possible evolutionary relationships between Artiodactyla and Cetacea, the nucleotide substitutions and amino acid sequences of 13 protein-coding genes were aligned by pairwise comparisons of the pig, cow, and fin whale. By comparing these sequences, we suggest that there is a closer relationship between the pig and cow than between either of these species and the fin whale. In addition, the accumulation of transversions and gaps in pig 12S and 16S rRNA genes was compared with that in other eutherian species, including cow, fin whale, human, horse, and harbor seal. The results also reveal a close phylogenetic relationship between pig and cow, as compared to fin whale and others. Thus, according to the sequence differences of mitochondrial rRNA genes in eutherian species, the evolutionary separation of pig and cow occurred about 53-60 million years ago.
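    The transversion bookkeeping behind such rRNA comparisons can be sketched with a simple classifier of aligned substitutions; the sequence fragments below are invented toy data, not pig or cow sequences.

```python
# Classify each mismatched aligned position: purine<->purine or
# pyrimidine<->pyrimidine is a transition; crossing classes is a
# transversion. Gap positions are skipped.
PURINES, PYRIMIDINES = {"A", "G"}, {"C", "T"}

def substitution_counts(seq1, seq2):
    transitions = transversions = 0
    for a, b in zip(seq1, seq2):
        if a == b or "-" in (a, b):
            continue
        same_class = ({a, b} <= PURINES) or ({a, b} <= PYRIMIDINES)
        transitions += same_class
        transversions += not same_class
    return transitions, transversions

ts, tv = substitution_counts("AGCTTAG", "GGCTAAC")  # toy alignment
```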

  10. Navier-Stokes Simulation of Homogeneous Turbulence on the CYBER 205

    NASA Technical Reports Server (NTRS)

    Wu, C. T.; Ferziger, J. H.; Chapman, D. R.; Rogallo, R. S.

    1984-01-01

    A computer code which solves the Navier-Stokes equations for three-dimensional, time-dependent, homogeneous turbulence has been written for the CYBER 205. The code has options for both 64-bit and 32-bit arithmetic. With 32-bit computation, mesh sizes up to 64^3 are contained within core of a 2-million 64-bit word memory. Timing runs were made for various vector lengths up to 6144. With this code, speeds of a little over 100 Mflops have been achieved on a 2-pipe CYBER 205. Several problems encountered in the coding are discussed.

  11. Complete mitochondrial genome of an Asian lion (Panthera leo goojratensis).

    PubMed

    Li, Yu-Fei; Wang, Qiang; Zhao, Jian-ning

    2016-01-01

    The entire mitochondrial genome of the Asian lion (Panthera leo goojratensis) was 17,183 bp in length; its gene composition and arrangement conformed to those of other lions, comprising the typical structure of 22 tRNAs, 2 rRNAs, 13 protein-coding genes and a non-coding region. The characteristics of the mitochondrial genome were analyzed in detail.

  12. A Comparison of Six MMPI Short Forms: Code Type Correspondence and Indices of Psychopathology.

    ERIC Educational Resources Information Center

    Willcockson, James C.; And Others

    1983-01-01

    Compared six Minnesota Multiphasic Personality Inventory (MMPI) short forms with the full-length MMPI for their ability to identify code types and indices of psychopathology in renal dialysis patients (N=53) and paranoid schizophrenics (N=58). Results suggested that the accuracy of the short forms fluctuates for different patient populations and…

  13. Two Independent Contributions to Step Variability during Over-Ground Human Walking

    PubMed Central

    Collins, Steven H.; Kuo, Arthur D.

    2013-01-01

    Human walking exhibits small variations in both step length and step width, some of which may be related to active balance control. Lateral balance is thought to require integrative sensorimotor control through adjustment of step width rather than length, contributing to greater variability in step width. Here we propose that step length variations are largely explained by the typical human preference for step length to increase with walking speed, which itself normally exhibits some slow and spontaneous fluctuation. In contrast, step width variations should have little relation to speed if they are produced more for lateral balance. As a test, we examined hundreds of overground walking steps by healthy young adults (N = 14, age < 40 yrs.). We found that slow fluctuations in self-selected walking speed (2.3% coefficient of variation) could explain most of the variance in step length (59%, P < 0.01). The residual variability not explained by speed was small (1.5% coefficient of variation), suggesting that step length is actually quite precise if not for the slow speed fluctuations. Step width varied over faster time scales and was independent of speed fluctuations, with variance 4.3 times greater than that for step length (P < 0.01) after accounting for the speed effect. That difference was further magnified by walking with eyes closed, which appears detrimental to control of lateral balance. Humans appear to modulate fore-aft foot placement in precise accordance with slow fluctuations in walking speed, whereas the variability of lateral foot placement appears more closely related to balance. Step variability is separable in both direction and time scale into balance- and speed-related components. The separation of factors not related to balance may reveal which aspects of walking are most critical for the nervous system to control. PMID:24015308
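    The speed-based decomposition described above amounts to the R^2 of a simple least-squares regression of step length on walking speed. A minimal sketch with synthetic numbers (not the study's data):

```python
def variance_explained(speed, step_length):
    """Fraction of step-length variance explained by a linear fit on
    speed: the R^2 of simple least-squares regression, computed as the
    squared sample correlation."""
    n = len(speed)
    mx = sum(speed) / n
    my = sum(step_length) / n
    sxx = sum((x - mx) ** 2 for x in speed)
    syy = sum((y - my) ** 2 for y in step_length)
    sxy = sum((x - mx) * (y - my) for x, y in zip(speed, step_length))
    return sxy * sxy / (sxx * syy)

# Synthetic steps: length tracks slow speed fluctuations plus a small
# independent residual, mimicking the paper's finding.
speed = [1.20, 1.25, 1.22, 1.30, 1.28, 1.24, 1.26, 1.21]
length = [0.68, 0.71, 0.69, 0.74, 0.72, 0.70, 0.715, 0.685]
r2 = variance_explained(speed, length)
```

The residual variance 1 - r2 is the speed-independent component; the paper's analogous quantity for step width was about 4.3 times larger than for step length.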

  14. Effects of Deployment on the Mental Health of Service Members at Fort Hood

    DTIC Science & Technology

    2006-07-06

    [Table residue: respondent counts by number of deployments (n=167, Once n=1498, More Than Once n=566) and by gender.] ...(items 4-7, 9, 10, 13-16) and item 2 in the provider section. Gender was coded as a binary variable; the remainder of the nominal-level variables... [Appendix C, Variables, Measures, and Coding of Data: gender coded FEMALE=0, MALE=1.]

  15. Data and code for the exploratory data analysis of the electrical energy demand in the time domain in Greece.

    PubMed

    Tyralis, Hristos; Karakatsanis, Georgios; Tzouka, Katerina; Mamassis, Nikos

    2017-08-01

    We present data and code for visualizing electrical energy data and weather-, climate-related and socioeconomic variables in the time domain in Greece. The electrical energy data include hourly demand, week-ahead forecasts of the demand provided by the Greek Independent Power Transmission Operator, and pricing values in Greece. We also present the daily temperature in Athens and the Gross Domestic Product of Greece. The code combines the data into a single report, which includes visualizations of combinations of all variables at multiple time scales. The data and code were used in Tyralis et al. (2017) [1].
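    The multiple-time-scale aggregation such a report relies on can be sketched in plain Python; the toy values below are illustrative and the authors' published code is not reproduced here:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def aggregate(hourly, scale):
    """Average hourly observations onto a coarser time scale.
    `hourly` maps datetime -> value; `scale` is 'day' or 'month'."""
    buckets = defaultdict(list)
    for t, v in hourly.items():
        key = t.date() if scale == "day" else (t.year, t.month)
        buckets[key].append(v)
    return {k: sum(vs) / len(vs) for k, vs in buckets.items()}

# Toy demand series: two days of hourly values (MW), linearly rising.
start = datetime(2016, 1, 1)
hourly = {start + timedelta(hours=h): 5000 + 10 * h for h in range(48)}
daily = aggregate(hourly, "day")
```

The same pattern extends to weekly, monthly, or seasonal scales, so each pair of variables can be visualized at every scale from one data structure.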

  16. Performance of Low-Density Parity-Check Coded Modulation

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2011-02-01

    This article presents the simulated performance of a family of nine AR4JA low-density parity-check (LDPC) codes when used with each of five modulations. In each case, the decoder inputs are codebit log-likelihood ratios computed from the received (noisy) modulation symbols using a general formula which applies to arbitrary modulations. Suboptimal soft-decision and hard-decision demodulators are also explored. Bit-interleaving and various mappings of bits to modulation symbols are considered. A number of subtle decoder algorithm details are shown to affect performance, especially in the error floor region. Among these are quantization dynamic range and step size, clipping degree-one variable nodes, "Jones clipping" of variable nodes, approximations of the min* function, and partial hard-limiting messages from check nodes. Using these decoder optimizations, all coded modulations simulated here are free of error floors down to codeword error rates below 10^{-6}. The purpose of generating this performance data is to aid system engineers in determining an appropriate code and modulation to use under specific power and bandwidth constraints, and to provide information needed to design a variable/adaptive coded modulation (VCM/ACM) system using the AR4JA codes.
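    In the simplest case, BPSK over AWGN, the general LLR formula the article refers to reduces to the closed form 2y/sigma^2. The sketch below shows both the closed form and the generic symbol-set computation it specializes; this is a didactic illustration, not the article's code:

```python
import math

def bpsk_llr(y, noise_var):
    """LLR log P(bit=0|y)/P(bit=1|y) for BPSK over AWGN, with bit 0
    mapped to +1 and bit 1 to -1: the closed form 2*y/sigma^2."""
    return 2.0 * y / noise_var

def bpsk_llr_generic(y, noise_var, symbols={0: +1.0, 1: -1.0}):
    """Same LLR from the general formula over the symbol set. For
    larger constellations, each bit's LLR sums Gaussian channel
    likelihoods over the symbols whose label carries that bit value."""
    lik = {b: math.exp(-(y - s) ** 2 / (2.0 * noise_var))
           for b, s in symbols.items()}
    return math.log(lik[0] / lik[1])
```

For higher-order modulations only the generic form survives: the per-bit sums over symbol subsets no longer collapse to a linear expression in y.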

  17. Subspace Arrangement Codes and Cryptosystems

    DTIC Science & Technology

    2011-05-09

    A central problem in coding theory is finding codes that have a small number of digits (length) with a high number of codewords (dimension), as well as good error-correction properties

  18. Observations on Polar Coding with CRC-Aided List Decoding

    DTIC Science & Technology

    2016-09-01

    Polar codes are a new type of forward error correction (FEC) code, introduced by Arikan in [1], in which he...error correction (FEC) currently used and planned for use in Navy wireless communication systems. The project's results from FY14 and FY15 are...good error-correction performance. We used the Tal/Vardy method of [5]. The polar encoder uses a row vector u of length N. Let uA be the subvector
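    The encoder step quoted above (multiplying a length-N row vector u by the polar generator matrix) can be sketched with the standard recursive polar transform; this is textbook material, not code from the report:

```python
def polar_transform(u):
    """Polar encode x = u * G_N over GF(2), where G_N is built
    recursively from the kernel F = [[1,0],[1,1]] (the n-fold
    Kronecker power, without bit-reversal) and len(u) = 2^n."""
    n = len(u)
    if n == 1:
        return u[:]
    half = n // 2
    # Butterfly step: the upper half encodes u1 XOR u2, the lower
    # half encodes u2, mirroring G_N = [[G, 0], [G, G]].
    top = polar_transform([a ^ b for a, b in zip(u[:half], u[half:])])
    bottom = polar_transform(u[half:])
    return top + bottom
```

Since F squared is the identity over GF(2), the transform is its own inverse, which is handy for checking an implementation.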

  19. Analysis and Simulation of Narrowband GPS Jamming Using Digital Excision Temporal Filtering.

    DTIC Science & Technology

    1994-12-01

    the sequence of stored values from the P-code sampled at a 20 MHz rate. When correlated with a reference vector of the same length to simulate a GPS ...rate required for the GPS signals (20 MHz sampling rate for the P-code signal), the personal computer (PC) used to run the simulation could not perform...This subroutine is used to perform a fast FFT-based biased cross-correlation. Written by Capt Gerry Falen, USAF, 16 AUG 94
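    For reference, the biased cross-correlation such a subroutine computes can be written in direct form; an FFT-based version produces the same values, only in O(N log N). The short spreading-code example is synthetic:

```python
def biased_xcorr(x, y):
    """Biased cross-correlation r[k] = (1/N) * sum_m x[m+k] * y[m]
    for lags k = 0..N-1 (direct form; an FFT implementation computes
    the same quantity faster for long vectors)."""
    n = len(x)
    return [sum(x[m + k] * y[m] for m in range(n - k)) / n
            for k in range(n)]

# A +/-1 spreading-code-like reference correlated against a copy of
# itself delayed by two samples: the correlation peaks at lag 2.
ref = [1, -1, 1, 1, -1, -1, 1, -1]
delayed = [0, 0] + ref[:-2]
r = biased_xcorr(delayed, ref)
```

Locating the peak lag is exactly how a receiver aligns its P-code replica with the incoming signal.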

  20. Buccal telomere length and its associations with cortisol, heart rate variability, heart rate, and blood pressure responses to an acute social evaluative stressor in college students.

    PubMed

    Woody, Alex; Hamilton, Katrina; Livitz, Irina E; Figueroa, Wilson S; Zoccola, Peggy M

    2017-05-01

    Understanding the relationship between stress and telomere length (a marker of cellular aging) is of great interest for reducing aging-related disease and death. One important aspect of acute stress exposure that may underlie detrimental effects on health is physiological reactivity to the stressor. This study tested the relationship between buccal telomere length and physiological reactivity (salivary cortisol reactivity and total output, heart rate (HR) variability, blood pressure, and HR) to an acute psychosocial stressor in a sample of 77 (53% male) healthy young adults. Consistent with predictions, greater reductions in HR variability (HRV) in response to a stressor and greater cortisol output during the study session were associated with shorter relative buccal telomere length (i.e. greater cellular aging). However, the relationship between cortisol output and buccal telomere length became non-significant when adjusting for medication use. Contrary to past findings and study hypotheses, associations between cortisol, blood pressure, and HR reactivity and relative buccal telomere length were not significant. Overall, these findings may indicate there are limited and mixed associations between stress reactivity and telomere length across physiological systems.

  1. The role of feedback information for calibration and attunement in perceiving length by dynamic touch.

    PubMed

    Withagen, Rob; Michaels, Claire F

    2005-12-01

    Two processes have been hypothesized to underlie improvement in perception: attunement and calibration. These processes were examined in a dynamic touch paradigm in which participants were asked to report the lengths of unseen, wielded rods differing in length, diameter, and material. Two experiments addressed whether feedback informs about the need for reattunement and recalibration. Feedback indicating actual length induced both recalibration and reattunement. Recalibration did not occur when feedback indicated only whether 2 rods were of the same length or of different lengths. Such feedback, however, did induce reattunement. These results suggest that attunement and calibration are dissociable processes and that feedback informs which is needed. The observed change in variable use has implications also for research on what mechanical variables underlie length perception by dynamic touch. (c) 2005 APA, all rights reserved.

  2. Decoy state method for quantum cryptography based on phase coding into faint laser pulses

    NASA Astrophysics Data System (ADS)

    Kulik, S. P.; Molotkov, S. N.

    2017-12-01

    We discuss the photon-number-splitting (PNS) attack in quantum cryptography systems with phase coding. It is shown that this attack, as well as the structural equations for the PNS attack with phase encoding, differs physically from the analogous attack applied to polarization coding. As far as we know, in all experimental work to date, data for phase coding have in practice been processed using formulas derived for polarization coding. This can lead to inadequate results for the length of the secret key. These calculations are important for the correct interpretation of results, especially as concerns the secrecy criterion in quantum cryptography.

  3. Streamlined Genome Sequence Compression using Distributed Source Coding

    PubMed Central

    Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel

    2014-01-01

    We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require a heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity needs of the client side. Based on the variation between source and reference, our protocol adaptively picks either syndrome coding or hash coding to compress subsequences of varying code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552
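    To illustrate the syndrome-coding branch of such a protocol (a toy sketch, not the authors' implementation), a (7,4) Hamming code compresses a 7-bit block to its 3-bit syndrome; a decoder holding a reference that differs in at most one bit can recover the source exactly:

```python
# (7,4) Hamming parity-check matrix; column j is j written in binary.
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def syndrome(bits):
    """3-bit syndrome s = H * bits (mod 2) of a 7-bit block."""
    return [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]

def decode(ref, s):
    """Recover the source block from the decoder-side reference and
    the transmitted syndrome, assuming source and reference differ
    in at most one position."""
    # The difference syndrome equals the H-column of the differing
    # bit, i.e. its 1-based index in binary (zero means no difference).
    d = [(a + b) % 2 for a, b in zip(syndrome(ref), s)]
    pos = d[0] * 4 + d[1] * 2 + d[2]
    out = ref[:]
    if pos:
        out[pos - 1] ^= 1
    return out

# Client sends only the 3-bit syndrome of its 7-bit block.
x = [1, 0, 1, 1, 0, 0, 1]
s = syndrome(x)
ref = [1, 0, 0, 1, 0, 0, 1]   # reference differs from x in bit 3
recovered = decode(ref, s)
```

The client-side cost is a single sparse matrix-vector product, which is what makes the approach attractive for low-power encoders.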

  4. Experimental study of non-binary LDPC coding for long-haul coherent optical QPSK transmissions.

    PubMed

    Zhang, Shaoliang; Arabaci, Murat; Yaman, Fatih; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Inada, Yoshihisa; Ogata, Takaaki; Aoki, Yasuhiro

    2011-09-26

    The performance of a rate-0.8 4-ary LDPC code has been studied in a 50 GHz-spaced 40 Gb/s DWDM system with PDM-QPSK modulation. A net effective coding gain of 10 dB is obtained at a BER of 10^-6. With the aid of time-interleaving polarization multiplexing and MAP detection, 10,560 km transmission over legacy dispersion-managed fiber is achieved without any countable errors. The proposed nonbinary quasi-cyclic LDPC code achieves an uncoded BER threshold at 4×10^-2. Potential issues like phase ambiguity and coding length are also discussed when implementing LDPC in current coherent optical systems. © 2011 Optical Society of America

  5. A low-complexity and high performance concatenated coding scheme for high-speed satellite communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Rhee, Dojun; Rajpal, Sandeep

    1993-01-01

    This report presents a low-complexity and high performance concatenated coding scheme for high-speed satellite communications. In this proposed scheme, the NASA Standard Reed-Solomon (RS) code over GF(2^8) is used as the outer code and the second-order Reed-Muller (RM) code of Hamming distance 8 is used as the inner code. The RM inner code has a very simple trellis structure and is decoded with the soft-decision Viterbi decoding algorithm. It is shown that the proposed concatenated coding scheme achieves an error performance which is comparable to that of the NASA TDRS concatenated coding scheme in which the NASA Standard rate-1/2 convolutional code of constraint length 7 and d_free = 10 is used as the inner code. However, the proposed RM inner code has much smaller decoding complexity, less decoding delay, and much higher decoding speed. Consequently, the proposed concatenated coding scheme is suitable for reliable high-speed satellite communications, and it may be considered as an alternate coding scheme for the NASA TDRS system.

  6. JPL-ANTOPT antenna structure optimization program

    NASA Technical Reports Server (NTRS)

    Strain, D. M.

    1994-01-01

    New antenna path-length error and pointing-error structure optimization codes were recently added to the MSC/NASTRAN structural analysis computer program. Path-length and pointing errors are important measures of structure-related antenna performance. The path-length and pointing errors are treated as scalar displacements for statics loading cases. These scalar displacements can be subject to constraint during the optimization process. Path-length and pointing-error calculations supplement the other optimization and sensitivity capabilities of NASTRAN. The analysis and design functions were implemented as 'DMAP ALTERs' to the Design Optimization (SOL 200) Solution Sequence of MSC/NASTRAN, Version 67.5.

  7. Tag retention, growth, and survival of red swamp crayfish Procambarus clarkii marked with coded wire tags

    USGS Publications Warehouse

    Isely, J.J.; Eversole, A.G.

    1998-01-01

    Juvenile red swamp crayfish (or crawfish), Procambarus clarkii (20-41 mm in total length) were collected from a crayfish culture pond by dipnetting and tagged with sequentially numbered, standard length, binary-coded wire tags. Four replicates of 50 crayfish were impaled perpendicular to the long axis of the abdomen with a fixed needle. Tags were injected transversely into the ventral surface of the first or second abdominal segment and were imbedded in the musculature just beneath the abdominal sternum. Tags were visible upon inspection. Additionally, two replicates of 50 crayfish were not tagged and were used as controls. Growth, survival, and tag retention were evaluated after 7 d in individual containers, after 100 d in aquaria, and after 200 d in field cages. Tag retention during each sample period was 100%, and average mortality of tagged crayfish within 7 d of tagging was 1%. Mortality during the remainder of the study was high (75-91%) but was similar between treatment and control samples. Most of the deaths were probably due to cannibalism. Average total length increased threefold during the course of the study, and crayfish reached maturity. Because crayfish were mature by the end of the study, we concluded that the coded wire tag was retained through the life history of the crayfish.

  8. XGlycScan: An Open-source Software For N-linked Glycosite Assignment, Quantification and Quality Assessment of Data from Mass Spectrometry-based Glycoproteomic Analysis.

    PubMed

    Aiyetan, Paul; Zhang, Bai; Zhang, Zhen; Zhang, Hui

    2014-01-01

    Mass spectrometry based glycoproteomics has become a major means of identifying and characterizing previously N-linked glycan attached loci (glycosites). In the bottom-up approach, several factors, which include but are not limited to sample preparation, mass spectrometry analyses, and protein sequence database searches, result in previously N-linked peptide spectrum matches (PSMs) of varying lengths. Given that multiple PSMs can map to a glycosite, we reason that identified PSMs are varying-length peptide species of a unique set of glycosites. Because associated spectra of these PSMs are typically summed separately, true glycosite-associated spectra counts are lost or complicated. Also, these varying-length peptide species complicate protein inference, as smaller peptide sequences are more likely to map to more proteins than larger peptides or actual glycosite sequences. Here, we present XGlycScan. XGlycScan maps varying-length peptide species to glycosites to facilitate an accurate quantification of glycosite-associated spectra counts. We observed that this reduced the variability in reported identifications of mass spectrometry technical replicates of our sample dataset. We also observed that mapping identified peptides to glycosites provided an assessment of search-engine identification. Inherently, XGlycScan's reported glycosites reduce the complexity in protein inference. We implemented XGlycScan in the platform-independent Java programming language and have made it available as open source. XGlycScan's source code is freely available at https://bitbucket.org/paiyetan/xglycscan/src and its compiled binaries and documentation can be freely downloaded at https://bitbucket.org/paiyetan/xglycscan/downloads. The graphical user interface version can be found at https://bitbucket.org/paiyetan/xglycscangui/src and https://bitbucket.org/paiyetan/xglycscangui/downloads, respectively.

  9. Variable TERRA abundance and stability in cervical cancer cells.

    PubMed

    Oh, Bong-Kyeong; Keo, Ponnarath; Bae, Jaeman; Ko, Jung Hwa; Choi, Joong Sub

    2017-06-01

    Telomeres are transcribed into long non-coding RNA, referred to as telomeric repeat-containing RNA (TERRA), which plays important roles in maintaining telomere integrity and heterochromatin formation. TERRA has been well characterized in HeLa cells, a type of cervical cancer cell. However, TERRA abundance and stability have not been examined in other cervical cancer cells, at least to the best of our knowledge. Thus, in this study, we measured TERRA levels and stability, as well as telomere length in 6 cervical cancer cell lines, HeLa, SiHa, CaSki, HeLa S3, C-33A and SNU-17. We also examined the association between the TERRA level and its stability and telomere length. We found that the TERRA level was several fold greater in the SiHa, CaSki, HeLa S3, C-33A and SNU-17 cells, than in the HeLa cells. An RNA stability assay of actinomycin D-treated cells revealed that TERRA had a short half-life of ~4 h in HeLa cells, which was consistent with previous studies, but was more stable with a longer half-life (>8 h) in the other 5 cell lines. Telomere length varied from 4 to 9 kb in the cells and did not correlate significantly with the TERRA level. On the whole, our data indicate that TERRA abundance and stability vary between different types of cervical cancer cells. TERRA degrades rapidly in HeLa cells, but is maintained stably in other cervical cancer cells that accumulate higher levels of TERRA. TERRA abundance is associated with the stability of RNA in cervical cancer cells, but is unlikely associated with telomere length.

  10. Distinctive mitochondrial genome of Calanoid copepod Calanus sinicus with multiple large non-coding regions and reshuffled gene order: Useful molecular markers for phylogenetic and population studies

    PubMed Central

    2011-01-01

    Background Copepods are highly diverse and abundant, resulting in extensive ecological radiation in marine ecosystems. Calanus sinicus dominates continental shelf waters in the northwest Pacific Ocean and plays an important role in the local ecosystem by linking primary production to higher trophic levels. A lack of effective molecular markers has hindered phylogenetic and population genetic studies concerning copepods. As they are genome-level informative, mitochondrial DNA sequences can be used as markers for population genetic studies and phylogenetic studies. Results The mitochondrial genome of C. sinicus is distinct from other arthropods owing to the concurrence of multiple non-coding regions and a reshuffled gene arrangement. Further particularities in the mitogenome of C. sinicus include low A + T-content, symmetrical nucleotide composition between strands, abbreviated stop codons for several PCGs and extended lengths of the genes atp6 and atp8 relative to other copepods. The monophyletic Copepoda should be placed within the Vericrustacea. The close affinity between Cyclopoida and Poecilostomatoida suggests reassigning the latter as subordinate to the former. Monophyly of Maxillopoda is rejected. Within the alignment of 11 C. sinicus mitogenomes, there are 397 variable sites harbouring three 'hotspot' variable sites and three microsatellite loci. Conclusion The occurrence of the circular subgenomic fragment during laboratory assays suggests that special caution should be taken when sequencing mitogenomes using long PCR. Such a phenomenon may provide additional evidence of mitochondrial DNA recombination, which appears to have been a prerequisite for shaping the present mitochondrial profile of C. sinicus during its evolution. The lack of synapomorphic gene arrangements among copepods has cast doubt on the utility of gene order as a useful molecular marker for deep phylogenetic analysis. However, mitochondrial genomic sequences have been valuable markers for resolving phylogenetic issues concerning copepods. The variable site maps of C. sinicus mitogenomes provide a solid foundation for population genetic studies. PMID:21269523

  11. Entanglement-assisted quantum quasicyclic low-density parity-check codes

    NASA Astrophysics Data System (ADS)

    Hsieh, Min-Hsiu; Brun, Todd A.; Devetak, Igor

    2009-03-01

    We investigate the construction of quantum low-density parity-check (LDPC) codes from classical quasicyclic (QC) LDPC codes with girth greater than or equal to 6. We have shown that the classical codes in the generalized Calderbank-Shor-Steane construction do not need to satisfy the dual-containing property as long as preshared entanglement is available to both sender and receiver. We can use this to avoid the many four cycles which typically arise in dual-containing LDPC codes. The advantage of such quantum codes comes from the use of efficient decoding algorithms such as the sum-product algorithm (SPA). It is well known that in the SPA, cycles of length 4 make successive decoding iterations highly correlated and hence limit the decoding performance. We show the principle of constructing quantum QC-LDPC codes which require only small amounts of initial shared entanglement.

  12. Good Trellises for IC Implementation of Viterbi Decoders for Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Moorthy, Hari T.; Lin, Shu; Uehara, Gregory T.

    1997-01-01

    This paper investigates trellis structures of linear block codes for the integrated circuit (IC) implementation of Viterbi decoders capable of achieving high decoding speed while satisfying a constraint on the structural complexity of the trellis in terms of the maximum number of states at any particular depth. Only uniform sectionalizations of the code trellis diagram are considered. An upper-bound on the number of parallel and structurally identical (or isomorphic) subtrellises in a proper trellis for a code without exceeding the maximum state complexity of the minimal trellis of the code is first derived. Parallel structures of trellises with various section lengths for binary BCH and Reed-Muller (RM) codes of lengths 32 and 64 are analyzed. Next, the complexity of IC implementation of a Viterbi decoder based on an L-section trellis diagram for a code is investigated. A structural property of a Viterbi decoder called add-compare-select (ACS)-connectivity which is related to state connectivity is introduced. This parameter affects the complexity of wire-routing (interconnections within the IC). The effect of five parameters namely: (1) effective computational complexity; (2) complexity of the ACS-circuit; (3) traceback complexity; (4) ACS-connectivity; and (5) branch complexity of a trellis diagram on the very large scale integration (VLSI) complexity of a Viterbi decoder is investigated. It is shown that an IC implementation of a Viterbi decoder based on a nonminimal trellis requires less area and is capable of operation at higher speed than one based on the minimal trellis when the commonly used ACS-array architecture is considered.

  13. Neural code alterations and abnormal time patterns in Parkinson’s disease

    NASA Astrophysics Data System (ADS)

    Andres, Daniela Sabrina; Cerquetti, Daniel; Merello, Marcelo

    2015-04-01

    Objective. The neural code used by the basal ganglia is a current question in neuroscience, relevant for the understanding of the pathophysiology of Parkinson’s disease. While a rate code is known to participate in the communication between the basal ganglia and the motor thalamus/cortex, different lines of evidence have also favored the presence of complex time patterns in the discharge of the basal ganglia. To gain insight into the way the basal ganglia code information, we studied the activity of the globus pallidus pars interna (GPi), an output node of the circuit. Approach. We implemented the 6-hydroxydopamine model of Parkinsonism in Sprague-Dawley rats, and recorded the spontaneous discharge of single GPi neurons, in head-restrained conditions at full alertness. Analyzing the temporal structure function, we looked for characteristic scales in the neuronal discharge of the GPi. Main results. At a low-scale, we observed the presence of dynamic processes, which allow the transmission of time patterns. Conversely, at a middle-scale, stochastic processes force the use of a rate code. Regarding the time patterns transmitted, we measured the word length and found that it is increased in Parkinson’s disease. Furthermore, it showed a positive correlation with the frequency of discharge, indicating that an exacerbation of this abnormal time pattern length can be expected, as the dopamine depletion progresses. Significance. We conclude that a rate code and a time pattern code can co-exist in the basal ganglia at different temporal scales. However, their normal balance is progressively altered and replaced by pathological time patterns in Parkinson’s disease.

  14. Good trellises for IC implementation of viterbi decoders for linear block codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Moorthy, Hari T.; Uehara, Gregory T.

    1996-01-01

    This paper investigates trellis structures of linear block codes for the IC (integrated circuit) implementation of Viterbi decoders capable of achieving high decoding speed while satisfying a constraint on the structural complexity of the trellis in terms of the maximum number of states at any particular depth. Only uniform sectionalizations of the code trellis diagram are considered. An upper bound on the number of parallel and structurally identical (or isomorphic) subtrellises in a proper trellis for a code without exceeding the maximum state complexity of the minimal trellis of the code is first derived. Parallel structures of trellises with various section lengths for binary BCH and Reed-Muller (RM) codes of lengths 32 and 64 are analyzed. Next, the complexity of IC implementation of a Viterbi decoder based on an L-section trellis diagram for a code is investigated. A structural property of a Viterbi decoder called ACS-connectivity which is related to state connectivity is introduced. This parameter affects the complexity of wire-routing (interconnections within the IC). The effect of five parameters namely: (1) effective computational complexity; (2) complexity of the ACS-circuit; (3) traceback complexity; (4) ACS-connectivity; and (5) branch complexity of a trellis diagram on the VLSI complexity of a Viterbi decoder is investigated. It is shown that an IC implementation of a Viterbi decoder based on a non-minimal trellis requires less area and is capable of operation at higher speed than one based on the minimal trellis when the commonly used ACS-array architecture is considered.

  15. How to differentiate collective variables in free energy codes: Computer-algebra code generation and automatic differentiation

    NASA Astrophysics Data System (ADS)

    Giorgino, Toni

    2018-07-01

    The proper choice of collective variables (CVs) is central to biased-sampling free energy reconstruction methods in molecular dynamics simulations. The PLUMED 2 library, for instance, provides several sophisticated CV choices, implemented in a C++ framework; however, developing new CVs is still time consuming due to the need to provide code for the analytical derivatives of all functions with respect to atomic coordinates. We present two solutions to this problem, namely (a) symbolic differentiation and code generation, and (b) automatic code differentiation, in both cases leveraging open-source libraries (SymPy and Stan Math, respectively). The two approaches are demonstrated and discussed in detail implementing a realistic example CV, the local radius of curvature of a polymer. Users may use the code as a template to streamline the implementation of their own CVs using high-level constructs and automatic gradient computation.
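    The automatic-differentiation route (b) can be illustrated with forward-mode dual numbers: the same code that evaluates a CV yields its derivative. This is a didactic sketch of the idea, not PLUMED's or Stan Math's actual (reverse-mode) machinery:

```python
class Dual:
    """Minimal forward-mode AD value: carries f and df/dx together,
    so a derivative comes out of the same code that evaluates f."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule: (fg)' = f'g + fg'
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """df/dx at x, obtained by seeding the dual part with 1."""
    return f(Dual(x, 1.0)).dot

# Example: f(x) = x^2 + 3x, so f'(2) = 2*2 + 3 = 7.
fprime = derivative(lambda x: x * x + 3 * x, 2.0)
```

For a CV depending on many atomic coordinates, reverse mode (as in Stan Math) amortizes better, since one backward sweep yields all partial derivatives at once.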

  16. Length and sequence variability in mitochondrial control region of the milkfish, Chanos chanos.

    PubMed

    Ravago, Rachel G; Monje, Virginia D; Juinio-Meñez, Marie Antonette

    2002-01-01

    Extensive length variability was observed in the mitochondrial control region of the milkfish, Chanos chanos. The nucleotide sequence of the control region and flanking regions was determined. Length variability and heteroplasmy was due to the presence of varying numbers of a 41-bp tandemly repeated sequence and a 48-bp insertion/deletion (indel). The structure and organization of the milkfish control region is similar to that of other teleost fish and vertebrates. However, extensive variation in the copy number of tandem repeats (4-20 copies) and the presence of a relatively large (48-bp) indel, are apparently uncommon in teleost fish control region sequences reported to date. High sequence variability of control region peripheral domains indicates the potential utility of selected regions as markers for population-level studies.

  17. Pediatric severe sepsis in U.S. children's hospitals.

    PubMed

    Balamuth, Fran; Weiss, Scott L; Neuman, Mark I; Scott, Halden; Brady, Patrick W; Paul, Raina; Farris, Reid W D; McClead, Richard; Hayes, Katie; Gaieski, David; Hall, Matt; Shah, Samir S; Alpern, Elizabeth R

    2014-11-01

    To compare the prevalence, resource utilization, and mortality for pediatric severe sepsis identified using two established identification strategies. Observational cohort study from 2004 to 2012. Forty-four pediatric hospitals contributing data to the Pediatric Health Information Systems database. Children 18 years old or younger. We identified patients with severe sepsis or septic shock by using two International Classification of Diseases, 9th edition, Clinical Modification-based coding strategies: 1) combinations of International Classification of Diseases, 9th edition, Clinical Modification codes for infection plus organ dysfunction (combination code cohort); 2) International Classification of Diseases, 9th edition, Clinical Modification codes for severe sepsis and septic shock (sepsis code cohort). Outcomes included prevalence of severe sepsis, as well as hospital and ICU length of stay, and mortality. Outcomes were compared between the two cohorts examining aggregate differences over the study period and trends over time. The combination code cohort identified 176,124 hospitalizations (3.1% of all hospitalizations), whereas the sepsis code cohort identified 25,236 hospitalizations (0.45%), a seven-fold difference. Between 2004 and 2012, the prevalence of sepsis increased from 3.7% to 4.4% using the combination code cohort and from 0.4% to 0.7% using the sepsis code cohort (p < 0.001 for trend in each cohort). Length of stay (hospital and ICU) and costs decreased in both cohorts over the study period (p < 0.001). Overall, hospital mortality was higher in the sepsis code cohort than the combination code cohort (21.2% [95% CI, 20.7-21.8] vs 8.2% [95% CI, 8.0-8.3]). Over the 9-year study period, there was an absolute reduction in mortality of 10.9% (p < 0.001) in the sepsis code cohort and 3.8% (p < 0.001) in the combination code cohort. Prevalence of pediatric severe sepsis increased in the studied U.S. 
children's hospitals over the past 9 years, whereas resource utilization and mortality decreased. Epidemiologic estimates of pediatric severe sepsis varied up to seven-fold depending on the strategy used for case ascertainment.

  18. Non-White, No More: Effect Coding as an Alternative to Dummy Coding with Implications for Higher Education Researchers

    ERIC Educational Resources Information Center

    Mayhew, Matthew J.; Simonoff, Jeffrey S.

    2015-01-01

    The purpose of this article is to describe effect coding as an alternative quantitative practice for analyzing and interpreting categorical, race-based independent variables in higher education research. Unlike indicator (dummy) codes that imply that one group will be a reference group, effect codes use average responses as a means for…
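    The contrast between the two schemes can be sketched as follows (the group labels and helper function are illustrative, not from the article): under dummy coding the reference group is coded all zeros, whereas under effect coding it is coded -1 on every column, so coefficients are interpreted as deviations from the grand mean rather than from the reference group.

```python
def code_matrix(observations, levels, reference, effect=False):
    """One row per observation, one column per non-reference level.
    Dummy coding: reference rows are all 0s.
    Effect coding: reference rows are all -1s."""
    cols = [lv for lv in levels if lv != reference]
    rows = []
    for obs in observations:
        if effect and obs == reference:
            rows.append([-1] * len(cols))
        else:
            rows.append([1 if obs == c else 0 for c in cols])
    return rows

levels = ["A", "B", "C"]
obs = ["A", "B", "C"]
print(code_matrix(obs, levels, reference="C"))               # dummy: C -> [0, 0]
print(code_matrix(obs, levels, reference="C", effect=True))  # effect: C -> [-1, -1]
```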

  19. Variable-Period Undulators For Synchrotron Radiation

    DOEpatents

    Shenoy, Gopal; Lewellen, John; Shu, Deming; Vinokurov, Nikolai

    2005-02-22

    A new and improved undulator design is provided that enables a variable period length for the production of synchrotron radiation from both medium-energy and high-energy storage rings. The variable period length is achieved using a staggered array of pole pieces made of high-permeability material, permanent-magnet material, or an electromagnetic structure. The pole pieces are separated by a variable-width space; the sum of this space and the pole width defines the period of the undulator. Features and advantages of the invention include broad photon-energy tunability, constant-power operation and constant-brilliance operation.

  20. [Long non-coding RNAs in the pathophysiology of atherosclerosis].

    PubMed

    Novak, Jan; Vašků, Julie Bienertová; Souček, Miroslav

    2018-01-01

    The human genome contains about 22 000 protein-coding genes that are transcribed into an even larger number of messenger RNAs (mRNAs). Interestingly, the 2012 results of the ENCODE project show that although up to 90% of our genome is actively transcribed, protein-coding mRNAs make up only 2-3% of the total transcribed RNA. The remaining transcripts are not translated into proteins, which is why they are referred to as "non-coding RNAs". Non-coding RNA was once considered the "dark matter of the genome", or "junk" that had accumulated in our DNA over the course of evolution. Today we know that non-coding RNAs fulfil a variety of regulatory functions in the body: they intervene in epigenetic processes from chromatin remodelling to histone methylation, in transcription itself, and in post-transcriptional processes. Long non-coding RNAs (lncRNAs) are the class of non-coding RNAs longer than 200 nucleotides (non-coding RNAs shorter than 200 nucleotides are called small non-coding RNAs). lncRNAs represent a large and widely varied group of molecules with diverse regulatory functions. They can be identified in virtually all cell types and tissues, and even in the extracellular space, including blood plasma. Their levels change during organogenesis, they are specific to different tissues, and they also change with the development of various diseases, including atherosclerosis. This review article first presents lncRNAs in general and then focuses on specific representatives in relation to the process of atherosclerosis (i.e. 
we describe lncRNA involvement in the biology of endothelial cells, vascular smooth muscle cells and immune cells), and we further describe the possible clinical potential of lncRNAs in the diagnosis and therapy of atherosclerosis and its clinical manifestations. Key words: atherosclerosis - lincRNA - lncRNA - MALAT - MIAT.

  1. The Prehospital Sepsis Project: out-of-hospital physiologic predictors of sepsis outcomes.

    PubMed

    Baez, Amado Alejandro; Hanudel, Priscilla; Wilcox, Susan Renee

    2013-12-01

    Severe sepsis and septic shock are common, expensive and often fatal medical problems. The care of the critically sick and injured often begins in the prehospital setting, yet there are limited data available on predictors of and interventions specific to sepsis in the prehospital arena. The objective of this study was to assess the predictive effect of physiologic elements commonly reported in the out-of-hospital setting on the outcomes of patients transported with sepsis. This was a cross-sectional descriptive study. Data from the years 2004-2006 were collected. Adult cases (≥18 years of age) transported by Emergency Medical Services to a major academic center with a diagnosis of sepsis, as defined by ICD-9-CM diagnostic codes, were included. Descriptive statistics and standard deviations were used to present group characteristics. Chi-square was used for statistical significance and the odds ratio (OR) to assess strength of association. Statistical significance was set at the .05 level. Physiologic variables studied included mean arterial pressure (MAP), heart rate (HR), respiratory rate (RR) and shock index (SI). Sixty-three (63) patients were included. Outcome variables included a mean hospital length of stay (HLOS) of 13.75 days (SD = 9.97), mean ventilator days of 4.93 (SD = 7.87), in-hospital mortality of 22 out of 63 (34.9%), and mean intensive care unit length of stay (ICU-LOS) of 7.02 days (SD = 7.98). Although SI and RR were found to predict intensive care unit (ICU) admissions [OR 5.96 (CI, 1.49-25.78; P = .003) and OR 4.81 (CI, 1.16-21.01; P = .0116), respectively], none of the studied variables was found to predict mortality (MAP <65 mmHg: P = .39; HR >90: P = .60; RR >20: P = .11; SI >0.7: P = .35). This study demonstrated that the out-of-hospital shock index and respiratory rate are strong predictors of ICU admission. Further studies should include the development of an out-of-hospital sepsis score.
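    The physiologic variables above follow standard bedside definitions, which the abstract does not spell out and are assumed here: MAP ≈ (SBP + 2·DBP)/3 and shock index SI = HR/SBP. A minimal sketch flagging the study's cut-offs:

```python
def vitals_flags(sbp, dbp, hr, rr):
    """Flag out-of-hospital vitals against the study's thresholds.
    Uses the standard estimates MAP = (SBP + 2*DBP)/3 and SI = HR/SBP."""
    mean_ap = (sbp + 2 * dbp) / 3
    si = hr / sbp
    return {
        "MAP<65": mean_ap < 65,
        "HR>90": hr > 90,
        "RR>20": rr > 20,
        "SI>0.7": si > 0.7,
    }

# Hypothetical patient: SBP 100, DBP 60, HR 110, RR 24.
print(vitals_flags(sbp=100, dbp=60, hr=110, rr=24))
```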

  2. Relationship between mRNA secondary structure and sequence variability in Chloroplast genes: possible life history implications.

    PubMed

    Krishnan, Neeraja M; Seligmann, Hervé; Rao, Basuthkar J

    2008-01-28

    Synonymous sites are freer to vary because of redundancy in the genetic code. Messenger RNA secondary structure restricts this freedom, as revealed by previous findings in mitochondrial genes that mutations at third codon position nucleotides in helices are more selected against than those in loops. This motivated us to explore the constraints imposed by mRNA secondary structure on evolutionary variability at all codon positions in general, in chloroplast systems. We found that the evolutionary variability and intrinsic secondary structure stability of these sequences share an inverse relationship. Simulations of most likely single nucleotide evolution in Psilotum nudum and Nephroselmis olivacea mRNAs indicate that helix-forming propensities of mutated mRNAs are greater than those of the natural mRNAs for short sequences and vice versa for long sequences. Moreover, helix-forming propensity, estimated by the percentage of total mRNA in helices, increases gradually with mRNA length, saturating beyond 1000 nucleotides. Protection levels of functionally important sites vary across plants and proteins: r-strategists minimize mutation costs in large genes; K-strategists do the opposite. mRNA length presumably predisposes shorter mRNAs to evolve under different constraints than longer mRNAs. The positive correlation between secondary structure protection and functional importance of sites suggests that some sites might be conserved due to packing-protection constraints at the nucleic acid level in addition to protein-level constraints. Consequently, nucleic acid secondary structure a priori biases mutations. The converse (exposure of conserved sites) apparently occurs in a smaller number of cases, indicating a different evolutionary adaptive strategy in these plants. The differences between the protection levels of functionally important sites for r- and K-strategists reflect their respective molecular adaptive strategies. 
These converge with increasing domestication levels of K-strategists, perhaps because domestication increases reproductive output.

  3. Accumulate-Repeat-Accumulate-Accumulate-Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Thorpe, Jeremy

    2004-01-01

    Inspired by recently proposed Accumulate-Repeat-Accumulate (ARA) codes [15], in this paper we propose a channel coding scheme called Accumulate-Repeat-Accumulate-Accumulate (ARAA) codes. These codes can be seen as serial turbo-like codes or as a subclass of Low Density Parity Check (LDPC) codes, and they have a projected graph or protograph representation; this allows for a high-speed iterative decoder implementation using belief propagation. An ARAA code can be viewed as a precoded Repeat-and-Accumulate (RA) code with puncturing in concatenation with another accumulator, where simply an accumulator is chosen as the precoder; thus ARAA codes have a very fast encoder structure. Using density evolution on their associated protographs, we find examples of rate-1/2 ARAA codes with maximum variable node degree 4 for which a minimum bit-SNR as low as 0.21 dB from the channel capacity limit can be achieved as the block size goes to infinity. Such a low threshold cannot be achieved by RA or Irregular RA (IRA) or unstructured irregular LDPC codes with the same constraint on the maximum variable node degree. Furthermore by puncturing the accumulators we can construct families of higher rate ARAA codes with thresholds that stay close to their respective channel capacity thresholds uniformly. Iterative decoding simulation results show comparable performance with the best-known LDPC codes but with very low error floor even at moderate block sizes.
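    The building blocks named in the title are simple: an accumulator is a rate-1 running mod-2 sum, and the repeater copies each bit q times. A minimal sketch of the accumulate-repeat-accumulate chain follows; real ARAA codes also include interleaving and puncturing, which are omitted here:

```python
def accumulate(bits):
    """Rate-1 accumulator: running XOR (mod-2 prefix sum) of the input."""
    out, state = [], 0
    for b in bits:
        state ^= b
        out.append(state)
    return out

def repeat(bits, q):
    """Repeat each bit q times."""
    return [b for b in bits for _ in range(q)]

msg = [1, 0, 1, 1]
# Accumulate (precoder) -> Repeat(2) -> Accumulate: the serial chain
# that gives ARAA-style codes their name.
codeword = accumulate(repeat(accumulate(msg), 2))
print(codeword)  # [1, 0, 1, 0, 0, 0, 1, 0]
```

    Because each stage is a simple streaming operation, the encoder runs in linear time, which is the "very fast encoder structure" the abstract refers to.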

  4. Action-oriented colour-coded foot length calliper for primary healthcare workers as a proxy for birth weight & gestational period

    PubMed Central

    Pratinidhi, Asha K.; Bagade, Abhijit C.; Kakade, Satish V.; Kale, Hemangi P.; Kshirsagar, Vinayak Y.; Babar, Rohini; Bagal, Shilpa

    2017-01-01

    Background & objectives: Foot length of the newborn has a good correlation with birth weight and is recommended as a proxy measure. There can be variations in the measurement of foot length. A study was, therefore, carried out to develop a foot length calliper for accurate foot length measurement and to find cut-off values for birth weight and gestational age groups to be used by primary healthcare workers. Methods: This study was undertaken on 645 apparently healthy newborn infants with known gestational age. Nude birth weight was taken within 24 h of birth on a standard electronic weighing machine. A foot length calliper was developed. Correlation between foot length and birth weight as well as gestational age was calculated. Correctness of cut-off values was tested using another set of 133 observations on apparently healthy newborns. Action-oriented colour coding was applied to make the calliper easy for primary healthcare workers to use. Results: There was a significant correlation of foot length with birth weight (r=0.75) and gestational age (r=0.63). Cut-off values were 6.1, 6.8 and 7.3 cm for birth-weight groups and 6.1, 6.8 and 7.0 cm for gestational-age groups. Correctness of these cut-off values ranged between 77.1 and 95.7 per cent for birth weight and 60-93.3 per cent for gestational age. Considering 2.5 kg as the cut-off between normal birth weight and low birth weight (LBW), cut-off values of 6.1, 6.8 and 7.3 cm were chosen. Action-oriented colour coding was done by superimposing the colours on the scale of the calliper, green indicating home care, yellow indicating supervised home care, orange indicating care at newborn care units at primary health centres and red indicating Neonatal Intensive Care Unit care for infants. 
Interpretation & conclusions: A simple device was developed so that the primary health care workers and trained Accredited Social Health Activist workers can identify the risk of LBW in the absence of accurate weighing facilities and decide on the type of care needed by the newborn and take action accordingly. PMID:28749397
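    The colour bands can be sketched as a simple threshold lookup over the reported cut-offs (6.1, 6.8 and 7.3 cm). The assignment of each colour to an interval is our assumption from the described risk ordering (shorter foot, higher risk), not a table from the article:

```python
def foot_length_band(length_cm):
    """Map newborn foot length to the calliper's colour band.
    Interval-to-colour assignment is assumed from the risk ordering."""
    if length_cm < 6.1:
        return "red"      # Neonatal Intensive Care Unit care
    if length_cm < 6.8:
        return "orange"   # newborn care unit at primary health centre
    if length_cm < 7.3:
        return "yellow"   # supervised home care
    return "green"        # home care

for cm in (5.9, 6.5, 7.0, 7.5):
    print(cm, foot_length_band(cm))
```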

  5. Run-length encoding graphic rules, biochemically editable designs and steganographical numeric data embedment for DNA-based cryptographical coding system.

    PubMed

    Kawano, Tomonori

    2013-03-01

    There have been a wide variety of approaches to handling pieces of DNA as "unplugged" tools for digital information storage and processing, including a series of studies in security-related areas such as DNA-based digital barcodes, watermarks and cryptography. In the present article, novel designs of artificial genes are proposed as media for storing digitally compressed image data for bio-computing purposes, whereas natural genes principally encode proteins. Furthermore, the proposed system allows cryptographical application of DNA through biochemically editable designs with capacity for steganographical embedment of numeric data. As a model application of the image-coding DNA technique, numerically and biochemically combined protocols are employed for ciphering given "passwords" and/or secret numbers using DNA sequences. The "passwords" of interest were decomposed into single letters and translated into font images coded on separate DNA chains, with coding regions in which the images are encoded according to the novel run-length encoding rule, and non-coding regions designed for the biochemical editing and remodeling processes that reveal the hidden orientation of the letters composing the original "passwords." The latter processes require molecular biological tools for digestion and ligation of the fragmented DNA molecules, targeting the termini of the chains engineered by the polymerase chain reaction. Lastly, additional protocols for steganographical overwriting of numeric data of interest over the image-coding DNA are also discussed.
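    The run-length idea at the core of the image-coding scheme can be sketched generically; the article's specific DNA-based encoding rule is not reproduced here, and the bitmap row below is made up:

```python
def rle_encode(s):
    """Run-length encode a string into (symbol, count) pairs."""
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def rle_decode(pairs):
    """Invert rle_encode."""
    return "".join(ch * n for ch, n in pairs)

# A tiny monochrome "font image" row: 1 = ink, 0 = blank.
row = "0011110000111100"
packed = rle_encode(row)
print(packed)  # [('0', 2), ('1', 4), ('0', 4), ('1', 4), ('0', 2)]
assert rle_decode(packed) == row
```

    Long runs of identical pixels, typical of font bitmaps, compress well under this rule, which is why run-length encoding suits image data destined for a limited DNA alphabet.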

  6. Variability and trends in dry day frequency and dry event length in the southwestern United States

    USGS Publications Warehouse

    McCabe, Gregory J.; Legates, David R.; Lins, Harry F.

    2010-01-01

    Daily precipitation data from 22 National Weather Service first-order weather stations in the southwestern United States for water years 1951 through 2006 are used to examine variability and trends in the frequency of dry days and in dry event length. Dry events defined by minimum thresholds of 10 and 20 consecutive days with less than 2.54 mm of precipitation are analyzed. For water years and cool seasons (October through March), most sites indicate negative trends in dry event length (i.e., dry event durations are becoming shorter). For the warm season (April through September), most sites also indicate negative trends; however, more sites indicate positive trends in dry event length for the warm season than for water years or cool seasons. The larger number of sites indicating positive trends in dry event length during the warm season is due to a series of dry warm seasons near the end of the 20th century and the beginning of the 21st century. Overall, a large portion of the variability in dry event length is attributable to variability of the El Niño–Southern Oscillation, especially for water years and cool seasons. Our results are consistent with analyses of trends in discharge for sites in the southwestern United States, an increased frequency in El Niño events, and positive trends in precipitation in the southwestern United States.
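    The event definition above translates directly into a scan over daily totals: count runs of consecutive days below the 2.54 mm threshold and keep those meeting the minimum length. A sketch with made-up data:

```python
def dry_events(precip_mm, threshold=2.54, min_days=10):
    """Return lengths of runs of consecutive days with precipitation
    below `threshold`, keeping only runs of at least `min_days`."""
    events, run = [], 0
    for p in precip_mm:
        if p < threshold:
            run += 1
        else:
            if run >= min_days:
                events.append(run)
            run = 0
    if run >= min_days:        # close a run that reaches the end
        events.append(run)
    return events

# Made-up record: 12 dry days, a wet day, 9 dry days (too short to
# count), a wet day, then 15 dry days.
precip = [0.0] * 12 + [5.0] + [1.0] * 9 + [3.0] + [0.2] * 15
print(dry_events(precip))  # [12, 15]
```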

  7. Within-Event and Between-Events Ground Motion Variability from Earthquake Rupture Scenarios

    NASA Astrophysics Data System (ADS)

    Crempien, Jorge G. F.; Archuleta, Ralph J.

    2017-09-01

    Measurement of ground motion variability is essential to estimate seismic hazard. Over-estimation of variability can lead to extremely high annual hazard estimates of ground motion exceedance. We explore different parameters that affect the variability of ground motion, such as the spatial correlations of kinematic rupture parameters on a finite fault and the corner frequency of the moment-rate spectra. To quantify the variability of ground motion, we simulate kinematic rupture scenarios on several vertical strike-slip faults and compute ground motion using the representation theorem. In particular, for the entire suite of rupture scenarios, we quantify the within-event and between-events ground motion variability of peak ground acceleration (PGA) and response spectra at several periods, at 40 stations, all at approximately equal distances of 20 and 50 km from the fault. Both within-event and between-events ground motion variability increase when the slip correlation length on the fault increases. The probability density functions of ground motion tend to truncate at a finite value when the correlation length of slip decreases on the fault; therefore, we do not observe any long-tailed distribution of peak ground acceleration when performing several rupture simulations for small correlation lengths. Finally, for a correlation length of 6 km, the within-event and between-events PGA log-normal standard deviations are 0.58 and 0.19, respectively, values slightly smaller than those reported by Boore et al. (Earthq Spectra, 30(3):1057-1085, 2014). The between-events standard deviation is consistently smaller than the within-event for all correlation lengths, a feature that agrees with recent ground motion prediction equations.

  8. Evaluation of Grid Modification Methods for On- and Off-Track Sonic Boom Analysis

    NASA Technical Reports Server (NTRS)

    Nayani, Sudheer N.; Campbell, Richard L.

    2013-01-01

    Grid modification methods have been under development at NASA to enable better predictions of low-boom pressure signatures from supersonic aircraft. As part of this effort, two new codes, Stretched and Sheared Grid - Modified (SSG) and Boom Grid (BG), have been developed in the past year. The CFD results from these codes have been compared with ones from the earlier grid modification codes Stretched and Sheared Grid (SSGRID) and Mach Cone Aligned Prism (MCAP) and also with the available experimental results. NASA's unstructured grid suite of software TetrUSS and the automatic sourcing code AUTOSRC were used for base grid generation and flow solutions. The BG method has been evaluated on three wind tunnel models. Pressure signatures have been obtained up to two body lengths below a Gulfstream aircraft wind tunnel model. Good agreement with the wind tunnel results has been obtained for both on-track and off-track (up to 53 degrees) cases. On-track pressure signatures up to ten body lengths below a Straight Line Segmented Leading Edge (SLSLE) wind tunnel model have been extracted, with good agreement with the wind tunnel results. Pressure signatures have been obtained at 1.5 body lengths below a Lockheed Martin aircraft wind tunnel model. Good agreement with the wind tunnel results has been obtained for both on-track and off-track (up to 40 degrees) cases. Grid sensitivity studies have been carried out to investigate any grid-size-related issues. Methods have been evaluated for fully turbulent, mixed laminar/turbulent and fully laminar flow conditions.

  9. Counter-propagation network with variable degree variable step size LMS for single switch typing recognition.

    PubMed

    Yang, Cheng-Huei; Luo, Ching-Hsing; Yang, Cheng-Hong; Chuang, Li-Yeh

    2004-01-01

    Morse code is now being harnessed for use in rehabilitation applications of augmentative-alternative communication and assistive technology, including mobility, environmental control and adapted worksite access. In this paper, Morse code is selected as an adaptive communication device for disabled persons who suffer from muscle atrophy, cerebral palsy or other severe handicaps. A stable typing rate is strictly required for Morse code to be effective as a communication tool, and this restriction is a major hindrance. Therefore, a switch-adaptive automatic recognition method with a high recognition rate is needed. The proposed system combines counter-propagation networks with a variable degree variable step size LMS algorithm. It is divided into five stages: space recognition, tone recognition, learning process, adaptive processing, and character recognition. Statistical analyses demonstrated that the proposed method achieved a better recognition rate than alternative methods in the literature.
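    At the core of the adaptive stage is the LMS update w ← w + μ·e·x. This sketch uses the plain fixed-step-size LMS on a toy system-identification task; the paper's variable-degree, variable-step-size modification is not reproduced, and the filter settings here are arbitrary:

```python
import random

def lms_identify(x, d, n_taps, mu):
    """Plain LMS: adapt FIR weights w so that w . [x[k], x[k-1], ...]
    tracks the desired signal d[k]."""
    w = [0.0] * n_taps
    for k in range(n_taps - 1, len(x)):
        window = x[k - n_taps + 1:k + 1][::-1]      # newest sample first
        y = sum(wi * xi for wi, xi in zip(w, window))
        e = d[k] - y                                # prediction error
        w = [wi + mu * e * xi for wi, xi in zip(w, window)]
    return w

random.seed(0)
true_w = [0.5, -0.3, 0.2]                           # unknown system to recover
x = [random.uniform(-1, 1) for _ in range(2000)]
d = [sum(tw * x[k - i] for i, tw in enumerate(true_w)) if k >= 2 else 0.0
     for k in range(len(x))]
w = lms_identify(x, d, n_taps=3, mu=0.1)
print(w)  # converges toward [0.5, -0.3, 0.2]
```

    A variable step size replaces the constant μ with one that grows when the error is large and shrinks near convergence, trading tracking speed against steady-state accuracy, which is what lets the recognizer follow a user's drifting typing rate.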

  10. Complete mitochondrial genome sequence of the heart failure model of cardiomyopathic Syrian hamster (Mesocricetus auratus).

    PubMed

    Hu, Bo; Liu, Dong-Xing; Zhang, Yu-Qing; Song, Jian-Tao; Ji, Xian-Fei; Hou, Zhi-Qiang; Zhang, Zhen-Hai

    2016-05-01

    In this study we report, for the first time, the complete mitochondrial genome sequence of a heart-failure model of the cardiomyopathic Syrian hamster (Mesocricetus auratus). The total length of the mitogenome was 16,267 bp. It harbored 13 protein-coding genes, 2 ribosomal RNA genes, 22 transfer RNA genes and 1 non-coding control region.

  11. An Empirical Test of the Modified C Index and SII, O*NET, and DHOC Occupational Code Classifications

    ERIC Educational Resources Information Center

    Dik, Bryan J.; Hu, Ryan S. C.; Hansen, Jo-Ida C.

    2007-01-01

    The present study investigated new approaches for assessing Holland's congruence hypothesis by (a) developing and applying four sets of decision rules for assigning Holland codes of varying lengths for purposes of computing Eggerth and Andrew's modified C index; (b) testing the modified C index computed using these four approaches against Brown…

  12. Numerical Predictions of Damage and Failure in Carbon Fiber Reinforced Laminates Using a Thermodynamically-Based Work Potential Theory

    NASA Technical Reports Server (NTRS)

    Pineda, Evan Jorge; Waas, Anthony M.

    2013-01-01

    A thermodynamically-based work potential theory for modeling progressive damage and failure in fiber-reinforced laminates is presented. The current, multiple-internal state variable (ISV) formulation, referred to as enhanced Schapery theory (EST), utilizes separate ISVs for modeling the effects of damage and failure. Consistent characteristic lengths are introduced into the formulation to govern the evolution of the failure ISVs. Using the stationarity of the total work potential with respect to each ISV, a set of thermodynamically consistent evolution equations for the ISVs are derived. The theory is implemented into a commercial finite element code. The model is verified against experimental results from two laminated, T800/3900-2 panels containing a central notch and different fiber-orientation stacking sequences. Global load versus displacement, global load versus local strain gage data, and macroscopic failure paths obtained from the models are compared against the experimental results.

  13. Numerical solution of a multi-ion one-potential model for electroosmotic flow in two-dimensional rectangular microchannels.

    PubMed

    Van Theemsche, Achim; Deconinck, Johan; Van den Bossche, Bart; Bortels, Leslie

    2002-10-01

    A new, more general numerical model for the simulation of electrokinetic flow in rectangular microchannels is presented. The model is based on the dilute solution model and the Navier-Stokes equations and has been implemented in a finite-element-based C++ code. The model includes the ion distribution in the Helmholtz double layer and considers only a single electrical potential field variable throughout the domain. On charged surfaces the surface charge density, which is proportional to the local electrical field, is imposed. The zeta potential then results from this boundary condition and depends on concentrations, temperature, ion valence, molecular diffusion coefficients, and geometric conditions. Validation cases show that the model accurately reproduces known analytical results, including for geometries having dimensions comparable to the Debye length. As a final study, the electroosmotic flow in a controlled cross channel is investigated.

  14. Thermal radiation characteristics of nonisothermal cylindrical enclosures using a numerical ray tracing technique

    NASA Technical Reports Server (NTRS)

    Baumeister, Joseph F.

    1990-01-01

    Analysis of energy emitted from simple or complex cavity designs can lead to intricate solutions due to nonuniform radiosity and irradiation within a cavity. A numerical ray tracing technique was applied to simulate radiation propagating within and from various cavity designs. To obtain the energy balance relationships between isothermal and nonisothermal cavity surfaces and space, the computer code NEVADA was utilized for its statistical technique applied to numerical ray tracing. The analysis method was validated by comparing results with known theoretical and limiting solutions, and the electrical resistance network method. In general, for nonisothermal cavities the performance (apparent emissivity) is a function of cylinder length-to-diameter ratio, surface emissivity, and cylinder surface temperatures. The extent of nonisothermal conditions in a cylindrical cavity significantly affects the overall cavity performance. Results are presented over a wide range of parametric variables for use as a possible design reference.

  15. Computational Study of Fluidic Thrust Vectoring using Separation Control in a Nozzle

    NASA Technical Reports Server (NTRS)

    Deere, Karen; Berrier, Bobby L.; Flamm, Jeffrey D.; Johnson, Stuart K.

    2003-01-01

    A computational investigation of a two-dimensional nozzle was completed to assess the use of fluidic injection to manipulate flow separation and cause thrust vectoring of the primary jet thrust. The nozzle was designed with a recessed cavity to enhance the throat shifting method of fluidic thrust vectoring. The structured-grid, computational fluid dynamics code PAB3D was used to guide the design and analyze over 60 configurations. Nozzle design variables included cavity convergence angle, cavity length, fluidic injection angle, upstream minimum height, aft deck angle, and aft deck shape. All simulations were computed with a static freestream Mach number of 0.05, a nozzle pressure ratio of 3.858, and a fluidic injection flow rate equal to 6 percent of the primary flow rate. Results indicate that the recessed cavity enhances the throat shifting method of fluidic thrust vectoring and allows for greater thrust-vector angles without compromising thrust efficiency.

  16. Characterization of the FB-NOF Transposable Element of Drosophila melanogaster

    PubMed Central

    Harden, N.; Ashburner, M.

    1990-01-01

    FB-NOF is a composite transposable element of Drosophila melanogaster. It is composed of foldback sequences, of variable length, which flank a 4-kb NOF sequence with 308-bp inverted repeat termini. The NOF sequence could potentially code for a 120-kD polypeptide. The FB-NOF element is responsible for unstable mutations of the white gene (w(c) and w(DZL)) and is associated with the large TEs of G. Ising. Although most strains of D. melanogaster have 20-30 sites of FB insertion, FB-NOF elements are usually rare: many strains lack this composite element or have only one copy of it. A few strains, including w(DZL) and Basc, have many (8-21) copies of FB-NOF, and these show a tendency to insert at "hot-spots." These strains also have an increased number of FB elements. The DNA sequence of the NOF region associated with TE146(Z) has been determined. PMID:2174013

  17. SWIFT: SPH With Inter-dependent Fine-grained Tasking

    NASA Astrophysics Data System (ADS)

    Schaller, Matthieu; Gonnet, Pedro; Chalk, Aidan B. G.; Draper, Peter W.

    2018-05-01

    SWIFT runs cosmological simulations on peta-scale machines, solving gravity and SPH. It uses the Fast Multipole Method (FMM) to calculate gravitational forces between nearby particles, combining these with long-range forces provided by a mesh that captures both the periodic nature of the calculation and the expansion of the simulated universe. SWIFT currently uses a single softening length for all the particles, fixed across particles but variable in time. Many useful external potentials are also available, such as galaxy haloes or stratified boxes that are used in idealised problems. SWIFT implements a standard LCDM cosmology background expansion and solves the equations in a comoving frame; equations of state of dark energy evolve with the scale factor. The structure of the code allows modified-gravity solvers or self-interacting dark matter schemes to be implemented. Many hydrodynamics schemes are implemented in SWIFT, and the software allows users to add their own.

  18. Improved theory of time domain reflectometry with variable coaxial cable length for electrical conductivity measurements

    USDA-ARS?s Scientific Manuscript database

    Although empirical models have been developed previously, a mechanistic model is needed for estimating electrical conductivity (EC) using time domain reflectometry (TDR) with variable lengths of coaxial cable. The goals of this study are to: (1) derive a mechanistic model based on multisection tra...

  19. Do diagnosis-related groups appropriately explain variations in costs and length of stay of hip replacement? A comparative assessment of DRG systems across 10 European countries.

    PubMed

    Geissler, Alexander; Scheller-Kreinsen, David; Quentin, Wilm

    2012-08-01

    This paper assesses the variations in costs and length of stay for hip replacement cases in Austria, England, Estonia, Finland, France, Germany, Ireland, Poland, Spain and Sweden and examines the ability of national diagnosis-related group (DRG) systems to explain this variation in resource use against a set of patient-characteristic and treatment-specific variables. In total, 195,810 cases clustered in 712 hospitals were analyzed using OLS fixed-effects models for cost data (n=125,698) and negative binomial models for length-of-stay data (n=70,112). The number of DRGs differs widely across the 10 European countries (range: 2-14). Underlying this wide range is a different use of classification variables; in particular, secondary diagnoses and treatment options are considered to differing extents. In six countries, a standard set of patient characteristics and treatment variables explains the variation in costs or length of stay better than the DRG variables. This raises questions about the adequacy of these countries' DRG systems or the lack of specific criteria that could be used as classification variables. Copyright © 2012 John Wiley & Sons, Ltd.

  20. Nonequilibrium air radiation (Nequair) program: User's manual

    NASA Technical Reports Server (NTRS)

    Park, C.

    1985-01-01

    A supplement to the data relating to the calculation of nonequilibrium radiation in flight regimes of aeroassisted orbital transfer vehicles contains the listings of the computer code NEQAIR (Nonequilibrium Air Radiation), its primary input data, and explanation of the user-supplied input variables. The user-supplied input variables are the thermodynamic variables of air at a given point, i.e., number densities of various chemical species, translational temperatures of heavy particles and electrons, and vibrational temperature. These thermodynamic variables do not necessarily have to be in thermodynamic equilibrium. The code calculates emission and absorption characteristics of air under these given conditions.

  1. Evolutionary Models of Red Supergiants: Evidence for A Metallicity-dependent Mixing Length and Implications for Type IIP Supernova Progenitors

    NASA Astrophysics Data System (ADS)

    Chun, Sang-Hyun; Yoon, Sung-Chul; Jung, Moo-Keon; Kim, Dong Uk; Kim, Jihoon

    2018-01-01

    Recent studies on the temperatures of red supergiants (RSGs) in the local universe provide us with an excellent observational constraint on RSG models. We calibrate the mixing length parameter by comparing model predictions with the empirical RSG temperatures in Small and Large Magellanic Clouds, Milky Way, and M31, which are inferred from the TiO band and the spectral energy distribution (SED). Although our RSG models are computed with the MESA code, our result may be applied to other stellar evolution codes, including the BEC and TWIN codes. We find evidence that the mixing length increases with increasing metallicity for both cases where the TiO and SED temperatures of RSGs are used for the calibration. Together with the recent finding of a similar correlation in low-mass red giants by Tayar et al., this implies that the metallicity dependence of the mixing length is a universal feature in post-main sequence stars of both low and high masses. Our result implies that typical Type IIP supernova (SN IIP) progenitors with initial masses of ∼10-16 M⊙ have a radius range of 400 R⊙ ≲ R ≲ 800 R⊙ regardless of metallicity. As an auxiliary result of this study, we find that the hydrogen-rich envelope mass of SN IIP progenitors for a given initial mass is predicted to be largely independent of metallicity if the Ledoux criterion with slow semiconvection is adopted, while the Schwarzschild models predict systematically more massive hydrogen-rich envelopes for lower metallicity.

  2. Coding stimulus amplitude by correlated neural activity

    NASA Astrophysics Data System (ADS)

    Metzen, Michael G.; Ávila-Åkerberg, Oscar; Chacron, Maurice J.

    2015-04-01

    While correlated activity is observed ubiquitously in the brain, its role in neural coding has remained controversial. Recent experimental results have demonstrated that correlated but not single-neuron activity can encode the detailed time course of the instantaneous amplitude (i.e., envelope) of a stimulus. These have furthermore demonstrated that such coding required and was optimal for a nonzero level of neural variability. However, a theoretical understanding of these results is still lacking. Here we provide a comprehensive theoretical framework explaining these experimental findings. Specifically, we use linear response theory to derive an expression relating the correlation coefficient to the instantaneous stimulus amplitude, which takes into account key single-neuron properties such as firing rate and variability as quantified by the coefficient of variation. The theoretical prediction was in excellent agreement with numerical simulations of various integrate-and-fire type neuron models for various parameter values. Further, we demonstrate a form of stochastic resonance as optimal coding of stimulus variance by correlated activity occurs for a nonzero value of noise intensity. Thus, our results provide a theoretical explanation of the phenomenon by which correlated but not single-neuron activity can code for stimulus amplitude and how key single-neuron properties such as firing rate and variability influence such coding. Correlation coding by correlated but not single-neuron activity is thus predicted to be a ubiquitous feature of sensory processing for neurons responding to weak input.
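
    The core finding above — that the spike-count correlation of a neuron pair tracks the instantaneous stimulus amplitude when the neurons share a common input — can be reproduced with a much simpler stand-in for the integrate-and-fire simulations: a dichotomized-Gaussian pair, where each cell fires in a time bin when shared signal plus private noise crosses a threshold. All parameter values and function names below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(42)

def spike_correlation(amplitude, n_bins=100_000, threshold=1.0):
    """Dichotomized-Gaussian neuron pair: each cell fires in a bin when
    its input (shared stimulus-driven signal + private noise) crosses
    the threshold. Returns the spike-train correlation coefficient."""
    shared = amplitude * rng.standard_normal(n_bins)    # common input
    s1 = (shared + rng.standard_normal(n_bins)) > threshold
    s2 = (shared + rng.standard_normal(n_bins)) > threshold
    return float(np.corrcoef(s1, s2)[0, 1])

rho_weak = spike_correlation(0.2)
rho_strong = spike_correlation(1.0)
```

    The correlation rises with the shared-input amplitude while each single spike train alone carries no such signature, which is the sense in which correlated but not single-neuron activity encodes the envelope.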

  3. Expression-Linked Patterns of Codon Usage, Amino Acid Frequency, and Protein Length in the Basally Branching Arthropod Parasteatoda tepidariorum

    PubMed Central

    Whittle, Carrie A.; Extavour, Cassandra G.

    2016-01-01

    Spiders belong to the Chelicerata, the most basally branching arthropod subphylum. The common house spider, Parasteatoda tepidariorum, is an emerging model and provides a valuable system to address key questions in molecular evolution in an arthropod system that is distinct from traditionally studied insects. Here, we provide evidence suggesting that codon usage, amino acid frequency, and protein lengths are each influenced by expression-mediated selection in P. tepidariorum. First, highly expressed genes exhibited preferential usage of T3 codons in this spider, suggestive of selection. Second, genes with elevated transcription favored amino acids with low or intermediate size/complexity (S/C) scores (glycine and alanine) and disfavored those with large S/C scores (such as cysteine), consistent with the minimization of biosynthesis costs of abundant proteins. Third, we observed a negative correlation between expression level and coding sequence length. Together, we conclude that protein-coding genes exhibit signals of expression-related selection in this emerging, noninsect, arthropod model. PMID:27017527

  4. Complete mitochondrial genome of the Yellow-spotted skate Okamejei hollandi (Rajiformes: Rajidae).

    PubMed

    Li, Weidong; Chen, Xiao; Liu, Wenai; Sun, Renjie; Zhou, Haolang

    2016-07-01

    The complete mitochondrial genome of the Yellow-spotted skate Okamejei hollandi was determined in this study. It is 16,974 bp in length and contains 13 protein-coding genes, two rRNA genes, 22 tRNA genes, and one putative control region. The overall base composition is 30.5% A, 27.8% C, 14.0% G, and 27.8% T. There are 28 bp short intergenic spaces located in 12 gene junctions and 31 bp overlaps located in nine gene junctions in the whole mitogenome. Two start codons (ATG and GTG) and two stop codons (TAG and TAA/T) were used in the protein-coding genes. The lengths of the 22 tRNA genes range from 68 (tRNA-Ser2) to 75 (tRNA-Leu1) bp. The origin of L-strand replication (OL) sequence (37 bp) was identified between the tRNA-Asn and tRNA-Cys genes. The control region is 1311 bp in length, with a high A+T content and a low G content.
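
    Base-composition figures like the ones reported above are a straightforward count over the sequence. A minimal sketch (using a toy sequence, not the actual O. hollandi mitogenome data):

```python
from collections import Counter

def base_composition(seq):
    """Percentage of each nucleotide (A/C/G/T) in a DNA sequence."""
    counts = Counter(seq.upper())
    total = sum(counts[b] for b in "ACGT")
    return {b: round(100.0 * counts[b] / total, 1) for b in "ACGT"}

comp = base_composition("ATGCATTACCGA")   # toy 12-bp sequence
```

    For the toy sequence this yields A: 33.3%, C: 25.0%, G: 16.7%, T: 25.0%; run on a full mitogenome FASTA record, the same count gives the percentages quoted in such abstracts.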

  5. Association of Day Length and Weather Conditions with Physical Activity Levels in Older Community Dwelling People

    PubMed Central

    Witham, Miles D.; Donnan, Peter T.; Vadiveloo, Thenmalar; Sniehotta, Falko F.; Crombie, Iain K.; Feng, Zhiqiang; McMurdo, Marion E. T.

    2014-01-01

    Background Weather is a potentially important determinant of physical activity. Little work has been done examining the relationship between weather and physical activity, and potential modifiers of any relationship in older people. We therefore examined the relationship between weather and physical activity in a cohort of older community-dwelling people. Methods We analysed prospectively collected cross-sectional activity data from community-dwelling people aged 65 and over in the Physical Activity Cohort Scotland. We correlated seven day triaxial accelerometry data with daily weather data (temperature, day length, sunshine, snow, rain), and a series of potential effect modifiers were tested in mixed models: environmental variables (urban vs rural dwelling, percentage of green space), psychological variables (anxiety, depression, perceived behavioural control), social variables (number of close contacts) and health status measured using the SF-36 questionnaire. Results 547 participants, mean age 78.5 years, were included in this analysis. Higher minimum daily temperature and longer day length were associated with higher activity levels; these associations remained robust to adjustment for other significant associates of activity: age, perceived behavioural control, number of social contacts and physical function. Of the potential effect modifier variables, only urban vs rural dwelling and the SF-36 measure of social functioning enhanced the association between day length and activity; no variable modified the association between minimum temperature and activity. Conclusions In older community dwelling people, minimum temperature and day length were associated with objectively measured activity. There was little evidence for moderation of these associations through potentially modifiable health, environmental, social or psychological variables. PMID:24497925

  6. Association of day length and weather conditions with physical activity levels in older community dwelling people.

    PubMed

    Witham, Miles D; Donnan, Peter T; Vadiveloo, Thenmalar; Sniehotta, Falko F; Crombie, Iain K; Feng, Zhiqiang; McMurdo, Marion E T

    2014-01-01

    Weather is a potentially important determinant of physical activity. Little work has been done examining the relationship between weather and physical activity, and potential modifiers of any relationship in older people. We therefore examined the relationship between weather and physical activity in a cohort of older community-dwelling people. We analysed prospectively collected cross-sectional activity data from community-dwelling people aged 65 and over in the Physical Activity Cohort Scotland. We correlated seven day triaxial accelerometry data with daily weather data (temperature, day length, sunshine, snow, rain), and a series of potential effect modifiers were tested in mixed models: environmental variables (urban vs rural dwelling, percentage of green space), psychological variables (anxiety, depression, perceived behavioural control), social variables (number of close contacts) and health status measured using the SF-36 questionnaire. 547 participants, mean age 78.5 years, were included in this analysis. Higher minimum daily temperature and longer day length were associated with higher activity levels; these associations remained robust to adjustment for other significant associates of activity: age, perceived behavioural control, number of social contacts and physical function. Of the potential effect modifier variables, only urban vs rural dwelling and the SF-36 measure of social functioning enhanced the association between day length and activity; no variable modified the association between minimum temperature and activity. In older community dwelling people, minimum temperature and day length were associated with objectively measured activity. There was little evidence for moderation of these associations through potentially modifiable health, environmental, social or psychological variables.

  7. The complete mitochondrial genome of the bagarius yarrelli from honghe river

    NASA Astrophysics Data System (ADS)

    Du, M.; Zhou, C. J.; Niu, B. Z.; Liu, Y. H.; Li, N.; Ai, J. L.; Xu, G. L.

    2016-08-01

    The total length of the mitochondrial DNA sequence of Bagarius yarrelli from the Honghe river of China is determined in this paper. The circular molecule is 16,524 base pairs long and shows a gene order similar to that of other bony fishes, comprising a non-coding control region, a replication origin, two ribosomal RNA (rRNA) genes, 22 transfer RNA (tRNA) genes, and 13 protein-coding genes. Its overall base composition is 31.4% A, 26.9% C, 15.7% G, and 26.0% T, with an A+T bias of 57.4%. These mitochondrial data will contribute to further studies of the molecular evolution and population genetics of this species.

  8. Complete sequence and gene organization of the mitochondrial genome of Asio flammeus (Strigiformes, strigidae).

    PubMed

    Zhang, Yanan; Song, Tao; Pan, Tao; Sun, Xiaonan; Sun, Zhonglou; Qian, Lifu; Zhang, Baowei

    2016-07-01

    The complete sequence of the mitochondrial genome was determined for Asio flammeus, a geographically widespread species. The complete mitochondrial genome is 18,966 bp long, containing 2 rRNA genes, 22 tRNA genes, 13 protein-coding genes (PCGs), and 1 non-coding region (D-loop). All the genes are encoded on the H-strand, except for the ND6 subunit gene and eight tRNA genes, which are encoded on the L-strand. The D-loop of A. flammeus contains many tandem repeats of varying lengths and repeat numbers. The molecular phylogeny showed that this species is the sister group to A. capensis and supported Asio as a monophyletic group.

  9. Validation of numerical codes for impact and explosion cratering: Impacts on strengthless and metal targets

    NASA Astrophysics Data System (ADS)

    Pierazzo, E.; Artemieva, N.; Asphaug, E.; Baldwin, E. C.; Cazamias, J.; Coker, R.; Collins, G. S.; Crawford, D. A.; Davison, T.; Elbeshausen, D.; Holsapple, K. A.; Housen, K. R.; Korycansky, D. G.; Wünnemann, K.

    2008-12-01

    Over the last few decades, rapid improvement of computer capabilities has allowed impact cratering to be modeled with increasing complexity and realism, and has paved the way for a new era of numerical modeling of the impact process, including full, three-dimensional (3D) simulations. When properly benchmarked and validated against observation, computer models offer a powerful tool for understanding the mechanics of impact crater formation. This work presents results from the first phase of a project to benchmark and validate shock codes. A variety of 2D and 3D codes were used in this study, from commercial products like AUTODYN, to codes developed within the scientific community like SOVA, SPH, ZEUS-MP, iSALE, and codes developed at U.S. National Laboratories like CTH, SAGE/RAGE, and ALE3D. Benchmark calculations of shock wave propagation in aluminum-on-aluminum impacts were performed to examine the agreement between codes for simple idealized problems. The benchmark simulations show that variability in code results is to be expected due to differences in the underlying solution algorithm of each code, artificial stability parameters, spatial and temporal resolution, and material models. Overall, the inter-code variability in peak shock pressure as a function of distance is around 10 to 20%. In general, if the impactor is resolved by at least 20 cells across its radius, the underestimation of peak shock pressure due to spatial resolution is less than 10%. In addition to the benchmark tests, three validation tests were performed to examine the ability of the codes to reproduce the time evolution of crater radius and depth observed in vertical laboratory impacts in water and two well-characterized aluminum alloys. Results from these calculations are in good agreement with experiments. There appears to be a general tendency of shock physics codes to underestimate the radius of the forming crater. Overall, the discrepancy between the model and experiment results is between 10 and 20%, similar to the inter-code variability.

  10. A new and inexpensive non-bit-for-bit solution reproducibility test based on time step convergence (TSC1.0)

    DOE PAGES

    Wan, Hui; Zhang, Kai; Rasch, Philip J.; ...

    2017-02-03

    A test procedure is proposed for identifying numerically significant solution changes in evolution equations used in atmospheric models. The test issues a fail signal when any code modifications or computing environment changes lead to solution differences that exceed the known time step sensitivity of the reference model. Initial evidence is provided using the Community Atmosphere Model (CAM) version 5.3 that the proposed procedure can be used to distinguish rounding-level solution changes from impacts of compiler optimization or parameter perturbation, which are known to cause substantial differences in the simulated climate. The test is not exhaustive since it does not detect issues associated with diagnostic calculations that do not feed back to the model state variables. Nevertheless, it provides a practical and objective way to assess the significance of solution changes. The short simulation length implies low computational cost. The independence between ensemble members allows for parallel execution of all simulations, thus facilitating fast turnaround. The new method is simple to implement since it does not require any code modifications. We expect that the same methodology can be used for any geophysical model to which the concept of time step convergence is applicable.
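
    The pass/fail decision described above can be sketched as follows: compare the ensemble-mean state of a test configuration against a reference ensemble, and fail when the difference exceeds the model's known time-step sensitivity. The array layout, function name, and RMS-based threshold form are illustrative assumptions, not the actual CAM/TSC1.0 implementation.

```python
import numpy as np

def tsc_test(ref_ensemble, test_ensemble, dt_sensitivity):
    """Fail when the RMS difference between the ensemble means of test
    and reference runs exceeds the known time-step sensitivity.
    Arrays are (n_members, n_state_variables); illustrative only."""
    rmsd = np.sqrt(np.mean((test_ensemble.mean(axis=0)
                            - ref_ensemble.mean(axis=0)) ** 2))
    return "PASS" if rmsd <= dt_sensitivity else "FAIL"

rng = np.random.default_rng(1)
ref = rng.standard_normal((10, 100))                 # 10 members, 100 state variables
result_ok = tsc_test(ref, ref + 1e-6 * rng.standard_normal((10, 100)), 1e-3)
result_bad = tsc_test(ref, ref + 0.5, 1e-3)          # a climate-changing perturbation
```

    Rounding-level changes stay below the threshold and pass, while a systematic shift in the state fails, mirroring the discrimination reported in the abstract.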

  11. The mGA1.0: A common LISP implementation of a messy genetic algorithm

    NASA Technical Reports Server (NTRS)

    Goldberg, David E.; Kerzic, Travis

    1990-01-01

    Genetic algorithms (GAs) are finding increased application in difficult search, optimization, and machine learning problems in science and engineering. Increasing demands are being placed on algorithm performance, and the remaining challenges of genetic algorithm theory and practice are becoming increasingly unavoidable. Perhaps the most difficult of these challenges is the so-called linkage problem. Messy GAs were created to overcome the linkage problem of simple genetic algorithms by combining variable-length strings, gene expression, messy operators, and a nonhomogeneous phasing of evolutionary processing. Results on a number of difficult deceptive test functions are encouraging, with the mGA always finding global optima in a polynomial number of function evaluations. Theoretical and empirical studies are continuing, and a first version of a messy GA is ready for testing by others. A Common LISP implementation called mGA1.0 is documented and related to the basic principles and operators developed by Goldberg et al. (1989, 1990). Although the code was prepared with care, it is not a general-purpose code, only a research version. Important data structures and global variables are described. Thereafter brief function descriptions are given, and sample input data are presented together with sample program output. A source listing with comments is also included.
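
    The variable-length representation at the heart of a messy GA can be sketched briefly: a chromosome is a list of (locus, allele) pairs of arbitrary length, expression resolves over- and under-specification, and cut/splice replace fixed-length crossover. This follows Goldberg's published description of the operators, but the function names and the first-come-first-served tie-breaking detail here are illustrative, not mGA1.0's LISP code.

```python
def express(messy_chromosome, template):
    """Messy-GA gene expression: the first occurrence of a locus wins
    (over-specification), and loci left unspecified are filled in from
    a competitive template (under-specification)."""
    seen = {}
    for locus, allele in messy_chromosome:
        seen.setdefault(locus, allele)
    return [seen.get(i, template[i]) for i in range(len(template))]

def cut(chrom, point):
    """Messy cut operator: split one variable-length string in two."""
    return chrom[:point], chrom[point:]

def splice(a, b):
    """Messy splice operator: concatenate two strings (length may grow)."""
    return a + b

expressed = express([(0, 1), (2, 0), (0, 0)], template=[0, 0, 0])
```

    Here locus 0 appears twice and the first allele (1) is expressed, locus 2 is taken from the chromosome, and the missing locus 1 comes from the template, giving [1, 0, 0].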

  12. Do climate variables and human density affect Achatina fulica (Bowditch) (Gastropoda: Pulmonata) shell length, total weight and condition factor?

    PubMed

    Albuquerque, F S; Peso-Aguiar, M C; Assunção-Albuquerque, M J T; Gálvez, L

    2009-08-01

    The length-weight relationship and condition factor have been broadly investigated in snails to obtain an index of the physical condition of populations and evaluate habitat quality. Herein, our goal was to identify the best predictors of Achatina fulica biometrical parameters and well-being in a recently introduced population. From November 2001 to November 2002, monthly snail samples were collected in Lauro de Freitas City, Bahia, Brazil. Shell length and total weight were measured in the laboratory, and the potential curve and condition factor were calculated. Five environmental variables were considered: temperature range, mean temperature, humidity, precipitation and human density. Multiple regressions were used to generate models including multiple predictors, via a model selection approach, and then ranked with AIC criteria. Partial regressions were used to obtain the separate coefficients of determination of climate and human density models. A total of 1,460 individuals were collected, with shell lengths ranging from 4.8 to 102.5 mm (mean: 42.18 mm). The relationship between total length and total weight revealed that Achatina fulica presents negative allometric growth. Simple regression indicated that humidity has a significant influence on A. fulica total length and weight. Temperature range was the main variable that influenced the condition factor. Multiple regressions showed that climatic and human variables explain a small proportion of the variance in shell length and total weight, but may explain up to 55.7% of the condition factor variance. Consequently, we believe that the well-being and biometric parameters of A. fulica can be influenced by climatic and human density factors.
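
    The AIC-based model ranking used above works by penalizing each candidate regression's fit with its parameter count and picking the lowest score. A minimal sketch with synthetic data (the variable names and effect sizes are made up for illustration, not the study's data):

```python
import numpy as np

def aic_ols(y, X_cols):
    """AIC of an OLS fit under Gaussian errors: n*ln(RSS/n) + 2k,
    where k counts the intercept plus slopes (constants dropped)."""
    Xd = np.column_stack([np.ones(len(y))] + list(X_cols))
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = float(np.sum((y - Xd @ beta) ** 2))
    return len(y) * np.log(rss / len(y)) + 2 * Xd.shape[1]

rng = np.random.default_rng(7)
humidity = rng.uniform(60, 95, 200)
temp_range = rng.uniform(3, 12, 200)
shell_length = 0.8 * humidity + 2.0 * rng.standard_normal(200)  # toy response

models = {
    "humidity": aic_ols(shell_length, [humidity]),
    "temp_range": aic_ols(shell_length, [temp_range]),
    "both": aic_ols(shell_length, [humidity, temp_range]),
}
best = min(models, key=models.get)
```

    In this synthetic setup the humidity-only model (or the two-variable model, when the extra slope happens to buy more fit than its +2 penalty) ranks best, while the irrelevant temp_range-only model is clearly worst.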

  13. PARALLEL PERTURBATION MODEL FOR CYCLE TO CYCLE VARIABILITY PPM4CCV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ameen, Muhsin Mohammed; Som, Sibendu

    This code is a Fortran 90 implementation of the parallel perturbation model for computing cyclic variability in spark-ignition (SI) engines. Cycle-to-cycle variability (CCV) is known to be detrimental to SI engine operation, resulting in partial burn and knock, and leads to an overall reduction in the reliability of the engine. Numerical prediction of CCV in SI engines is extremely challenging for two key reasons: (i) high-fidelity methods such as large eddy simulation (LES) are required to accurately capture the in-cylinder turbulent flow field, and (ii) CCV is experienced over long timescales, and hence the simulations need to be performed for hundreds of consecutive cycles. In the new technique, the strategy is to perform multiple parallel simulations, each of which encompasses 2-3 cycles, by effectively perturbing the simulation parameters such as the initial and boundary conditions. The PPM4CCV code is a pre-processing code and can be coupled with any engine CFD code. PPM4CCV was coupled with the Converge CFD code, and a tenfold speedup over conventional multi-cycle LES was demonstrated in predicting the CCV for a motored engine. The model is also being applied to fired engines, including port fuel injected (PFI) and direct-injection spark-ignition engines, and the preliminary results are very encouraging.
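
    The perturbation strategy described above — many short parallel runs seeded from slightly perturbed initial conditions instead of one long serial multi-cycle run — can be sketched in a few lines. The function name and the multiplicative-noise form are illustrative assumptions, not PPM4CCV's actual Fortran interface.

```python
import numpy as np

def perturbed_initial_fields(base_field, n_runs, eps=1e-4, seed=0):
    """Generate n_runs slightly perturbed copies of a base initial field.
    Each copy would seed one short (2-3 cycle) simulation executed in
    parallel; the spread across runs estimates cycle-to-cycle variability."""
    rng = np.random.default_rng(seed)
    return [base_field * (1.0 + eps * rng.standard_normal(base_field.shape))
            for _ in range(n_runs)]

# Toy 4x4 "temperature field" at 300 K, perturbed for 8 parallel runs
fields = perturbed_initial_fields(np.full((4, 4), 300.0), n_runs=8)
```

    Because the runs are independent, all eight can be dispatched simultaneously, which is where the reported speedup over serial multi-cycle LES comes from.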

  14. Connection anonymity analysis in coded-WDM PONs

    NASA Astrophysics Data System (ADS)

    Sue, Chuan-Ching

    2008-04-01

    A coded wavelength division multiplexing passive optical network (WDM PON) is presented for fiber to the home (FTTH) systems to protect against eavesdropping. The proposed scheme applies spectral amplitude coding (SAC) with a unipolar maximal-length sequence (M-sequence) code matrix to generate a specific signature address (coding) and to retrieve its matching address codeword (decoding) by exploiting the cyclic properties inherent in array waveguide grating (AWG) routers. In addition to ensuring the confidentiality of user data, the proposed coded-WDM scheme is also a suitable candidate for the physical layer with connection anonymity. Under the assumption that the eavesdropper applies a photo-detection strategy, it is shown that the coded WDM PON outperforms the conventional TDM PON and WDM PON schemes in terms of a higher degree of connection anonymity. Additionally, the proposed scheme allows the system operator to partition the optical network units (ONUs) into appropriate groups so as to achieve a better degree of anonymity.
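
    The SAC signature addresses above are rows of a unipolar M-sequence code matrix: one period of a maximal-length LFSR sequence and its cyclic shifts, whose fixed in-phase/cross-correlation structure is what the AWG-based decoder exploits. A small sketch for degree 3 (feedback polynomial x³ + x + 1; the tap convention and function names are illustrative):

```python
import numpy as np

def m_sequence(degree=3, taps=(3, 1)):
    """One period (2^degree - 1 bits) of a maximal-length sequence from
    a Fibonacci LFSR with the given feedback taps."""
    reg = [1] + [0] * (degree - 1)
    out = []
    for _ in range(2 ** degree - 1):
        out.append(reg[-1])          # output the last register stage
        fb = 0
        for t in taps:
            fb ^= reg[t - 1]         # XOR the tapped stages
        reg = [fb] + reg[:-1]        # shift the register
    return out

seq = m_sequence()
# Code matrix: each user's signature is a distinct cyclic shift
code_matrix = np.array([np.roll(seq, k) for k in range(len(seq))])
```

    For degree 3 each codeword has weight 2^(m-1) = 4, and any two distinct shifts overlap in exactly 2^(m-2) = 2 ones; this constant cross-correlation is what lets the matched decoder cancel interference from other users.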

  15. The Code of the Street and Romantic Relationships: A dyadic analysis

    PubMed Central

    Barr, Ashley B.; Simons, Ronald L.; Stewart, Eric A.

    2012-01-01

    Since its publication, Elijah Anderson’s (1999) code of the street thesis has found support in studies connecting disadvantage to the internalization of street-oriented values and an associated lifestyle of violent/deviant behavior. This primary emphasis on deviance in public arenas has precluded researchers from examining the implications of the code of the street for less public arenas, like intimate relationships. In an effort to understand if and how the endorsement of the street code may infiltrate such relationships, the present study examines the associations between the code of the street and relationship satisfaction and commitment among young adults involved in heterosexual romantic relationships. Using a dyadic approach, we find that street code orientation, in general, negatively predicts satisfaction and commitment, in part due to increased relationship hostility/conflict associated with the internalization of the code. Gender differences in these associations are considered and discussed at length. PMID:23504000

  16. An Assessment of the Length and Variability of Mercury's Magnetotail

    NASA Technical Reports Server (NTRS)

    Milan, S. E.; Slavin, J. A.

    2011-01-01

    We employ Mariner 10 measurements of the interplanetary magnetic field in the vicinity of Mercury to estimate the rate of magnetic reconnection between the interplanetary magnetic field and the Hermean magnetosphere. We derive a time series of the open magnetic flux in Mercury's magnetosphere, from which we can deduce the length of the magnetotail. The length of the magnetotail is shown to be highly variable, with open field lines stretching between 15 R(sub H) and 850 R(sub H) downstream of the planet (median 150 R(sub H)). Scaling laws allow the tail length at perihelion to be deduced from the aphelion Mariner 10 observations.

  17. Spectroscopic analysis of Cepheid variables with 2D radiation-hydrodynamic simulations

    NASA Astrophysics Data System (ADS)

    Vasilyev, Valeriy

    2018-06-01

    The analysis of the chemical enrichment history of dwarf galaxies allows constraints on their formation and evolution to be derived. In this context, Cepheids play a very important role, as these periodically variable stars provide a means of obtaining accurate distances. Moreover, the chemical composition of Cepheids can provide a strong constraint on the chemical evolution of the system. Standard spectroscopic analysis of Cepheids is based on one-dimensional (1D) hydrostatic model atmospheres, with convection parametrised using mixing-length theory. However, this quasi-static approach has not been validated theoretically. In my talk, I will discuss the validity of the quasi-static approximation in the spectroscopy of short-period Cepheids. I will show results obtained using a 2D time-dependent envelope model of a pulsating star computed with the radiation-hydrodynamics code CO5BOLD. I will then describe the impact of the new models on spectroscopic diagnostics of effective temperature, surface gravity, microturbulent velocity, and metallicity. One of the interesting findings of my work is that 1D model atmospheres provide unbiased estimates of stellar parameters and abundances of Cepheid variables for certain phases of their pulsations. Convective inhomogeneities, however, also introduce biases. I will then discuss how these results can be used in a wider parameter space of pulsating stars and present an outlook for future studies.

  18. Outpatient Cocaine Abuse Treatment: Predictors of Success.

    ERIC Educational Resources Information Center

    Westhuis, David J.; Gwaltney, Lisa; Hayashi, Reiko

    2001-01-01

    Uses data from the U.S. Army's Alcohol and Drug Abuse Prevention and Control Program to analyze which treatment and demographic variables have an effect on cocaine treatment outcomes. Results suggest the following treatment variables had an effect on outcomes: type of treatment; length of time in treatment; and the length of time since the patient…

  19. Understanding health food messages on Twitter for health literacy promotion.

    PubMed

    Zhou, J; Liu, F; Zhou, H

    2018-05-01

    With the popularity of social media, Twitter has become an important tool to promote health literacy. However, many health-related messages on Twitter are dead-ended and cannot reach many people. This is unhelpful for health literacy promotion. This article aims to examine the features of online health food messages that people like to retweet. We adopted rumour theory as our theoretical foundation and extracted seven characteristics (i.e. emotional valence, attractiveness, sender's authoritativeness, external evidence, argument length, hashtags, and direct messages). A total of 10,025 health-related messages on Twitter were collected, and 1496 messages were randomly selected for further analysis. Each message was treated as one unit and then coded. All the hypotheses were tested with logistic regression. Emotional valence, attractiveness, sender's authoritativeness, argument length, and direct messages in a Twitter message had positive effects on people's retweet behaviour. The effect of external evidence was negative. Hashtags had no significant effect after consideration of other variables. Online health food messages containing positive emotions, including pictures, containing direct messages, having an authoritative sender, having longer arguments, or not containing external URLs are more likely to be retweeted. However, a message only containing positive or negative emotions or including direct messages without any support information will not be retweeted.
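
    The hypothesis tests above rest on logistic regression of a binary retweet outcome on coded message features. A minimal gradient-ascent sketch on synthetic data (feature names, effect sizes, and the fitting routine are all made up for illustration; the study's actual coding scheme and software are not specified here):

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, epochs=3000):
    """Batch gradient-ascent logistic regression; returns weights
    [intercept, slopes...]. Illustrative stand-in for a stats package."""
    Xd = np.column_stack([np.ones(len(y)), X])
    w = np.zeros(Xd.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xd @ w))      # predicted retweet probability
        w += lr * Xd.T @ (y - p) / len(y)      # gradient of the log-likelihood
    return w

rng = np.random.default_rng(3)
has_picture = rng.integers(0, 2, 500).astype(float)    # "attractiveness" proxy
external_url = rng.integers(0, 2, 500).astype(float)   # external evidence
logit = -1.0 + 2.0 * has_picture - 1.5 * external_url  # assumed true effects
retweeted = (rng.random(500) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

w = fit_logistic(np.column_stack([has_picture, external_url]), retweeted)
```

    With the assumed effects, the fitted slope for pictures comes out positive and the slope for external URLs negative, matching the direction of the effects reported in the abstract.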

  20. Factors affecting profitability for craniotomy.

    PubMed

    Popp, A John; Scrime, Todd; Cohen, Benjamin R; Feustel, Paul J; Petronis, Karen; Habiniak, Sharon; Waldman, John B; Vosburgh, Margaret M

    2002-04-15

    The authors studied factors influencing hospital profitability after craniotomy in patients who underwent craniotomy coded as diagnosis-related group (DRG) 1 (>17 years of age with nontraumatic disease without complication), who met their hospital's craniotomy pathway criteria, and who had a hospital length of stay of 4 days or less during a 20-month period. Data on all patients meeting these criteria (76 cases) were collected and collated from various hospital databases. Twenty-one cases were profitable and 55 were not. Variables traditionally influencing cost of care, such as surgeon, procedure, length of operation, and pharmacy use, had no significant effect on whether a patient was profitable. The most important influence on profitability was the individual payor. Cases in which care was reimbursed under the prospective payment system based on DRGs were nearly always profitable, whereas those covered by per diem plans were nearly always nonprofitable. 1) Hospital information systems should be customized to deliver consolidated data for timely analysis of the cost of care for individual patients. This information may be useful in negotiating profitable contracts. 2) A clinical pathway was successful in reducing the difference in cost of care between profitable and nonprofitable postcraniotomy cases. 3) In today's health care environment, both cost containment and revenue assume importance in determining profitability.

  1. Variation of sperm head shape and tail length in a species of Australian hydromyine rodent: the spinifex hopping mouse, Notomys alexis.

    PubMed

    Bauer, M; Breed, W G

    2006-01-01

    In Australia, there are around 60 species of murid rodents that occur in the subfamily Hydromyinae, most of which produce highly complex, monomorphic, spermatozoa in which the head has an apical hook together with two ventral processes containing filamentous actin and a long tail of species-specific length. One of the few exceptions to this is the spinifex hopping mouse, Notomys alexis, whose spermatozoa have previously been shown to have pleiomorphic heads. In this study, the structural organisation of the sperm head has been investigated in more detail and the variability in length of the midpiece and total length of the sperm tail has been determined for this species. The findings confirm that pleiomorphic sperm heads are invariably present in these animals and that this variability is associated with that of the nucleus, although nuclear vacuoles were not evident. The total length of the sperm tail, as well as that of the midpiece, was also highly variable both within, as well as between, individual animals. The reason(s) for this high degree of variability in sperm morphology is not known but it may relate to a relaxation of the genetic control of sperm form owing to depressed levels of inter-male sperm competition.

  2. A novel QC-LDPC code based on the finite field multiplicative group for optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Xu, Liang; Tong, Qing-zhen

    2013-09-01

    A novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the finite field multiplicative group; it offers simpler construction, more flexible adjustment of code length and code rate, and lower encoding/decoding complexity. Moreover, a regular QC-LDPC(5334,4962) code is constructed. The simulation results show that the constructed QC-LDPC(5334,4962) code achieves good error-correction performance over the additive white Gaussian noise (AWGN) channel with iterative sum-product algorithm (SPA) decoding. At a bit error rate (BER) of 10^-6, the net coding gain (NCG) of the constructed QC-LDPC(5334,4962) code is 1.8 dB, 0.9 dB and 0.2 dB more than that of the classic RS(255,239) code in ITU-T G.975, the LDPC(32640,30592) code in ITU-T G.975.1 and the SCG-LDPC(3969,3720) code constructed by the random method, respectively. It is therefore more suitable for optical communication systems.
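    The abstract does not give the exponent matrix used, but the quasi-cyclic structure it relies on is simple to illustrate: each entry of a small base matrix is expanded into a circulant permutation submatrix (or an all-zero block), which is what makes the code length and rate easy to adjust. A sketch with an invented toy base matrix (the shifts below are arbitrary illustrations, not the paper's finite-field construction):

```python
import numpy as np

def circulant(size, shift):
    """size x size identity matrix with its columns cyclically shifted."""
    return np.roll(np.eye(size, dtype=int), shift, axis=1)

def qc_ldpc_parity_check(base, size):
    """Expand a base exponent matrix into a binary QC-LDPC parity-check
    matrix: entry b >= 0 becomes a circulant permutation matrix shifted by b,
    entry -1 becomes the all-zero block."""
    rows = []
    for brow in base:
        blocks = [np.zeros((size, size), dtype=int) if b < 0
                  else circulant(size, b) for b in brow]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

# Toy 2x3 base matrix with circulant size 7: changing `size` or the base
# matrix dimensions rescales the code length and rate.
base = [[1, 3, 2],
        [-1, 2, 6]]
H = qc_ldpc_parity_check(base, size=7)
print(H.shape)  # (14, 21)
```

    In the paper's method the shift values would be derived from the multiplicative group of a finite field so that short cycles are avoided; here they are placeholders.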

  3. Regionalization of surgical services in central Florida: the next step in acute care surgery.

    PubMed

    Block, Ernest F J; Rudloff, Beth; Noon, Charles; Behn, Bruce

    2010-09-01

    There is a national loss of access to surgeons for emergencies. Contributing factors include reduced numbers of practicing general surgeons, superspecialization, reimbursement issues, emphasis on work and life balance, and medical liability. Regionalizing acute care surgery (ACS), as exists for trauma care, represents a potential solution. The purpose of this study is to assess the financial and resources impact of transferring all nontrauma ACS cases from a community hospital (CH) to a trauma center (TC). We performed a case mix and financial analysis of patient records with ACS for a rural CH located near an urban Level I TC. ACS patients were analyzed for diagnosis, insurance status, procedures, and length of stay. We estimated physician reimbursement based on evaluation and management codes and procedural CPT codes. Hospital revenues were based on regional diagnosis-related group rates. All third-party remuneration was set at published Medicare rates; self-pay was set at nil. Nine hundred ninety patients were treated in the CH emergency department with 188 potential surgical diseases. ACS was necessary in 62 cases; 25.4% were uninsured. Extrapolated to 12 months, 248 patients would generate new TC physician revenue of >$155,000 and hospital profits of >$1.5 million. CH savings for call pay and other variable costs are >$100,000. TC operating room volume would only increase by 1%. Regionalization of ACS to TCs is a viable option from a business perspective. Access to care is preserved during an approaching crisis in emergency general surgical coverage. The referring hospital is relieved of an unfavorable payer mix and surgeon call problems. The TC receives a new revenue stream with limited impact on resources by absorbing these patients under its fixed costs, saving the CH variable costs.

  4. Palliative Care for Hospitalized Patients With Stroke: Results From the 2010 to 2012 National Inpatient Sample.

    PubMed

    Singh, Tarvinder; Peters, Steven R; Tirschwell, David L; Creutzfeldt, Claire J

    2017-09-01

    Substantial variability exists in the use of life-prolonging treatments for patients with stroke, especially near the end of life. This study explores patterns of palliative care utilization and death in hospitalized patients with stroke across the United States. Using the 2010 to 2012 Nationwide Inpatient Sample databases, we included all patients discharged with stroke identified by International Classification of Diseases-Ninth Revision codes. Strokes were subclassified as ischemic, intracerebral, and subarachnoid hemorrhage. We compared demographics, comorbidities, procedures, and outcomes between patients with and without a palliative care encounter (PCE) as defined by the International Classification of Diseases-Ninth Revision code V66.7. The Pearson χ² test was used for categorical variables. Multivariate logistic regression was used to account for hospital, regional, payer, and medical severity factors to predict PCE use and death. Among 395 411 patients with stroke, PCE was used in 6.2% with an increasing trend over time (P<0.05). We found a wide range in PCE use, with higher rates in patients with older age, hemorrhagic stroke types, women, and white race (all P<0.001). Smaller and for-profit hospitals saw lower rates. Overall, 9.2% of hospitalized patients with stroke died, and PCE was significantly associated with death. Length of stay in decedents was shorter for patients who received PCE. Palliative care use is increasing nationally for patients with stroke, especially in larger hospitals. Persistent disparities in PCE use and mortality exist with regard to age, sex, race, region, and hospital characteristics. Given the variations in PCE use, especially at the end of life, the use of mortality rates as a hospital quality measure is questioned. © 2017 The Authors.

  5. Experimental demonstration of localized Brillouin gratings with low off-peak reflectivity established by perfect Golomb codes.

    PubMed

    Antman, Yair; Yaron, Lior; Langer, Tomi; Tur, Moshe; Levanon, Nadav; Zadok, Avi

    2013-11-15

    Dynamic Brillouin gratings (DBGs), inscribed by comodulating two writing pump waves with a perfect Golomb code, are demonstrated and characterized experimentally. Compared with pseudo-random bit sequence (PRBS) modulation of the pump waves, the Golomb code provides lower off-peak reflectivity due to the unique properties of its cyclic autocorrelation function. Golomb-coded DBGs allow the long variable delay of one-time probe waveforms with higher signal-to-noise ratios, and without averaging. As an example, the variable delay of return-to-zero, on-off keyed data at a 1 Gbit/s rate, by as much as 10 ns, is demonstrated successfully. The eye diagram of the reflected waveform remains open, whereas PRBS modulation of the pump waves results in a closed eye. The variable delay of data at 2.5 Gbit/s is reported as well, with a marginally open eye diagram. The experimental results are in good agreement with simulations.
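    The property the Golomb code is chosen for is an ideal (near-zero off-peak) cyclic autocorrelation. The specific Golomb sequence is not given in the abstract, so as a stand-in the sketch below uses a Zadoff-Chu polyphase sequence, a well-known family with exactly this property, and computes the cyclic autocorrelation via the Wiener-Khinchin relation:

```python
import numpy as np

def cyclic_autocorrelation(x):
    """Periodic autocorrelation via FFT: r = IFFT(|FFT(x)|^2)."""
    X = np.fft.fft(x)
    return np.fft.ifft(X * np.conj(X))

# Zadoff-Chu sequence (odd length N, root u coprime to N): a polyphase
# sequence with zero off-peak cyclic autocorrelation, standing in here for
# the perfect Golomb code used to co-modulate the pump waves.
N, u = 63, 1
n = np.arange(N)
x = np.exp(-1j * np.pi * u * n * (n + 1) / N)

r = np.abs(cyclic_autocorrelation(x))
print(r[0])         # peak equals N
print(r[1:].max())  # off-peak is numerically zero
```

    A PRBS, by contrast, has off-peak cyclic-autocorrelation sidelobes that translate into spurious distributed reflections, which is the effect the Golomb coding suppresses.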

  6. Finite-block-length analysis in classical and quantum information theory.

    PubMed

    Hayashi, Masahito

    2017-01-01

    Coding technology is used in several information processing tasks. In particular, when noise during transmission disturbs communications, coding technology is employed to protect the information. However, there are two types of coding technology: coding in classical information theory and coding in quantum information theory. Although the physical media used to transmit information ultimately obey quantum mechanics, we need to choose the type of coding depending on the kind of information device, classical or quantum, that is being used. In both branches of information theory, there are many elegant theoretical results under the ideal assumption that an infinitely large system is available. In a realistic situation, we need to account for finite size effects. The present paper reviews finite size effects in classical and quantum information theory with respect to various topics, including applied aspects.
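    A concrete instance of such a finite size effect is the normal approximation to the best achievable coding rate at blocklength n, a standard result in the finite-blocklength literature. A sketch for a binary symmetric channel (the channel choice and constants here are illustrative, not taken from the review itself):

```python
from math import log2, sqrt
from statistics import NormalDist

def bsc_normal_approx_rate(p, n, eps):
    """Normal approximation to the maximal rate at blocklength n and block
    error probability eps over a BSC(p):
        R ~ C - sqrt(V/n) * Qinv(eps) + log2(n) / (2n),
    where C is capacity and V the channel dispersion."""
    h = -p * log2(p) - (1 - p) * log2(1 - p)          # binary entropy
    C = 1 - h                                         # capacity
    V = p * (1 - p) * log2((1 - p) / p) ** 2          # dispersion
    q_inv = NormalDist().inv_cdf(1 - eps)             # Qinv(eps)
    return C - sqrt(V / n) * q_inv + log2(n) / (2 * n)

# Capacity of BSC(0.11) is about 0.5 bit/use; at n = 1000 and eps = 1e-3 the
# achievable rate sits visibly below capacity, quantifying the finite-size
# penalty that vanishes as n grows.
print(bsc_normal_approx_rate(0.11, 1000, 1e-3))
```

    The gap to capacity shrinks like 1/sqrt(n), which is why asymptotic results overstate what short-blocklength codes can achieve.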

  7. Finite-block-length analysis in classical and quantum information theory

    PubMed Central

    HAYASHI, Masahito

    2017-01-01

    Coding technology is used in several information processing tasks. In particular, when noise during transmission disturbs communications, coding technology is employed to protect the information. However, there are two types of coding technology: coding in classical information theory and coding in quantum information theory. Although the physical media used to transmit information ultimately obey quantum mechanics, we need to choose the type of coding depending on the kind of information device, classical or quantum, that is being used. In both branches of information theory, there are many elegant theoretical results under the ideal assumption that an infinitely large system is available. In a realistic situation, we need to account for finite size effects. The present paper reviews finite size effects in classical and quantum information theory with respect to various topics, including applied aspects. PMID:28302962

  8. [Relevance of long non-coding RNAs in tumour biology].

    PubMed

    Nagy, Zoltán; Szabó, Diána Rita; Zsippai, Adrienn; Falus, András; Rácz, Károly; Igaz, Péter

    2012-09-23

    The discovery of the biological relevance of non-coding RNA molecules represents one of the most significant advances in contemporary molecular biology. It has turned out that a major fraction of the non-coding part of the genome is transcribed. Beside small RNAs (including microRNAs) more and more data are disclosed concerning long non-coding RNAs of 200 nucleotides to 100 kb length that are implicated in the regulation of several basic molecular processes (cell proliferation, chromatin functioning, microRNA-mediated effects, etc.). Some of these long non-coding RNAs have been associated with human tumours, including H19, HOTAIR, MALAT1, etc., the different expression of which has been noted in various neoplasms relative to healthy tissues. Long non-coding RNAs may represent novel markers of molecular diagnostics and they might even turn out to be targets of therapeutic intervention.

  9. The proposed coding standard at GSFC

    NASA Technical Reports Server (NTRS)

    Morakis, J. C.; Helgert, H. J.

    1977-01-01

    As part of the continuing effort to introduce standardization of spacecraft and ground equipment in satellite systems, NASA's Goddard Space Flight Center and other NASA facilities have supported the development of a set of standards for the use of error control coding in telemetry subsystems. These standards are intended to ensure compatibility between spacecraft and ground encoding equipment, while allowing sufficient flexibility to meet all anticipated mission requirements. The standards which have been developed to date cover the application of block codes in error detection and error correction modes, as well as short and long constraint length convolutional codes decoded via the Viterbi and sequential decoding algorithms, respectively. Included are detailed specifications of the codes, and their implementation. Current effort is directed toward the development of standards covering channels with burst noise characteristics, channels with feedback, and code concatenation.

  10. A fast technique for computing syndromes of BCH and RS codes. [deep space network

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.; Miller, R. L.

    1979-01-01

    A combination of the Chinese Remainder Theorem and Winograd's algorithm is used to compute transforms of odd length over GF(2^m). Such transforms are used to compute the syndromes needed for decoding BCH and RS codes. The present scheme requires substantially fewer multiplications and additions than the conventional method of computing the syndromes directly.
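    The "conventional method of computing the syndromes directly" is plain polynomial evaluation of the received word at consecutive powers of a primitive element. A self-contained sketch over GF(2^8) (the primitive polynomial 0x11D and the tiny two-root generator are illustrative choices, not the paper's parameters):

```python
# GF(2^8) log/antilog tables over the primitive polynomial x^8+x^4+x^3+x^2+1.
PRIM = 0x11D
EXP, LOG = [0] * 512, [0] * 256
v = 1
for i in range(255):
    EXP[i] = v
    LOG[v] = i
    v <<= 1
    if v & 0x100:
        v ^= PRIM
for i in range(255, 512):        # wraparound so LOG sums never overflow
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def poly_mul(p, q):
    """Multiply polynomials with coefficients in GF(2^8)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] ^= gf_mul(a, b)
    return out

def syndromes(received, count):
    """Direct computation S_i = r(alpha^i), i = 1..count, by Horner's rule;
    `received` lists coefficients from highest degree down."""
    out = []
    for i in range(1, count + 1):
        s = 0
        for coeff in received:
            s = gf_mul(s, EXP[i]) ^ coeff
        out.append(s)
    return out

# A codeword is any multiple of the generator g(x) = (x - alpha)(x - alpha^2),
# so its first two syndromes vanish; corrupting a symbol breaks that.
g = poly_mul([1, EXP[1]], [1, EXP[2]])
codeword = poly_mul(g, [7, 42, 99])    # arbitrary message polynomial
corrupted = codeword[:]
corrupted[0] ^= 0x55
print(syndromes(codeword, 2), syndromes(corrupted, 2))
```

    Each syndrome costs on the order of one multiplication per received symbol here; the transform-domain scheme in the paper reduces exactly this multiplication count.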

  11. Are gestational age, birth weight, and birth length indicators of favorable fetal growth conditions? A structural equation analysis of Filipino infants.

    PubMed

    Bollen, Kenneth A; Noble, Mark D; Adair, Linda S

    2013-07-30

    The fetal origins hypothesis emphasizes the life-long health impacts of prenatal conditions. Birth weight, birth length, and gestational age are indicators of the fetal environment. However, these variables often have missing data and are subject to random and systematic errors caused by delays in measurement, differences in measurement instruments, and human error. With data from the Cebu (Philippines) Longitudinal Health and Nutrition Survey, we use structural equation models to explore random and systematic errors in these birth outcome measures, to analyze how maternal characteristics relate to birth outcomes, and to account for missing data. We assess whether birth weight, birth length, and gestational age are influenced by a single latent variable that we call favorable fetal growth conditions (FFGC) and, if so, which variable is most closely related to FFGC. We find that a model with FFGC as a latent variable fits as well as a less parsimonious model that has birth weight, birth length, and gestational age as distinct individual variables. We also demonstrate that birth weight is more reliably measured than is gestational age. FFGC was significantly influenced by taller maternal stature, better nutritional stores indexed by maternal arm fat and muscle area during pregnancy, higher birth order, avoidance of smoking, and maternal age 20-35 years. Effects of maternal characteristics on newborn weight, length, and gestational age were largely indirect, operating through FFGC. Copyright © 2013 John Wiley & Sons, Ltd.

  12. Wide Distribution of Mitochondrial Genome Rearrangements in Wild Strains of the Cultivated Basidiomycete Agrocybe aegerita

    PubMed Central

    Barroso, G.; Blesa, S.; Labarere, J.

    1995-01-01

    We used restriction fragment length polymorphisms to examine mitochondrial genome rearrangements in 36 wild strains of the cultivated basidiomycete Agrocybe aegerita, collected from widely distributed locations in Europe. We identified two polymorphic regions within the mitochondrial DNA which varied independently: one carrying the Cox II coding sequence and the other carrying the Cox I, ATP6, and ATP8 coding sequences. Two types of mutations were responsible for the restriction fragment length polymorphisms that we observed and, accordingly, were involved in the A. aegerita mitochondrial genome evolution: (i) point mutations, which resulted in strain-specific mitochondrial markers, and (ii) length mutations due to genome rearrangements, such as deletions, insertions, or duplications. Within each polymorphic region, the length differences defined only two mitochondrial types, suggesting that these length mutations were not randomly generated but resulted from a precise rearrangement mechanism. For each of the two polymorphic regions, the two molecular types were distributed among the 36 strains without obvious correlation with their geographic origin. On the basis of these two polymorphisms, it is possible to define four mitochondrial haplotypes. The four mitochondrial haplotypes could be the result of intermolecular recombination between allelic forms present in the population long enough to reach linkage equilibrium. All of the 36 dikaryotic strains contained only a single mitochondrial type, confirming the previously described mitochondrial sorting out after cytoplasmic mixing in basidiomycetes. PMID:16534984

  13. Importance of Viral Sequence Length and Number of Variable and Informative Sites in Analysis of HIV Clustering.

    PubMed

    Novitsky, Vlad; Moyo, Sikhulile; Lei, Quanhong; DeGruttola, Victor; Essex, M

    2015-05-01

    To improve the methodology of HIV cluster analysis, we addressed how analysis of HIV clustering is associated with parameters that can affect the outcome of viral clustering. The extent of HIV clustering and tree certainty was compared between 401 HIV-1C near full-length genome sequences and subgenomic regions retrieved from the LANL HIV Database. Sliding window analysis was based on 99 windows of 1,000 bp and 45 windows of 2,000 bp. Potential associations between the extent of HIV clustering and sequence length and the number of variable and informative sites were evaluated. The near full-length genome HIV sequences showed the highest extent of HIV clustering and the highest tree certainty. At the bootstrap threshold of 0.80 in maximum likelihood (ML) analysis, 58.9% of near full-length HIV-1C sequences but only 15.5% of partial pol sequences (ViroSeq) were found in clusters. Among HIV-1 structural genes, pol showed the highest extent of clustering (38.9% at a bootstrap threshold of 0.80), although it was significantly lower than in the near full-length genome sequences. The extent of HIV clustering was significantly higher for sliding windows of 2,000 bp than 1,000 bp. We found a strong association between the sequence length and proportion of HIV sequences in clusters, and a moderate association between the number of variable and informative sites and the proportion of HIV sequences in clusters. In HIV cluster analysis, the extent of detectable HIV clustering is directly associated with the length of viral sequences used, as well as the number of variable and informative sites. Near full-length genome sequences could provide the most informative HIV cluster analysis. Selected subgenomic regions with a high extent of HIV clustering and high tree certainty could also be considered as a second choice.

  14. Importance of Viral Sequence Length and Number of Variable and Informative Sites in Analysis of HIV Clustering

    PubMed Central

    Novitsky, Vlad; Moyo, Sikhulile; Lei, Quanhong; DeGruttola, Victor

    2015-01-01

    To improve the methodology of HIV cluster analysis, we addressed how analysis of HIV clustering is associated with parameters that can affect the outcome of viral clustering. The extent of HIV clustering and tree certainty was compared between 401 HIV-1C near full-length genome sequences and subgenomic regions retrieved from the LANL HIV Database. Sliding window analysis was based on 99 windows of 1,000 bp and 45 windows of 2,000 bp. Potential associations between the extent of HIV clustering and sequence length and the number of variable and informative sites were evaluated. The near full-length genome HIV sequences showed the highest extent of HIV clustering and the highest tree certainty. At the bootstrap threshold of 0.80 in maximum likelihood (ML) analysis, 58.9% of near full-length HIV-1C sequences but only 15.5% of partial pol sequences (ViroSeq) were found in clusters. Among HIV-1 structural genes, pol showed the highest extent of clustering (38.9% at a bootstrap threshold of 0.80), although it was significantly lower than in the near full-length genome sequences. The extent of HIV clustering was significantly higher for sliding windows of 2,000 bp than 1,000 bp. We found a strong association between the sequence length and proportion of HIV sequences in clusters, and a moderate association between the number of variable and informative sites and the proportion of HIV sequences in clusters. In HIV cluster analysis, the extent of detectable HIV clustering is directly associated with the length of viral sequences used, as well as the number of variable and informative sites. Near full-length genome sequences could provide the most informative HIV cluster analysis. Selected subgenomic regions with a high extent of HIV clustering and high tree certainty could also be considered as a second choice. PMID:25560745

  15. Gender differences in foot shape: a study of Chinese young adults.

    PubMed

    Hong, Youlian; Wang, Lin; Xu, Dong Qing; Li, Jing Xian

    2011-06-01

    One important extrinsic factor that causes foot deformity and pain in women is footwear, and women's sports shoes are often designed simply as smaller versions of men's shoes. The current study therefore aims to characterize foot shape in 1,236 Chinese young adult men and 1,085 Chinese young adult women. Three-dimensional foot shape data were collected through video filming. Nineteen foot shape variables were measured, including girth (4 variables), length (4 variables), width (3 variables), height (7 variables), and angle (1 variable). A comparison of foot measures within the range of the common foot length (FL) categories indicates that women showed significantly smaller values of foot measures in width, height, and girth than men. Three foot types were classified, and the distributions of foot shapes within the same FL differed between women and men. Foot width, medial ball length, ball angle, and instep height showed significant differences among foot types in the same FL for both genders. There were differences in foot shape between Chinese young women and men, which should be considered in the design of Chinese young adults' sports shoes.

  16. Word lengths are optimized for efficient communication.

    PubMed

    Piantadosi, Steven T; Tily, Harry; Gibson, Edward

    2011-03-01

    We demonstrate a substantial improvement on one of the most celebrated empirical laws in the study of language, Zipf's 75-y-old theory that word length is primarily determined by frequency of use. In accord with rational theories of communication, we show across 10 languages that average information content is a much better predictor of word length than frequency. This indicates that human lexicons are efficiently structured for communication by taking into account interword statistical dependencies. Lexical systems result from an optimization of communicative pressures, coding meanings efficiently given the complex statistics of natural language use.
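    The two competing predictors are easy to state computationally: Zipf's predictor is a word's negative log relative frequency, while the information-content predictor is its average in-context surprisal. A toy sketch under a simple bigram model (the study uses large n-gram corpora; this only pins down the two definitions):

```python
import math
from collections import Counter

def mean_information_content(tokens):
    """Average bigram surprisal -log2 P(word | previous word) per word type:
    the 'information content' predictor of word length."""
    bigrams = Counter(zip(tokens, tokens[1:]))
    unigrams = Counter(tokens)
    total, count = Counter(), Counter()
    for (prev, w), c in bigrams.items():
        surprisal = -math.log2(c / unigrams[prev])
        total[w] += c * surprisal
        count[w] += c
    return {w: total[w] / count[w] for w in total}

def neg_log_frequency(tokens):
    """Zipf's predictor: -log2 of a word's relative frequency."""
    n = len(tokens)
    return {w: -math.log2(c / n) for w, c in Counter(tokens).items()}
```

    On a real corpus one would then correlate each predictor with len(word) across word types; the paper's finding is that the first correlates with length more strongly than the second.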

  17. A Two-length Scale Turbulence Model for Single-phase Multi-fluid Mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwarzkopf, J. D.; Livescu, D.; Baltzer, J. R.

    2015-09-08

    A two-length scale, second moment turbulence model (Reynolds averaged Navier-Stokes, RANS) is proposed to capture a wide variety of single-phase flows, spanning from incompressible flows with single fluids and mixtures of different density fluids (variable density flows) to flows over shock waves. The two-length scale model was developed to address an inconsistency present in single-length scale models, e.g. the inability to match both variable density homogeneous Rayleigh-Taylor turbulence and Rayleigh-Taylor induced turbulence, as well as the inability to match both homogeneous shear and free shear flows. The two-length scale model focuses on separating the decay and transport length scales, as the two physical processes are generally different in inhomogeneous turbulence. This allows reasonable comparisons with statistics and spreading rates over such a wide range of turbulent flows using a common set of model coefficients. The specific canonical flows considered for calibrating the model include homogeneous shear, single-phase incompressible shear driven turbulence, variable density homogeneous Rayleigh-Taylor turbulence, Rayleigh-Taylor induced turbulence, and shocked isotropic turbulence. The second moment model compares reasonably well with direct numerical simulations (DNS), experiments, and theory in most cases. The model was then applied to variable density shear layer and shock tube data and is in reasonable agreement with DNS and experiments. Additionally, the importance of using DNS to calibrate and assess RANS-type turbulence models is highlighted.

  18. Job stress, unwinding and drinking in transit operators.

    PubMed

    Delaney, William P; Grube, Joel W; Greiner, Birgit; Fisher, June M; Ragland, David R

    2002-07-01

    This study tests the spillover model of the effects of work stress on after-work drinking, using the variable "length of time to unwind" as a mediator. A total of 1,974 transit operators were contacted and 1,553 (79%) of them participated in a personal interview. Complete data on the variables in this analysis were available for 1,208 respondents (84% men). Using latent variable structural equation modeling, a model was tested that predicted that daily job problems, skipped meals and less social support from supervisor would increase alcohol consumption through the mediator, length of time to unwind and relax after work. Increased alcohol consumption was, in turn, hypothesized to increase drinking problems. As predicted, skipped meals and daily job problems increased length of time to unwind and had an indirect positive relationship with overall drinking, even when controlling for drinking norms and demographic variables. Overall drinking was positively associated with drinking problems. Supervisor support at work, however, did not significantly influence length of time to unwind. Difficulty unwinding (longer time to unwind) did not have direct effects on drinking problems; however, indirect effects through overall drinking were observed. These results provide preliminary support for the mediating role of length of time to unwind and relax after work in a spillover model of the stress-drinking relationship. This research introduces a new mediator and empirical links between job problems, length of time to unwind, drinking and drinking problems, which ground more substantively the domains of work stress and alcohol consumption.

  19. Integrative Annotation of 21,037 Human Genes Validated by Full-Length cDNA Clones

    PubMed Central

    Imanishi, Tadashi; Itoh, Takeshi; Suzuki, Yutaka; O'Donovan, Claire; Fukuchi, Satoshi; Koyanagi, Kanako O; Barrero, Roberto A; Tamura, Takuro; Yamaguchi-Kabata, Yumi; Tanino, Motohiko; Yura, Kei; Miyazaki, Satoru; Ikeo, Kazuho; Homma, Keiichi; Kasprzyk, Arek; Nishikawa, Tetsuo; Hirakawa, Mika; Thierry-Mieg, Jean; Thierry-Mieg, Danielle; Ashurst, Jennifer; Jia, Libin; Nakao, Mitsuteru; Thomas, Michael A; Mulder, Nicola; Karavidopoulou, Youla; Jin, Lihua; Kim, Sangsoo; Yasuda, Tomohiro; Lenhard, Boris; Eveno, Eric; Suzuki, Yoshiyuki; Yamasaki, Chisato; Takeda, Jun-ichi; Gough, Craig; Hilton, Phillip; Fujii, Yasuyuki; Sakai, Hiroaki; Tanaka, Susumu; Amid, Clara; Bellgard, Matthew; Bonaldo, Maria de Fatima; Bono, Hidemasa; Bromberg, Susan K; Brookes, Anthony J; Bruford, Elspeth; Carninci, Piero; Chelala, Claude; Couillault, Christine; de Souza, Sandro J.; Debily, Marie-Anne; Devignes, Marie-Dominique; Dubchak, Inna; Endo, Toshinori; Estreicher, Anne; Eyras, Eduardo; Fukami-Kobayashi, Kaoru;
Gopinath, Gopal R; Graudens, Esther; Hahn, Yoonsoo; Han, Michael; Han, Ze-Guang; Hanada, Kousuke; Hanaoka, Hideki; Harada, Erimi; Hashimoto, Katsuyuki; Hinz, Ursula; Hirai, Momoki; Hishiki, Teruyoshi; Hopkinson, Ian; Imbeaud, Sandrine; Inoko, Hidetoshi; Kanapin, Alexander; Kaneko, Yayoi; Kasukawa, Takeya; Kelso, Janet; Kersey, Paul; Kikuno, Reiko; Kimura, Kouichi; Korn, Bernhard; Kuryshev, Vladimir; Makalowska, Izabela; Makino, Takashi; Mano, Shuhei; Mariage-Samson, Regine; Mashima, Jun; Matsuda, Hideo; Mewes, Hans-Werner; Minoshima, Shinsei; Nagai, Keiichi; Nagasaki, Hideki; Nagata, Naoki; Nigam, Rajni; Ogasawara, Osamu; Ohara, Osamu; Ohtsubo, Masafumi; Okada, Norihiro; Okido, Toshihisa; Oota, Satoshi; Ota, Motonori; Ota, Toshio; Otsuki, Tetsuji; Piatier-Tonneau, Dominique; Poustka, Annemarie; Ren, Shuang-Xi; Saitou, Naruya; Sakai, Katsunaga; Sakamoto, Shigetaka; Sakate, Ryuichi; Schupp, Ingo; Servant, Florence; Sherry, Stephen; Shiba, Rie; Shimizu, Nobuyoshi; Shimoyama, Mary; Simpson, Andrew J; Soares, Bento; Steward, Charles; Suwa, Makiko; Suzuki, Mami; Takahashi, Aiko; Tamiya, Gen; Tanaka, Hiroshi; Taylor, Todd; Terwilliger, Joseph D; Unneberg, Per; Veeramachaneni, Vamsi; Watanabe, Shinya; Wilming, Laurens; Yasuda, Norikazu; Yoo, Hyang-Sook; Stodolsky, Marvin; Makalowski, Wojciech; Go, Mitiko; Nakai, Kenta; Takagi, Toshihisa; Kanehisa, Minoru; Sakaki, Yoshiyuki; Quackenbush, John; Okazaki, Yasushi; Hayashizaki, Yoshihide; Hide, Winston; Chakraborty, Ranajit; Nishikawa, Ken; Sugawara, Hideaki; Tateno, Yoshio; Chen, Zhu; Oishi, Michio; Tonellato, Peter; Apweiler, Rolf; Okubo, Kousaku; Wagner, Lukas; Wiemann, Stefan; Strausberg, Robert L; Isogai, Takao; Auffray, Charles; Nomura, Nobuo; Sugano, Sumio

    2004-01-01

    The human genome sequence defines our inherent biological potential; the realization of the biology encoded therein requires knowledge of the function of each gene. Currently, our knowledge in this area is still limited. Several lines of investigation have been used to elucidate the structure and function of the genes in the human genome. Even so, gene prediction remains a difficult task, as the varieties of transcripts of a gene may vary to a great extent. We thus performed an exhaustive integrative characterization of 41,118 full-length cDNAs that capture the gene transcripts as complete functional cassettes, providing an unequivocal report of structural and functional diversity at the gene level. Our international collaboration has validated 21,037 human gene candidates by analysis of high-quality full-length cDNA clones through curation using unified criteria. This led to the identification of 5,155 new gene candidates. It also manifested the most reliable way to control the quality of the cDNA clones. We have developed a human gene database, called the H-Invitational Database (H-InvDB; http://www.h-invitational.jp/). It provides the following: integrative annotation of human genes, description of gene structures, details of novel alternative splicing isoforms, non-protein-coding RNAs, functional domains, subcellular localizations, metabolic pathways, predictions of protein three-dimensional structure, mapping of known single nucleotide polymorphisms (SNPs), identification of polymorphic microsatellite repeats within human genes, and comparative results with mouse full-length cDNAs. The H-InvDB analysis has shown that up to 4% of the human genome sequence (National Center for Biotechnology Information build 34 assembly) may contain misassembled or missing regions. We found that 6.5% of the human gene candidates (1,377 loci) did not have a good protein-coding open reading frame, of which 296 loci are strong candidates for non-protein-coding RNA genes. 
In addition, among 72,027 uniquely mapped SNPs and insertions/deletions localized within human genes, 13,215 nonsynonymous SNPs, 315 nonsense SNPs, and 452 indels occurred in coding regions. Together with 25 polymorphic microsatellite repeats present in coding regions, they may alter protein structure, causing phenotypic effects or resulting in disease. The H-InvDB platform represents a substantial contribution to resources needed for the exploration of human biology and pathology. PMID:15103394

  20. Improved accuracy of supervised CRM discovery with interpolated Markov models and cross-species comparison

    PubMed Central

    Kazemian, Majid; Zhu, Qiyun; Halfon, Marc S.; Sinha, Saurabh

    2011-01-01

    Despite recent advances in experimental approaches for identifying transcriptional cis-regulatory modules (CRMs, ‘enhancers’), direct empirical discovery of CRMs for all genes in all cell types and environmental conditions is likely to remain an elusive goal. Effective methods for computational CRM discovery are thus a critically needed complement to empirical approaches. However, existing computational methods that search for clusters of putative binding sites are ineffective if the relevant TFs and/or their binding specificities are unknown. Here, we provide a significantly improved method for ‘motif-blind’ CRM discovery that does not depend on knowledge or accurate prediction of TF-binding motifs and is effective when limited knowledge of functional CRMs is available to ‘supervise’ the search. We propose a new statistical method, based on ‘Interpolated Markov Models’, for motif-blind, genome-wide CRM discovery. It captures the statistical profile of variable length words in known CRMs of a regulatory network and finds candidate CRMs that match this profile. The method also uses orthologs of the known CRMs from closely related genomes. We perform in silico evaluation of predicted CRMs by assessing whether their neighboring genes are enriched for the expected expression patterns. This assessment uses a novel statistical test that extends the widely used Hypergeometric test of gene set enrichment to account for variability in intergenic lengths. We find that the new CRM prediction method is superior to existing methods. Finally, we experimentally validate 12 new CRM predictions by examining their regulatory activity in vivo in Drosophila; 10 of the tested CRMs were found to be functional, while 6 of the top 7 predictions showed the expected activity patterns. We make our program available as downloadable source code, and as a plugin for a genome browser installed on our servers. PMID:21821659
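    The core of an interpolated Markov model is straightforward to sketch: keep counts for every order up to k, and when scoring a base, blend the order-specific estimates, trusting higher orders only when their context was seen often enough in the known CRMs. The count-based weighting and add-one smoothing below are simple illustrative choices, not necessarily the paper's exact interpolation scheme:

```python
import math
from collections import defaultdict

class InterpolatedMarkovModel:
    """Blend Markov-chain estimates of orders 0..max_order, trusting each
    order in proportion to how often its context appeared in training."""

    def __init__(self, max_order=3, threshold=10):
        self.max_order = max_order
        self.threshold = threshold  # context count at which an order is fully trusted
        self.counts = [defaultdict(lambda: defaultdict(int))
                       for _ in range(max_order + 1)]

    def train(self, seq):
        for i in range(len(seq)):
            for k in range(min(i, self.max_order) + 1):
                self.counts[k][seq[i - k:i]][seq[i]] += 1

    def prob(self, context, base):
        p = 0.25  # order "-1" fallback: uniform over A, C, G, T
        for k in range(self.max_order + 1):
            if k > len(context):
                break
            ctx_counts = self.counts[k].get(context[len(context) - k:])
            if not ctx_counts:
                break
            n = sum(ctx_counts.values())
            lam = min(1.0, n / self.threshold)               # trust in order k
            pk = (ctx_counts.get(base, 0) + 1) / (n + 4)     # add-one smoothing
            p = (1 - lam) * p + lam * pk
        return p

    def log_score(self, seq):
        """Log-likelihood of a candidate window under the model."""
        return sum(math.log(self.prob(seq[:i], seq[i])) for i in range(len(seq)))

# Train on a toy "known CRM" and score candidate windows: the familiar motif
# scores higher than an unrelated sequence.
imm = InterpolatedMarkovModel(max_order=2, threshold=2)
imm.train("ACGACGACGACGACG")
print(imm.log_score("ACGACG"), imm.log_score("TTTTTT"))
```

    Genome-wide, candidate windows would be ranked by the difference between their score under the CRM model and under a background model trained on non-regulatory sequence.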

  1. Analysis of National Rates, Cost, and Sources of Cost Variation in Adult Spinal Deformity.

    PubMed

    Zygourakis, Corinna C; Liu, Caterina Y; Keefe, Malla; Moriates, Christopher; Ratliff, John; Dudley, R Adams; Gonzales, Ralph; Mummaneni, Praveen V; Ames, Christopher P

    2018-03-01

    Several studies suggest significant variation in cost for spine surgery, but there has been little research in this area for spinal deformity. To determine the utilization, cost, and factors contributing to cost for spinal deformity surgery. The cohort comprised 55 599 adults who underwent spinal deformity fusion in the 2001 to 2013 National Inpatient Sample database. Patient variables included age, gender, insurance, median income of zip code, county population, severity of illness, mortality risk, number of comorbidities, length of stay, elective vs nonelective case. Hospital variables included bed size, wage index, hospital type (rural, urban nonteaching, urban teaching), and geographical region. The outcome was total hospital cost for deformity surgery. Statistics included univariate and multivariate regression analyses. The number of spinal deformity cases increased from 1803 in 2001 (rate: 4.16 per 100 000 adults) to 6728 in 2013 (rate: 13.9 per 100 000). Utilization of interbody fusion devices increased steadily during this time period, while bone morphogenic protein usage peaked in 2010 and declined thereafter. The mean inflation-adjusted case cost rose from $32 671 to $43 433 over the same time period. Multivariate analyses showed the following patient factors were associated with cost: age, race, insurance, severity of illness, length of stay, and elective admission (P < .01). Hospitals in the western United States and those with higher wage indices or smaller bed sizes were significantly more expensive (P < .05). The rate of adult spinal deformity surgery and the mean case cost increased from 2001 to 2013, exceeding the rate of inflation. Both patient and hospital factors are important contributors to cost variation for spinal deformity surgery. Copyright © 2017 by the Congress of Neurological Surgeons

  2. Arc Length Coding by Interference of Theta Frequency Oscillations May Underlie Context-Dependent Hippocampal Unit Data and Episodic Memory Function

    ERIC Educational Resources Information Center

    Hasselmo, Michael E.

    2007-01-01

    Many memory models focus on encoding of sequences by excitatory recurrent synapses in region CA3 of the hippocampus. However, data and modeling suggest an alternate mechanism for encoding of sequences in which interference between theta frequency oscillations encodes the position within a sequence based on spatial arc length or time. Arc length…

  3. Code Analysis and Refactoring with Clang Tools, Version 0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelley, Timothy M.

    2016-12-23

    Code Analysis and Refactoring with Clang Tools is a small set of example code that demonstrates techniques for applying tools distributed with the open source Clang compiler. Examples include analyzing where variables are used and replacing old data structures with standard structures.

  4. User's manual for Axisymmetric Diffuser Duct (ADD) code. Volume 1: General ADD code description

    NASA Technical Reports Server (NTRS)

    Anderson, O. L.; Hankins, G. B., Jr.; Edwards, D. E.

    1982-01-01

This User's Manual contains a complete description of the computer codes known as the AXISYMMETRIC DIFFUSER DUCT code or ADD code. It includes a list of references which describe the formulation of the ADD code and comparisons of calculations with experimental flows. The input/output and general use of the code are described in the first volume. The second volume contains a detailed description of the code including the global structure of the code, list of FORTRAN variables, and descriptions of the subroutines. The third volume contains a detailed description of the CODUCT code which generates coordinate systems for arbitrary axisymmetric ducts.

  5. Color visualization of cyclic magnitudes

    NASA Astrophysics Data System (ADS)

    Restrepo, Alfredo; Estupiñán, Viviana

    2014-02-01

We exploit the perceptual, circular ordering of the hues in a technique for the visualization of cyclic variables. The hue is thus meaningfully used for the indication of variables such as the azimuth and the units of the measurement of time. The cyclic (or circular) variables may be either continuous or discrete; among the first there is azimuth, and among the last you find the musical notes and the days of the week. A correspondence between the values of a cyclic variable and the chromatic hues, where the natural circular ordering of the variable is respected, is called a color code for the variable. We base such a choice of hues on an assignment of the unique hues red, yellow, green and blue, or one of the 8 even permutations of this ordered list, to 4 cardinal values of the cyclic variable, suitably ordered; color codes based on only 3 cardinal points are also possible. Color codes, being intuitive, are easy to remember. A possible low accuracy when reading instruments that use this technique is compensated by fast, ludic and intuitive readings; also, the use of a referential frame makes readings precise. An achromatic version of the technique, which can be used by dichromatic people, is proposed.
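The correspondence between a cyclic variable and the hue circle can be sketched with the standard library's HSV conversion. The even hue spacing and the day-of-week example below are ours for illustration, not the paper's exact cardinal-hue assignment:

```python
import colorsys

def cyclic_color(value, period):
    """Map a cyclic variable onto the hue circle, respecting its ordering.

    value in [0, period) gets a hue fraction in [0, 1), so equal steps of
    the variable give equal steps around the hue circle, and the variable
    wraps around just as the hues do.
    """
    hue = (value % period) / period
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)  # full saturation/brightness
    return tuple(round(255 * c) for c in (r, g, b))

# Days of the week as a discrete cyclic variable (hue assignment illustrative)
for i, day in enumerate(["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]):
    print(day, cyclic_color(i, 7))
```

Because the mapping is modular, `cyclic_color(7, 7)` returns the same color as `cyclic_color(0, 7)`, preserving the circular ordering of the variable.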

  6. Focal length hysteresis of a double-liquid lens based on electrowetting

    NASA Astrophysics Data System (ADS)

    Peng, Runling; Wang, Dazhen; Hu, Zhiwei; Chen, Jiabi; Zhuang, Songlin

    2013-02-01

    In this paper, an extended Young equation especially suited for an ideal cylindrical double-liquid variable-focus lens is derived by means of an energy minimization method. Based on the extended Young equation, a kind of focal length hysteresis effect is introduced into the double-liquid variable-focus lens. Such an effect can be explained theoretically by adding a force of friction to the tri-phase contact line. Theoretical analysis shows that the focal length at a particular voltage can be different depending on whether the applied voltage is increasing or decreasing, that is, there is a focal length hysteresis effect. Moreover, the focal length at a particular voltage must be larger when the voltage is rising than when it is dropping. These conclusions are also verified by experiments.

  7. Relationship between balance performance in the elderly and some anthropometric variables.

    PubMed

    Fabunmi, A A; Gbiri, C A

    2008-12-01

Ability to maintain either static or dynamic balance has been found to be influenced by many factors such as height and weight in the elderly. The relationship between other anthropometric variables and balance performance among elderly Nigerians has not been widely studied. The aim of this study was to investigate the relationship between these other anthropometric variables and balance performance among old individuals aged >60 years in Ibadan, Nigeria. The study used the ex-post facto design and involved two hundred and three apparently healthy (103 males and 100 females) elderly participants aged between 60 years and 74 years, selected using multiple step-wise sampling techniques from churches, mosques and marketplaces within Ibadan. They had no history of neurological problems, postural hypotension, orthopaedic conditions or injury to the back and/or upper and lower extremities within the past one year. Selected anthropometric variables were measured, and the Sharpened Romberg Test (SRT) and Functional Reach Test (FRT) were used to assess static and dynamic balance, respectively. All data were summarized using range, mean and standard deviation. Pearson's product moment correlation coefficient was used to determine the relationship between the physical characteristics, anthropometric variables and performance on each of the two balance tests. The results showed that there were low but significant positive correlations between performance on FRT and each of height, weight, trunk length, foot length, shoulder girth and hip girth (p<0.05). There was a low but significant positive correlation between SRT with eyes closed and each of arm length, foot length and shoulder girth (p<0.05), and a low but significant positive correlation between SRT with eyes opened and each of shoulder girth and foot length (p<0.05). Anthropometric variables affect balance performance in apparently healthy elderly people.

  8. Implementation of generalized quantum measurements: Superadditive quantum coding, accessible information extraction, and classical capacity limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takeoka, Masahiro; Fujiwara, Mikio; Mizuno, Jun

    2004-05-01

Quantum-information theory predicts that when the transmission resource is doubled in quantum channels, the amount of information transmitted can be increased more than twice by quantum-channel coding technique, whereas the increase is at most twice in classical information theory. This remarkable feature, the superadditive quantum-coding gain, can be implemented by appropriate choices of code words and corresponding quantum decoding which requires a collective quantum measurement. Recently, an experimental demonstration was reported [M. Fujiwara et al., Phys. Rev. Lett. 90, 167906 (2003)]. The purpose of this paper is to describe our experiment in detail. Particularly, a design strategy of quantum-collective decoding in physical quantum circuits is emphasized. We also address the practical implication of the gain on communication performance by introducing the quantum-classical hybrid coding scheme. We show how the superadditive quantum-coding gain, even in a small code length, can boost the communication performance of conventional coding techniques.

  9. Exploiting the cannibalistic traits of Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Collins, O.

    1993-01-01

    In Reed-Solomon codes and all other maximum distance separable codes, there is an intrinsic relationship between the size of the symbols in a codeword and the length of the codeword. Increasing the number of symbols in a codeword to improve the efficiency of the coding system thus requires using a larger set of symbols. However, long Reed-Solomon codes are difficult to implement and many communications or storage systems cannot easily accommodate an increased symbol size, e.g., M-ary frequency shift keying (FSK) and photon-counting pulse-position modulation demand a fixed symbol size. A technique for sharing redundancy among many different Reed-Solomon codewords to achieve the efficiency attainable in long Reed-Solomon codes without increasing the symbol size is described. Techniques both for calculating the performance of these new codes and for determining their encoder and decoder complexities is presented. These complexities are usually found to be substantially lower than conventional Reed-Solomon codes of similar performance.
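The intrinsic relationship mentioned above follows from the field size: a Reed-Solomon code over GF(2^s) has codewords of at most 2^s - 1 symbols, so longer codewords require larger symbols. A quick illustration:

```python
def max_rs_codeword_length(symbol_bits):
    """Maximum Reed-Solomon codeword length (in symbols) over GF(2**s)."""
    return 2 ** symbol_bits - 1

# Longer codewords demand bigger symbols: the coupling the paper works around
for s in (4, 8, 12):
    print(f"{s}-bit symbols allow codewords up to {max_rs_codeword_length(s)} symbols")
```

This is exactly why a system locked to, say, 8-bit symbols cannot simply lengthen its Reed-Solomon codewords beyond 255 symbols, motivating the redundancy-sharing technique described.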

  10. Rate adaptive multilevel coded modulation with high coding gain in intensity modulation direct detection optical communication

    NASA Astrophysics Data System (ADS)

    Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao

    2018-02-01

A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on fixed code length and a corresponding decoding scheme is proposed. The RA-MLC scheme combines multilevel coding and modulation technology with a binary linear block code at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to a preset rule, and the signal is then transmitted through a standard single-mode fiber span of 100 km. The receiver improves decoding accuracy by passing soft information between layers, which enhances performance. Simulations are carried out for an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduced the number of decoders by 72% and realized 22 rate adaptations without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER = 1E-3.

  11. Cooperative MIMO communication at wireless sensor network: an error correcting code approach.

    PubMed

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller. Also the lower encoding rate for LDPC code offers better error characteristics.
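The message/parity trade-off that sets the code rate can be illustrated generically. The sketch below uses a toy single-parity-check code for brevity; it is not the authors' LDPC construction:

```python
def code_rate(k, m):
    """Rate of a systematic block code with k message bits and m parity bits."""
    return k / (k + m)

def syndrome(H, word):
    """GF(2) syndrome of a word; all-zero means the word is a valid codeword."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

# Toy single-parity-check code, rate 3/4: the parity bit is the XOR of the
# three message bits. More parity bits (larger m) would lower the rate.
H = [[1, 1, 1, 1]]
codeword = [1, 0, 1, 0]   # parity 1^0^1 = 0, so the word checks out
corrupted = [1, 1, 1, 0]  # a single flipped bit is detected

print(code_rate(3, 1))        # 0.75
print(syndrome(H, codeword))  # [0]
print(syndrome(H, corrupted)) # [1]
```

Lowering the rate (more parity per message bit) buys error-correcting strength at the cost of throughput, which is the trade-off the BER comparisons in the paper quantify.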

  12. Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach

    PubMed Central

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error pb. It is observed that C-MIMO performs more efficiently when the targeted pb is smaller. Also the lower encoding rate for LDPC code offers better error characteristics. PMID:22163732

  13. Liquid Lens module with wide field-of-view and variable focal length

    NASA Astrophysics Data System (ADS)

    Seo, Sang Won; Han, Seungoh; Seo, Jun Ho; Choi, Woo Bum; Sung, Man Young

    2010-12-01

    A novel wide angle and variable-focus imaging module based on a miniaturized liquid lens is presented for capsule endoscopy applications. For these applications, it is desirable to have features such as a wide field of view (FOV), variable focus, small size, and low power consumption, thereby taking full advantage of the miniaturized liquid lens. The proposed imaging module has three aspheric plastic lenses for a wide FOV, and one liquid lens that can change the focal length by as much as 24.5 cm with a bias voltage difference of 23 Vrms for variable focusing. The assembled lens module has an overall length of 8.4 mm and a FOV of 120.5°. The realized imaging module including the proposed lenses is small enough to be inserted into a capsule endoscope, and it is expected to improve the diagnostic capability of capsule endoscopes.

  14. Development and validation of a low-frequency modeling code for high-moment transmitter rod antennas

    NASA Astrophysics Data System (ADS)

    Jordan, Jared Williams; Sternberg, Ben K.; Dvorak, Steven L.

    2009-12-01

    The goal of this research is to develop and validate a low-frequency modeling code for high-moment transmitter rod antennas to aid in the design of future low-frequency TX antennas with high magnetic moments. To accomplish this goal, a quasi-static modeling algorithm was developed to simulate finite-length, permeable-core, rod antennas. This quasi-static analysis is applicable for low frequencies where eddy currents are negligible, and it can handle solid or hollow cores with winding insulation thickness between the antenna's windings and its core. The theory was programmed in Matlab, and the modeling code has the ability to predict the TX antenna's gain, maximum magnetic moment, saturation current, series inductance, and core series loss resistance, provided the user enters the corresponding complex permeability for the desired core magnetic flux density. In order to utilize the linear modeling code to model the effects of nonlinear core materials, it is necessary to use the correct complex permeability for a specific core magnetic flux density. In order to test the modeling code, we demonstrated that it can accurately predict changes in the electrical parameters associated with variations in the rod length and the core thickness for antennas made out of low carbon steel wire. These tests demonstrate that the modeling code was successful in predicting the changes in the rod antenna characteristics under high-current nonlinear conditions due to changes in the physical dimensions of the rod provided that the flux density in the core was held constant in order to keep the complex permeability from changing.
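As a rough sanity check on such models, textbook approximations already capture how rod geometry drives the electrical parameters. The sketch below combines the long-solenoid inductance formula with a common demagnetizing-factor approximation for cylindrical cores; it is a generic illustration under stated assumptions (winding spans the whole rod, length much greater than diameter), not the paper's quasi-static algorithm:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def rod_effective_permeability(mu_r, length, diameter):
    """Demagnetization-corrected permeability of a finite cylindrical rod.

    Uses a common approximation for the axial demagnetizing factor of a
    cylinder; reasonable only when length >> diameter.
    """
    nd = (diameter / length) ** 2 * (math.log(2 * length / diameter) - 1)
    return mu_r / (1 + nd * (mu_r - 1))

def rod_inductance(mu_r, length, diameter, turns):
    """Long-solenoid inductance (H) with a permeable finite-length core."""
    area = math.pi * (diameter / 2) ** 2
    mu_rod = rod_effective_permeability(mu_r, length, diameter)
    return MU0 * mu_rod * turns ** 2 * area / length

# Doubling rod length reduces demagnetization, raising effective permeability
short = rod_effective_permeability(2000, 0.2, 0.01)
long_ = rod_effective_permeability(2000, 0.4, 0.01)
print(short < long_)  # True
```

Even this crude model reproduces the qualitative behavior the paper validates: changing rod length and core thickness shifts the antenna's inductance and effective gain.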

  15. Run-length encoding graphic rules, biochemically editable designs and steganographical numeric data embedment for DNA-based cryptographical coding system

    PubMed Central

    Kawano, Tomonori

    2013-01-01

There have been a wide variety of approaches for handling pieces of DNA as “unplugged” tools for digital information storage and processing, including a series of studies applied to security-related areas, such as DNA-based digital barcodes, watermarks and cryptography. In the present article, novel designs of artificial genes as media for storing digitally compressed image data are proposed for bio-computing purposes, while natural genes principally encode proteins. Furthermore, the proposed system allows cryptographical application of DNA through biochemically editable designs with capacity for steganographical numeric data embedment. As a model case of image-coding DNA technique application, numerically and biochemically combined protocols are employed for ciphering given “passwords” and/or secret numbers using DNA sequences. The “passwords” of interest were decomposed into single letters and translated into font images coded on separate DNA chains, with both coding regions, in which the images are encoded based on the novel run-length encoding rule, and non-coding regions designed for the biochemical editing and remodeling processes that reveal the hidden orientation of letters composing the original “passwords.” The latter processes require molecular biological tools for digestion and ligation of the fragmented DNA molecules, targeting the polymerase chain reaction-engineered termini of the chains. Lastly, additional protocols for steganographical overwriting of numeric data of interest over the image-coding DNA are also discussed. PMID:23750303
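The run-length encoding step for a 1-bit font image can be sketched as follows. The bit string and the (bit, run) representation are illustrative; the paper's scheme further maps such runs onto DNA bases, which is omitted here:

```python
def rle_encode(bits):
    """Run-length encode a binary string as (bit, run length) pairs."""
    runs, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))
        i = j
    return runs

def rle_decode(runs):
    """Invert rle_encode, reconstructing the original binary string."""
    return "".join(bit * length for bit, length in runs)

# One row of a 1-bit font "image"
row = "0001111000011000"
encoded = rle_encode(row)
print(encoded)                     # [('0', 3), ('1', 4), ('0', 4), ('1', 2), ('0', 3)]
print(rle_decode(encoded) == row)  # True
```

The encoding is lossless, so the font image, and hence the letter it depicts, is exactly recoverable from the stored runs.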

  16. Paraxial ray solution for liquid-filled variable focus lenses

    NASA Astrophysics Data System (ADS)

    Wang, Lihui; Oku, Hiromasa; Ishikawa, Masatoshi

    2017-12-01

    We propose a general solution for determining the cardinal points and effective focal length of a liquid-filled variable focus lens to aid in understanding the dynamic behavior of the lens when the focal length is changed. A prototype of a variable focus lens was fabricated and used to validate the solution. A simplified solution was also presented that can be used to quickly and conveniently calculate the performance of the lens. We expect that the proposed solutions will improve the design of optical systems that contain variable focus lenses, such as machine vision systems with zoom and focus functions.
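For a two-surface lens, the paraxial effective focal length follows from the standard thick-lens lensmaker's equation. The sketch below is that textbook relation, not the authors' full cardinal-point solution, and the index and radii are illustrative:

```python
def thick_lens_efl(n, r1, r2, t):
    """Effective focal length from the thick-lens lensmaker's equation.

    n: refractive index of the lens liquid (surroundings assumed n = 1);
    r1, r2: surface radii of curvature (positive when the center of
    curvature lies beyond the surface); t: center thickness. Lengths in m.
    """
    power = (n - 1) * (1 / r1 - 1 / r2 + (n - 1) * t / (n * r1 * r2))
    return 1 / power

# Changing the liquid interface curvature changes the focal length
print(thick_lens_efl(1.5, 0.05, -0.05, 0.005))  # strongly curved biconvex
print(thick_lens_efl(1.5, 0.10, -0.10, 0.005))  # flatter, longer focal length
```

Flattening the surfaces lowers the optical power and lengthens the focal length, which is the dynamic behavior a variable-focus liquid lens exploits.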

  17. Automated Diagnosis Coding with Combined Text Representations.

    PubMed

    Berndorfer, Stefan; Henriksson, Aron

    2017-01-01

    Automated diagnosis coding can be provided efficiently by learning predictive models from historical data; however, discriminating between thousands of codes while allowing a variable number of codes to be assigned is extremely difficult. Here, we explore various text representations and classification models for assigning ICD-9 codes to discharge summaries in MIMIC-III. It is shown that the relative effectiveness of the investigated representations depends on the frequency of the diagnosis code under consideration and that the best performance is obtained by combining models built using different representations.

  18. Wing Weight Optimization Under Aeroelastic Loads Subject to Stress Constraints

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Issac, J.; Macmurdy, D.; Guruswamy, Guru P.

    1997-01-01

A minimum weight optimization of the wing under aeroelastic loads subject to stress constraints is carried out. The loads for the optimization are based on aeroelastic trim. The design variables are the thickness of the wing skins and planform variables. The composite plate structural model incorporates first-order shear deformation theory, the wing deflections are expressed using Chebyshev polynomials and a Rayleigh-Ritz procedure is adopted for the structural formulation. The aerodynamic pressures provided by the aerodynamic code at a discrete number of grid points are represented as a bilinear distribution on the composite plate code to solve for the deflections and stresses in the wing. The lifting-surface aerodynamic code FAST is presently being used to generate the pressure distribution over the wing. The envisioned ENSAERO/Plate is an aeroelastic analysis code which combines ENSAERO version 3.0 (for analysis of wing-body configurations) with the composite plate code.

  19. Addition of equilibrium air to an upwind Navier-Stokes code and other first steps toward a more generalized flow solver

    NASA Technical Reports Server (NTRS)

    Rosen, Bruce S.

    1991-01-01

    An upwind three-dimensional volume Navier-Stokes code is modified to facilitate modeling of complex geometries and flow fields represented by proposed National Aerospace Plane concepts. Code enhancements include an equilibrium air model, a generalized equilibrium gas model and several schemes to simplify treatment of complex geometric configurations. The code is also restructured for inclusion of an arbitrary number of independent and dependent variables. This latter capability is intended for eventual use to incorporate nonequilibrium/chemistry gas models, more sophisticated turbulence and transition models, or other physical phenomena which will require inclusion of additional variables and/or governing equations. Comparisons of computed results with experimental data and results obtained using other methods are presented for code validation purposes. Good correlation is obtained for all of the test cases considered, indicating the success of the current effort.

  20. SAC: Sheffield Advanced Code

    NASA Astrophysics Data System (ADS)

    Griffiths, Mike; Fedun, Viktor; Mumford, Stuart; Gent, Frederick

    2013-06-01

    The Sheffield Advanced Code (SAC) is a fully non-linear MHD code designed for simulations of linear and non-linear wave propagation in gravitationally strongly stratified magnetized plasma. It was developed primarily for the forward modelling of helioseismological processes and for the coupling processes in the solar interior, photosphere, and corona; it is built on the well-known VAC platform that allows robust simulation of the macroscopic processes in gravitationally stratified (non-)magnetized plasmas. The code has no limitations of simulation length in time imposed by complications originating from the upper boundary, nor does it require implementation of special procedures to treat the upper boundaries. SAC inherited its modular structure from VAC, thereby allowing modification to easily add new physics.

  1. Additional extensions to the NASCAP computer code, volume 1

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Katz, I.; Stannard, P. R.

    1981-01-01

    Extensions and revisions to a computer code that comprehensively analyzes problems of spacecraft charging (NASCAP) are documented. Using a fully three dimensional approach, it can accurately predict spacecraft potentials under a variety of conditions. Among the extensions are a multiple electron/ion gun test tank capability, and the ability to model anisotropic and time dependent space environments. Also documented are a greatly extended MATCHG program and the preliminary version of NASCAP/LEO. The interactive MATCHG code was developed into an extremely powerful tool for the study of material-environment interactions. The NASCAP/LEO, a three dimensional code to study current collection under conditions of high voltages and short Debye lengths, was distributed for preliminary testing.

  2. Palindromic repetitive DNA elements with coding potential in Methanocaldococcus jannaschii.

    PubMed

    Suyama, Mikita; Lathe, Warren C; Bork, Peer

    2005-10-10

We have identified 141 novel palindromic repetitive elements in the genome of the euryarchaeon Methanocaldococcus jannaschii. The total length of these elements is 14.3 kb, which corresponds to 0.9% of the total genomic sequence and 6.3% of all extragenic regions. The elements can be divided into three groups (MJRE1-3) based on sequence similarity. The low sequence identity within each of the groups suggests a rather old origin of these elements in M. jannaschii. Three MJRE2 elements were located within protein coding regions without disrupting the coding potential of the host genes, indicating that insertion of repeats might be a widespread mechanism to enhance sequence diversity in coding regions.
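The defining property of such elements, equality with their own reverse complement, is easy to test. This is a small illustration of the palindrome property only; the paper's elements were found by genome-scale repeat analysis, not by this check alone:

```python
def reverse_complement(seq):
    """Reverse complement of a DNA sequence."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[base] for base in reversed(seq))

def is_dna_palindrome(seq):
    """A DNA palindrome reads the same as its own reverse complement."""
    return seq == reverse_complement(seq)

print(is_dna_palindrome("GAATTC"))   # True: the EcoRI recognition site
print(is_dna_palindrome("GATTACA"))  # False
```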

  3. Using QR codes to enable quick access to information in acute cancer care.

    PubMed

    Upton, Joanne; Olsson-Brown, Anna; Marshall, Ernie; Sacco, Joseph

    2017-05-25

    Quick access to toxicity management information ensures timely access to steroids/immunosuppressive treatment for cancer patients experiencing immune-related adverse events, thus reducing length of hospital stays or avoiding hospital admission entirely. This article discusses a project to add a QR (quick response) code to a patient-held immunotherapy alert card. As QR code generation is free and the immunotherapy clinical management algorithms were already publicly available through the trust's clinical network website, the costs of integrating a QR code into the alert card, after printing, were low, while the potential benefits are numerous. Patient-held alert cards are widely used for patients receiving anti-cancer treatment, and this established standard of care has been modified to enable rapid access of information through the incorporation of a QR code.

  4. Concurrent error detecting codes for arithmetic processors

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1979-01-01

A method of concurrent error detection for arithmetic processors is described. Low-cost residue codes with check-length l and checkbase m = 2^l - 1 are described for checking the arithmetic operations of addition, subtraction, multiplication, division, complement, shift, and rotate. Of the three number representations, the signed-magnitude representation is preferred for residue checking. Two methods of residue generation are described: the standard method of using modulo m adders and the method of using a self-testing residue tree. A simple single-bit parity-check code is described for checking the logical operations of XOR, OR, and AND, and also the arithmetic operations of complement, shift, and rotate. For checking complement, shift, and rotate, the single-bit parity-check code is simpler to implement than the residue codes.
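A minimal sketch of residue checking with a low-cost checkbase, here l = 3 so m = 7, applied to concurrent checking of addition. The code is illustrative, not the paper's hardware design:

```python
def residue(x, l):
    """Residue of x modulo the low-cost checkbase m = 2**l - 1.

    Because m = 2**l - 1, the residue is the end-around-carry sum of the
    l-bit digits of x, which is what makes these checkbases cheap in hardware.
    """
    m = 2 ** l - 1
    r = 0
    while x:
        r += x & m   # accumulate the low l-bit digit
        x >>= l
    while r > m:     # fold carries back in (end-around carry)
        r = (r & m) + (r >> l)
    return 0 if r == m else r

def checked_add(a, b, l=3):
    """Concurrently check an addition: residues must add up modulo m."""
    m = 2 ** l - 1
    s = a + b
    ok = residue(s, l) == (residue(a, l) + residue(b, l)) % m
    return s, ok

print(checked_add(1234, 5678))  # (6912, True)
```

A fault that corrupts the sum is detected because the residue of the (wrong) result no longer matches the modular sum of the operand residues.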

  5. Digital plus analog output encoder

    NASA Technical Reports Server (NTRS)

    Hafle, R. S. (Inventor)

    1976-01-01

    The disclosed encoder is adapted to produce both digital and analog output signals corresponding to the angular position of a rotary shaft, or the position of any other movable member. The digital signals comprise a series of binary signals constituting a multidigit code word which defines the angular position of the shaft with a degree of resolution which depends upon the number of digits in the code word. The basic binary signals are produced by photocells actuated by a series of binary tracks on a code disc or member. The analog signals are in the form of a series of ramp signals which are related in length to the least significant bit of the digital code word. The analog signals are derived from sine and cosine tracks on the code disc.
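Binary tracks on code discs are commonly laid out in Gray code so that adjacent sectors differ in a single bit. The sketch below shows such a quantization; the 8-bit resolution is illustrative, and the patent's actual track layout may differ:

```python
def shaft_code_word(angle_deg, bits=8):
    """Quantize a shaft angle to an n-bit Gray-coded word.

    Gray code ensures adjacent sectors differ in exactly one bit, so a
    read taken while the disc sits on a sector boundary cannot produce a
    wildly wrong transient code word.
    """
    sectors = 1 << bits
    position = int(angle_deg % 360 / 360 * sectors) % sectors
    gray = position ^ (position >> 1)
    return format(gray, f"0{bits}b")

print(shaft_code_word(0.0))   # sector 0
print(shaft_code_word(1.41))  # sector 1 (resolution is 360/256, about 1.406 deg)
```

The analog ramp signals described above then interpolate position within one such sector, refining the least significant bit of the digital word.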

  6. Comparison of rate one-half, equivalent constraint length 24, binary convolutional codes for use with sequential decoding on the deep-space channel

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.

  7. Novel exon 1 protein-coding regions N-terminally extend human KCNE3 and KCNE4.

    PubMed

    Abbott, Geoffrey W

    2016-08-01

The 5 human (h)KCNE β subunits each regulate various cation channels and are linked to inherited cardiac arrhythmias. Reported here are previously undiscovered protein-coding regions in exon 1 of hKCNE3 and hKCNE4 that extend their encoded extracellular domains by 44 and 51 residues, which yields full-length proteins of 147 and 221 residues, respectively. Full-length hKCNE3 and hKCNE4 transcript and protein are expressed in multiple human tissues; for hKCNE4, only the longer protein isoform is detectable. Two-electrode voltage-clamp electrophysiology revealed that, when coexpressed in Xenopus laevis oocytes with various potassium channels, the newly discovered segment preserved conversion of KCNQ1 by hKCNE3 to a constitutively open channel, but prevented its inhibition of Kv4.2 and KCNQ4. hKCNE4 slowing of Kv4.2 inactivation and positive-shifted steady-state inactivation were also preserved in the longer form. In contrast, full-length hKCNE4 inhibition of KCNQ1 was limited to 40% at +40 mV vs. 80% inhibition by the shorter form, and augmentation of KCNQ4 activity by hKCNE4 was entirely abolished by the additional segment. Among the genome databases analyzed, the longer KCNE3 is confined to primates; full-length KCNE4 is widespread in vertebrates but is notably absent from Mus musculus. Findings highlight unexpected KCNE gene diversity, raise the possibility of dynamic regulation of KCNE partner modulation via splice variation, and suggest that the longer hKCNE3 and hKCNE4 proteins should be adopted in future mechanistic and genetic screening studies.-Abbott, G. W. Novel exon 1 protein-coding regions N-terminally extend human KCNE3 and KCNE4. © FASEB.

  8. A software program to measure the three-dimensional length of the spine from radiographic images: Validation and reliability assessment for adolescent idiopathic scoliosis.

    PubMed

    Berger, Steve; Hasler, Carol-Claudius; Grant, Caroline A; Zheng, Guoyan; Schumann, Steffen; Büchler, Philippe

    2017-01-01

    The aim of this study was to validate a new program which aims at measuring the three-dimensional length of the spine's midline based on two calibrated orthogonal radiographic images. The traditional uniplanar T1-S1 measurement method does not reflect the actual three-dimensional curvature of a scoliotic spine and is therefore inaccurate. The Spinal Measurement Software (SMS) offers a convenient alternative for measuring the spine's true length. The validity, inter- and intra-observer variability and usability of the program were evaluated. The usability was quantified based on a subjective questionnaire filled out by eight participants using the program for the first time. The validity and variability were assessed by comparing the length of five phantom spines measured based on CT-scan data and on radiographic images with the SMS. The lengths were measured independently by each participant using both techniques. The SMS is easy and intuitive to use, even for non-clinicians. The SMS measured spinal length with an error below 2 millimeters compared to length obtained using CT scan datasets. The inter- and intra-observer variability of the SMS measurements was below 5 millimeters. The SMS provides accurate measurement of the spinal length based on orthogonal radiographic images. The software is easy to use and could easily be integrated into the clinical workflow, replacing current approximations of the spinal length based on a single radiographic image such as the traditional T1-S1 measurement. Crown Copyright © 2016. Published by Elsevier Ireland Ltd. All rights reserved.

  9. Regression Analysis of Optical Coherence Tomography Disc Variables for Glaucoma Diagnosis.

    PubMed

    Richter, Grace M; Zhang, Xinbo; Tan, Ou; Francis, Brian A; Chopra, Vikas; Greenfield, David S; Varma, Rohit; Schuman, Joel S; Huang, David

    2016-08-01

    To report diagnostic accuracy of optical coherence tomography (OCT) disc variables using both time-domain (TD) and Fourier-domain (FD) OCT, and to improve the use of OCT disc variable measurements for glaucoma diagnosis through regression analyses that adjust for optic disc size and axial length-based magnification error. Observational, cross-sectional. In total, 180 normal eyes of 112 participants and 180 eyes of 138 participants with perimetric glaucoma from the Advanced Imaging for Glaucoma Study. Diagnostic variables evaluated from TD-OCT and FD-OCT were: disc area, rim area, rim volume, optic nerve head volume, vertical cup-to-disc ratio (CDR), and horizontal CDR. These were compared with overall retinal nerve fiber layer thickness and ganglion cell complex. Regression analyses were performed that corrected for optic disc size and axial length. Areas under the receiver operating characteristic curve (AUROC) were used to assess diagnostic accuracy before and after the adjustments. An index based on multiple logistic regression that combined optic disc variables with axial length was also explored with the aim of improving diagnostic accuracy of disc variables. Comparison of diagnostic accuracy of disc variables, as measured by AUROC. The unadjusted disc variables with the highest diagnostic accuracies were: rim volume for TD-OCT (AUROC=0.864) and vertical CDR (AUROC=0.874) for FD-OCT. Magnification correction significantly worsened diagnostic accuracy for rim variables, and while optic disc size adjustments partially restored diagnostic accuracy, the adjusted AUROCs were still lower. Axial length adjustments to disc variables in the form of multiple logistic regression indices led to a slight but insignificant improvement in diagnostic accuracy. Our various regression approaches were not able to significantly improve disc-based OCT glaucoma diagnosis. However, disc rim area and vertical CDR had very high diagnostic accuracy, and these disc variables can serve to complement additional OCT measurements for diagnosis of glaucoma.

  10. Accumulate-Repeat-Accumulate-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Thorpe, Jeremy

    2007-01-01

    Accumulate-repeat-accumulate-accumulate (ARAA) codes have been proposed, inspired by the recently proposed accumulate-repeat-accumulate (ARA) codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. ARAA codes can be regarded as serial turbo-like codes or as a subclass of low-density parity-check (LDPC) codes, and, like ARA codes, they have projected graph or protograph representations; these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The objective in proposing ARAA codes as a subclass of ARA codes was to enhance the error-floor performance of ARA codes while maintaining simple encoding structures and a low maximum variable-node degree.
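As a rough illustration of the serial chain named above (a toy bit-level sketch, not the actual protograph construction), each "accumulate" stage is a running XOR and "repeat" duplicates each bit; the interleaver `perm` is a hypothetical placeholder:

```python
def accumulate(bits):
    """Running XOR: the 1/(1+D) accumulator used in ARA-type codes."""
    out, state = [], 0
    for b in bits:
        state ^= b
        out.append(state)
    return out

def araa_encode_toy(info, repeat=3, perm=None):
    """Toy accumulate-repeat-accumulate-accumulate chain over a bit list.
    `perm` is a hypothetical interleaver (identity if omitted)."""
    stage = accumulate(info)                           # outer accumulator (precoder)
    stage = [b for b in stage for _ in range(repeat)]  # repetition
    if perm is not None:
        stage = [stage[i] for i in perm]               # interleave
    return accumulate(accumulate(stage))               # the two inner accumulators
```

The appeal of this structure is that every stage is trivially encodable in linear time, while the composition behaves like a strong LDPC code under belief-propagation decoding.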

  11. Midupper arm circumference and weight-for-length z scores have different associations with body composition: evidence from a cohort of Ethiopian infants.

    PubMed

    Grijalva-Eternod, Carlos S; Wells, Jonathan C K; Girma, Tsinuel; Kæstel, Pernille; Admassu, Bitiya; Friis, Henrik; Andersen, Gregers S

    2015-09-01

    A midupper arm circumference (MUAC) <115 mm and weight-for-height z score (WHZ) or weight-for-length z score (WLZ) less than -3, all of which are recommended to identify severe wasting in children, often identify different children. The reasons behind this poor agreement are not well understood. We investigated the association between these 2 anthropometric indexes and body composition to help understand why they identify different children as wasted. We analyzed weight, length, MUAC, fat mass (FM), and fat-free mass (FFM) data from 2470 measurements from 595 healthy Ethiopian infants obtained at birth and at 1.5, 2.5, 3.5, 4.5, and 6 mo of age. We derived WLZs by using 2006 WHO growth standards. We derived length-adjusted FM and FFM values as unexplained residuals after regressing each FM and FFM against length. We used a correlation analysis to assess associations between length, FFM, and FM (adjusted and nonadjusted for length) and the MUAC and WLZ and a multivariable regression analysis to assess the independent variability of length and length-adjusted FM and FFM with either the MUAC or the WLZ as the outcome. At all ages, length showed consistently strong positive correlations with the MUAC but not with the WLZ. Adjustment for length reduced observed correlation coefficients of FM and FFM with the MUAC but increased those for the WLZ. At all ages, both length-adjusted FM and FFM showed an independent association with the WLZ and MUAC with higher regression coefficients for the WLZ. Conversely, length showed greater regression coefficients for the MUAC. At all ages, the MUAC was shown to be more influenced than was the WLZ by the FM variability relative to the FFM variability. The MUAC and WLZ have different associations with body composition, and length influences these associations differently. Our results suggest that the WLZ is a good marker of tissue masses independent of length. The MUAC acts more as a composite index of poor growth, jointly indexing tissue masses and length. This trial was registered at www.controlled-trials.com as ISRCTN46718296. © 2015 American Society for Nutrition.
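The length-adjustment step described above, taking FM and FFM residuals after regressing each on length, can be sketched with an ordinary least-squares fit. This is an illustrative reconstruction; the study's actual models may differ:

```python
def length_adjusted(values, lengths):
    """Residuals of `values` after simple least-squares regression on
    `lengths`: the part of each measurement not explained by length."""
    n = len(lengths)
    mean_x = sum(lengths) / n
    mean_y = sum(values) / n
    sxx = sum((x - mean_x) ** 2 for x in lengths)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(lengths, values))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return [y - (intercept + slope * x) for x, y in zip(lengths, values)]
```

By construction the residuals are uncorrelated with length, which is what lets the analysis separate "tissue mass" variation from "length" variation.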

  12. Quasi-experimental study designs series-paper 9: collecting data from quasi-experimental studies.

    PubMed

    Aloe, Ariel M; Becker, Betsy Jane; Duvendack, Maren; Valentine, Jeffrey C; Shemilt, Ian; Waddington, Hugh

    2017-09-01

    To identify variables that must be coded when synthesizing primary studies that use quasi-experimental (QE) designs. All quasi-experimental designs. When designing a systematic review of QE studies, potential sources of heterogeneity, both theory-based and methodological, must be identified. We outline key components of inclusion criteria for syntheses of quasi-experimental studies. We provide recommendations for coding content-relevant and methodological variables and outline the distinction between bivariate effect sizes and partial (i.e., adjusted) effect sizes. The designs and controls used are viewed as of greatest importance. Potential sources of bias and confounding are also addressed. Careful consideration must be given to inclusion criteria and the coding of theoretical and methodological variables during the design phase of a synthesis of quasi-experimental studies. The success of a meta-regression analysis relies on the data available to the meta-analyst. Omission of critical moderator variables (i.e., effect modifiers) will undermine the conclusions of a meta-analysis. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Fast-response variable focusing micromirror array lens

    NASA Astrophysics Data System (ADS)

    Boyd, James G., IV; Cho, Gyoungil

    2003-07-01

    A reflective type Fresnel lens using an array of micromirrors is designed and fabricated using the MUMPs® surface micromachining process. The focal length of the lens can be rapidly changed by controlling both the rotation and translation of electrostatically actuated micromirrors. The rotation converges rays and the translation adjusts the optical path length difference of the rays to be integer multiples of the wavelength. The suspension spring, pedestal and electrodes are located under the mirror to maximize the optical efficiency. Relations are provided for the fill-factor and the numerical aperture as functions of the lens diameter, the mirror size, and the tolerances specified by the MUMPs® design rules. The fabricated lens is 1.8 mm in diameter, and each micromirror is approximately 100 μm × 100 μm. The lens fill-factor is 83.7%, the numerical aperture is 0.018 for a wavelength of 632.8 nm, and the resolution is approximately 22 μm, whereas the resolution of a perfect aberration-free lens is 21.4 μm for a NA of 0.018. The focal length ranges from 11.3 mm to infinity. The simulated Strehl ratio, which is the ratio of the point spread function maximum intensity to the theoretical diffraction-limited PSF maximum intensity, is 31.2%. A mechanical analysis was performed using the finite element code IDEAS. The combined maximum rotation and translation produces a maximum stress of 301 MPa, below the yield strength of polysilicon, 1.21 to 1.65 GPa. Potential applications include adaptive microscope lenses for scanning particle imaging velocimetry and visually aided micro-assembly.
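The quoted diffraction-limited figure follows from the Rayleigh criterion, resolution ≈ 0.61λ/NA; a quick check with the stated wavelength and numerical aperture reproduces the 21.4 μm value:

```python
def rayleigh_resolution(wavelength_m, numerical_aperture):
    """Diffraction-limited resolution from the Rayleigh criterion,
    0.61 * wavelength / NA, in the same length unit as the wavelength."""
    return 0.61 * wavelength_m / numerical_aperture
```

`rayleigh_resolution(632.8e-9, 0.018)` gives about 21.4 μm, matching the aberration-free resolution quoted above for this lens.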

  14. Crystal diffraction lens with variable focal length

    DOEpatents

    Smither, R.K.

    1991-04-02

    A method and apparatus for altering the focal length of a focusing element of one of a plurality of pre-determined focal lengths by changing heat transfer within selected portions of the element by controlled quantities is disclosed. Control over heat transfer is accomplished by manipulating one or more of a number of variables, including: the amount of heat or cold applied to surfaces; type of fluids pumped through channels for heating and cooling; temperatures, directions of flow and rates of flow of fluids; and placement of channels. 19 figures.

  15. Implications of Supermarket Access, Neighborhood Walkability, and Poverty Rates for Diabetes Risk in an Employee Population

    PubMed Central

    Herrick, Cynthia J.; Yount, Byron W.; Eyler, Amy A.

    2016-01-01

    Objective Diabetes is a growing public health problem, and the environment in which people live and work may affect diabetes risk. The goal of this study was to examine the association between multiple aspects of environment and diabetes risk in an employee population. Design This was a retrospective cross-sectional analysis. Home environment variables were derived using employee zip code. Descriptive statistics were run on all individual and zip code level variables, stratified by diabetes risk and worksite. A multivariable logistic regression analysis was then conducted to determine the strongest associations with diabetes risk. Setting Data was collected from employee health fairs in a Midwestern health system 2009–2012. Subjects The dataset contains 25,227 unique individuals across four years of data. From this group, using an individual’s first entry into the database, 15,522 individuals had complete data for analysis. Results The prevalence of high diabetes risk in this population was 2.3%. There was significant variability in individual and zip code level variables across worksites. From the multivariable analysis, living in a zip code with higher percent poverty and higher walk score was positively associated with high diabetes risk, while living in a zip code with higher supermarket density was associated with a reduction in high diabetes risk. Conclusions Our study underscores the important relationship between poverty, home neighborhood environment, and diabetes risk, even in a relatively healthy employed population, and suggests a role for the employer in promoting health. PMID:26638995

  16. Implications of supermarket access, neighbourhood walkability and poverty rates for diabetes risk in an employee population.

    PubMed

    Herrick, Cynthia J; Yount, Byron W; Eyler, Amy A

    2016-08-01

    Diabetes is a growing public health problem, and the environment in which people live and work may affect diabetes risk. The goal of the present study was to examine the association between multiple aspects of environment and diabetes risk in an employee population. This was a retrospective cross-sectional analysis. Home environment variables were derived using employees' zip code. Descriptive statistics were run on all individual- and zip-code-level variables, stratified by diabetes risk and worksite. A multivariable logistic regression analysis was then conducted to determine the strongest associations with diabetes risk. Data were collected from employee health fairs in a Midwestern health system, 2009-2012. The data set contains 25 227 unique individuals across four years of data. From this group, using an individual's first entry into the database, 15 522 individuals had complete data for analysis. The prevalence of high diabetes risk in this population was 2·3 %. There was significant variability in individual- and zip-code-level variables across worksites. From the multivariable analysis, living in a zip code with higher percentage of poverty and higher walk score was positively associated with high diabetes risk, while living in a zip code with higher supermarket density was associated with a reduction in high diabetes risk. Our study underscores the important relationship between poverty, home neighbourhood environment and diabetes risk, even in a relatively healthy employed population, and suggests a role for the employer in promoting health.

  17. A new code for Galileo

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1988-01-01

    Over the past six to eight years, an extensive research effort was conducted to investigate advanced coding techniques which promised to yield more coding gain than is available with current NASA standard codes. The delay in Galileo's launch due to the temporary suspension of the shuttle program provided the Galileo project with an opportunity to evaluate the possibility of including some version of the advanced codes as a mission enhancement option. A study was initiated last summer to determine if substantial coding gain was feasible for Galileo and, if so, to recommend a suitable experimental code for use as a switchable alternative to the current NASA-standard code. The Galileo experimental code study resulted in the selection of a code with constraint length 15 and rate 1/4. The code parameters were chosen to optimize performance within cost and risk constraints consistent with retrofitting the new code into the existing Galileo system design and launch schedule. The particular code was recommended after a very limited search among good codes with the chosen parameters. It will theoretically yield about 1.5 dB enhancement under idealizing assumptions relative to the current NASA-standard code at Galileo's desired bit error rates. This ideal predicted gain includes enough cushion to meet the project's target of at least 1 dB enhancement under real, non-ideal conditions.

  18. Simple tool for the rapid, automated quantification of glacier advance/retreat observations using multiple methods

    NASA Astrophysics Data System (ADS)

    Lea, J.

    2017-12-01

    The quantification of glacier change is a key variable within glacier monitoring, with the method used potentially being crucial to ensuring that data can be appropriately compared with environmental data. The topic and timescales of study (e.g. land/marine terminating environments; sub-annual/decadal/centennial/millennial timescales) often mean that different methods are more suitable for different problems. However, depending on the GIS/coding expertise of the user, some methods can potentially be time consuming to undertake, making large-scale studies problematic. In addition, examples exist where different users have nominally applied the same methods in different studies, though with minor methodological inconsistencies in their approach. In turn, this will have implications for data homogeneity where regional/global datasets may be constructed. Here, I present a simple toolbox scripted in a Matlab® environment that requires only glacier margin and glacier centreline data to quantify glacier length, glacier change between observations, rate of change, in addition to other metrics. The toolbox includes the option to apply the established centreline or curvilinear box methods, or a new method: the variable box method - designed for tidewater margins where box width is defined as the total width of the individual terminus observation. The toolbox is extremely flexible, and has the option to be applied as either Matlab® functions within user scripts, or via a graphical user interface (GUI) for those unfamiliar with a coding environment. In both instances, there is potential to apply the methods quickly to large datasets (100s-1000s of glaciers, with potentially similar numbers of observations each), thus ensuring large scale methodological consistency (and therefore data homogeneity) and allowing regional/global scale analyses to be achievable for those with limited GIS/coding experience. 
The toolbox has been evaluated against idealised scenarios demonstrating its accuracy, while feedback from undergraduate students who have trialled the toolbox is that it is intuitive and simple to use. When released, the toolbox will be free and open source allowing users to potentially modify, improve and expand upon the current version.
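The basic metrics the toolbox reports, length change between observations and rate of change, reduce to simple differencing once each dated observation has a centreline length. A minimal sketch (the function and field names are hypothetical, and the toolbox itself is written in Matlab®, not Python):

```python
from datetime import date

def length_change_rate(observations):
    """observations: list of (date, length_m) tuples sorted by date.
    Returns, for each consecutive pair, (change_m, rate_m_per_year)."""
    results = []
    for (d0, l0), (d1, l1) in zip(observations, observations[1:]):
        years = (d1 - d0).days / 365.25  # decimal years between observations
        change = l1 - l0                 # negative for retreat
        results.append((change, change / years))
    return results
```

Applying one such routine uniformly across hundreds of glaciers is precisely what gives the methodological consistency and data homogeneity argued for above.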

  19. Simulation realization of 2-D wavelength/time system utilizing MDW code for OCDMA system

    NASA Astrophysics Data System (ADS)

    Azura, M. S. A.; Rashidi, C. B. M.; Aljunid, S. A.; Endut, R.; Ali, N.

    2017-11-01

    This paper presents a realization of a Wavelength/Time (W/T) Two-Dimensional Modified Double Weight (2-D MDW) code for Optical Code Division Multiple Access (OCDMA) systems based on the Spectral Amplitude Coding (SAC) approach. The MDW code has the capability to suppress Phase-Induced Intensity Noise (PIIN) and minimize Multiple Access Interference (MAI) noise. At the permissible BER of 10⁻⁹, the 2-D MDW system with an APD receiver achieved a minimum effective received power (Psr) of -71 dBm at the receiver side, compared with -61 dBm for the PIN receiver. The results show that 2-D MDW (APD) has better performance in achieving the same BER with a longer optical fiber length and less received power (Psr). The BER results also confirm that the MDW code has the capability to suppress PIIN and MAI.

  20. Overlooking the obvious: a meta-analytic comparison of digit symbol coding tasks and other cognitive measures in schizophrenia.

    PubMed

    Dickinson, Dwight; Ramsey, Mary E; Gold, James M

    2007-05-01

    In focusing on potentially localizable cognitive impairments, the schizophrenia meta-analytic literature has overlooked the largest single impairment: on digit symbol coding tasks. To compare the magnitude of the schizophrenia impairment on coding tasks with impairments on other traditional neuropsychological instruments. MEDLINE and PsycINFO electronic databases and reference lists from identified articles. English-language studies from 1990 to present, comparing performance of patients with schizophrenia and healthy controls on coding tasks and cognitive measures representing at least 2 other cognitive domains. Of 182 studies identified, 40 met all criteria for inclusion in the meta-analysis. Means, standard deviations, and sample sizes were extracted for digit symbol coding and 36 other cognitive variables. In addition, we recorded potential clinical moderator variables, including chronicity/severity, medication status, age, and education, and potential study design moderators, including coding task variant, matching, and study publication date. Main analyses synthesized data from 37 studies comprising 1961 patients with schizophrenia and 1444 comparison subjects. Combination of mean effect sizes across studies by means of a random effects model yielded a weighted mean effect for digit symbol coding of g = -1.57 (95% confidence interval, -1.66 to -1.48). This effect compared with a grand mean effect of g = -0.98 and was significantly larger than effects for widely used measures of episodic memory, executive functioning, and working memory. Moderator variable analyses indicated that clinical and study design differences between studies had little effect on the coding task effect. Comparison with previous meta-analyses suggested that current results were representative of the broader literature. Subsidiary analysis of data from relatives of patients with schizophrenia also suggested prominent coding task impairments in this group. 
The 5-minute digit symbol coding task, reliable and easy to administer, taps an information processing inefficiency that is a central feature of the cognitive deficit in schizophrenia and deserves systematic investigation.
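The random-effects combination of effect sizes used above can be sketched with the standard DerSimonian-Laird estimator. This is a generic implementation of that textbook method, not the authors' exact analysis pipeline:

```python
def random_effects_mean(effects, variances):
    """DerSimonian-Laird random-effects meta-analysis.
    Returns the pooled effect size and its standard error."""
    w = [1.0 / v for v in variances]             # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * g for wi, g in zip(w, effects)) / sw
    q = sum(wi * (g - fixed) ** 2 for wi, g in zip(w, effects))  # heterogeneity Q
    k = len(effects)
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c) if c > 0 else 0.0  # between-study variance
    wr = [1.0 / (v + tau2) for v in variances]   # random-effects weights
    pooled = sum(wi * g for wi, g in zip(wr, effects)) / sum(wr)
    se = (1.0 / sum(wr)) ** 0.5
    return pooled, se
```

When between-study heterogeneity (Q) is no larger than expected by chance, tau² collapses to zero and the estimate reduces to the fixed-effect weighted mean.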

  1. The Use of Color-Coded Genograms in Family Therapy.

    ERIC Educational Resources Information Center

    Lewis, Karen Gail

    1989-01-01

    Describes a variable color-coding system which has been added to the standard family genogram in which characteristics or issues associated with a particular presenting problem or for a particular family are arbitrarily assigned a color. Presents advantages of color-coding, followed by clinical examples. (Author/ABL)

  2. New features in the design code Tlie

    NASA Astrophysics Data System (ADS)

    van Zeijts, Johannes

    1993-12-01

    We present features recently installed in the arbitrary-order accelerator design code Tlie. The code uses the MAD input language, and implements programmable extensions modeled after the C language that make it a powerful tool in a wide range of applications: from basic beamline design to high-precision, high-order design and even control-room applications. The basic quantities important in accelerator design are easily accessible from inside the control language. Entities like parameters in elements (strength, current), transfer maps (either in Taylor series or in Lie algebraic form), lines, and beams (either as sets of particles or as distributions) are among the types of variables available. These variables can be set, used as arguments in subroutines, or just typed out. The code is easily extensible with new datatypes.

  3. Development of an LSI maximum-likelihood convolutional decoder for advanced forward error correction capability on the NASA 30/20 GHz program

    NASA Technical Reports Server (NTRS)

    Clark, R. T.; Mccallister, R. D.

    1982-01-01

    The particular coding option identified as providing the best level of coding gain performance in an LSI-efficient implementation was the optimal constraint length five, rate one-half convolutional code. To determine the specific set of design parameters which optimally matches this decoder to the LSI constraints, a breadboard MCD (maximum-likelihood convolutional decoder) was fabricated and used to generate detailed performance trade-off data. The extensive performance testing data gathered during this design tradeoff study are summarized, and the functional and physical MCD chip characteristics are presented.

  4. A fully decompressed synthetic bacteriophage øX174 genome assembled and archived in yeast.

    PubMed

    Jaschke, Paul R; Lieberman, Erica K; Rodriguez, Jon; Sierra, Adrian; Endy, Drew

    2012-12-20

    The 5386 nucleotide bacteriophage øX174 genome has a complicated architecture that encodes 11 gene products via overlapping protein coding sequences spanning multiple reading frames. We designed a 6302 nucleotide synthetic surrogate, øX174.1, that fully separates all primary phage protein coding sequences along with cognate translation control elements. To specify øX174.1f, a decompressed genome the same length as wild type, we truncated the gene F coding sequence. We synthesized DNA encoding fragments of øX174.1f and used a combination of in vitro- and yeast-based assembly to produce yeast vectors encoding natural or designer bacteriophage genomes. We isolated clonal preparations of yeast plasmid DNA and transfected E. coli C strains. We recovered viable øX174 particles containing the øX174.1f genome from E. coli C strains that independently express full-length gene F. We expect that yeast can serve as a genomic 'drydock' within which to maintain and manipulate clonal lineages of other obligate lytic phage. Copyright © 2012 Elsevier Inc. All rights reserved.

  5. Regression Analysis with Dummy Variables: Use and Interpretation.

    ERIC Educational Resources Information Center

    Hinkle, Dennis E.; Oliver, J. Dale

    1986-01-01

    Multiple regression analysis (MRA) may be used when both continuous and categorical variables are included as independent research variables. The use of MRA with categorical variables involves dummy coding, that is, assigning zeros and ones to levels of categorical variables. Caution is urged in results interpretation. (Author/CH)
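Dummy coding as described, assigning zeros and ones to levels of a categorical variable with one level held out as the reference, can be sketched as:

```python
def dummy_code(values, reference):
    """Return the non-reference levels and one 0/1 indicator column per
    level; the reference level is coded as all zeros."""
    levels = sorted(set(values))
    levels.remove(reference)  # the held-out reference category
    rows = [[1 if v == level else 0 for level in levels] for v in values]
    return levels, rows
```

For a predictor with k levels this yields k-1 indicator columns, which keeps the regression design matrix full rank; each coefficient is then interpreted as a contrast against the reference level, which is where the interpretive caution urged above comes in.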

  6. Moral Waivers and Suitability for High Security Military Jobs

    DTIC Science & Technology

    1988-12-01

    Variables indexed in this record include: high school diploma; service entry variables, including months in the Delayed Entry Program (DEP) and DoD Primary Occupation Code (DPOC); DoD Primary Occupational (DPOC) areas; and clearance criteria, including issue case and clearance status.

  7. Layered video transmission over multirate DS-CDMA wireless systems

    NASA Astrophysics Data System (ADS)

    Kondi, Lisimachos P.; Srinivasan, Deepika; Pados, Dimitris A.; Batalama, Stella N.

    2003-05-01

    In this paper, we consider the transmission of video over wireless direct-sequence code-division multiple access (DS-CDMA) channels. A layered (scalable) video source codec is used and each layer is transmitted over a different CDMA channel. Spreading codes with different lengths are allowed for each CDMA channel (multirate CDMA). Thus, a different number of chips per bit can be used for the transmission of each scalable layer. For a given fixed energy value per chip and chip rate, the selection of a spreading code length affects the transmitted energy per bit and bit rate for each scalable layer. An MPEG-4 source encoder is used to provide a two-layer SNR scalable bitstream. Each of the two layers is channel-coded using Rate-Compatible Punctured Convolutional (RCPC) codes. Then, the data are interleaved, spread, carrier-modulated and transmitted over the wireless channel. A multipath Rayleigh fading channel is assumed. At the other end, we assume the presence of an antenna array receiver. After carrier demodulation, multiple-access-interference suppressing despreading is performed using space-time auxiliary vector (AV) filtering. The choice of the AV receiver is dictated by realistic channel fading rates that limit the data record available for receiver adaptation and redesign. Indeed, AV filter short-data-record estimators have been shown to exhibit superior bit-error-rate performance in comparison with LMS, RLS, SMI, or 'multistage nested Wiener' adaptive filter implementations. Our experimental results demonstrate the effectiveness of multirate DS-CDMA systems for wireless video transmission.
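The trade-off stated above, that for a fixed chip rate and energy per chip the spreading-code length fixes each layer's bit rate and energy per bit, amounts to two proportionalities; a minimal sketch with hypothetical parameter names:

```python
def layer_rate_and_energy(chip_rate_hz, energy_per_chip_j, spreading_factor):
    """For fixed chip rate and energy per chip, the spreading factor
    (chips per bit) divides the bit rate and multiplies the bit energy."""
    bit_rate_hz = chip_rate_hz / spreading_factor
    energy_per_bit_j = energy_per_chip_j * spreading_factor
    return bit_rate_hz, energy_per_bit_j
```

So a base layer spread with a longer code gets more energy per bit (stronger protection) at a lower bit rate, while an enhancement layer with a shorter code gets the opposite, which is the knob multirate DS-CDMA offers for unequal protection of scalable video layers.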

  8. Does Receiving a Blood Transfusion Predict for Length of Stay in Children Undergoing Cranial Vault Remodeling for Craniosynostosis? Outcomes Using the Pediatric National Surgical Quality Improvement Program Dataset.

    PubMed

    Markiewicz, Michael R; Alden, Tord; Momin, Mohmed Vasim; Olsson, Alexis B; Jurado, Ray J; Abdullah, Fizan; Miloro, Michael

    2017-08-01

    Recent interventions have aimed at reducing the need for blood transfusions in the perioperative period in patients with craniosynostosis undergoing cranial vault remodeling. However, little is known regarding whether the receipt of a blood transfusion influences the length of hospital stay. The purpose of this study was to assess whether the receipt of a blood transfusion in patients undergoing cranial vault remodeling is associated with an increased length of stay. To address the research purposes, we designed a retrospective cohort study using the 2014 Pediatric National Surgical Quality Improvement Program (NSQIP Peds) dataset. The primary predictor variable was whether patients received a blood transfusion during cranial vault remodeling. The primary outcome variable was length of hospital stay after the operation. The association between the receipt of blood transfusions and length of stay was assessed using the Student t test. The association between other covariates and the outcome variable was assessed using linear regression, analysis of variance, and the Tukey test for post hoc pair-wise comparisons. The sample was composed of 756 patients who underwent cranial vault remodeling: 503 who received blood transfusions and 253 who did not. The primary predictor variable of blood transfusion was associated with an increased length of stay (4.1 days vs 3.0 days, P = .03). Other covariates associated with an increased length of stay included race, American Society of Anesthesiologists status, premature birth, presence of a congenital malformation, and number of sutures involved in craniosynostosis. The receipt of a blood transfusion in the perioperative period in patients with craniosynostosis undergoing cranial vault remodeling was associated with an increased length of stay. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  9. Convergence of leaf display and photosynthetic characteristics of understory Abies amabilis and Tsuga Heterophylla in an old-growth forest in southwestern Washington State, USA

    Treesearch

    Hiroaki Ishii; Ken-Ichi Yoshimura; Akira Mori

    2009-01-01

    The branching pattern of A. amabilis was regular (normal shoot-length distribution, less variable branching angle and bifurcation ratio), whereas that of T. heterophylla was more plastic (positively skewed shoot-length distribution, more variable branching angle and bifurcation ratio). The two species had similar shoot...

  10. Does Foot Anthropometry Predict Metabolic Cost During Running?

    PubMed

    van Werkhoven, Herman; Piazza, Stephen J

    2017-10-01

Several recent investigations have linked running economy to heel length, with shorter heels being associated with less metabolic energy consumption. It has been hypothesized that shorter heels require larger plantar flexor muscle forces, thus increasing tendon energy storage and reducing metabolic cost. The goal of this study was to investigate this possible mechanism for metabolic cost reduction. Fifteen male subjects ran at 16 km·h⁻¹ on a treadmill and subsequently on a force-plate instrumented runway. Measurements of oxygen consumption, kinematics, and ground reaction forces were collected. Correlational analyses were performed between oxygen consumption and anthropometric and kinetic variables associated with the ankle and foot. Correlations were also computed between kinetic variables (peak joint moment and peak tendon force) and heel length. Estimated peak Achilles tendon force normalized to body weight was found to be strongly correlated with heel length normalized to body height (r = -.751, p = .003). Neither heel length nor any other measured or calculated variable was correlated with oxygen consumption, however. Subjects with shorter heels experienced larger Achilles tendon forces, but these forces were not associated with reduced metabolic cost. None of the other anthropometric and kinetic variables considered explained the variance in metabolic cost across individuals.

  11. A novel construction method of QC-LDPC codes based on CRT for optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-05-01

A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the Chinese remainder theorem (CRT). The method can not only increase the code length without reducing the girth, but also greatly enhance the code rate, so it is easy to construct a high-rate code. The simulation results show that at a bit error rate (BER) of 10⁻⁷, the net coding gain (NCG) of the regular QC-LDPC(4 851, 4 546) code is, respectively, 2.06 dB, 1.36 dB, 0.53 dB and 0.31 dB higher than those of the classic RS(255, 239) code in ITU-T G.975, the LDPC(32 640, 30 592) code in ITU-T G.975.1, the QC-LDPC(3 664, 3 436) code constructed by the improved combining construction method based on the CRT, and the irregular QC-LDPC(3 843, 3 603) code constructed by the construction method based on the Galois field (GF(q)) multiplicative group. Furthermore, all five of these codes have the same code rate of 0.937. Therefore, the regular QC-LDPC(4 851, 4 546) code constructed by the proposed construction method has excellent error-correction performance and is well suited for optical transmission systems.
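
The quasi-cyclic structure underlying such codes can be illustrated with a minimal sketch: a small base matrix of shift exponents is expanded ("lifted") into a binary parity-check matrix whose blocks are cyclically shifted identity matrices, with -1 marking an all-zero block. The base matrix and lifting size below are illustrative only; the paper's CRT-based exponent selection is not reproduced here.

```python
# Sketch: expand a base matrix of shift exponents into a quasi-cyclic
# parity-check matrix H made of z-by-z circulant permutation blocks.
# A shift of -1 denotes an all-zero block (a common convention).

def expand_qc(base, z):
    """Lift a base matrix of shifts into a binary QC parity-check matrix."""
    rows, cols = len(base), len(base[0])
    H = [[0] * (cols * z) for _ in range(rows * z)]
    for i in range(rows):
        for j in range(cols):
            s = base[i][j]
            if s < 0:            # all-zero block
                continue
            for r in range(z):   # identity matrix cyclically shifted by s
                H[i * z + r][j * z + (r + s) % z] = 1
    return H

# Toy base matrix (2 block rows x 4 block columns), lifting size z = 5.
base = [[0, 1, -1, 2],
        [2, -1, 0, 1]]
H = expand_qc(base, z=5)
```

Each block row contributes exactly one 1 per row for every non-negative entry, so the row and column weights of H are read directly off the base matrix, which is what makes girth and rate analysis of QC-LDPC codes tractable.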

  12. Hourly simulation of a Ground-Coupled Heat Pump system

    NASA Astrophysics Data System (ADS)

    Naldi, C.; Zanchini, E.

    2017-01-01

In this paper, we present a MATLAB code for the hourly simulation of a whole Ground-Coupled Heat Pump (GCHP) system, based on the g-functions previously obtained by Zanchini and Lazzari. The code applies both to on-off heat pumps and to inverter-driven ones. It is employed to analyse the effects of the inverter and of the total length of the Borehole Heat Exchanger (BHE) field on the mean seasonal COP (SCOP) and on the mean seasonal EER (SEER) of a GCHP system designed for a residential house with 6 apartments in Bologna, north-central Italy, with dominant heating loads. A BHE field with 3 in-line boreholes is considered, with each BHE either 75 m or 105 m long. The results show that increasing the BHE length yields a SCOP enhancement of about 7%, while the SEER remains nearly unchanged. The replacement of the on-off heat pump by an inverter-driven one yields a SCOP enhancement of about 30% and a SEER enhancement of about 50%. The results demonstrate the importance of employing inverter-driven heat pumps for GCHP systems.
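
The seasonal figures of merit quoted above reduce to a simple aggregation over the hourly simulation output: SCOP is the ratio of total heat delivered to total electrical input over the heating season. A minimal sketch, with made-up hourly values (the paper's code computes them from the BHE g-functions and heat pump performance maps):

```python
# Sketch: mean seasonal COP (SCOP) as total heat delivered divided by
# total electricity consumed, summed over hourly simulation results.
# Hourly loads and COP values below are illustrative only.

def seasonal_cop(hourly_load_kwh, hourly_cop):
    heat = sum(hourly_load_kwh)
    electricity = sum(q / cop for q, cop in zip(hourly_load_kwh, hourly_cop))
    return heat / electricity

loads = [3.0, 4.5, 5.0, 2.5]   # kWh of heat delivered each hour
cops  = [4.2, 3.9, 3.7, 4.4]   # instantaneous COP in each hour
scop = seasonal_cop(loads, cops)
```

The SEER is computed the same way over the cooling season, with cooling energy in the numerator.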

  13. A New Methodology for Open Pit Slope Design in Karst-Prone Ground Conditions Based on Integrated Stochastic-Limit Equilibrium Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Ke; Cao, Ping; Ma, Guowei; Fan, Wenchen; Meng, Jingjing; Li, Kaihui

    2016-07-01

Using the Chengmenshan Copper Mine as a case study, a new methodology for open pit slope design in karst-prone ground conditions is presented based on integrated stochastic-limit equilibrium analysis. The numerical modeling and optimization design procedure comprises drill core data collection, karst cave stochastic model generation, SLIDE simulation and bisection method optimization. Borehole investigations are performed, and the statistical results show that the length of the karst caves fits a negative exponential distribution, whereas the length of carbonatite does not follow any standard distribution. The inverse transform method and the acceptance-rejection method are used to reproduce the lengths of the karst caves and carbonatite, respectively. A code for karst cave stochastic model generation, named KCSMG, is developed. The stability of the rock slope with the karst cave stochastic model is analyzed by combining the KCSMG code and the SLIDE program. This approach is then applied to study the effect of karst caves on the stability of the open pit slope, and a procedure to optimize the open pit slope angle is presented.
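
The two sampling methods named in the abstract can be sketched in a few lines: inverse transform sampling for the negative exponential cave-length model, and acceptance-rejection for a length distribution known only through an unnormalized density. The rate and density below are illustrative, not fitted to the Chengmenshan borehole data.

```python
import math
import random

def sample_exponential(rate, rng=random.random):
    # Inverse transform: if U ~ Uniform(0,1), then -ln(U)/rate ~ Exp(rate).
    return -math.log(rng()) / rate

def sample_rejection(density, x_max, d_max, rng=random.random):
    # Acceptance-rejection under a uniform envelope on [0, x_max]:
    # propose x uniformly, accept with probability density(x)/d_max.
    while True:
        x = rng() * x_max
        if rng() * d_max <= density(x):
            return x

random.seed(1)
# Cave lengths: exponential with rate 0.5 per metre (mean 2 m, illustrative).
caves = [sample_exponential(rate=0.5) for _ in range(1000)]
# Carbonatite lengths: a non-standard hump-shaped density on [0, 4] m.
carb = [sample_rejection(lambda x: x * (4.0 - x), 4.0, 4.0) for _ in range(200)]
```

Inverse transform sampling requires an invertible cumulative distribution (true for the exponential); acceptance-rejection only needs the density up to a bound, which is why it suits the empirically fitted carbonatite distribution.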

  14. Realistic modeling of deep brain stimulation implants for electromagnetic MRI safety studies.

    PubMed

    Guerin, Bastien; Serano, Peter; Iacono, Maria Ida; Herrington, Todd M; Widge, Alik S; Dougherty, Darin D; Bonmassar, Giorgio; Angelone, Leonardo M; Wald, Lawrence L

    2018-05-04

We propose a framework for electromagnetic (EM) simulation of deep brain stimulation (DBS) patients in radiofrequency (RF) coils. We generated a model of a DBS patient using post-operative head and neck computed tomography (CT) images stitched together into a 'virtual CT' image covering the entire length of the implant. The body was modeled as homogeneous. The implant path extracted from the CT data contained self-intersections, which we corrected automatically using an optimization procedure. Using the CT-derived DBS path, we built a model of the implant including electrodes, helicoidal internal conductor wires, loops, extension cables, and the implanted pulse generator. We also built four simplified models with straight wires, no extension cables and no loops to assess the impact of these simplifications on safety predictions. We simulated EM fields induced by the RF birdcage body coil in the body model, including at the DBS lead tip, at both 1.5 Tesla (64 MHz) and 3 Tesla (123 MHz). We also assessed the robustness of our simulation results by systematically varying the EM properties of the body model and the position and length of the DBS implant (sensitivity analysis). The topology correction algorithm corrected all self-intersection and curvature violations of the initial path while introducing minimal deformations (open-source code available at http://ptx.martinos.org/index.php/Main_Page). The unaveraged lead-tip peak SAR predicted by the five DBS models (0.1 mm resolution grid) ranged from 12.8 kW kg⁻¹ (full model, helicoidal conductors) to 43.6 kW kg⁻¹ (no loops, straight conductors) at 1.5 T (3.4-fold variation) and from 18.6 kW kg⁻¹ (full model, straight conductors) to 73.8 kW kg⁻¹ (no loops, straight conductors) at 3 T (4.0-fold variation). At 1.5 T and 3 T, the variability of the lead-tip peak SAR with respect to conductivity ranged between 18% and 30%. Variability with respect to the position and length of the DBS implant ranged between 9.5% and 27.6%.

  15. Realistic modeling of deep brain stimulation implants for electromagnetic MRI safety studies

    NASA Astrophysics Data System (ADS)

    Guerin, Bastien; Serano, Peter; Iacono, Maria Ida; Herrington, Todd M.; Widge, Alik S.; Dougherty, Darin D.; Bonmassar, Giorgio; Angelone, Leonardo M.; Wald, Lawrence L.

    2018-05-01

We propose a framework for electromagnetic (EM) simulation of deep brain stimulation (DBS) patients in radiofrequency (RF) coils. We generated a model of a DBS patient using post-operative head and neck computed tomography (CT) images stitched together into a 'virtual CT' image covering the entire length of the implant. The body was modeled as homogeneous. The implant path extracted from the CT data contained self-intersections, which we corrected automatically using an optimization procedure. Using the CT-derived DBS path, we built a model of the implant including electrodes, helicoidal internal conductor wires, loops, extension cables, and the implanted pulse generator. We also built four simplified models with straight wires, no extension cables and no loops to assess the impact of these simplifications on safety predictions. We simulated EM fields induced by the RF birdcage body coil in the body model, including at the DBS lead tip at both 1.5 Tesla (64 MHz) and 3 Tesla (123 MHz). We also assessed the robustness of our simulation results by systematically varying the EM properties of the body model and the position and length of the DBS implant (sensitivity analysis). The topology correction algorithm corrected all self-intersection and curvature violations of the initial path while introducing minimal deformations (open-source code available at http://ptx.martinos.org/index.php/Main_Page). The unaveraged lead-tip peak SAR predicted by the five DBS models (0.1 mm resolution grid) ranged from 12.8 kW kg⁻¹ (full model, helicoidal conductors) to 43.6 kW kg⁻¹ (no loops, straight conductors) at 1.5 T (3.4-fold variation) and 18.6 kW kg⁻¹ (full model, straight conductors) to 73.8 kW kg⁻¹ (no loops, straight conductors) at 3 T (4.0-fold variation). At 1.5 T and 3 T, the variability of lead-tip peak SAR with respect to the conductivity ranged between 18% and 30%. Variability with respect to the position and length of the DBS implant ranged between 9.5% and 27.6%.

  16. Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems

    NASA Astrophysics Data System (ADS)

    Watkins, Edward Francis

    1995-01-01

A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descents optimization algorithm incorporated in the code SWAN by a significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system are tested and verified on a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear-constraint optimization. The replacement of the optimization part of SWAN with NPSOL is found to be feasible and leads to a very substantial increase in the complexity of the optimization problems that can be handled efficiently.
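
The problem shape described above ("minimize dose at constant cost") is a nonlinearly constrained minimization. As a toy analogue, the sketch below minimizes f(x₁, x₂) = x₁² + x₂² subject to x₁ + x₂ = 1 using a quadratic-penalty gradient descent. This is only an illustration of the constrained-problem structure; NPSOL itself uses sequential quadratic programming, a considerably more sophisticated algorithm.

```python
# Toy constrained minimization via a quadratic penalty: minimize
# f = x1^2 + x2^2 subject to x1 + x2 = 1 (exact solution: x1 = x2 = 0.5).
# The penalty weight mu is increased in stages so the iterates approach
# the constrained optimum. Not the SQP method used by NPSOL.

def solve(mus=(1.0, 10.0, 100.0, 1000.0), iters=1000):
    x1 = x2 = 0.0
    for mu in mus:
        lr = 1.0 / (2.0 + 4.0 * mu)   # stable step for this penalty Hessian
        for _ in range(iters):
            c = x1 + x2 - 1.0                  # constraint violation
            g1 = 2.0 * x1 + 2.0 * mu * c       # gradient of f + mu*c^2
            g2 = 2.0 * x2 + 2.0 * mu * c
            x1, x2 = x1 - lr * g1, x2 - lr * g2
    return x1, x2

x1, x2 = solve()
```

With the penalty method the constraint is only satisfied in the limit of large mu; SQP instead solves a quadratic subproblem with linearized constraints at each step, which is why it handles the smooth nonlinear constraints mentioned in the abstract so efficiently.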

  17. Provably secure identity-based identification and signature schemes from code assumptions

    PubMed Central

    Zhao, Yiming

    2017-01-01

Code-based cryptography is one of the few alternatives believed to remain secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, as research on coding theory has deepened, the security reductions and efficiency of such schemes have been invalidated or challenged. In this paper, we construct IBI/IBS schemes from code assumptions that are provably secure against impersonation under active and concurrent attacks, using a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (the PVR signature) and a security-enhancing Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes achieve not only preferable public parameter size, private key size, communication cost and signature length thanks to better parameter choices, but also provable security. PMID:28809940

  18. Provably secure identity-based identification and signature schemes from code assumptions.

    PubMed

    Song, Bo; Zhao, Yiming

    2017-01-01

Code-based cryptography is one of the few alternatives believed to remain secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, as research on coding theory has deepened, the security reductions and efficiency of such schemes have been invalidated or challenged. In this paper, we construct IBI/IBS schemes from code assumptions that are provably secure against impersonation under active and concurrent attacks, using a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (the PVR signature) and a security-enhancing Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes achieve not only preferable public parameter size, private key size, communication cost and signature length thanks to better parameter choices, but also provable security.

  19. Estimating moisture content of tree-length roundwood

    Treesearch

    Alexander Clark; Richard F. Daniels

    2000-01-01

    The green weight of southern pine tree-length roundwood delivered to the pulp mill is generally known. However, for optimum mill efficiency it is desirable to know dry weight. The moisture content of tree-length pine logs is quite variable. The moisture content of pine tree-length logs increases significantly with increasing stem height. Moisture content also varies...

  20. The complete chloroplast genome sequence of Hibiscus syriacus.

    PubMed

    Kwon, Hae-Yun; Kim, Joon-Hyeok; Kim, Sea-Hyun; Park, Ji-Min; Lee, Hyoshin

    2016-09-01

The complete chloroplast genome sequence of Hibiscus syriacus L. is presented in this study. The genome is 161 019 bp in length, with a typical circular structure containing a pair of inverted repeats of 25 745 bp each, separated by a large single-copy region of 89 698 bp and a small single-copy region of 19 831 bp. The overall GC content is 36.8%. One hundred and fourteen genes were annotated, including 81 protein-coding genes, 4 ribosomal RNA genes and 29 transfer RNA genes.

  1. On Short-Time Estimation of Vocal Tract Length from Formant Frequencies

    PubMed Central

    Lammert, Adam C.; Narayanan, Shrikanth S.

    2015-01-01

    Vocal tract length is highly variable across speakers and determines many aspects of the acoustic speech signal, making it an essential parameter to consider for explaining behavioral variability. A method for accurate estimation of vocal tract length from formant frequencies would afford normalization of interspeaker variability and facilitate acoustic comparisons across speakers. A framework for considering estimation methods is developed from the basic principles of vocal tract acoustics, and an estimation method is proposed that follows naturally from this framework. The proposed method is evaluated using acoustic characteristics of simulated vocal tracts ranging from 14 to 19 cm in length, as well as real-time magnetic resonance imaging data with synchronous audio from five speakers whose vocal tracts range from 14.5 to 18.0 cm in length. Evaluations show improvements in accuracy over previously proposed methods, with 0.631 and 1.277 cm root mean square error on simulated and human speech data, respectively. Empirical results show that the effectiveness of the proposed method is based on emphasizing higher formant frequencies, which seem less affected by speech articulation. Theoretical predictions of formant sensitivity reinforce this empirical finding. Moreover, theoretical insights are explained regarding the reason for differences in formant sensitivity. PMID:26177102
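
The physics behind formant-based length estimation can be made concrete with the uniform closed-open tube model: resonances fall at Fₙ = (2n-1)c/(4L), so each formant yields its own length estimate Lₙ = (2n-1)c/(4Fₙ). The sketch below simply averages the per-formant estimates on synthetic formants generated from a 17 cm tube; the paper's actual method weights higher formants more heavily, since they are less perturbed by articulation.

```python
# Sketch: vocal tract length (VTL) estimation from formant frequencies
# under the uniform closed-open tube model F_n = (2n-1) * c / (4 * L).
# Formant values below are synthetic (exact for a 17 cm tube).

C = 35000.0  # approximate speed of sound in warm, moist air, in cm/s

def vtl_estimates(formants_hz):
    """One tube-length estimate per formant: L_n = (2n-1) c / (4 F_n)."""
    return [(2 * n - 1) * C / (4.0 * f)
            for n, f in enumerate(formants_hz, start=1)]

true_length = 17.0  # cm
formants = [(2 * n - 1) * C / (4.0 * true_length) for n in range(1, 5)]
estimates = vtl_estimates(formants)
vtl_hat = sum(estimates) / len(estimates)
```

On real speech the per-formant estimates disagree because the tract is not a uniform tube; the choice of how to combine them (here, a plain average) is exactly where estimation methods differ.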

  2. Application of Advanced Concepts and Techniques in Electromagnetic Topology Based Simulations: CRIPTE and Related Codes

    DTIC Science & Technology

    2008-12-01

multiconductor transmission line theory. The per-unit capacitance, inductance, and characteristic impedance matrices generated from the companion LAPLACE...code based on the Method of Moments application, by meshing different sections of the multiconductor cable for capacitance and inductance matrices [21...conductors held together in four pairs and resided in the cable jacket. Each of eight conductors was also designed with the per unit length resistance

  3. Method and apparatus for a single channel digital communications system. [synchronization of received PCM signal by digital correlation with reference signal

    NASA Technical Reports Server (NTRS)

    Couvillon, L. A., Jr.; Carl, C.; Goldstein, R. M.; Posner, E. C.; Green, R. R. (Inventor)

    1973-01-01

    A method and apparatus are described for synchronizing a received PCM communications signal without requiring a separate synchronizing channel. The technique provides digital correlation of the received signal with a reference signal, first with its unmodulated subcarrier and then with a bit sync code modulated subcarrier, where the code sequence length is equal in duration to each data bit.
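
The correlation step at the heart of this synchronization scheme can be sketched simply: slide the known code sequence over the received ±1 samples and pick the offset with maximum correlation. The 7-chip sequence below is an illustrative m-sequence, not the code used in the patented apparatus.

```python
# Sketch: bit synchronization by digital correlation. The receiver
# correlates the incoming +/-1 samples against a known PN code at every
# cyclic offset and locks to the offset with the highest score.

def best_offset(received, code):
    n, m = len(received), len(code)
    scores = [sum(received[(k + i) % n] * code[i] for i in range(m))
              for k in range(n)]
    return max(range(n), key=scores.__getitem__)

code = [1, 1, 1, -1, -1, 1, -1]   # 7-chip m-sequence (sharp autocorrelation)
rx = code[-3:] + code[:-3]        # received stream: code rotated right by 3
offset = best_offset(rx, code)    # recovers the rotation
```

An m-sequence is used because its periodic autocorrelation is maximal at zero lag and flat (-1) elsewhere, so the peak is unambiguous even in noise.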

  4. Does polyandry really pay off? The effects of multiple mating and number of fathers on morphological traits and survival in clutches of nesting green turtles at Tortuguero.

    PubMed

    Alfaro-Núñez, Alonzo; Jensen, Michael P; Abreu-Grobois, F Alberto

    2015-01-01

    Despite the long debate of whether or not multiple mating benefits the offspring, studies still show contradictory results. Multiple mating takes time and energy. Thus, if females fertilize their eggs with a single mating, why to mate more than once? We investigated and inferred paternal identity and number of sires in 12 clutches (240 hatchlings) of green turtles (Chelonia mydas) nests at Tortuguero, Costa Rica. Paternal alleles were inferred through comparison of maternal and hatchling genotypes, and indicated multiple paternity in at least 11 of the clutches (92%). The inferred average number of fathers was three (ranging from 1 to 5). Moreover, regression analyses were used to investigate for correlation of inferred clutch paternity with morphological traits of hatchlings fitness (emergence success, length, weight and crawling speed), the size of the mother, and an environmental variable (incubation temperature). We suggest and propose two different comparative approaches for evaluating morphological traits and clutch paternity, in order to infer greater offspring survival. First, clutches coded by the exact number of fathers and second by the exact paternal contribution (fathers who gives greater proportion of the offspring per nest). We found significant differences (P < 0.05) in clutches coded by the exact number of fathers for all morphological traits. A general tendency of higher values in offspring sired by two to three fathers was observed for the length and weight traits. However, emergence success and crawling speed showed different trends which unable us to reach any further conclusion. The second approach analysing the paternal contribution showed no significant difference (P > 0.05) for any of the traits. We conclude that multiple paternity does not provide any extra benefit in the morphological fitness traits or the survival of the offspring, when analysed following the proposed comparative statistical methods.

  5. Convolutional code performance in planetary entry channels

    NASA Technical Reports Server (NTRS)

    Modestino, J. W.

    1974-01-01

    The planetary entry channel is modeled for communication purposes representing turbulent atmospheric scattering effects. The performance of short and long constraint length convolutional codes is investigated in conjunction with coherent BPSK modulation and Viterbi maximum likelihood decoding. Algorithms for sequential decoding are studied in terms of computation and/or storage requirements as a function of the fading channel parameters. The performance of the coded coherent BPSK system is compared with the coded incoherent MFSK system. Results indicate that: some degree of interleaving is required to combat time correlated fading of channel; only modest amounts of interleaving are required to approach performance of memoryless channel; additional propagational results are required on the phase perturbation process; and the incoherent MFSK system is superior when phase tracking errors are considered.
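
The interleaving called for above can be sketched as a classic block interleaver: symbols are written into a rows × cols array by rows and read out by columns, so a burst of consecutive channel errors is spread across many codewords after deinterleaving. The dimensions below are illustrative.

```python
# Sketch: block interleaver/deinterleaver to break up time-correlated
# fading. Write row-wise, read column-wise; invert on the receive side.

def interleave(symbols, rows, cols):
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    assert len(symbols) == rows * cols
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]

data = list(range(12))
tx = interleave(data, rows=3, cols=4)
rx = deinterleave(tx, rows=3, cols=4)   # recovers the original order
```

After deinterleaving, a burst of `rows` consecutive channel symbols maps to symbols `cols` apart in the decoder's input, which is why even modest interleaving depths approach memoryless-channel performance on fading channels.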

  6. MODFLOW-2000, the U.S. Geological Survey Modular Ground-Water Model--Documentation of the SEAWAT-2000 Version with the Variable-Density Flow Process (VDF) and the Integrated MT3DMS Transport Process (IMT)

    USGS Publications Warehouse

    Langevin, Christian D.; Shoemaker, W. Barclay; Guo, Weixing

    2003-01-01

SEAWAT-2000 is the latest release of the SEAWAT computer program for simulation of three-dimensional, variable-density, transient ground-water flow in porous media. SEAWAT-2000 was designed by combining a modified version of MODFLOW-2000 and MT3DMS into a single computer program. The code was developed using the MODFLOW-2000 concept of a process, which is defined as "part of the code that solves a fundamental equation by a specified numerical method." SEAWAT-2000 contains all of the processes distributed with MODFLOW-2000 and also includes the Variable-Density Flow Process (as an alternative to the constant-density Ground-Water Flow Process) and the Integrated MT3DMS Transport Process. Processes may be active or inactive, depending on simulation objectives; however, not all processes are compatible. For example, the Sensitivity and Parameter Estimation Processes are not compatible with the Variable-Density Flow and Integrated MT3DMS Transport Processes. The SEAWAT-2000 computer code was tested with the common variable-density benchmark problems and also with problems representing evaporation from a salt lake and rotation of immiscible fluids.

  7. A parallel and modular deformable cell Car-Parrinello code

    NASA Astrophysics Data System (ADS)

    Cavazzoni, Carlo; Chiarotti, Guido L.

    1999-12-01

We have developed a modular parallel code implementing the Car-Parrinello [Phys. Rev. Lett. 55 (1985) 2471] algorithm including the variable cell dynamics [Europhys. Lett. 36 (1994) 345; J. Phys. Chem. Solids 56 (1995) 510]. Our code is written in Fortran 90, and makes use of some new programming concepts like encapsulation, data abstraction and data hiding. The code has a multi-layer hierarchical structure with tree-like dependencies among modules. The modules include not only the variables but also the methods acting on them, in an object-oriented fashion. The modular structure allows easier code maintenance, development and debugging, and is suitable for a developer team. The layer structure permits high portability. The code displays an almost linear speed-up over a wide range of processor counts, independently of the architecture. Super-linear speed-up is obtained with a "smart" Fast Fourier Transform (FFT) that uses the available memory on the single node (increasing for a fixed problem with the number of processing elements) as a temporary buffer to store wave function transforms. This code has been used to simulate water and ammonia at giant planet conditions for systems as large as 64 molecules for ~50 ps.

  8. A computational model for the prediction of jet entrainment in the vicinity of nozzle boattails (The BOAT code)

    NASA Technical Reports Server (NTRS)

    Dash, S. M.; Pergament, H. S.

    1978-01-01

    The basic code structure is discussed, including the overall program flow and a brief description of all subroutines. Instructions on the preparation of input data, definitions of key FORTRAN variables, sample input and output, and a complete listing of the code are presented.

  9. LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    2000-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).

  10. Phase and vortex correlations in superconducting Josephson-junction arrays at irrational magnetic frustration.

    PubMed

    Granato, Enzo

    2008-07-11

Phase coherence and vortex order in a Josephson-junction array at irrational frustration are studied by extensive Monte Carlo simulations using the parallel-tempering method. A scaling analysis of the correlation length of the phase variables in the fully equilibrated system shows that the critical temperature vanishes with a power-law divergent correlation length and critical exponent ν_ph, in agreement with recent results from resistivity scaling analysis. A similar scaling analysis for the vortex variables reveals a different critical exponent ν_v, suggesting that there are two distinct correlation lengths associated with a decoupled zero-temperature phase transition.

  11. An electrocorticographic BCI using code-based VEP for control in video applications: a single-subject study

    PubMed Central

    Kapeller, Christoph; Kamada, Kyousuke; Ogawa, Hiroshi; Prueckl, Robert; Scharinger, Josef; Guger, Christoph

    2014-01-01

A brain-computer-interface (BCI) allows the user to control a device or software with brain activity. Many BCIs rely on visual stimuli with constant stimulation cycles that elicit steady-state visual evoked potentials (SSVEP) in the electroencephalogram (EEG). This EEG response can be generated with a LED or a computer screen flashing at a constant frequency, and similar EEG activity can be elicited with pseudo-random stimulation sequences on a screen (code-based BCI). Using electrocorticography (ECoG) instead of EEG promises higher spatial and temporal resolution and leads to more dominant evoked potentials due to visual stimulation. This work is focused on BCIs based on visual evoked potentials (VEP) and their capability as a continuous control interface for augmentation of video applications. One 35-year-old female subject with implanted subdural grids participated in the study. The task was to select one out of four visual targets, each flickering with its own code sequence. After a calibration run including 200 code sequences, a linear classifier was used during an evaluation run to identify the selected visual target based on the generated code-based VEPs over 20 trials. Multiple ECoG buffer lengths were tested, and the subject reached a mean online classification accuracy of 99.21% for a window length of 3.15 s. Finally, the subject performed an unsupervised free run in combination with visual feedback of the current selection. Additionally, an algorithm was implemented to suppress false-positive selections, allowing the subject to start and stop the BCI at any time. The code-based BCI system attained very high online accuracy, which makes this approach very promising for control applications where a continuous control signal is needed. PMID:25147509
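
The selection principle of a code-based VEP BCI can be sketched as template matching: each target flickers with its own pseudo-random code, and the decoder picks the target whose code best matches the evoked response. The codes and the binarized "response" below are synthetic, and the study itself used a trained linear classifier on ECoG features rather than this raw correlation.

```python
# Sketch: code-based VEP target selection by correlating a (binarized)
# evoked response against each target's pseudo-random stimulation code.

def classify(response, codes):
    def corr(a, b):
        return sum(x * y for x, y in zip(a, b))
    return max(range(len(codes)), key=lambda i: corr(response, codes[i]))

codes = [
    [ 1, -1,  1,  1, -1, -1,  1, -1],
    [-1,  1,  1, -1,  1, -1, -1,  1],
    [ 1,  1, -1, -1,  1,  1, -1, -1],
    [-1, -1, -1,  1,  1,  1,  1, -1],
]
# Noisy copy of target 2's code (last chip flipped by "noise"):
response = [1, 1, -1, -1, 1, 1, -1, 1]
selected = classify(response, codes)
```

Because the codes are chosen to be mutually dissimilar, a single flipped chip still leaves the correct target with the highest correlation, which is the property that makes single-trial decoding feasible.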

  12. The complete mitochondrial genome of Gryllotalpa unispina Saussure, 1874 (Orthoptera: Gryllotalpoidea: Gryllotalpidae).

    PubMed

    Zhang, Yulong; Shao, Dandan; Cai, Miao; Yin, Hong; Zhang, Daochuan

    2016-01-01

The complete mitochondrial genome of Gryllotalpa unispina was 15,513 bp in length and contained 70.9% AT. All G. unispina protein-coding sequences except nad2 started with a typical ATN codon. The usual termination codon (TAA) and incomplete stop codons (T) were found among the 13 protein-coding genes. All tRNA genes were folded into the typical cloverleaf secondary structure, except trnS(AGN), which lacks the dihydrouridine arm. The sizes of the large and small ribosomal RNA genes were 1245 and 725 bp, respectively. The A + T-rich region was 917 bp in length, with an A + T content of 76.8%. The orientation and gene order of the G. unispina mitogenome were identical to those of G. orientalis and G. pluvialis; there was no sign of the "DK rearrangement" that has been widely reported in Caelifera.

  13. Multifunction audio digitizer for communications systems

    NASA Technical Reports Server (NTRS)

    Monford, L. G., Jr.

    1971-01-01

    Digitizer accomplishes both N bit pulse code modulation /PCM/ and delta modulation, and provides modulation indicating variable signal gain and variable sidetone. Other features include - low package count, variable clock rate to optimize bandwidth, and easily expanded PCM output.

  14. Ovine mitochondrial DNA sequence variation and its association with production and reproduction traits within an Afec-Assaf flock.

    PubMed

    Reicher, S; Seroussi, E; Weller, J I; Rosov, A; Gootwine, E

    2012-07-01

Polymorphisms in mitochondrial DNA (mtDNA) protein- and tRNA-coding genes were shown to be associated with various diseases in humans as well as with production and reproduction traits in livestock. Alignment of full-length mitochondrial sequences from the 5 known ovine haplogroups: HA (n = 3), HB (n = 5), HC (n = 3), HD (n = 2), and HE (n = 2; GenBank accession nos. HE577847-50 and 11 published complete ovine mitochondrial sequences) revealed sequence variation in 10 out of the 13 protein-coding mtDNA sequences. Twenty-six of the 245 variable sites found in the protein-coding sequences represent non-synonymous mutations. Sequence variation was observed also in 8 out of the 22 tRNA mtDNA sequences. On the basis of the mtDNA control region and cytochrome b partial sequences, along with information on maternal lineages within an Afec-Assaf flock, 1,126 Afec-Assaf ewes were assigned to mitochondrial haplogroups HA, HB, and HC, with frequencies of 0.43, 0.43, and 0.14, respectively. Analysis of birth weight and growth rate records of lambs (n = 1,286) and productivity from 4,993 lambing records revealed no association between mitochondrial haplogroup affiliation and female longevity, lamb perinatal survival rate, birth weight, and daily growth rate of lambs up to 150 d, which averaged 1,664 d, 88.3%, 4.5 kg, and 320 g/d, respectively. However, significant (P < 0.0001) differences among the haplogroups were found for prolificacy of ewes, with prolificacies (mean ± SE) of 2.14 ± 0.04, 2.25 ± 0.04, and 2.30 ± 0.06 lambs born/ewe lambing for the HA, HB, and HC haplogroups, respectively. Our results highlight the genetic variation of the ovine mitogenome in protein- and tRNA-coding genes and suggest that sequence variation in ovine mtDNA is associated with variation in ewe prolificacy.

  15. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    PubMed

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

    Bioinformatics studies often rely on similarity measures between sequence pairs, which frequently become the bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is symmetric and positive, always yields 1.0 for self-similarity, and can be used directly with Support Vector Machines (SVMs) in classification problems, in contrast to the normalized compression distance (NCD), which often violates the distance-metric properties in practice and requires further techniques before it can be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly attractive for big-data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and of the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when the latter is combined with BLAST scores, which indicates that LZW code words might be a better basis for similarity measures than the local-alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram-based mismatch kernels, the hidden Markov model-based SAM and Fisher kernels, and the protein-family-based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed: three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. LZW-Kernel is implemented as standalone C code and is a free, open-source program distributed under the GPLv3 license; it can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics online.
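
The code-word extraction underlying this record can be illustrated with a short sketch. This is a minimal, hypothetical illustration of single-pass LZW parsing plus a cosine-style overlap of the resulting code-word sets; the actual LZW-Kernel weights matched code blocks differently, so the function names and the similarity formula below are assumptions, not the authors' implementation.

```python
def lzw_code_words(seq):
    """Single-pass LZW parse: returns the code words (phrases) emitted
    while the dictionary is grown, as in a Lempel-Ziv-Welch compressor."""
    dictionary = set(seq)        # start from the symbols present in the sequence
    words, w = [], ""
    for c in seq:
        wc = w + c
        if wc in dictionary:
            w = wc               # extend the current phrase
        else:
            words.append(w)      # emit the longest known phrase
            dictionary.add(wc)   # learn the new, one-symbol-longer phrase
            w = c
    if w:
        words.append(w)
    return words

def lzw_similarity(a, b):
    """Cosine-style overlap of code-word sets; equals 1.0 for self-similarity."""
    A, B = set(lzw_code_words(a)), set(lzw_code_words(b))
    return len(A & B) / (len(A) * len(B)) ** 0.5
```

Because the self-overlap `|A ∩ A| / |A|` is identically 1, this sketch reproduces the self-similarity property claimed in the abstract, and the parse itself is one pass over the sequence.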

  16. Effect of altering starting length and activation timing of muscle on fiber strain and muscle damage.

    PubMed

    Butterfield, Timothy A; Herzog, Walter

    2006-05-01

    Muscle strain injuries are some of the most frequent injuries in sports and command a great deal of attention in an effort to understand their etiology. These injuries may be the culmination of a series of subcellular events accumulated through repetitive lengthening (eccentric) contractions during exercise, and they may be influenced by a variety of variables including fiber strain magnitude, peak joint torque, and starting muscle length. To assess the influence of these variables on muscle injury magnitude in vivo, we measured fiber dynamics and joint torque production during repeated stretch-shortening cycles in the rabbit tibialis anterior muscle, at short and long muscle lengths, while varying the timing of activation before muscle stretch. We found that a muscle subjected to repeated stretch-shortening cycles of constant muscle-tendon unit excursion exhibits significantly different joint torque and fiber strains when the timing of activation or starting muscle length is changed. In particular, measures of fiber strain and muscle injury were significantly increased by altering activation timing and increasing the starting length of the muscle. However, we observed differential effects on peak joint torque during the cyclic stretch-shortening exercise, as increasing the starting length of the muscle did not increase torque production. We conclude that altering activation timing and muscle length before stretch may influence muscle injury by significantly increasing fiber strain magnitude and that fiber dynamics is a more important variable than muscle-tendon unit dynamics and torque production in influencing the magnitude of muscle injury.

  17. Comparative sacral morphology and the reconstructed tail lengths of five extinct primates: Proconsul heseloni, Epipliopithecus vindobonensis, Archaeolemur edwardsi, Megaladapis grandidieri, and Palaeopropithecus kelyus.

    PubMed

    Russo, Gabrielle A

    2016-01-01

    This study evaluated the relationship between the morphology of the sacrum-the sole bony link between the tail or coccyx and the rest of the body-and tail length (including presence/absence) and function using a comparative sample of extant mammals spanning six orders (Primates, Carnivora, Rodentia, Diprotodontia, Pilosa, Scandentia; N = 472). Phylogenetically-informed regression methods were used to assess how tail length varied with respect to 11 external and internal (i.e., trabecular) bony sacral variables with known or suspected biomechanical significance across all mammals, only primates, and only non-primates. Sacral variables were also evaluated for primates assigned to tail categories ('tailless,' 'nonprehensile short-tailed,' 'nonprehensile long-tailed,' and 'prehensile-tailed'). Compared to primates with reduced tail lengths, primates with longer tails generally exhibited sacra having larger caudal neural openings than cranial neural openings, and last sacral vertebrae with more mediolaterally-expanded caudal articular surfaces than cranial articular surfaces, more laterally-expanded transverse processes, more dorsally-projecting spinous processes, and larger caudal articular surface areas. Observations were corroborated by the comparative sample, which showed that shorter-tailed (e.g., Lynx rufus [bobcat]) and longer-tailed (e.g., Acinonyx jubatus [cheetah]) non-primate mammals morphologically converge with shorter-tailed (e.g., Macaca nemestrina) and longer-tailed (e.g., Macaca fascicularis) primates, respectively. 'Prehensile-tailed' primates exhibited last sacral vertebrae with more laterally-expanded transverse processes and greater caudal articular surface areas than 'nonprehensile long-tailed' primates. Internal sacral variables performed poorly compared to external sacral variables in analyses of extant primates, and were thus deemed less useful for making inferences concerning tail length and function in extinct primates. 
The tail lengths of five extinct primates were reconstructed from the external sacral variables: Archaeolemur edwardsi had a 'nonprehensile long tail,' Megaladapis grandidieri, Palaeopropithecus kelyus, and Epipliopithecus vindobonensis probably had 'nonprehensile short tails,' and Proconsul heseloni was 'tailless.' Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Lossless Compression of Data into Fixed-Length Packets

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2009-01-01

    A computer program effects lossless compression of data samples from a one-dimensional source into fixed-length data packets. The software makes use of adaptive prediction: it exploits the data structure in such a way as to increase the efficiency of compression beyond that otherwise achievable. Adaptive linear filtering is used to predict each sample value based on past sample values. The difference between predicted and actual sample values is encoded using a Golomb code.
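
The coding step described above can be made concrete. A Golomb code writes a non-negative integer n as a unary quotient ⌊n/m⌋ followed by a truncated-binary remainder, and signed prediction residuals are commonly folded onto the non-negative integers first. The sketch below, with hypothetical function names, shows that standard construction only; the flight software's parameter selection, packing into fixed-length packets, and residual mapping are not described in the abstract and are assumptions here.

```python
from math import ceil, log2

def zigzag(e):
    """Fold a signed prediction residual onto the non-negative integers."""
    return 2 * e if e >= 0 else -2 * e - 1

def golomb_encode(n, m):
    """Golomb code of n >= 0 with parameter m >= 1 (a Rice code when m is a power of two)."""
    q, r = divmod(n, m)
    bits = "1" * q + "0"                   # quotient in unary, 0-terminated
    if m == 1:
        return bits                        # remainder is always 0
    b = ceil(log2(m))
    cutoff = (1 << b) - m                  # truncated-binary threshold
    if r < cutoff:
        bits += format(r, "b").zfill(b - 1)   # short remainder codeword
    else:
        bits += format(r + cutoff, "b").zfill(b)  # long remainder codeword
    return bits
```

For example, `golomb_encode(zigzag(-3), 4)` encodes the residual -3 as quotient 1 and remainder 1, giving the bit string `"1001"`.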

  19. An audit of the nature and impact of clinical coding subjectivity variability and error in otolaryngology.

    PubMed

    Nouraei, S A R; Hudovsky, A; Virk, J S; Chatrath, P; Sandhu, G S

    2013-12-01

    To audit the accuracy of clinical coding in otolaryngology, assess the effectiveness of previously implemented interventions, and determine ways in which it can be further improved. Prospective clinician-auditor multidisciplinary audit of clinical coding accuracy. Elective and emergency ENT admissions and day-case activity. Concordance between initial coding and the clinician-auditor multidisciplinary team (MDT) coding with respect to primary and secondary diagnoses and procedures, health resource groupings (HRGs) and tariffs. The audit of 3131 randomly selected otolaryngology patients between 2010 and 2012 resulted in 420 instances of change to the primary diagnosis (13%) and 417 changes to the primary procedure (13%). In 1420 cases (44%), there was at least one change to the initial coding, and 514 HRGs (16%) changed. There was an income variance of £343,169, or £109.46 per patient. The highest rates of HRG change were observed in head and neck surgery (in particular, skull base surgery), laryngology (within that, tracheostomy), and emergency admissions (especially epistaxis management). A randomly selected sample of 235 patients from the audit was subjected to a second audit by a second clinician-auditor multidisciplinary team. There were 12 further HRG changes (5%), and at least one further coding change occurred in 57 patients (24%). These changes were significantly lower than those observed in the pre-audit sample, but were also significantly greater than zero. Asking surgeons to 'code in theatre' and applying these codes to activity without further quality assurance resulted in an HRG error rate of 45%. The full audit sample was regrouped under HRG version 3.5 and compared with a previous audit of 1250 patients performed between 2007 and 2008. This comparison showed a reduction in the baseline rate of HRG change from 16% during the first audit cycle to 9% in the current audit cycle (P < 0.001). Otolaryngology coding is complex and susceptible to subjectivity, variability and error. Coding variability can be improved, but not eliminated, through regular education supported by an audit programme. © 2013 John Wiley & Sons Ltd.

  20. The rotating movement of three immiscible fluids - A benchmark problem

    USGS Publications Warehouse

    Bakker, M.; Oude Essink, G.H.P.; Langevin, C.D.

    2004-01-01

    A benchmark problem involving the rotating movement of three immiscible fluids is proposed for verifying the density-dependent flow component of groundwater flow codes. The problem consists of a two-dimensional strip in the vertical plane filled with three fluids of different densities separated by interfaces. Initially, the interfaces between the fluids make a 45° angle with the horizontal. Over time, the fluids rotate to the stable position whereby the interfaces are horizontal; all flow is caused by density differences. Two cases of the problem are presented, one resulting in a symmetric flow field and one resulting in an asymmetric flow field. An exact analytical solution for the initial flow field is presented by application of the vortex theory and complex variables. Numerical results are obtained using three variable-density groundwater flow codes (SWI, MOCDENS3D, and SEAWAT). Initial horizontal velocities of the interfaces, as simulated by the three codes, compare well with the exact solution. The three codes are used to simulate the positions of the interfaces at two times; the three codes produce nearly identical results. The agreement between the results is evidence that the specific rotational behavior predicted by the models is correct. It also shows that the proposed problem may be used to benchmark variable-density codes. It is concluded that the three models can be used to model accurately the movement of interfaces between immiscible fluids, and have little or no numerical dispersion. © 2003 Elsevier B.V. All rights reserved.

  1. Complications Following Common Inpatient Urological Procedures: Temporal Trend Analysis from 2000 to 2010.

    PubMed

    Meyer, Christian P; Hollis, Michael; Cole, Alexander P; Hanske, Julian; O'Leary, James; Gupta, Soham; Löppenberg, Björn; Zavaski, Mike E; Sun, Maxine; Sammon, Jesse D; Kibel, Adam S; Fisch, Margit; Chun, Felix K H; Trinh, Quoc-Dien

    2016-04-01

    Measuring procedure-specific complication-rate trends allows for benchmarking and improvement in quality of care, but must be done in a standardized fashion. Using the Nationwide Inpatient Sample, we identified all instances of eight common inpatient urologic procedures performed in the United States between 2000 and 2010. This yielded 327,218 cases covering both oncologic and benign diseases. Complications were identified by International Classification of Diseases, Ninth Revision codes. Each complication was cross-referenced to the procedure code and graded according to the standardized Clavien system. Mann-Whitney and chi-square tests were used to assess the statistical significance of differences in medians and proportions, respectively. We assessed temporal variability in the rates of overall complications (Clavien grade 1-4), length of hospital stay, and in-hospital mortality using the estimated annual percent change (EAPC) linear regression methodology. We observed an overall reduction in length of stay (EAPC: -1.59; p<0.001), whereas mortality rates remained negligible and unchanged (EAPC: -0.32; p=0.83). Patient comorbidities increased significantly over the study period (EAPC: 2.09; p<0.001), as did the rates of complications. Procedure-specific trends showed a significant increase in complications for inpatient ureterorenoscopy (EAPC: 5.53; p<0.001), percutaneous nephrolithotomy (EAPC: 3.75; p<0.001), radical cystectomy (EAPC: 1.37; p<0.001), radical nephrectomy (EAPC: 1.35; p<0.001), and partial nephrectomy (EAPC: 1.22; p=0.006). Limitations include lack of postdischarge follow-up data, lack of pathologic characteristics, and inability to adjust for secular changes in administrative coding. In the context of urologic care in the United States, our findings suggest a shift toward more complex oncologic procedures in the inpatient setting, with same-day procedures most likely shifted to the outpatient setting. 
Consequently, complications have increased for the majority of examined procedures; however, no change in mortality was found. This report evaluated the trends of urologic procedures and their complications. A significant shift toward sicker patients and more complex procedures in the inpatient setting was found, but this did not result in higher mortality. These results are indicators of the high quality of care for urologic procedures in the inpatient setting. Copyright © 2015 European Association of Urology. Published by Elsevier B.V. All rights reserved.
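
The EAPC measure used throughout this record is conventionally obtained by ordinary least squares on the logarithm of the yearly rate, with the slope b converted to a percent change of 100·(e^b − 1). The sketch below shows that standard computation; the study's exact regression specification is not given in the abstract, and the function name and synthetic data are illustrative only.

```python
from math import exp, log

def eapc(years, rates):
    """Estimated annual percent change from an OLS fit of log(rate) = a + b*year."""
    xs = [float(y) for y in years]
    ys = [log(r) for r in rates]          # log-linear model
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return 100.0 * (exp(slope) - 1.0)     # percent change per year
```

A rate series rising exactly 5% per year over 2000-2010 yields an EAPC of 5.0 under this definition.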

  2. Towards automation of palynology 1: analysis of pollen shape and ornamentation using simple geometric measures, derived from scanning electron microscope images

    NASA Astrophysics Data System (ADS)

    Treloar, W. J.; Taylor, G. E.; Flenley, J. R.

    2004-12-01

    This is the first of a series of papers on the theme of automated pollen analysis. The automation of pollen analysis could result in numerous advantages for the reconstruction of past environments, with larger data sets made practical, objectivity and fine resolution sampling. There are also applications in apiculture and medicine. Previous work on the classification of pollen using texture measures has been successful with small numbers of pollen taxa. However, as the number of pollen taxa to be identified increases, more features may be required to achieve a successful classification. This paper describes the use of simple geometric measures to augment the texture measures. The feasibility of this new approach is tested using scanning electron microscope (SEM) images of 12 taxa of fresh pollen taken from reference material collected on Henderson Island, Polynesia. Pollen images were captured directly from a SEM connected to a PC. A threshold grey-level was set and binary images were then generated. Pollen edges were then located and the boundaries were traced using a chain coding system. A number of simple geometric variables were calculated directly from the chain code of the pollen and a variable selection procedure was used to choose the optimal subset to be used for classification. The efficiency of these variables was tested using a leave-one-out classification procedure. The system successfully split the original 12 taxa sample into five sub-samples containing no more than six pollen taxa each. The further subdivision of echinate pollen types was then attempted with a subset of four pollen taxa. A set of difference codes was constructed for a range of displacements along the chain code. From these difference codes probability variables were calculated. A variable selection procedure was again used to choose the optimal subset of probabilities that may be used for classification. 
The efficiency of these variables was again tested using a leave-one-out classification procedure. The proportion of correctly classified pollen ranged from 81% to 100%, depending on the subset of variables used. The best set of variables had an overall classification rate averaging about 95%. This is comparable with the classification rates from the earlier texture-analysis work for other types of pollen.
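
The boundary tracing and difference codes described above can be sketched with a generic 8-direction Freeman chain code. The paper's exact coding system and its displacement-based probability variables are not specified in the abstract, so the functions below are an illustrative assumption, not the authors' implementation (image coordinates with y increasing downward are assumed).

```python
# 8-connected Freeman directions: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
DIRS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
        (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def chain_code(boundary):
    """Freeman chain code for an ordered list of 8-connected boundary pixels."""
    return [DIRS[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(boundary, boundary[1:])]

def difference_code(code, displacement=1):
    """Cyclic difference code modulo 8, generalized to a displacement k,
    in the spirit of the range of displacements described above."""
    n = len(code)
    return [(code[(i + displacement) % n] - c) % 8 for i, c in enumerate(code)]
```

Tracing a closed unit square clockwise gives the chain code [0, 6, 4, 2]; its first-difference code is [6, 6, 6, 6], which is invariant under rotation of the boundary's starting point.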

  3. Ultrasound strain imaging using Barker code

    NASA Astrophysics Data System (ADS)

    Peng, Hui; Tie, Juhong; Guo, Dequan

    2017-01-01

    Ultrasound strain imaging is showing promise as a new way of imaging soft-tissue elasticity in order to help clinicians detect lesions or cancers in tissues. In this paper, the Barker code is applied to strain imaging to improve its quality. The Barker code, as a coded excitation signal, can be used to improve the echo signal-to-noise ratio (eSNR) in an ultrasound imaging system. For the Barker code of length 13, the sidelobe level of the matched-filter output is -22 dB, which is unacceptable for ultrasound strain imaging because a high sidelobe level causes high decorrelation noise. Instead of using the conventional matched filter, we use a Wiener filter to decode the Barker-coded echo signal and suppress the range sidelobes. We also compare the performance of the Barker code and a conventional short pulse in simulations. The simulation results demonstrate that the performance of the Wiener filter is much better than that of the matched filter, and that the Barker code achieves a higher elastographic signal-to-noise ratio (SNRe) than the short pulse under low-eSNR or great-depth conditions due to the increased eSNR.
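
The -22 dB figure quoted for the length-13 Barker code is its peak-sidelobe level: the matched-filter output is the code's autocorrelation, whose mainlobe is 13 and whose largest sidelobe magnitude is 1. The small check below verifies that number; it is illustrative only and does not reproduce the paper's Wiener-filter decoding.

```python
from math import log10

# Length-13 Barker code: + + + + + - - + + - + - +
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

def autocorr(code, lag):
    """Aperiodic autocorrelation of a bipolar code at a non-negative lag."""
    return sum(code[i] * code[i + lag] for i in range(len(code) - lag))

peak = autocorr(barker13, 0)                        # mainlobe: 13
sidelobes = [abs(autocorr(barker13, k)) for k in range(1, len(barker13))]
psl_db = 20 * log10(max(sidelobes) / peak)          # peak sidelobe level in dB
```

This gives 20·log10(1/13) ≈ -22.3 dB, matching the matched-filter sidelobe level cited in the abstract.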

  4. LUR models for particulate matters in the Taipei metropolis with high densities of roads and strong activities of industry, commerce and construction.

    PubMed

    Lee, Jui-Huna; Wu, Chang-Fu; Hoek, Gerard; de Hoogh, Kees; Beelen, Rob; Brunekreef, Bert; Chan, Chang-Chuan

    2015-05-01

    Traffic intensity, length of road, and proximity to roads are the most common traffic indicators in the land use regression (LUR) models for particulate matter in ESCAPE study areas in Europe. This study explored which local variables can improve the performance of LUR models in an Asian metropolis with high densities of roads and strong activities of industry, commerce and construction. By following the ESCAPE procedure, we derived LUR models of PM₂.₅, PM₂.₅ absorbance, PM₁₀, and PMcoarse (PM₂.₅-₁₀) in Taipei. The overall annual average concentrations of PM₂.₅, PM₁₀, and PMcoarse were 26.0 ± 5.6, 48.6 ± 5.9, and 23.3 ± 3.1 μg/m³, respectively, and the absorption coefficient of PM₂.₅ was 2.0 ± 0.4 × 10⁻⁵ m⁻¹. Our LUR models yielded R² values of 95%, 96%, 87%, and 65% for PM₂.₅, PM₂.₅ absorbance, PM₁₀, and PMcoarse, respectively. PM₂.₅ levels were increased by local traffic variables and by industrial, construction, and residential land-use variables, and decreased by rivers, while PM₂.₅ absorbance levels were increased by local traffic variables and by industrial and commercial land-use variables in the models. PMcoarse levels were increased by elevated highways. Road area explained more variance than road length, increasing adjusted R² by an incremental 27% and 6% for the PM₂.₅ and PM₁₀ models, respectively. In the PM₂.₅ absorbance model, road area and transportation facility explained 29% more variance than road length. In the PMcoarse model, industrial and new local variables used instead of road length improved adjusted R² from 39% to 60%. We concluded that road area can better explain the spatial distribution of PM₂.₅ and PM₂.₅ absorbance concentrations than road length. By incorporating road area and other new local variables, the performance of each PM LUR model was improved. The results suggest that road area is a better indicator of traffic intensity than road length in a city with a high density of road network and traffic. 
Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Certifying Auto-Generated Flight Code

    NASA Technical Reports Server (NTRS)

    Denney, Ewen

    2008-01-01

    Model-based design and automated code generation are being used increasingly at NASA. Many NASA projects now use MathWorks Simulink and Real-Time Workshop for at least some of their modeling and code development. However, there are substantial obstacles to more widespread adoption of code generators in safety-critical domains. Since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently the generated code still needs to be fully tested and certified. Moreover, the regeneration of code can require complete recertification, which offsets many of the advantages of using a generator. Indeed, manual review of autocode can be more challenging than for hand-written code. Since the direct V&V of code generators is too laborious and complicated due to their complex (and often proprietary) nature, we have developed a generator plug-in to support the certification of the auto-generated code. Specifically, the AutoCert tool supports certification by formally verifying that the generated code is free of different safety violations, by constructing an independently verifiable certificate, and by explaining its analysis in a textual form suitable for code reviews. The generated documentation also contains substantial tracing information, allowing users to trace between model, code, documentation, and V&V artifacts. This enables missions to obtain assurance about the safety and reliability of the code without excessive manual V&V effort and, as a consequence, eases the acceptance of code generators in safety-critical contexts. The generation of explicit certificates and textual reports is particularly well-suited to supporting independent V&V. The primary contribution of this approach is the combination of human-friendly documentation with formal analysis. The key technical idea is to exploit the idiomatic nature of auto-generated code in order to automatically infer logical annotations. 
The annotation inference algorithm itself is generic, and parametrized with respect to a library of coding patterns that depend on the safety policies and the code generator. The patterns characterize the notions of definitions and uses that are specific to the given safety property. For example, for initialization safety, definitions correspond to variable initializations while uses are statements which read a variable, whereas for array bounds safety, definitions are the array declarations, while uses are statements which access an array variable. The inferred annotations are thus highly dependent on the actual program and the properties being proven. The annotations, themselves, need not be trusted, but are crucial to obtain the automatic formal verification of the safety properties without requiring access to the internals of the code generator. The approach has been applied to both in-house and commercial code generators, but is independent of the particular generator used. It is currently being adapted to flight code generated using MathWorks Real-Time Workshop, an automatic code generator that translates from Simulink/Stateflow models into embedded C code.

  6. Linked Records of Children with Traumatic Brain Injury. Probabilistic Linkage without Use of Protected Health Information.

    PubMed

    Bennett, T D; Dean, J M; Keenan, H T; McGlincy, M H; Thomas, A M; Cook, L J

    2015-01-01

    Record linkage may create powerful datasets with which investigators can conduct comparative effectiveness studies evaluating the impact of tests or interventions on health. All linkages of health care data files to date have used protected health information (PHI) in their linkage variables. A technique to link datasets without using PHI would be advantageous both to preserve privacy and to increase the number of potential linkages. We applied probabilistic linkage to records of injured children in the National Trauma Data Bank (NTDB, N = 156,357) and the Pediatric Health Information Systems (PHIS, N = 104,049) databases from 2007 to 2010. Forty-nine match variables without PHI were used, many of them administrative variables and indicators for procedures recorded as International Classification of Diseases, 9th revision, Clinical Modification codes. We validated the accuracy of the linkage using identified data from a single center that submits to both databases. We accurately linked the PHIS and NTDB records for 69% of children with any injury, and for 88% of those with severe traumatic brain injury eligible for a study of intervention effectiveness (positive predictive value of 98%, specificity of 99.99%). Accurate linkage was associated with longer lengths of stay, more severe injuries, and multiple injuries. In populations with substantial illness or injury severity, accurate record linkage may be possible in the absence of PHI. This methodology may enable linkages and, in turn, comparative effectiveness studies that would be unlikely or impossible otherwise.
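
Probabilistic linkage of this kind typically scores each candidate record pair with Fellegi-Sunter agreement weights: a match variable contributes log2(m/u) when it agrees and log2((1-m)/(1-u)) when it disagrees, where m and u are its agreement probabilities among true matches and non-matches. The sketch below shows that standard scoring; whether this study used exactly these weights is an assumption, and the field names are invented for illustration.

```python
from math import log2

def pair_weight(agreements, m, u):
    """Fellegi-Sunter log2 likelihood-ratio weight for one candidate record pair.
    agreements[f] is True when field f agrees between the two records;
    m[f] = P(field f agrees | true match), u[f] = P(agrees | non-match)."""
    total = 0.0
    for f, agrees in agreements.items():
        if agrees:
            total += log2(m[f] / u[f])              # agreement: positive evidence
        else:
            total += log2((1 - m[f]) / (1 - u[f]))  # disagreement: negative evidence
    return total
```

Pairs scoring above a chosen threshold are declared links; agreement on a highly discriminating field (large m, tiny u) adds a large positive weight, while disagreement subtracts.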

  7. Realizing the Translational Potential of Telomere Length Variation as a Tissue-Based Prognostic Marker for Prostate Cancer

    DTIC Science & Technology

    2014-10-01

    Realizing the Translational Potential of Telomere Length Variation as a Tissue-Based Prognostic Marker for Prostate Cancer. Principal Investigator: Elizabeth A. Platz. Grant Number: W81XWH-12-1-0545. The combination of telomere length variability in prostate cancer cells and short telomere length in cancer-associated stromal cells is an independent ...

  8. The natural neighbor series manuals and source codes

    NASA Astrophysics Data System (ADS)

    Watson, Dave

    1999-05-01

    This software series is concerned with reconstruction of spatial functions by interpolating a set of discrete observations having two or three independent variables. There are three components in this series: (1) nngridr: an implementation of natural neighbor interpolation, 1994; (2) modemap: an implementation of natural neighbor interpolation on the sphere, 1998; and (3) orebody: an implementation of natural neighbor isosurface generation (publication incomplete). Interpolation is important to geologists because it can offer graphical insights into significant geological structure and behavior, which, although inherent in the data, may not be otherwise apparent. It also is the first step in numerical integration, which provides a primary avenue to detailed quantification of the observed spatial function. Interpolation is implemented by selecting a surface-generating rule that controls the form of a 'bridge' built across the interstices between adjacent observations. The cataloging and classification of the many such rules that have been reported is a subject in itself (Watson, 1992), and the merits of various approaches have been debated at length. However, for practical purposes, interpolation methods are usually judged on how satisfactorily they handle problematic data sets. Sparse scattered data or traverse data, especially if the functional values are highly variable, generally test interpolation methods most severely; but one method, natural neighbor interpolation, usually does produce preferable results for such data.

  9. Statistical modelling for precision agriculture: A case study in optimal environmental schedules for Agaricus Bisporus production via variable domain functional regression.

    PubMed

    Panayi, Efstathios; Peters, Gareth W; Kyriakides, George

    2017-01-01

    Quantifying the effects of environmental factors over the duration of the growing process on Agaricus Bisporus (button mushroom) yields has been difficult, as common functional data analysis approaches require fixed length functional data. The data available from commercial growers, however, is of variable duration, due to commercial considerations. We employ a recently proposed regression technique termed Variable-Domain Functional Regression in order to be able to accommodate these irregular-length datasets. In this way, we are able to quantify the contribution of covariates such as temperature, humidity and water spraying volumes across the growing process, and for different lengths of growing processes. Our results indicate that optimal oxygen and temperature levels vary across the growing cycle and we propose environmental schedules for these covariates to optimise overall yields.

  10. Statistical modelling for precision agriculture: A case study in optimal environmental schedules for Agaricus Bisporus production via variable domain functional regression

    PubMed Central

    Panayi, Efstathios; Kyriakides, George

    2017-01-01

    Quantifying the effects of environmental factors over the duration of the growing process on Agaricus Bisporus (button mushroom) yields has been difficult, as common functional data analysis approaches require fixed length functional data. The data available from commercial growers, however, is of variable duration, due to commercial considerations. We employ a recently proposed regression technique termed Variable-Domain Functional Regression in order to be able to accommodate these irregular-length datasets. In this way, we are able to quantify the contribution of covariates such as temperature, humidity and water spraying volumes across the growing process, and for different lengths of growing processes. Our results indicate that optimal oxygen and temperature levels vary across the growing cycle and we propose environmental schedules for these covariates to optimise overall yields. PMID:28961254

  11. The Effects of Music Salience on the Gait Performance of Young Adults.

    PubMed

    de Bruin, Natalie; Kempster, Cody; Doucette, Angelica; Doan, Jon B; Hu, Bin; Brown, Lesley A

    2015-01-01

    The presence of a rhythmic beat in the form of a metronome tone or beat-accentuated original music can modulate gait performance; however, it has yet to be determined whether gait modulation can be achieved using commercially available music. The current study investigated the effects of commercially available music on the walking of healthy young adults. Specific aims were (a) to determine whether commercially available music can be used to influence gait (i.e., gait velocity, stride length, cadence, stride time variability), (b) to establish the effect of music salience on gait (i.e., gait velocity, stride length, cadence, stride time variability), and (c) to examine whether music tempi differentially affected gait (i.e., gait velocity, stride length, cadence, stride time variability). Twenty-five participants walked the length of an unobstructed walkway while listening to music. Music selections differed with respect to the salience or the tempo of the music. The genre of music and artists were self-selected by participants. Listening to music while walking was an enjoyable activity that influenced gait. Specifically, salient music selections increased measures of cadence, velocity, and stride length; in contrast, gait was unaltered by the presence of non-salient music. Music tempo did not differentially affect gait performance (gait velocity, stride length, cadence, stride time variability) in these participants. Gait performance was differentially influenced by music salience. These results have implications for clinicians considering the use of commercially available music as an alternative to the traditional rhythmic auditory cues used in rehabilitation programs. © the American Music Therapy Association 2015. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. Role of a Dual Splicing and Amino Acid Code in Myopia, Cone Dysfunction and Cone Dystrophy Associated with L/M Opsin Interchange Mutations

    PubMed Central

    Greenwald, Scott H.; Kuchenbecker, James A.; Rowlan, Jessica S.; Neitz, Jay; Neitz, Maureen

    2017-01-01

    Purpose Human long (L) and middle (M) wavelength cone opsin genes are highly variable due to intermixing. Two L/M cone opsin interchange mutants, designated LIAVA and LVAVA, are associated with clinical diagnoses, including red-green color vision deficiency, blue cone monochromacy, cone degeneration, myopia, and Bornholm Eye Disease. Because the protein and splicing codes are carried by the same nucleotides, intermixing L and M genes can cause disease by affecting protein structure and splicing. Methods Genetically engineered mice were created to allow investigation of the consequences of altered protein structure alone, and the effects on cone morphology were examined using immunohistochemistry. In humans and mice, cone function was evaluated using the electroretinogram (ERG) under L/M- or short (S) wavelength cone isolating conditions. Effects of LIAVA and LVAVA genes on splicing were evaluated using a minigene assay. Results ERGs and histology in mice revealed protein toxicity for the LVAVA but not for the LIAVA opsin. Minigene assays showed that the dominant messenger RNA (mRNA) was aberrantly spliced for both variants; however, the LVAVA gene produced a small but significant amount of full-length mRNA and LVAVA subjects had correspondingly reduced ERG amplitudes. In contrast, the LIAVA subject had no L/M cone ERG. Conclusions Dramatic differences in phenotype can result from seemingly minor differences in genotype through divergent effects on the dual amino acid and splicing codes. Translational Relevance The mechanism by which individual mutations contribute to clinical phenotypes provides valuable information for diagnosis and prognosis of vision disorders associated with L/M interchange mutations, and it informs strategies for developing therapies. PMID:28516000

  13. Data linkage of inpatient hospitalization and workers' claims data sets to characterize occupational falls.

    PubMed

    Bunn, Terry L; Slavova, Svetla; Bathke, Arne

    2007-07-01

    The identification of industry, occupation, and associated injury costs for worker falls in Kentucky has not been fully examined. The purpose of this study was to determine the associations between industry and occupation and 1) hospitalization length of stay; 2) hospitalization charges; and 3) workers' claims costs in workers suffering falls, using linked inpatient hospitalization discharge and workers' claims data sets. Hospitalization cases were selected with ICD-9-CM external cause of injury codes for falls and a payer code of workers' claims for years 2000-2004. Selection criteria for workers' claims cases were International Association of Industrial Accident Boards and Commissions Electronic Data Interchange Nature (IAIABCEDIN) injuries coded as falls and/or slips. Common data variables between the two data sets, such as date of birth, gender, date of injury, and hospital admission date, were used to perform probabilistic data linkage using LinkSolv software. Statistical analysis was performed with non-parametric tests. Construction falls were the most prevalent for male workers and incurred the highest hospitalization and workers' compensation costs, whereas most female worker falls occurred in the services industry. The largest percentage of male worker falls was from one level to another, while the largest percentage of females experienced a fall, slip, or trip (not otherwise classified). When male construction worker falls were further analyzed, laborers and helpers had longer hospital stays as well as higher total charges when the worker fell from one level to another. Data linkage of hospitalization and workers' claims falls data provides additional information on industry, occupation, and costs that is not available when examining either data set alone.
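    The linkage step described above can be sketched as a comparison of records on their shared identifier fields. This is a minimal, purely illustrative deterministic pass over hypothetical records; LinkSolv itself computes probabilistic match weights, which this sketch does not reproduce:

    ```python
    # Hypothetical hospital-discharge and workers' claims records sharing
    # date of birth, sex, and injury date as linkage fields.
    hospital = [
        {"id": "H1", "dob": "1965-03-02", "sex": "M", "injury": "2003-06-11"},
        {"id": "H2", "dob": "1971-09-17", "sex": "F", "injury": "2002-01-05"},
    ]
    claims = [
        {"id": "C9", "dob": "1965-03-02", "sex": "M", "injury": "2003-06-11"},
    ]

    def agreements(a, b, fields):
        """Count linkage fields on which two records agree."""
        return sum(a[f] == b[f] for f in fields)

    links = []
    for h in hospital:
        for c in claims:
            # Deterministic rule: require agreement on all three identifiers.
            if agreements(h, c, ["dob", "sex", "injury"]) == 3:
                links.append((h["id"], c["id"]))
    # links == [("H1", "C9")]
    ```

    A probabilistic linker would instead assign each field an agreement/disagreement weight and accept pairs whose total weight exceeds a threshold, tolerating minor discrepancies in individual fields.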

  14. Characterization and phylogenetic analysis of the swine leukocyte antigen 3 gene from Korean native pigs.

    PubMed

    Chung, H Y; Choi, Y C; Park, H N

    2015-05-18

    We investigated the phylogenetic relationships between pig breeds, compared the genetic similarity between humans and pigs, and provided basic genetic information on Korean native pigs (KNPs), using genetic variants of the swine leukocyte antigen 3 (SLA-3) gene. Primers were based on sequences from GenBank (accession Nos. AF464010 and AF464009). Polymerase chain reaction analysis amplified approximately 1727 bp of segments, which contained 1086 bp of coding regions and 641 bp of the 3'- and 5'-untranslated regions. Bacterial artificial chromosome clones of miniature pigs were used for sequencing the SLA-3 genomic region, which was 3114 bp in total length, including the coding (1086 bp) and non-coding (2028 bp) regions. Sequence analysis detected 53 single nucleotide polymorphisms (SNPs), based on a minor allele frequency greater than 0.01, which is low compared with other pig breeds, and the results suggest that there is low genetic variability in KNPs. Comparative analysis revealed that humans possess approximately three times more genetic variation than do pigs. Approximately 71% of SNPs in exons 2 and 3 were detected in KNPs, and exon 5 in humans is a highly polymorphic region. Newly identified sequences of SLA-3 using KNPs were submitted to GenBank (accession No. DQ992512-18). Cluster analysis revealed that KNPs were grouped according to three major alleles: SLA-3*0502 (DQ992518), SLA-3*0302 (DQ992513 and DQ992516), and SLA-3*0303 (DQ992512, DQ992514, DQ992515, and DQ992517). Alignments revealed that humans have a relatively close genetic relationship with pigs and chimpanzees. The information provided by this study may be useful in KNP management.

  15. Preliminary Results from the Application of Automated Adjoint Code Generation to CFL3D

    NASA Technical Reports Server (NTRS)

    Carle, Alan; Fagan, Mike; Green, Lawrence L.

    1998-01-01

    This report describes preliminary results obtained using an automated adjoint code generator for Fortran to augment a widely-used computational fluid dynamics flow solver to compute derivatives. These preliminary results with this augmented code suggest that, even in its infancy, the automated adjoint code generator can accurately and efficiently deliver derivatives for use in transonic Euler-based aerodynamic shape optimization problems with hundreds to thousands of independent design variables.

  16. Disposition of elderly patients after head and neck reconstruction.

    PubMed

    Hatcher, Jeanne L; Bell, Elizabeth Bradford; Browne, J Dale; Waltonen, Joshua D

    2013-11-01

    A patient's needs at discharge, particularly the need for nursing facility placement, may affect hospital length of stay and health care costs. The association between age and disposition after microvascular reconstruction of the head and neck has yet to be reported in the literature. To determine whether elderly patients are more likely to be discharged to a nursing or other care facility as opposed to returning home after microvascular reconstruction of the head and neck. From January 1, 2001, through December 31, 2010, patients undergoing microvascular reconstruction at an academic medical center were identified and their medical records systematically reviewed. During the study period, 457 patients were identified by Current Procedural Terminology codes for microvascular free tissue transfer for a head and neck defect regardless of cause. Seven patients were excluded for inadequate data on the postoperative disposition or American Society of Anesthesiologists (ASA) score. A total of 450 were included for analysis. Demographic and surgical data were collected, including the patient age, ASA score, and postoperative length of stay. These variables were then compared between groups of patients discharged to different posthospitalization care facilities. The mean age of participants was 59.1 years. Most patients (n = 386 [85.8%]) were discharged home with or without home health services. The mean age of those discharged home was 57.5 years; discharge to home was the reference for comparison and odds ratio (OR) calculation. For those discharged to a skilled nursing facility, mean age was 67.1 years (OR, 1.055; P < .001). Mean age of those discharged to a long-term acute care facility was 71.5 years (OR, 1.092; P = .002). Length of stay also affected the disposition to a skilled nursing facility (OR, 1.098), as did the ASA score (OR, 2.988). Elderly patients are less likely to be discharged home after free flap reconstruction. Age, ASA score, and length of stay are independent factors for discharge to a nursing or other care facility.

  17. The investigation of tethered satellite system dynamics

    NASA Technical Reports Server (NTRS)

    Lorenzini, E.

    1985-01-01

    The tether control law to retrieve the satellite was modified in order to have a smooth retrieval trajectory of the satellite that minimizes the thruster activation. The satellite thrusters were added to the rotational dynamics computer code and a preliminary control logic was implemented to simulate them during the retrieval maneuver. The high resolution computer code for modelling the three dimensional dynamics of untensioned tether, SLACK3, was made fully operative and a set of computer simulations of possible tether breakages was run. The distribution of the electric field around an electrodynamic tether in vacuo severed at some length from the shuttle was computed with a three dimensional electrodynamic computer code.

  18. Analysis of thermo-chemical nonequilibrium models for carbon dioxide flows

    NASA Technical Reports Server (NTRS)

    Rock, Stacey G.; Candler, Graham V.; Hornung, Hans G.

    1992-01-01

    The aerothermodynamics of thermochemical nonequilibrium carbon dioxide flows is studied. The chemical kinetics models of McKenzie and Park are implemented in separate three-dimensional computational fluid dynamics codes. The codes incorporate a five-species gas model characterized by a translational-rotational and a vibrational temperature. Solutions are obtained for flow over finite length elliptical and circular cylinders. The computed flowfields are then employed to calculate Mach-Zehnder interferograms for comparison with experimental data. The accuracy of the chemical kinetics models is determined through this comparison. Also, the methodology of the three-dimensional thermochemical nonequilibrium code is verified by the reproduction of the experiments.

  19. Exploring item and higher order factor structure with the Schmid-Leiman solution: syntax codes for SPSS and SAS.

    PubMed

    Wolff, Hans-Georg; Preising, Katja

    2005-02-01

    To ease the interpretation of higher order factor analysis, the direct relationships between variables and higher order factors may be calculated by the Schmid-Leiman solution (SLS; Schmid & Leiman, 1957). This simple transformation of higher order factor analysis orthogonalizes first-order and higher order factors and thereby allows the interpretation of the relative impact of factor levels on variables. The Schmid-Leiman solution may also be used to facilitate theorizing and scale development. The rationale for the procedure is presented, supplemented by syntax codes for SPSS and SAS, since the transformation is not part of most statistical programs. Syntax codes may also be downloaded from www.psychonomic.org/archive/.
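    The orthogonalization can be illustrated in a few lines of NumPy; the loading matrices below are hypothetical, and the sketch covers only the single second-order-factor case treated by Schmid and Leiman (variables load on the general factor via the product of first- and second-order loadings, while residualized group-factor loadings are scaled by the square root of each first-order factor's uniqueness):

    ```python
    import numpy as np

    # Hypothetical first-order pattern loadings (6 variables, 2 factors).
    L1 = np.array([
        [0.7, 0.0],
        [0.6, 0.0],
        [0.8, 0.0],
        [0.0, 0.7],
        [0.0, 0.6],
        [0.0, 0.8],
    ])
    # Hypothetical loadings of the 2 first-order factors on one second-order factor.
    L2 = np.array([[0.8], [0.6]])

    # Direct loadings of variables on the higher-order (general) factor.
    general = L1 @ L2
    # Residualized first-order loadings: each column is scaled by the square
    # root of that factor's uniqueness with respect to the higher-order factor.
    u = np.sqrt(1.0 - (L2 ** 2).ravel())
    residual = L1 * u  # broadcasts u over the columns of L1
    # For variable 1: general loading 0.7*0.8 = 0.56, residual loading 0.7*0.6 = 0.42;
    # the communality 0.56**2 + 0.42**2 equals the original 0.7**2.
    ```

    The orthogonalized solution lets the relative contribution of the general and group factors to each variable be read off directly, which is what makes the transformation useful for interpretation and scale development.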

  20. Formation of Electrostatic Potential Drops in the Auroral Zone

    NASA Technical Reports Server (NTRS)

    Schriver, D.; Ashour-Abdalla, M.; Richard, R. L.

    2001-01-01

    In order to examine the self-consistent formation of large-scale quasi-static parallel electric fields in the auroral zone on a micro/meso scale, a particle-in-cell simulation has been developed. The code resolves electron Debye length scales so that electron micro-processes are included, and a variable grid scheme is used such that the overall length scale of the simulation is on the order of an Earth radius along the magnetic field. The simulation is electrostatic and includes the magnetic mirror force, as well as two types of plasmas: a cold, dense ionospheric plasma and a warm, tenuous magnetospheric plasma. In order to study the formation of parallel electric fields in the auroral zone, different magnetospheric ion and electron inflow boundary conditions are used to drive the system. It has been found that for conditions in the primary (upward) current region, an upward directed quasi-static electric field can form across the system due to magnetic mirroring of the magnetospheric ions and electrons at different altitudes. For conditions in the return (downward) current region, it is shown that a quasi-static parallel electric field in the opposite sense of that in the primary current region is formed, i.e., the parallel electric field is directed earthward. The conditions under which these different electric fields can form are discussed using satellite observations and numerical simulations.

  1. Periodic Overload and Transport Spectrum Fatigue Crack Growth Tests of Ti62222STA and Al2024T3 Sheet

    NASA Technical Reports Server (NTRS)

    Phillips, Edward P.

    1999-01-01

    Variable amplitude loading crack growth tests have been conducted to provide data that can be used to evaluate crack growth prediction codes. Tests with periodic overloads or overloads followed by underloads were conducted on titanium alloy Ti-6Al-2Sn-2Zr-2Mo-2Cr solution treated and aged (Ti62222STA) material at room temperature and at 350 F. Spectrum fatigue crack growth tests were conducted on two materials (Ti62222STA and aluminum alloy 2024-T3) using two transport lower-wing test spectra at two temperatures (room temperature and 350 F (Ti only)). Test lives (growth from an initial crack half-length of 0.15 in. to failure) were recorded in all tests and the crack length against cycles (or flights) data were recorded in many of the tests. The following observations were made regarding the test results: (1) in tests of the Ti62222STA material, the tests at 350 F had longer lives than those at room temperature, (2) in tests to the MiniTwist spectrum, the Al2024T3 material showed much greater crack growth retardations due to the highest stresses in the spectrum than did the Ti62222STA material, and (3) comparisons of material crack growth performances on an "equal weight" basis were spectrum dependent.

  2. Alternative Splicing Profile and Sex-Preferential Gene Expression in the Female and Male Pacific Abalone Haliotis discus hannai.

    PubMed

    Kim, Mi Ae; Rhee, Jae-Sung; Kim, Tae Ha; Lee, Jung Sick; Choi, Ah-Young; Choi, Beom-Soon; Choi, Ik-Young; Sohn, Young Chang

    2017-03-09

    In order to characterize the female or male transcriptome of the Pacific abalone and further increase genomic resources, we sequenced the mRNA of full-length complementary DNA (cDNA) libraries derived from pooled tissues of female and male Haliotis discus hannai by employing the Iso-Seq protocol of the PacBio RSII platform. We successfully assembled whole full-length cDNA sequences and constructed a transcriptome database that included isoform information. After clustering, a total of 15,110 and 12,145 protein-coding genes were identified in female and male abalones, respectively. A total of 13,057 putative orthologs were retained from each transcriptome in abalones. Overall Gene Ontology terms and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways analyzed in each database showed a similar composition between sexes. In addition, a total of 519 and 391 isoforms were identified genome-wide, with at least two isoforms each, from the female and male transcriptome databases. We found that the number of isoforms and their alternatively spliced patterns are variable and sex-dependent. This information represents the first significant contribution to sex-preferential genomic resources of the Pacific abalone. The availability of the whole female and male transcriptome databases and their isoform information will be useful to improve our understanding of molecular responses and also for the analysis of population dynamics in the Pacific abalone.

  3. Alternative Splicing Profile and Sex-Preferential Gene Expression in the Female and Male Pacific Abalone Haliotis discus hannai

    PubMed Central

    Kim, Mi Ae; Rhee, Jae-Sung; Kim, Tae Ha; Lee, Jung Sick; Choi, Ah-Young; Choi, Beom-Soon; Choi, Ik-Young; Sohn, Young Chang

    2017-01-01

    In order to characterize the female or male transcriptome of the Pacific abalone and further increase genomic resources, we sequenced the mRNA of full-length complementary DNA (cDNA) libraries derived from pooled tissues of female and male Haliotis discus hannai by employing the Iso-Seq protocol of the PacBio RSII platform. We successfully assembled whole full-length cDNA sequences and constructed a transcriptome database that included isoform information. After clustering, a total of 15,110 and 12,145 protein-coding genes were identified in female and male abalones, respectively. A total of 13,057 putative orthologs were retained from each transcriptome in abalones. Overall Gene Ontology terms and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways analyzed in each database showed a similar composition between sexes. In addition, a total of 519 and 391 isoforms were identified genome-wide, with at least two isoforms each, from the female and male transcriptome databases. We found that the number of isoforms and their alternatively spliced patterns are variable and sex-dependent. This information represents the first significant contribution to sex-preferential genomic resources of the Pacific abalone. The availability of the whole female and male transcriptome databases and their isoform information will be useful to improve our understanding of molecular responses and also for the analysis of population dynamics in the Pacific abalone. PMID:28282934

  4. Complete sequence of Tvv1, a family of Ty 1 copia-like retrotransposons of Vitis vinifera L., reconstituted by chromosome walking.

    PubMed

    Pelsy, F.; Merdinoglu, D.

    2002-09-01

    A chromosome-walking strategy was used to sequence and characterize retrotransposons in the grapevine genome. The reconstitution of a family of retroelements, named Tvv1, was achieved in six successive steps. These elements share a single, highly conserved open reading frame, 4,153 nucleotides long, putatively encoding the gag, pro, int, rt and rh proteins. Comparison of the coding potential of the Tvv1 open reading frame with those of Drosophila copia and tobacco Tnt1 revealed that Tvv1 is closely related to Ty 1 copia-like retrotransposons. A highly variable untranslated leader region upstream of the open reading frame allowed us to differentiate Tvv1 variants, which represent a family of at least 28 copies of varying sizes. This internal region is flanked by two long terminal repeats in direct orientation, 149 to 157 bp in size. Among elements theoretically sized from 4,970 to 5,550 bp, we describe the full-length sequence of a reference element, Tvv1-1, 5,343 nucleotides long. The full-length sequence of Tvv1-1 shows 53.3% identity to pea PDR1. In addition, both elements contain long terminal repeats of nearly the same size in which the U5 region could be entirely absent. Therefore, we assume that Tvv1 and PDR1 could constitute a particular class of short-LTR retroelements.

  5. ScaffoldSeq: Software for characterization of directed evolution populations.

    PubMed

    Woldring, Daniel R; Holec, Patrick V; Hackel, Benjamin J

    2016-07-01

    ScaffoldSeq is software designed for the numerous applications, including directed evolution analysis, in which a user generates a population of DNA sequences encoding partially diverse proteins with related functions and would like to characterize the single-site and pairwise amino acid frequencies across the population. A common scenario for enzyme maturation, antibody screening, and alternative scaffold engineering involves naïve and evolved populations that contain diversified regions, varying in both sequence and length, within a conserved framework. Analyzing the diversified regions of such populations is facilitated by high-throughput sequencing platforms; however, length variability within these regions (e.g., antibody CDRs) encumbers the alignment process. To overcome this challenge, the ScaffoldSeq algorithm takes advantage of conserved framework sequences to quickly identify diverse regions. Beyond this, unintended biases in sequence frequency are generated throughout the experimental workflow required to evolve and isolate clones of interest prior to DNA sequencing. ScaffoldSeq software uniquely handles this issue by providing tools to quantify and remove background sequences, cluster similar protein families, and dampen the impact of dominant clones. The software produces graphical and tabular summaries for each region of interest, allowing users to evaluate diversity in a site-specific manner as well as identify epistatic pairwise interactions. The code and detailed information are freely available at http://research.cems.umn.edu/hackel. Proteins 2016; 84:869-874. © 2016 Wiley Periodicals, Inc.
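    The core idea of using conserved framework sequences to delimit length-variable diversified regions can be sketched as follows. The framework flanks, reads, and region contents are hypothetical, and the sketch omits ScaffoldSeq's bias-correction, clustering, and pairwise-frequency machinery:

    ```python
    from collections import Counter

    # Hypothetical conserved framework flanks and a small clone population.
    left, right = "GSG", "TSG"
    reads = ["GSGAYWTSG", "GSGAFWTSG", "GSGSYWTSG", "GSGAYWTSG"]

    def diversified(seq):
        """Extract the region between the conserved framework flanks."""
        i = seq.index(left) + len(left)   # end of the left framework match
        j = seq.rindex(right)             # start of the right framework match
        return seq[i:j]

    regions = [diversified(r) for r in reads]
    # Single-site amino acid frequencies across the population, per position.
    length = len(regions[0])
    site_freqs = [Counter(r[pos] for r in regions) for pos in range(length)]
    # e.g. site_freqs[0] counts {"A": 3, "S": 1} for the first diversified position
    ```

    Anchoring on the flanks rather than on a global alignment is what lets regions of different lengths be compared position by position within each length class.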

  6. Coherent communication with continuous quantum variables

    NASA Astrophysics Data System (ADS)

    Wilde, Mark M.; Krovi, Hari; Brun, Todd A.

    2007-06-01

    The coherent bit (cobit) channel is a resource intermediate between classical and quantum communication. It produces coherent versions of teleportation and superdense coding. We extend the cobit channel to continuous variables by providing a definition of the coherent nat (conat) channel. We construct several coherent protocols that use both a position-quadrature and a momentum-quadrature conat channel with finite squeezing. Finally, we show that the quality of squeezing diminishes through successive compositions of coherent teleportation and superdense coding.

  7. Anonymous broadcasting of classical information with a continuous-variable topological quantum code

    NASA Astrophysics Data System (ADS)

    Menicucci, Nicolas C.; Baragiola, Ben Q.; Demarie, Tommaso F.; Brennen, Gavin K.

    2018-03-01

    Broadcasting information anonymously becomes more difficult as surveillance technology improves, but remarkably, quantum protocols exist that enable provably traceless broadcasting. The difficulty is making scalable entangled resource states that are robust to errors. We propose an anonymous broadcasting protocol that uses a continuous-variable surface-code state that can be produced using current technology. High squeezing enables large transmission bandwidth and strong anonymity, and the topological nature of the state enables local error mitigation.

  8. Process Model Improvement for Source Code Plagiarism Detection in Student Programming Assignments

    ERIC Educational Resources Information Center

    Kermek, Dragutin; Novak, Matija

    2016-01-01

    In programming courses there are various ways in which students attempt to cheat. The most commonly used method is copying source code from other students and making minimal changes in it, like renaming variable names. Several tools like Sherlock, JPlag and Moss have been devised to detect source code plagiarism. However, for larger student…

  9. Optimum Cyclic Redundancy Codes for Noisy Channels

    NASA Technical Reports Server (NTRS)

    Posner, E. C.; Merkey, P.

    1986-01-01

    Capabilities and limitations of cyclic redundancy codes (CRCs) for detecting transmission errors in data sent over relatively noisy channels (e.g., voice-grade telephone lines or very-high-density storage media) are discussed in a 16-page report. Because data transmission predominantly uses bytes in multiples of 8 bits, the report is primarily concerned with cases in which both the block length and the number of redundant bits (check bits used for error detection) in each block are multiples of 8.
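    As an illustration of the error-detection mechanism (not of the specific codes analyzed in the report), a bitwise CRC-8 in Python shows how a sender appends a check byte and how the receiver catches a single-bit error; the polynomial choice here is an arbitrary example:

    ```python
    def crc8(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
        """Bitwise CRC-8, MSB first (generator x^8 + x^2 + x + 1, hex 0x07)."""
        crc = init
        for byte in data:
            crc ^= byte
            for _ in range(8):
                if crc & 0x80:
                    crc = ((crc << 1) ^ poly) & 0xFF
                else:
                    crc = (crc << 1) & 0xFF
        return crc

    msg = b"block of data"
    check = crc8(msg)
    # Receiver recomputes the CRC over message + check byte; a remainder of 0
    # means no detectable error (for init=0 and no final XOR).
    assert crc8(msg + bytes([check])) == 0
    # Any single flipped bit leaves a nonzero remainder and is detected.
    corrupted = bytes([msg[0] ^ 0x01]) + msg[1:]
    assert crc8(corrupted + bytes([check])) != 0
    ```

    A generator polynomial with at least two terms detects all single-bit errors; the report's concern is how detection probability degrades for longer error bursts on noisy channels.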

  10. The complete mitochondrial genome of the Giant Manta ray, Manta birostris.

    PubMed

    Hinojosa-Alvarez, Silvia; Díaz-Jaimes, Pindaro; Marcet-Houben, Marina; Gabaldón, Toni

    2015-01-01

    The complete mitochondrial genome of the giant manta ray (Manta birostris) consists of 18,075 bp with a high A+T content and a low G content. Gene organization and length are similar to those of other ray species. The genome comprises 13 protein-coding genes, 2 rRNA genes, 23 tRNA genes, 1 non-coding sequence, and the control region. We identified an AT tandem repeat region similar to that reported in Mobula japanica.

  11. Correction of the significance level when attempting multiple transformations of an explanatory variable in generalized linear models

    PubMed Central

    2013-01-01

    Background In statistical modeling, finding the most favorable coding for an explanatory quantitative variable involves many tests. This process raises a multiple testing problem and requires a correction of the significance level. Methods For each coding, a test of the nullity of the coefficient associated with the newly coded variable is computed. The selected coding is the one associated with the largest test statistic (or, equivalently, the smallest p-value). In the context of the generalized linear model, Liquet and Commenges (Stat Probability Lett, 71:33–38, 2005) proposed an asymptotic correction of the significance level. This procedure, based on the score test, was developed for dichotomous and Box-Cox transformations. In this paper, we suggest the use of resampling methods to estimate the significance level for categorical transformations with more than two levels and, by definition, those that involve more than one parameter in the model. The categorical transformation is a more flexible way to explore the unknown shape of the effect between an explanatory and a dependent variable. Results The simulations we ran in this study showed good performance of the proposed methods. These methods were illustrated using data from a study of the relationship between cholesterol and dementia. Conclusion The algorithms were implemented in R, and the associated CPMCGLM R package is available on the CRAN. PMID:23758852
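    The resampling idea can be sketched with a permutation-based maximum-statistic correction: the largest test statistic over all candidate codings is compared against its permutation null distribution. The cutpoints, data, and two-sample statistic below are illustrative assumptions, not the CPMCGLM implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    x = rng.normal(size=n)   # explanatory variable to be coded
    y = rng.normal(size=n)   # outcome, here generated under the null

    # Candidate dichotomizations of x at its quartiles.
    cutpoints = np.quantile(x, [0.25, 0.5, 0.75])

    def max_abs_t(xv, yv):
        """Largest |t| over all candidate codings of xv."""
        stats = []
        for c in cutpoints:
            z = xv > c
            a, b = yv[z], yv[~z]
            # Welch two-sample t statistic between the two coded groups.
            s = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
            stats.append(abs((a.mean() - b.mean()) / s))
        return max(stats)

    observed = max_abs_t(x, y)
    # Permutation distribution of the maximum statistic: permuting y breaks
    # any association while preserving the multiplicity of candidate codings.
    null = np.array([max_abs_t(x, rng.permutation(y)) for _ in range(500)])
    p_adjusted = (null >= observed).mean()
    ```

    Because the null distribution is taken over the maximum across codings, the adjusted p-value automatically accounts for having searched several codings, which a naive per-coding test would not.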

  12. Orthopedics coding and funding.

    PubMed

    Baron, S; Duclos, C; Thoreux, P

    2014-02-01

    The French tarification à l'activité (T2A) prospective payment system is a financial system in which a health-care institution's resources are based on performed activity. Activity is described via the PMSI medical information system (programme de médicalisation du système d'information). The PMSI classifies hospital cases by clinical and economic categories known as diagnosis-related groups (DRG), each with an associated price tag. Coding a hospital case involves giving as realistic a description as possible so as to categorize it in the right DRG and thus ensure appropriate payment. For this, it is essential to understand what determines the pricing of inpatient stay: namely, the code for the surgical procedure, the patient's principal diagnosis (reason for admission), codes for comorbidities (everything that adds to management burden), and the management of the length of inpatient stay. The PMSI is used to analyze the institution's activity and dynamism: change on previous year, relation to target, and comparison with competing institutions based on indicators such as the mean length of stay performance indicator (MLS PI). The T2A system improves overall care efficiency. Quality of care, however, is not presently taken account of in the payment made to the institution, as there are no indicators for this; work needs to be done on this topic. Copyright © 2014. Published by Elsevier Masson SAS.

  13. Improvement of genome assembly completeness and identification of novel full-length protein-coding genes by RNA-seq in the giant panda genome.

    PubMed

    Chen, Meili; Hu, Yibo; Liu, Jingxing; Wu, Qi; Zhang, Chenglin; Yu, Jun; Xiao, Jingfa; Wei, Fuwen; Wu, Jiayan

    2015-12-11

    High-quality and complete gene models are the basis of whole genome analyses. The giant panda (Ailuropoda melanoleuca) genome was the first genome sequenced on the basis of solely short reads, but the genome annotation had lacked the support of transcriptomic evidence. In this study, we applied RNA-seq to globally improve the genome assembly completeness and to detect novel expressed transcripts in 12 tissues from giant pandas, by using a transcriptome reconstruction strategy that combined reference-based and de novo methods. Several aspects of genome assembly completeness in the transcribed regions were effectively improved by the de novo assembled transcripts, including genome scaffolding, the detection of small-size assembly errors, the extension of scaffold/contig boundaries, and gap closure. Through expression and homology validation, we detected three groups of novel full-length protein-coding genes. A total of 12.62% of the novel protein-coding genes were validated by proteomic data. GO annotation analysis showed that some of the novel protein-coding genes were involved in pigmentation, anatomical structure formation and reproduction, which might be related to the development and evolution of the black-white pelage, pseudo-thumb and delayed embryonic implantation of giant pandas. The updated genome annotation will help further giant panda studies from both structural and functional perspectives.

  14. Morphometric Analysis of Recognized Genes for Autism Spectrum Disorders and Obesity in Relationship to the Distribution of Protein-Coding Genes on Human Chromosomes.

    PubMed

    McGuire, Austen B; Rafi, Syed K; Manzardo, Ann M; Butler, Merlin G

    2016-05-05

    Mammalian chromosomes have a complex chromatin architecture in which the specific assembly and configuration of each chromosome influence gene expression and function in as-yet-undefined ways; varying degrees of heterochromatinization produce Giemsa (G)-negative euchromatic (light) bands and G-positive heterochromatic (dark) bands. We carried out morphometric measurements of high-resolution chromosome ideograms for the first time to characterize the total euchromatic and heterochromatic chromosome band lengths and the distribution and localization of 20,145 known protein-coding genes, 790 recognized autism spectrum disorder (ASD) genes and 365 obesity genes. The individual lengths of G-negative euchromatic and G-positive heterochromatic chromosome bands were measured in millimeters and recorded from scaled and stacked digital images of 850-band high-resolution ideograms supplied by the International Society of Chromosome Nomenclature (ISCN) 2013. Our overall measurements followed established banding patterns based on chromosome size. G-negative euchromatic band regions contained 60% of protein-coding genes, while the remaining 40% were distributed across the four heterochromatic dark band sub-types. ASD genes were disproportionately overrepresented in the darker heterochromatic sub-bands, while the obesity gene distribution pattern did not significantly differ from that of protein-coding genes. Our study supports recent trends implicating genes located in heterochromatin regions in biological processes including neurodevelopment and function, specifically genes associated with ASD.

  15. Emission response from extended length, variable geometry gas turbine combustor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Troth, D.L.; Verdouw, A.J.; Tomlinson, J.G.

    1974-01-01

    A program to analyze, select, and experimentally evaluate low emission combustors for aircraft gas turbine engines is conducted to demonstrate a final combustor concept having a 50 percent reduction in total mass emissions (carbon monoxide, unburnt hydrocarbons, oxides of nitrogen, and exhaust smoke) without an increase in any specific pollutant. Research conducted under an Army Contract established design concepts demonstrating significant reductions in CO and UHC emissions. Two of these concepts were an extended length intermediate zone to consume CO and UHC and variable geometry to control the primary zone fuel-air ratio over varying power conditions. Emission reduction features were identified by analytical methods employing both reaction kinetics and empirical correlations. Experimental results were obtained on a T63 component combustor rig operating at conditions simulating the engine over the complete power operating range with JP-4 fuel. A combustor incorporating both extended length and variable geometry was evaluated and the performance and emission results are reported. These results are compared on the basis of a helicopter duty cycle and the EPA 1979 turboprop regulation landing-takeoff cycle. The 1979 EPA emission regulations for P2 class engines can be met with the extended length variable geometry combustor on the T63 turboprop engine.

  16. Simulation of thermomechanical fatigue in solder joints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, H.E.; Porter, V.L.; Fye, R.M.

    1997-12-31

    Thermomechanical fatigue (TMF) is a very complex phenomenon in electronic component systems and has been identified as one prominent degradation mechanism for surface-mount solder joints in the stockpile. In order to precisely predict the TMF-related effects on the reliability of electronic components in weapons, a multi-level simulation methodology is being developed at Sandia National Laboratories. This methodology links simulation codes of continuum mechanics (JAS3D), microstructural mechanics (GLAD), and microstructural evolution (PARGRAIN) to treat the disparate length scales that exist between the macroscopic response of the component and the microstructural changes occurring in its constituent materials. JAS3D is used to predict strain/temperature distributions in the component due to environmental variable fluctuations. GLAD identifies damage initiation and accumulation in detail based on the spatial information provided by JAS3D. PARGRAIN simulates the changes of material microstructure, such as the heterogeneous coarsening in Sn-Pb solder, when the component's service environment varies.
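
    The three-code linkage described above (continuum mechanics feeding microstructural mechanics feeding microstructural evolution) is, at its core, a staged pipeline in which each level consumes fields produced by the previous one. A minimal sketch of that data flow with stand-in functions; the function bodies, thresholds, and field contents are hypothetical simplifications, not the behavior of JAS3D, GLAD, or PARGRAIN:

```python
def continuum_step(thermal_cycle):
    """Stand-in for the JAS3D role: map a temperature excursion to a
    macroscopic strain field (here reduced to one scalar per joint)."""
    # Illustrative: strain proportional to the temperature swing.
    return {"strain": 1e-5 * (thermal_cycle["t_max"] - thermal_cycle["t_min"])}

def damage_step(strain_field):
    """Stand-in for the GLAD role: flag damage initiation where strain
    exceeds a (hypothetical) threshold."""
    return {"damaged": strain_field["strain"] > 5e-4}

def microstructure_step(damage_state, cycles):
    """Stand-in for the PARGRAIN role: a crude coarsening measure that
    grows with cycle count once damage has initiated."""
    growth = 0.01 * cycles if damage_state["damaged"] else 0.0
    return {"coarsening": growth}

# One pass through the pipeline for a -55 C to +125 C thermal cycle.
strain = continuum_step({"t_min": -55.0, "t_max": 125.0})
damage = damage_step(strain)
evolution = microstructure_step(damage, cycles=100)
```

    The point of the sketch is the interface, not the physics: each stage only needs the previous stage's output, which is what lets codes at disparate length scales be coupled.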

  17. First PIC simulations modeling the interaction of ultra-intense lasers with sub-micron, liquid crystal targets

    NASA Astrophysics Data System (ADS)

    McMahon, Matthew; Poole, Patrick; Willis, Christopher; Andereck, David; Schumacher, Douglass

    2014-10-01

    We recently introduced liquid crystal films as on-demand, variable-thickness (50-5000 nanometers), low-cost targets for intense laser experiments. Here we present the first particle-in-cell (PIC) simulations of short-pulse laser excitation of liquid crystal targets, treating Scarlet (OSU) class lasers using the PIC code LSP. In order to accurately model the target evolution, a low starting temperature and a field ionization model are employed. This is essential because large starting temperatures, often used to achieve large Debye lengths, cause the target to expand and its density to drop significantly before the laser pulse can interact with it. We also present an investigation of the modification of laser pulses by very thin targets. This work was supported by the DARPA PULSE program through a grant from AMRDEC, by the US Department of Energy under Contract No. DE-NA0001976, and by allocations of computing time from the Ohio Supercomputer Center.
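
    The trade-off noted above follows directly from the electron Debye length, λ_D = sqrt(ε₀ k_B T_e / (n_e e²)): λ_D grows as the square root of temperature, so raising the starting temperature to keep λ_D resolvable on the PIC grid also pre-expands the target. A quick numerical check using the standard formula; the density and temperatures below are illustrative values for a near-solid-density target, not parameters from these simulations:

```python
import math

EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19  # elementary charge, C

def debye_length(temp_ev, density_m3):
    """Electron Debye length in meters, for temperature in eV and
    electron density in m^-3."""
    temp_joules = temp_ev * E_CHARGE  # k_B * T expressed via the eV conversion
    return math.sqrt(EPS0 * temp_joules / (density_m3 * E_CHARGE**2))

n_e = 1e29  # illustrative near-solid electron density, m^-3
cold = debye_length(1.0, n_e)    # low-temperature start, ~1 eV
hot = debye_length(100.0, n_e)   # artificially hot start, ~100 eV

ratio = hot / cold  # scales as sqrt(T): a 100x hotter start gives 10x the Debye length
```

    At 1 eV and near-solid density λ_D is tens of picometers, far below any practical cell size, which is why artificially hot starts are tempting and why the low-temperature-plus-ionization approach above matters.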

  18. Estimation of stature from sternal lengths. A correlation meta-analysis.

    PubMed

    Yammine, Kaissar; Assi, Chahine

    2017-01-01

    Methods based on the positive linear relationship between stature and long-bone lengths are most commonly used to estimate living stature in forensic anthropology. The length of the sternum and its parts has been advanced as a plausible alternative for estimating stature when such long bones are missing or damaged. This meta-analysis aims to quantify the evidence on the correlation between sternal/sternal-part lengths and stature. Nine studies were included, totaling 1118 sternal bones. Analyses showed that the length of the meso-sternum (manubrium + body) yielded the best correlation with stature: 53.5% and 55.42% for men and women, respectively. The second-best variable was total sternal length, with correlations of 44.3% and 55% for men and women, respectively. Subgroup analysis of autopsy studies demonstrated an even higher correlation of 58.2% for the meso-sternal length. Manubrium and body lengths showed the lowest correlation values. Except for body length, females exhibited a better correlation than men between all other sternal lengths and stature. While the meso-sternal length was found to be the variable most correlated with stature, all sternal lengths should be considered with caution when estimating stature. The relatively low weighted correlation values should raise the question of reliability and limit the use of sternal lengths when long bones are available. Future research using larger samples from different populations and taking into account the fusion status of the sternum is needed.
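
    The percentage figures quoted above are squared correlations from simple linear regressions of stature on sternal length. A minimal sketch of such a fit and its r², using fabricated paired measurements purely for illustration (these are not data from any of the included studies):

```python
def linear_fit(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b, r_squared)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    b = sxy / sxx
    a = my - b * mx
    r2 = sxy ** 2 / (sxx * syy)  # squared Pearson correlation
    return a, b, r2

# Illustrative pairs: meso-sternal length (cm) vs stature (cm).
lengths = [13.1, 14.0, 14.6, 15.2, 15.9, 16.4]
statures = [158.0, 162.0, 165.0, 169.0, 171.0, 176.0]

a, b, r2 = linear_fit(lengths, statures)
estimate = a + b * 15.0  # predicted stature for a 15.0 cm meso-sternum
```

    An r² near 0.55, as reported for the meso-sternum, means roughly half of stature variance is left unexplained, which is the reliability concern the abstract raises.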

  19. Percolation bounds for decoding thresholds with correlated erasures in quantum LDPC codes

    NASA Astrophysics Data System (ADS)

    Hamilton, Kathleen; Pryadko, Leonid

    Correlations between errors can dramatically affect decoding thresholds, in some cases eliminating the threshold altogether. We analyze the existence of a threshold for quantum low-density parity-check (LDPC) codes in the case of correlated erasures. When erasures are positively correlated, the corresponding multivariate Bernoulli distribution can be modeled in terms of cluster errors, where qubits in clusters of various sizes can be marked all at once. In a code family with distance scaling as a power law of the code length, erasures can always be corrected below the percolation threshold on a qubit adjacency graph associated with the code. We bound this correlated percolation transition by weighted (uncorrelated) percolation on a specially constructed cluster connectivity graph, and apply our recent results to construct several bounds for the latter. This research was supported in part by NSF Grant PHY-1416578 and by ARO Grant W911NF-14-1-0272.
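
    The percolation argument above reduces erasure correctability to a connectivity question: below the percolation threshold, erased qubits form only small disconnected clusters on the adjacency graph. A minimal sketch of checking that condition by finding erased-cluster sizes with breadth-first search; the graph (a short chain) and the erasure set are illustrative, not a specific quantum code:

```python
from collections import deque

def erased_clusters(adjacency, erased):
    """Return the sizes of connected clusters of erased qubits.

    adjacency: dict mapping qubit -> iterable of neighboring qubits.
    erased: set of erased qubits.
    """
    seen = set()
    sizes = []
    for q in erased:
        if q in seen:
            continue
        # BFS restricted to erased qubits only.
        queue = deque([q])
        seen.add(q)
        size = 0
        while queue:
            cur = queue.popleft()
            size += 1
            for nb in adjacency.get(cur, ()):
                if nb in erased and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        sizes.append(size)
    return sizes

# Illustrative: a 1-D chain of 6 qubits with erasures at positions {0, 1, 4}.
chain = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}
sizes = erased_clusters(chain, {0, 1, 4})
largest = max(sizes)
```

    When distance grows as a power law of code length, erasure patterns whose largest cluster stays below the distance remain correctable, which is what ties the decoding threshold to the percolation transition.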

  20. Modeling Turbulent Combustion for Variable Prandtl and Schmidt Number

    NASA Technical Reports Server (NTRS)

    Hassan, H. A.

    2004-01-01

    This report consists of two abstracts submitted for possible presentation at the AIAA Aerospace Sciences Meeting to be held in January 2005. Since submitting these abstracts, we have continued to refine the model coefficients derived for the case of a variable turbulent Prandtl number. The test cases being investigated are a Mach 9.2 flow over a degree ramp and a Mach 8.2 3-D calculation of crossing shocks. We have developed an axisymmetric code for treating axisymmetric flows. In addition, the variable Schmidt number formulation was incorporated into the code, and we are in the process of determining the model constants.
